From: ramkumarlak <ram...@gm...> - 2011-03-25 11:50:16

Hi,

Recently I downloaded and installed the luciddb-0.9.3 version and loaded some data. The source data is around 1.5 GB, and db.dat is almost 1.5 GB as well. Do we need to configure any flags to enable compression, or am I missing something?

Regards
Ram

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/luciddb-compression-tp6207397p6207397.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: Nicholas G. <ngo...@dy...> - 2011-03-17 17:42:37

On Mar 17, 2011, at 9:18 AM, Eric Freed wrote:
> I have a few questions about creating users. I know you can set a
> default catalog or the default schema when creating a user, but can
> you set both? And if not, can the default schema or catalog be changed
> after the creation?

Well, a schema is in a catalog. By setting the default schema to a fully prefixed schema name, you effectively set both:

create catalog c2;
create schema s2;
create user xyz identified by 'XYZ' default schema c2.s2;

!closeall
!conn jdbc:luciddb:http://localhost XYZ XYZ

select * from localdb.sys_root.user_session_parameters
where param_name in ('schemaName', 'catalogName');

+--------------+--------------+
|  PARAM_NAME  | PARAM_VALUE  |
+--------------+--------------+
| schemaName   | S2           |
| catalogName  | C2           |
+--------------+--------------+

What does NOT appear to be working is the "replace" part of this statement:

create (or replace) user xyz identified by 'XYZ' default schema localdb.testing;

I had to drop and recreate the user for the change to take effect. I'll do more research and potentially log a bug for this.

Nick
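Given that the "replace" form above is reported not to take effect, the drop-and-recreate workaround Nick describes can be sketched roughly as follows (a sketch only: the user and schema names are taken from the message, and this assumes a standard DROP USER statement; any grants on the user would presumably need to be reissued afterwards):

```sql
-- Workaround sketch: drop the user, then recreate it with the
-- desired default schema (names are from the message above).
DROP USER xyz;
CREATE USER xyz IDENTIFIED BY 'XYZ' DEFAULT SCHEMA localdb.testing;
```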
From: Eric F. <ep...@me...> - 2011-03-17 16:48:16

Hi,

I have a few questions about creating users. I know you can set a default catalog or the default schema when creating a user, but can you set both? And if not, can the default schema or catalog be changed after creation? The wiki says that the DBA_USERS view is available starting in 0.9.4, so maybe an UPDATE statement will work?

Thanks
From: Jeremy L. <je...@vo...> - 2011-03-11 17:04:36

There is no .bcp file in this case, but shortly after posting this question I tracked down an extra timezone field in my source CSV file from an error found in the Trace.log file:

SEVERE: Could not process input: [00000AE2F901F2D0CEB2BF48F5DB1EDD,unknown,unknown,SPANISH,1,unknown,US/Central,UNITED STATES,50309,515,DES MOINES,unknown,unknown,IA,503]
Messages: Row has too many columns

I attempted to remove this post but you are obviously far too efficient with your responses ;-)

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Re-Problem-With-Foreign-Data-Wrapper-tp6160298p6162145.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: John S. <js...@gm...> - 2011-03-11 04:24:05

Maybe you have an old .bcp file in the same directory and it's missing a definition for the last field?

JVS

On Thu, Mar 10, 2011 at 1:58 PM, Jeremy Lemaire <je...@vo...> wrote:
> In a new implementation of an existing system I am suddenly seeing problems
> when trying to import csv data via a flat file wrapper. Although the data
> looks fine in the csv file, a select on the following foreign table appears
> to be having problems with row delimiters.
>
> Schema Setup
>
> create server imp_file_link
> foreign data wrapper sys_file_wrapper
> options (
>   directory '/home/lucid/luciddb/import',
>   file_extension 'csv',
>   with_header 'NO',
>   log_directory 'trace/',
>   field_delimiter ',',
>   timestamp_format 'yyyy-MM-dd HH:mm:ss' );
>
> create foreign table aws_imp."impression_parameter_set" (
>   parameter_set_key varchar(32),
>   age varchar(16),
>   gender varchar(16),
>   "LANGUAGE" varchar(16),
>   phone_type varchar(16),
>   carrier varchar(16),
>   country varchar(16),
>   zipcode varchar(16),
>   areacode varchar(16),
>   dma varchar(16),
>   income varchar(16),
>   genre varchar(16),
>   state varchar(16),
>   destination varchar(16)
> ) server imp_file_link;
>
> Source CSV Data
>
> 00D0F99290EA147DC854F43BE719C761,unknown,unknown,SPANISH,2,T-MOBILE,US/Eastern,UNITED STATES,07601,201,NYC,unknown,unknown,NJ,502
> 00D122AC18B3A8E6C7B1A24FA171A798,unknown,unknown,SPANISH,2,BOOST,US/Eastern,UNITED STATES,10128,646,NYC,unknown,unknown,NY,unknown
> 00D12A51856E9AEDB4B105F5ACD63D69,unknown,unknown,SPANISH,2,OTHER,US/Mountain,UNITED STATES,unknown,224,CHICAGO,unknown,unknown,unknown,52
> 00D12CDF15E80B15A338299B6D33E3DA,unknown,unknown,SPANISH,2,VERIZON,US/Arizona,UNITED STATES,86325,928,PHOENIX,unknown,unknown,AZ,unknown
> 00D1B50618C680FDC73B22366EE89D82,unknown,unknown,ENGLISH,2,T-MOBILE,US/Eastern,UNITED STATES,34744,407,ORLANDO,unknown,unknown,FL,unknown
>
> SELECT Result On the Same Lines
>
> '00D0F99290EA147DC854F43BE719C761','unknown','unknown','SPANISH','2','T-MOBILE','US/Eastern','UNITED STATES','07601','201','NYC','unknown','unknown','NJ'
> '00D122AC18B3A8E6C7B1A24FA171A798','unknown','unknown','SPANISH','2','BOOST','US/Eastern','UNITED STATES','10128','646','NYC','unknown','unknown','NY'
> 'nknown','','','','','','','','','','','','',''
> '00D12A51856E9AEDB4B105F5ACD63D69','unknown','unknown','SPANISH','2','OTHER','US/Mountain','UNITED STATES','unknown','224','CHICAGO','unknown','unknown','unknown'
> '00D12CDF15E80B15A338299B6D33E3DA','unknown','unknown','SPANISH','2','VERIZON','US/Arizona','UNITED STATES','86325','928','PHOENIX','unknown','unknown','AZ'
>
> --
> View this message in context: http://luciddb-users.1374590.n2.nabble.com/Problem-With-Foreign-Data-Wrapper-tp6159515p6159515.html
> Sent from the luciddb-users mailing list archive at Nabble.com.
>
> _______________________________________________
> luciddb-users mailing list
> luc...@li...
> https://lists.sourceforge.net/lists/listinfo/luciddb-users
From: Jeremy L. <je...@vo...> - 2011-03-10 21:58:24

In a new implementation of an existing system I am suddenly seeing problems when trying to import csv data via a flat file wrapper. Although the data looks fine in the csv file, a select on the following foreign table appears to be having problems with row delimiters.

Schema Setup

create server imp_file_link
foreign data wrapper sys_file_wrapper
options (
  directory '/home/lucid/luciddb/import',
  file_extension 'csv',
  with_header 'NO',
  log_directory 'trace/',
  field_delimiter ',',
  timestamp_format 'yyyy-MM-dd HH:mm:ss' );

create foreign table aws_imp."impression_parameter_set" (
  parameter_set_key varchar(32),
  age varchar(16),
  gender varchar(16),
  "LANGUAGE" varchar(16),
  phone_type varchar(16),
  carrier varchar(16),
  country varchar(16),
  zipcode varchar(16),
  areacode varchar(16),
  dma varchar(16),
  income varchar(16),
  genre varchar(16),
  state varchar(16),
  destination varchar(16)
) server imp_file_link;

Source CSV Data

00D0F99290EA147DC854F43BE719C761,unknown,unknown,SPANISH,2,T-MOBILE,US/Eastern,UNITED STATES,07601,201,NYC,unknown,unknown,NJ,502
00D122AC18B3A8E6C7B1A24FA171A798,unknown,unknown,SPANISH,2,BOOST,US/Eastern,UNITED STATES,10128,646,NYC,unknown,unknown,NY,unknown
00D12A51856E9AEDB4B105F5ACD63D69,unknown,unknown,SPANISH,2,OTHER,US/Mountain,UNITED STATES,unknown,224,CHICAGO,unknown,unknown,unknown,52
00D12CDF15E80B15A338299B6D33E3DA,unknown,unknown,SPANISH,2,VERIZON,US/Arizona,UNITED STATES,86325,928,PHOENIX,unknown,unknown,AZ,unknown
00D1B50618C680FDC73B22366EE89D82,unknown,unknown,ENGLISH,2,T-MOBILE,US/Eastern,UNITED STATES,34744,407,ORLANDO,unknown,unknown,FL,unknown

SELECT Result On the Same Lines

'00D0F99290EA147DC854F43BE719C761','unknown','unknown','SPANISH','2','T-MOBILE','US/Eastern','UNITED STATES','07601','201','NYC','unknown','unknown','NJ'
'00D122AC18B3A8E6C7B1A24FA171A798','unknown','unknown','SPANISH','2','BOOST','US/Eastern','UNITED STATES','10128','646','NYC','unknown','unknown','NY'
'nknown','','','','','','','','','','','','',''
'00D12A51856E9AEDB4B105F5ACD63D69','unknown','unknown','SPANISH','2','OTHER','US/Mountain','UNITED STATES','unknown','224','CHICAGO','unknown','unknown','unknown'
'00D12CDF15E80B15A338299B6D33E3DA','unknown','unknown','SPANISH','2','VERIZON','US/Arizona','UNITED STATES','86325','928','PHOENIX','unknown','unknown','AZ'

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Problem-With-Foreign-Data-Wrapper-tp6159515p6159515.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: John S. <js...@gm...> - 2011-03-10 02:59:14

On Tue, Mar 8, 2011 at 10:54 PM, Ledion Bitincka <lbi...@sp...> wrote:
> Are there any recommendations on the maximum number of concurrent
> connections? In my usecase almost all the connections will be reading data.
> I have seen that with other db systems the performance drops dramatically as
> the number of concurrent connections gets to a couple of hundred. Is luciddb
> the same in this regard? Are there any explicit system setting to limit the
> number of concurrent connections?

There's no governance on concurrent connections (use a connection pool for that), but the system parameter "expectedConcurrentStatements" is used to ration out the buffer pool. For small queries it doesn't matter, since each query only needs to pin a few blocks in the buffer pool, but for large/complex queries it does. You'll know you need to increase it (and/or the buffer pool size) if you hit "Cache scratch memory exhausted". Or throttle via connection pooling.

JVS
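The parameter John names can be changed with an ALTER SYSTEM statement; a minimal sketch (the value 16 is purely illustrative, not a recommendation from the thread):

```sql
-- Sketch: widen the per-statement buffer-pool rationing budget.
-- Increase this (and/or the cache size) if queries fail with
-- "Cache scratch memory exhausted".
ALTER SYSTEM SET "expectedConcurrentStatements" = 16;
```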
From: Ledion B. <lbi...@sp...> - 2011-03-09 06:54:12

Are there any recommendations on the maximum number of concurrent connections? In my use case almost all the connections will be reading data. I have seen that with other DB systems the performance drops dramatically as the number of concurrent connections gets to a couple of hundred. Is LucidDB the same in this regard? Are there any explicit system settings to limit the number of concurrent connections?

---
Ledion Bitincka
le...@sp... | Director of Engineering, Southern California Regional Operations
Splunk > Get your IT together
From: John S. <js...@gm...> - 2011-03-09 04:56:49

See this entry in the FAQ:

http://pub.eigenbase.org/wiki/LucidDbUserFaq#Startup_Error_For_Windows

JVS

On Tue, Mar 8, 2011 at 2:05 PM, Lars Bayer <ma...@la...> wrote:
> Hi Everybody,
>
> I tried to install and run LucidDB (win64-0.9.3) on Windows XP (64-Bit). Installation was
> successful, but I cannot run the server. I don't have a clue why... Can anybody help me
> please? Console output is as follows:
>
> C:\luciddb-0.9.3\bin>lucidDbServer
> Server personality: LucidDB
> Loading database...
> Exception in thread "main" org.eigenbase.util.EigenbaseException: Failed to load database
>     at net.sf.farrago.resource.FarragoResource$_Def1.ex(FarragoResource.java:1976)
>     at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:292)
>     at net.sf.farrago.db.FarragoDbSingleton.pinReference(FarragoDbSingleton.java:100)
>     at net.sf.farrago.server.FarragoAbstractServer.start(FarragoAbstractServer.java:232)
>     at org.luciddb.session.LucidDbServer.main(LucidDbServer.java:62)
> Caused by: java.lang.UnsatisfiedLinkError: C:\luciddb-0.9.3\lib\fennel\farrago.dll: This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem
>     at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>     at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
>     at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
>     at java.lang.Runtime.loadLibrary0(Runtime.java:823)
>     at java.lang.System.loadLibrary(System.java:1028)
>     at org.eigenbase.util.Util.loadLibrary(Util.java:1099)
>     at net.sf.farrago.fennel.FennelStorage.<clinit>(FennelStorage.java:47)
>     at net.sf.farrago.db.FarragoDatabase.assertNoFennelHandles(FarragoDatabase.java:502)
>     at net.sf.farrago.db.FarragoDatabase.loadFennel(FarragoDatabase.java:513)
>     at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:205)
>     ... 3 more
>
> and the last three lines of the Trace.log file are:
>
> WARNUNG: Caught java.lang.NoClassDefFoundError during database shutdown:Could not initialize class net.sf.farrago.fennel.FennelStorage
> 08.03.2011 22:51:10 org.eigenbase.util.EigenbaseException <init>
> SCHWERWIEGEND: org.eigenbase.util.EigenbaseException: Failed to load database
>
> I'm grateful for any help!
>
> Thanks, Lars
From: Lars B. <ma...@la...> - 2011-03-08 22:23:42

Hi Everybody,

I tried to install and run LucidDB (win64-0.9.3) on Windows XP (64-Bit). Installation was successful, but I cannot run the server. I don't have a clue why... Can anybody help me please? Console output is as follows:

C:\luciddb-0.9.3\bin>lucidDbServer
Server personality: LucidDB
Loading database...
Exception in thread "main" org.eigenbase.util.EigenbaseException: Failed to load database
    at net.sf.farrago.resource.FarragoResource$_Def1.ex(FarragoResource.java:1976)
    at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:292)
    at net.sf.farrago.db.FarragoDbSingleton.pinReference(FarragoDbSingleton.java:100)
    at net.sf.farrago.server.FarragoAbstractServer.start(FarragoAbstractServer.java:232)
    at org.luciddb.session.LucidDbServer.main(LucidDbServer.java:62)
Caused by: java.lang.UnsatisfiedLinkError: C:\luciddb-0.9.3\lib\fennel\farrago.dll: This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
    at java.lang.Runtime.loadLibrary0(Runtime.java:823)
    at java.lang.System.loadLibrary(System.java:1028)
    at org.eigenbase.util.Util.loadLibrary(Util.java:1099)
    at net.sf.farrago.fennel.FennelStorage.<clinit>(FennelStorage.java:47)
    at net.sf.farrago.db.FarragoDatabase.assertNoFennelHandles(FarragoDatabase.java:502)
    at net.sf.farrago.db.FarragoDatabase.loadFennel(FarragoDatabase.java:513)
    at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:205)
    ... 3 more

and the last three lines of the Trace.log file are:

WARNUNG: Caught java.lang.NoClassDefFoundError during database shutdown:Could not initialize class net.sf.farrago.fennel.FennelStorage
08.03.2011 22:51:10 org.eigenbase.util.EigenbaseException <init>
SCHWERWIEGEND: org.eigenbase.util.EigenbaseException: Failed to load database

I'm grateful for any help!

Thanks, Lars
From: Ledion B. <lbi...@sp...> - 2011-03-08 18:25:30

Hmm, interesting! I don't get that - what OS/shell are you using? Can you give this a shot?

#!/bin/sh

BIN_DIR=$(cd `dirname $0`; pwd)
. $BIN_DIR/defineFarragoRuntime.sh

# 1. close stdin - LucidDbServer will interpret it as "run in daemon mode"
# 2. run LucidDbServer in the background thus making it a real daemon
${JAVA_EXEC} ${JAVA_ARGS} org.luciddb.session.LucidDbServer < /dev/null &

Basically, all we need to do to get LucidDbServer in daemon mode is close stdin.

---
Ledion Bitincka
le...@sp... | Director of Engineering, Southern California Regional Operations
Splunk > Get your IT together

________________________________________
From: John Sichi [js...@gm...]
Sent: Monday, March 07, 2011 11:05 PM
To: Mailing list for users of LucidDB
Cc: ledion
Subject: Re: [luciddb-users] LucidDb as a service (needs Wikifying)

Gotta love command lines that look like chat full of emoticons.

I gave it a try, and it works, but if I don't exit the shell fast enough, I get all kinds of "Unknown server command: <junk binary characters>" spewing out on the console.

JVS

On Sat, Mar 5, 2011 at 11:17 PM, ledion <le...@sp...> wrote:
> Here is a simpler way to run luciddb as a daemon.
>
> (1) create a script called $LUCIDDB_HOME/bin/lucidDbServerDaemon with the
> following content:
>
> #!/bin/sh
>
> BIN_DIR=$(cd `dirname $0`; pwd)
> . $BIN_DIR/defineFarragoRuntime.sh
>
> # 1. close stdin - LucidDbServer will interpret it as "run in daemon mode"
> # 2. run LucidDbServer in the background thus making it a real daemon
> ${JAVA_EXEC} ${JAVA_ARGS} org.luciddb.session.LucidDbServer 0<&-, <&- &
>
> (2) start the server in daemon mode as follows:
> $LUCIDDB_HOME/bin/lucidDbServerDaemon
>
> (3) to stop luciddb server run: kill
>
> --
> View this message in context: http://luciddb-users.1374590.n2.nabble.com/PG-MySQL-protocol-support-tp4180819p6093725.html
> Sent from the luciddb-users mailing list archive at Nabble.com.
From: John S. <js...@gm...> - 2011-03-08 07:05:14

Gotta love command lines that look like chat full of emoticons.

I gave it a try, and it works, but if I don't exit the shell fast enough, I get all kinds of "Unknown server command: <junk binary characters>" spewing out on the console.

JVS

On Sat, Mar 5, 2011 at 11:17 PM, ledion <le...@sp...> wrote:
> Here is a simpler way to run luciddb as a daemon.
>
> (1) create a script called $LUCIDDB_HOME/bin/lucidDbServerDaemon with the
> following content:
>
> #!/bin/sh
>
> BIN_DIR=$(cd `dirname $0`; pwd)
> . $BIN_DIR/defineFarragoRuntime.sh
>
> # 1. close stdin - LucidDbServer will interpret it as "run in daemon mode"
> # 2. run LucidDbServer in the background thus making it a real daemon
> ${JAVA_EXEC} ${JAVA_ARGS} org.luciddb.session.LucidDbServer 0<&-, <&- &
>
> (2) start the server in daemon mode as follows:
> $LUCIDDB_HOME/bin/lucidDbServerDaemon
>
> (3) to stop luciddb server run: kill
>
> --
> View this message in context: http://luciddb-users.1374590.n2.nabble.com/PG-MySQL-protocol-support-tp4180819p6093725.html
> Sent from the luciddb-users mailing list archive at Nabble.com.
From: ledion <le...@sp...> - 2011-03-06 07:35:19

Here is a simpler way to run luciddb as a daemon.

(1) create a script called $LUCIDDB_HOME/bin/lucidDbServerDaemon with the following content:

#!/bin/sh

BIN_DIR=$(cd `dirname $0`; pwd)
. $BIN_DIR/defineFarragoRuntime.sh

# 1. close stdin - LucidDbServer will interpret it as "run in daemon mode"
# 2. run LucidDbServer in the background thus making it a real daemon
${JAVA_EXEC} ${JAVA_ARGS} org.luciddb.session.LucidDbServer 0<&-, <&- &

(2) start the server in daemon mode as follows:
$LUCIDDB_HOME/bin/lucidDbServerDaemon

(3) to stop luciddb server run: kill

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/PG-MySQL-protocol-support-tp4180819p6093725.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: Jeremy L. <je...@vo...> - 2011-03-05 14:38:09

Both UPSERTs and INSERTs are done daily. DELETEs are only done if something goes wrong and I need to rebuild the data for a particular day. In all cases I have ALTER TABLE REBUILD statements at the end of each script, followed by an ALTER SYSTEM DEALLOCATE OLD. Without this, as you have stated, performance degraded significantly. Unfortunately this is not the problem in this case.

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Connection-limit-or-something-else-tp3122544p6091909.html
Sent from the luciddb-users mailing list archive at Nabble.com.
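The post-load maintenance Jeremy describes can be sketched as two statements at the end of each load script (the table name is illustrative, taken from the foreign-table thread elsewhere on this list):

```sql
-- Sketch of the maintenance pattern from the message above:
-- rebuild to merge out deleted/updated row versions, then
-- return the pages freed by the rebuild to allocatable space.
ALTER TABLE "impression_parameter_set" REBUILD;
ALTER SYSTEM DEALLOCATE OLD;
```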
From: John S. <js...@gm...> - 2011-03-05 07:11:13

Do you perform deletions/updates, or only inserts? If anything but inserts, then the presence of deleted rows could account for the slowdown, in which case ALTER TABLE REBUILD is the recommended solution.

JVS

On Fri, Mar 4, 2011 at 12:19 PM, Jeremy Lemaire <je...@vo...> wrote:
> A couple of changes that have been made to the system recently and some
> general observations that I should mention:
>
> ./bin/lucidDbServer now using params -Xms2048m -Xmx4096m
> ./bin/sqllineClient now using params -Xms512m -Xmx5120m -XX:-UseGCOverheadLimit
>
> I am also continuing to run only one or two instances of ./bin/sqllineClient
> concurrently while doing an import to conserve memory.
>
> With the changes made, failures seem to be less frequent but more severe
> (i.e. exceptions rather than hangs).
>
> There are concurrent queries originating from the Geronimo database pool,
> but expectedConcurrentStatements is still set at 32 and we are never
> anywhere near this limit, so this does not appear to be an issue.
>
> We are in the last month of Q1. The larger tables in the system are
> partitioned by quarter. The system seems to run much faster and with fewer
> failures at the beginning of each quarter than it does at the end. Monthly
> partitions may help, but I am afraid it would hinder query performance when
> crossing partitions.
>
> --
> View this message in context: http://luciddb-users.1374590.n2.nabble.com/Connection-limit-or-something-else-tp3122544p6089917.html
> Sent from the luciddb-users mailing list archive at Nabble.com.
From: Jeremy L. <je...@vo...> - 2011-03-04 20:19:52

A couple of changes that have been made to the system recently and some general observations that I should mention:

./bin/lucidDbServer now using params -Xms2048m -Xmx4096m
./bin/sqllineClient now using params -Xms512m -Xmx5120m -XX:-UseGCOverheadLimit

I am also continuing to run only one or two instances of ./bin/sqllineClient concurrently while doing an import to conserve memory.

With the changes made, failures seem to be less frequent but more severe (i.e. exceptions rather than hangs).

There are concurrent queries originating from the Geronimo database pool, but expectedConcurrentStatements is still set at 32 and we are never anywhere near this limit, so this does not appear to be an issue.

We are in the last month of Q1. The larger tables in the system are partitioned by quarter. The system seems to run much faster and with fewer failures at the beginning of each quarter than it does at the end. Monthly partitions may help, but I am afraid it would hinder query performance when crossing partitions.

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Connection-limit-or-something-else-tp3122544p6089917.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: Jeremy L. <je...@vo...> - 2011-03-04 19:43:11

Last night while LucidDB was importing data, all new client connections hung again. This time it was much different from any of the previous scenarios. Where in the past sqllineClient would just hang indefinitely shortly after the connect, this time a java.io.IOException was thrown:

java.sql.SQLException: java.io.IOException: Premature EOF
    at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:538)
    at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:582)
    at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:669)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2446)
    at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2441)
    at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2430)
    at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2249)
    at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2542)
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2552)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1297)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at de.simplicit.vjdbc.servlet.ServletCommandSinkJdkHttpClient.connect(ServletCommandSinkJdkHttpClient.java:55)
    at de.simplicit.vjdbc.VirtualDriver.connect(VirtualDriver.java:127)
    at net.sf.farrago.jdbc.client.FarragoUnregisteredVjdbcHttpClientDriver.connect(FarragoUnregisteredVjdbcHttpClientDriver.java:99)
    at org.tranql.connector.jdbc.JDBCDriverMCF.getPhysicalConnection(JDBCDriverMCF.java:96)
    at org.tranql.connector.jdbc.JDBCDriverMCF.createManagedConnection(JDBCDriverMCF.java:73)
    at org.apache.geronimo.connector.outbound.MCFConnectionInterceptor.getConnection(MCFConnectionInterceptor.java:49)
    at org.apache.geronimo.connector.outbound.LocalXAResourceInsertionInterceptor.getConnection(LocalXAResourceInsertionInterceptor.java:41)
    at org.apache.geronimo.connector.outbound.SinglePoolConnectionInterceptor.internalGetConnection(SinglePoolConnectionInterceptor.java:71)
    at org.apache.geronimo.connector.outbound.AbstractSinglePoolConnectionInterceptor.getConnection(AbstractSinglePoolConnectionInterceptor.java:80)
    at org.apache.geronimo.connector.outbound.TransactionEnlistingInterceptor.getConnection(TransactionEnlistingInterceptor.java:46)
    at org.apache.geronimo.connector.outbound.TransactionCachingInterceptor.getConnection(TransactionCachingInterceptor.java:96)
    at org.apache.geronimo.connector.outbound.ConnectionHandleInterceptor.getConnection(ConnectionHandleInterceptor.java:43)
    at org.apache.geronimo.connector.outbound.TCCLInterceptor.getConnection(TCCLInterceptor.java:39)
    at org.apache.geronimo.connector.outbound.ConnectionTrackingInterceptor.getConnection(ConnectionTrackingInterceptor.java:66)
    at org.apache.geronimo.connector.outbound.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:87)
    at org.tranql.connector.jdbc.DataSource.getConnection(DataSource.java:56)

I have not been able to successfully upgrade from 0.9.2 to 0.9.3 yet, and consequently I am not sure whether this is a new problem or not, but it did require me to kill -9 the lucidDbServer process to continue (!quit and !kill did not work). This happened when connections originated from both a Geronimo database pool and sqllineClient.

My guess is that it is another memory-related problem that will go away once I upgrade my RAM from 16GB to 32GB and move to v0.9.3, but I thought I'd throw it out there in case anyone has seen this before.

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Connection-limit-or-something-else-tp3122544p6089792.html
Sent from the luciddb-users mailing list archive at Nabble.com.
From: Nicholas G. <ngo...@dy...> - 2011-03-02 23:08:20

I'd suggest standardizing on LucidDB 0.9.3 (server and driver) and using the HTTP client. Other than the fact that some old wiki/benchmark docs/scripts have the RMI information, is there a reason to use RMI? That is, HTTP is now our primary JDBC communication protocol, so unless you have a reason to use RMI, I'm betting life will be easier for you if you use HTTP.

If you can, take a standard, out-of-the-box 0.9.3 installation and use the following:

String url = "jdbc:luciddb:http://localhost:8034";
Class.forName("org.luciddb.jdbc.LucidDbClientDriver").newInstance();
conn = DriverManager.getConnection(url);

If you want to use RMI, you'll need to work with what JVS gave you (including the unfortunate omission from our jar process in 0.9.3).

Hope that helps!

Kind Regards,
Nick

> John Sichi wrote:
>>
>> In 0.9.3, we completed the transition from
>> com.lucidera.jdbc.LucidDbHttpDriver to
>> org.luciddb.jdbc.LucidDbClientDriver.
>>
>> The com.lucidera class is now gone, so when you start using the 0.9.3
>> JDBC driver, you'll need to adjust your classname configuration
>> accordingly.
>>
>> The whoops part was that we intended to move
>> com.lucidera.jdbc.LucidDbRmiDriver to org.luciddb.jdbc.LucidDbRmiDriver.
>> And in fact the new class is there in the source tree, but it is not
>> being bundled into LucidDbClient.jar due to a bug in the build.
>>
>> So...if you are using RMI, keep using the LucidDbClient.jar from 0.9.2
>> (and the old com.lucidera classname). It should work fine against the
>> 0.9.3 driver.
>>
>> We'll rectify this for the next release.
>>
>> http://issues.eigenbase.org/browse/LDB-223
>>
>> JVS
>
> Yesterday, in this post
> http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-td6011447.html
> I explained my problem with the JDBC connection. Now I've found this
> discussion, and after downloading luciddb-0.9.2 I changed LucidDbClient.jar
> in the luciddb plugins (I also modified the class name because I use RMI),
> but now the problem is connection refused:
> [GRAVE: java.rmi.ConnectException: Connection refused to host: localhost;
> nested exception is: java.net.ConnectException: Connection refused]
>
> Best Regards
> Mauro
>
> --
> View this message in context: http://luciddb-users.1374590.n2.nabble.com/one-little-whoops-for-the-release-notes-tp5197442p6081208.html
> Sent from the luciddb-users mailing list archive at Nabble.com.
From: kingfesen <mau...@gm...> - 2011-03-02 14:51:07
|
John Sichi wrote: > > In 0.9.3, we completed the transition from > com.lucidera.jdbc.LucidDbHttpDriver to > org.luciddb.jdbc.LucidDbClientDriver. > > The com.lucidera class is now gone, so when you start using the 0.9.3 > JDBC driver, you'll need to adjust your classname configuration > accordingly. > > The whoops part was that we intended to move > com.lucidera.jdbc.LucidDbRmiDriver to org.luciddb.jdbc.LucidDbRmiDriver. > And in fact the new class is there in the source tree, but it is not > being bundled into LucidDbClient.jar due to a bug in the build. > > So...if you are using RMI, keep using the LucidDbClient.jar from 0.9.2 > (and the old com.lucidera classname). It should work fine against the > 0.9.3 driver. > > We'll rectify this for the next release. > > http://issues.eigenbase.org/browse/LDB-223 > > JVS > > ------------------------------------------------------------------------------ > ThinkGeek and WIRED's GeekDad team up for the Ultimate > GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the > lucky parental unit. See the prize list and enter to win: > http://p.sf.net/sfu/thinkgeek-promo > _______________________________________________ > luciddb-users mailing list > luc...@li... > https://lists.sourceforge.net/lists/listinfo/luciddb-users > Yesterday, in this post (http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-td6011447.html), I explained my problem with the JDBC connection. Now I've found this discussion, and after downloading luciddb-0.9.2 I put its LucidDbClient.jar into the luciddb plugins directory (I also changed the class name, because I use RMI), but now the problem is connection refused. [GRAVE: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused] Can you suggest a solution for this error? 
Best Regards Mauro -- View this message in context: http://luciddb-users.1374590.n2.nabble.com/one-little-whoops-for-the-release-notes-tp5197442p6081208.html Sent from the luciddb-users mailing list archive at Nabble.com. |
From: kingfesen <mau...@gm...> - 2011-03-01 10:39:57
|
John Sichi wrote: > > Your partsupp file is getting interpreted as having 6 columns when it > is only supposed to have 5. This line in createdbMultiProcess removes > the trailing separators from the rows; maybe you didn't run it. > > cat $i | sed -e 's/.$//g' > ./dataMultiProcess/$datadir/$i > > I think we found out that you can turn on lenient mode for the > flatfile reader to avoid having to do this munging, but I don't > remember fo rsure. > > JVS > > "Unix is user friendly; it's just choosy about its friends." > > On Fri, Feb 25, 2011 at 3:29 AM, kingfesen <mau...@gm...> > wrote: >> >> >> John Sichi wrote: >>> >>> On Wed, Feb 23, 2011 at 9:51 AM, kingfesen <mau...@gm...> >>> wrote: >>>> sorry john, i've again a problem... i've downloaded a full tpch.tar.gz >>>> from >>>> link and i've use(by command !run ~/create_table.sql) and >>>> create_table.sql >>>> (adding a line for create a schema),create_index.sql, but now i've not >>>> understand how load data (my file have .tbl extension for example >>>> nation.tbl) into a schema. i must to create a file.sql where there are >>>> istruction to load data (example insert into tpch.nation * from >>>> nation.tbl) >>>> or i must create a file wrapper if the second mode is correct how i >>>> create >>>> this?? >>> >>> http://pub.eigenbase.org/wiki/LucidDbTpch#LucidDB_Data_Load >>> >>> The script it is referring to is part of the LucidDB source >>> distribution, under luciddb/test/sql/tpch. >>> >>> https://github.com/eigenbase/luciddb/tree/master/test/sql/tpch >>> >>> JVS >>> >>> ------------------------------------------------------------------------------ >>> Free Software Download: Index, Search & Analyze Logs and other IT data >>> in >>> Real-Time with Splunk. Collect, index and harness all the fast moving IT >>> data >>> generated by your applications, servers and devices whether physical, >>> virtual >>> or in the cloud. Deliver compliance at lower cost and gain new business >>> insights. 
http://p.sf.net/sfu/splunk-dev2dev >>> _______________________________________________ >>> luciddb-users mailing list >>> luc...@li... >>> https://lists.sourceforge.net/lists/listinfo/luciddb-users >>> >>> >> >> i hope this is the last problem message...this database is hostile for me >> (or i'm stupid maybe this second one). I've read the guide, downloaded >> the >> tpch.tar.gz file but after i have run init.sql, create_table.sql and >> load_tables.sql i've this error... >> >> 1/9 insert into tpch.partsupp select * from tpch."partsupp"; >> error: from line 1, to colum 25: Number of insert target columns (5) does >> not equal number of soucre intems (6) (state=,code0). >> >> Where is my mistake? i've add a file code so it more easy correct my >> errors! >> http://luciddb-users.1374590.n2.nabble.com/file/n6064176/init.sql >> init.sql >> http://luciddb-users.1374590.n2.nabble.com/file/n6064176/create_tables.sql >> create_tables.sql >> http://luciddb-users.1374590.n2.nabble.com/file/n6064176/load_tables.sql >> load_tables.sql >> >> Thanks you so much for your patience!!! >> Regards >> -- >> View this message in context: >> http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-tp6011447p6064176.html >> Sent from the luciddb-users mailing list archive at Nabble.com. >> >> ------------------------------------------------------------------------------ >> Free Software Download: Index, Search & Analyze Logs and other IT data in >> Real-Time with Splunk. Collect, index and harness all the fast moving IT >> data >> generated by your applications, servers and devices whether physical, >> virtual >> or in the cloud. Deliver compliance at lower cost and gain new business >> insights. http://p.sf.net/sfu/splunk-dev2dev >> _______________________________________________ >> luciddb-users mailing list >> luc...@li... 
>> https://lists.sourceforge.net/lists/listinfo/luciddb-users >> > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search & Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT > data > generated by your applications, servers and devices whether physical, > virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ > luciddb-users mailing list > luc...@li... > https://lists.sourceforge.net/lists/listinfo/luciddb-users > > Finally the data is loaded! Now I have some questions about how to connect via JDBC from my connect.java. The database does not answer me; what did I do wrong?

import java.sql.*;
import java.io.*;

public class connect1 {
    public static void main(String[] args) {
        Connection conn = null;
        try {
            String url = "jdbc:luciddb://localhost:5434";
            Class.forName("org.luciddb.jdbc.LucidDbClientDriver").newInstance();
            conn = DriverManager.getConnection(url);
            System.out.println("Connesso al database!!!\n");
            conn.close();
        } catch (Exception e) {
        }
    }
}

I've set the JDBC driver in my classpath (CLASSPATH:=~/luciddb-0.9.3/plugin/LucidDbClientDriver.jar). Do I have to set a schema? What is wrong? If I can establish a connection then I can run tests on the TPC-H queries. Regards -- View this message in context: http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-tp6011447p6076771.html Sent from the luciddb-users mailing list archive at Nabble.com. |
From: Vishal B. <vis...@gm...> - 2011-02-26 08:47:11
|
Sorry, my bad. It was just a case of running "apt-get install libaio1" and things are looking up. Thanks. (the minor 'issue' is already tackled in the FAQ - case of RTFM on my part) Best wishes, Vishal Belsare On Sat, Feb 26, 2011 at 2:08 PM, Vishal Belsare <vis...@gm...> wrote: > John, > > I used the "sudo env JAVA_HOME=$JAVA_HOME ./install.sh" form to run > the install script. > > However, having managed to do that, now I cannot get the server to > fire up. Here's the output. Curiously, I managed to get LucidDB up and > running on a Windows machine a month ago without much trouble, but it > is looking a little tricky under Ubuntu. > > --- > vishal@goedel:/opt/luciddb-0.9.3/bin$ sudo env JAVA_HOME=$JAVA_HOME > ./lucidDbServer > Server personality: LucidDB > Loading database... > Exception in thread "main" org.eigenbase.util.EigenbaseException: > Failed to load database > at net.sf.farrago.resource.FarragoResource$_Def1.ex(FarragoResource.java:2021) > at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:292) > at net.sf.farrago.db.FarragoDbSingleton.pinReference(FarragoDbSingleton.java:100) > at net.sf.farrago.server.FarragoAbstractServer.start(FarragoAbstractServer.java:232) > at org.luciddb.session.LucidDbServer.main(LucidDbServer.java:62) > Caused by: org.eigenbase.util.EigenbaseException: > FennelResource.en_US.libaioRequired() > at net.sf.farrago.resource.FarragoResource$_Def0.ex(FarragoResource.java:1998) > at net.sf.farrago.fennel.FennelDbHandleImpl.handleNativeException(FennelDbHandleImpl.java:340) > at net.sf.farrago.fennel.FennelDbHandleImpl.executeCmd(FennelDbHandleImpl.java:267) > at net.sf.farrago.fennel.FennelDbHandleImpl.executeCmd(FennelDbHandleImpl.java:181) > at net.sf.farrago.fennel.FennelDbHandleImpl.<init>(FennelDbHandleImpl.java:90) > at net.sf.farrago.db.FarragoDatabase.loadFennel(FarragoDatabase.java:567) > at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:205) > ... 
3 more > vishal@goedel:/opt/luciddb-0.9.3/bin$ > --- > > > Best wishes, > Vishal Belsare > -- We agnostics often envy the True Believer, who thus acquires so easily that sense of security which is forever denied to us. ~ E. T. Jaynes |
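The resolution above ("apt-get install libaio1") fixes the FennelResource.en_US.libaioRequired() startup error, since LucidDB's native Fennel layer needs the Linux async-I/O library. A hedged pre-flight check (the libaio1 package name is the Debian/Ubuntu one used in the thread; other distros package it differently):

```shell
# Check whether the dynamic linker can resolve libaio before starting
# lucidDbServer; its absence produces the libaioRequired() exception above.
if ldconfig -p 2>/dev/null | grep -q 'libaio\.so'; then
    echo "libaio found"
else
    echo "libaio missing: on Debian/Ubuntu, sudo apt-get install libaio1"
fi
```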
From: Vishal B. <vis...@gm...> - 2011-02-26 08:38:32
|
John, I used the "sudo env JAVA_HOME=$JAVA_HOME ./install.sh" form to run the install script. However, having managed to do that, now I cannot get the server to fire up. Here's the output. Curiously, I managed to get LucidDB up and running on a Windows machine a month ago without much trouble, but it is looking a little tricky under Ubuntu. --- vishal@goedel:/opt/luciddb-0.9.3/bin$ sudo env JAVA_HOME=$JAVA_HOME ./lucidDbServer Server personality: LucidDB Loading database... Exception in thread "main" org.eigenbase.util.EigenbaseException: Failed to load database at net.sf.farrago.resource.FarragoResource$_Def1.ex(FarragoResource.java:2021) at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:292) at net.sf.farrago.db.FarragoDbSingleton.pinReference(FarragoDbSingleton.java:100) at net.sf.farrago.server.FarragoAbstractServer.start(FarragoAbstractServer.java:232) at org.luciddb.session.LucidDbServer.main(LucidDbServer.java:62) Caused by: org.eigenbase.util.EigenbaseException: FennelResource.en_US.libaioRequired() at net.sf.farrago.resource.FarragoResource$_Def0.ex(FarragoResource.java:1998) at net.sf.farrago.fennel.FennelDbHandleImpl.handleNativeException(FennelDbHandleImpl.java:340) at net.sf.farrago.fennel.FennelDbHandleImpl.executeCmd(FennelDbHandleImpl.java:267) at net.sf.farrago.fennel.FennelDbHandleImpl.executeCmd(FennelDbHandleImpl.java:181) at net.sf.farrago.fennel.FennelDbHandleImpl.<init>(FennelDbHandleImpl.java:90) at net.sf.farrago.db.FarragoDatabase.loadFennel(FarragoDatabase.java:567) at net.sf.farrago.db.FarragoDatabase.<init>(FarragoDatabase.java:205) ... 3 more vishal@goedel:/opt/luciddb-0.9.3/bin$ --- Best wishes, Vishal Belsare On Sat, Feb 26, 2011 at 2:29 AM, <luc...@li...> wrote: > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 25 Feb 2011 02:22:51 +0530 > From: Vishal Belsare <vis...@gm...> > Subject: Re: [luciddb-users] luciddb-users Digest, Vol 43, Issue 2 > To: luc...@li... 
> Message-ID: > <AANLkTi=ofBxaqRN0VNnPVHY52Zi=McD...@ma...> > Content-Type: text/plain; charset=ISO-8859-1 > > vishal@goedel:~$ sudo export $JAVA_HOME > sudo: export: command not found > > Strange. > > On Fri, Feb 25, 2011 at 2:06 AM, > <luc...@li...> wrote: >> >> Message: 6 >> Date: Thu, 24 Feb 2011 12:34:55 -0800 >> From: John Sichi <js...@gm...> >> Subject: Re: [luciddb-users] LucidDB 0.9.3 Installation Issue under >> ? ? ? ?Ubuntu ?10.04 >> To: Mailing list for users of LucidDB >> ? ? ? ?<luc...@li...> >> >> What do you get back from this command? >> >> sudo export $JAVA_HOME >> >> The install script does this: >> >> if [ -z "$JAVA_HOME" ]; then >> ? ?echo "The JAVA_HOME environment variable must be set to the location" >> ? ?echo "of a version 1.6 or higher JVM." >> ? ?exit 1; >> fi >> >> So somehow JAVA_HOME is not visible inside the script, which usually >> means it is set but not exported. >> >> JVS >> >> On Thu, Feb 24, 2011 at 12:26 PM, Vishal Belsare >> <vis...@gm...> wrote: >>> I am trying to install LucidDB on an Ubuntu machine. Untar'ing the >>> archive, and trying to run the install script led to an error about >>> the Java virtual machine being incorrect. I was using the IcedTea >>> OpenJDK, instead of Sun's JRE. >>> Thinking that this might the issue, I installed Sun' JRE, and set it >>> as the default JRE, and confirmed that by using 'which java' and 'java >>> -version'. Showed up fine. Edited, /etc/environment to set JAVA_HOME >>> to /usr/lib/jvm/java-6-sun and confirmed that by an echo $JAVA_HOME, >>> which shows correctly. 
>>> >>> ----- >>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo echo $JAVA_HOME >>> /usr/lib/jvm/java-6-sun >>> >>> vishal@goedel:/opt/luciddb-0.9.3/install$ which java >>> /usr/bin/java >>> >>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo java -version >>> java version "1.6.0_24" >>> Java(TM) SE Runtime Environment (build 1.6.0_24-b07) >>> Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode) >>> ----- >>> >>> However, when I try to run install.sh, I see the following message: >>> -- >>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo ./install.sh >>> The JAVA_HOME environment variable must be set to the location >>> of a version 1.6 or higher JVM. >>> -- >>> >>> I'd appreciate suggestions to fix this. Thanks. >>> >>> >>> Best wishes, >>> Vishal Belsare >>> >> >> >> End of luciddb-users Digest, Vol 43, Issue 2 >> ******************************************** >> > > > > ------------------------------ > > Message: 2 > Date: Thu, 24 Feb 2011 12:59:41 -0800 > From: John Sichi <js...@gm...> > Subject: Re: [luciddb-users] luciddb-users Digest, Vol 43, Issue 2 > To: Mailing list for users of LucidDB > <luc...@li...> > Message-ID: > <AAN...@ma...> > Content-Type: text/plain; charset=ISO-8859-1 > > Oh, export is a bash builtin so I guess you can't sudo it. > > You can instead do > > sudo su > export | grep JAVA_HOME > > and see if it shows anything. > > One way or another, you'll need to make sure JAVA_HOME is visible to > that script. > > JVS > > On Thu, Feb 24, 2011 at 12:52 PM, Vishal Belsare > <vis...@gm...> wrote: >> vishal@goedel:~$ sudo export $JAVA_HOME >> sudo: export: command not found >> >> Strange. >> >> On Fri, Feb 25, 2011 at 2:06 AM, >> <luc...@li...> wrote: >>> >>> Message: 6 >>> Date: Thu, 24 Feb 2011 12:34:55 -0800 >>> From: John Sichi <js...@gm...> >>> Subject: Re: [luciddb-users] LucidDB 0.9.3 Installation Issue under >>> ? ? ? ?Ubuntu ?10.04 >>> To: Mailing list for users of LucidDB >>> ? ? ? 
?<luc...@li...> >>> >>> What do you get back from this command? >>> >>> sudo export $JAVA_HOME >>> >>> The install script does this: >>> >>> if [ -z "$JAVA_HOME" ]; then >>> ? ?echo "The JAVA_HOME environment variable must be set to the location" >>> ? ?echo "of a version 1.6 or higher JVM." >>> ? ?exit 1; >>> fi >>> >>> So somehow JAVA_HOME is not visible inside the script, which usually >>> means it is set but not exported. >>> >>> JVS >>> >>> On Thu, Feb 24, 2011 at 12:26 PM, Vishal Belsare >>> <vis...@gm...> wrote: >>>> I am trying to install LucidDB on an Ubuntu machine. Untar'ing the >>>> archive, and trying to run the install script led to an error about >>>> the Java virtual machine being incorrect. I was using the IcedTea >>>> OpenJDK, instead of Sun's JRE. >>>> Thinking that this might the issue, I installed Sun' JRE, and set it >>>> as the default JRE, and confirmed that by using 'which java' and 'java >>>> -version'. Showed up fine. Edited, /etc/environment to set JAVA_HOME >>>> to /usr/lib/jvm/java-6-sun and confirmed that by an echo $JAVA_HOME, >>>> which shows correctly. >>>> >>>> ----- >>>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo echo $JAVA_HOME >>>> /usr/lib/jvm/java-6-sun >>>> >>>> vishal@goedel:/opt/luciddb-0.9.3/install$ which java >>>> /usr/bin/java >>>> >>>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo java -version >>>> java version "1.6.0_24" >>>> Java(TM) SE Runtime Environment (build 1.6.0_24-b07) >>>> Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode) >>>> ----- >>>> >>>> However, when I try to run install.sh, I see the following message: >>>> -- >>>> vishal@goedel:/opt/luciddb-0.9.3/install$ sudo ./install.sh >>>> The JAVA_HOME environment variable must be set to the location >>>> of a version 1.6 or higher JVM. >>>> -- >>>> >>>> I'd appreciate suggestions to fix this. Thanks. 
>>>> >>>> >>>> Best wishes, >>>> Vishal Belsare -- We agnostics often envy the True Believer, who thus acquires so easily that sense of security which is forever denied to us. ~ E. T. Jaynes |
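The underlying issue JVS diagnosed (JAVA_HOME set but not exported, or scrubbed by sudo's clean environment) can be demonstrated without sudo at all; a small sketch using the JVM path from the thread:

```shell
# Only exported variables reach a child process's environment; a bare
# assignment stays local to the current shell, which is why install.sh
# (a child process) reported JAVA_HOME as unset.
unset JAVA_HOME                      # discard any inherited/exported copy
JAVA_HOME=/usr/lib/jvm/java-6-sun    # set, but NOT exported
sh -c 'echo "child sees: [$JAVA_HOME]"'
export JAVA_HOME                     # now part of the environment
sh -c 'echo "child sees: [$JAVA_HOME]"'
# sudo additionally strips the environment, hence the working invocation:
#   sudo env JAVA_HOME=$JAVA_HOME ./install.sh
```

The first child prints `child sees: []`, the second `child sees: [/usr/lib/jvm/java-6-sun]`.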
From: John S. <js...@gm...> - 2011-02-26 06:38:13
|
Your partsupp file is getting interpreted as having 6 columns when it is only supposed to have 5. This line in createdbMultiProcess removes the trailing separators from the rows; maybe you didn't run it. cat $i | sed -e 's/.$//g' > ./dataMultiProcess/$datadir/$i I think we found out that you can turn on lenient mode for the flatfile reader to avoid having to do this munging, but I don't remember for sure. JVS "Unix is user friendly; it's just choosy about its friends." On Fri, Feb 25, 2011 at 3:29 AM, kingfesen <mau...@gm...> wrote: > > > John Sichi wrote: >> >> On Wed, Feb 23, 2011 at 9:51 AM, kingfesen <mau...@gm...> >> wrote: >>> sorry john, i've again a problem... i've downloaded a full tpch.tar.gz >>> from >>> link and i've use(by command !run ~/create_table.sql) and >>> create_table.sql >>> (adding a line for create a schema),create_index.sql, but now i've not >>> understand how load data (my file have .tbl extension for example >>> nation.tbl) into a schema. i must to create a file.sql where there are >>> istruction to load data (example insert into tpch.nation * from >>> nation.tbl) >>> or i must create a file wrapper if the second mode is correct how i >>> create >>> this?? >> >> http://pub.eigenbase.org/wiki/LucidDbTpch#LucidDB_Data_Load >> >> The script it is referring to is part of the LucidDB source >> distribution, under luciddb/test/sql/tpch. >> >> https://github.com/eigenbase/luciddb/tree/master/test/sql/tpch >> >> JVS >> >> ------------------------------------------------------------------------------ >> Free Software Download: Index, Search & Analyze Logs and other IT data in >> Real-Time with Splunk. Collect, index and harness all the fast moving IT >> data >> generated by your applications, servers and devices whether physical, >> virtual >> or in the cloud. Deliver compliance at lower cost and gain new business >> insights. 
http://p.sf.net/sfu/splunk-dev2dev >> _______________________________________________ >> luciddb-users mailing list >> luc...@li... >> https://lists.sourceforge.net/lists/listinfo/luciddb-users >> >> > > i hope this is the last problem message...this database is hostile for me > (or i'm stupid maybe this second one). I've read the guide, downloaded the > tpch.tar.gz file but after i have run init.sql, create_table.sql and > load_tables.sql i've this error... > > 1/9 insert into tpch.partsupp select * from tpch."partsupp"; > error: from line 1, to colum 25: Number of insert target columns (5) does > not equal number of soucre intems (6) (state=,code0). > > Where is my mistake? i've add a file code so it more easy correct my errors! > http://luciddb-users.1374590.n2.nabble.com/file/n6064176/init.sql init.sql > http://luciddb-users.1374590.n2.nabble.com/file/n6064176/create_tables.sql > create_tables.sql > http://luciddb-users.1374590.n2.nabble.com/file/n6064176/load_tables.sql > load_tables.sql > > Thanks you so much for your patience!!! > Regards > -- > View this message in context: http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-tp6011447p6064176.html > Sent from the luciddb-users mailing list archive at Nabble.com. > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search & Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ > luciddb-users mailing list > luc...@li... > https://lists.sourceforge.net/lists/listinfo/luciddb-users > |
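The sed munging JVS quotes works because dbgen-generated .tbl files terminate every row with a trailing '|', which a strict flatfile reader counts as one extra, empty column (hence 6 source items against 5-column partsupp). A self-contained illustration on a made-up row:

```shell
# dbgen-style partsupp row: five values, but a trailing '|' delimiter.
printf '1|2|3325|771.64|final deposits|\n' > partsupp.tbl

# Same transformation as the createdbMultiProcess line quoted above:
# delete the last character of every line.
sed -e 's/.$//g' partsupp.tbl   # -> 1|2|3325|771.64|final deposits
```

After the munge the reader sees exactly five fields per row, matching the table definition.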
From: John S. <js...@gm...> - 2011-02-26 05:14:05
|
I reproduced this one; it has to do with some changes with the (still in-progress) implementation for interval types. LucidDB 0.9.3 attempts to revalidate the original (0.9.2) applib definition before replacing it with the new one, leading to this failure. The workaround is to do this before running catalog.sql: drop specific function applib.add_hours_timestamp; Assuming you haven't referenced it in a view or other routine, this should work, and then when you run catalog.sql, it will add in the new version. JVS On Fri, Feb 25, 2011 at 12:58 PM, John Sichi <js...@gm...> wrote: > Hey Jeremy, > > Thanks for reporting this. I don't think we tested out the upgrade > procedure for the last release, so maybe there was a glitch; if so, > it's usually possible to come up with a patch to the catalog.sql > script. > > JVS > > On Fri, Feb 25, 2011 at 12:29 PM, Jeremy Lemaire <je...@vo...> wrote: >> >> Here is a little more info from the Trace.log: >> >> Feb 25, 2011 3:22:50 PM net.sf.farrago.db.FarragoDbSession prepare >> INFO: "TS" + CAST(CAST("N" AS BIGINT) * 60 * 60 * 1000 AS INTERVAL DAY(10) >> TO HOUR) >> Feb 25, 2011 3:22:50 PM org.eigenbase.sql.validate.SqlValidatorException >> <init> >> SEVERE: org.eigenbase.sql.validate.SqlValidatorException: Cast function >> cannot convert value of type BIGINT to type INTERVAL DAY(10) TO HOUR >> Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init> >> SEVERE: org.eigenbase.util.EigenbaseContextException: From line 1, column 8 >> to line 1, column 77 >> Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init> >> SEVERE: org.eigenbase.util.EigenbaseException: Invalid definition for >> routine "APPLIB"."ADD_HOURS_TIMESTAMP" >> Feb 25, 2011 3:22:50 PM net.sf.farrago.ddl.DdlValidator validate >> INFO: Revalidate exception on ADD_HOURS_TIMESTAMP: >> org.eigenbase.util.EigenbaseException: Invalid definition for routine >> "APPLIB"."ADD_HOURS_TIMESTAMP"; java.lang.NullPointerException: null >> Feb 25, 2011 3:22:50 
PM net.sf.farrago.db.FarragoDbSession rollbackImpl >> INFO: rollback >> Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException >> SEVERE: Invalid definition for routine "APPLIB"."ADD_HOURS_TIMESTAMP" >> Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException >> SEVERE: null >> >> -- >> View this message in context: http://luciddb-users.1374590.n2.nabble.com/0-9-2-to-0-9-3-Upgrade-tp6065470p6065912.html >> Sent from the luciddb-users mailing list archive at Nabble.com. >> >> ------------------------------------------------------------------------------ >> Free Software Download: Index, Search & Analyze Logs and other IT data in >> Real-Time with Splunk. Collect, index and harness all the fast moving IT data >> generated by your applications, servers and devices whether physical, virtual >> or in the cloud. Deliver compliance at lower cost and gain new business >> insights. http://p.sf.net/sfu/splunk-dev2dev >> _______________________________________________ >> luciddb-users mailing list >> luc...@li... >> https://lists.sourceforge.net/lists/listinfo/luciddb-users >> > |
From: John S. <js...@gm...> - 2011-02-25 20:59:55
|
Hey Jeremy, Thanks for reporting this. I don't think we tested out the upgrade procedure for the last release, so maybe there was a glitch; if so, it's usually possible to come up with a patch to the catalog.sql script. JVS On Fri, Feb 25, 2011 at 12:29 PM, Jeremy Lemaire <je...@vo...> wrote: > > Here is a little more info from the Trace.log: > > Feb 25, 2011 3:22:50 PM net.sf.farrago.db.FarragoDbSession prepare > INFO: "TS" + CAST(CAST("N" AS BIGINT) * 60 * 60 * 1000 AS INTERVAL DAY(10) > TO HOUR) > Feb 25, 2011 3:22:50 PM org.eigenbase.sql.validate.SqlValidatorException > <init> > SEVERE: org.eigenbase.sql.validate.SqlValidatorException: Cast function > cannot convert value of type BIGINT to type INTERVAL DAY(10) TO HOUR > Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init> > SEVERE: org.eigenbase.util.EigenbaseContextException: From line 1, column 8 > to line 1, column 77 > Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init> > SEVERE: org.eigenbase.util.EigenbaseException: Invalid definition for > routine "APPLIB"."ADD_HOURS_TIMESTAMP" > Feb 25, 2011 3:22:50 PM net.sf.farrago.ddl.DdlValidator validate > INFO: Revalidate exception on ADD_HOURS_TIMESTAMP: > org.eigenbase.util.EigenbaseException: Invalid definition for routine > "APPLIB"."ADD_HOURS_TIMESTAMP"; java.lang.NullPointerException: null > Feb 25, 2011 3:22:50 PM net.sf.farrago.db.FarragoDbSession rollbackImpl > INFO: rollback > Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException > SEVERE: Invalid definition for routine "APPLIB"."ADD_HOURS_TIMESTAMP" > Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException > SEVERE: null > > -- > View this message in context: http://luciddb-users.1374590.n2.nabble.com/0-9-2-to-0-9-3-Upgrade-tp6065470p6065912.html > Sent from the luciddb-users mailing list archive at Nabble.com. 
> > ------------------------------------------------------------------------------ > Free Software Download: Index, Search & Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ > luciddb-users mailing list > luc...@li... > https://lists.sourceforge.net/lists/listinfo/luciddb-users > |