From: Nicholas G. <ngo...@dy...> - 2011-05-08 17:18:44
Nice work Kevin!

On May 2, 2011, at 10:46 AM, Kevin Secretan wrote:

> Yeah, many of the Applib extensions aren't coded properly to handle
> cross-catalog communication. In this case, the "SYS_ROOT.DBA_TABLES" string
> was hardcoded into the source without the proper "LOCALDB" catalog prefix,
> so even if you loaded the jar in a separate catalog/schema it would still
> fail--you'd have to load the SYS_ROOT schema into the separate catalog.

I've added a jira task (http://jira.eigenbase.org/browse/EXT-5) to review/fix
this for the rest of APPLIB.

Nick

PS - This should be a relatively straightforward task that would make for a
great "first contribution" for anyone who wants to. :) If anyone has been
lurking and would be interested in becoming a contributor, this would be a
great one to start with!
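For anyone hitting the same error before the fix lands, the failure comes
down to two-part vs. three-part name resolution. A minimal sketch of the
difference, assuming a session whose catalog is set to DEMO (these statements
are illustrative, not the actual APPLIB source):

    -- Resolves relative to the session catalog, i.e. as
    -- DEMO.SYS_ROOT.DBA_TABLES, which does not exist; this is the
    -- hardcoded form that fails.
    SELECT COUNT(*) FROM SYS_ROOT.DBA_TABLES;

    -- Fully qualified with the catalog, so it resolves from any session:
    SELECT COUNT(*) FROM LOCALDB.SYS_ROOT.DBA_TABLES;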
From: rails.info i. <rai...@gm...> - 2011-05-08 11:47:43
so, no update on this yet?

On Wed, May 4, 2011 at 10:44 AM, rails.info info <rai...@gm...> wrote:

> Hello everyone,
>
> How do you create a superuser for LucidDB? I've tried looking at
> http://pub.eigenbase.org/wiki/LucidDbGrant , but I'm not able to figure it
> out.
>
> Any suggestions?
From: rails.info i. <rai...@gm...> - 2011-05-04 08:44:56
Hello everyone,

How do you create a superuser for LucidDB? I've tried looking at
http://pub.eigenbase.org/wiki/LucidDbGrant , but I'm not able to figure it
out.

Any suggestions?
From: Felipe I. <pli...@ho...> - 2011-05-04 00:06:12
Hi,

I'm doing research on LucidDB, and I'd like to know if there is a way to
determine how much disk space LucidDB uses for a given schema. Is there a way
to get this information?

Anyway, thanks for your help!

Att. Felipe Issa.
From: Julian H. <ju...@hy...> - 2011-05-03 16:42:12
LucidDB does not support foreign key constraints (see
http://pub.eigenbase.org/wiki/LucidDbTpch#LucidDB_Index_Creation). So,
SQLArchitect should not be generating that statement.

Julian

_____
From: rails.info info [mailto:rai...@gm...]
Sent: Tuesday, May 03, 2011 5:35 AM
To: luc...@li...
Subject: Re: [luciddb-users] SQLArchitect with LucidDB

So this was a stupid mistake on my part. I changed the name 'time' to
time_value and the error went away ^^

However, now I'm getting other errors like:

    ALTER TABLE fact_production ADD CONSTRAINT dim_employee_key_fact_production_fk
    FOREIGN KEY (employee_key)
    REFERENCES dim_employee (employee_key)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION;

    14:33:42 [ALTER - 0 row(s), 0.000 secs]
    org.eigenbase.sql.parser.SqlParseException: Encountered "CONSTRAINT" at
    line 1, column 33.
    Was expecting one of:
        "COLUMN" ...
        <IDENTIFIER> ...
        <QUOTED_IDENTIFIER> ...
        <UNICODE_QUOTED_IDENTIFIER> ...

Now, this is code generated by SQLArchitect after I followed the steps
documented here: http://pub.eigenbase.org/wiki/LucidDbPowerArchitect , so
this should work, right?

Cheers,
Bryan

On Tue, May 3, 2011 at 2:07 PM, rails.info info <rai...@gm...> wrote:

> Hello everyone,
>
> I've been trying to use SQL Power Architect with LucidDB. Everything works
> nicely except for the DDL generation. When I follow the steps described
> here: http://pub.eigenbase.org/wiki/LucidDbPowerArchitect and try to
> generate the DDL I get the following exception:
>
>     2571742 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
>     - executing:
>
>     CREATE TABLE dim_time (
>         time_key INTEGER NOT NULL,
>         time TIME NOT NULL,
>         hours24 SMALLINT NOT NULL,
>         hours12 SMALLINT NOT NULL,
>         minutes SMALLINT NOT NULL,
>         seconds SMALLINT NOT NULL,
>         am_pm CHAR(3) NOT NULL,
>         CONSTRAINT dim_time_key PRIMARY KEY (time_key)
>     )
>
>     2571758 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
>     - sql statement failed: org.eigenbase.sql.parser.SqlParseException:
>     Encountered "time" at line 6, column 17.
>     Was expecting one of:
>         "CONSTRAINT" ...
>         "PRIMARY" ...
>         "UNIQUE" ...
>         <IDENTIFIER> ...
>         <QUOTED_IDENTIFIER> ...
>         <UNICODE_QUOTED_IDENTIFIER> ...
>
> Am I doing something wrong here, or is this a bug?
>
> Regards,
> Bryan
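The two errors in this thread have separate fixes: the CREATE TABLE fails
because TIME is an SQL reserved word (renaming the column works, as Bryan
found, and double-quoting the identifier should as well), while the ALTER
TABLE fails because LucidDB has no foreign key support, so those statements
must simply be dropped from the generated script. A hedged sketch of DDL
LucidDB should accept (column list taken from the thread; not tested against
a live server):

    CREATE TABLE dim_time (
        time_key INTEGER NOT NULL,
        "time" TIME NOT NULL,  -- quoting sidesteps the reserved word
        hours24 SMALLINT NOT NULL,
        hours12 SMALLINT NOT NULL,
        minutes SMALLINT NOT NULL,
        seconds SMALLINT NOT NULL,
        am_pm CHAR(3) NOT NULL,
        CONSTRAINT dim_time_key PRIMARY KEY (time_key)
    );

    -- No ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY statements:
    -- LucidDB does not support them, per Julian's reply above.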
From: rails.info i. <rai...@gm...> - 2011-05-03 12:35:33
So this was a stupid mistake on my part. I changed the name 'time' to
time_value and the error went away ^^

However, now I'm getting other errors like:

    ALTER TABLE fact_production ADD CONSTRAINT dim_employee_key_fact_production_fk
    FOREIGN KEY (employee_key)
    REFERENCES dim_employee (employee_key)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION;

    14:33:42 [ALTER - 0 row(s), 0.000 secs]
    org.eigenbase.sql.parser.SqlParseException: Encountered "CONSTRAINT" at
    line 1, column 33.
    Was expecting one of:
        "COLUMN" ...
        <IDENTIFIER> ...
        <QUOTED_IDENTIFIER> ...
        <UNICODE_QUOTED_IDENTIFIER> ...

Now, this is code generated by SQLArchitect after I followed the steps
documented here: http://pub.eigenbase.org/wiki/LucidDbPowerArchitect , so
this should work, right?

Cheers,
Bryan

On Tue, May 3, 2011 at 2:07 PM, rails.info info <rai...@gm...> wrote:

> Hello everyone,
>
> I've been trying to use SQL Power Architect with LucidDB. Everything works
> nicely except for the DDL generation. When I follow the steps described
> here: http://pub.eigenbase.org/wiki/LucidDbPowerArchitect and try to
> generate the DDL I get the following exception:
>
>     2571742 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
>     - executing:
>
>     CREATE TABLE dim_time (
>         time_key INTEGER NOT NULL,
>         time TIME NOT NULL,
>         hours24 SMALLINT NOT NULL,
>         hours12 SMALLINT NOT NULL,
>         minutes SMALLINT NOT NULL,
>         seconds SMALLINT NOT NULL,
>         am_pm CHAR(3) NOT NULL,
>         CONSTRAINT dim_time_key PRIMARY KEY (time_key)
>     )
>
>     2571758 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
>     - sql statement failed: org.eigenbase.sql.parser.SqlParseException:
>     Encountered "time" at line 6, column 17.
>     Was expecting one of:
>         "CONSTRAINT" ...
>         "PRIMARY" ...
>         "UNIQUE" ...
>         <IDENTIFIER> ...
>         <QUOTED_IDENTIFIER> ...
>         <UNICODE_QUOTED_IDENTIFIER> ...
>
> Am I doing something wrong here, or is this a bug?
>
> Regards,
> Bryan
From: rails.info i. <rai...@gm...> - 2011-05-03 12:07:09
Hello everyone,

I've been trying to use SQL Power Architect with LucidDB. Everything works
nicely except for the DDL generation. When I follow the steps described here:
http://pub.eigenbase.org/wiki/LucidDbPowerArchitect and try to generate the
DDL I get the following exception:

    2571742 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
    - executing:

    CREATE TABLE dim_time (
        time_key INTEGER NOT NULL,
        time TIME NOT NULL,
        hours24 SMALLINT NOT NULL,
        hours12 SMALLINT NOT NULL,
        minutes SMALLINT NOT NULL,
        seconds SMALLINT NOT NULL,
        am_pm CHAR(3) NOT NULL,
        CONSTRAINT dim_time_key PRIMARY KEY (time_key)
    )

    2571758 [Thread-79] INFO ca.sqlpower.architect.swingui.SQLScriptDialog
    - sql statement failed: org.eigenbase.sql.parser.SqlParseException:
    Encountered "time" at line 6, column 17.
    Was expecting one of:
        "CONSTRAINT" ...
        "PRIMARY" ...
        "UNIQUE" ...
        <IDENTIFIER> ...
        <QUOTED_IDENTIFIER> ...
        <UNICODE_QUOTED_IDENTIFIER> ...

Am I doing something wrong here, or is this a bug?

Regards,
Bryan
From: Kevin S. <kse...@dy...> - 2011-05-02 18:16:00
Hey Eric,

Yeah, many of the Applib extensions aren't coded properly to handle
cross-catalog communication. In this case, the "SYS_ROOT.DBA_TABLES" string
was hardcoded into the source without the proper "LOCALDB" catalog prefix, so
even if you loaded the jar in a separate catalog/schema it would still
fail--you'd have to load the SYS_ROOT schema into the separate catalog.

I just committed a change set, so this will be fixed in 0.9.4. I also
uploaded a local build to http://shared.nincheats.net/misc/eigenbase-applib.jar
that I'll keep around for a few days. (You can just overwrite the existing
eigenbase-applib.jar in your plugin directory, and after restarting the
LucidDB server your query should just work.)

On Mon, May 2, 2011 at 9:45 AM, Eric Freed <ep...@me...> wrote:

> Hi,
>
> I am trying to call APPLIB.CREATE_TABLE_AS from a new catalog, but I am
> getting an error. I created a catalog:
>
>     create catalog DEMO;
>     set catalog 'DEMO';
>
>     CALL LOCALDB.APPLIB.CREATE_TABLE_AS( 'SAMPLE_DATA', 'CUSTOMERS',
>         'select * from FILE_DUMP_SCHEMA.CUSTOMERS', true );
>
> Here is the error:
>
>     Error: From line 1, column 22 to line 1, column 40: Table
>     'SYS_ROOT.DBA_TABLES' not found
>
> I guess that the applib is imported into LOCALDB and cannot be called from
> another catalog. Can I import APPLIB into the new catalog?
From: Eric F. <ep...@me...> - 2011-05-02 17:10:58
Hi,

I am trying to call APPLIB.CREATE_TABLE_AS from a new catalog, but I am
getting an error. I created a catalog:

    create catalog DEMO;
    set catalog 'DEMO';

    CALL LOCALDB.APPLIB.CREATE_TABLE_AS( 'SAMPLE_DATA', 'CUSTOMERS',
        'select * from FILE_DUMP_SCHEMA.CUSTOMERS', true );

Here is the error:

    Error: From line 1, column 22 to line 1, column 40: Table
    'SYS_ROOT.DBA_TABLES' not found

I guess that the applib is imported into LOCALDB and cannot be called from
another catalog. Can I import APPLIB into the new catalog?
From: Eric v. C. <evo...@ro...> - 2011-04-27 15:38:07
Hi!

We are trying to evaluate LucidDB as an alternative database to common RDBMS'
for analytic purposes. We have the constraint that we need to connect to
LucidDB from a front-end application (SAP Business Objects) via:

- JDBC (based on Java 5)
- ODBC (unixODBC)

At the moment we have the problem that the out-of-the-box JDBC driver is
based on Java 6 and can't be used by a Java 5 implementation. We have already
tried the PG2Lucid-Bridge, but it seems there are some difficulties with the
metadata functions, which our front end also needs.

Does a JDBC driver for Java 5 exist? Any other suggestions?

Thanks!

Regards,
Eric.
From: Nicholas G. <ngo...@dy...> - 2011-04-26 14:46:36
On Apr 26, 2011, at 1:37 AM, Eric von Czapiewski wrote:

> I'm searching for the sources of the Lucid JDBC driver (LucidDbClient.jar).
> Could anyone please tell me where to get this?

LucidDB is a "Farrago" vJDBC driver. Its implementation sits in a few spots.

In Eigenbase perforce (http://p4webhost.eigenbase.org:8080/ web viewer):

    //open/dev/farrago/src/org/luciddb/jdbc/
    //open/dev/farrago/src/net/sf/farrago/jdbc/client/

vJDBC is an open source project here: http://vjdbc.sourceforge.net/

Although we keep a patched version of our own here, since vJDBC isn't very
active these days:

    //open/dev/thirdparty/vjdbc_1_6_5-jvs.zip

Nick
From: Eric v. C. <evo...@ro...> - 2011-04-26 08:55:44
Hi there!

I'm searching for the sources of the Lucid JDBC driver (LucidDbClient.jar).
Could anyone please tell me where to get this?

Thanks & Regards,
Eric.
From: Nicholas G. <ngo...@dy...> - 2011-04-23 16:04:40
If you are still having issues with this (as you alluded to in another
email), can you:

a) Post the error messages you are now getting, having done JVS's suggested
   workaround, or
b) Reproduce the issue and log an issue at jira.eigenbase.org

On Feb 25, 2011, at 9:13 PM, John Sichi wrote:

> The workaround is to do this before running catalog.sql:
>
>     drop specific function applib.add_hours_timestamp;
>
> Assuming you haven't referenced it in a view or other routine, this should
> work, and then when you run catalog.sql, it will add in the new version.
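For anyone else tripping over the same routine clash when reinstalling
APPLIB, the quoted workaround amounts to this sequence from sqllineClient
(the script path below is a placeholder; use the catalog.sql shipped with
your release):

    -- Remove the stale routine first; per JVS's note, this is safe only if
    -- no view or other routine references it:
    drop specific function applib.add_hours_timestamp;

    -- Then re-run the APPLIB install script, which recreates the function
    -- in its new form, e.g.:
    -- !run /path/to/applib/catalog.sql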
From: Matt C. <mca...@pe...> - 2011-04-23 14:49:55
Dear LucidDB users,

I just wanted to let you all know that Nick and I have been improving LucidDB
integration with Kettle (Pentaho Data Integration). Nick did most of the hard
work, but recently I've stepped in to accommodate a proof of concept on
LucidDB. So far I've been testing (and tweaking) the streaming bulk loader
for INSERT and UPDATE loads, and I've tweaked the data types a bit to make
sure the translation from Kettle data types to LucidDB data types goes well.

In any case, feel free to give feedback or ask questions regarding your
PDI/LucidDB combinations. You can do that on this mailing list or on the
Kettle forum.

Regards,
Matt

--
Matt Casters <mca...@pe...>
Chief Data Integration, Kettle founder, Author of Pentaho Kettle Solutions
<http://www.amazon.com/Pentaho-Kettle-Solutions-Building-Integration/dp/0470635177>
(Wiley <http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470635177.html>)
Pentaho : The Commercial Open Source Alternative for Business Intelligence
From: Nicholas G. <ngo...@dy...> - 2011-04-20 23:28:05
On Apr 20, 2011, at 3:13 PM, Pedro Alves wrote:

> Connecting both of them is not an option - I'm transferring a big db to
> work locally over a very slow network. I honestly wouldn't expect a backup
> / restore to be arch dependent (maybe a note on the wiki?)

I've added a bullet on the wiki about the architecture-specific nature of the
physical backup. Physical backup and restore (including incremental) is done
at a physical level and is highly optimized. Restore is close to the speed of
"cp", which is great. However, Fennel storage (db.dat) is architecture
(lin/win) specific (think word size, data bit addressing sizes, etc). If it
were the same architecture, you'd be impressed, I'm sure! ;)

Again, if the physical backup/restore won't work because of your architecture
differences, and you have a slow network, you can do a LOGICAL
backup/restore.

First, as outlined in the wiki above, do a logical export
(http://pub.eigenbase.org/wiki/LucidDbSysRoot_EXPORT_SCHEMA_TO_FILE), then
tar/gzip that output and scp it over.

Second, you can also go table by table, with inline compression, using
WRITE_ROWS_TO_FILE and READ_ROWS_FROM_FILE:

    http://pub.eigenbase.org/wiki/AppLib_WRITE_ROWS_TO_FILE
    http://pub.eigenbase.org/wiki/LucidDbAppLib_READ_ROWS_FROM_FILE

In both cases (the READ/WRITE table functions or the EXPORT_SCHEMA_TO_FILE
UDP) you'll need to (re)create your tables on the new LucidDB. You can
explore exporting/importing the catalog (via XMI), but I'm betting / hoping
you have your DDL lying around.

> Is it an option just to rsync the catalog dir?

Nope... the db.dat is architecture specific. Copying won't work across archs
either.

Nick
From: Pedro A. <pe...@ne...> - 2011-04-20 22:14:02
Connecting both of them is not an option - I'm transferring a big db to work
locally over a very slow network. I honestly wouldn't expect a backup /
restore to be arch dependent (maybe a note on the wiki?)

Is it an option just to rsync the catalog dir?

-pedro

On Wed, Apr 20, 2011 at 08:46:47AM -0700, Nicholas Goodman wrote:

> > Getting issues with lucid backup / restore. Same version, different
> > machines, one 64 and one 32 bits.
>
> Pedro,
>
> Backup/Restore will only work properly on the same architecture; mixing 32
> and 64 bits won't work. There IS, however, the EXPORT_SCHEMA_TO_FILE, which
> is a *logical* export of the data/structures that can be read back in using
> the Flat File Foreign Data connector on another server.
>
> We've got some other stuff that would help with this in the upcoming 0.9.4
> release (http://pub.eigenbase.org/wiki/LucidDbSysRoot_GENERATE_DDL_FOR)
> that will allow you to export your entire DDL and then run it on the new
> LucidDB in conjunction with the EXPORT_SCHEMA_TO_FILE.
>
> Also, and perhaps the easiest, you can simply connect the Lin64 instance to
> the Lin32 instance (via SYS_JDBC connector) and replicate the data over
> directly (INSERT INTO NEWTABLE SELECT * FROM LIN32.OLDTABLE).
>
> Let me know if you need anything more, Pedro!
>
> Nicholas Goodman
> Founder, CEO
> DynamoBI Corporation
> ngo...@dy...

--
Pedro Alves
From: Julian H. <jul...@sq...> - 2011-04-20 17:13:45
> Getting issues with lucid backup / restore. Same version, different
> machines, one 64 and one 32 bits.

Might it be this issue:
http://fennel-developers.1374754.n2.nabble.com/Tuple-Accessor-alignment-change-tc6285189.html

It's always dangerous to assume that a problem is related to another problem
one saw a few days previously. But this 32- versus 64-bit alignment issue can
only really occur in two ways: if you are writing data across a network, or
if you are reading data in one architecture that was written in another
architecture.

Julian
From: Nicholas G. <ngo...@dy...> - 2011-04-20 16:50:57
> Getting issues with lucid backup / restore. Same version, different
> machines, one 64 and one 32 bits.

Pedro,

Backup/Restore will only work properly on the same architecture; mixing 32
and 64 bits won't work. There IS, however, the EXPORT_SCHEMA_TO_FILE, which
is a *logical* export of the data/structures that can be read back in using
the Flat File Foreign Data connector on another server.

We've got some other stuff that would help with this in the upcoming 0.9.4
release (http://pub.eigenbase.org/wiki/LucidDbSysRoot_GENERATE_DDL_FOR) that
will allow you to export your entire DDL and then run it on the new LucidDB
in conjunction with the EXPORT_SCHEMA_TO_FILE.

Also, and perhaps the easiest, you can simply connect the Lin64 instance to
the Lin32 instance (via SYS_JDBC connector) and replicate the data over
directly (INSERT INTO NEWTABLE SELECT * FROM LIN32.OLDTABLE).

Let me know if you need anything more, Pedro!

Nicholas Goodman
Founder, CEO
DynamoBI Corporation
ngo...@dy...
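A hedged sketch of that SYS_JDBC route, run on the new (Lin64) instance. The
server name, host, port, credentials, and schema/table names below are all
placeholders for this example:

    -- Link to the old (Lin32) server over its JDBC interface:
    CREATE SERVER LIN32_LINK
    FOREIGN DATA WRAPPER SYS_JDBC
    OPTIONS (
        DRIVER_CLASS 'org.luciddb.jdbc.LucidDbClientDriver',
        URL 'jdbc:luciddb:rmi://old-host:5434',
        USER_NAME 'sa'
    );

    -- Expose the remote schema locally, then copy the data over directly:
    CREATE SCHEMA LIN32_STAGING;
    IMPORT FOREIGN SCHEMA OLDSCHEMA FROM SERVER LIN32_LINK INTO LIN32_STAGING;
    INSERT INTO NEWSCHEMA.NEWTABLE SELECT * FROM LIN32_STAGING.OLDTABLE;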
From: Pedro A. <pe...@ne...> - 2011-04-20 14:28:06
Hey there,

Getting issues with lucid backup / restore. Same version, different machines,
one 64 and one 32 bits.

    pedro@nicola:bin$ ./sqllineEngine
    Connecting to jdbc:luciddb:
    Connected to: LucidDB (version 0.9.3)
    Driver: LucidDbJdbcDriver (version 0.9)
    Autocommit status: true
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    sqlline version 1.0.8-eb by Marc Prud'hommeaux
    0: jdbc:luciddb:> CALL SYS_ROOT.RESTORE_DATABASE_WITHOUT_CATALOG('/path/to/fullbackup');
    Error: System call failed: Read from backup file
    /home/pedro/projectos/stonegate/db/fullbackup/FennelDataDump.dat.gz
    failed: No such file or directory (state=,code=0)
    0: jdbc:luciddb:>

Any tips?

--
Pedro Alves
From: Jeremy L. <je...@vo...> - 2011-04-14 14:37:25
Here is an additional piece of information: the following exception is being
thrown on the coordinator node:

    INFO: Connecting to datasource FarragoDBMS
    Apr 14, 2011 10:02:49 AM org.eigenbase.util.EigenbaseException
    SEVERE: org.eigenbase.util.EigenbaseException: Invalid URL:
    jdbc:luciddb:http://remotenode.mydomain.com:8034
    Apr 14, 2011 10:02:49 AM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException
    SEVERE: Invalid URL: jdbc:luciddb:http://remotenode.mydomain.com:8034
    Apr 14, 2011 10:02:49 AM de.simplicit.vjdbc.VirtualDriver connect
    INFO: VJdbc-URL: servlet:http://remotenode.mydomain.com:8034/vjdbc,FarragoDBMS
    Apr 14, 2011 10:02:49 AM de.simplicit.vjdbc.VirtualDriver connect
    INFO: VJdbc in Servlet-Mode, using URL http://remotenode.mydomain.com:8034/vjdbc
    Apr 14, 2011 10:02:49 AM de.simplicit.vjdbc.VirtualDriver connect

Is my URL truly invalid, and if so, what is the proper format?
From: Nicholas G. <ngo...@dy...> - 2011-03-31 18:23:53
JVS,

Your contributions to the project, foundation, and building the LucidDB user
community are immeasurable. We are indeed extremely sad to see you leave; as
we have discussed at length directly, we hope to be a long-term solution for
the impediments leading you to this decision.

We at DynamoBI know we have big shoes to fill as we fulfill the remaining
project leadership responsibilities (releases, integration branch to branch,
website maintenance, etc). LucidDB users should know that things continue to
move forward with the numerous existing Eigenbase developers (Kevin S,
myself, Julian Hyde, Hunter, chard, jhahn, mberkowitz, sunil, etc) and that a
May release for 0.9.4 is still on track.

We all wish you well on the variety of open source projects at Facebook; you
will be sorely missed here.

Nicholas Goodman
Founder, CEO
DynamoBI Corporation
ngo...@dy...

> From: John Sichi <js...@gm...>
> Date: Wed, Mar 30, 2011 at 9:54 PM
> Subject: [luciddb-users] signing off
> To: Farrago Developers <far...@ei...>, fen...@ei..., Mailing list for
> users of LucidDB <luc...@li...>
>
> Hey all,
>
> Due to insurmountable impediments I've experienced within the current
> organizational structure, I'm no longer able to be effective at cultivating
> increased participation in Eigenbase, so I've decided to stop working on
> both Eigenbase and LucidDB in their current forms entirely. I may continue
> playing around with the code via a GPL fork in github; not sure yet. One
> way or another, I'll no longer be checking into Perforce, nor will I be
> reviewing changes there. I have already resigned from my existing roles in
> the non-profit, and am handing off all responsibilities to the leadership
> there.
>
> It's been great working with everyone, etc etc. If you have any questions,
> feel free to email me privately.
>
> JVS
>
> p.s. No, it's not April 1 yet in any time zone.
From: Jeremy L. <je...@vo...> - 2011-03-31 14:58:14
ngoodman wrote:
> Follow on questions:
> - Can you (or will you) ultimately use Firewater? It's designed for
>   PRECISELY the use case you're articulating.
> - What is your degree of parallelism set to on your coordinator node?

Using Firewater sounds great; however, it was not available when I initially
developed our system, so it would take a lot of effort to get it integrated.
Given that I have already written several scripts to do precisely what
Firewater does, I am not sure what the benefit would be of using it at this
point, so I am inclined to say that I will not use it for now.

Assuming I was given the time to integrate Firewater, the first hurdle would
be upgrading my existing server from 0.9.2 -> 0.9.3 -> 0.9.4. So far I have
not had success with the 0.9.2 -> 0.9.3 upgrade procedure, even with John's
suggested workaround
(http://luciddb-users.1374590.n2.nabble.com/0-9-2-to-0-9-3-Upgrade-td6065470.html).
Going forward this is going to be a huge roadblock for me, with or without
Firewater.

My degreeOfParallelism session setting is at the default of 1. The EXPLAIN
PLAN results are attached, although I am not sure how useful they will be
given that the parameterized prepared statement caused an exception before
the plan was returned.

http://luciddb-users.1374590.n2.nabble.com/file/n6227465/ExplainPlan.txt
ExplainPlan.txt
From: John S. <js...@gm...> - 2011-03-31 04:54:09
Hey all,

Due to insurmountable impediments I've experienced within the current
organizational structure, I'm no longer able to be effective at cultivating
increased participation in Eigenbase, so I've decided to stop working on both
Eigenbase and LucidDB in their current forms entirely. I may continue playing
around with the code via a GPL fork in github; not sure yet. One way or
another, I'll no longer be checking into Perforce, nor will I be reviewing
changes there. I have already resigned from my existing roles in the
non-profit, and am handing off all responsibilities to the leadership there.

It's been great working with everyone, etc etc. If you have any questions,
feel free to email me privately.

JVS

p.s. No, it's not April 1 yet in any time zone.
From: Nicholas G. <ngo...@dy...> - 2011-03-31 00:44:53
On Mar 30, 2011, at 1:34 PM, Jeremy Lemaire wrote:

> Questions
>
> 1. Is there a workaround for this other than not using Java
>    PreparedStatement with parameters?

It might be related to the UNION ALL portions and the PreparedStatement, not
JUST the PreparedStatement and remote server by themselves. What does the
explain plan show for:

    SELECT SUM("COUNT") as "COUNT", f.npa, f.filled, f."LANGUAGE"
    FROM "INVENTORY_ANALYZER_SCHEMA"."INVENTORY_BY_LANGUAGE_2011_Q1" f
    LEFT JOIN ad_inventory_warehouse.publisher_dimension pub USING (source_id)
    WHERE f.source_id IN (?)
      AND pub.publisher_id IN (?)
      AND f.DATETIME >= timestamp '2011-03-29 00:00:00'
      AND f.DATETIME <= timestamp '2011-03-29 23:59:59'
    GROUP BY f.npa, f.filled, f."LANGUAGE"

when run as a PreparedStatement? Also, if you can submit the explain plans
(which will have the information on the remote SQL being executed, with
implementation) for both the good/slow ones, that'd be great.

> 2. I am currently on LucidDb v0.9.2, will a software upgrade fix the
>    problem?

Maybe... JVS did a bunch of commits as part of his work with Firewater that
might help with some of this. Actually, for the type of partitioning you're
doing, you almost certainly want to use that technology. We are currently
preparing it as an "add on" extension to deploy on top of an existing LucidDB
installation. We intend to have a Firewater "mod" available with the 0.9.4
release.

http://p4webhost.eigenbase.org:8080/@md=d&cd=//open/dy/dev/firewater/&c=rOd@/14145?ac=10

Not quite ready yet, but getting there.

> 3. Assuming this is a problem with the LucidDbClient.jar, can I safely use
>    a 0.9.3 LucidDbClient.jar with a 0.9.2 server installation?

We have two customers that have attempted this (and are also subscribed to
this email list) who can attest to the issues they faced. vJDBC was upgraded
during these releases to fix issues with explain plan, etc. We were able to
ultimately get a 0.9.3 installation to talk to a 0.9.2 LucidDB, but we
reintroduced the bugs to do so. This is not recommended. :)

Follow on questions:
- Can you (or will you) ultimately use Firewater? It's designed for PRECISELY
  the use case you're articulating.
- What is your degree of parallelism set to on your coordinator node?
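Two hedged snippets for following up on those questions from sqlline. The
session-parameter name comes from Jeremy's reply in this thread; the literal
values stand in for the ? parameters and are placeholders only:

    -- Check the effect of raising the coordinator's parallelism (default 1):
    alter session set "degreeOfParallelism" = 4;

    -- Capture the plan Nick asked for; run it once with literals (below)
    -- and once as the parameterized PreparedStatement for comparison:
    EXPLAIN PLAN FOR
    SELECT SUM("COUNT") as "COUNT", f.npa, f.filled, f."LANGUAGE"
    FROM "INVENTORY_ANALYZER_SCHEMA"."INVENTORY_BY_LANGUAGE_2011_Q1" f
    LEFT JOIN ad_inventory_warehouse.publisher_dimension pub USING (source_id)
    WHERE f.source_id IN (123)
      AND pub.publisher_id IN (456)
      AND f.DATETIME >= timestamp '2011-03-29 00:00:00'
      AND f.DATETIME <= timestamp '2011-03-29 23:59:59'
    GROUP BY f.npa, f.filled, f."LANGUAGE";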
From: Jeremy L. <je...@vo...> - 2011-03-30 20:35:00
Summary

As part of a scale-out effort I have used distributed horizontal partitioning
and ran into what appears to be a bug when using a Java PreparedStatement
with parameters. Specifically, if a Java PreparedStatement is used with
parameters, then the dispatched query sent to the remote server contains all
table attributes and no filters. Filtering is not done until the very large
result set is returned back to the coordinator.

On the other hand, if the same query is sent via a Java PreparedStatement
with the same parameter values hardcoded, then the filters are retained when
the query is dispatched to the remote server, and the proper table columns
are also sent. This results in a query time of 1985 ms instead of 72475 ms.
As would be expected, the same query in sqllineClient or Squirrel takes about
2 seconds.

Test Setup and Results

http://luciddb-users.1374590.n2.nabble.com/file/n6224886/HorizontalPartitioningAcrossDistributedServers.txt
HorizontalPartitioningAcrossDistributedServers.txt

Questions

1. Is there a workaround for this other than not using Java PreparedStatement
   with parameters?
2. I am currently on LucidDb v0.9.2, will a software upgrade fix the problem?
3. Assuming this is a problem with the LucidDbClient.jar, can I safely use a
   0.9.3 LucidDbClient.jar with a 0.9.2 server installation?
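The contrast Jeremy describes, sketched as the SQL each approach effectively
dispatches (the table and filter names are hypothetical stand-ins, and
inlining literals is the only workaround surfaced in this thread):

    -- Bound-parameter form: the filter is not pushed to the remote server,
    -- so the remote node returns every row and column for local filtering.
    SELECT * FROM remote_wh.big_fact WHERE source_id IN (?);

    -- Literal form: the filter (and column pruning) travel with the
    -- dispatched query, so only matching rows come back.
    SELECT * FROM remote_wh.big_fact WHERE source_id IN (123);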