From: Jeremy L. <je...@vo...> - 2011-02-25 20:29:33

Here is a little more info from the Trace.log:

Feb 25, 2011 3:22:50 PM net.sf.farrago.db.FarragoDbSession prepare
INFO: "TS" + CAST(CAST("N" AS BIGINT) * 60 * 60 * 1000 AS INTERVAL DAY(10) TO HOUR)
Feb 25, 2011 3:22:50 PM org.eigenbase.sql.validate.SqlValidatorException <init>
SEVERE: org.eigenbase.sql.validate.SqlValidatorException: Cast function cannot convert value of type BIGINT to type INTERVAL DAY(10) TO HOUR
Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init>
SEVERE: org.eigenbase.util.EigenbaseContextException: From line 1, column 8 to line 1, column 77
Feb 25, 2011 3:22:50 PM org.eigenbase.util.EigenbaseException <init>
SEVERE: org.eigenbase.util.EigenbaseException: Invalid definition for routine "APPLIB"."ADD_HOURS_TIMESTAMP"
Feb 25, 2011 3:22:50 PM net.sf.farrago.ddl.DdlValidator validate
INFO: Revalidate exception on ADD_HOURS_TIMESTAMP: org.eigenbase.util.EigenbaseException: Invalid definition for routine "APPLIB"."ADD_HOURS_TIMESTAMP"; java.lang.NullPointerException: null
Feb 25, 2011 3:22:50 PM net.sf.farrago.db.FarragoDbSession rollbackImpl
INFO: rollback
Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException
SEVERE: Invalid definition for routine "APPLIB"."ADD_HOURS_TIMESTAMP"
Feb 25, 2011 3:22:50 PM net.sf.farrago.jdbc.FarragoJdbcUtil newSqlException
SEVERE: null

--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/0-9-2-to-0-9-3-Upgrade-tp6065470p6065912.html
Sent from the luciddb-users mailing list archive at Nabble.com.
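[Editor's sketch] The log above shows the validator rejecting a direct CAST from BIGINT to an interval type in the ADD_HOURS_TIMESTAMP routine body. A possible rewrite, assuming only standard SQL interval arithmetic (numeric * interval literal is allowed where a cast to INTERVAL is not) and not verified against this APPLIB version:

```sql
-- Rejected form, from the Trace.log:
--   "TS" + CAST(CAST("N" AS BIGINT) * 60 * 60 * 1000 AS INTERVAL DAY(10) TO HOUR)
-- Hypothetical alternative: multiply an interval literal by the numeric
-- hour count instead of casting a BIGINT millisecond value to INTERVAL.
"TS" + CAST("N" AS BIGINT) * INTERVAL '1' HOUR
```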
From: Jeremy L. <je...@vo...> - 2011-02-25 18:25:24

Closely following the http://pub.eigenbase.org/wiki/LucidDbUpgrade LucidDbUpgrade procedure, I attempted to upgrade from 0.9.2 to 0.9.3, but got the following error when executing install/catalog.sql:

502/1022 alter system set "deviceSchedulerType" = 'aioLinux';
No rows affected (0.024 seconds)
503/1022 -- $Id: //open/dev/luciddb/initsql/installApplib.sql#38 $
504/1022 create or replace schema localdb.applib;
Error: java.lang.NullPointerException: null (state=,code=0)
java.sql.SQLException: java.lang.NullPointerException: null
    at de.simplicit.vjdbc.util.SQLExceptionHelper.wrapSQLException(SQLExceptionHelper.java:47)
    at de.simplicit.vjdbc.util.SQLExceptionHelper.wrap(SQLExceptionHelper.java:28)
    at de.simplicit.vjdbc.server.command.CommandProcessor.process(CommandProcessor.java:166)
    at net.sf.farrago.server.FarragoServletCommandSink.handleRequest(FarragoServletCommandSink.java:125)
    at net.sf.farrago.server.FarragoServletCommandSink.doPost(FarragoServletCommandSink.java:91)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:530)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:427)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
    at org.eclipse.jetty.server.session.SessionHandler.handle(SessionHandler.java:182)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:933)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:362)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:867)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:113)
    at org.eclipse.jetty.server.Server.handle(Server.java:334)
    at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:559)
    at org.eclipse.jetty.server.HttpConnection$RequestHandler.content(HttpConnection.java:1007)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:747)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:209)
    at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:406)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:462)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
    at java.lang.Thread.run(Thread.java:636)
Aborting command set because "force" is false and command failed: "create or replace schema localdb.applib;"

I noticed a note in the procedure stating that there was an issue with the applib.jar when upgrading from 0.9.3 to 0.9.4, but nothing pertinent to this upgrade. Any thoughts?
From: kingfesen <mau...@gm...> - 2011-02-25 11:29:06

John Sichi wrote:
> http://pub.eigenbase.org/wiki/LucidDbTpch#LucidDB_Data_Load
>
> The script it is referring to is part of the LucidDB source
> distribution, under luciddb/test/sql/tpch.
>
> https://github.com/eigenbase/luciddb/tree/master/test/sql/tpch
>
> JVS

I hope this is my last problem message... this database is hostile to me (or maybe I'm just stupid). I've read the guide and downloaded the tpch.tar.gz file, but after running init.sql, create_table.sql and load_tables.sql I get this error:

1/9 insert into tpch.partsupp select * from tpch."partsupp";
Error: From line 1, column 25: Number of INSERT target columns (5) does not equal number of source items (6) (state=,code=0)

Where is my mistake? I've attached my script files so it is easier to spot my errors:

http://luciddb-users.1374590.n2.nabble.com/file/n6064176/init.sql init.sql
http://luciddb-users.1374590.n2.nabble.com/file/n6064176/create_tables.sql create_tables.sql
http://luciddb-users.1374590.n2.nabble.com/file/n6064176/load_tables.sql load_tables.sql

Thank you so much for your patience!
Regards
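[Editor's sketch] A likely cause of the 5-vs-6 mismatch above (an assumption, since the attached scripts are not reproduced here): TPC-H .tbl files terminate every row with a trailing '|', which a flat-file reader can parse as a phantom sixth column. Naming the columns explicitly instead of using select * sidesteps it; the column names below are the standard TPC-H PARTSUPP ones and may differ from the DDL in create_tables.sql.

```sql
-- Hypothetical fix: list the five real columns on both sides so the
-- empty sixth column produced by the trailing '|' delimiter is ignored.
insert into tpch.partsupp (ps_partkey, ps_suppkey, ps_availqty, ps_supplycost, ps_comment)
select ps_partkey, ps_suppkey, ps_availqty, ps_supplycost, ps_comment
from tpch."partsupp";
```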
From: John S. <js...@gm...> - 2011-02-24 21:02:30

Oh, export is a bash builtin, so I guess you can't sudo it. You can instead do

sudo su export | grep JAVA_HOME

and see if it shows anything. One way or another, you'll need to make sure JAVA_HOME is visible to that script.

JVS

On Thu, Feb 24, 2011 at 12:52 PM, Vishal Belsare <vis...@gm...> wrote:
> vishal@goedel:~$ sudo export $JAVA_HOME
> sudo: export: command not found
>
> Strange.
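[Editor's sketch] The underlying issue can be demonstrated without sudo at all. A variable that is set but not exported is invisible to child processes such as install.sh; note also that in `sudo echo $JAVA_HOME` the variable is expanded by the calling shell before sudo ever runs, so it can look set even when the root environment does not have it. Paths below are illustrative:

```shell
#!/bin/sh
unset JAVA_HOME                          # start clean for the demo

# Assigned but NOT exported: child processes cannot see it --
# which is exactly the condition install.sh checks for.
JAVA_HOME=/usr/lib/jvm/java-6-sun        # illustrative path
sh -c 'echo "child sees: [$JAVA_HOME]"'  # prints: child sees: []

# After export, child processes (and the scripts they run) see it.
export JAVA_HOME
sh -c 'echo "child sees: [$JAVA_HOME]"'  # prints: child sees: [/usr/lib/jvm/java-6-sun]

# sudo typically resets the environment; pass the variable through explicitly:
#   sudo env JAVA_HOME="$JAVA_HOME" ./install.sh
```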
From: Vishal B. <vis...@gm...> - 2011-02-24 20:53:19

vishal@goedel:~$ sudo export $JAVA_HOME
sudo: export: command not found

Strange.

On Fri, Feb 25, 2011 at 2:06 AM, <luc...@li...> wrote:
> What do you get back from this command?
>
> sudo export $JAVA_HOME
>
> So somehow JAVA_HOME is not visible inside the script, which usually
> means it is set but not exported.
>
> JVS
From: John S. <js...@gm...> - 2011-02-24 20:36:02

What do you get back from this command?

sudo export $JAVA_HOME

The install script does this:

if [ -z "$JAVA_HOME" ]; then
  echo "The JAVA_HOME environment variable must be set to the location"
  echo "of a version 1.6 or higher JVM."
  exit 1;
fi

So somehow JAVA_HOME is not visible inside the script, which usually means it is set but not exported.

JVS

On Thu, Feb 24, 2011 at 12:26 PM, Vishal Belsare <vis...@gm...> wrote:
> I am trying to install LucidDB on an Ubuntu machine. Untar'ing the
> archive, and trying to run the install script led to an error about
> the Java virtual machine being incorrect. ...
From: Vishal B. <vis...@gm...> - 2011-02-24 20:26:40

I am trying to install LucidDB on an Ubuntu machine. Untarring the archive and trying to run the install script led to an error about the Java virtual machine being incorrect. I was using the IcedTea OpenJDK instead of Sun's JRE.

Thinking that this might be the issue, I installed Sun's JRE, set it as the default JRE, and confirmed that with 'which java' and 'java -version'. Showed up fine. Edited /etc/environment to set JAVA_HOME to /usr/lib/jvm/java-6-sun and confirmed that with echo $JAVA_HOME, which shows correctly.

-----
vishal@goedel:/opt/luciddb-0.9.3/install$ sudo echo $JAVA_HOME
/usr/lib/jvm/java-6-sun

vishal@goedel:/opt/luciddb-0.9.3/install$ which java
/usr/bin/java

vishal@goedel:/opt/luciddb-0.9.3/install$ sudo java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
-----

However, when I try to run install.sh, I see the following message:
--
vishal@goedel:/opt/luciddb-0.9.3/install$ sudo ./install.sh
The JAVA_HOME environment variable must be set to the location
of a version 1.6 or higher JVM.
--

I'd appreciate suggestions to fix this. Thanks.

Best wishes,
Vishal Belsare
From: John S. <js...@gm...> - 2011-02-24 08:02:55

On Wed, Feb 23, 2011 at 9:51 AM, kingfesen <mau...@gm...> wrote:
> i've downloaded a full tpch.tar.gz ... but now i've not
> understand how load data (my file have .tbl extension for example
> nation.tbl) into a schema.

http://pub.eigenbase.org/wiki/LucidDbTpch#LucidDB_Data_Load

The script it is referring to is part of the LucidDB source distribution, under luciddb/test/sql/tpch.

https://github.com/eigenbase/luciddb/tree/master/test/sql/tpch

JVS
From: Nicholas G. <ngo...@dy...> - 2011-02-23 18:03:57

On Feb 23, 2011, at 9:51 AM, kingfesen wrote:
> i must to create a file.sql where there are
> istruction to load data (example insert into tpch.nation * from nation.tbl)
> or i must create a file wrapper if the second mode is correct how i create
> this??

It is the second - however, you're in luck. We run TPCH as a test in our suite, so we already have DDL/load scripts. You can find them in our source tree at //open/dev/luciddb/test/sql/tpch

You can get all the scripts from GitHub (a read-only replica) here:

https://github.com/eigenbase/luciddb/tree/master/test/sql/tpch

Hope that helps!
Nick
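[Editor's sketch] For readers without the source tree handy, a flat-file foreign server along the lines those scripts use can be sketched as follows. This assumes LucidDB's SYS_FILE_WRAPPER and its option names as described in the ETL tutorial; the directory path, server name, and the commented query are illustrative, not the exact contents of the test scripts.

```sql
-- Sketch: expose the *.tbl files as foreign tables, then bulk-load
-- them with INSERT ... SELECT (option values are illustrative).
create server tpch_file_server
foreign data wrapper sys_file_wrapper
options (
    directory '/home/user/tpch-data',   -- where nation.tbl etc. live
    file_extension 'tbl',
    field_delimiter '|',
    with_header 'no'
);

-- A load then looks roughly like (schema name is an assumption):
-- insert into tpch.nation select * from tpch_file_server.bcp."nation";
```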
From: kingfesen <mau...@gm...> - 2011-02-23 17:52:02

John Sichi wrote:
> LucidDB uses sqllineClient for running .sql files.
>
> You should work through the getting started and ETL guides first to
> get comfortable with it:
>
> http://pub.eigenbase.org/wiki/LucidDbGettingStarted
> http://pub.eigenbase.org/wiki/LucidDbEtlTutorial
>
> JVS

Sorry John, I have a problem again... I downloaded the full tpch.tar.gz from the link and ran create_table.sql (adding a line to create a schema) and create_index.sql via the command !run ~/create_table.sql. But now I don't understand how to load the data (my files have a .tbl extension, for example nation.tbl) into the schema. Must I create a .sql file with instructions to load the data (for example, insert into tpch.nation * from nation.tbl), or must I create a file wrapper? If the second way is correct, how do I create it?

Sorry again. Best regards,
Mauro
From: John S. <js...@gm...> - 2011-02-12 20:40:31

LucidDB uses sqllineClient for running .sql files.

You should work through the getting started and ETL guides first to get comfortable with it:

http://pub.eigenbase.org/wiki/LucidDbGettingStarted
http://pub.eigenbase.org/wiki/LucidDbEtlTutorial

JVS

On Sat, Feb 12, 2011 at 12:37 AM, kingfesen <mau...@gm...> wrote:
> Thanks for the guide, but I'm missing something.
> how to upload files tpch I already have and that I used for my
> dissertation?
>
> In luciddb this procedure is possible?? if it possible how? if it not
> possible how?
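[Editor's sketch] The rough LucidDB equivalent of the isql/mclient/mysql one-liners quoted in this thread is an interactive sqllineClient session using the !run command that appears elsewhere in the thread. The install path and prompt below are illustrative:

```
$ cd /opt/luciddb-0.9.3              # illustrative install location
$ ./bin/sqllineClient                # connect to the running server
0: jdbc:luciddb:> !run ~/createdatabase.sql
0: jdbc:luciddb:> !run ~/nation.sql
0: jdbc:luciddb:> !quit
```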
From: kingfesen <mau...@gm...> - 2011-02-12 08:37:35

John Sichi wrote:
> Use the instructions here:
>
> http://pub.eigenbase.org/wiki/LucidDbTpch
>
> JVS

Thanks for the guide, but I'm missing something: how do I load the tpch files I already have and used for my dissertation?

To explain better with an example: with the other databases I analyzed (monetdb, mysql and firebird), I used a single command to create and populate the database, and here I cannot understand how to create and populate the database with my data.

Examples:

firebird: isql -i ~/createdatabase.sql ~/database/tpch.fdb -u <username> -p <password>
          isql -i ~/nation.sql ~/database/tpch.fdb -u <username> -p <password>  (load time)

monetdb:  mclient -lsql --database=tpch < ~/createdatabase.sql
          mclient -lsql --database=tpch < ~/nation.sql

mysql:    mysql -u <username> -p tpch < ~/createdatabase.sql
          mysql -u <username> -p tpch < ~/nation.sql

createdatabase.sql contains the standard SQL statements to create the database tables; nation.sql contains the data to populate the table.

Is this procedure possible in luciddb? If it is possible, how? If not, what is the alternative?

Thanks and regards,
Mauro
From: John S. <js...@gm...> - 2011-02-12 06:34:45
|
Use the instructions here:

http://pub.eigenbase.org/wiki/LucidDbTpch

JVS

On Thu, Feb 10, 2011 at 6:30 AM, kingfesen <mau...@gm...> wrote:
> Hi all, i've a problem...I have to create and populate a database for testing
> tpch. At the moment i've just installed luciddb (it work's...)
>
> Now I have to create and populate the database by importing the data into
> files. sql; i'm running on linux machine without grafical interface, some
> can help me ho to load this using command line interface?
>
> I've also read the official wiki but I do not understand how I do....please
> can yuo help how can i do???
>
> regards!
|
From: kingfesen <mau...@gm...> - 2011-02-10 14:30:36
|
Hi all, I have a problem: I need to create and populate a database for testing TPC-H. At the moment I've just installed LucidDB (it works).

Now I have to create and populate the database by importing the data from .sql files. I'm running on a Linux machine without a graphical interface; can someone help me load these using the command-line interface?

I've also read the official wiki, but I don't understand how to do it. Please, can you help?

regards!
--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/load-sql-data-file-tp6011447p6011447.html
Sent from the luciddb-users mailing list archive at Nabble.com.
|
From: Nicholas G. <ngo...@dy...> - 2011-01-27 00:30:04
|
Splunk has generously offered to host our next San Francisco meetup:

250 Brannan
San Francisco, CA 94107

For details, and to RSVP, please go to meetup:
http://www.meetup.com/San-Francisco-Eigenbase-Developers/calendar/16200530/

Informal, mainly social networking and chit-chat around our favorite "data management framework" and column-store database. In addition to simply getting caught up, we'll do our usual "unconference"-style presentations: 10-15 minute talks, voted on at the start of the meeting. Potential topics are listed below; bring your own, and all is fair game for discussion.

a) Proposed Eigenbase migration from P4 to some set of GIT/SVN/...
b) DynamoNETWORK update (and *maybe* a demo)
c) Firewater update (nick + jvs)
d) Pentaho plugins (overview of integrations with Open Source BI tools)

Look forward to seeing you all there!

Nick

PS - They're starting to get to know me by first name on these Wednesday Seattle-SFO flights. :)
|
From: John S. <js...@gm...> - 2011-01-21 05:18:20
|
We don't currently have anything like this. It would not be too hard to add a global performance counter to show the number of rows loaded (across all tables).

However, I'm not sure what to do about the indexing phase, which is done entirely separately after the row-loading phase, and which can take a significant amount of time. So unless the table had no indexes at all, your progress bar would climb smoothly, and then stick for a long time while the indexing was being done. It's harder to come up with a simple "counter" for index update, since it involves sorting and bitmap merge, which aren't in terms of rows at all.

Thoughts on this?

JVS

On Thu, Jan 20, 2011 at 4:00 PM, Aris Setyawan <ari...@gm...> wrote:
> Hi,
>
> I'm new to LucidDB.
>
> Can I access undo log record to know processed record count in bulk
> loading with LucidDB?
>
> I need it to make a progress bar in "export-import dbf module" in my
> application. Currently, I use "show innodb status" in mysql Innodb,
> but the bulk loading and aggregate query is slow. I want to try
> LucidDB because of the dbf file imported to database have numerous
> column, same with the database table.
>
> -Aris
|
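[Editor's note: on the client side, one way to live with the two-phase behavior John describes is to reserve a fixed slice of the bar for the index-build phase, so the bar never sits at 100% while indexes are still being built. A minimal sketch -- hypothetical, since LucidDB exposes no such rows-loaded counter today, as noted above, and the 20% reserved for indexing is an arbitrary guess rather than a measurement:

```python
def progress_percent(rows_loaded, rows_total, index_fraction=0.2):
    """Map a rows-loaded counter onto a progress bar, reserving the
    final index_fraction of the bar for the separate index-build phase."""
    if rows_total <= 0:
        return 0.0
    load_share = 1.0 - index_fraction          # bar range used by row loading
    frac = min(rows_loaded / rows_total, 1.0)  # clamp in case of overshoot
    return 100.0 * load_share * frac

# Halfway through loading 1000 rows -> 40% of the overall bar.
print(progress_percent(500, 1000))
```

When the load finishes, the bar sits at 80% until the (uncounted) indexing phase completes, instead of falsely reporting 100%.]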
From: Aris S. <ari...@gm...> - 2011-01-21 00:00:39
|
Hi,

I'm new to LucidDB.

Can I access the undo log to know the processed-record count during a bulk load with LucidDB?

I need it to make a progress bar in the "export-import dbf module" in my application. Currently I use "show innodb status" in MySQL InnoDB, but the bulk loading and aggregate queries are slow. I want to try LucidDB because the dbf file being imported into the database has numerous columns, as does the database table.

-Aris
|
From: John S. <js...@gm...> - 2011-01-20 23:43:05
|
It's useful for benchmarking.

JVS

On Thu, Jan 20, 2011 at 3:13 PM, Nicholas Goodman <ngo...@dy...> wrote:
> I'm curious what your thought process is on this. I'm trying to determine
> what the benefit of doing this would be. I can only think that you're trying
> to work around a bug but I don't know of any open issues in this regard.
>
> Nick
|
From: Nicholas G. <ngo...@dy...> - 2011-01-20 23:37:00
|
I'm curious what your thought process is on this. I'm trying to determine what the benefit of doing this would be. I can only think that you're trying to work around a bug, but I don't know of any open issues in this regard.

Nick

On Jan 20, 2011, at 2:41 PM, Michael <nan...@ya...> wrote:
> Hi all,
>
> Is there someway we could clear the buffers in LucidDB?
>
> Thanks,
> Mike
|
From: John S. <js...@gm...> - 2011-01-20 23:01:10
|
The guaranteed way is to restart the server. LucidDB uses direct I/O, so nothing is cached by the OS (although of course lower-level caching such as a disk controller can always be present).

If you want to avoid restarting the server, you can change the system parameter "cachePagesInit" to a low number (like 20) and then set it back to its original setting. However, this is dangerous, since if you go too low, you can end up with an unusable system. And for any non-zero value, those last few buffers (e.g. 20*32K) won't be discarded. So bouncing the server is a lot safer and guaranteed.

http://pub.eigenbase.org/wiki/LucidDbBufferPoolSizing

JVS

On Thu, Jan 20, 2011 at 2:41 PM, Michael <nan...@ya...> wrote:
> Hi all,
>
> Is there someway we could clear the buffers in LucidDB?
>
> Thanks,
> Mike
|
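[Editor's note: the parameter bounce described above can be scripted. A sketch, assuming ALTER SYSTEM syntax as documented on the linked buffer-pool page; the original value of 5000 pages is illustrative -- check your actual setting before shrinking it:

```sql
-- Shrink the cache so most cached pages are discarded
-- (dangerous if set too low; see the warning above).
alter system set "cachePagesInit" = 20;

-- Restore the original setting (5000 here is illustrative).
alter system set "cachePagesInit" = 5000;
```

Note John's caveat still applies: the remaining 20 pages are not discarded, so only a server restart guarantees a fully cold cache.]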
From: Michael <nan...@ya...> - 2011-01-20 22:41:21
|
Hi all,

Is there some way we could clear the buffers in LucidDB?

Thanks,
Mike
--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Clear-buffers-tp5945721p5945721.html
Sent from the luciddb-users mailing list archive at Nabble.com.
|
From: Nicholas G. <ngo...@dy...> - 2011-01-05 18:34:44
|
On Jan 5, 2011, at 2:44 AM, lynn_19840516 wrote:
> Firstly, I'd like to thank LucidDB's developers! The performance is
> mind-blowing, especially coming from a conventional RDBMS.

We're glad you like it! When you get to the end of your initial project, would you be willing to share some of these "mind-blowing" stats compared to your original RDBMS? We're trying to collect this information so potential users can see what kind of "real world" improvements to expect.

> 0: jdbc:luciddb:http://localhost> select * from bills;
> +----------+-------+--------+--------------+------------+---------+------------+
> | bill_id  | type  | state  | from_entity  | to_entity  | holder  | approver
>
> 0: jdbc:luciddb:http://localhost> SELECT * from bills where bill_id =1;
> Error: From line 1, column 27 to line 1, column 33: Column 'BILL_ID' not
> found in any table (state=,code=0)

The ANSI standard dictates that any unquoted identifier (i.e., bill_id in your case) is UPPERCASED and then evaluated. Postgres, which respects case, has the column defined lower-case (i.e., bill_id). You can change your query to:

select * from bills where "bill_id" = 1;

which will prevent LucidDB from uppercasing the identifier.

I'd recommend creating any tables you create in LucidDB with uppercase identifiers. For foreign data sources, such as Postgres, the case will come from the remote database, so you'll just have to deal with whatever case the database had originally.

Good luck, and let us know how you get on!

Nick

PS - You can also use this FAQ for some other common issues you may (or may not) encounter:
http://pub.eigenbase.org/wiki/LucidDbUserFaq#Missing_Columns
|
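[Editor's note: the folding rule can be seen side by side; this sketch reuses the table and column names from the example above:

```sql
-- Unquoted identifiers are folded to upper case before lookup, so this
-- searches for BILL_ID and fails against the lower-case Postgres column:
SELECT * FROM bills WHERE bill_id = 1;

-- Quoting the column preserves its case, so this matches the
-- lower-case column as created by Postgres:
SELECT * FROM bills WHERE "bill_id" = 1;
```
]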
From: John S. <js...@gm...> - 2011-01-03 21:05:01
|
I had forgotten about the client memory setting issue...I've logged a bug, since we should really fix this (and in general make it easier to configure the memory settings without having to edit scripts directly).

http://issues.eigenbase.org/browse/LDB-234

JVS

On Thu, Dec 30, 2010 at 6:51 PM, Jeremy Lemaire <je...@vo...> wrote:
> System Parameters
> http://luciddb-users.1374590.n2.nabble.com/file/n5877674/system_parameters.txt
> system_parameters.txt
>
> After seeing the jstack I am also leaning towards a problem with exhausted
> free memory, as opposed to my original concern that it was deadlock. Because
> of this I have requested more RAM for this machine and also held off on
> submitting anything to JIRA. If you disagree, let me know and I will submit
> the details we have discussed.
>
> As for the buffer pool, early on I tried several different settings. 4G for
> Java heap and 6G for the buffer pool seemed to work best at the time. My
> theory for not making the min and max heap both 4G was that I would not be
> able to run more than one instance of sqllineClient. Given that it is only
> a 16G system, and that lucidDbServer and sqllineClient share Java heap
> settings as defined in the defineFarragoRuntime.sh script, it seemed better
> to allow those clients that do not require 4G to use as little as 512M and
> grow dynamically. However, running as many as 5 (memory-hungry) instances of
> sqllineClient simultaneously, each of which is capable of consuming a max
> of 4G of RAM, I can see how memory could quickly become an issue on a 16G
> system. My understanding of Java heap, however, is that the app will just
> chew up swap once it runs out of free memory, which could be why it appears
> to hang. Maybe it is not hanging at all, but instead just swapping like
> crazy and going sloooow. Seemingly this would explain the analyze statements
> not completing, but could it go slow enough not to service the socket
> connections properly? I don't recall excessive swap, but I will be sure to
> check if this happens again.
>
> For now I have made a change to do all inserts in parallel and all analyzes
> with ESTIMATE (not COMPUTE) serially, and this appears to have worked around
> the problem. Going forward I will try to get this going on a 32G machine
> with version 0.9.3. Also, within the next couple of months I should have a
> Hadoop cluster in place to offload some of the computation and storage that
> LucidDB is needlessly doing now, and allow it to focus on OLAP jobs. I think
> these changes will make my LucidDB setup much happier.
>
> Let me know if there is any other information you would like, and if you
> think a JIRA entry is still warranted.
|
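[Editor's note: until LDB-234 makes this configurable, the workaround discussed here is to edit the JVM flags in defineFarragoRuntime.sh directly. A sketch -- the variable name and values below are illustrative, not the script's actual contents; -Xms/-Xmx are standard JVM options:

```
# Illustrative edit to defineFarragoRuntime.sh: pin the server's heap
# (-Xms equal to -Xmx) so its footprint is predictable, instead of a
# small initial heap that grows under load and competes with the
# buffer pool and client heaps for the same 16G of RAM.
JAVA_MEM_ARGS="-Xms4g -Xmx4g"
```
]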
From: John S. <js...@gm...> - 2011-01-01 06:38:22
|
On Thu, Dec 30, 2010 at 7:07 PM, Jeremy Lemaire <je...@vo...> wrote:
> With the new year fast approaching I am trying to update my views to include
> the new 2011 partitions, but when I try to drop the existing views I get an
> error indicating that I need to cascade the delete. The problem is that I
> have no idea what views, tables, etc. this cascade is going to affect.
>
> Is there a dry run setting or something similar that will show me what will
> be removed if I cascade this delete?

We don't yet have a DBA_DEPENDENCIES view, and the DROP command does not report the dependencies. However, we do have a secret weapon: a not-very-well-documented metadata query language called LURQL. We currently only use it internally, but there's a test UDX which allows you to access it.

First, run these commands to register the UDX:

create schema md;
set schema 'md';
set path 'md';

create function lurql(
    server_name varchar(128),
    query varchar(65535)
)
returns table(
    class_name varchar(128),
    obj_name varchar(128),
    mof_id varchar(128),
    obj_attrs varchar(65535)
)
language java
parameter style system defined java
no sql
external name 'class net.sf.farrago.test.LurqlQueryUdx.queryMedMdr';

Then, execute this to see the first level of dependencies:

select class_name, obj_name from table(lurql(cast(null as varchar(128)),
'select c from class LocalView where name=''YOUR_VIEW_NAME''
then (follow origin end supplier
then (follow destination end client as c));'
));

(Note that YOUR_VIEW_NAME is surrounded by pairs of single quotes, not double quotes.)

To see all of the cascaded dependencies recursively, execute this:

select class_name, obj_name from table(lurql(cast(null as varchar(128)),
'select c from class LocalView where name=''YOUR_VIEW_NAME''
then (recursively (follow origin end supplier
then (follow destination end client as c)));'
));

The results should look like this:

+-------------+-----------+
| CLASS_NAME  | OBJ_NAME  |
+-------------+-----------+
| LocalView   | V3        |
| LocalView   | V2        |
+-------------+-----------+

These assume that your view name is unique across schemas; if that's not the case, I can give you a longer query which deals with name qualification.

JVS
|
From: Jeremy L. <je...@vo...> - 2010-12-31 03:07:55
|
With the new year fast approaching, I am trying to update my views to include the new 2011 partitions, but when I try to drop the existing views I get an error indicating that I need to cascade the delete. The problem is that I have no idea what views, tables, etc. this cascade is going to affect.

Is there a dry-run setting or something similar that will show me what will be removed if I cascade this delete?
--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/drop-view-cascade-tp5877689p5877689.html
Sent from the luciddb-users mailing list archive at Nabble.com.
|