libopendbx-devel Mailing List for OpenDBX database access library (Page 5)
From: Alain R. <al...@fr...> - 2011-08-15 10:19:03
Hi,

Just to let you know that I finally solved this problem by adding LDFLAGS = -lintl to the sqlite3 backend makefile of OpenDBX. I think a better way would be to add it somewhere in the configure script.

Cheers
Alain

"Alain Rastoul" <al...@fr...> wrote in message news: j285sb$mf0$1...@do...:
> Did you solve your compile problem (undefined reference to libintl_dgettext)?
> I have the same problem trying to build the sqlite3 backend.
From: Alain R. <al...@fr...> - 2011-08-14 09:52:41
Hi Guillermo

Did you solve your compile problem (undefined reference to libintl_dgettext)? I have the same problem trying to build the sqlite3 backend.

TIA
Alain
From: Guillermo P. <gui...@gm...> - 2011-07-16 22:36:58
Hi! Sorry for the delay. I installed the dll with the patch and it didn't work :(.

On the other hand, I tested the failing query with the tsql util from FreeTDS, and it worked:

1> select * from ((select t1.c1, null t3 from test t1 where t1.c1 is not null) UNION ALL (select NULL t3, t2.c2 from test t2 where t2.c2 is not null)) tt
2> go
c1          t3
1           NULL
NULL        3
(2 rows affected)

So I assume that the problem is not in FreeTDS :(. For now, I'll try to change the code to avoid that kind of query...

Thanks!
Guille
From: Mariano M. P. <mar...@gm...> - 2011-07-16 19:20:47
On Fri, Jul 15, 2011 at 2:35 PM, Norbert Sendetzky <no...@li...> wrote:
> Any news on this? Did the patch fix the problem?

We are working on it :) We will let you know.
Thanks

--
Mariano
http://marianopeck.wordpress.com
From: Guillermo P. <gui...@gm...> - 2011-07-15 12:41:23
What puzzles me is that the insert is successful and I can execute queries from SQL Server Management Studio and they work. I'll try inserting a value like that.

Thanks!

On Fri, Jul 15, 2011 at 9:35 AM, Norbert Sendetzky <no...@li...> wrote:
> No, not that I'm aware of, but your value is wrong. Can you try
> '2010-10-10 00:00:00'?
From: Norbert S. <no...@li...> - 2011-07-15 12:35:53
Hi Guille

> gives as result from sql server manager:
>
> 1 null
> null 3
>
> Digging into my code, the problem seems to be the 'odbx_field_value' does
> not give me the right result..

Any news on this? Did the patch fix the problem?

Norbert
From: Norbert S. <no...@li...> - 2011-07-15 12:35:45
Hi Guille

> Create table test4 (fechia datetime)
> insert into test4 values(''10-10-10'')
>
> select * from test4 -> [Microsoft][ODBC SQL Server Driver]String data,
> right truncation
>
> Is it a known issue?

No, not that I'm aware of, but your value is wrong. Can you try
'2010-10-10 00:00:00'?

Norbert
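A minimal sketch of the corrected insert as it would look through the OpenDBX C API, assuming an odbx_t* handle already set up via odbx_init()/odbx_bind(); the table and column names follow the example above:

#include <odbx.h>
#include <string.h>

/* Insert the datetime using the full ISO format suggested above.
 * `handle` is assumed to be an already bound odbx_t* connection. */
int insert_sample_date(odbx_t *handle)
{
    const char *stmt =
        "insert into test4 (fechia) values ('2010-10-10 00:00:00')";

    /* odbx_query() takes the statement and its length in bytes;
     * a negative return value indicates an error. */
    return odbx_query(handle, stmt, strlen(stmt));
}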
From: Guillermo P. <gui...@gm...> - 2011-07-15 12:30:26
Hi Norbert!

I was testing the ODBC backend and I'm getting lots of string truncation errors on select statements :(. I was able to build a simple reproducible case doing a select from a datetime column:

Create table test4 (fechia datetime)
insert into test4 values(''10-10-10'')

select * from test4 -> [Microsoft][ODBC SQL Server Driver]String data, right truncation

Is it a known issue? I'm using the ODBC backend dll I downloaded from the site. I'll try building one myself and post my results.

Thanks!
Guille
From: Norbert S. <no...@li...> - 2011-07-13 04:48:11
Hi Guille

> I was doing some tests with mssql from windows using tds and odbc,
> and I found a bug very similar to what happened not long ago with sqlite.
>
> Executing the following queries:
>
> Create table test (c1 int NULL, c2 int NULL)
> insert into test values(1, NULL)
> insert into test values(NULL, 3)
> select * from ((select t1.c1, null t3 from test t1 where t1.c1 is not null)
> UNION ALL (select NULL t3, t2.c2 from test t2 where t2.c2 is not null))
>
> gives as result from sql server manager:
>
> 1 null
> null 3
>
> results from opendbx + freetds on windows:
>
> 1 null
> null *null*
>
> using freetds I'm getting a null where a 3 was expected...
>
> Digging into my code, the problem seems to be the 'odbx_field_value' does
> not give me the right result..
>
> Can you confirm if I'm doing something wrong or not?

I don't think you are doing something wrong. Could you please apply the attached patch to the OpenDBX source code (it's against OpenDBX 1.5 but that shouldn't matter much) and rebuild the mssql backend? The dblib documentation states to use dblen() to check if it's really a NULL value. If this doesn't help, it may be a bug in FreeTDS.

Norbert
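The patch itself is not reproduced in the archive; the following is only a rough sketch of the kind of NULL check described here, assuming FreeTDS' dblib API (dbdata()/dbdatlen()) as used by the mssql backend, not the actual patch:

#include <sybfront.h>
#include <sybdb.h>

/* Fetch the value of a (1-based) column from the current row.
 * Checking only the data pointer is not always reliable, so the
 * reported length is consulted as well; for this sketch a zero
 * length is treated as a SQL NULL too. */
static const BYTE *field_or_null(DBPROCESS *dbproc, int column)
{
    const BYTE *data = dbdata(dbproc, column);
    DBINT length = dbdatlen(dbproc, column);

    if (data == NULL || length <= 0) {
        return NULL;    /* SQL NULL (or nothing to return) */
    }
    return data;
}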
From: Guillermo P. <gui...@gm...> - 2011-07-13 02:35:15
Hi Norbert!

I was doing some tests with mssql from windows using tds and odbc, and I found a bug very similar to what happened not long ago with sqlite.

Executing the following queries:

Create table test (c1 int NULL, c2 int NULL)
insert into test values(1, NULL)
insert into test values(NULL, 3)
select * from ((select t1.c1, null t3 from test t1 where t1.c1 is not null) UNION ALL (select NULL t3, t2.c2 from test t2 where t2.c2 is not null))

gives as result from sql server manager:

1 null
null 3

results from opendbx + freetds on windows:

1 null
null *null*

results from opendbx + odbc on windows:

1 null
null 3

Using freetds I'm getting a null where a 3 was expected...

Digging into my code, the problem seems to be that 'odbx_field_value' does not give me the right result..

Can you confirm if I'm doing something wrong or not?

Thanks in advance!
Guille
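For context, this is roughly how such a result is consumed on the application side; a minimal sketch of the usual OpenDBX fetch loop (error and timeout handling simplified, handle assumed to be bound already via odbx_init()/odbx_bind()), where a NULL pointer from odbx_field_value() is what represents a SQL NULL:

#include <odbx.h>
#include <stdio.h>
#include <string.h>

/* Execute the UNION ALL query from above and print every field. */
void dump_union_query(odbx_t *handle)
{
    const char *stmt =
        "select * from ((select t1.c1, null t3 from test t1 where t1.c1 is not null) "
        "UNION ALL (select NULL t3, t2.c2 from test t2 where t2.c2 is not null)) tt";
    odbx_result_t *result = NULL;

    if (odbx_query(handle, stmt, strlen(stmt)) < 0) {
        return;
    }

    /* Passing a NULL timeout blocks until the result is available. */
    while (odbx_result(handle, &result, NULL, 0) > 0) {
        while (odbx_row_fetch(result) > 0) {
            unsigned long col, cols = odbx_column_count(result);

            for (col = 0; col < cols; col++) {
                const char *value = odbx_field_value(result, col);
                printf("%s\t", value != NULL ? value : "NULL");
            }
            printf("\n");
        }
        odbx_result_finish(result);
    }
}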
From: Guillermo P. <gui...@gm...> - 2011-07-12 00:01:20
Ok, I tried all that without success :S. But I tried for the second time to use the compiled dlls from the site (I dunno why the first time it did not work), pasting them into Windows\system32, and it worked, so I'm freezing the OpenDBX build for now.

Thanks for all your help, and sorry for the spam :).

Guille
From: Mariano M. P. <mar...@gm...> - 2011-07-11 10:42:19
Guille, I think that a possible solution can be to edit the variable $PATH and add C:\MinGW\lib\ to it, or wherever you have the lib folder of MinGW.

In your $PATH I can see you don't put :/mingw/lib

Guille's $PATH:

.:/usr/local/bin:/mingw/bin:/bin:/usr/local/bin:/usr/bin:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/Archivos de programa/QuickTime/QTSystem/:/c/WINDOWS/system32/WindowsPowerShell/v1.0:/c/Archivos de programa/Microsoft SQL Server/90/Tools/binn/:/mingw/bin/

But I do:

$ echo $PATH
.:/usr/local/bin:/mingw/bin:/bin:/c/oraclexe/app/oracle/product/10.2.0/server/bin:/c/Sybase/OCS-15_0/dll:/c/Sybase/ASE-15_0/jobscheduler/bin:/c/Sybase/ASE-15_0/dll:/c/Sybase/ASE-15_0/bin:/c/Sybase/DBISQL/bin:/c/Sybase/DataAccess/ADONET/dll:/c/Sybase/DataAccess/ODBC/dll:/c/Sybase/DataAccess/OLEDB/dll:/c/Sybase/UAF-2_5/bin:/c/Sybase/OCS-15_0/lib3p:/c/Sybase/OCS-15_0/dll:/c/Sybase/OCS-15_0/bin:/c/SQLServer:/c/mariano/oracle/instantclient_11_1/:/mingw/bin:/mingw/lib:/lib/:/usr/bin:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/Archivos de programa/TortoiseSVN/bin:/c/PostgreSQL/8.3/bin/:/c/Archivos de programa/Microsoft SQL Server/90/Tools/binn/:/c/XEClient/bin:/c/MySQL/bin:/c/Archivos de programa/CMake 2.8/bin:/c/Archivos de programa/Git/cmd:/c/Archivos de programa/Git/bin:/c/Archivos de programa/Cincom/ObjectStudio/dllw32:./dllw32

In addition, maybe we can do something like --disable-nls during ./configure?

Tell me if it helped.

--
Mariano
http://marianopeck.wordpress.com
From: Zhao T. <zha...@gm...> - 2011-07-11 02:38:21
I think this is not very good; it runs contrary to the idea of the opendbx project. It would be convenient, but then users would also need to call functions from the native database library directly. Let me explain my reasons for using odbx: I may use several backends (mysql, pgsql, oracle), and I chose opendbx in order to reduce that workload. So I think opendbx's advantage is its broad backend support, rather than calling the native library, and I suggest adding support for more data types later on.

2011/7/7 Norbert Sendetzky <no...@li...>:
> On 07/05/2011 12:08 PM, Zhao Tongyi wrote:
>> the previous patch misses a break in pgsql_odbx_get_option. sorry
>>
>> 2011/7/5 Zhao Tongyi <zha...@gm...>:
>>> I need to insert "BYTEA" type data into the PGSQL database, but
>>> found that OPENDBX's pgsql_odbx_escapt function calls PGEscaptSring, so
>>> I added this patch against opendbx-1.4.5; use odbx_set_option to choose
>>> between PGEscaptString and PGEscaptBytea.
>>> In addition, the Opendbx project has not been updated for a long time.
>
> I've thought some time about your implementation and requirements and
> I'm not sure it's the right solution for the problem. If I understand
> correctly, only binary values have to be escaped using the
> PQescapeByteaConn/PQescapeBytea function before they can be inserted into a
> SQL string. This means that this function doesn't work for string
> values and, depending on your column, you have to switch between the
> PQescapeByteaConn and PQescapeStringConn functions. Therefore, using a
> connection option to switch between those functions doesn't seem to be
> a good solution.
>
> Furthermore, I was wondering if we can integrate such special handling
> for a specific backend at all, or if it's too special. A solution would
> then be to provide a function that returns the bare connection handle of
> the native database library. This could then be used to call the native
> library functions directly, but it has to be used with care and doing this
> isn't portable.
>
> What do you think?
>
> Norbert

--
Best regards,
Tongyi Zhao
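To illustrate the point about the two escaping functions having different shapes (this is plain libpq, not OpenDBX code): PQescapeStringConn() writes into a caller-supplied buffer, while PQescapeByteaConn() allocates its result, which later has to be freed with PQfreemem(). A minimal sketch:

#include <libpq-fe.h>
#include <string.h>

/* Escape a text value: the caller must provide a buffer of at least
 * 2 * strlen(in) + 1 bytes; the number of bytes written is returned. */
size_t escape_text(PGconn *conn, char *out, const char *in)
{
    int error = 0;
    return PQescapeStringConn(conn, out, in, strlen(in), &error);
}

/* Escape a bytea value: libpq allocates the escaped string, and the
 * caller has to release it with PQfreemem() when done. */
unsigned char *escape_bytea(PGconn *conn, const unsigned char *in,
                            size_t inlen, size_t *outlen)
{
    return PQescapeByteaConn(conn, in, inlen, outlen);
}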
From: Guillermo P. <gui...@gm...> - 2011-07-09 20:42:00
Yeap. I'm now trying to use Cygwin instead. I'll tell you if I succeed :).

Thanks!

On Sat, Jul 9, 2011 at 5:39 PM, Norbert Sendetzky <no...@li...> wrote:
> Yes, I think so. Did you use the documentation as a reference?
From: Norbert S. <no...@li...> - 2011-07-09 20:39:11
Hi Guille

> Do you think it's a problem with my MinGW installation? I've also installed
> the package from http://gnuwin32.sourceforge.net/packages/gettext.htm to see
> if it makes some difference.

Yes, I think so. Did you use the documentation as a reference?

http://linuxnetworks.de/doc/index.php/OpenDBX/Setup/Windows/Building_with_MinGW

Norbert
From: Guillermo P. <gui...@gm...> - 2011-07-09 20:03:28
Hi Norbert!

On Thu, Jul 7, 2011 at 4:42 AM, Norbert Sendetzky <no...@li...> wrote:
> Hi Guille
>
> > I'm having problems setting up an opendbx-windows environment with mssql,
> > using the latest MinGW and MSys. I've already been able to compile and put
> > freetds to work, and I'm now having problems compiling opendbx :).
>
> If you want to connect to an MS SQL server in a windows environment,
> please prefer the OpenDBX odbc backend :-)

Actually, I'm one of the maintainers of the DBXTalk project (previously known as SqueakDBX) and it would be wonderful to compile it both ways :).

> > I run the configure this way:
> >
> > CPPFLAGS="-I/c/MinGW/msys/1.0/local/freetds/include"
> > LDFLAGS="-L/c/MinGW/msys/1.0/local/freetds/lib" ./configure --disable-utils
> > --with-backends="mssql"
> >
> > When I run the make, I have some errors like the following:
> >
> > undefined reference to `libintl_snprintf'
> >
> > And in the config.log file I see the next lines:
> >
> > configure:18866: gcc -std=gnu99 -o conftest.exe -g -O2
> > -I/c/MinGW/msys/1.0/local/freetds/include
> > -L/c/MinGW/msys/1.0/local/freetds/lib conftest.c >&5
> > conftest.c:72:6: warning: conflicting types for built-in function 'snprintf'
>
> Can you send me the make output including the output of STDERR? Based on
> your sent make.log I would assume everything is OK.

The standard error output is the following:

.libs/libmssqlbackend_la-mssql_basic.o: In function `mssql_odbx_row_fetch':
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:502: undefined reference to `libintl_snprintf'
.libs/libmssqlbackend_la-mssql_basic.o: In function `mssql_odbx_bind':
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:139: undefined reference to `libintl_dgettext'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:139: undefined reference to `libintl_dgettext'
.libs/libmssqlbackend_la-mssql_basic.o: In function `mssql_err_handler':
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:816: undefined reference to `libintl_snprintf'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:818: undefined reference to `libintl_snprintf'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:809: undefined reference to `libintl_fprintf'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:810: undefined reference to `libintl_fprintf'
.libs/libmssqlbackend_la-mssql_basic.o: In function `mssql_msg_handler':
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:848: undefined reference to `libintl_snprintf'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:842: undefined reference to `libintl_fprintf'
C:\MinGW\msys\1.0\home\Administrador\libopendbx-1.5.0\backends\mssql/mssql_basic.c:850: undefined reference to `libintl_snprintf'
collect2: ld returned 1 exit status
make[3]: *** [libmssqlbackend.la] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

Do you think it's a problem with my MinGW installation? I've also installed the package from http://gnuwin32.sourceforge.net/packages/gettext.htm to see if it makes some difference.

Thanks in advance!
Guille
From: Diogenes M. <dio...@gm...> - 2011-07-08 20:18:27
well, the pragma can't be sending via glorp.. you must request the db connection to glorp and send "normal" sql with the pragmas. then remember, in each time, when you commit, glorp must be rearrange the cache.. that is very expensive in system with a lot objects.. is highly recommended don't use a NonCachePolicy.. in very large insert proceses.. the commit unit work is not the same than db commit.. Well, but this topic is for other list :).. Best, PD: Great Holidays. and Turn Off the computer ;). On Fri, Jul 8, 2011 at 4:58 PM, Alain Rastoul <al...@fr...> wrote: > ** > Hi diogenes, > > I didn't find yet where to change the pragma for sqlite via glorp, I > didn't have time until now and I'm on holidays for two weeks and I won't use > my pc. > however I found that when I do one transaction for each insert it is very > slow, and one transaction for all (1000) inserts it is very fast (less than > 1sec). > Here you'll find my code (unfinished but it's first try). > Beware that in the changeset there is my SQLitePlatform and SQLiteSequence > code so if you have done some work on it load only the Customer and > TpccGlorDescriptor classes. > Thoses classes (SQLite) are a slightly modified version of a copy of > another platform. > It does'nt retrieve row id yet. > > Cheers > > Alain > > "Diogenes Moreira" <dio...@gm...> a écrit dans le message de > news: CAL...@ma... > ... > hi alan. > > i dont know if sqlite implemetation has a problem.. but in glorp some times > do unnecesary read.. by example when you making "N times" > CommitAndContinue.. and because it, you see a lot time spending in > InputEventPollingFetcher>>wait. > if you send to me or send to SqueakDbx list you glorp code(ST), may be, I > can send to you some advices. or may be, we can find the delay source. > > Best. > > pd: please check in you sqlite the pragmas.. by example synchronous ( > http://www.sqlite.org/pragma.html#pragma_synchronous) that increment the > performance to much. > > On Tue, Jul 5, 2011 at 6:46 PM, Alain Rastoul < > alr...@pu... <al...@fr...>> wrote: > >> ** >> Hi Mariano, >> I don't want to do multi threading with sqlite because I know it doesn't >> work. >> I was curious about the squeakdbx (or opendbx architecture) because of the >> not so good performance and the time spent in waiting , I do not >> understand the squeakdbx package vs opendbx package: the doc is mentioning a >> squeakdbx plugin dll but I have no squeakdbx dll ? >> >> You are saying that in that case the external call is counted on the >> InputEventPollingFetcher>> wait and not in primitives (?). >> I will investigate with FFI/SQlite and it should be the same (I've seen >> some messages about incorrect profiling reports in primitives), >> >> I expected much better performance with sqlite , and glorp is very good >> (5% of the time), I would have expected the contrary. >> >> Thanks >> >> Cheers >> Alain >> >> "Mariano Martinez Peck" <mar...@gm...> a écrit dans le message >> de >> news:CAA+-=mVV...@ma...<news:CAA+-=mVV3zvP...@pu...> >> ... >> >> >> On Tue, Jul 5, 2011 at 10:50 PM, Alain Rastoul <al...@fr...> wrote: >> >>> Hi, >>> (sorry for sending this mail again, my pc was off for a long time and the >>> message was dated from 2007, people who sort their messages would not see >>> it) >>> >>> I've done a small program in Pharo 1.3 with glorp+opendbx that insert >>> 1000 >>> rows in a customer table in a sqlite db. 
>>> The 1000 insert takes 140 sec (very slow), but the Pharo profiler says >>> that >>> it spend 95% >>> of the time waiting for input. >>> (in InputEventPollingFetcher>> waitForInput) >>> I was wondering if the queries are executed in another thread than the vm >>> thread ? >>> >> >> Hi Alain. No. Squeak/Pharo's thread architecture is the so called green >> thread, that is, only ONE OS thread is used. Internally, the language >> reifies Process, Scheduler, #fork: , etc etc etc. But from the OS point of >> view there is only one thread for the VM. So.....the regular FFI blocks the >> VM. What does it mean? that while the C function called by FFI is being >> executed, the WHOLE VM is block. Notihgn can happen at the same time. >> Imagine the function that retrieves the results and needs to wait for >> them.....TERRIBLE. So...if the backend does not support async quieries, then >> you are screw and dbx may be slow in Pharo. Nothing to do. >> >> However, some backends support async queries, and opendbx let us configure >> this. This is explained in: >> >> http://www.squeakdbx.org/Architecture%20and%20desing?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&18 >> where it says "External call implementation" >> >> You can see the list of backends that support async queries in here: >> >> http://www.squeakdbx.org/documentation/Asynchronous%20queries?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&17 >> >> Notice that there is some room for improvements, but we didn't have time >> so far. Hernik told us some good ideas. But since we didn't need more power >> so far we couldn't find time to integrate his ideas. I am forwarding now the >> emails to the mailing list. If you can take a look and provide code, it >> would be awesome. Basically, it improves how and how much we wait in each >> side: image and opendbx. >> >> Finally, notice that Eliot is working in a multithreared FFI for Cog, but >> it is not yet available as far as I know. >> >> Cheers >> >> Mariano >> >> (I thought I've seen a document about opendbx architecture but could'nt >>> find >>> it on the site). >>> >>> TIA >>> Alain >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> All of the data generated in your IT infrastructure is seriously >>> valuable. >>> Why? It contains a definitive record of application performance, security >>> threats, fraudulent activity, and more. Splunk takes this data and makes >>> sense of it. IT sense. And common sense. >>> http://p.sf.net/sfu/splunk-d2d-c2 >>> _______________________________________________ >>> libopendbx-devel mailing list >>> >>> libopendbx-...@pu... >>> >>> https://lists.sourceforge.net/lists/listinfo/libopendbx-devel >>> http://www.linuxnetworks.de/doc/index.php/OpenDBX >>> >> >> >> >> -- >> Mariano >> http://marianopeck.wordpress.com >> >> ------------------------------ >> >> >> ------------------------------------------------------------------------------ >> All of the data generated in your IT infrastructure is seriously valuable. >> Why? It contains a definitive record of application performance, security >> threats, fraudulent activity, and more. Splunk takes this data and makes >> sense of it. IT sense. And common sense. >> http://p.sf.net/sfu/splunk-d2d-c2 >> >> ------------------------------ >> >> >> >> ------------------------------------------------------------------------------ >> All of the data generated in your IT infrastructure is seriously valuable. >> Why? 
It contains a definitive record of application performance, security >> threats, fraudulent activity, and more. Splunk takes this data and makes >> sense of it. IT sense. And common sense. >> http://p.sf.net/sfu/splunk-d2d-c2 >> _______________________________________________ >> libopendbx-devel mailing list >> lib...@li...<lib...@pu...> >> https://lists.sourceforge.net/lists/listinfo/libopendbx-devel >> http://www.linuxnetworks.de/doc/index.php/OpenDBX >> >> > ------------------------------ > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > > ------------------------------ > > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > libopendbx-devel mailing list > lib...@li... > https://lists.sourceforge.net/lists/listinfo/libopendbx-devel > http://www.linuxnetworks.de/doc/index.php/OpenDBX > > |
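The batching observation above holds below Glorp as well: with SQLite, every statement that is not wrapped in an explicit transaction gets its own implicit one, including a sync to disk. A minimal sketch of grouping the inserts through the OpenDBX C API (handle assumed to be bound to the sqlite3 backend; the statements and the optional PRAGMA are illustrative):

#include <odbx.h>
#include <string.h>

/* Execute one statement and drain/discard any result sets it produces. */
static int run(odbx_t *handle, const char *stmt)
{
    odbx_result_t *result = NULL;
    int err = odbx_query(handle, stmt, strlen(stmt));

    while (err >= 0 && odbx_result(handle, &result, NULL, 0) > 0) {
        odbx_result_finish(result);
    }
    return err;
}

/* Wrap many INSERTs in a single transaction instead of letting SQLite
 * open and sync an implicit transaction for every row. */
int bulk_insert(odbx_t *handle, const char **inserts, size_t count)
{
    size_t i;

    /* Optional and a durability trade-off:
     * run(handle, "PRAGMA synchronous = OFF"); */
    run(handle, "BEGIN");
    for (i = 0; i < count; i++) {
        run(handle, inserts[i]);
    }
    return run(handle, "COMMIT");
}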
From: Alain R. <al...@fr...> - 2011-07-08 19:59:11
'From Pharo1.3 of 16 June 2011 [Latest update: #13269] on 8 July 2011 at 9:51:53 pm'! Object subclass: #DBConnection instanceVariableNames: 'connection' classVariableNames: '' poolDictionaries: '' category: 'Tpcc-Model'! Object subclass: #Customer instanceVariableNames: 'id wareHouseId districtId firstName lastName middle address1 address2 city state zip country phone since credit creditLim balance ytdPayment paymentCnt deliveryCnt' classVariableNames: '' poolDictionaries: '' category: 'Tpcc-Model'! !Customer commentStamp: 'AlainRastoul 7/7/2011 16:19' prior: 0! I am a customer in the tpcc benchmark system. Instance Variables: id : (C_ID) unique customer id . 96,000 unique IDs, 3,000 are populated per district. integer wareHouseId : (C_W_ID) 2*W unique IDs districtId : (C_D_ID) 20 unique IDs firstName : (C_FIRST) variable text, size 64 (16 in tpcc benchemark) lastName (C_LAST) variable text, size 64 (16 in tpcc benchemark) middle (C_MIDDLE) fixed text, size 2 address1 <ProtoObject | PseudoContext> address2 <ProtoObject | PseudoContext> city <ProtoObject | PseudoContext> state <ProtoObject | PseudoContext> zip <ProtoObject | PseudoContext> country <ProtoObject | PseudoContext> phone <ProtoObject | PseudoContext> since <ProtoObject | PseudoContext> credit <ProtoObject | PseudoContext> creditLim <ProtoObject | PseudoContext> balance <ProtoObject | PseudoContext> ytdPayment <ProtoObject | PseudoContext> paymentCnt <ProtoObject | PseudoContext> deliveryCnt <ProtoObject | PseudoContext> from TPCC benchmark: Primary Key: (C_W_ID, C_D_ID, C_ID) (C_W_ID, C_D_ID) Foreign Key, references (D_W_ID, D_ID) ! NamedSequence subclass: #SQLiteSequence instanceVariableNames: '' classVariableNames: '' poolDictionaries: '' category: 'Glorp-Database'! DatabasePlatform subclass: #SQLitePlatform instanceVariableNames: '' classVariableNames: '' poolDictionaries: '' category: 'Glorp-Database'! Smalltalk renameClassNamed: #TpccGlorpDescriptor as: #TpccGlorpDescriptor! DescriptorSystem subclass: #TpccGlorpDescriptor instanceVariableNames: '' classVariableNames: '' poolDictionaries: '' category: 'Tpcc-Model'! !DBConnection methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 21:59'! connection ^connection! ! !DBConnection methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 21:47'! connection: anObject connection := anObject! ! !DBConnection class methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 21:54'! sqlite: aDb "comment stating purpose of message" | settings dbc | settings := DBXConnectionSettings host: SmalltalkImage current imagePath, '/' port: '' database: aDb userName: '' userPassword: ''. dbc := DBConnection new connection: (DBXConnection platform: DBXSqlitePlatform new settings: settings). ^dbc ! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! address1 ^address1! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! address1: anObject address1 := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! address2 ^address2! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! address2: anObject address2 := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! balance ^balance! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! balance: anObject balance := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! city ^city! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! 
city: anObject city := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! country ^country! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! country: anObject country := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! credit ^credit! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! credit: anObject credit := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! creditLim ^creditLim! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! creditLim: anObject creditLim := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! deliveryCnt ^deliveryCnt! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! deliveryCnt: anObject deliveryCnt := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! districtId ^districtId! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! districtId: anObject districtId := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! firstName ^firstName! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! firstName: anObject firstName := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! id ^id! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! id: anObject id := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! lastName ^lastName! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! lastName: anObject lastName := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! middle ^middle! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! middle: anObject middle := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! paymentCnt ^paymentCnt! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! paymentCnt: anObject paymentCnt := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! phone ^phone! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! phone: anObject phone := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! since ^since! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! since: anObject since := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! state ^state! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! state: anObject state := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! wareHouseId ^wareHouseId! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! wareHouseId: anObject wareHouseId := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! ytdPayment ^ytdPayment! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! ytdPayment: anObject ytdPayment := anObject! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! zip ^zip! ! !Customer methodsFor: 'accessing' stamp: 'AlainRastoul 7/7/2011 21:29'! zip: anObject zip := anObject! ! !Customer class methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 20:24'! glorpSetupDescriptor: aDescriptor forSystem: aSystem | table | table := aSystem tableNamed: 'Customer'. aDescriptor table: table.! 
! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/7/2011 21:32'! newLoginForSQLite "get a login on a sqlite test db" ^Login new database: SQLitePlatform new; username: ''; password: ''; host: SmalltalkImage current imagePath, '/' ; connectString: 'testdb.dat'. ! ! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/8/2011 20:56'! newPopulation " a random Customer population generator for benchmarks self newPopulation. " | customers | customers := (1 to: 1000) inject: OrderedCollection new into: [:all :one | |customer | customer := Customer new "id: one;" " let the db geenrate the id an retrieve it " wareHouseId: 1; districtId: 2; firstName: self pickAFirstName; lastName: self pickAName. all add: customer. all]. ^customers. ! ! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/8/2011 20:56'! newSessionForLogin: aLogin accessor: anAccessor " returns a new session for the given login" | session | session := GlorpSession new system: (TpccGlorpDescriptor forPlatform: aLogin database) ; accessor: anAccessor. ^session! ! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/8/2011 21:16'! pickACountryAndStoreFor: aCustomer " pick a random country and ware house id " | adresses | self flag: #todo. adresses := #( " country towns zipcode and street names" #('Scottland' #("town zipcodes" #( 'Edimburgh' 'EHXX') #('Aberdeen' 'ABXX') #('Inverness' 'IVXX' ) ) #( "street names") ) #('France' #("town zipcodes" #( 'Paris' '75XXX') #('Lyon' '69XXX') #('Marseille' '13XXX' ) ) #( "street names") ) ). ! ! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/8/2011 20:57'! pickAFirstName " pick a random first name from a predefined list " ^#( 'John' 'Roger' 'Stephan' 'Mike' 'Julia' 'Elisabeth' 'Mary' 'Dave' 'Edgar' 'Gertrud' 'Elmut' 'Veronica' 'Alex' ) atRandom. ! ! !Customer class methodsFor: 'benchmarks-utilities' stamp: 'AlainRastoul 7/8/2011 20:57'! pickAName " random Customer names generator from the tpcc benchmark specification nb: slightly different implementation p64. The customer last name (C_LAST) must be generated by the concatenation of three variable length syllables selected from the following list: 0 1 2 3 4 5 6 7 8 9 BAR OUGHT ABLE PRI PRES ESE ANTI CALLY ATION EING Given a number between 0 and 999, each of the three syllables is determined by the corresponding digit in the three digit representation of the number. For example, the number 371 generates the name PRICALLYOUGHT, and the number 40 generates the name BARPRESBAR." | syllabes aName| syllabes := #( 'BAR' 'OUGHT' 'ABLE' 'PRI' 'PRES' 'ESE' 'ANTI' 'CALLY' 'ATION' 'EING' ). aName := WriteStream on: String new. 3 timesRepeat: [ aName nextPutAll: syllabes atRandom ]. ^aName contents. ! ! !Customer class methodsFor: 'benchmarks' stamp: 'AlainRastoul 7/7/2011 21:29'! createTablesIn: session accessor: accessor " create the tables " session reset. session inTransactionDo: [session system allTables do: [:each | accessor createTable: each ifError: [:error |Transcript show: error messageText]]]. ! ! !Customer class methodsFor: 'benchmarks' stamp: 'AlainRastoul 7/8/2011 20:56'! insertDataIn: aSession | list | aSession reset. list := Customer newPopulation . TimeProfileBrowser onBlock: [ aSession beginUnitOfWork. list do: [ :customer | aSession register: customer ]. aSession commitUnitOfWork. ]. ! ! !Customer class methodsFor: 'benchmarks' stamp: 'AlainRastoul 7/7/2011 21:31'! 
newCustomersBench "A small benchmark that creates a list of customers and insert thoses customer in a database self newCustomersBench " | login accessor session list | DBXPlatform enableDebugMode . login := self newLoginForSQLite. accessor := DatabaseAccessor forLogin: login. accessor login. session := self newSessionForLogin: login accessor: accessor. self createTablesIn: session accessor: accessor. self insertDataIn: session. session logout! ! !SQLiteSequence methodsFor: 'sequencing' stamp: 'AlainRastoul 7/4/2011 23:30'! getSequenceValueFromDatabaseFor: aField in: aRow using: aSession! ! !SQLiteSequence methodsFor: 'sequencing' stamp: 'AlainRastoul 7/4/2011 23:30'! postWriteAssignSequenceValueFor: aDatabaseField in: aDatabaseRow using: anAccessor aDatabaseRow at: aDatabaseField put: ((anAccessor executeSQLString: 'select last_insert_rowid()') first atIndex: 1).! ! !SQLiteSequence methodsFor: 'sequencing' stamp: 'AlainRastoul 7/4/2011 23:30'! reserveSequenceNumbers: anInteger in: aSession for: aTable "No real sequences here, just identity columns, which we can't pre-allocate"! ! !SQLiteSequence methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 23:30'! isIdentityColumn ^true.! ! !SQLiteSequence class methodsFor: 'LICENSE' stamp: 'AlainRastoul 7/4/2011 23:30'! LICENSE ^'Copyright 2000-2004 Alan Knight. This class is part of the GLORP system (see http://www.glorp.org), licensed under the GNU Lesser General Public License, with clarifications with respect to Smalltalk library usage (LGPL(S)). This code is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the package comment, or the COPYING.TXT file that should accompany this distribution, or the GNU Lesser General Public License.'! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 21:17'! areSequencesExplicitlyCreated ^false! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 21:17'! asSqueakDBXAdaptor ^DBXSqlitePlatform new.! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/5/2011 20:06'! databaseSequenceClass ^SQLiteSequence.! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 20:00'! initializeReservedWords super initializeReservedWords. reservedWords add: 'key'! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 20:00'! maximumLengthOfColumnName "^<Integer> I return the max. length of a column name" ^128! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 20:00'! maximumLengthOfTableName "^<Integer> I return the max. length of a table name" ^128! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsConstraints ^true! ! !SQLitePlatform methodsFor: 'constants' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsMillisecondsInTimes "I'm guessing here" ^true.! ! !SQLitePlatform methodsFor: 'binding' stamp: 'AlainRastoul 7/4/2011 20:00'! bindingsForGroupWritingFor: aCommand "Return the bindings array for a group write. This can be in different formats, depending on the database and perhaps the mechanism in place." ^aCommand batchStatementBindings.! ! !SQLitePlatform methodsFor: 'binding' stamp: 'AlainRastoul 7/4/2011 20:00'! maximumSizeToGroupWriteFor: aCollectionOfDatabaseRows "If we are going to group write, how many rows of this collection should we do it for at once" ^aCollectionOfDatabaseRows size min: 250.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 21:43'! 
bit ^self typeNamed: #bit ifAbsentPut: [GlorpBooleanType new typeString: 'tinyint'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 21:41'! blob ^self typeNamed: #blob ifAbsentPut: [GlorpBlobType new typeString: 'blob'; queryType: (self varbinary)].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! boolean ^self bit.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! char ^self typeNamed: #char ifAbsentPut: [GlorpCharType new].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! clob ^self typeNamed: #clob ifAbsentPut: [GlorpClobType new typeString: 'text'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 21:39'! date ^self typeNamed: #date ifAbsentPut: [GlorpDateType new typeString: 'date'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! decimal ^self numeric.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! double ^self float.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! float ^self typeNamed: #float ifAbsentPut: [GlorpMSSQLFloatType new].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! float4 ^self float.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! float8 ^self float.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! int ^self integer.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! int2 ^self smallint.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! int4 ^self integer.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! int8 ^self numeric.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! numeric ^self typeNamed: #numeric ifAbsentPut: [GlorpNumericType new].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! real ^self float.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! sequence ^self serial.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 21:46'! serial ^self typeNamed: #serial ifAbsentPut: [GlorpSerialType new typeString: 'int autoincrement '].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! text ^super text queryType: self varchar.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! time ^self typeNamed: #time ifAbsentPut: [GlorpTimeType new typeString: 'datetime'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! timeStampTypeString ^'datetime'.! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! timestamp ^self typeNamed: #timestamp ifAbsentPut: [GlorpTimeStampType new typeString: 'datetime'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! tinyint ^self typeNamed: #tinyInt ifAbsentPut: [GlorpIntegerType new typeString: 'tinyint'].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! varbinary ^self typeNamed: #varbinary ifAbsentPut: [GlorpVarBinaryType new].! ! !SQLitePlatform methodsFor: 'types' stamp: 'AlainRastoul 7/4/2011 20:00'! varchar ^self typeNamed: #varchar ifAbsentPut: [GlorpVarCharType new].! ! !SQLitePlatform methodsFor: 'conversion-boolean' stamp: 'AlainRastoul 7/4/2011 20:00'! 
booleanToBooleanConverter ^DelegatingDatabaseConverter named: #booleanToBoolean hostedBy: self fromStToDb: #convertBooleanToInteger:for: fromDbToSt: #convertDBBooleanToBoolean:for:.! ! !SQLitePlatform methodsFor: 'conversion-strings' stamp: 'AlainRastoul 7/4/2011 20:00'! charactersThatNeedEscaping "There seem to be all kind of contradictory bits of information about what sql server does/requires for escaped characters, all of which differ from standard sql. Empirically the only thing that requires escaping appears to be single quote" ^#($' ).! ! !SQLitePlatform methodsFor: 'conversion-strings' stamp: 'AlainRastoul 7/4/2011 20:00'! escapeFor: aCharacter ^String with: $' with: aCharacter. " ^'\', (aCharacter asInteger printStringRadix: 16)."! ! !SQLitePlatform methodsFor: 'conversion-strings' stamp: 'AlainRastoul 7/4/2011 20:00'! printBlob: aByteArray on: aStream for: aType aByteArray isNil ifTrue: [^aStream nextPutAll: 'NULL']. aStream nextPutAll: '0x'. aByteArray do: [:each | each printOn: aStream paddedWith: $0 to: 2 base: 16].! ! !SQLitePlatform methodsFor: 'database-specific' stamp: 'AlainRastoul 7/4/2011 20:00'! compoundOperationFor: aSymbol "Return the platform specific version of a compound statement symbol" aSymbol == #INTERSECT ifTrue: [^'WHERE EXISTS']. aSymbol == #MINUS ifTrue: [^'WHERE NOT EXISTS']. ^aSymbol.! ! !SQLitePlatform methodsFor: 'database-specific' stamp: 'AlainRastoul 7/5/2011 21:46'! printPostLimit: anInteger on: aCommand aCommand nextPutAll: ' LIMIT '. anInteger printOn: aCommand.! ! !SQLitePlatform methodsFor: 'database-specific' stamp: 'AlainRastoul 7/4/2011 20:00'! queryWithUnsupportedOperationsEliminatedFrom: aQuery do: aBlock "If aQuery has operations that we don't support, rewrite it to do them in terms of lower level operations. In particular, rewrite INTERSECT/EXCEPT operations into EXISTS clauses in a single query. Pass the new query to aBlock." | newQuery | newQuery := aQuery rewriteIntersect. newQuery := newQuery rewriteExcept. newQuery == aQuery ifFalse: [aBlock value: newQuery].! ! !SQLitePlatform methodsFor: 'conversion-times' stamp: 'AlainRastoul 7/4/2011 20:00'! dateConverter "SQL server doesn't have plain dates, and doesn't accept them" ^DelegatingDatabaseConverter named: #date hostedBy: self fromStToDb: #dateToTimestampConversion:for: fromDbToSt: #readDate:for:. "#printDate:for:"! ! !SQLitePlatform methodsFor: 'conversion-times' stamp: 'AlainRastoul 7/4/2011 21:17'! dateToTimestampConversion: aDate for: aType aDate isNil ifTrue: [^aDate]. ^aDate asTimestamp.! ! !SQLitePlatform methodsFor: 'conversion-times' stamp: 'AlainRastoul 7/4/2011 20:00'! printDate: aDate for: aType "Print a date (or timestamp) as yyyy-mm-dd" | stream | aDate isNil ifTrue: [^'NULL']. stream := WriteStream on: String new. stream nextPutAll: '{ d '''. self printDate: aDate isoFormatOn: stream. stream nextPutAll: ''' }'. ^stream contents.! ! !SQLitePlatform methodsFor: 'conversion-times' stamp: 'AlainRastoul 7/4/2011 20:00'! printTime: aTime for: aType "Print a time (or timestamp) as hh:mm:ss.fff" | stream | aTime isNil ifTrue: [^'NULL']. stream := WriteStream on: String new. stream nextPutAll: '{ t '''. self printTime: aTime isoFormatOn: stream milliseconds: self supportsMillisecondsInTimes. stream nextPutAll: ''' }'. ^stream contents.! ! !SQLitePlatform methodsFor: 'conversion-times' stamp: 'AlainRastoul 7/4/2011 20:00'! printTimestamp: aTimestamp on: stream for: aType aTimestamp isNil ifTrue: [stream nextPutAll: 'NULL'. ^self]. stream nextPutAll: '{ ts '''. 
self printDate: aTimestamp isoFormatOn: stream. stream nextPutAll: ' '. self printTime: aTimestamp isoFormatOn: stream. stream nextPutAll: ''' }'.! ! !SQLitePlatform methodsFor: 'exdi specific' stamp: 'AlainRastoul 7/4/2011 20:00'! exdiTypeForDates ^#Timestamp.! ! !SQLitePlatform methodsFor: 'functions' stamp: 'AlainRastoul 7/4/2011 20:00'! initializeFunctions super initializeFunctions. functions at: #, put: (InfixFunction named: '+'); at: #copyFrom:to: put: (SubstringFunction named: 'SUBSTRING')! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 21:38'! isODBCPlatform ^false! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsANSIJoins "Do we support the JOIN <tableName> USING <criteria> syntax. Currently hard-coded, but may also vary by database version" ^true.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:08'! supportsBinding "Binding works only with VW EXDI so far" ^true.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsDecimalsOnAllNumerics "Return true if a general 'numeric' type will allow numbers after the decimal place" ^false.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsGroupWritingFor: aCommand ^aCommand supportsGroupWriting.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsLimit "Do we support anything analogous to the postgresql LIMIT, returning only the first N rows" ^true.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! supportsMultipleOpenCursors "Can this database support multiple open cursors at once" ^false.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/5/2011 21:48'! supportsTableOwners "Return true if this platform supports table owners, i.e. expects table names of the form Bern.TW_* rather than just TW_* in its SQL." "Access, Firebird and PostGreSQL do not, Oracle does, others I know not." ^false! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! usesArrayBindingRatherThanGrouping "Return true if we use array binding for grouped writes rather than printing the sql multiple times. Only applies if we support grouped writes" ^false.! ! !SQLitePlatform methodsFor: 'testing' stamp: 'AlainRastoul 7/4/2011 20:00'! usesIdentityColumns ^true.! ! !SQLitePlatform class methodsFor: 'LICENSE' stamp: 'AlainRastoul 7/4/2011 20:00'! LICENSE ^'Copyright 2000-2004 Alan Knight. This class is part of the GLORP system (see http://www.glorp.org), licensed under the GNU Lesser General Public License, with clarifications with respect to Smalltalk library usage (LGPL(S)). This code is distributed WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE . See the package comment, or the COPYING.TXT file that should accompany this distribution, or the GNU Lesser General Public License.'! ! !TpccGlorpDescriptor methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 20:16'! allTableNames ^#( 'Customer' )! ! !TpccGlorpDescriptor methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 20:25'! classModelForCustomer: aClassModel "0 halt." aClassModel newAttributeNamed: #id. aClassModel newAttributeNamed: #wareHouseId. aClassModel newAttributeNamed: #districtId. aClassModel newAttributeNamed: #firstName. aClassModel newAttributeNamed: #lastName. aClassModel newAttributeNamed: #address1. aClassModel newAttributeNamed: #address2. 
aClassModel newAttributeNamed: #city. aClassModel newAttributeNamed: #state. aClassModel newAttributeNamed: #zip. aClassModel newAttributeNamed: #country. ! ! !TpccGlorpDescriptor methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 20:25'! constructAllClasses "0 halt." ^(super constructAllClasses) add: Customer; yourself ! ! !TpccGlorpDescriptor methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/4/2011 20:24'! descriptorForCustomer: aDescriptor | table | " 0 halt." table := self tableNamed: 'Customer'. aDescriptor table: table. (aDescriptor newMapping: DirectMapping) from: #id to: (table fieldNamed: 'id'). (aDescriptor newMapping: DirectMapping) from: #wareHouseId to: (table fieldNamed: 'wareHouseId'). (aDescriptor newMapping: DirectMapping) from: #districtId to: (table fieldNamed: 'districtId'). (aDescriptor newMapping: DirectMapping) from: #firstName to: (table fieldNamed: 'firstName'). (aDescriptor newMapping: DirectMapping) from: #lastName to: (table fieldNamed: 'lastName'). (aDescriptor newMapping: DirectMapping) from: #address1 to: (table fieldNamed: 'address1'). (aDescriptor newMapping: DirectMapping) from: #address2 to: (table fieldNamed: 'address2'). (aDescriptor newMapping: DirectMapping) from: #city to: (table fieldNamed: 'city'). (aDescriptor newMapping: DirectMapping) from: #state to: (table fieldNamed: 'state'). (aDescriptor newMapping: DirectMapping) from: #zip to: (table fieldNamed: 'zip'). (aDescriptor newMapping: DirectMapping) from: #country to: (table fieldNamed: 'country'). ! ! !TpccGlorpDescriptor methodsFor: 'as yet unclassified' stamp: 'AlainRastoul 7/5/2011 20:58'! tableForCUSTOMER: aTable "id wareHouseId districtId firstName lastName middle address1 address2 city state zip phone since credit creditLim balance ytdPayment paymentCnt deliveryCnt" "0 halt." " (aTable createFieldNamed: 'wareHouseId' type: (platform integer)) bePrimaryKey. (aTable createFieldNamed: 'districtId' type: (platform integer)) bePrimaryKey. (aTable createFieldNamed: 'id' type: (platform integer) ) bePrimaryKey . " aTable createFieldNamed: 'wareHouseId' type: (platform integer). aTable createFieldNamed: 'districtId' type: (platform integer). (aTable createFieldNamed: 'id' type: (platform integer)) bePrimaryKey . aTable createFieldNamed: 'firstName' type: (platform varChar: 64). aTable createFieldNamed: 'lastName' type: (platform varChar: 64). aTable createFieldNamed: 'middle' type: (platform char: 2). aTable createFieldNamed: 'address1' type: (platform varChar: 64). aTable createFieldNamed: 'address2' type: (platform varChar: 64). aTable createFieldNamed: 'city' type: (platform varChar: 64). aTable createFieldNamed: 'state' type: (platform char: 64). aTable createFieldNamed: 'zip' type: (platform char: 16). aTable createFieldNamed: 'country' type: (platform char: 64). aTable createFieldNamed: 'phone' type: (platform varChar: 64). ! ! TpccGlorpDescriptor removeSelector: #descriptorForCUSTOMER:! TpccGlorpDescriptor removeSelector: #descriptorForPerson:! TpccGlorpDescriptor removeSelector: #tableForCustomer:! SQLitePlatform removeSelector: #printPreLimit:on:! SQLitePlatform removeSelector: #useMicrosoftOuterJoins! Customer class removeSelector: #generate! Customer class removeSelector: #pickACountryAndStore! !Customer class reorganize! 
('as yet unclassified' glorpSetupDescriptor:forSystem:) ('benchmarks-utilities' newLoginForSQLite newPopulation newSessionForLogin:accessor: pickACountryAndStoreFor: pickAFirstName pickAName) ('benchmarks' createTablesIn:accessor: insertDataIn: newCustomersBench) ! DBConnection removeSelector: #sqlite:! |
From: Norbert S. <no...@li...> - 2011-07-07 07:42:36
|
Hi Guille > I'm having problems setting up an opendbx Windows environment with mssql, > using the latest MinGW and MSys. I've already been able to compile and put > freetds to work, and I'm now having problems compiling opendbx :). If you want to connect to an MS SQL server in a Windows environment, please prefer the OpenDBX odbc backend :-) > I run configure this way: > > *CPPFLAGS="-I/c/MinGW/msys/1.0/local/freetds/include" > LDFLAGS="-L/c/MinGW/msys/1.0/local/freetds/lib" ./configure --disable-utils > --with-backends="mssql"* > > When I run make, I get errors like the following: > > *undefined reference to `libintl_snprintf'* > > And in the config.log file I see the following lines: > > *configure:18866: gcc -std=gnu99 -o conftest.exe -g -O2 > -I/c/MinGW/msys/1.0/local/freetds/include > -L/c/MinGW/msys/1.0/local/freetds/lib conftest.c>&5 > conftest.c:72:6: warning: conflicting types for built-in function 'snprintf' > * Can you send me the make output, including the output on STDERR? Based on the make.log you sent, I would assume everything is OK. Norbert |
From: Norbert S. <no...@li...> - 2011-07-07 07:27:13
|
Hi Thibault >> OK, but you don't gain much because this is only one malloc you >> will save. One more is needed for the auxiliary data and I think >> several are done by the native client libraries. The reason I opted >> against this was the choices you have to care about when you a) get >> an already initialized structure and b) get a structure that only >> seems to be initialized due to random data. > > The mallocs in the native client libraries are unavoidable, my point > was just to minimize their use since each one involves a lot of > overhead. I did not get the point about initialized data. Is there a > difference between stack and heap? Usually it's faster to get memory from the stack instead of the heap, but on the stack only fixed-size memory can be allocated. This means you can't use a dynamic number of connections. >> Many native database libraries support prepared statements. It's >> relatively simple: - Send the SQL with markers to the server and >> get back a handle - bind your values to this handle - Send only the >> data to the server (one or several times) - Get the binary values >> from the result set > >> Contrary to e.g. DBI I would favour a single function that copies >> the data to a memory area provided by the client. This can be used >> with all types of data and is faster and safer than allocating >> memory that must be freed by the client. > > I agree on this. The design I had in mind was more minimalistic such > as not to involve any memory allocation (except on stack): > odbx_prepare(conn_obj, query_obj, statement); // The statement > contains markers like %s or %i which are replaced by native client > library markers odbx_exec_prepared(conn_obj, query_obj, result_obj, > ...); // The variadic arguments are interpreted according to the > types known from the previous function and stored in query_obj The native database libraries usually provide a function like bind( stmt, position, value ). They don't work with variadic arguments; instead, you have to bind a value to every marker (usually the question mark) before executing the prepared statement. >> No, as this would interfere with the positive return values for >> e.g. odbx_result(). > > Misunderstanding, I meant to return values other than errors (In-band > error codes, which are dangerous > https://www.securecoding.cert.org/confluence/display/seccode/ERR02-C.+Avoid+in-band+error+indicators > ). The negative error codes design would allow this practice to be > used in other functions than odbx_result. Instead, ODBX_RES_ROWS and > NOROWS could be removed in favour of a mean to get the number of rows > after a SELECT. ODBX_RES_TIMEOUT in turn could be included as an > error code. OK, I see what you mean, but you don't have in-band error codes in odbx_result() because this function only returns codes and no values. Nevertheless, there are functions with in-band errors (all that return values from the database): - odbx_error - odbx_rows_affected - odbx_column_count - odbx_column_name - odbx_column_type - odbx_field_length - odbx_field_value >> The initial design decision was to allow having multiple result >> sets per connection even if most databases don't support this. >> But I didn't want to limit the client libraries that would be able >> to do so even if none of the currently implemented ones can do so :-) > > Misunderstanding, I meant to call odbx_field_value( handle, result, i > ) rather than odbx_field_value( result, i ). The user might think > that he can free the connection and then still read the result > structure. Including the handle in all functions might however be > excessive, so you could simply replace odbx_result_finish(result) by > odbx_result_finish(handle, result). Thus, users have to keep the > handle alive while parsing the result structure. You are right, extending odbx_result_finish() with the connection handle would be a strong hint to developers that they have to keep the handle alive. Norbert |
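For illustration, here is a rough C sketch of the bind-style flow described above. None of the odbx_stmt_* names exist in OpenDBX; they are hypothetical placeholders for the proposed design, shown only to make the prepare/bind/execute steps and the caller-provided result buffer concrete.

    /* Hypothetical sketch of a bind-style prepared statement API.
     * None of the odbx_stmt_* functions exist in OpenDBX; only the flow
     * (prepare, bind, execute, copy values into caller memory) is the point. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct odbx_t odbx_t;            /* OpenDBX connection handle */
    typedef struct odbx_stmt_t odbx_stmt_t;  /* hypothetical statement handle */

    int odbx_stmt_prepare(odbx_t* conn, const char* sql, odbx_stmt_t** stmt);
    int odbx_stmt_bind(odbx_stmt_t* stmt, int pos, const void* value, size_t len);
    int odbx_stmt_execute(odbx_stmt_t* stmt);
    int odbx_stmt_fetch(odbx_stmt_t* stmt);
    int odbx_stmt_copy_value(odbx_stmt_t* stmt, int column, void* buf, size_t buflen);
    int odbx_stmt_finish(odbx_stmt_t* stmt);

    int print_last_name(odbx_t* conn, long id)
    {
        odbx_stmt_t* stmt = NULL;
        char name[65];   /* caller-provided memory, nothing to free afterwards */

        /* 1) send the SQL with a marker to the server and get back a handle */
        if (odbx_stmt_prepare(conn,
                "SELECT lastName FROM Customer WHERE id = ?", &stmt) < 0)
            return -1;

        /* 2) bind a value to every marker */
        odbx_stmt_bind(stmt, 1, &id, sizeof(id));

        /* 3) send only the data; this step could be repeated with new bindings */
        if (odbx_stmt_execute(stmt) < 0) { odbx_stmt_finish(stmt); return -1; }

        /* 4) the library copies each value into the area provided by the client */
        while (odbx_stmt_fetch(stmt) > 0) {
            odbx_stmt_copy_value(stmt, 0, name, sizeof(name));
            printf("%s\n", name);
        }
        return odbx_stmt_finish(stmt);
    }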
From: Norbert S. <no...@li...> - 2011-07-07 07:03:24
|
On 07/05/2011 12:08 PM, Zhao Tongyi wrote: > the previous patch at pgsql_odbx_get_option was missing a break, sorry > > 2011/7/5 Zhao Tongyi<zha...@gm...>: >> I need to insert "BYTEA" type data into a PostgreSQL database, but >> found that the escape function of the pgsql backend calls PQescapeString, so >> I added this patch to opendbx-1.4.5: use odbx_set_option to choose between >> PQescapeString and PQescapeBytea. >> In addition, the OpenDBX project has not been updated for a long time. I've thought for some time about your implementation and requirements, and I'm not sure it's the right solution for the problem. If I understand correctly, only binary values have to be escaped using the PQescapeByteaConn/PQescapeBytea function before they can be inserted into a SQL string. This means that this function doesn't work for string values, and depending on your column, you have to switch between the PQescapeByteaConn and PQescapeStringConn functions. Therefore, using a connection option to switch between those functions doesn't seem to be a good solution. Furthermore, I was wondering if we can integrate such special handling for a specific backend at all, or if it's too special. A solution would then be to provide a function that returns the bare connection handle of the native database library. This could then be used to call the native library function directly, but it would have to be used with care, and doing this isn't portable. What do you think? Norbert |
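A minimal C sketch of that last idea, assuming a hypothetical odbx_native_handle() accessor: only the libpq calls (PQescapeByteaConn, PQfreemem) are real, and tying application code to the pgsql backend like this is exactly the portability cost mentioned above.

    /* Sketch only: odbx_native_handle() does NOT exist in OpenDBX; it stands
     * in for the proposed "return the bare native connection handle" function. */
    #include <stddef.h>
    #include <libpq-fe.h>

    typedef struct odbx_t odbx_t;              /* OpenDBX connection handle */
    void* odbx_native_handle(odbx_t* handle);  /* hypothetical accessor */

    /* Escape binary data for an INSERT against the pgsql backend.
     * The returned buffer is allocated by libpq and must be released with
     * PQfreemem(). Not portable to other backends. */
    unsigned char* escape_bytea(odbx_t* handle, const unsigned char* data,
                                size_t len, size_t* escaped_len)
    {
        PGconn* pg = (PGconn*) odbx_native_handle(handle);
        if (pg == NULL)
            return NULL;
        return PQescapeByteaConn(pg, data, len, escaped_len);
    }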
From: Mariano M. P. <mar...@gm...> - 2011-07-06 21:26:55
|
On Tue, Jul 5, 2011 at 11:46 PM, Alain Rastoul <al...@fr...> wrote: > ** > Hi Mariano, > I don't want to do multi threading with sqlite because I know it doesn't > work. > I was curious about the squeakdbx (or opendbx architecture) because of the > not so good performance and the time spent in waiting , I do not > understand the squeakdbx package vs opendbx package: the doc is mentioning a > squeakdbx plugin dll but I have no squeakdbx dll ? > Sorry. THat's outdated. Once (2 years ago) Esteban Lorenzano tried to write a plugin to avoid FFI. The idea was that such plugin could avoid locking the VM. But I don't remember why we didn't succeded. > > You are saying that in that case the external call is counted on the > InputEventPollingFetcher>> wait and not in primitives (?). > maybe :) but I don't know > I will investigate with FFI/SQlite and it should be the same (I've seen > some messages about incorrect profiling reports in primitives), > > Yes, primitives are not really well measured in profilers. Check the new profiler announced by Eliot Miranda, it fixes this problem. > I expected much better performance with sqlite , and glorp is very good > (5% of the time), I would have expected the contrary. > Sorry I didn't understand. > > Thanks > > Cheers > Alain > > "Mariano Martinez Peck" <mar...@gm...> a écrit dans le message de > news:CAA+-=mVV...@ma...... > > > On Tue, Jul 5, 2011 at 10:50 PM, Alain Rastoul <al...@fr...> wrote: > >> Hi, >> (sorry for sending this mail again, my pc was off for a long time and the >> message was dated from 2007, people who sort their messages would not see >> it) >> >> I've done a small program in Pharo 1.3 with glorp+opendbx that insert 1000 >> rows in a customer table in a sqlite db. >> The 1000 insert takes 140 sec (very slow), but the Pharo profiler says >> that >> it spend 95% >> of the time waiting for input. >> (in InputEventPollingFetcher>> waitForInput) >> I was wondering if the queries are executed in another thread than the vm >> thread ? >> > > Hi Alain. No. Squeak/Pharo's thread architecture is the so called green > thread, that is, only ONE OS thread is used. Internally, the language > reifies Process, Scheduler, #fork: , etc etc etc. But from the OS point of > view there is only one thread for the VM. So.....the regular FFI blocks the > VM. What does it mean? that while the C function called by FFI is being > executed, the WHOLE VM is block. Notihgn can happen at the same time. > Imagine the function that retrieves the results and needs to wait for > them.....TERRIBLE. So...if the backend does not support async quieries, then > you are screw and dbx may be slow in Pharo. Nothing to do. > > However, some backends support async queries, and opendbx let us configure > this. This is explained in: > > http://www.squeakdbx.org/Architecture%20and%20desing?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&18 > where it says "External call implementation" > > You can see the list of backends that support async queries in here: > > http://www.squeakdbx.org/documentation/Asynchronous%20queries?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&17 > > Notice that there is some room for improvements, but we didn't have time so > far. Hernik told us some good ideas. But since we didn't need more power so > far we couldn't find time to integrate his ideas. I am forwarding now the > emails to the mailing list. If you can take a look and provide code, it > would be awesome. Basically, it improves how and how much we wait in each > side: image and opendbx. 
> > Finally, notice that Eliot is working in a multithreared FFI for Cog, but > it is not yet available as far as I know. > > Cheers > > Mariano > > (I thought I've seen a document about opendbx architecture but could'nt >> find >> it on the site). >> >> TIA >> Alain >> >> >> >> >> ------------------------------------------------------------------------------ >> All of the data generated in your IT infrastructure is seriously valuable. >> Why? It contains a definitive record of application performance, security >> threats, fraudulent activity, and more. Splunk takes this data and makes >> sense of it. IT sense. And common sense. >> http://p.sf.net/sfu/splunk-d2d-c2 >> _______________________________________________ >> libopendbx-devel mailing list >> lib...@pu... >> >> https://lists.sourceforge.net/lists/listinfo/libopendbx-devel >> http://www.linuxnetworks.de/doc/index.php/OpenDBX >> > > > > -- > Mariano > http://marianopeck.wordpress.com > > ------------------------------ > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > > ------------------------------ > > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > libopendbx-devel mailing list > lib...@li... > https://lists.sourceforge.net/lists/listinfo/libopendbx-devel > http://www.linuxnetworks.de/doc/index.php/OpenDBX > > -- Mariano http://marianopeck.wordpress.com |
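At the OpenDBX C level, the waiting discussed above maps onto the timeout parameter of odbx_result(): with backends that support asynchronous queries, a caller can poll for the result and do other work between attempts instead of blocking. The sketch below assumes the documented odbx_result()/odbx_row_fetch() behaviour; do_other_work() and the 10 ms interval are illustrative assumptions only.

    /* Polling for a result with a timeout instead of blocking the caller.
     * do_other_work() is a placeholder (e.g. giving control back to the image);
     * the rest is the regular OpenDBX result-fetching API. */
    #include <opendbx/api.h>
    #include <sys/time.h>

    extern void do_other_work(void);   /* placeholder, not part of OpenDBX */

    int fetch_polling(odbx_t* handle)
    {
        odbx_result_t* result = NULL;
        int err;

        for (;;) {
            struct timeval tv = { 0, 10000 };      /* wait at most 10 ms per attempt */

            err = odbx_result(handle, &result, &tv, 0);
            if (err < 0) return err;               /* error code */
            if (err == ODBX_RES_TIMEOUT) {         /* not ready yet: stay responsive */
                do_other_work();
                continue;
            }
            if (err == ODBX_RES_DONE) return 0;    /* no more result sets */

            /* ODBX_RES_ROWS or ODBX_RES_NOROWS: consume and release the result */
            while (odbx_row_fetch(result) > 0) {
                /* read columns with odbx_field_value(result, pos) here */
            }
            odbx_result_finish(result);
        }
    }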
From: Alain R. <al...@fr...> - 2011-07-06 17:54:46
|
Thank you Diogenes, I will try your advice about synchronous pragma and send you my code, but I have to work for a customer this evening :( I will subscribe to the squeak dbx mailing list too. thanks again Regards, Alain "Diogenes Moreira" <dio...@gm...> a écrit dans le message de news: CAL...@ma...... hi alan. i dont know if sqlite implemetation has a problem.. but in glorp some times do unnecesary read.. by example when you making "N times" CommitAndContinue.. and because it, you see a lot time spending in InputEventPollingFetcher>>wait. if you send to me or send to SqueakDbx list you glorp code(ST), may be, I can send to you some advices. or may be, we can find the delay source. Best. pd: please check in you sqlite the pragmas.. by example synchronous (http://www.sqlite.org/pragma.html#pragma_synchronous) that increment the performance to much. On Tue, Jul 5, 2011 at 6:46 PM, Alain Rastoul <al...@fr...> wrote: Hi Mariano, I don't want to do multi threading with sqlite because I know it doesn't work. I was curious about the squeakdbx (or opendbx architecture) because of the not so good performance and the time spent in waiting , I do not understand the squeakdbx package vs opendbx package: the doc is mentioning a squeakdbx plugin dll but I have no squeakdbx dll ? You are saying that in that case the external call is counted on the InputEventPollingFetcher>> wait and not in primitives (?). I will investigate with FFI/SQlite and it should be the same (I've seen some messages about incorrect profiling reports in primitives), I expected much better performance with sqlite , and glorp is very good (5% of the time), I would have expected the contrary. Thanks Cheers Alain "Mariano Martinez Peck" <mar...@gm...> a écrit dans le message de news:CAA+-=mVV3zvP...@pu...... On Tue, Jul 5, 2011 at 10:50 PM, Alain Rastoul <al...@fr...> wrote: Hi, (sorry for sending this mail again, my pc was off for a long time and the message was dated from 2007, people who sort their messages would not see it) I've done a small program in Pharo 1.3 with glorp+opendbx that insert 1000 rows in a customer table in a sqlite db. The 1000 insert takes 140 sec (very slow), but the Pharo profiler says that it spend 95% of the time waiting for input. (in InputEventPollingFetcher>> waitForInput) I was wondering if the queries are executed in another thread than the vm thread ? Hi Alain. No. Squeak/Pharo's thread architecture is the so called green thread, that is, only ONE OS thread is used. Internally, the language reifies Process, Scheduler, #fork: , etc etc etc. But from the OS point of view there is only one thread for the VM. So.....the regular FFI blocks the VM. What does it mean? that while the C function called by FFI is being executed, the WHOLE VM is block. Notihgn can happen at the same time. Imagine the function that retrieves the results and needs to wait for them.....TERRIBLE. So...if the backend does not support async quieries, then you are screw and dbx may be slow in Pharo. Nothing to do. However, some backends support async queries, and opendbx let us configure this. This is explained in: http://www.squeakdbx.org/Architecture%20and%20desing?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&18 where it says "External call implementation" You can see the list of backends that support async queries in here: http://www.squeakdbx.org/documentation/Asynchronous%20queries?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&17 Notice that there is some room for improvements, but we didn't have time so far. Hernik told us some good ideas. 
But since we didn't need more power so far we couldn't find time to integrate his ideas. I am forwarding now the emails to the mailing list. If you can take a look and provide code, it would be awesome. Basically, it improves how and how much we wait in each side: image and opendbx. Finally, notice that Eliot is working in a multithreared FFI for Cog, but it is not yet available as far as I know. Cheers Mariano (I thought I've seen a document about opendbx architecture but could'nt find it on the site). TIA Alain ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 _______________________________________________ libopendbx-devel mailing list libopendbx-...@pu... https://lists.sourceforge.net/lists/listinfo/libopendbx-devel http://www.linuxnetworks.de/doc/index.php/OpenDBX -- Mariano http://marianopeck.wordpress.com -------------------------------------------------------------------------- ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 -------------------------------------------------------------------------- ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 _______________________________________________ libopendbx-devel mailing list lib...@li... https://lists.sourceforge.net/lists/listinfo/libopendbx-devel http://www.linuxnetworks.de/doc/index.php/OpenDBX ------------------------------------------------------------------------------ ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 ------------------------------------------------------------------------------ |
From: Thibault R. <tr...@kt...> - 2011-07-06 14:34:14
|
Hi Norbert, > OK, but you don't gain much because this is only one malloc you will > save. One more is needed for the auxiliary data and I think several are > done by the native client libraries. The reason I opted against this was > the choices you have to care about when you a) get an already > initialized structure and b) get a structure that only seem to be > initialized due to random data. The mallocs in the native client libraries are unavoidable, my point was just to minimize their use since each one involves a lot of overhead. I did not get the point about initialized data. Is there a difference between stack and heap? > Many native database libraries support prepared statements. It's > relatively simple: > - Send the SQL with markers to the server and get back a handle > - bind your values to this handle > - Send only the data to the server (one or several times) > - Get the binary values from the result set > Contrary to e.g. DBI I would favour a single function that copies the > data to a memory area provided by the client. This can be used with all > types of data and is faster and safer than allocating memory that must > be freed by the client. I agree on this. The design I had in mind was more minimalistic such as not to involve any memory allocation (except on stack): odbx_prepare(conn_obj, query_obj, statement); // The statement contains markers like %s or %i which are replaced by native client library markers odbx_exec_prepared(conn_obj, query_obj, result_obj, ...); // The variadic arguments are interpreted according to the types known from the previous function and stored in query_obj > No, as this would interfere with the positive return values for e.g. > odbx_result(). Misunderstanding, I meant to return values other than errors (In-band error codes, which are dangerous https://www.securecoding.cert.org/confluence/display/seccode/ERR02-C.+Avoid+in-band+error+indicators ). The negative error codes design would allow this practice to be used in other functions than odbx_result. Instead, ODBX_RES_ROWS and NOROWS could be removed in favour of a mean to get the number of rows after a SELECT. ODBX_RES_TIMEOUT in turn could be included as an error code. > The initial design decision was to allow having multiple result sets per > connection even if most databases doesn't support this. But I didn't > want to limit the client libraries that would be able to so even if none > of the currently implemented ones can do so :-) Misunderstanding, I meant to call odbx_field_value( handle, result, i ) rather than odbx_field_value( result, i ). The user might think that he can free the connection and then still read the result structure. Including the handle in all functions might however be excessive, so you could simply replace odbx_result_finish(result) by odbx_result_finish(handle, result). Thus, users have to keep the handle alive while parsing the result structure. Regards, Thibault ------------------------------------------------------------------------------ All of the data generated in your IT infrastructure is seriously valuable. Why? It contains a definitive record of application performance, security threats, fraudulent activity, and more. Splunk takes this data and makes sense of it. IT sense. And common sense. http://p.sf.net/sfu/splunk-d2d-c2 _______________________________________________ libopendbx-devel mailing list lib...@li... https://lists.sourceforge.net/lists/listinfo/libopendbx-devel http://www.linuxnetworks.de/doc/index.php/OpenDBX |
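For comparison with the bind-style sketch earlier on this page, here is roughly what this minimalistic proposal would look like in use. Everything below is hypothetical; the function names and the stack-allocated query/result objects only illustrate the shape Thibault describes.

    /* Hypothetical sketch of the proposed printf-style prepared statement API.
     * Nothing here exists in OpenDBX; the structs are stand-ins for the
     * stack-allocated query and result objects mentioned above. */
    struct proposed_query  { int marker_types[8]; int marker_count; };
    struct proposed_result { long rows_affected; };

    typedef struct odbx_t odbx_t;   /* OpenDBX connection handle */

    int odbx_prepare(odbx_t* conn, struct proposed_query* query, const char* statement);
    int odbx_exec_prepared(odbx_t* conn, struct proposed_query* query,
                           struct proposed_result* result, ...);

    int insert_customer(odbx_t* conn, const char* first, const char* last, int district)
    {
        /* both objects live on the stack: no heap allocation in the wrapper */
        struct proposed_query  query;
        struct proposed_result result;

        /* %s and %i markers would be translated to the native client markers,
         * and their types remembered in the query object */
        if (odbx_prepare(conn, &query,
                "INSERT INTO Customer (firstName, lastName, districtId) "
                "VALUES (%s, %s, %i)") < 0)
            return -1;

        /* the variadic arguments are interpreted according to those types */
        return odbx_exec_prepared(conn, &query, &result, first, last, district);
    }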
From: Diogenes M. <dio...@gm...> - 2011-07-06 05:03:06
|
hi alan. i dont know if sqlite implemetation has a problem.. but in glorp some times do unnecesary read.. by example when you making "N times" CommitAndContinue.. and because it, you see a lot time spending in InputEventPollingFetcher>>wait. if you send to me or send to SqueakDbx list you glorp code(ST), may be, I can send to you some advices. or may be, we can find the delay source. Best. pd: please check in you sqlite the pragmas.. by example synchronous ( http://www.sqlite.org/pragma.html#pragma_synchronous) that increment the performance to much. On Tue, Jul 5, 2011 at 6:46 PM, Alain Rastoul <al...@fr...> wrote: > ** > Hi Mariano, > I don't want to do multi threading with sqlite because I know it doesn't > work. > I was curious about the squeakdbx (or opendbx architecture) because of the > not so good performance and the time spent in waiting , I do not > understand the squeakdbx package vs opendbx package: the doc is mentioning a > squeakdbx plugin dll but I have no squeakdbx dll ? > > You are saying that in that case the external call is counted on the > InputEventPollingFetcher>> wait and not in primitives (?). > I will investigate with FFI/SQlite and it should be the same (I've seen > some messages about incorrect profiling reports in primitives), > > I expected much better performance with sqlite , and glorp is very good > (5% of the time), I would have expected the contrary. > > Thanks > > Cheers > Alain > > "Mariano Martinez Peck" <mar...@gm...> a écrit dans le message de > news:CAA+-=mVV...@ma...... > > > On Tue, Jul 5, 2011 at 10:50 PM, Alain Rastoul <al...@fr...> wrote: > >> Hi, >> (sorry for sending this mail again, my pc was off for a long time and the >> message was dated from 2007, people who sort their messages would not see >> it) >> >> I've done a small program in Pharo 1.3 with glorp+opendbx that insert 1000 >> rows in a customer table in a sqlite db. >> The 1000 insert takes 140 sec (very slow), but the Pharo profiler says >> that >> it spend 95% >> of the time waiting for input. >> (in InputEventPollingFetcher>> waitForInput) >> I was wondering if the queries are executed in another thread than the vm >> thread ? >> > > Hi Alain. No. Squeak/Pharo's thread architecture is the so called green > thread, that is, only ONE OS thread is used. Internally, the language > reifies Process, Scheduler, #fork: , etc etc etc. But from the OS point of > view there is only one thread for the VM. So.....the regular FFI blocks the > VM. What does it mean? that while the C function called by FFI is being > executed, the WHOLE VM is block. Notihgn can happen at the same time. > Imagine the function that retrieves the results and needs to wait for > them.....TERRIBLE. So...if the backend does not support async quieries, then > you are screw and dbx may be slow in Pharo. Nothing to do. > > However, some backends support async queries, and opendbx let us configure > this. This is explained in: > > http://www.squeakdbx.org/Architecture%20and%20desing?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&18 > where it says "External call implementation" > > You can see the list of backends that support async queries in here: > > http://www.squeakdbx.org/documentation/Asynchronous%20queries?_s=FlIhkPQOOFSlqf8C&_k=j-3_7Kw_&_n&17 > > Notice that there is some room for improvements, but we didn't have time so > far. Hernik told us some good ideas. But since we didn't need more power so > far we couldn't find time to integrate his ideas. I am forwarding now the > emails to the mailing list. 
If you can take a look and provide code, it > would be awesome. Basically, it improves how and how much we wait in each > side: image and opendbx. > > Finally, notice that Eliot is working in a multithreared FFI for Cog, but > it is not yet available as far as I know. > > Cheers > > Mariano > > (I thought I've seen a document about opendbx architecture but could'nt >> find >> it on the site). >> >> TIA >> Alain >> >> >> >> >> ------------------------------------------------------------------------------ >> All of the data generated in your IT infrastructure is seriously valuable. >> Why? It contains a definitive record of application performance, security >> threats, fraudulent activity, and more. Splunk takes this data and makes >> sense of it. IT sense. And common sense. >> http://p.sf.net/sfu/splunk-d2d-c2 >> _______________________________________________ >> libopendbx-devel mailing list >> lib...@pu... >> >> https://lists.sourceforge.net/lists/listinfo/libopendbx-devel >> http://www.linuxnetworks.de/doc/index.php/OpenDBX >> > > > > -- > Mariano > http://marianopeck.wordpress.com > > ------------------------------ > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > > ------------------------------ > > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > libopendbx-devel mailing list > lib...@li... > https://lists.sourceforge.net/lists/listinfo/libopendbx-devel > http://www.linuxnetworks.de/doc/index.php/OpenDBX > > |
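A minimal sketch of that last piece of advice at the OpenDBX C level: issue the pragma once on the connection before the bulk insert. odbx_query(), odbx_result() and odbx_result_finish() are the regular OpenDBX calls; the choice of OFF rather than NORMAL and the helper names are assumptions, and the SQLite documentation linked above explains the durability trade-off.

    /* Relax SQLite's synchronous setting before a large batch of inserts.
     * Error handling is reduced to the bare minimum for brevity. */
    #include <opendbx/api.h>
    #include <string.h>

    static int exec_simple(odbx_t* handle, const char* sql)
    {
        odbx_result_t* result = NULL;
        int err;

        if (odbx_query(handle, sql, strlen(sql)) < 0)
            return -1;

        /* drain every result set produced by the statement */
        while ((err = odbx_result(handle, &result, NULL, 0)) != ODBX_RES_DONE) {
            if (err < 0)
                return err;
            odbx_result_finish(result);
        }
        return 0;
    }

    /* call once after connecting, before the 1000-row insert loop */
    int speed_up_sqlite_inserts(odbx_t* handle)
    {
        return exec_simple(handle, "PRAGMA synchronous = OFF");
    }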