From: <mar...@mh...> - 2013-02-15 00:22:18
Christoph Kottke writes:
> hi,
>
> i've installed the latest cvs revisions for libdbi and libdbi-driver and
> whenever i try to fetch more than 3 rows from a firebird table it ends in a segfault.
>
> but when i revert the "bull patch" in dbi_result.c it works like a charm.

Hi, thanks for the bug report. This one barely came in time to stop me from packaging the long-awaited 0.9 releases :-/ Unfortunately, I haven't been able to make firebird operational on my FreeBSD box for at least two years, so I cannot do any real debugging here. I plan to set up a Debian box shortly, which may be a better platform for this database engine. It took me a couple of hours to more or less understand what's going on in theory.

In any case, I've got a hunch. I assume the "bull patch" was correct, as it fixed a memory leak shown by valgrind without causing any problems with the drivers that I can test over here. What if the firebird driver just happened to work ok as long as libdbi prevented one particular border case from occurring, at the expense of some bits of leaked memory? In that case it would be wrong to revert the patch. Instead, someone in the know (Christian, please?) would have to review the result-fetching code of the firebird driver with an eye on numrows_matched. According to what I've seen in the driver code, firebird makes it particularly hard to get things right.

Also, Christoph, did you run make check? If that didn't crash, we need extra tests to discover problems like these.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
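For anyone trying to reproduce the report, a minimal libdbi client along these lines should exercise the same code path once more than three rows are fetched. The connection options and the table name are placeholders in this sketch; only the dbi_* calls are the regular public API.

    #include <stdio.h>
    #include <dbi/dbi.h>

    int main(void) {
        dbi_conn conn;
        dbi_result result;

        dbi_initialize(NULL);               /* default driver directory */
        conn = dbi_conn_new("firebird");

        /* placeholder connection settings -- adjust to your setup */
        dbi_conn_set_option(conn, "host", "localhost");
        dbi_conn_set_option(conn, "username", "sysdba");
        dbi_conn_set_option(conn, "password", "masterkey");
        dbi_conn_set_option(conn, "dbname", "test.fdb");

        if (dbi_conn_connect(conn) < 0) {
            fprintf(stderr, "could not connect\n");
            return 1;
        }

        /* any table holding more than three rows should do */
        result = dbi_conn_query(conn, "SELECT * FROM sometable");
        if (result) {
            while (dbi_result_next_row(result)) {   /* reported to segfault after row 3 */
                printf("row %llu\n", dbi_result_get_currow(result));
            }
            dbi_result_free(result);
        }

        dbi_conn_close(conn);
        dbi_shutdown();
        return 0;
    }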
From: Christoph K. <chr...@gm...> - 2013-02-14 11:47:35
hi,

i've installed the latest cvs revisions for libdbi and libdbi-driver and whenever i try to fetch more than 3 rows from a firebird table it ends in a segfault.

but when i revert the "bull patch" in dbi_result.c it works like a charm.

christoph
From: <mar...@mh...> - 2013-02-08 01:07:08
mar...@mh... writes:
> Hi,
>
> I've moved libdbi to 0.9.0 in CVS and I've hoped to get libdbi-drivers
> release-ready as well. I've installed PostgreSQL 9.2 today to see if
> things worked as smoothly as they did a while back. I had to learn
> that the test harness doesn't manage to get through the database
> encoding test. After disabling that one, I still get 47 failures,
> while both mysql and sqlite3 succeed without a hitch. Is anyone using
> PostgreSQL with libdbi, and if yes, which versions?

kinda talking to myself, but maybe someone's still listening... Turns out that there were both encoding and binary data issues. PostgreSQL >= 9.0 uses hex-encoded binary data which screwed up all tests related to BYTEA data. I had to rewrite the entire BYTEA decoding stuff in the pgsql driver to handle this new format. We're now back at 0 errors, but I can test this against PostgreSQL 9.2 only. Would someone be kind enough to test the code on older versions of this engine?

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
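Some background on the format change: starting with PostgreSQL 9.0 the default bytea_output is "hex", i.e. the literal "\x" followed by two hex digits per byte, instead of the old escape format. The function below is only an illustrative sketch of decoding that representation, not the code that actually went into the pgsql driver.

    #include <stdlib.h>
    #include <string.h>
    #include <ctype.h>

    /* Decode a PostgreSQL hex-format bytea literal such as "\x48656c6c6f".
       Returns a malloc'ed buffer and stores its length in *outlen, or NULL
       on malformed input. Error handling is deliberately minimal. */
    static unsigned char *decode_bytea_hex(const char *in, size_t *outlen) {
        size_t len, i;
        unsigned char *out;

        if (strncmp(in, "\\x", 2) != 0)
            return NULL;                    /* not the hex format */
        in += 2;
        len = strlen(in);
        if (len % 2)
            return NULL;                    /* must come in hex pairs */

        out = malloc(len / 2);
        if (!out)
            return NULL;

        for (i = 0; i < len; i += 2) {
            char byte[3] = { in[i], in[i + 1], '\0' };
            if (!isxdigit((unsigned char)in[i]) || !isxdigit((unsigned char)in[i + 1])) {
                free(out);
                return NULL;
            }
            out[i / 2] = (unsigned char)strtoul(byte, NULL, 16);
        }
        *outlen = len / 2;
        return out;
    }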
From: <mar...@mh...> - 2013-02-05 00:46:24
Hi,

I've moved libdbi to 0.9.0 in CVS and I've hoped to get libdbi-drivers release-ready as well. I've installed PostgreSQL 9.2 today to see if things worked as smoothly as they did a while back. I had to learn that the test harness doesn't manage to get through the database encoding test. After disabling that one, I still get 47 failures, while both mysql and sqlite3 succeed without a hitch. Is anyone using PostgreSQL with libdbi, and if yes, which versions?

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Toby T. <to...@te...> - 2013-01-27 02:40:44
On 26/01/13 8:30 PM, mar...@mh... wrote:
> Toby Thain writes:
> > Yes, you just found out the hard way that it's wise to include ENGINE
> > specifications in MySQL CREATE TABLE's.
>
> True indeed.

Actually I think the news is even worse: I seem to recall that MySQL can even *ignore* ENGINE specifications if the engine isn't installed, leaving you, again, with MyISAM-the-inadequate. More silent fail. I think I've wasted some time on that one in the past.

--Toby

> regards,
> Markus
From: <mar...@mh...> - 2013-01-27 01:29:43
Toby Thain writes:
> Yes, you just found out the hard way that it's wise to include ENGINE
> specifications in MySQL CREATE TABLE's.

True indeed. The problem with MySQL's storage engines is that the transaction_support and savepoint_support driver capabilities cannot be relied upon because they depend on a couple of factors - how the database server was compiled and configured, how particular tables were created and so on. The driver caps may get things wrong in quite a few cases.

BTW I've finished updating the test kit which should now cover all aspects of transactions and savepoints as well. I'd greatly appreciate if everyone could give the current cvs revisions a try and report success and failures.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
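Tying the two observations in this thread together: if your schema statements go through libdbi anyway, spelling out the engine and then reading back what the server actually used guards against both the old MyISAM default and the silent fallback Toby mentions. A sketch, with a made-up table name; the dbi_* calls are the regular libdbi API and SHOW TABLE STATUS is plain MySQL.

    #include <stdio.h>
    #include <string.h>
    #include <dbi/dbi.h>

    /* Create a table with an explicit engine and warn if MySQL silently
       substituted something else. "tx_test" is just an example name. */
    static void create_innodb_table(dbi_conn conn) {
        dbi_result result;

        result = dbi_conn_query(conn,
            "CREATE TABLE tx_test (id INT PRIMARY KEY, payload TEXT) ENGINE=InnoDB");
        if (result)
            dbi_result_free(result);

        /* MySQL may fall back to another engine (e.g. MyISAM) if InnoDB is
           not available, so read back what we actually got. */
        result = dbi_conn_query(conn, "SHOW TABLE STATUS LIKE 'tx_test'");
        if (result && dbi_result_next_row(result)) {
            const char *engine = dbi_result_get_string(result, "Engine");
            if (!engine || strcmp(engine, "InnoDB") != 0)
                fprintf(stderr, "warning: tx_test uses engine %s; transactions won't work\n",
                        engine ? engine : "(unknown)");
        }
        if (result)
            dbi_result_free(result);
    }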
From: Toby T. <to...@te...> - 2013-01-26 04:08:20
On 25/01/13 7:40 PM, mar...@mh... wrote:
> Markus Hoenicka writes:
> > Another possible reason for problems is the table type. If someone
> > uses MyISAM tables, the result I got would be expected. However, MySQL
> > uses InnoDB tables as default these days, and I double-checked that
> > they were used in my tests.
>
> Turns out that InnoDB is the default engine only in versions 5.5.5 and
> later. On my box, MyISAM tables were created by default. The test

Yes, you just found out the hard way that it's wise to include ENGINE specifications in MySQL CREATE TABLE's. They took far too long to change the default engine to InnoDB.

--Toby

> correctly found out that the latter don't support transactions :-(
>
> I've changed the CREATE TABLE statement on my box to ask for InnoDB
> tables instead. This eliminates the failure of the rollback test.
>
> I'll try and finish the test suite asap. The transaction tests require
> some polish as they should run only if the driver supports
> transactions.
>
> regards,
> Markus
From: <mar...@mh...> - 2013-01-26 00:44:35
Markus Hoenicka writes:
> Another possible reason for problems is the table type. If someone
> uses MyISAM tables, the result I got would be expected. However, MySQL
> uses InnoDB tables as default these days, and I double-checked that
> they were used in my tests.

Turns out that InnoDB is the default engine only in versions 5.5.5 and later. On my box, MyISAM tables were created by default. The test correctly found out that the latter don't support transactions :-(

I've changed the CREATE TABLE statement on my box to ask for InnoDB tables instead. This eliminates the failure of the rollback test.

I'll try and finish the test suite asap. The transaction tests require some polish as they should run only if the driver supports transactions.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Markus H. <mar...@mh...> - 2013-01-25 15:04:48
Rainer Gerhards <rge...@hq...> was heard to say:
> we have given it a try today and things look pretty good :-).

Sounds good :-)

> Unfortunately, we can reproduce the problem with MySQL. I barely
> remember that MySQL by default has implicit commits enabled, what
> needs to be turned off if you need real ones.

As Olivier already mentioned, any explicit START TRANSACTION overrides autocommit, so this shouldn't interfere here. Even if an application using libdbi turns autocommit off or on deliberately, START TRANSACTION should still work as expected.

Another possible reason for problems is the table type. If someone uses MyISAM tables, the result I got would be expected. However, MySQL uses InnoDB tables as default these days, and I double-checked that they were used in my tests.

I'll try to test the pgsql driver on the weekend to see if that causes problems too. I'll also check the MySQL logs to see if I find something weird, and I'll run some transactions in MySQL's plain ol' command line interface just to make sure it isn't MySQL playing tricks on us.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Olivier D. <web...@aj...> - 2013-01-25 14:48:43
Hello,

> we have given it a try today and things look pretty good :-).
> Unfortunately, we can reproduce the problem with MySQL. I barely
> remember that MySQL by default has implicit commits enabled, what
> needs to be turned off if you need real ones.

By default, MySQL has autocommit enabled. This is disabled automatically when you issue a "START TRANSACTION" statement (or you can change the default behaviour in the MySQL config file). See http://dev.mysql.com/doc/refman/5.5/en/commit.html

Olivier
From: Rainer G. <rge...@hq...> - 2013-01-25 14:36:21
> I've checked in new versions of libdbi/src/dbi_main.c containing the
> implementations of the transaction functions and libdbi-drivers/tests/test_dbi.c
> with a first shot at the required test functions.
> MySQL seems to fail to rollback transactions on my box whereas SQLite3
> succeeds. I'm too tired now to track this down, maybe someone else can
> have a look at this too. I hope I'll get back to this on the weekend.

Hi Markus,

we have given it a try today and things look pretty good :-). Unfortunately, we can reproduce the problem with MySQL. I barely remember that MySQL by default has implicit commits enabled, what needs to be turned off if you need real ones. At least I have checked the tx support in our native mysql driver that Ulrike did and there is this call. Maybe it's useful for you:

http://git.adiscon.com/?p=rsyslog.git;a=blob;f=plugins/ommysql/ommysql.c;h=2dfa29de74bd4b172ea6073bb970759753ace491;hb=HEAD#l223

Thanks again for your help!
Rainer
From: <mar...@mh...> - 2013-01-25 00:57:56
Rainer Gerhards writes:
> > Zitat von Rainer Gerhards <rge...@hq...>:
> >
> > > sorry for the long silence, we got sidetracked ourselfs. Finally, we
> > > yesterday tried to write some test programs to get started. To do so,
> > > I did a cvs checkout for both libdbi and libdbi-drivers (according to
> > > instructions on the site). I can see the new transaction functions
> > > inside the sgml files as well as the headers.
> > > However, I do not find any implementation (.c files).
> > >
> > > Am I overlooking something?
> >
> > No. Apparently my bad. I'll fix that tonight.
>
> Thanks a lot!
> Rainer

I've checked in new versions of libdbi/src/dbi_main.c containing the implementations of the transaction functions and libdbi-drivers/tests/test_dbi.c with a first shot at the required test functions. MySQL seems to fail to rollback transactions on my box whereas SQLite3 succeeds. I'm too tired now to track this down, maybe someone else can have a look at this too. I hope I'll get back to this on the weekend.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
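For anyone who wants to poke at the new code outside the bundled test suite, a call pattern along these lines should exercise the rollback path. The function names below (dbi_conn_transaction_begin/rollback) follow the transaction API added for 0.9 as discussed in this thread - double-check them against the dbi.h in your checkout - and the table and SQL are placeholders; the sketch assumes an InnoDB table named tx_test already exists.

    #include <dbi/dbi.h>

    /* Returns 0 if a rolled-back INSERT really is gone, -1 otherwise.
       Sketch only, with minimal error checking. */
    static int check_rollback(dbi_conn conn) {
        dbi_result result;
        long long rows = -1;

        dbi_conn_transaction_begin(conn);
        result = dbi_conn_query(conn,
            "INSERT INTO tx_test (id, payload) VALUES (1, 'should vanish')");
        if (result)
            dbi_result_free(result);
        dbi_conn_transaction_rollback(conn);

        /* If the engine honours transactions, the row must be gone. */
        result = dbi_conn_query(conn, "SELECT COUNT(*) AS n FROM tx_test WHERE id = 1");
        if (result && dbi_result_next_row(result)) {
            rows = dbi_result_get_longlong(result, "n");
            dbi_result_free(result);
        }
        return rows == 0 ? 0 : -1;
    }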
From: Rainer G. <rge...@hq...> - 2013-01-24 08:13:42
> Rainer Gerhards writes:
> > > This should be easy to retrofit. I'll see if I find
> > > some time but feel free to beat me at it.
> >
> > I am quite busy myself at the moment, but I could try and see if I
> > could craft something along that path...
>
> Hi all,
>
> I've stolen some time from myself to provide a first shot at transaction and
> savepoint support, see the current cvs revisions of libdbi and libdbi-drivers.
> The code is entirely untested except that the drivers which I use myself
> compile and don't crash upon loading. I didn't get round to adding the
> documentation and the tests, but feel free to test the current code yourself.
> Usage should be pretty obvious if you look at the diffs. I'm sure some rough
> edges remain, but then... it's a start.

Hi Markus,

sorry for the long silence, we got sidetracked ourselfs. Finally, we yesterday tried to write some test programs to get started. To do so, I did a cvs checkout for both libdbi and libdbi-drivers (according to instructions on the site). I can see the new transaction functions inside the sgml files as well as the headers. However, I do not find any implementation (.c files).

Am I overlooking something?

Rainer
From: Rainer G. <rge...@hq...> - 2013-01-24 08:13:35
> Zitat von Rainer Gerhards <rge...@hq...>:
>
> > sorry for the long silence, we got sidetracked ourselfs. Finally, we
> > yesterday tried to write some test programs to get started. To do so,
> > I did a cvs checkout for both libdbi and libdbi-drivers (according to
> > instructions on the site). I can see the new transaction functions
> > inside the sgml files as well as the headers.
> > However, I do not find any implementation (.c files).
> >
> > Am I overlooking something?
>
> No. Apparently my bad. I'll fix that tonight.

Thanks a lot!
Rainer
From: Markus H. <mar...@mh...> - 2013-01-24 08:09:39
Zitat von Rainer Gerhards <rge...@hq...>:

> sorry for the long silence, we got sidetracked ourselfs. Finally, we
> yesterday tried to write some test programs to get started. To do
> so, I did a cvs checkout for both libdbi and libdbi-drivers
> (according to instructions on the site). I can see the new
> transaction functions inside the sgml files as well as the headers.
> However, I do not find any implementation (.c files).
>
> Am I overlooking something?

No. Apparently my bad. I'll fix that tonight.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: <mar...@mh...> - 2013-01-09 21:39:35
Olivier Doucet writes:
> Performance are now correct :) I checked with valgrind / callgrind,
> and function mysql_data_seek is not called anymore.
>
> You can now patch other drivers, and maybe release a new version ?

I've updated the remaining drivers and the driver documentation. Did anyone else have a chance to test the code lately?

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Markus H. <mar...@mh...> - 2013-01-09 16:07:57
Olivier Doucet <web...@aj...> was heard to say:
> > Do you have any numbers? Would be nice to know what speed gain this
> > (almost) one-line optimization effected.
>
> I've done a benchmark with rrdtool that uses libdbi.
> http://tof.canardpc.com/view/31953cab-4329-45e9-9e30-eed2d9148ee1.jpg
> max value on X-Axis is equivalent to ~ 35k rows retrieved from MySQL.
>
> Quite impressive, right ? ;)

Looks like my 30 min effort was well spent then. Thanks for kicking my butt to get this done.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Olivier D. <web...@aj...> - 2013-01-09 15:35:50
> libdbi-drivers uses the mysql_config script shipped with MySQL to
> obtain the list of required libraries to link against, *unless* you
> use the --with-mysql-libdir configure switch.

Yes I was using this configure switch. I copied configure line from Red Hat SRPM.

> Do you have any numbers? Would be nice to know what speed gain this
> (almost) one-line optimization effected.

I've done a benchmark with rrdtool that uses libdbi.
http://tof.canardpc.com/view/31953cab-4329-45e9-9e30-eed2d9148ee1.jpg
max value on X-Axis is equivalent to ~ 35k rows retrieved from MySQL.

Quite impressive, right ? ;)

Olivier
From: Markus H. <mar...@mh...> - 2013-01-09 15:22:30
Olivier Doucet <web...@aj...> was heard to say:

Hi,

> I finally succeeded in compiling both component for my system (I added
> flag -lpthread when both linking / compiling or else I have following
> error : '/usr/lib64/dbd/libdbdmysql.so: undefined symbol:
> pthread_mutex_trylock').

libdbi-drivers uses the mysql_config script shipped with MySQL to obtain the list of required libraries to link against, *unless* you use the --with-mysql-libdir configure switch. Please try and see if libdbi-drivers compiles fine without that switch. Otherwise try manually what "mysql_config --libs" reports.

> Performance are now correct :) I checked with valgrind / callgrind,
> and function mysql_data_seek is not called anymore.

Do you have any numbers? Would be nice to know what speed gain this (almost) one-line optimization effected.

> You can now patch other drivers, and maybe release a new version ?
> Thank you !

I think I'll be able to finish migrating the drivers tonight as this requires changes in just one line per driver. New version is a different issue as I have to update the documentation and make sure everything is ready for prime time. It would be great if we could find a couple of kind souls who test the current CVS revisions on as many platforms as possible to find showstoppers, if any.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Olivier D. <web...@aj...> - 2013-01-09 14:02:00
Hello,

> I've implemented the suggested changes and checked in the updated
> files. If you want to speed test the code, please check out, build,
> and install the current cvs revisions of both libdbi and
> libdbi-drivers.

I finally succeeded in compiling both component for my system (I added flag -lpthread when both linking / compiling or else I have following error : '/usr/lib64/dbd/libdbdmysql.so: undefined symbol: pthread_mutex_trylock').

Performance are now correct :) I checked with valgrind / callgrind, and function mysql_data_seek is not called anymore.

You can now patch other drivers, and maybe release a new version ? Thank you !

Olivier
From: <mar...@mh...> - 2013-01-09 00:03:38
Markus Hoenicka writes:
> Hi,
>
> I'll have to read the code again a little more thoroughly, but to the
> best of my knowledge libdbi emulates MySQL's approach to retrieving
> rows from result sets. In order to walk through the rows of e.g. a
> PostgreSQL result set you have to retrieve the rows by index
> sequentially, so libdbi has to maintain an internal pointer anyway. We
> do not have to add one, so there are no extra changes. Also, libdbi
> internally already mixes the cursor style and next_row style calls,
> because we have to cater for database engines which use either of
> these methods without exposing these differences to the libdbi user.
>
> As for the API change, we have extensive driver API changes between
> 0.8.x and the upcoming(TM) 0.9 release anyway, think of the recent
> addition of the transaction stuff. You won't be able to keep your
> 0.8.x drivers once you switch to libdbi 0.9. You'll probably notice
> problems only if you build from cvs regularly (and only if you update
> one but not the other), but I expect those users to know what they're
> doing.

Hi,

I've implemented the suggested changes and checked in the updated files. If you want to speed test the code, please check out, build, and install the current cvs revisions of both libdbi and libdbi-drivers. I've updated the mysql, pgsql, and sqlite3 drivers at this time, but making the remaining drivers compile is trivial.

I've added the suggested check to speed up sequential row fetching in mysql. I do see a small but significant decrease in the time required to run gmake check (11.78 vs. 12.51 s), although the test code does not test retrieving boatloads of rows specifically. I'd appreciate if someone with a nice testcase (Olivier?) could give the changes a try and report some numbers.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
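The "suggested check" boils down to handing the driver both the requested row index and the row libdbi is currently positioned on (the dbd_goto_row() change described further down this thread), so the MySQL driver can skip mysql_data_seek() when the request is simply "the next row". The sketch below illustrates the idea only; the actual dbd_goto_row() in dbd_mysql.c differs in its details, and the result_handle field is used here for illustration.

    #include <mysql.h>
    #include <dbi/dbd.h>   /* driver-side libdbi header */

    /* rowidx is the requested row, currowidx the row libdbi is currently
       positioned on. Illustrative sketch, not the committed code. */
    static int dbd_goto_row(dbi_result_t *result, unsigned long long rowidx,
                            unsigned long long currowidx) {
        MYSQL_RES *res = (MYSQL_RES *)result->result_handle;

        /* Sequential access (the dbi_result_next_row() case): the next
           mysql_fetch_row() already returns the right row, nothing to seek. */
        if (rowidx == currowidx + 1)
            return 1;

        /* Random access: fall back to the slow explicit seek. */
        mysql_data_seek(res, rowidx);
        return 1;
    }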
From: Markus H. <mar...@mh...> - 2013-01-08 14:38:34
Mike Rylander <mry...@gm...> was heard to say:
> Markus,
>
> Would it be worth the ugliness of a module-global variable to bury the
> value of the row index used in the previous call to dbd_mysql.c::dbd_goto_row()
> inside the MySQL driver itself, side-stepping the function signature change
> for other drivers? The benefit of avoiding the API change may not be
> outweighed by the potential fragility of naively tracking the state
> internally, of course, since in practice folks install and upgrade to new
> versions of drivers and the libdbi core at the same time, but then again,
> it may. I'm a little nervous about the potential for problems when mixing
> direct goto_row() (cursor style) and next_row() calls, but I haven't looked
> at the code to see if there's actually an issue there...

Hi,

I'll have to read the code again a little more thoroughly, but to the best of my knowledge libdbi emulates MySQL's approach to retrieving rows from result sets. In order to walk through the rows of e.g. a PostgreSQL result set you have to retrieve the rows by index sequentially, so libdbi has to maintain an internal pointer anyway. We do not have to add one, so there are no extra changes. Also, libdbi internally already mixes the cursor style and next_row style calls, because we have to cater for database engines which use either of these methods without exposing these differences to the libdbi user.

As for the API change, we have extensive driver API changes between 0.8.x and the upcoming(TM) 0.9 release anyway, think of the recent addition of the transaction stuff. You won't be able to keep your 0.8.x drivers once you switch to libdbi 0.9. You'll probably notice problems only if you build from cvs regularly (and only if you update one but not the other), but I expect those users to know what they're doing.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Mike R. <mry...@gm...> - 2013-01-08 13:53:56
Markus,

Would it be worth the ugliness of a module-global variable to bury the value of the row index used in the previous call to dbd_mysql.c::dbd_goto_row() inside the MySQL driver itself, side-stepping the function signature change for other drivers? The benefit of avoiding the API change may not be outweighed by the potential fragility of naively tracking the state internally, of course, since in practice folks install and upgrade to new versions of drivers and the libdbi core at the same time, but then again, it may. I'm a little nervous about the potential for problems when mixing direct goto_row() (cursor style) and next_row() calls, but I haven't looked at the code to see if there's actually an issue there...

--miker

On Tue, Jan 8, 2013 at 4:46 AM, Markus Hoenicka <mar...@mh...> wrote:

> Olivier Doucet <web...@aj...> was heard to say:
>
> > Hello everyone,
> >
> > I'm following a quite old topic about libdbi speed issues.
> > I was able to track the cause of these issues: the major problem is
> > how libdbi goes from one row to another.
> >
> > RRDTool (the tool that used libdbi and that I was inspecting) is using
> > the dbi_result_next_row() function (as stated in the libdbi documentation
> > btw).
> >
> > This function moves from one row to another with function
> > dbi_result_seek_row(), incrementing the currentRow index each time. This
> > gives a call to dbd_mysql.c::dbd_goto_row() that uses
> > mysql_data_seek() each time...
> >
> > That's why for a query result of 34k rows (yes it happens. No it is
> > not a problem in the query itself), we have tens of thousands of calls
> > to this function (which is very slow), and this is definitely not
> > needed, because as we use fetch_row(), we automatically move from one
> > row to another. Seeking is just a useless task (as the internal driver
> > does not know where we are, and needs to start from row 0 and seek to
> > the given row - where we already were).
> >
> > I'm absolutely not a libdbi user, and I don't know what could be done
> > outside libdbi to not use dbi_result_next_row() and use
> > RESULT->conn->driver->functions->fetch_row() directly. Is it possible?
> >
> > And/or patching dbi_result.c:
> > just check RESULT->currowidx near line 102 before calling the
> > goto_row() function and call it only if we are not on the good row. Am
> > I right?
>
> Hi,
>
> your analysis is pretty much correct. If you look at the comments in
> dbd_mysql.c::dbd_goto_row(), the original author of the mysql driver
> was well aware of the limitations of his implementation. The reason is
> that other database APIs, e.g. PostgreSQL, allow to fetch rows from a
> result set by index, whereas the MySQL API assumes that you step
> through the rows sequentially. The original design of libdbi appears
> to somewhat favor PostgreSQL in this respect.
>
> Anyway, without having thought about the issue in too much detail, one
> possible solution comes to mind. We could modify the driver function
> dbd_goto_row() by passing both the wanted row index rowidx and the
> current row index currowidx (which libdbi keeps track of anyway). This
> would allow drivers to decide whether they have to actually seek the
> position. pgsql doesn't have to anyway, and mysql doesn't have to if
> rowidx = currowidx+1. This API change would not mandate changes to
> existing drivers as they may ignore the additional parameter and keep
> working as before, but it may offer options to speed up queries in
> some drivers.
>
> regards,
> Markus
>
> --
> Markus Hoenicka
> http://www.mhoenicka.de
> AQ score 38

--
Mike Rylander
| Director of Research and Development
| Equinox Software, Inc. / Your Library's Guide to Open Source
| phone: 1-877-OPEN-ILS (673-6457)
| email: mi...@es...
| web: http://www.esilibrary.com
From: Markus H. <mar...@mh...> - 2013-01-08 11:23:27
Olivier Doucet <web...@aj...> was heard to say:
> Hi Markus,
>
> 2013/1/8 Markus Hoenicka <mar...@mh...>:
> > We could modify the driver function
> > dbd_goto_row() by passing both the wanted row index rowidx and the
> > current row index currowidx (which libdbi keeps track of anyway).
>
> This is one way to fix the problem, I agree. Unfortunately my level in
> C is too low to make such huge changes without breaking everything
> else :) Anyone willing to create the patch for this ?

I'll commit these changes asap (may take a day or two). Can you build libdbi and libdbi-drivers from the cvs sources? You seem to have a good test case to see if these changes help.

> Is there a way, outside libdbi, to fix this problem ? For example, go
> over dbd_goto_row() and call fetch_row() directly ? Or maybe the
> behaviour is different between database engines ?

I'm afraid you can't do that except if you bypass the abstraction layer altogether and use libmysqlclient natively.

regards,
Markus

--
Markus Hoenicka
http://www.mhoenicka.de
AQ score 38
From: Olivier D. <web...@aj...> - 2013-01-08 10:47:20
Hi Markus,

2013/1/8 Markus Hoenicka <mar...@mh...>:
> We could modify the driver function
> dbd_goto_row() by passing both the wanted row index rowidx and the
> current row index currowidx (which libdbi keeps track of anyway).

This is one way to fix the problem, I agree. Unfortunately my level in C is too low to make such huge changes without breaking everything else :) Anyone willing to create the patch for this ?

Is there a way, outside libdbi, to fix this problem ? For example, go over dbd_goto_row() and call fetch_row() directly ? Or maybe the behaviour is different between database engines ?

Olivier