| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2002 |     | 11  | 3   | 13  | 10  | 6   | 13  | 11  | 12  | 8   |     | 4   |
| 2003 | 17  | 10  | 7   |     | 32  | 5   | 10  | 5   | 3   | 1   |     |     |
| 2004 |     | 2   | 1   | 6   |     |     |     |     |     |     |     |     |
| 2005 |     | 1   | 2   | 1   | 2   | 2   | 4   | 2   | 3   | 5   | 1   |     |
| 2006 |     |     |     |     |     |     |     |     |     |     |     | 3   |
| 2007 | 2   | 2   |     |     | 1   | 1   | 2   | 1   |     |     | 1   |     |
| 2008 |     |     |     |     |     |     | 1   | 1   | 8   | 11  | 26  | 28  |
| 2009 | 6   | 10  | 15  | 16  | 36  | 18  | 30  | 6   | 1   |     |     | 13  |
| 2010 | 8   | 5   | 4   | 1   | 1   | 4   | 3   | 2   | 1   | 1   |     |     |
| 2014 |     |     |     |     |     |     |     |     |     | 1   |     |     |
| 2015 |     |     |     |     |     |     |     |     |     | 1   |     |     |
| 2016 |     |     |     |     |     |     |     | 1   |     |     |     |     |
From: Matthew B. <ma...@by...> - 2002-07-05 11:38:55

Hello; I've just noticed an undesirable interaction between multithreaded Ruby applications and the MySQL C API (if not other DB APIs). My web spider application was running along fine, but after an unpredictable number of minutes or hours it would suddenly stop dead. I think the problem is that I use a "lock tables" statement which, if used with Ruby threads, blocks the whole Ruby process until the table is unlocked. The trouble is that if the 'thread' holding the lock is an internal Ruby thread in the same process, it will never get run to unlock it :-/

Is there a general way around this problem with the mysql_ API? My impression is no, since I can't find any way of telling MySQL to use non-blocking semantics through its C API. Anyhow, even if we leave it like this, it's probably worth adding a note about this pitfall to the docs.

cheers,

--
Matthew Bloch, Bytemark Computer Consulting
http://www.bytemark.co.uk/
tel. +44 (0) 8707 455026
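The failure mode described above is easy to picture: Ruby 1.6 threads are green (in-process) threads, so a blocking call inside the MySQL client library stops every thread, including the one that would release the lock. Below is a minimal sketch of the pitfall, assuming a local MySQL server, a database "test" with an "items" table, and made-up credentials; it is illustrative only, not code from the thread.

```ruby
# Sketch of the deadlock: with green Ruby threads, the blocking
# mysql_query() call made by `waiter` stops the whole interpreter,
# so `holder` never gets scheduled again to run UNLOCK TABLES.
require "dbi"

holder = Thread.new do
  dbh = DBI.connect("dbi:Mysql:test:localhost", "user", "pass")
  dbh.do("LOCK TABLES items WRITE")
  sleep 1                      # pretend to work while holding the lock
  dbh.do("UNLOCK TABLES")      # never reached once the process blocks
  dbh.disconnect
end

waiter = Thread.new do
  dbh = DBI.connect("dbi:Mysql:test:localhost", "user", "pass")
  dbh.select_one("SELECT COUNT(*) FROM items")  # blocks inside the C client
  dbh.disconnect
end

[holder, waiter].each { |t| t.join }
```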
From: Michael N. <uu...@rz...> - 2002-07-03 20:28:23

On Wed, Jul 03, 2002 at 03:59:11PM +0100, Stephen Davies wrote:
>
> Hi,
>
> Thanks for the Ruby DBI/DBD drivers.
>
> I attach a patch for ruby-dbi which adds fetch_scroll support for the
> Postgres DBD. The patch is against the current CVS.
> I think this is a useful addition to be applied in the CVS.

Yes, indeed. Thank you very much for it. It's in Ruby/DBI 0.0.16 which I released two seconds ago.

Regards,

Michael
From: Michael N. <uu...@rz...> - 2002-07-03 19:38:23

On Mon, Jul 01, 2002 at 08:34:26AM -0500, Dave Thomas wrote:
> Michael Neumann <uu...@rz...> writes:
>
> > On Sun, Jun 30, 2002 at 10:02:00PM -0500, Dave Thomas wrote:
> > >
> > > Somewhat lamely I posted the lack of a row count with Postgres #do as
> > > a bug, then went back through the mailing list archives to see that it
> > > had been discussed back in April. However, I didn't see a resolution.
> > >
> > > Is this something that can be addressed? If I had a vote, I'd go with
> > > the option that assumed a #do was used with a non-SELECT statement.
> >
> > Please correct me if I'm wrong or I misunderstood something.
> >
> > Method #do returns PGresult#num_tuples, i.e. the number of tuples in the query result.
> > But that's not what #do in other DBDs returns; num_tuples != rows processed count.
> > Furthermore #do is usually not called with a SELECT statement, so I don't think that
> > anyone is depending on the return value of method #do.
> >
> > Statement#rows also returns PGresult#num_tuples, but should return
> > the rows processed count instead.
>
> I'm suggesting changing #do to return the number of rows processed
> by non-SELECT statements. I think this would be consistent with the
> intent.

Yes, of course.

> If you need the number of rows returned by a select, there are (many)
> other ways to do it. But right now there's no way to find the number
> affected by an update or a delete.
>
> I'd rather not see a driver-specific method: this is a pretty
> fundamental feature, and making it non-portable would be a shame.

I've now changed method Statement#rows (which is called and returned by Database#do) to return the rows processed count (affected rows). The old behaviour is still available through method Statement#['pg_row_count']. Both changes are in -current (as well as the license change from GNU GPL to BSD).

Regards,

Michael
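To make the new behaviour concrete, here is a short usage sketch with the Postgres DBD. The DSN, credentials and the "users" table are made up; only the #do return value and the driver-specific 'pg_row_count' attribute come from the change described above.

```ruby
# Database#do (and Statement#rows) now report the rows processed count for
# non-SELECT statements; the old "rows in the result" value stays reachable
# through the Pg-specific attribute 'pg_row_count'.
require "dbi"

DBI.connect("dbi:Pg:test:localhost", "user", "pass") do |dbh|
  affected = dbh.do("UPDATE users SET active = 'f' WHERE last_login < '2001-01-01'")
  puts "rows affected: #{affected}"

  sth = dbh.prepare("SELECT * FROM users WHERE active = 't'")
  sth.execute
  puts "rows in result (driver-specific): #{sth['pg_row_count']}"
  sth.finish
end
```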
From: Stephen D. <st...@da...> - 2002-07-03 14:59:19

Hi,

Thanks for the Ruby DBI/DBD drivers.

I attach a patch for ruby-dbi which adds fetch_scroll support for the Postgres DBD. The patch is against the current CVS. I think this is a useful addition to be applied in the CVS.

I haven't joined the list at the moment, so please copy me on replies.

Regards,

Steve Davies
From: Dave T. <Da...@Pr...> - 2002-07-01 13:34:28

Michael Neumann <uu...@rz...> writes:

> On Sun, Jun 30, 2002 at 10:02:00PM -0500, Dave Thomas wrote:
> >
> > Somewhat lamely I posted the lack of a row count with Postgres #do as
> > a bug, then went back through the mailing list archives to see that it
> > had been discussed back in April. However, I didn't see a resolution.
> >
> > Is this something that can be addressed? If I had a vote, I'd go with
> > the option that assumed a #do was used with a non-SELECT statement.
>
> Please correct me if I'm wrong or I misunderstood something.
>
> Method #do returns PGresult#num_tuples, i.e. the number of tuples in the query result.
> But that's not what #do in other DBDs returns; num_tuples != rows processed count.
> Furthermore #do is usually not called with a SELECT statement, so I don't think that
> anyone is depending on the return value of method #do.
>
> Statement#rows also returns PGresult#num_tuples, but should return
> the rows processed count instead.

I'm suggesting changing #do to return the number of rows processed by non-SELECT statements. I think this would be consistent with the intent.

If you need the number of rows returned by a select, there are (many) other ways to do it. But right now there's no way to find the number affected by an update or a delete.

I'd rather not see a driver-specific method: this is a pretty fundamental feature, and making it non-portable would be a shame.

Cheers

Dave
From: Michael N. <uu...@rz...> - 2002-07-01 12:34:05

On Sun, Jun 30, 2002 at 10:02:00PM -0500, Dave Thomas wrote:
>
> Somewhat lamely I posted the lack of a row count with Postgres #do as
> a bug, then went back through the mailing list archives to see that it
> had been discussed back in April. However, I didn't see a resolution.
>
> Is this something that can be addressed? If I had a vote, I'd go with
> the option that assumed a #do was used with a non-SELECT statement.

Please correct me if I'm wrong or I misunderstood something.

Method #do returns PGresult#num_tuples, i.e. the number of tuples in the query result. But that's not what #do in other DBDs returns; num_tuples != rows processed count. Furthermore, #do is usually not called with a SELECT statement, so I don't think that anyone is depending on the return value of method #do.

Statement#rows also returns PGresult#num_tuples, but should return the rows processed count instead.

So what do you (and others) suggest?

- Add a method to return the number of rows in a query result (that is what #rows currently returns)
- Add a method to return the rows processed count

For the time being, I would leave method #rows as is (we can correct its behaviour later) and add a method which returns the same value as #rows currently returns, i.e. the number of rows in a query result. Would it suffice to make it a driver-specific function (e.g. sth["dbd_rows_in_result"] or sth.func("rows_in_result")), as not all databases support this? This would be a temporary solution until the DBD/DBI specs get updated.

> Right now, this is something of a problem for me, as I need to check
> the count following an update to ensure integrity of a long-running
> transaction.

Regards,

Michael
From: Dave T. <Da...@Pr...> - 2002-07-01 03:01:56

Somewhat lamely, I posted the lack of a row count with Postgres #do as a bug, then went back through the mailing list archives to see that it had been discussed back in April. However, I didn't see a resolution.

Is this something that can be addressed? If I had a vote, I'd go with the option that assumed a #do was used with a non-SELECT statement.

Right now, this is something of a problem for me, as I need to check the count following an update to ensure the integrity of a long-running transaction.

Cheers

Dave
From: Paul D. <pa...@sn...> - 2002-06-30 21:01:55

At 22:06 +0100 6/30/02, Matthew Bloch wrote:
> Hello: using Ruby 1.6.7 with mysql-ruby 2.4.2a and ruby-dbi 0.14, am I safe
> trying to use the same database handle from multiple Ruby threads? Only I'm
> getting a "Commands out of sync" error from MySQL, detailed here:
>
>   http://www.mysql.com/doc/C/o/Commands_out_of_sync.html
>
> and wondered whether particular sequences of commands etc. would be unsafe.
> For instance, if I use a select_all statement with a supplied block, can I
> still issue further queries inside the block? I will delve into the source
> tonight, but I assume this is a bug, or at least there is some undocumented
> 'unsafe' use of the mysql module?

This is probably related to the use of the mysql_use_result attribute rather than mysql_store_result. Does that provide any clues as to why you're seeing these problems?

> cheers,
>
> --
> Matthew Bloch, Bytemark Computer Consulting
> http://www.bytemark.co.uk/
> tel. +44 (0) 8707 455026
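For readers unfamiliar with the distinction Paul is pointing at, here is a rough sketch using the low-level mysql-ruby API (host, user, password and database are hypothetical). With mysql_store_result the whole result set is pulled to the client before the next query; with mysql_use_result the rows stay on the server, and issuing another query before the pending result has been drained and freed is exactly what triggers "Commands out of sync".

```ruby
require "mysql"

my = Mysql.real_connect("localhost", "user", "pass", "test")
my.query_with_result = false

my.query("SELECT id FROM items")
res = my.use_result              # rows are streamed; the connection stays busy
first = res.fetch_row
# my.query("SELECT 1")           # doing this here would raise
                                 # "Commands out of sync; you can't run this command now"
res.free                         # drain/free the pending result first...
my.query("SELECT 1")             # ...then the connection can be reused
my.store_result.free
my.close
```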
From: Matthew B. <ma...@by...> - 2002-06-30 20:51:44

Hello: using Ruby 1.6.7 with mysql-ruby 2.4.2a and ruby-dbi 0.14, am I safe trying to use the same database handle from multiple Ruby threads? I'm getting a "Commands out of sync" error from MySQL, detailed here:

  http://www.mysql.com/doc/C/o/Commands_out_of_sync.html

and wondered whether particular sequences of commands would be unsafe. For instance, if I use a select_all statement with a supplied block, can I still issue further queries inside the block? I will delve into the source tonight, but I assume this is a bug, or at least there is some undocumented 'unsafe' use of the mysql module?

cheers,

--
Matthew Bloch, Bytemark Computer Consulting
http://www.bytemark.co.uk/
tel. +44 (0) 8707 455026
From: Michael N. <uu...@rz...> - 2002-06-17 12:30:58

Hi,

Are there any reasons why Ruby-DBI should not be put under the BSD license? Currently it's under a dual GPL/Ruby license. If nobody has arguments against using a BSD license, I'll make the transition starting with the next version.

Regards,

Michael

--
Michael Neumann *** eMail uu...@rz...
From: Michael N. <uu...@rz...> - 2002-06-13 15:53:28

On Thu, Jun 13, 2002 at 04:32:00AM -0700, Sean Chittenden wrote:
> How is it possible to store binary data in PostgreSQL via DBI? When I
> insert binary data (ex: Marshal.dump({"uga"=>"booga"})) into the
> database, it escapes the data improperly. I'm storing the data in a
> bytea column type.

Below is a fix for bytea values. It's in -current. Note that you still have to encode the bytea values; only decoding works automatically.

Regards,

Michael

    Index: Pg.rb
    ===================================================================
    RCS file: /cvsroot/ruby-dbi/src/lib/dbd_pg/Pg.rb,v
    retrieving revision 1.18
    diff -c -r1.18 Pg.rb
    *** Pg.rb	17 Apr 2002 13:38:38 -0000	1.18
    --- Pg.rb	13 Jun 2002 15:43:52 -0000
    ***************
    *** 8,14 ****
      module DBD
      module Pg

    !   VERSION          = "0.2.1"
        USED_DBD_VERSION = "0.2"

        class Driver < DBI::BaseDriver
    --- 8,14 ----
      module DBD
      module Pg

    !   VERSION          = "0.3.0"
        USED_DBD_VERSION = "0.2"

        class Driver < DBI::BaseDriver
    ***************
    *** 285,291 ****
        def load_type_map
          @type_map = Hash.new
    !     @coerce = DBI::SQL::BasicQuote::Coerce.new

          res = send_sql("SELECT typname, typelem FROM pg_type")
    --- 285,291 ----
        def load_type_map
          @type_map = Hash.new
    !     @coerce = PgCoerce.new

          res = send_sql("SELECT typname, typelem FROM pg_type")
    ***************
    *** 298,303 ****
    --- 298,304 ----
            when '_float4','_float8' then :as_float
            when '_timestamp'        then :as_timestamp
            when '_date'             then :as_date
    +       when '_bytea'            then :as_bytea
            else                          :as_str
            end
          }
    ***************
    *** 358,363 ****
    --- 359,379 ----
            raise DBI::DatabaseError.new(err.message)
          end

    +     #
    +     # encodes a string as bytea value.
    +     #
    +     # for encoding rules see:
    +     # http://www.postgresql.org/idocs/index.php?datatype-binary.html
    +     #
    +     def __encode_bytea(str)
    +       a = str.split(/\\/, -1).collect! {|s|
    +         s.gsub!(/'/, "\\\\047")    # '  => \\047
    +         s.gsub!(/\000/, "\\\\000") # \0 => \\000
    +         s
    +       }
    +       a.join("\\\\")  # \ => \\
    +     end
    +
        end # Database

        ################################################################
    ***************
    *** 478,484 ****
          end
        end # Tuples
    !
      end # module Pg
      end # module DBD
      end # module DBI
    --- 494,515 ----
          end
        end # Tuples
    !
    ! ################################################################
    ! class PgCoerce < DBI::SQL::BasicQuote::Coerce
    !   #
    !   # for decoding rules see:
    !   # http://www.postgresql.org/idocs/index.php?datatype-binary.html
    !   #
    !   def as_bytea(str)
    !     a = str.split(/\\\\/, -1).collect! {|s|
    !       s.gsub!(/\\[0-7][0-7][0-7]/) {|o| o[1..-1].oct.chr}  # \### => chr(###)
    !       s
    !     }
    !     a.join("\\")  # \\ => \
    !   end
    ! end
    !
      end # module Pg
      end # module DBD
      end # module DBI
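As a standalone sketch of what the patch does, the two helpers below simply mirror the escaping rules of __encode_bytea and PgCoerce#as_bytea; they are illustrative, not part of the public DBI API. Per the message above, decoding of bytea columns happens automatically in the driver, while values must still be encoded by hand before being sent to a bytea column.

```ruby
# Mirror of the patch's bytea escaping rules (backslash, quote and NUL bytes).
def encode_bytea(str)
  str.split(/\\/, -1).collect { |s|
    s.gsub(/'/, "\\\\047").gsub(/\000/, "\\\\000")
  }.join("\\\\")
end

# Mirror of PgCoerce#as_bytea: turn \NNN octal escapes back into bytes.
def decode_bytea(str)
  str.split(/\\\\/, -1).collect { |s|
    s.gsub(/\\[0-7][0-7][0-7]/) { |o| o[1..-1].oct.chr }
  }.join("\\")
end

data    = Marshal.dump({ "uga" => "booga" })
encoded = encode_bytea(data)            # what you would send to a bytea column
decode_bytea(encoded) == data           # => true (round-trips cleanly)
```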
From: Sean C. <se...@ch...> - 2002-06-13 11:36:23

I'm writing a module that's selecting a 'timestamp without time zone' data type, but when I go to perform the select statement, I get the following error:

    /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:261:in `convert': Unsupported Type (typeid=1114) (DBI::InterfaceError)
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:476:in `fill_array'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:475:in `each_with_index'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:475:in `each'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:475:in `each_with_index'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:475:in `fill_array'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:457:in `fetchrow'
            from /usr/local/lib/ruby/site_ruby/1.7/DBD/Pg/Pg.rb:412:in `fetch'
            from /usr/home/sean/open_source/ruby-dbi/src/lib/dbi/dbi.rb:766:in `fetch'
            from /usr/home/sean/open_source/ruby-dbi/src/lib/dbi/dbi.rb:601:in `select_one'
            from /usr/home/sean/open_source/ruby-dbi/src/lib/dbi/dbi.rb:600:in `execute'
            from /usr/home/sean/open_source/ruby-dbi/src/lib/dbi/dbi.rb:600:in `select_one'
            from ./session/dbi.rb:47:in `dbi_load'
            from ./session.rb:50:in `call'
            from ./session.rb:50:in `load'
            from ./session.rb:48:in `each'
            from ./session.rb:48:in `load'
            from ./test.rb:23

Which is odd, because typeid 1114 doesn't seem to exist as a typeid in the database. The problem is that it doesn't exist in the database, and I don't know where the value 1114 is coming from. Any thoughts? I can send code/schema to anyone who's interested.

-sc

--
Sean Chittenden
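A small diagnostic sketch for this kind of report (DSN and credentials are hypothetical): ask the server which type the unknown OID belongs to. In stock PostgreSQL, OID 1114 is the built-in timestamp (without time zone) type; the driver raises "Unsupported Type" whenever its type map has no entry for the column's OID.

```ruby
require "dbi"

DBI.connect("dbi:Pg:test:localhost", "user", "pass") do |dbh|
  row = dbh.select_one("SELECT typname FROM pg_type WHERE oid = 1114")
  puts(row ? "oid 1114 is #{row[0]}" : "no pg_type row with oid 1114")
end
```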
From: Sean C. <se...@ch...> - 2002-06-13 11:32:01

How is it possible to store binary data in PostgreSQL via DBI? When I insert binary data (ex: Marshal.dump({"uga"=>"booga"})) into the database, it escapes the data improperly. I'm storing the data in a bytea column type.

Before insert: "\004\a{\006\"\010uga\"\nbooga"

After insert: "\\004\\007{\\006\"\\010uga\"\\012booga"

Needless to say, the two strings aren't the same. Does anyone have any ideas as to why this is happening and how I can get around it? When inserting into a bytea column, DBI seems to be doing too much work.

-sc

--
Sean Chittenden
From: Michael N. <uu...@rz...> - 2002-05-14 18:54:53

Hi,

I've just released Ruby/DBI 0.0.14, which fixes the latest Mysql-DBD bug and includes the patch for the Postgres DBD that increases performance by up to a factor of 1000.

Regards,

Michael

--
Michael Neumann *** eMail uu...@rz...
From: Michael N. <uu...@rz...> - 2002-05-14 17:47:46

On Mon, May 13, 2002 at 11:35:54AM -0700, Brad Hilton wrote:
> On Sun, 2002-05-12 at 21:20, Sean Chittenden wrote:
> > > Right now the code looks like:
> > >
> > >   @res_handle.free if @res_handle
> > >
> > > Perhaps that needs to be changed to:
> > >
> > >   @res_handle.free if (@res_handle and @res_handle.is_a? MysqlRes)
> > >
> > > I don't know...
> >
> > I'm a PostgreSQL guy myself. Could you test the above change and see
> > if it works for you? You seem to have done the diagnosis correctly
> > and I think it should work. -sc
>
> Actually, I think a better solution than my previous one is to set
> query_with_result = true in Mysql.rb line 313 in execute(). It is
> obvious that the code expects query_with_result to be set to true in
> that instance, and there are many places in the code that assume
> @res_handle is a MysqlRes object, so I believe this would be the best
> solution.
>
> Attached is a sample script which exposes the problem, as well as a
> patch to fix the bug.

This will not completely fix the bug, I fear. When executing statements concurrently, it could happen that method #do sets query_with_result=false, then #execute interrupts and sets query_with_result=true... The solution is to use a Mutex, which prevents parallel execution of methods #do and #execute. The following patch fixes it (checked into -current):

    ----- patch ------
    5c5
    < #   Version : 0.3.1
    ---
    > #   Version : 0.3.2
    24a25
    > require "thread"  # for Mutex
    30c31
    <   VERSION = "0.3.1"
    ---
    >   VERSION = "0.3.2"
    148a150,154
    >     def initialize(handle, attr)
    >       super
    >       @mutex = Mutex.new
    >     end
    >
    208d213
    <
    210d214
    <       @handle.query_with_result = false
    212,213c216,220
    <       @handle.query(sql)
    <       @handle.affected_rows
    ---
    >       @mutex.synchronize {
    >         @handle.query_with_result = false
    >         @handle.query(sql)
    >         @handle.affected_rows  # return value
    >       }
    220c227
    <       Statement.new(self, @handle, statement)
    ---
    >       Statement.new(self, @handle, statement, @mutex)
    297c304
    <     def initialize(parent, handle, statement)
    ---
    >     def initialize(parent, handle, statement, mutex)
    300c307
    <       @parent, @handle = parent, handle
    ---
    >       @parent, @handle, @mutex = parent, handle, mutex
    303d309
    <       @handle.query_with_result = true  # automatically switches store_result on
    313,315c319,325
    <       @res_handle = @handle.query(@prep_stmt.bind(@params))
    <       @current_row = 0
    <       @rows = @handle.affected_rows
    ---
    >       sql = @prep_stmt.bind(@params)
    >       @mutex.synchronize {
    >         @handle.query_with_result = true
    >         @res_handle = @handle.query(sql)
    >         @current_row = 0
    >         @rows = @handle.affected_rows
    >       }

--
Michael Neumann *** eMail uu...@rz...
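For readers who find the diff hard to follow, the pattern it introduces looks roughly like the plain Ruby below. The class and method names follow the DBD for illustration only; this is a sketch of the technique, not the actual driver source. One Mutex, shared by the database handle and all of its statements, serialises every toggle of query_with_result together with the query that depends on it.

```ruby
require "thread"   # Mutex

class Database
  def initialize(handle)
    @handle = handle
    @mutex  = Mutex.new
  end

  def do(sql)
    @mutex.synchronize {
      @handle.query_with_result = false
      @handle.query(sql)
      @handle.affected_rows            # rows processed
    }
  end

  def prepare(statement)
    Statement.new(@handle, statement, @mutex)   # statements share the same mutex
  end
end

class Statement
  def initialize(handle, sql, mutex)
    @handle, @sql, @mutex = handle, sql, mutex
  end

  def execute
    @mutex.synchronize {
      @handle.query_with_result = true
      @res_handle = @handle.query(@sql)
      @rows       = @handle.affected_rows
    }
  end
end
```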
From: Brad H. <bh...@vp...> - 2002-05-13 18:36:00

On Sun, 2002-05-12 at 21:20, Sean Chittenden wrote:
> > Right now the code looks like:
> >
> >   @res_handle.free if @res_handle
> >
> > Perhaps that needs to be changed to:
> >
> >   @res_handle.free if (@res_handle and @res_handle.is_a? MysqlRes)
> >
> > I don't know...
>
> I'm a PostgreSQL guy myself. Could you test the above change and see
> if it works for you? You seem to have done the diagnosis correctly
> and I think it should work. -sc

Actually, I think a better solution than my previous one is to set query_with_result = true in Mysql.rb line 313 in execute(). It is obvious that the code expects query_with_result to be set to true in that instance, and there are many places in the code that assume @res_handle is a MysqlRes object, so I believe this would be the best solution.

Attached is a sample script which exposes the problem, as well as a patch to fix the bug.

Thanks,
-Brad

    -- patch --
    --- Mysql.rb.orig	Mon May 13 11:32:33 2002
    +++ Mysql.rb	Mon May 13 09:48:52 2002
    @@ -310,6 +310,7 @@ class Statement < DBI::BaseStatement
       end

       def execute
    +    @handle.query_with_result = true
         @res_handle = @handle.query(@prep_stmt.bind(@params))
         @current_row = 0
         @rows = @handle.affected_rows
From: Sean C. <se...@ch...> - 2002-05-13 04:20:15

> > > Could the maintainer of ruby-dbd-mysql look into this please?
> > > Basically, every time a @res_handle is grabbed from Mysql#query
> > > there is a false assumption that it is a MysqlRes object, and not a
> > > reference to the original Mysql object.
> >
> > I think it might be a fair assumption, but I'm neither the dbd_mysql
> > maintainer, nor am I familiar with what that option does.
>
> Unfortunately, neither am I. I just stumbled on the problem and thought
> I'd report the invalid assumption that dbd-mysql is making about
> Mysql#query. Perhaps the code could be fixed by adding an additional
> check. Right now the code looks like:
>
>   @res_handle.free if @res_handle
>
> Perhaps that needs to be changed to:
>
>   @res_handle.free if (@res_handle and @res_handle.is_a? MysqlRes)
>
> I don't know...

I'm a PostgreSQL guy myself. Could you test the above change and see if it works for you? You seem to have done the diagnosis correctly and I think it should work. -sc

--
Sean Chittenden
From: Brad H. <bh...@vp...> - 2002-05-11 22:53:59

On Sat, 2002-05-11 at 15:45, Brad Hilton wrote:
> I don't know what could possibly be setting query_with_result to false

Check that. Mysql.rb sets query_with_result to false in line 210. So it needs to be prepared for Mysql#query returning a Mysql object in that case.

-Brad
From: Brad H. <bh...@vp...> - 2002-05-11 22:45:19

On Sat, 2002-05-11 at 11:33, Sean Chittenden wrote:
> > The problem is that ruby-dbd-mysql assumes that Mysql#query always
> > returns MysqlRes objects, when that isn't true. If query_with_result is
> > false, Mysql#query returns self (a Mysql object).
>
> Hmmm... alright. What's the value of query_with_result being false?
> Is its point to save on the number of objects created? I don't see
> any direct benefit of toggling this option.

I don't know what the benefit of toggling query_with_result is either. I never directly manipulate the Mysql object, since I always use ruby-dbi as a higher-level access point. I don't know what could possibly be setting query_with_result to false, but by the mere fact that it can happen, and thus break ruby-dbd-mysql, it seems that something needs to be fixed.

> > Could the maintainer of ruby-dbd-mysql look into this please?
> > Basically, every time a @res_handle is grabbed from Mysql#query
> > there is a false assumption that it is a MysqlRes object, and not a
> > reference to the original Mysql object.
>
> I think it might be a fair assumption, but I'm neither the dbd_mysql
> maintainer, nor am I familiar with what that option does.

Unfortunately, neither am I. I just stumbled on the problem and thought I'd report the invalid assumption that dbd-mysql is making about Mysql#query. Perhaps the code could be fixed by adding an additional check. Right now the code looks like:

  @res_handle.free if @res_handle

Perhaps that needs to be changed to:

  @res_handle.free if (@res_handle and @res_handle.is_a? MysqlRes)

I don't know...

-Brad
From: Sean C. <se...@ch...> - 2002-05-11 18:33:06

[moving discussion to rub...@li..., reply there]

> On Fri, 2002-05-10 at 14:44, Brad Hilton wrote:
> > /usr/local/lib/ruby/site_ruby/1.6/DBD/Mysql/Mysql.rb:321:in `finish':
> > undefined method `free' for #<Mysql:0x81a9d44> (NameError)
> >         from /usr/local/lib/ruby/site_ruby/1.6/dbi/dbi.rb:701:in `finish'
>
> Well, to follow up to myself:
>
> It turns out the error is expected. Mysql objects don't have the
> methods 'free' and 'fetch_fields' defined. Those methods are
> actually in the class MysqlRes, which I found out after looking at
> the C code.
>
> The problem is that ruby-dbd-mysql assumes that Mysql#query always
> returns MysqlRes objects, when that isn't true. If query_with_result is
> false, Mysql#query returns self (a Mysql object).

Hmmm... alright. What's the value of query_with_result being false? Is its point to save on the number of objects created? I don't see any direct benefit of toggling this option.

> Could the maintainer of ruby-dbd-mysql look into this please?
> Basically, every time a @res_handle is grabbed from Mysql#query
> there is a false assumption that it is a MysqlRes object, and not a
> reference to the original Mysql object.

I think it might be a fair assumption, but I'm neither the dbd_mysql maintainer, nor am I familiar with what that option does. Could you elaborate?

-sc

--
Sean Chittenden
From: Brad H. <bh...@vp...> - 2002-05-11 15:02:42

On Fri, 2002-05-10 at 14:44, Brad Hilton wrote:
> /usr/local/lib/ruby/site_ruby/1.6/DBD/Mysql/Mysql.rb:321:in `finish':
> undefined method `free' for #<Mysql:0x81a9d44> (NameError)
>         from /usr/local/lib/ruby/site_ruby/1.6/dbi/dbi.rb:701:in `finish'

Well, to follow up to myself:

It turns out the error is expected. Mysql objects don't have the methods 'free' and 'fetch_fields' defined. Those methods are actually in the class MysqlRes, which I found out after looking at the C code.

The problem is that ruby-dbd-mysql assumes that Mysql#query always returns MysqlRes objects, when that isn't true. If query_with_result is false, Mysql#query returns self (a Mysql object).

Could the maintainer of ruby-dbd-mysql look into this please? Basically, every time a @res_handle is grabbed from Mysql#query there is a false assumption that it is a MysqlRes object, and not a reference to the original Mysql object.

Thanks,
-Brad
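The behaviour Brad describes can be seen directly with the low-level mysql-ruby API, as in the sketch below (connection parameters are hypothetical). With query_with_result = false, Mysql#query returns the Mysql handle itself, so anything that blindly calls free or fetch_fields on the return value breaks; a defensive is_a? check, or an explicit store_result, avoids that.

```ruby
require "mysql"

my = Mysql.real_connect("localhost", "user", "pass", "test")

my.query_with_result = true
res = my.query("SELECT 1")
puts res.class                    # => MysqlRes
res.free

my.query_with_result = false
ret = my.query("SELECT 1")
puts ret.class                    # => Mysql (the handle itself, no result attached)
ret.free if ret.is_a?(MysqlRes)   # guarded call simply does nothing here
my.store_result.free              # fetch and release the pending result explicitly
my.close
```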
From: Brad H. <bh...@vp...> - 2002-05-10 21:44:19

Hello,

Sorry to cross-post to ruby-dbi-devel, but I wasn't sure which list might be more appropriate for a problem I'm having. In one of my programs I am using ruby-dbi/ruby-dbd-mysql 0.0.13 and ruby-mysql 2.4.2a on a FreeBSD 4.5 box running ruby 1.6.7 (2002-05-01). I am inserting thousands of rows on a daily basis, and *sometimes* (I can't reproduce the problem on demand) the script dies with messages like these:

    /usr/local/lib/ruby/site_ruby/1.6/DBD/Mysql/Mysql.rb:321:in `finish':
    undefined method `free' for #<Mysql:0x81a9d44> (NameError)
            from /usr/local/lib/ruby/site_ruby/1.6/dbi/dbi.rb:701:in `finish'

or:

    /usr/local/lib/ruby/site_ruby/1.6/DBD/Mysql/Mysql.rb:368:in `column_info':
    undefined method `fetch_fields' for #<Mysql:0x81a8d44> (NameError)
            from /usr/local/lib/ruby/site_ruby/1.6/dbi/dbi.rb:714:in `column_names'
            from /usr/local/lib/ruby/site_ruby/1.6/dbi/dbi.rb:695:in `execute'

The strange thing is, the same exact command succeeds thousands of times prior to this error. How could the Mysql object not contain the methods "fetch_fields" and "free"? They are compiled into the mysql.so file, which is necessary to create the Mysql object in the first place. How could those methods just disappear? It is like the Mysql object had all of its methods removed somehow.

Does anyone have any suggestions?

Thanks,
-Brad
From: <m_...@mv...> - 2002-05-01 17:02:13

# oops, I sent to rub...@ya... ..

Hi, all.

I'm using ruby-dbi-all-0.0.13. When str is "", as_date(str) goes wrong. It returns

    #<DBI::Date:0x... @month=nil, @year=nil, @day=nil>

There is a similar problem in the as_time() method.

    --- sql.rb.ORG	Wed Apr 17 05:49:33 2002
    +++ sql.rb	Thu May  2 01:24:35 2002
    @@ -47,8 +47,8 @@
       end

       def as_time(str)
    -    return nil if str.nil?
         t = as_timestamp(str)
    +    return nil if t.nil?
         DBI::Time.new(t.hour, t.min, t.sec)
       end

    @@ -67,7 +67,7 @@

       def as_date(str)
    -    return nil if str.nil?
    +    return nil if str.nil? or str.empty?
         ary = ParseDate.parsedate(str)
         DBI::Date.new(*ary[0,3])
       rescue
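A quick illustration of the bug this patch addresses, as a sketch against DBI::SQL::BasicQuote::Coerce (the pre-patch behaviour is paraphrased from the report above, not re-verified here):

```ruby
require "dbi"

coerce = DBI::SQL::BasicQuote::Coerce.new

p coerce.as_date(nil)           # => nil
p coerce.as_date("")            # before: #<DBI::Date ... @year=nil, @month=nil, @day=nil>
                                # after the patch: nil
p coerce.as_time("")            # likewise nil after the patch
p coerce.as_date("2002-05-01")  # => a populated DBI::Date for a well-formed string
```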
From: Zakaria <zak...@so...> - 2002-04-26 06:11:23

The changes in the #3 solution are not that big and don't break backward compatibility at all. We just add DBI::DatabaseHandle#dml_prepare and DBI::DatabaseHandle#dml_execute (for the shortcut). The dml_prepare method would return a DMLHandle class that has a rows method but no fetch method.

Does anybody know how they handle it in Perl DBI?

On Sun, 2002-04-21 at 04:07, Gabriel Emerson wrote:
> You have good points. What I don't like about #2 is the inconsistency,
> but #3 is a large change to impose. If rows_affected() was added, and
> rows() still worked, but was deprecated for DML, that would give us a
> way out that could keep everyone happy. Then we can obsolete the
> affected_rows behavior of rows() after sufficient warning.
>
> As far as I know this only affects PostgreSQL. I wouldn't want to
> impose as large an API change as #3 on users of other DBDs. I use DBI
> in production, as I think more and more businesses are, so API
> preservation and measured change are important.
>
> Just my 2 cents.
>
> --Gabriel
>
> > Hi,
> >
> > 2. Add DBI::StatementHandle#num_rows
> >    While this is easy, it makes the API less elegant because
> >    it forces the programmer to remember two different methods
> >    for the same operation (number of rows processed).
> >
> > 3. Add a special method for DML queries (insert, update, delete)
> >    so we can reliably separate SELECT and non-SELECT queries.
> >    This is reasonable because you can't use row-fetching methods
> >    on the result of DBI::DatabaseHandle#execute('UPDATE something...').

Regards,

--
Zakaria
http://pemula.linux.or.id
From: Gabriel E. <ga...@de...> - 2002-04-20 21:09:10

You have good points. What I don't like about #2 is the inconsistency, but #3 is a large change to impose. If rows_affected() was added, and rows() still worked but was deprecated for DML, that would give us a way out that could keep everyone happy. Then we can obsolete the affected_rows behavior of rows() after sufficient warning.

As far as I know, this only affects PostgreSQL. I wouldn't want to impose as large an API change as #3 on users of other DBDs. I use DBI in production, as I think more and more businesses are, so API preservation and measured change are important.

Just my 2 cents.

--Gabriel

> Hi,
>
> 2. Add DBI::StatementHandle#num_rows
>    While this is easy, it makes the API less elegant because
>    it forces the programmer to remember two different methods
>    for the same operation (number of rows processed).
>
> 3. Add a special method for DML queries (insert, update, delete)
>    so we can reliably separate SELECT and non-SELECT queries.
>    This is reasonable because you can't use row-fetching methods
>    on the result of DBI::DatabaseHandle#execute('UPDATE something...').