From: Brian C. <B.C...@po...> - 2003-03-31 18:00:35

On Mon, Mar 31, 2003 at 10:39:00AM -0600, Paul DuBois wrote:
> At 15:07 +0100 3/31/03, Brian Candler wrote:
> >I was writing some code like this:
> >
> >  s = db.prepare("select bar from foo where baz=?")
> >  ...
> >(1)
> >  s.execute(key) do |sth|
> >    result = sth.fetch
> >  end
> >
> >Unfortunately, I discovered that it doesn't work like that - the block is
> >simply ignored.
>
> Right. execute takes a block for a database handle, not a statement
> handle.

Sure. But do you have any objections to StatementHandle#execute being able
to take a block? The current behaviour went against POLS for me.

I was optimising existing code which looked like this:

  dbh.execute(stmt, args) do |sth|
    ...
  end

and I was expecting to be able to replace it with

  p = dbh.prepare(stmt)
  ...
  p.execute(args) do |sth|
    ...
  end

However the current behaviour doesn't let you do this, which means you have
to rewrite your code more than I expected :-(

Regards,

Brian.

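The same idea can be tried locally without editing dbi.rb. This is only a sketch of the behaviour Brian is asking for, not the shipped ruby-dbi implementation; it assumes the classic DBI::StatementHandle#execute that ignores blocks:

  # Sketch: wrap the existing StatementHandle#execute so it yields the handle
  # to a block and returns self, as proposed in this thread.
  require 'dbi'

  module DBI
    class StatementHandle
      alias_method :execute_without_block, :execute

      def execute(*bindvars)
        execute_without_block(*bindvars)
        if block_given?
          yield self   # lets callers write sth.execute(key) { |s| s.fetch }
        else
          self         # lets callers chain: sth.execute(key).fetch
        end
      end
    end
  end
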
From: Paul D. <pa...@sn...> - 2003-03-31 16:39:14

At 15:07 +0100 3/31/03, Brian Candler wrote:
>I was writing some code like this:
>
>  s = db.prepare("select bar from foo where baz=?")
>  ...
>(1)
>  s.execute(key) do |sth|
>    result = sth.fetch
>  end
>
>Unfortunately, I discovered that it doesn't work like that - the block is
>simply ignored.

Right. execute takes a block for a database handle, not a statement
handle. Example:

  dbh.execute("SHOW DATABASES") do |sth|
    sth.fetch do |row|
      puts row[0]
    end
  end

> It also doesn't work like this:
>
>(2)
>  t = s.execute(key)
>  result = t.fetch
>
>After some poking around, it turns out that when you run 'execute' on a
>StatementHandle it returns an empty DBI::Row object. You have to do:
>
>(3)
>  s.execute(key)
>  result = s.fetch
>
>I think it would be nice to support the block format as in (1), which I
>think would only need 'yield self if block_given?' in the execute method.
>
>I think it would also be nice to return self rather than the empty DBI::Row,
>so that you can chain execute with other operations:
>
>  result = s.execute(key).fetch
>
>What does anyone else think? It ought to be a simple change (the attached
>patch is untested though, as I need my app to work with the current API
>anyway)
>
>Regards,
>
>Brian.
>
>--- dbi.rb.orig Mon Mar 31 15:04:37 2003
>+++ dbi.rb      Mon Mar 31 15:05:23 2003
>@@ -747,6 +747,11 @@
>         #if @row.nil?
>           @row = DBI::Row.new(column_names,nil)
>         #end
>+        if block_given?
>+          yield self
>+        else
>+          self
>+        end
>       end
>
>       def finish

From: Brian C. <B.C...@po...> - 2003-03-31 14:07:59

I was writing some code like this:

  s = db.prepare("select bar from foo where baz=?")
  ...
(1)
  s.execute(key) do |sth|
    result = sth.fetch
  end

Unfortunately, I discovered that it doesn't work like that - the block is
simply ignored. It also doesn't work like this:

(2)
  t = s.execute(key)
  result = t.fetch

After some poking around, it turns out that when you run 'execute' on a
StatementHandle it returns an empty DBI::Row object. You have to do:

(3)
  s.execute(key)
  result = s.fetch

I think it would be nice to support the block format as in (1), which I
think would only need 'yield self if block_given?' in the execute method.

I think it would also be nice to return self rather than the empty DBI::Row,
so that you can chain execute with other operations:

  result = s.execute(key).fetch

What does anyone else think? It ought to be a simple change (the attached
patch is untested though, as I need my app to work with the current API
anyway)

Regards,

Brian.

--- dbi.rb.orig Mon Mar 31 15:04:37 2003
+++ dbi.rb      Mon Mar 31 15:05:23 2003
@@ -747,6 +747,11 @@
        #if @row.nil?
          @row = DBI::Row.new(column_names,nil)
        #end
+       if block_given?
+         yield self
+       else
+         self
+       end
      end

      def finish

From: Brian C. <B.C...@po...> - 2003-03-28 00:14:14

On Thu, Mar 27, 2003 at 04:21:26PM -0600, Paul DuBois wrote:
> Should I add the @mutex thingies? The reason I didn't, when
> adding the transaction support, is that I am sufficiently Ruby-illiterate
> not to know whether or not they're needed. And not for any more technical
> reason than that! :-)

Well, I've picked up a fair amount of Ruby, but there's plenty that I don't
understand about Mysql :-)

One of the things that seems apparent about the Mysql API is that you have
to issue multiple API calls to complete one 'action'. For example, 'do' does
the following steps:

  @handle.query_with_result = false       # << API
  @handle.query(sql)                      # << API
  @handle.affected_rows  # return value   # << API

and 'execute' does these:

  @handle.query_with_result = true        # << API
  @res_handle = @handle.query(sql)        # << API
  @column_info = self.column_info
  @coerce = DBI::SQL::BasicQuote::Coerce.new
  @current_row = 0
  @rows = @handle.affected_rows           # << API

Each @handle method call is a call into the C API.

Now, Ruby's threading system is not as fine-grained as OS threading, where a
context switch between two threads could happen almost anywhere. In
particular, once it has committed to calling a C function, e.g.

  @handle.query(sql)

then I believe that library function will run to completion; there's no way
that Ruby's threading can interrupt it. However, in normal operation, a
context switch _could_ happen in between any two Ruby expressions.

Imagine if you have thread 1 trying to do an 'execute', and almost at the
same time thread 2 is doing a 'do'. Thread 1 starts by:

  @handle.query_with_result = true

Now let's say that the Ruby threading mechanism decides that now would be a
good time for a context switch. It then moves onto thread 2, which does

  @handle.query_with_result = false
  @handle.query(sql)
  ...etc

At some time later, thread 1 gets a look in again. It then does the next
stage of its operation,

  @res_handle = @handle.query(sql)

However, this will be broken, because the @handle is in the wrong mode
(query_with_result has been set to false in the mean time)

Therefore, because one Mysql 'operation' is in fact multiple API calls, you
must ensure that while you are doing a sequence of operations on @handle,
nobody else does anything on @handle either. That's where the mutex comes
in. Once a piece of code has grabbed the mutex, it doesn't prevent a thread
context switch happening; but what it does mean is that if any other thread
also tries to grab the mutex, it will be stopped until the mutex becomes
free again.

So if thread 1 does:

  @mutex.synchronize {
    @handle.query_with_result = true
    @res_handle = @handle.query(sql)
    @column_info = self.column_info
    @coerce = DBI::SQL::BasicQuote::Coerce.new
    @current_row = 0
    @rows = @handle.affected_rows
  }

and in the middle of that a context switch occurs, and thread 2 tries to do

  @mutex.synchronize {
    @handle.query_with_result = false
    @handle.query(sql)
    @handle.affected_rows  # return value
  }

it will block at the @mutex.synchronize. Thread 1 will get its turn back
sooner or later, complete its work, at which point thread 2 can grab the
mutex and have its go.

Back to commit, rollback and set autocommit. None of these operations return
a value, which keeps things simple. However, I would imagine that it is a
good idea to tell the API 'query_with_result = false' before issuing them.
In that case, you'd want to do

  @mutex.synchronize {
    @handle.query_with_result = false
    @handle.query("commit")   # or whatever
  }

Otherwise, when you issue "commit" then the state of the query_with_result
flag is indeterminate, and I don't know enough about mysql to say if that's
safe or not. But even if it is, and the single operation

  @handle.query("commit")

is all you need, you still don't want to issue it whilst another thread is
in the middle of performing some other query. You could destroy the results
of that query, for example corrupting the value in @handle.affected_rows
before it has been read.

The only way to prevent this is cooperation between the methods: *all*
methods which have any side effects on @handle must agree to grab the mutex
while performing them. So I'd say that the commit/rollback/autocommit
methods really ought to honour it as well.

Regards,

Brian.

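The race Brian walks through can be demonstrated with nothing but the standard library. The following is a self-contained sketch, not DBD::Mysql code: FakeHandle is an invented stand-in for the MySQL C API handle, and the point is simply that a multi-step operation is only safe when every caller wraps its steps in the same Mutex#synchronize block. (On the Ruby 1.6/1.8 of this era you would also need require 'thread'; Mutex is built in on modern Ruby.)

  # Invented FakeHandle stands in for the @handle discussed above.
  class FakeHandle
    attr_accessor :query_with_result

    def query(sql)
      sleep 0.001   # give the scheduler a chance to switch threads here
      query_with_result ? "result set for #{sql}" : nil
    end
  end

  handle = FakeHandle.new
  mutex  = Mutex.new

  execute_thread = Thread.new do
    mutex.synchronize do
      handle.query_with_result = true
      res = handle.query("SELECT 1")
      raise "query_with_result was clobbered by the other thread" if res.nil?
    end
  end

  do_thread = Thread.new do
    mutex.synchronize do
      handle.query_with_result = false
      handle.query("DELETE FROM t")
    end
  end

  [execute_thread, do_thread].each(&:join)
  puts "both multi-step operations ran without interleaving"
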
From: Brian C. <B.C...@po...> - 2003-03-27 22:04:15

Hello,

I was pointed in the direction of this list from ruby-talk by Paul DuBois
(thanks Paul).

I heard there's now transaction support for Mysql in CVS for ruby-dbi. I
just added this in myself too, so looks like I was wasting my time :-)

I've had a look at the changes in Mysql.rb from 1.19 to 1.20, and it looks
good. My only comment/observation is about the thread-safety of this patch.
The code says:

  def commit
    if @have_transactions
      @handle.query("COMMIT")
    else
      ...

(similarly for rollback and setting autocommit mode). However, everywhere
else that calls query() is protected by @mutex. So shouldn't these
statements have the same protection? Or is the possibility of two threads
interfering on a transaction so horrible to contemplate that it's not worth
putting up any pretence of protection?

(Perhaps DatabaseHandle#transaction should have its own mutex? But that
wouldn't protect you if you just did commit/rollback yourself)

Apart from that, would it perhaps be time for a new release of ruby-dbi?

While I'm here, a couple of other things I noticed.

(1) In the pg driver:

      if attr =~ /^pg_/ or attr != /_/
                           ^^^^^^^^^^^
    I think that should be "attr !~ /_/" otherwise I can't see how it does
    anything useful.

(2) There is a note from mneumann on the project page under 'RFE' saying
    'Write a DBD based on the package Ruby/OCI8, when this becomes stable.'
    I'd just like to say that I've been using Ruby/OCI8 with Oracle 9i quite
    heavily for the last couple of weeks, with the DBD which comes with it.
    Once a few bugs had been ironed out (the author released 0.1.2) it's
    working beautifully.

Regards,

Brian.

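Point (1) is easy to verify in irb: comparing a String against a Regexp with != is always true, whereas !~ actually tests for "does not match":

  attr = "pg_client_encoding"
  p(attr != /_/)   # => true  (a String never equals a Regexp)
  p(attr !~ /_/)   # => false (the string does contain an underscore)
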
From: Michael N. <uu...@st...> - 2003-03-11 09:27:15

Travis Whitton wrote:
> Hello all - I've been using an older version of DBI, 0.0.12, on a DB
> server here at work. This older version returns all fetch_hash results
> as strings without exception. My workstation is running DBI 0.0.18,
> which returns dates as DBI::Date objects. This is convenient most of the
> time; however, I'm passing data back and forth using XML RPC, which
> means that only standard types are allowed. If I attempt to pass an
> array of hashes containing strings, everything is fine. But, if I try to
> pass an array of hashes containing strings and DBI::Date objects, a
> crash occurs because DBI::Date is not a standard type. Now, onto my
> question, is it possible to make a newer version of DBI provide the old
> behaviour of always returning strings? This would be very helpful in my
> particular case, and I would hate to have to stay with an older version.
> I know that I can convert all the data manually, but I'd really rather
> not because it's so inefficient to go across the whole set of data if it
> isn't necessary.

XML-RPC for Ruby allows you to pass user-defined classes as hashes. That's
non-standard and will not work with other implementations. Is that an
option for you?

Of course you could always patch the newest DBI version to get the old
behaviour, or enhance xmlrpc4r to convert DBI::Date objects into strings
(user-defined conversion would be a nice feature for xmlrpc4r; it's now on
my (very long) todo list).

Regards,

Michael

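The manual conversion Travis wants to avoid is small in code terms. A sketch, assuming rows are hashes as returned by fetch_hash and that DBI::Date stringifies sensibly via to_s; xmlrpc_safe is an invented helper name:

  require 'dbi'

  # Replace DBI::Date values with their string form so the rows contain only
  # XML-RPC-safe types. Everything else is passed through untouched.
  def xmlrpc_safe(rows)
    rows.map do |row|
      clean = {}
      row.each { |col, val| clean[col] = val.is_a?(DBI::Date) ? val.to_s : val }
      clean
    end
  end
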
From: Travis W. <wh...@at...> - 2003-03-10 20:51:55

Hello all - I've been using an older version of DBI, 0.0.12, on a DB server
here at work. This older version returns all fetch_hash results as strings
without exception. My workstation is running DBI 0.0.18, which returns
dates as DBI::Date objects. This is convenient most of the time; however,
I'm passing data back and forth using XML RPC, which means that only
standard types are allowed. If I attempt to pass an array of hashes
containing strings, everything is fine. But, if I try to pass an array of
hashes containing strings and DBI::Date objects, a crash occurs because
DBI::Date is not a standard type.

Now, onto my question: is it possible to make a newer version of DBI
provide the old behaviour of always returning strings? This would be very
helpful in my particular case, and I would hate to have to stay with an
older version. I know that I can convert all the data manually, but I'd
really rather not, because it's so inefficient to go across the whole set
of data if it isn't necessary.

Thanks much,
Travis Whitton <wh...@at...>

From: Paul D. <pa...@sn...> - 2003-02-13 22:21:28

At 22:07 +0000 2/13/03, Gabriel Emerson wrote:
>I need to double check this, but I think in Perl DBI I can not only do the
>equivalent of this:
>
>  sql = %Q{
>    SELECT *
>    FROM bar, baz
>    WHERE bar.bar_val = ?
>    AND bar.bar_id = baz.baz_id
>  }
>  foo = 72
>  dbh.select_one(sql, foo)
>
>but this also:
>
>  sql = %Q{
>    SELECT *
>    FROM bar, baz
>    WHERE bar.bar_val = ?
>    AND bar.bar_id = ?
>  }
>  foo = 72
>  sql_expr = 'baz.baz_id'
>  dbh.select_one(sql, foo, sql_expr)
>
>It seems that Ruby DBI doesn't support anything for ? replacement but
>constant values for quoting. It doesn't support table or field
>identifiers, or bits of sql code.

Placeholders are for data values, not identifiers. Correct. And that's how
it works in Perl DBI as well.

There is now in Perl DBI a quote_identifier() function that may be used to
generate properly-quoted identifiers for insertion into queries, but it's
new and not supported by all drivers. (For example, DBD::mysql doesn't
support it yet.)

>I am at a loss for how to test whether something is a value or a
>[field|table|bit of sql code], but some possibilities would be an abstract
>class to inherit from if you wanted to pass a non-value, like sql_expr =
>DBI::NonVal.new('baz.baz_id'), so we can do kind_of? on it. Or we could
>use perhaps a new character for the substitution for this case.
>
>I realize we can already just #{} in the strings for whatever we want to
>replace ahead of time, but sometimes it would be nice to have this type of
>substitution in automatic query generation scenarios.
>
>--Gabriel

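A sketch of the split Paul describes, in Ruby DBI terms: data goes through ? placeholders, identifiers are interpolated, and the only reasonably safe way to interpolate them is to check them against a known-good list first. The table, column and method names here are illustrative only:

  # Only these join columns may ever be spliced into the SQL text.
  ALLOWED_JOIN_COLUMNS = %w[baz.baz_id baz.other_id]

  def select_matching(dbh, join_column, value)
    unless ALLOWED_JOIN_COLUMNS.include?(join_column)
      raise ArgumentError, "unexpected identifier: #{join_column}"
    end
    sql = "SELECT * FROM bar, baz " \
          "WHERE bar.bar_val = ? AND bar.bar_id = #{join_column}"
    dbh.select_one(sql, value)   # only the data value is a placeholder
  end
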
From: Gabriel E. <ga...@de...> - 2003-02-13 22:09:45

I need to double check this, but I think in Perl DBI I can not only do the
equivalent of this:

  sql = %Q{
    SELECT *
    FROM bar, baz
    WHERE bar.bar_val = ?
    AND bar.bar_id = baz.baz_id
  }
  foo = 72
  dbh.select_one(sql, foo)

but this also:

  sql = %Q{
    SELECT *
    FROM bar, baz
    WHERE bar.bar_val = ?
    AND bar.bar_id = ?
  }
  foo = 72
  sql_expr = 'baz.baz_id'
  dbh.select_one(sql, foo, sql_expr)

It seems that Ruby DBI doesn't support anything for ? replacement but
constant values for quoting. It doesn't support table or field identifiers,
or bits of sql code.

I am at a loss for how to test whether something is a value or a
[field|table|bit of sql code], but some possibilities would be an abstract
class to inherit from if you wanted to pass a non-value, like sql_expr =
DBI::NonVal.new('baz.baz_id'), so we can do kind_of? on it. Or we could use
perhaps a new character for the substitution for this case.

I realize we can already just #{} in the strings for whatever we want to
replace ahead of time, but sometimes it would be nice to have this type of
substitution in automatic query generation scenarios.

--Gabriel

From: MoonWolf <moo...@mo...> - 2003-02-13 13:26:53

> * problem in a 'SELECT' statement was corrected.
> * fix DBI::Timestamp quote format for Timestamp sorting
>   before '2003-2-13 1.2.3.0' => after '2003-02-13 01:02:03'

'SQLite.c.patch.030213' is a mistake; 'SQLite.c.patch.030213b' is right.

From: Shirai,Kaoru <sh...@ko...> - 2003-02-13 12:57:48

Hi.

StatementHandle#each(#fetch) always gives the same Row object to the block,
so we get a broken result from StatementHandle#to_a. Example:

  stat.execute
  stat.each { |row| p row }
  stat.execute
  p stat.to_a

Output:

  ["foo"]
  ["bar"]
  ["boo"]
  [["boo"], ["boo"], ["boo"]]   => should be [["foo"], ["bar"], ["boo"]]

We have to use StatementHandle#fetch_all to get the correct result. Is
there any reason to keep StatementHandle#to_a returning a broken array? I
suggest making StatementHandle#to_a an alias of StatementHandle#fetch_all.

--
sh...@ko...
Shirai,Kaoru
Korinkan Ltd.

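Two ways around the reused-Row problem, sketched against the 0.0.18-era API (neither is an official fix): apply the suggested alias locally, or copy each row as it is yielded so later fetches cannot overwrite earlier ones.

  require 'dbi'

  # (a) The change suggested above, applied as a local patch.
  module DBI
    class StatementHandle
      alias_method :to_a, :fetch_all
    end
  end

  # (b) A caller-side workaround: duplicate rows while iterating.
  def rows_of(sth)
    rows = []
    sth.each { |row| rows << row.dup }
    rows
  end
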
From: MoonWolf <moo...@mo...> - 2003-02-12 15:42:11

* problem in a 'SELECT' statement was corrected.
* fix DBI::Timestamp quote format for Timestamp sorting
  before '2003-2-13 1.2.3.0' => after '2003-02-13 01:02:03'

From: Oliver M. B. <ol...@go...> - 2003-02-09 19:52:10

Hi!

On Thu, Feb 06, 2003 at 06:41:15PM +0100, "Oliver M. Bolzer" <ol...@go...> wrote...
> On Thu, Feb 06, 2003 at 05:21:00PM +0100, "Oliver M. Bolzer" <ol...@go...> wrote...
> > A really quick glance into DBD::Pg shows that the type conversion is
> > done according to type_map and the type of a column according to pg_result.type

Either nobody is interested or everybody is on vacation, so I added support
for PostgreSQL arrays to DBD::Pg this weekend. Too much snow outside to do
anything else :-)

This code handles all array types that use ',' as the delimiter (all
built-in types except box), even with dimensions deeper than 1. The
array-parsing code started as a port of the JDBC driver included in
PostgreSQL but is now better than it, because of multi-dimension support.
Perl's DBD::Pg does not seem to contain support for arrays.

Using the following table with arrays:

  CREATE TABLE test (
    strings varchar(10)[],
    ints    integer[][]
  );
  INSERT INTO test VALUES ( '{"fo,o", "bar{baz"}', '{{1,2},{2,3}}');

  testdb=> SELECT * FROM test;
        strings       |     ints
  --------------------+---------------
   {"fo,o","bar{baz"} | {{1,2},{2,3}}
  (1 row)

While the current implementation of DBD::Pg only results in a String that
is exactly "{{1,2},{2,3}}", with my patch the arrays are properly converted
to Ruby Arrays and the elements converted to the correct type:

  dbh = DBI.connect( driver_url, dbuser, dbpass )
  row = dbh.select_one( 'SELECT * FROM test' )
  p row[0]   # => ["fo,o", "bar{baz"]
  p row[1]   # => [[1, 2], [2, 3]]

Please consider applying the patch. Tested with PostgreSQL 7.2.1 and 7.3.1,
based on $Id: Pg.rb,v 1.27 2002/09/26 13:40:32 mneumann Exp $

--
Oliver M. Bolzer
ol...@go...

GPG (PGP) Fingerprint = 621B 52F6 2AC1 36DB 8761 018F 8786 87AD EF50 D1FF

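The parsing itself is the interesting part of the patch. Below is a minimal sketch of the idea, not the code that went into DBD::Pg: it turns a PostgreSQL array literal into nested Ruby Arrays of strings, handling quoted elements and nesting but not every corner of PostgreSQL's quoting rules, and leaving element type coercion to the caller.

  require 'strscan'

  def parse_pg_array(literal)
    parse_pg_array_body(StringScanner.new(literal))
  end

  def parse_pg_array_body(s)
    raise ArgumentError, "expected '{'" unless s.scan(/\{/)
    result = []
    until s.scan(/\}/)
      s.scan(/,/)                             # element separator
      if s.check(/\{/)                        # nested array
        result << parse_pg_array_body(s)
      elsif s.scan(/"((?:[^"\\]|\\.)*)"/)     # quoted element
        result << s[1].gsub(/\\(.)/, '\1')
      elsif s.scan(/[^,}]+/)                  # unquoted element
        result << s.matched
      else
        raise ArgumentError, "malformed array literal"
      end
    end
    result
  end

  p parse_pg_array('{"fo,o","bar{baz"}')   # => ["fo,o", "bar{baz"]
  p parse_pg_array('{{1,2},{2,3}}')        # => [["1", "2"], ["2", "3"]]
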
From: MoonWolf <moo...@mo...> - 2003-02-09 08:49:28

after 'UPDATE' statement, RPC is incorrect

--
MoonWolf <moo...@mo...>

From: Ben G. <bg...@in...> - 2003-02-06 19:21:56

Hi all,

I recently wrote a ruby file that some people might find useful. Its
purpose is to make generating SQL statements (especially those involving
"WHERE" clauses) easier.

The "meat" is an abstract class "Clause" with some subclasses: FixedClause,
BasicClause, AndClause, OrClause... These clauses together use the
"Interpreter" design pattern to allow nesting of clauses, so an "AndClause"
can contain 2 or more other types of clause, and those clauses can be other
"AndClause"s, "OrClause"s or "BasicClause"s.

Here's a sample usage:

  require 'clauses'
  include DBI::SQL::Clauses

  frank_20_plus = AndClause.new(BasicClause.new(*%w"Name LIKE Frank%"),
                                BasicClause.new("Age", ">", "20"))

  dave_carp_30 = AndClause.new(LiteralClause.new("Name LIKE 'Dave%'"),
                               BasicClause.new("Age", "30"),
                               BasicClause.new("City", "Carp"))

  friends = OrClause.new(frank_20_plus, dave_carp_30,
                         "Name = 'Gunther Schwinn'")

  res = friends.get_pair
  # res[0] == "(((Name LIKE ?) AND (Age > ?)) OR ((Name LIKE ?) AND (Age = ?) AND (City = ?)) OR (Name = ?))"
  # res[1] == ["Frank%", 20, "Dave%", 30, "Carp", "Gunther Schwinn"]

  # or

  res = friends.get_clause
  # res == "(((Name LIKE "Frank%") AND (Age > 20)) OR ((Name LIKE "Dave%") AND (Age = 30) AND (City = "Carp")) OR (Name = "Gunther Schwinn"))"

  DBI.connect('DBI:Mysql:test', 'test', 'password') { |dbh|
    dbh.prepareFixedSelect('Friends', '*', friends) { |sql|
      all = dbh.select_all(sql)
    }
  }

The file, "clauses.rb", puts the Clause classes in the DBI::SQL::Clauses
module, and adds a few methods to DBI.DatabaseHandle.

There are still some implementation details I'm trying to figure out, such
as how best to handle quoting of strings in the clauses -- they don't know
about any active DatabaseHandle so they can't use the current DB to quote.

You can get the file here: http://infofiend.com/dbi/clauses.rb

If anybody finds it useful I'd be happy to either put it up on RAA or roll
it into the DBI code somewhere.

Ben

--
Ben Giddings <bg...@in...>

From: Oliver M. B. <ol...@go...> - 2003-02-06 17:41:20

On Thu, Feb 06, 2003 at 05:21:00PM +0100, "Oliver M. Bolzer" <ol...@go...> wrote...
> A really quick glance into DBD::Pg shows that the type conversion is
> done according to type_map and the type of a column according to pg_result.type
>
> For example BIGINT[] gives type 1016, VARCHAR(255)[] gives 1015 as type
> on my system, but they are not contained in type_map, so they are converted
> to String....

I've done some more digging into PostgreSQL itself. After looking at how
DBD::Pg assembles its type_map and repeating the queries myself, I thought
that the pg_type table does not contain information about array types, but
consulting the PostgreSQL documentation, it seems like DBD::Pg's assembly
of type_map is wrong.

Currently, type_map is assembled using typname and typelem from pg_type.
The type id from PGResult.type(index) comes from pg_attribute.atttypid,
which is a reference to pg_type.oid, not pg_type.typelem. pg_type.typelem
is the type of the elements of an array. But because an INT[] array always
wraps INTs, the current code functions correctly.

Applying the following patch, the code still works the same way. Even the
mysterious 1114 timestamp type could be folded into the case block. IMHO,
this is more correct and could be the base for array support. What do you
think?

--- Pg.rb.orig	2003-02-06 18:32:10.000000000 +0100
+++ Pg.rb	2003-02-06 18:32:37.000000000 +0100
@@ -299,18 +299,18 @@
       @type_map = Hash.new
       @coerce = PgCoerce.new
-      res = @connection.exec("SELECT typname, typelem FROM pg_type")
+      res = @connection.exec("SELECT typname, oid FROM pg_type")
       res.result.each { |name, idstr|
         @type_map[idstr.to_i] =
           case name
-          when '_bool' then :as_bool
-          when '_int8', '_int4', '_int2' then :as_int
-          when '_varchar' then :as_str
-          when '_float4','_float8' then :as_float
-          when '_timestamp', '_timestamptz' then :as_timestamp
-          when '_date' then :as_date
-          when '_bytea' then :as_bytea
+          when 'bool' then :as_bool
+          when 'int8', 'int4', 'int2' then :as_int
+          when 'varchar' then :as_str
+          when 'float4','float8' then :as_float
+          when 'timestamp', 'timestamptz' then :as_timestamp
+          when 'date' then :as_date
+          when 'bytea' then :as_bytea
           else :as_str
           end
       }

--
Oliver M. Bolzer
ol...@go...

GPG (PGP) Fingerprint = 621B 52F6 2AC1 36DB 8761 018F 8786 87AD EF50 D1FF

From: Oliver M. B. <ol...@go...> - 2003-02-06 16:21:04

Hi!

First of all, thanks to everybody for maintaining this great DBI package.

I'm currently writing an Object Repository for Ruby (something similar to
Java Data Objects) and want to use PostgreSQL's array types in order to
save extra tables and the necessary joins for simple lists.

As it seems, DBD::Pg has no support for arrays and returns such values as
Strings in PostgreSQL's escaping (INT[] becomes "{ 1,2,3}", VARCHAR(x)[]
becomes "{foo,bar}"). Of course I could parse them myself, but has anybody
already done work adding support for arrays to DBD::Pg? If I was to do it,
where should I start?

A really quick glance into DBD::Pg shows that the type conversion is done
according to type_map and the type of a column according to pg_result.type

For example BIGINT[] gives type 1016, VARCHAR(255)[] gives 1015 as the type
on my system, but they are not contained in type_map, so they are converted
to String....

--
Oliver M. Bolzer
ol...@go...

GPG (PGP) Fingerprint = 621B 52F6 2AC1 36DB 8761 018F 8786 87AD EF50 D1FF

From: Michael N. <uu...@rz...> - 2003-01-25 11:36:51

Paul DuBois wrote:
> There is some code in dbi.rb that attempts to make driver-loading
> work in case-insensitive fashion. That is, it allows the driver
> name to be given in any lettercase, so that DSNs such as these
> are synonymous:
>
>   dbi:Mysql:test:localhost
>   dbi:mysql:test:localhost
>
> Ironically, this code works *except* on filesystems that are not
> case sensitive. :-) For example, the code does not work on HFS+
> filesystems on Mac OS X.
>
> Cause: Suppose you give a DSN of dbi:mysql:test:localhost.
> load_driver attempts to require 'DBD/mysql/mysql', which *works*
> on a non-case-sensitive file system. load_driver then assumes
> that DBI::DBD.const_get(mysql) can be used to get the appropriate
> module constant for generating the driver handle. That fails because
> the constant has to be Mysql.
>
> Solution: If const_get fails, look through all the available constants
> to see which one matches the given driver name in a caseless match.
> That gives you the constant with the proper capitalization.
>
> The patch below fixes the problem. It can be improved, I'm sure,
> but it does work for non-case-sensitive filesystems while preserving
> existing behavior for case-sensitive filesystems.

Thanks. But I think allowing case-insensitive DBD names was a big mistake,
and possibly it will get deprecated in the next major version.

Regards,

Michael

From: Michael N. <uu...@rz...> - 2003-01-25 11:33:54

Paul DuBois wrote:
> When you generate a statement handle for processing a statement,
> is it intended that you invoke finish only when the statement
> returns a result set, or also for statements such as INSERT, DELETE,
> etc.

This, of course, depends on the database. However, better to call finish.

> That is, which of these is better:
>
>   sth = dbh.prepare("INSERT ....")
>   sth.execute
>   sth.finish

This is even more secure:

  dbh.prepare("INSERT ....") do |sth|
    sth.execute
  end

> or:
>
>   sth = dbh.prepare("INSERT ....")
>   sth.execute
>   # (no finish call here)

If you do this 100 times or less in a loop, you'll probably run out of
cursors (Oracle-DB). Ruby's GC will reclaim memory and free the statements,
but you create cursors faster than the GC can reclaim them.

> Also, in the Perl DBI, use of finish is discouraged if you fetch
> all the rows of the result set, because it gets invoked for you and
> is redundant. It's used when you don't fetch everything and need
> to tell DBI explicitly to cancel. From my reading of the Ruby DBI
> docs, I don't get quite this sense about finish. Is it deprecated
> after a complete-fetch loop in the same way as for Perl?

This might get you into problems and is perhaps not the best programming
style, as you have to know the internals of DBI to understand what's going
on. And who knows, this might change in the future!

If you want to fetch all rows, use the select_all methods (with or without
an iterator). Otherwise better to call finish, even if it gets called
implicitly by the DBI (there's no real performance hit in doing so, it's
just another "if" stmt).

The best is, you use the block forms of the methods whenever possible, as
they call finish or cancel when required for you.

Regards,

Michael

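For reference, the block-form style Michael recommends looks like this end to end. A sketch against the classic Ruby DBI API, with DSN, credentials and table names as placeholders:

  require 'dbi'

  DBI.connect('dbi:Mysql:test:localhost', 'user', 'password') do |dbh|
    # The statement handle is finished automatically when the block exits.
    dbh.prepare("INSERT INTO people (name) VALUES (?)") do |sth|
      sth.execute("Alice")
    end

    # No explicit handle management at all for a full fetch.
    dbh.select_all("SELECT id, name FROM people") do |row|
      puts "#{row[0]}: #{row[1]}"
    end
  end
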
From: Paul D. <pa...@sn...> - 2003-01-24 23:37:50

There is some code in dbi.rb that attempts to make driver-loading work in
case-insensitive fashion. That is, it allows the driver name to be given in
any lettercase, so that DSNs such as these are synonymous:

  dbi:Mysql:test:localhost
  dbi:mysql:test:localhost

Ironically, this code works *except* on filesystems that are not case
sensitive. :-) For example, the code does not work on HFS+ filesystems on
Mac OS X.

Cause: Suppose you give a DSN of dbi:mysql:test:localhost. load_driver
attempts to require 'DBD/mysql/mysql', which *works* on a
non-case-sensitive file system. load_driver then assumes that
DBI::DBD.const_get(mysql) can be used to get the appropriate module
constant for generating the driver handle. That fails because the constant
has to be Mysql.

Solution: If const_get fails, look through all the available constants to
see which one matches the given driver name in a caseless match. That gives
you the constant with the proper capitalization.

The patch below fixes the problem. It can be improved, I'm sure, but it
does work for non-case-sensitive filesystems while preserving existing
behavior for case-sensitive filesystems.

--- dbi.rb.orig	Tue Oct 22 10:06:04 2002
+++ dbi.rb	Fri Jan 24 17:30:51 2003
@@ -460,8 +460,24 @@
       end

       found ||= driver_name
+
+      # on a filesystem that is not case-sensitive, the initial require
+      # attempt that loads the driver may succeed even though the lettercase
+      # of driver_name doesn't match the actual filename. If that happens,
+      # const_get will fail and it becomes necessary to look through the list
+      # of constants and look for a caseless match. The result of this
+      # match provides the constant with the proper lettercase -- which
+      # can be used to generate the driver handle.

-      dr = DBI::DBD.const_get(found.intern)
+      dr = nil
+      begin
+        dr = DBI::DBD.const_get(found.intern)
+      rescue NameError
+        # caseless look through the constants to find the actual constant
+        found = found.downcase
+        found = DBI::DBD.constants.find { |e| e.downcase == found }
+        dr = DBI::DBD.const_get(found.intern) unless found.nil?
+      end

       dbd_dr = dr::Driver.new
       drh = DBI::DriverHandle.new(dbd_dr)
       drh.trace(@@trace_mode, @@trace_output)

From: Paul D. <pa...@sn...> - 2003-01-24 20:52:28

The statement handle []= method for DBD::Mysql is defined like this:

  def []=(attr, value)
    case attr
    when 'mysql_use_result'
      @attr['mysql_store_result'] = ! value
      @attr['mysql_use_result'] = value
    when 'mysql_store_result'
      @attr['mysql_use_result'] = ! value
      @attr['mysql_store_result'] = value
    else
      raise NotSupportedError
    end
  end

However, mysql_store_result and mysql_use_result seem to be ignored
elsewhere in the driver. Is this something that got put in as a start
toward implementing the two fetch modes, but never finished?

(For example, fetch_scroll probably ought to raise an exception in
mysql_use_result mode for all scroll types except SQL_FETCH_NEXT...)

From: Paul D. <pa...@sn...> - 2003-01-24 19:02:20

When you generate a statement handle for processing a statement, is it
intended that you invoke finish only when the statement returns a result
set, or also for statements such as INSERT, DELETE, etc.?

That is, which of these is better:

  sth = dbh.prepare("INSERT ....")
  sth.execute
  sth.finish

or:

  sth = dbh.prepare("INSERT ....")
  sth.execute
  # (no finish call here)

Also, in the Perl DBI, use of finish is discouraged if you fetch all the
rows of the result set, because it gets invoked for you and is redundant.
It's used when you don't fetch everything and need to tell DBI explicitly
to cancel. From my reading of the Ruby DBI docs, I don't get quite this
sense about finish. Is it deprecated after a complete-fetch loop in the
same way as for Perl?

From: Paul D. <pa...@sn...> - 2003-01-24 18:56:47

MySQL has supported transactions for a while now, but DBD::Mysql doesn't
implement the Ruby DBI transaction abstraction. Has anyone else considered
how to do this? I have a patch for it, and it's working, but I'd be
interested in comparing notes with anyone else who's worked on this
problem...

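For readers who have not used it, the abstraction in question looks roughly like this from application code. A sketch against the classic Ruby DBI API, with DSN, credentials and table as placeholders:

  require 'dbi'

  DBI.connect('dbi:Mysql:test:localhost', 'user', 'password') do |dbh|
    dbh['AutoCommit'] = false

    # Block form: commits if the block finishes, rolls back if it raises.
    dbh.transaction do |h|
      h.do("INSERT INTO accounts (name, balance) VALUES (?, ?)", "alice", 100)
      h.do("UPDATE accounts SET balance = balance - 10 WHERE name = ?", "alice")
    end

    # Explicit form.
    begin
      dbh.do("DELETE FROM accounts WHERE name = ?", "bob")
      dbh.commit
    rescue DBI::DatabaseError
      dbh.rollback
    end
  end
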
From: Michael N. <uu...@rz...> - 2003-01-22 20:10:53

Paul DuBois wrote:
> The MySQL driver for Ruby DBI appears to *require* that you specify
> a database name when connecting. Is this part of the DBI spec itself
> (if so, I don't see it), or some quirk of the MySQL driver?

Not part of the DBI spec.

> I ask because there are perfectly good reasons to connect with no
> database name, e.g., if you just want to query the server for general
> status information. Perl DBI allows the database name to be unspecified.

This could also be achieved by calling (and of course implementing) a
driver-specific function (of the Mysql DriverHandle class). But Perl's
solution seems to be more natural.

Regards,

Michael

From: Michael N. <uu...@rz...> - 2003-01-22 11:22:34

Paul DuBois wrote:
> >Which version of AspectR are you using? Try the newest (0.3.5).
> >
> >Regards,
> >
> >  Michael
>
> Hm. I am using AspectR 0.3.5.
>
> My versions are:
>
>   Ruby MySQL module 2.4.3c
>   Ruby DBI 0.0.18
>   AspectR 0.3.5
>
> I modified the script to be a little simpler (and to connect successfully).
>
>   require "dbi"
>   require "dbi/trace"
>
>   DBI.trace(0)
>   dbh = DBI.connect('dbi:Mysql:database=test', 'paul', 'somepass')
>   dbh.disconnect
>
> I'd expect no output at all from the script, but this is what I get:
>
>   % ruby trace_test.rb
>   -> connect for #<DBI::DriverHandle:0x1842b0> ("database=test", "paul", "somepass", nil)
>   <- connect for #<DBI::DriverHandle:0x1842b0> = #<DBI::DatabaseHandle:0x173dfc>
>   -> disconnect for #<DBI::DatabaseHandle:0x173dfc> ()
>   <- disconnect for #<DBI::DatabaseHandle:0x173dfc> = nil
>
> If I modify it so that the trace level is 1, 2, or 3, the trace output
> is identical. Seems odd.
>
> I get the same results on Mac OS X and on RedHat 7.0 (other than that the
> object addresses are different, of course).
>
> Anything I can do to figure out what's going on here?

Very strange! And which version of Ruby do you use? I've only tested it
with 1.6.x versions. 1.7 and 1.8 versions may have problems.

Regards,

Michael
