sqlobject-discuss Mailing List for SQLObject
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
From: Ian B. <ia...@co...> - 2003-11-06 19:01:32

On Nov 6, 2003, at 12:44 PM, Frank Barknecht wrote:

> I have to feed my database via a CSV-file from the outside world.
> Loading the CSV via MySQL's "load data infile..." query takes a rather
> long time and, maybe more importantly, invalidates several SQLObject
> instances. This leads to errors in the Webware application I'm
> running, where SQLObjects are called, until the Webware AppServer
> eventually hangs. The outside query has a duration of about 10
> seconds for 55,000 rows.

Well, you could try locking SQLObject instance access while you're loading the CSV file, if there's a concurrency issue. I think you could do this in MySQLConnection, maybe in the _query method (i.e., check for a lock and wait until it is released).

> Now I'm contemplating strategies against this. I'm not quite sure
> where the problems lie, though. The only real "solution" I have is
> restarting the Webware AppServer, but I wonder if I could just as
> well "restart" my SQLObjects somehow.

Using 0.5 you should be able to .expire() the instances. DBConnection.Transaction.rollback does this, and you could probably separate that code out, maybe make an expireAll method on DBConnection instances. Ultimately that method should also find all sub-connections (i.e., transactions), each of which has its own cache and set of instances, but that's probably not an issue for you using MySQL.

> (And, slightly off-topic here, but I know many here use Webware: how
> do I restart Webware's AppServer in an elegant way from inside
> Webware?)

Look at the auto-reloader. I can't remember off the top of my head how it works, but I think it just sets a flag in the Application which causes it to restart. But if you have stalled threads this won't work; the only way to restart then is to kill -9.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
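The expireAll idea Ian proposes (sweep every cached instance and drop its column values so the next attribute access re-reads the database) can be sketched in isolation. This is not SQLObject's actual API — the Row and Connection classes below are hypothetical stand-ins illustrating the mechanism:

```python
class Row:
    """Hypothetical stand-in for an SQLObject instance with a column cache."""
    def __init__(self, rid, values):
        self.id = rid
        self._cache = dict(values)   # cached column values for this row

    def expire(self):
        # Drop cached values; a real instance would re-fetch on next access.
        self._cache.clear()


class Connection:
    """Hypothetical stand-in for a DBConnection holding an instance cache."""
    def __init__(self):
        self.cache = {}              # id -> Row, one cache per connection

    def get(self, rid, values=None):
        if rid not in self.cache:
            self.cache[rid] = Row(rid, values or {})
        return self.cache[rid]

    def expireAll(self):
        # What an expireAll method might do: expire every cached instance.
        for row in self.cache.values():
            row.expire()


conn = Connection()
row = conn.get(1, {"title": "stale value"})
conn.expireAll()
print(row._cache)   # {} -- values dropped, to be re-read from the database
```

The instances themselves stay in the cache (identity is preserved); only their column values are invalidated, which matches what Ian says `.expire()` does.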
From: Frank B. <fb...@fo...> - 2003-11-06 18:45:00

Hallo,

I have to feed my database via a CSV-file from the outside world. Loading the CSV via MySQL's "load data infile..." query takes a rather long time and, maybe more importantly, invalidates several SQLObject instances. This leads to errors in the Webware application I'm running, where SQLObjects are called, until the Webware AppServer eventually hangs. The outside query has a duration of about 10 seconds for 55,000 rows.

Now I'm contemplating strategies against this. I'm not quite sure where the problems lie, though. The only real "solution" I have is restarting the Webware AppServer, but I wonder if I could just as well "restart" my SQLObjects somehow.

(And, slightly off-topic here, but I know many here use Webware: how do I restart Webware's AppServer in an elegant way from inside Webware?)

Another solution might be to load the CSV into a temporary table and then run a SQLObject-controlled "select into real_table..."

Thanks for any helping insight!

ciao
-- Frank Barknecht _ ______footils.org__
From: Ian B. <ia...@co...> - 2003-11-05 18:36:01

On Nov 3, 2003, at 4:53 PM, Chris AtLee wrote:

> I found that adding '"IS": operator.eq' to operatorMap in
> SQLBuilder.py fixed this error, but I don't know if this will impact
> other DB connection types.

Yes, that looks right. I've committed it to CVS.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: James R. <jcr...@ya...> - 2003-11-05 18:13:39

Mauro,

I have a similar need. I have started to work on finishing (actually merging and testing) the Sybase stuff that is currently branched in CVS. Since Sybase is so close to MSSQL (or vice versa), the MSSQL SQLObject DBConnection will probably just "point to, extend from, or contain" the Sybase DBConnection.

> > So, I'd really like to use it on Windows (alas, this is the platform
> > of choice where I work) to access SQL Server and Oracle DBs. Right now
> > we use ODBC for it all, since we can live with it and its rather

As far as ODBC is concerned, my first thought was to have a simple wrapper layer around the other drivers (where you'd pass in the class for the type of server you plan on talking to), but I have since been experimenting with a Python driver for MSSQL (http://pymssql.sourceforge.net/). So I haven't done anything with ODBC yet.

> > MS-SQL and Oracle in the past, so I was wondering at how much would be
> > needed still to make it work. If it isn't too much, I may be able to
> > lend you a hand.
> >
> > Or, at least I could be a beta tester ;-)

Long story short: I will probably commit a unit-tested Sybase/MSSQL DBConnection within the next week or so.

James
From: Randall R. <ra...@ra...> - 2003-11-05 04:03:57

Does using a negative for orderBy work in 0.5? I'm not sure what I'm doing wrong here, if anything:

    >>> views = AdView.select(orderBy="-id")
    >>> len(views)
    43
    >>> views[0]
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/SQLObject/SQLObject.py", line 1175, in __getitem__
        return list(self.clone(start=start, end=start+1))[0]
      File "/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/SQLObject/DBConnection.py", line 168, in _iterSelect
        cursor.execute(query)
    psycopg.ProgrammingError: ERROR:  Non-integer constant in ORDER BY

    SELECT ad_view.id, ad_view.ad_id, ad_view.internet_address,
    ad_view.view_time, ad_view.hit_time FROM ad_view
    WHERE 1 = 1 ORDER BY 'id' DESC LIMIT 1

Everything seems in order except for the quotes around 'id'. This happened with a 0.4x I had, which was why I just upgraded to 0.5.

Note for those upgrading to 0.5: delete the old installation first, or you'll get lots of errors about imports. :) Perhaps this is common knowledge...

-- Randall Randall ra...@ra...
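The error comes down to those quotes: `'id'` is a string *constant*, not a column reference, so the ORDER BY has nothing to sort on (Postgres rejects it; other engines silently accept it and sort by a constant). A small illustration of the difference using stdlib sqlite3, which tolerates the broken form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ad_view (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ad_view (id) VALUES (?)", [(1,), (2,), (3,)])

# ORDER BY 'id' sorts by the same string constant for every row, so the
# DESC accomplishes nothing (Postgres refuses it outright, as in the traceback).
by_constant = [r[0] for r in conn.execute(
    "SELECT id FROM ad_view ORDER BY 'id' DESC")]

# The unquoted identifier sorts by the column, which is what "-id" intends:
by_column = [r[0] for r in conn.execute(
    "SELECT id FROM ad_view ORDER BY id DESC")]
print(by_column)   # [3, 2, 1]
```

So the bug is in how SQLObject renders the `"-id"` string into SQL: it should emit a bare (or double-quoted) identifier, never a single-quoted literal.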
From: Luke O. <lu...@me...> - 2003-11-04 23:10:27

Yes, we do just this (with Postgres) using DBPool, in order to share the same connections for our SQLObject and non-SQLObject functionality. You just need to override a few functions. (We create the DBPools in a separate module that handles creating one on the fly for DSNs we haven't seen yet; the module-global variable _stores hands these out. You could put similar functionality to create the pool into the Connection's code below if you weren't sharing with non-SQLObject code.) Can't speak for how DBPool compares to SQLObject's native pooling.

- Luke

------------> code

    from SQLObject import PostgresConnection as PC
    from PostgresPool import _stores

    class PostgresConnection(PC):
        def getConnection(self):
            return _stores.getConnection(self.dsn)

        def releaseConnection(self, conn):
            conn.close()

        def makeConnection(self):
            pass

Quoting Frank Barknecht <fb...@fo...>:

> Hallo,
>
> when running an SQLObject-enhanced Webware application, is it possible,
> and does it make sense, to "pool" the database connection the way
> Webware's DBPool does? I'm especially considering this for MySQL.
>
> ciao
> -- Frank Barknecht _ ______footils.org__
From: Frank B. <fb...@fo...> - 2003-11-04 21:33:44

Hallo,

when running an SQLObject-enhanced Webware application, is it possible, and does it make sense, to "pool" the database connection the way Webware's DBPool does? I'm especially considering this for MySQL.

ciao
-- Frank Barknecht _ ______footils.org__
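What a DBPool-style pool does can be reduced to a few lines: keep a fixed set of open connections, hand one out on request, and take it back when the caller is done. A minimal sketch (not DBPool's actual implementation; sqlite3 stands in for the MySQL driver):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal connection pool: hand out idle connections, take them back."""
    def __init__(self, factory, size=5):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())   # open all connections up front

    def get(self):
        return self._idle.get()         # blocks if the pool is exhausted

    def release(self, conn):
        self._idle.put(conn)            # return the connection for reuse


pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
conn = pool.get()
print(conn.execute("SELECT 1").fetchone())   # (1,)
pool.release(conn)
```

The payoff is amortizing connection setup cost across requests, which matters more for Postgres (expensive connects) than for MySQL (cheap connects) — one reason the answer to "does it make sense" is database-dependent.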
From: Ian B. <ia...@co...> - 2003-11-04 15:42:59

I'm not actually involved with SQL Server or Oracle; other people are working on them. Perhaps they can say more.

On Nov 4, 2003, at 9:20 AM, Mauro Cicognini wrote:

> Hi,
> I just read your release announcement on Python-announce and perused
> the SQLObject docs a bit.
>
> I found it all extremely interesting; in fact it is in large part
> exactly something I've always thought to write whenever I got the time
> (i.e. never, actually ;-). I particularly like the new-style feeling
> of it all, and the fact that it's all as much OO as possible.
>
> So, I'd really like to use it on Windows (alas, this is the platform
> of choice where I work) to access SQL Server and Oracle DBs. Right now
> we use ODBC for it all, since we can live with it and its rather
> suboptimal performance (no, no mxODBC either; it's for-pay for
> commercial users like us, and sadly my management isn't ready to fork
> over money to M.A. Lemburg, although he'd much deserve it).
>
> To sum it up, I read that there's been some interest and work on
> MS-SQL and Oracle in the past, so I was wondering how much would still
> be needed to make it work. If it isn't too much, I may be able to
> lend you a hand.
>
> Or, at least I could be a beta tester ;-)
>
> Thanks in advance for your response and best regards,
>
> Mauro Cicognini

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2003-11-04 02:23:36

On Nov 3, 2003, at 1:01 PM, Frank Barknecht wrote:

>> As a result I actually got rid of __len__ at some point (hmm... maybe
>> between 0.4 and 0.5, though if so I forgot to document it), and
>> replaced it with a .count() method so database calls wouldn't be
>> implicitly triggered. Sadly this ruins truth-testing, though maybe
>> that was dangerous too.
>>
>> Anyway, this shouldn't happen in 0.5.
>
> Hm, I'm using current CVS, and there SQLObject.py still has a __len__,
> but no "count" method. I actually don't use a pure list in my code
> but a class called "ProductDisplay" derived from list. With some
> little changes I could make do without inheriting from list: I only use
> "for x in productDisplay_instance: do things", which should be possible
> without ProductDisplay being a list. Would going this route help?

You're right, I didn't remove __len__. But I fully intended to ;) Yes, if you just iterate over SelectResults it should work fine. Even if you just use:

    def makelist(v):
        result = []
        for val in v:
            result.append(val)
        return result

That should save you from excessive counts. I plan to release 0.5.1 soon, to fix a couple little bugs. I'll probably add .count() in there too.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Chris A. <ch...@at...> - 2003-11-03 22:53:49

This code:

    from SQLObject import *

    conn = DBMConnection(".")

    class TestMe(SQLObject):
        _connection = conn
        testCol = StringCol(default = None)

    TestMe.dropTable(ifExists = True)
    TestMe.createTable(ifNotExists = True)
    t = TestMe.new()
    print list(TestMe.select(TestMe.q.testCol == None))

fails with this error:

    Traceback (most recent call last):
      File "test.py", line 14, in ?
        print list(TestMe.select(TestMe.q.testCol == None))
      File "/usr/lib/python2.3/site-packages/SQLObject/DBConnection.py", line 1216, in next
        if SQLBuilder.execute(self.select.clause, self):
      File "/usr/lib/python2.3/site-packages/SQLObject/SQLBuilder.py", line 81, in execute
        return expr.execute(executor)
      File "/usr/lib/python2.3/site-packages/SQLObject/SQLBuilder.py", line 243, in execute
        return operatorMap[self.op.upper()](execute(self.expr1, executor),
    KeyError: 'IS'

I found that adding '"IS": operator.eq' to operatorMap in SQLBuilder.py fixed this error, but I don't know if this will impact other DB connection types.

Cheers,
Chris
-- Chris AtLee <ch...@at...>
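The fix works because the DBM backend evaluates WHERE clauses in Python via a dispatch table mapping SQL operator names to Python functions, and `IS` (generated for comparisons against None/NULL) simply had no entry. The pattern in isolation — this is a hypothetical mirror of SQLBuilder's operatorMap, not the real table:

```python
import operator

# Hypothetical mirror of SQLBuilder's operatorMap: SQL operator -> Python function.
operatorMap = {
    "=": operator.eq,
    "<>": operator.ne,
    "IS": operator.eq,   # Chris's one-line fix: "x IS NULL" -> x == None
}

def evaluate(op, left, right):
    """Evaluate a SQL comparison in Python, as the DBM backend does."""
    return operatorMap[op.upper()](left, right)

print(evaluate("is", None, None))    # True  -- "testCol IS NULL" now matches
print(evaluate("is", "abc", None))   # False
```

Mapping `IS` to plain equality is safe here because the Python-side evaluator only sees `IS` paired with None, where `==` and SQL's NULL-aware `IS` agree.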
From: Frank B. <fb...@fo...> - 2003-11-03 19:01:30

Hallo,
Ian Bicking hat gesagt: // Ian Bicking wrote:

> On Nov 3, 2003, at 10:51 AM, Frank Barknecht wrote:
> There's a wonky behavior in list() (and maybe various related methods)
> where it will get the length before iterating over the value,
> presumably so it can pre-allocate space. Drove me nuts, because it
> ignores any exceptions and so seems very magical.
>
> As a result I actually got rid of __len__ at some point (hmm... maybe
> between 0.4 and 0.5, though if so I forgot to document it), and
> replaced it with a .count() method so database calls wouldn't be
> implicitly triggered. Sadly this ruins truth-testing, though maybe
> that was dangerous too.
>
> Anyway, this shouldn't happen in 0.5.

Hm, I'm using current CVS, and there SQLObject.py still has a __len__, but no "count" method. I actually don't use a pure list in my code but a class called "ProductDisplay" derived from list. With some little changes I could make do without inheriting from list: I only use "for x in productDisplay_instance: do things", which should be possible without ProductDisplay being a list. Would going this route help?

ciao
-- Frank Barknecht _ ______footils.org__
From: Ian B. <ia...@co...> - 2003-11-03 18:00:01

On Nov 3, 2003, at 10:51 AM, Frank Barknecht wrote:

> Hallo,
>
> while tuning an application, I found that a "count(*)" takes rather
> long. This in itself would be okay, but somehow SQLObject counts two
> times, where one should be enough. The code in question is this:
>
>     # Normal is my SQLObject
>     # self._q.sql holds the current query, for example:
>     #   (normal.title LIKE '%beat%')
>
>     allResults = len(Normal.select(self._q.sql))
>     print "--- after first count"
>     ps = list(Normal.select(self._q.sql)[0,10])
>     print "--- after second count"
>
>     # do something with "p in ps" here...
>
> This results in the following queries by SO:
>
>     QueryOne: SELECT COUNT(*) FROM normal WHERE (normal.artist LIKE '%beat%')
>     COUNT results: 163
>     --- after first count
>     QueryOne: SELECT COUNT(*) FROM normal WHERE (normal.artist LIKE '%beat%')
>     COUNT results: 163
>     Select: SELECT normal.id, normal.description, ... FROM normal
>         WHERE (normal.artist LIKE '%beat%') LIMIT 10
>     --- after second count
>
> I'd expect that "len(Normal.select(self._q.sql))" will do a count(*),
> but why the second one? I tried to force a single limit-query by using
> list(). How could I get at the count of all results without doing
> several SQLs for each result?

There's a wonky behavior in list() (and maybe various related methods) where it will get the length before iterating over the value, presumably so it can pre-allocate space. Drove me nuts, because it ignores any exceptions and so seems very magical.

As a result I actually got rid of __len__ at some point (hmm... maybe between 0.4 and 0.5, though if so I forgot to document it), and replaced it with a .count() method so database calls wouldn't be implicitly triggered. Sadly this ruins truth-testing, though maybe that was dangerous too.

Anyway, this shouldn't happen in 0.5.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Frank B. <fb...@fo...> - 2003-11-03 16:52:01

Hallo,

while tuning an application, I found that a "count(*)" takes rather long. This in itself would be okay, but somehow SQLObject counts two times, where one should be enough. The code in question is this:

    # Normal is my SQLObject
    # self._q.sql holds the current query, for example:
    #   (normal.title LIKE '%beat%')

    allResults = len(Normal.select(self._q.sql))
    print "--- after first count"
    ps = list(Normal.select(self._q.sql)[0,10])
    print "--- after second count"

    # do something with "p in ps" here...

This results in the following queries by SO:

    QueryOne: SELECT COUNT(*) FROM normal WHERE (normal.artist LIKE '%beat%')
    COUNT results: 163
    --- after first count
    QueryOne: SELECT COUNT(*) FROM normal WHERE (normal.artist LIKE '%beat%')
    COUNT results: 163
    Select: SELECT normal.id, normal.description, ... FROM normal
        WHERE (normal.artist LIKE '%beat%') LIMIT 10
    --- after second count

I'd expect that "len(Normal.select(self._q.sql))" will do a count(*), but why the second one? I tried to force a single limit-query by using list(). How could I get at the count of all results without doing several SQLs for each result?

ciao
-- Frank Barknecht _ ______footils.org__
From: Ian B. <ia...@co...> - 2003-11-01 21:39:57

SQLObject 0.5
=============

SQLObject is an Object-Relational Mapper (ORM), wrapping your database tables in easy-to-use Python classes.

Changes
-------

* Firebird support.
* Generic conversion routines (makes a BoolCol possible).
* Non-integer primary keys.
* Instance (row) caches can be synced or expired relative to the database.
* Many transaction-related bugs fixed.
* And smaller stuff too.

Where
-----

Home: http://sqlobject.org
Changes: http://sqlobject.org/docs/News.html
Docs: http://sqlobject.org/docs/SQLObject.html
Download: http://prdownloads.sf.net/sqlobject/SQLObject-0.5.tar.gz?download

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2003-11-01 21:08:24

I'm getting ready to release 0.5 and doing some last testing. Firebird (kinterbasdb) seems to return read-write buffers instead of Python strings. Very strange. Anyway, I'll probably just ignore that for the release, but what's up with that? We probably don't need to deal with it, but it's wonky and maybe we should (or maybe at least include an option to str-ify such results).

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
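The "option to str-ify such results" Ian floats would amount to a small coercion pass over each fetched row. A sketch of the idea — the function name is hypothetical, and Python 3's memoryview stands in here for the read-write buffer objects the Python 2 kinterbasdb driver returned:

```python
def strify(value, encoding="utf-8"):
    """Coerce buffer-like driver results to str; pass everything else through."""
    if isinstance(value, (memoryview, bytearray, bytes)):
        return bytes(value).decode(encoding)
    return value

# A row as a driver might return it: an int, a buffer-like string, a NULL.
row = (1, memoryview(b"hello"), None)
print(tuple(strify(v) for v in row))   # (1, 'hello', None)
```

Applying this per-column in the connection's fetch path would hide the driver quirk from SQLObject instances without touching the driver itself.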
From: Ian B. <ia...@co...> - 2003-10-31 17:43:26

On Oct 31, 2003, at 11:38 AM, Bruno Trevisan wrote:

> If you don't want to go like this, you can create that field as an
> integer and manually create a sequence. Then, you can "SELECT
> nextval('sequence')" to get the next id and use it to create a new
> record.

Yes, you'll have to write this SQL manually, and then use the output when you call .new() (passing id in). I don't know of a way to do this with MySQL or SQLite, which don't have explicit sequences.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Bruno T. <bt...@as...> - 2003-10-31 17:38:37

Hi Sidnei,

> I was needing a way to figure out the next id that will be generated
> by an insert using Postgres, and stumbled over queryInsertID. From
> what I can see, that one gives the last inserted id. Now the question
> is: is there an easy way to get the *next id*? It seems I need to do a
> CREATE SEQUENCE if the table was just created/doesn't exist yet, but
> I'm not the Postgres guru here, so I'm not sure.

Postgres has a special type called 'serial'. If you create a column with this type set, let's say customer_id, Postgres will create a sequence called customer_id_seq and will set a DEFAULT constraint with nextval('customer_id_seq') on that column. When you do an INSERT and you don't pass in a value for that field, the DEFAULT will be used, thus incrementing the sequence.

If you don't want to go like this, you can create that field as an integer and manually create a sequence. Then, you can "SELECT nextval('sequence')" to get the next id and use it to create a new record.

[]'s
Bruno Trevisan |=| Async Open Source |=| D. Alexandrina, 2534
bt...@as... |=| http://www.async.com.br/ |=| +55 16 261-2331
|=| +55 16 9781-8717 |=| 13566-290 |=| São Carlos, SP, Brasil
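Both options Bruno describes, as a SQL sketch (table, column, and sequence names are illustrative, not anything from the thread's actual schema):

```sql
-- Option 1: a serial column. Postgres creates customer_id_seq and a
-- DEFAULT nextval('customer_id_seq') constraint automatically, so an
-- INSERT that omits customer_id advances the sequence itself.
CREATE TABLE customer (
    customer_id serial PRIMARY KEY,
    name        text
);

-- Option 2: a plain integer column plus a manual sequence, so the
-- application can reserve the next id *before* the INSERT.
CREATE SEQUENCE order_id_seq;
CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    note     text
);

SELECT nextval('order_id_seq');                 -- e.g. returns 1
INSERT INTO orders (order_id, note) VALUES (1, 'using the reserved id');
```

Option 2 is the one that answers Sidnei's question: `nextval` both returns and consumes the id, so two concurrent callers can never be handed the same value.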
From: Sidnei da S. <si...@pl...> - 2003-10-31 15:17:42

Hello folks,

I was needing a way to figure out the next id that will be generated by an insert using Postgres, and stumbled over queryInsertID. From what I can see, that one gives the last inserted id. Now the question is: is there an easy way to get the *next id*? It seems I need to do a CREATE SEQUENCE if the table was just created/doesn't exist yet, but I'm not the Postgres guru here, so I'm not sure.

Clues?

[]'s
-- Sidnei da Silva <si...@aw...> dreamcatching :: making your dreams come true http://awkly.org
Bus error -- driver executed.
From: James R. <jcr...@ya...> - 2003-10-29 18:31:56

> > My questions:
> > 1) What do you think of (me) making some changes to DBConnection to
> > support an ODBCConnection (with minor alterations to all of the
> > connection classes to allow use/wrapping as a "dialect")?
>
> What kind of flexibility are you looking for? Someone else using ODBC
> (Magnus Lycka?) wanted to keep parameters separate so that the database
> driver could handle the quoting. Maybe the same issue as you are
> having.

Agreed, the quoting can be handled elsewhere (inside of Converters.py?). But you still have the issue of the specific SQL "dialects." So my first thought was to wrap the already-existing *Connection classes inside of an ODBCConnection class, i.e.:

    class ODBCConnection:
        def __init__(self, other_conn_clz, ...):
            odbc = do_odbc_try_catches_for_mx_etc()
            # wrapped=True would mean for the ?Connection
            # not to try and load its own driver params
            # during the __init__ call
            self.dialect = other_conn_clz(wrapped=True)
            self._queryInsertID = self.dialect._queryInsertID
            ...
            setup_other_stuff_here_too()

        def makeConnection(self):
            return odbc.do_your_stuff()

My second thought (trust me, there is no basis in reality for either one of these, yet) is to possibly use "classmethod(s)" instead of instantiating the wrapped class. Either way, a small alteration to the existing *Connection classes would be necessary.

> Another option would be to use the database name to greater effect.

If I understand you correctly, that seems like it could be similar to the above (but using the Connection name)... Or not, if you are talking about code similar to what Col.py uses (i.e. _firebirdInsertID(...)).

> > do you mind the addition of an MSSQLConnection (really it would be
> > more like "a completion" of the Sybase stuff currently branched in
> > cvs)?
>
> Yeah, that's fine. I just can't support non-OSS databases.
>
> The stuff doesn't really have to be a branch, I don't think, though
> maybe it touches things more than I'm expecting.

I will research & test before I commit (we have ASE 11.9.2 and MSSQL2k here).

> > BTW, I found a semi-solution to the kinterbasdb iterator issue, but it
> > only added to the firebird autocommit "hack." The solution involves
> > setting retaining=1...
>
> What is retain supposed to mean?

From the kinterbasdb page: "The commit and rollback methods of kinterbasdb.Connection accept an optional boolean parameter retaining (default False) to indicate whether to recycle the transactional context of the transaction being resolved by the method call. If retaining is True, the infrastructural support for the transaction active at the time of the method call will be 'retained' (efficiently and transparently recycled) after the database server has committed or rolled back the conceptual transaction."

I don't know if you recall the mindless drivel that I submitted a while back on the "transactional nature of Firebird." But the "cursor unknown" is caused by a select being part of a transaction. Why? I'm glad you asked: when you do a "rollback" w/o the retaining=1, it also rolls back the "select transaction."

James
From: Ian B. <ia...@co...> - 2003-10-29 05:05:26

I copied this to the list because this is exactly the sort of discussion the list is for...

On Tuesday, October 28, 2003, at 10:25 PM, James Ralston wrote:

> Ian,
>
> I've just been tasked with the unpleasant job of writing a web-based
> application using MSSQL as the backend database. To add insult to
> injury, it has to run on Solaris. The server will probably have an
> ODBC driver installed to make the leap. Long story short: if I am
> able to do the app in Python, I would not be able to use SQLObject in
> its current form.
>
> My questions:
> 1) What do you think of (me) making some changes to DBConnection to
> support an ODBCConnection (with minor alterations to all of the
> connection classes to allow use/wrapping as a "dialect")?

What kind of flexibility are you looking for? Someone else using ODBC (Magnus Lycka?) wanted to keep parameters separate so that the database driver could handle the quoting. Maybe the same issue as you are having.

Another option would be to use the database name to greater effect.

> 2) I know you prefer OSS db's (as do I), but do you mind the addition
> of an MSSQLConnection (really it would be more like "a completion" of
> the Sybase stuff currently branched in cvs)?

Yeah, that's fine. I just can't support non-OSS databases.

The stuff doesn't really have to be a branch, I don't think, though maybe it touches things more than I'm expecting.

> BTW, I found a semi-solution to the kinterbasdb iterator issue, but it
> only added to the firebird autocommit "hack." The solution involves
> setting retaining=1 on the conn.commit call (or adding an even more
> hackish shouldRetain attribute to the _iterSelect call in the DBAPI
> class). I played with it on my machine, but I am reluctant to let
> this code be seen in public :-)

What is retain supposed to mean?

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2003-10-27 23:37:43

On Monday, October 27, 2003, at 04:47 PM, Sidnei da Silva wrote:

> The tricky part for me is that I'm not using the SQLObject Transaction
> class, but instead, I'm wrapping DBConnection on a Zope-specific
> Transaction class. :)

Okay, just be sure to look at how SQLObject's Transaction class works. You could probably subclass from it. It's not too complicated, but it is a little funny (maybe in a bad way, in particular Transaction.__getattr__).

> | >AFAICS, in the current implementation, the state of 'x' from
> | >transaction '2' would override the state of 'x' as saved by
> | >transaction '1'. Does that sound correct?
> |
> | To clarify, we're talking about "object with id x", which is actually
> | two instances.
> |
> | Yes, transaction 2 will just clobber 1. This is what other people have
> | talked about as optimistic locking, and it requires adding special
> | support in SQLObject.
> |
> | I think it will mean something like a magic last_changed column in
> | SQLObject tables, which gets updated each time a change is made (but
> | you can't update it manually). Then each time you do an update you add
> | "WHERE id = <my ID> AND last_changed = <my expectation of
> | last_changed>". Then you test how many rows your query affected
> | (cursor.rowcount I think, but this might be database-driver specific),
> | and if you get 0 then you raise an exception. last_changed could
> | probably be an integer, with the value of time.time()*1000 (will that
> | get too big for databases?).
>
> Probably yes. I think that requiring a last_changed column in each
> table may not be a good idea, though. As long as we just care for
> changes that happen when two transactions overlap, we could just have
> a module-level registry of objects that changed in a transaction; then
> we wouldn't need any change to the databases at all. Note that if
> there are two separate transactions which don't overlap, last_changed
> is meaningless, so not useful.
>
> <snip part about connections x transactions>

Yes, that would be a lot easier. The disadvantage being if you want non-SQLObject code to update the database, or if you are using multiple processes. But that's mostly just something to note in the documentation for a feature like this -- the expedient solution doesn't preclude the option of a database solution. If so, then the records should go in the DBConnection object (which all the related transactions will share).

> | Yes, fetching the same object within the same transaction will give you
> | back the object that has already been created, and is pretty quick to
> | do. If a transaction implicitly is restarted after a commit or
> | rollback, then you can even reuse the transaction and have the benefit
> | of a valid cache.
>
> Amazing. There's one last question now: currently, is the transaction
> implicitly restarted or not?

I don't think so. It doesn't do a BEGIN, at least. Probably what should happen is that commit or rollback should "close" the transaction. But in doing that, you can put the transaction back into a pool of transactions, and DBConnection.transaction can fetch from that pool.

No, actually it should be separate. You can commit (or rollback) a transaction, then continue using the transaction (assuming a BEGIN is sent). But once Zope ends its transaction, the SQLObject transaction should be "closed", which returns it back to a pool of transactions which can be reused. An option to DBConnection can control whether transactions are pooled, or just thrown away when they are returned via a close.

> OT: I changed my code to have a connection cache of one connection per
> thread, so I guess this is going to be just enough for my needs *and*
> still have the benefit of a connection cache that lasts through
> transactions. I have to check more thoroughly that the rollback is
> clearing the instance cache though, because I'm using my own
> Transaction class.

One-per-thread would also be an alternative to pooling.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
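The last_changed scheme Ian sketches — add a version column to the WHERE clause and treat zero affected rows as a conflict — is the classic optimistic-locking pattern, and it can be shown end-to-end with stdlib sqlite3 (the table and helper below are illustrative, not SQLObject code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, last_changed INTEGER)")
conn.execute("INSERT INTO person VALUES (1, 'original', 100)")

def update_name(conn, row_id, new_name, expected_version, new_version):
    """Optimistic update: succeed only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE person SET name = ?, last_changed = ? "
        "WHERE id = ? AND last_changed = ?",
        (new_name, new_version, row_id, expected_version))
    if cur.rowcount == 0:            # the version check in the WHERE failed
        raise RuntimeError("concurrent modification detected")

update_name(conn, 1, "first writer", expected_version=100, new_version=101)
try:
    # A second writer still holding the stale version 100 now fails cleanly:
    update_name(conn, 1, "second writer", expected_version=100, new_version=102)
except RuntimeError as e:
    print(e)   # concurrent modification detected
```

As Ian notes, `cursor.rowcount` is the load-bearing piece, and its reliability for UPDATE statements is something each database driver has to be checked for.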
From: Sidnei da S. <si...@aw...> - 2003-10-27 23:00:12
|
| >Ok, I think I'm starting to understand what fits where now. I think
| >that the problem is that I was using the same connection for different
| >transactions here.
|
| The terminology is confusing, I have to change it sometime. A
| DBConnection can be used directly. Or you can create a transaction
| from the DBConnection. The DBConnection still controls that
| transaction, but the transaction takes the place of a connection for
| the SQLObject instances. "Connection" is a confusing word -- each
| transaction is a connection. Each transaction is also incidentally
| tied to its own database connection, but they all share one
| DBConnection (subclass) instance. This makes it sound complicated, but
| it's just the poor terminology.

The tricky part for me is that I'm not using the SQLObject Transaction
class; instead, I'm wrapping DBConnection in a Zope-specific
Transaction class. :)

| >1. The transaction is rolled back, the objects are in an invalid state
| >and should be purged from the cache
|
| This basically happens. Instances aren't purged from the
| connection-level cache, but column values are purged from each
| instance's personal cache (that's what .expire() does).
|
| >2. The transaction is committed, the objects are valid and the cache
| >is valid.
|
| Yes.

Great!

| >AFAICS, in the current implementation, the state of 'x' from
| >transaction '2' would override the state of 'x' as saved by
| >transaction '1'. Does that sound correct?
|
| To clarify, we're talking about "object with id x", which is actually
| two instances.
|
| Yes, transaction 2 will just clobber 1. This is what other people have
| talked about as optimistic locking, and it requires adding special
| support in SQLObject.
|
| I think it will mean something like a magic last_changed column in
| SQLObject tables, which gets updated each time a change is made (but
| you can't update it manually). Then each time you do an update you add
| "WHERE id = <my ID> AND last_changed = <my expectation of
| last_changed>". Then you test how many rows your query affected
| (cursor.rowcount I think, but this might be database driver specific),
| and if you get 0 then you raise an exception. last_changed could
| probably be an integer, with the value of time.time()*1000 (will that
| get too big for databases?).

Probably yes. I think that requiring a last_changed column in each
table may not be a good idea, though. As long as we only care about
changes that happen when two transactions overlap, we could just have a
module-level registry of objects that changed in a transaction; then we
wouldn't need any change to the databases at all. Note that if there
are two separate transactions which don't overlap, last_changed is
meaningless, so not useful.

<snip part about connections x transactions>

| Yes, fetching the same object within the same transaction will give
| you back the object that has already been created, and is pretty quick
| to do. If a transaction is implicitly restarted after a commit or
| rollback, then you can even reuse the transaction and have the benefit
| of a valid cache.

Amazing. There's one last question now: is the transaction currently
restarted implicitly or not?

OT: I changed my code to have a connection cache of one connection per
thread, so I guess this is going to be just enough for my needs *and*
still have the benefit of a connection cache that lasts through
transactions. I have to check more thoroughly that the rollback is
clearing the instance cache, though, because I'm using my own
Transaction class.

-- 
Sidnei da Silva <si...@aw...>
dreamcatching :: making your dreams come true
http://awkly.org

Any sufficiently advanced bug is indistinguishable from a feature.
-- Rich Kulawiec
|
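The one-connection-per-thread cache mentioned at the end of the message above can be sketched with Python's threading.local. The names get_connection and make_connection are hypothetical; make_connection stands in for whatever actually opens a DBConnection in your setup:

```python
import threading

# Thread-local storage: each thread sees its own "connection" attribute.
_local = threading.local()


def make_connection():
    # Placeholder for real connection setup (e.g. opening a DBConnection);
    # here it just returns a distinct object per call.
    return object()


def get_connection():
    """Return this thread's cached connection, creating it on first use."""
    conn = getattr(_local, "connection", None)
    if conn is None:
        conn = make_connection()
        _local.connection = conn
    return conn
```

Within one thread every call returns the same cached object, so the cache survives across transactions; a different thread gets its own connection.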
From: Ian B. <ia...@co...> - 2003-10-27 22:39:28
|
On Monday, October 27, 2003, at 03:35 PM, Sidnei da Silva wrote:

> On Mon, Oct 27, 2003 at 03:16:51PM -0600, Ian Bicking wrote:
> | I think there's maybe some confusion about the different objects.
> |
> | First, caching happens per-connection. Which means each transaction
> | has its own cache, if you are using transactions. If you create an
> | object by a certain ID in a transaction, any attempt to get that
> | object by that ID from the same transaction will give you back the
> | same object you got before, guaranteed. But in a different
> | transaction you will get a different object, since the two
> | transactions are isolated.
>
> Ok, I think I'm starting to understand what fits where now. I think
> that the problem is that I was using the same connection for different
> transactions here.

The terminology is confusing, I have to change it sometime. A
DBConnection can be used directly. Or you can create a transaction
from the DBConnection. The DBConnection still controls that
transaction, but the transaction takes the place of a connection for
the SQLObject instances. "Connection" is a confusing word -- each
transaction is a connection. Each transaction is also incidentally
tied to its own database connection, but they all share one
DBConnection (subclass) instance. This makes it sound complicated, but
it's just the poor terminology.

> | It just occurred to me there's some sloppiness when it comes to
> | committing or rolling back a transaction. The objects stick around,
> | but once you've finalized the transaction I don't know what use they
> | are. Those instances should actually be invalid. Unless you assume
> | that by committing or rolling back a transaction, you are implicitly
> | restarting the transaction and can keep using it. Anyway, I don't
> | think this applies to this circumstance, but it should be worked out.
>
> If by 'those instances' you mean the instances involved in the current
> transaction, I think that there are two cases, assuming that both
> commit and rollback start a new transaction implicitly, which I think
> is a good thing (albeit a bit magical):
>
> 1. The transaction is rolled back, the objects are in an invalid state
>    and should be purged from the cache

This basically happens. Instances aren't purged from the
connection-level cache, but column values are purged from each
instance's personal cache (that's what .expire() does).

> 2. The transaction is committed, the objects are valid and the cache
>    is valid.

Yes.

> | So -- attribute changes immediately update the database. They also
> | update the internal cache of that column. Other instances that are
> | in other transactions are not affected. They aren't affected when
> | you commit the transaction. It seems like at some point they should
> | be affected (expired, perhaps), but I'm not sure where.
>
> Ok, so this is what I had in mind, and where it starts getting tricky.
> A situation like this would cause a conflict error in ZODB:
>
> - transaction 1 starts, with object x
> - transaction 2 starts, with object x
> - transaction 1 changes object x and commits
> - transaction 2 changes object x and tries to commit
> - before committing, the state of object x from transaction 2 is
>   tested against the committed object, and it is detected that the
>   state changed during transaction 2
> - a ConflictError is raised
>
> AFAICS, in the current implementation, the state of 'x' from
> transaction '2' would override the state of 'x' as saved by
> transaction '1'. Does that sound correct?

To clarify, we're talking about "object with id x", which is actually
two instances.

Yes, transaction 2 will just clobber 1. This is what other people have
talked about as optimistic locking, and it requires adding special
support in SQLObject.

I think it will mean something like a magic last_changed column in
SQLObject tables, which gets updated each time a change is made (but
you can't update it manually). Then each time you do an update you add
"WHERE id = <my ID> AND last_changed = <my expectation of
last_changed>". Then you test how many rows your query affected
(cursor.rowcount I think, but this might be database driver specific),
and if you get 0 then you raise an exception. last_changed could
probably be an integer, with the value of time.time()*1000 (will that
get too big for databases?).

I'm not sure how big a change this would be -- probably not that bad.

> <snip dubious cache interpretation :) />
>
> | SQLObject should already handle this, at least for an individual
> | instance. It knows when you change an attribute and updates its own
> | cache.
>
> Ok, so based on this, and on the fact that there is a cache per
> connection, and another per transaction, is it safe to assume that as
> long as I have *exactly* one and *the same* connection being used
> during a single transaction, creating an SQLObject instance multiple
> times should have minimal effect on performance? I feel like I've been
> conflating connections x transactions x instances.

Yes, fetching the same object within the same transaction will give you
back the object that has already been created, and is pretty quick to
do. If a transaction is implicitly restarted after a commit or
rollback, then you can even reuse the transaction and have the benefit
of a valid cache.

-- 
Ian Bicking | ia...@co... | http://blog.ianbicking.org
|
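The last_changed scheme Ian sketches above can be illustrated with plain DB-API calls. This is a sketch using the stdlib sqlite3 module; the person table, update_name helper, and ConflictError class are illustrative assumptions, not SQLObject code:

```python
import sqlite3
import time


class ConflictError(Exception):
    """Raised when the guarded UPDATE matches zero rows."""


def update_name(conn, row_id, new_name, expected_last_changed):
    # The UPDATE only matches if last_changed still holds the value we
    # read earlier; a concurrent writer will have bumped it.
    cur = conn.execute(
        "UPDATE person SET name = ?, last_changed = ? "
        "WHERE id = ? AND last_changed = ?",
        (new_name, int(time.time() * 1000), row_id, expected_last_changed),
    )
    # cursor.rowcount is the number of rows the UPDATE modified;
    # 0 means someone else changed the row since we read it.
    if cur.rowcount == 0:
        raise ConflictError("row %s was modified concurrently" % row_id)


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, last_changed INTEGER)"
)
conn.execute("INSERT INTO person VALUES (1, 'alice', 1000)")

# Succeeds: our expectation (1000) matches the stored last_changed,
# and the row's last_changed is bumped as a side effect.
update_name(conn, 1, "bob", 1000)
```

A second writer still holding the stale value 1000 would now get a ConflictError instead of silently clobbering the first update, which is exactly the transaction-2-clobbers-transaction-1 case discussed above.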
From: Sidnei da S. <si...@aw...> - 2003-10-27 21:52:53
|
On Mon, Oct 27, 2003 at 02:59:40PM -0600, Ian Bicking wrote:
| So for some reason an exception propagated out of SQLObject.new won't
| cause the TransactionManager to roll back the transaction, even though
| exceptions anywhere else do work. How odd. What does SQLObject's
| connection to TransactionManager look like?
|
| Does it work any differently if you successfully create (or interact
| with) a SQLObject instance, then get an exception in SQLObject.new?
| (As opposed to an exception in SQLObject.new being the first query for
| the transaction)

Ok, I seem to have located the source of the problem here. It turned
out that I didn't read the traceback correctly :) The exception was
actually happening during TestCase.setUp(), so there was no one to
catch the exception and call Transaction.abort(). In normal Zope
operation, ZPublisher does this job; that's why I missed it. Sorry for
the inconvenience. It was a SNAFU [1].

[1] http://www.dict.org/bin/Dict?Query=snafu&Form=Dict1&Strategy=*&Database=vera

[]'s
-- 
Sidnei da Silva <si...@aw...>
dreamcatching :: making your dreams come true
http://awkly.org

A year spent in artificial intelligence is enough to make one believe in God.
|
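The fix described above amounts to doing in the test what ZPublisher does in normal operation: abort the transaction when an exception escapes. A minimal sketch of that pattern, using a stand-in Transaction class with an abort() method rather than Zope's real transaction machinery:

```python
class Transaction:
    """Stand-in for a transaction object exposing abort()."""

    def __init__(self):
        self.aborted = False

    def abort(self):
        self.aborted = True


def guarded_setup(transaction, setup):
    """Run setup(); if it raises, abort the transaction and re-raise.

    This plays the role ZPublisher plays in normal Zope operation, for
    code paths (like TestCase.setUp) where nothing else catches the
    exception.
    """
    try:
        setup()
    except Exception:
        transaction.abort()
        raise
```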
From: Sidnei da S. <si...@aw...> - 2003-10-27 21:43:29
|
On Mon, Oct 27, 2003 at 03:16:51PM -0600, Ian Bicking wrote:
| I think there's maybe some confusion about the different objects.
|
| First, caching happens per-connection. Which means each transaction
| has its own cache, if you are using transactions. If you create an
| object by a certain ID in a transaction, any attempt to get that
| object by that ID from the same transaction will give you back the
| same object you got before, guaranteed. But in a different transaction
| you will get a different object, since the two transactions are
| isolated.

Ok, I think I'm starting to understand what fits where now. I think
that the problem is that I was using the same connection for different
transactions here.

| It just occurred to me there's some sloppiness when it comes to
| committing or rolling back a transaction. The objects stick around,
| but once you've finalized the transaction I don't know what use they
| are. Those instances should actually be invalid. Unless you assume
| that by committing or rolling back a transaction, you are implicitly
| restarting the transaction and can keep using it. Anyway, I don't
| think this applies to this circumstance, but it should be worked out.

If by 'those instances' you mean the instances involved in the current
transaction, I think that there are two cases, assuming that both
commit and rollback start a new transaction implicitly, which I think
is a good thing (albeit a bit magical):

1. The transaction is rolled back, the objects are in an invalid state
   and should be purged from the cache

2. The transaction is committed, the objects are valid and the cache
   is valid.

| So -- attribute changes immediately update the database. They also
| update the internal cache of that column. Other instances that are in
| other transactions are not affected. They aren't affected when you
| commit the transaction. It seems like at some point they should be
| affected (expired, perhaps), but I'm not sure where.

Ok, so this is what I had in mind, and where it starts getting tricky.
A situation like this would cause a conflict error in ZODB:

- transaction 1 starts, with object x
- transaction 2 starts, with object x
- transaction 1 changes object x and commits
- transaction 2 changes object x and tries to commit
- before committing, the state of object x from transaction 2 is
  tested against the committed object, and it is detected that the
  state changed during transaction 2
- a ConflictError is raised

AFAICS, in the current implementation, the state of 'x' from
transaction '2' would override the state of 'x' as saved by
transaction '1'. Does that sound correct?

<snip dubious cache interpretation :) />

| SQLObject should already handle this, at least for an individual
| instance. It knows when you change an attribute and updates its own
| cache.

Ok, so based on this, and on the fact that there is a cache per
connection, and another per transaction, is it safe to assume that as
long as I have *exactly* one and *the same* connection being used
during a single transaction, creating an SQLObject instance multiple
times should have minimal effect on performance? I feel like I've been
conflating connections x transactions x instances.

| I imagine ObjectModified is meant for something like ZODB, where
| persistence is added more transparently to an instance (the expense
| being that something must be watching the instance and telling it when
| it's been changed). I think SQLObject can ignore that message, as
| instances are self-aware.

Yes, I think you're right here, except if the case about conflicts
above holds true.

-- 
Sidnei da Silva <si...@aw...>
dreamcatching :: making your dreams come true
http://awkly.org

/earth: file system full.
|
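The per-transaction cache behavior discussed above (same connection, same transaction, same instance; commit implicitly restarting with a fresh cache) can be sketched with a toy identity cache. The Transaction and Person classes here are hypothetical stand-ins, not SQLObject's real implementation:

```python
class Person:
    """Toy record; constructing one stands in for a database fetch."""

    def __init__(self, id):
        self.id = id


class Transaction:
    """Toy per-transaction identity cache."""

    def __init__(self):
        self._cache = {}  # (class, id) -> instance

    def get(self, cls, id):
        # Only "hit the database" (construct an instance) on a cache
        # miss; repeated fetches of the same id are cheap and return
        # the very same object.
        key = (cls, id)
        if key not in self._cache:
            self._cache[key] = cls(id)
        return self._cache[key]

    def commit(self):
        # Implicit restart: the transaction can be reused, but its
        # cache no longer vouches for pre-commit instances.
        self._cache.clear()


trans = Transaction()
a = trans.get(Person, 1)
b = trans.get(Person, 1)
# a is b: one transaction, one connection, one instance per id
```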