sqlobject-discuss Mailing List for SQLObject (Page 414)
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
From: Brad B. <br...@bb...> - 2003-07-25 13:21:05
On Thu, Jul 24, 2003 at 08:57:19PM -0500, Ian Bicking wrote:
> On Thu, 2003-07-24 at 14:42, Brad Bollenbach wrote:
> > Hi all,
> >
> > How can I use an SQLObject-derived class as an abstract base class?
>
> Hmm... I'm not exactly clear what an abstract base class is. Is there a
> particular problem you are trying to solve?

An abstract base class is a class that's not intended to be directly instantiated, instead defining methods that are intended to be overridden in derived classes.

The problem I'm trying to solve is a web monitoring system:

    MonitorBase
    |
    |-> StatusMonitor
    |--> KeywordMonitor
    ----> MD5ChecksumMonitor

So, I want to be able to do something like (untested code):

    class MonitorBase(SQLObject):
        """I'm the base class from which all monitors shall be derived."""

        created_date = DateTimeCol(notNone = True)
        last_run_date = DateTimeCol(notNone = True)
        active = IntCol()
        ...

        def doMonitorCheck(self):
            """Run the check for this monitor."""
            raise NotImplementedError("doMonitorCheck is an abstract method.")

    class StatusMonitor(MonitorBase):
        """I monitor websites for the 200 OK status."""

        def doMonitorCheck(self):
            ...logic to check for 200 OK...

Except this breaks, because any property access on StatusMonitor tries to look for a table called status_monitor. Unless I specify _table = 'monitor' in every derived class, which seems a bit clunky.

What would make sense here? Am I approaching this incorrectly, or should there be some way of marking SQLObject classes as abstract? It seems to me that if SQLObject wants to support truly transparent persistence, then it should allow me to lay out my classes without getting in the way (or getting clunky).

--
Brad Bollenbach
BBnet.ca
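A minimal sketch of the `_table` workaround Brad mentions, with both classes pointed at the shared `monitor` table. Class, column, and import names are taken from his example and the era's flat package layout seen in the tracebacks elsewhere in this thread; the check logic is a placeholder, not a real implementation:

    from SQLObject import SQLObject, DateTimeCol, IntCol  # 2003-era import path (assumed)

    class MonitorBase(SQLObject):
        _table = 'monitor'

        created_date = DateTimeCol(notNone = True)
        last_run_date = DateTimeCol(notNone = True)
        active = IntCol()

        def doMonitorCheck(self):
            raise NotImplementedError("doMonitorCheck is an abstract method.")

    class StatusMonitor(MonitorBase):
        # Repeating _table in every subclass is exactly the clunky part
        # Brad objects to, but it keeps property access on the base table.
        _table = 'monitor'

        def doMonitorCheck(self):
            # Placeholder: a real monitor would fetch the URL and check for 200 OK.
            return True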
From: Ian B. <ia...@co...> - 2003-07-25 02:12:43
On Thu, 2003-07-24 at 13:18, Brad Bollenbach wrote:
> > > here's another feature I need for my project : I have several tables
> > > whose primary key is composed of two or three columns.
> >
> > Well, if there's going to be composite keys, that might as well be
> > generalized to all the columns (i.e., any attribute could be a composite
> > of several columns).
>
> What's the use case for this?

Well, there's the possibility of things like a point type or something, where there's more than one column that gets turned into one Python object (immutable, of course). But I waver. Now I think I agree with you. There's lots of things that SQLObject *could* do, but I don't think it should do all those. So long as there are ways to code, manually, what you want to do, this is enough for most of these cases. Only when something is really common or really hairy or just not possible with the current code should we add more features.

Like you said, we can do this with properties, and it's not too hard. For the id it's a bit more difficult, and I can deal with improving that. In combination with the non-integer keys, composite keys might not be too bad. (Though I draw the line at mutable primary keys!) And you can't do them with properties.

I'd like to do some type coercion stuff, like I've mentioned but not implemented for a long time. Then I have to hold off on more features, because SQLObject could become difficult to understand.

I think it's much more readable to do:

    class Position(SQLObject):
        x = FloatCol()
        y = FloatCol()

        def _get_pos(self):
            return (self.x, self.y)

        def _set_pos(self, value):
            self.set(x=value[0], y=value[1])

Than to do:

    class Position(SQLObject):
        pos = CompositeCol(FloatCol('x'), FloatCol('y'))

Oh, sure, that's much more elegant looking, much more economical in typing. But it obscures what is made very explicit in the first example. And if you really wanted to you could always do:

    class TupleComposite(object):
        def __init__(self, *cols):
            self.cols = cols

        def __get__(self, obj, objtype):
            return tuple([getattr(obj, attr) for attr in self.cols])

        def __set__(self, obj, value):
            obj.set(**dict(zip(self.cols, value)))

    class Position(SQLObject):
        x = FloatCol()
        y = FloatCol()
        pos = TupleComposite('x', 'y')

But then it's up to you to explain what TupleComposite is, and DictComposite, and StructComposite, and whatever else you end up creating -- instead of SQLObject coming to be what is perceived to be a monolithic and complex system that is inaccessible to newcomers (which happens to a lot of projects).

Anyway, which is a long way of saying I now agree, just composite ids are called for. They might look like:

    class Whatever(SQLObject):
        domain = StringCol()
        subdomain = StringCol()
        id = ('domain', 'subdomain')

Ian
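For context, a quick usage sketch of Ian's TupleComposite descriptor, assuming the Position definition above and the 2003-era .new()/.set() API seen elsewhere in this thread:

    p = Position.new(x=1.0, y=2.0)   # .new() was the constructor back then
    print p.pos                      # -> (1.0, 2.0), read via __get__
    p.pos = (3.0, 4.0)               # __set__ writes both columns in one .set() call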
From: Ian B. <ia...@co...> - 2003-07-25 01:56:49
On Thu, 2003-07-24 at 14:42, Brad Bollenbach wrote:
> Hi all,
>
> How can I use an SQLObject-derived class as an abstract base class?

Hmm... I'm not exactly clear what an abstract base class is. Is there a particular problem you are trying to solve?

The problem with two classes for one table is that it's very ambiguous. Generally a row is an instance, and an instance is a row, very one-to-one. That gets very mucky with multiple classes for one table.

I feel like this should be possible to handle in one class, even if it may not feel as object-oriented (but it can probably be just as polymorphic, which is the better half of OO, even if it doesn't have a "proper" class hierarchy, which isn't very important).

Ian
From: Brad B. <br...@bb...> - 2003-07-24 19:41:27
Hi all,

How can I use an SQLObject-derived class as an abstract base class? E.g.

    class Foo(SQLObject):
        bar = IntCol()

        def abstractMethod(self):
            raise NotImplementedError("Override me.")

    class Bar(Foo):
        def abstractMethod(self):
            print self.bar

If I now do:

    b = Bar.new(bar = 1)
    b.abstractMethod()

this will break, with an error message (traceback not shown):

    psycopg.ProgrammingError: ERROR: Relation "bar" does not exist

If I add _table = 'foo' to Foo, the same error results. If I add _table = 'foo' to Bar instead, it Does The Right Thing, however this is ugly. Bar will never actually create a table, I just want to be able to define the common functionality and (of course) the data mapping in the base class Foo, and then override and extend in Bar, whilst having my properties magically save themselves into the database.

How do I lay this out then?

Regards,

--
Brad Bollenbach
BBnet.ca
From: Brad B. <br...@bb...> - 2003-07-24 18:16:59
On Thu, Jul 17, 2003 at 01:41:02PM -0500, Ian Bicking wrote:
> On Thu, 2003-07-17 at 11:09, François Girault wrote:
> > Hi all,
> >
> > here's another feature I need for my project : I have several tables
> > whose primary key is composed of two or three columns.
>
> Well, if there's going to be composite keys, that might as well be
> generalized to all the columns (i.e., any attribute could be a composite
> of several columns).

What's the use case for this? Composition seems fairly specific to primary key definitions; I don't see it useful as a general purpose thing that belongs in SQLObject column definition semantics.

I would have thought it preferable to define columns as normal, and then create another property to abstract them. e.g. (slightly contrived, and totally untested)

    class Date(SQLObject):
        day = IntCol()
        month = IntCol()
        year = IntCol()

        def get_date(self):
            return "%d/%02d/%02d" % (self.year, self.month, self.day)

        def set_date(self, value):
            self.year, self.month, self.day = [int(x) for x in value.split("-")]

        date = property(get_date, set_date)

--
Brad Bollenbach
BBnet.ca
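A quick illustration of how Brad's (admittedly contrived and untested) property would behave; note his getter joins with slashes while his setter expects dashes, and .new() is assumed from the era's API:

    d = Date.new(day=25, month=7, year=2003)
    print d.date              # -> "2003/07/25" (getter formats with "/")
    d.date = "2004-01-02"     # setter splits on "-" and writes year, month, day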
From: Brad B. <br...@bb...> - 2003-07-24 18:05:47
On Thu, Jul 17, 2003 at 06:09:28PM +0200, François Girault wrote:
> Hi all,
>
> here's another feature I need for my project : I have several tables
> whose primary key is composed of two or three columns.
[snip]
> As I want to do it clean, I think I'm going to realize a "Key" or "Id" object.
>
> This object would contain a list of Column objects, which together are
> the primary key
[snip]
> I will start this soon, so I'd like to have some suggestions from
> SQLObject's dev&users before diving in

(Sorry for the delayed response. I briefly skimmed this before, and now that I know I need this, I'd like to contribute.)

I gather you've already implemented this. If so, how? There are various semantics that could be used.

SQL style:

    class MonitoredDevice(SQLObject):
        """I'm a device being monitored."""

        _cacheValues = False

        monitor = ForeignKey('Monitor')
        device = ForeignKey('Device')
        one_month_rate = CurrencyCol(notNone = True)
        expiration_date = DateTimeCol(notNone = True)
        ...

        primary_key(monitor, device)
        # or PrimaryKey(monitor, device)

SQLObject style:

    class MonitoredDevice(SQLObject):
        """I'm a device being monitored."""

        _cacheValues = False

        monitor = ForeignKey('Monitor')
        device = ForeignKey('Device')
        one_month_rate = CurrencyCol(notNone = True)
        expiration_date = DateTimeCol(notNone = True)
        ...

        _primary_key = (monitor, device)

or maybe something else. In any case, these composite keys would supplant the autogenerated id column.

Any other semantic ideas that I've missed? Is supplanting the autogenerated id column even possible in SQLObject without rewriting everything?

--
Brad Bollenbach
BBnet.ca
From: Sidnei da S. <si...@re...> - 2003-07-23 17:52:11
On Thu, Jul 17, 2003 at 09:47:29PM -0500, Ian Bicking wrote:
| On Thu, 2003-07-17 at 21:29, John A. Barbuto wrote:
| > Hi,
| >
| > I noticed something strange when looking at the MySQL query log: after
| > every query by SQLObject, a COMMIT was being issued. I tracked it down
| > to the _runWithConnection method in the DBAPI class, which calls
| > conn.commit() in line 3. I'm baffled as to why this is necessary. I'm
| > not sure about Postgres, but in MySQL a COMMIT is ignored unless you're
| > using InnoDB and AUTOCOMMIT=0. Any clues?
|
| Huh... I feel like I must have had some reason for that, but I don't
| know what that might be.

Speaking of the said line, I just got hit by it when trying to integrate SQLObject and Zope3. The problem was on a unittest, so not that big, but I would like to see an option to make this commit optional. I'm not good at naming, but I would suggest something like `commit_before_execute` or something like that. It would be passed on the connection initialization and checked in the _runWithConnection method.

Thoughts?

[]'s

--
Sidnei da Silva (dreamcatcher) <si...@re...>
Debian GNU/Linux 2.4.20-powerpc ppc
"The Computer made me do it."
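A minimal sketch of the option Sidnei proposes -- a flag set at connection-initialization time and consulted before the automatic COMMIT. The class layout here is a toy stand-in, not SQLObject's actual DBAPI code:

    class ConnectionWrapper(object):
        """Toy model of a DB-API connection wrapper with an optional auto-commit."""

        def __init__(self, conn, commit_before_execute=True):
            self.conn = conn
            self.commit_before_execute = commit_before_execute

        def _runWithConnection(self, meth, *args):
            # Only issue the COMMIT John noticed when the caller asked for it.
            if self.commit_before_execute:
                self.conn.commit()
            return meth(self.conn, *args)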
From: Edmund L. <el...@in...> - 2003-07-21 22:16:57
Matt Goodall wrote:
> The problem is probably best demonstrated by (untested) example with a
> description of what happens inside SQLObject's caches:
>
> id = Obj.new(col1='a', col2='b')
> # added to cache with key == int(1)
>
> o = Obj(id)
> # found in cache with key == int(1)
>
> o = Obj(str(id)) # note the cast to a string
> # reads from database, adds to cache with key == str(1)
> # Eek, two objects now in cache!

Argh! This problem is very easy to run into. For example, if you pass an object ID to another page as a field variable, then it is all too easy to forget that you have to cast the value back to an integer before using it.

Example: From one servlet that has fiddled with obj1 = MyObject(3), redirect to another page with a URL like

    http://my.url/Index?id=3

Then in the Index.py servlet:

    id = self.request().field("id")
    obj2 = MyObject(id)

Oops! obj1 and obj2 aren't the same objects anymore! It's just too easy to do this!!

...Edmund.
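Until the cache normalizes its keys, the workaround on Edmund's side is to coerce at the web boundary (the Webware-style request call is taken from his example):

    # Field values always arrive as strings; convert before hitting SQLObject's cache,
    # so the lookup uses the same int key as MyObject(3) did.
    id = int(self.request().field("id"))
    obj2 = MyObject(id)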
From: Edmund L. <el...@in...> - 2003-07-21 20:09:18
The changes to Converter.py to refactor it have broken operations on the results of select operations involving queries... Example:

    >>> p = Person.select(Person.q.id == 1)
    >>> p
    <SQLObject.SQLObject.SelectResults object at 0x816960c>
    >>> len(p)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "SQLObject/SQLObject.py", line 1084, in __len__
        count = conn.countSelect(self)
      File "SQLObject/DBConnection.py", line 149, in countSelect
        q = "SELECT COUNT(*) FROM %s WHERE %s" % \
      File "SQLObject/DBConnection.py", line 175, in whereClauseForSelect
        q = str(select.clause)
      File "SQLObject/SQLBuilder.py", line 157, in __str__
        return self.sqlRepr()
      File "SQLObject/SQLBuilder.py", line 219, in sqlRepr
        return "(%s %s %s)" % (sqlRepr(self.expr1), self.op, sqlRepr(self.expr2))
      File "SQLObject/Converters.py", line 99, in sqlRepr
        raise ValueError, "Unknown SQL builtin type: %s for %s" % \
    ValueError: Unknown SQL builtin type: <type 'instance'> for co_person.id
From: Matt G. <ma...@po...> - 2003-07-21 10:35:30
Ian Bicking wrote:

>On Thu, 2003-07-17 at 05:37, Matt Goodall wrote:
>
>>Hi,
>>
>>Is there a good reason why DBConnection.py tries to load all supported
>>modules at the top of the module rather than where they are actually
>>required? i.e. why is psycopg imported for the whole module rather than
>>just for PostgresConnection? It should be relatively easy to refactor
>>the code so that the individual Connection classes load the DB-API
>>module as needed.
>>
>>A change like this would certainly improve load time for the module (not
>>really all that important) but I think it will also make it easier for
>>the Connection classes to support multiple DB-API drivers, i.e.
>>psycopg, pyPgSQL etc.
>>
>>Would you accept a patch for this?
>
>Yes, that would be fine.

Attached. I've tried it for all connection types and it seems to work. I don't know whether I like it all that much (a connection factory might be better), see what you think. No unittest this time I'm afraid.

Oh yeah, I hope you don't mind but I rearranged the imports at the top of the DBConnection module. I actually did it without thinking but it's nicer now so I left it in. ;-)

I want to try to get pyPgSQL support in next, hopefully this week.

Cheers, Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenationinternet.com
e: ma...@po...
From: Matt G. <ma...@po...> - 2003-07-21 00:12:40
Objects cannot be deleted when the connection is created with cache=False due to a bug in the cache system. Here's the traceback:

    -----
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/home/matt/python-ext//lib/python2.2/site-packages/SQLObject/SQLObject.py", line 926, in delete
        obj.destroySelf()
      File "/home/matt/python-ext//lib/python2.2/site-packages/SQLObject/SQLObject.py", line 922, in destroySelf
        self._connection.cache.purge(self.id, self.__class__)
      File "/home/matt/python-ext//lib/python2.2/site-packages/SQLObject/Cache.py", line 173, in purge
        self.caches[cls.__name__].purge(id)
      File "/home/matt/python-ext//lib/python2.2/site-packages/SQLObject/Cache.py", line 131, in purge
        if self.cache.has_key(id):
    AttributeError: 'CacheFactory' object has no attribute 'cache'
    -----

Obviously, the problem is that purge() does not check the doCache flag before accessing the missing self.cache attribute. The fix is easy enough, but the same check may need applying to a couple of other methods as well: expire and clear.

A better solution than the doCache flag may be to make self.cache a null object, in this case a dictionary-like object that does nothing but pretend to be empty. The current doCache knowledge is then limited to the constructor, and the other methods don't need to worry.

Cheers, Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenation.net
e: ma...@po...
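A minimal sketch of the null-object idea Matt floats, assuming the dict-style interface implied by the traceback (self.cache.has_key(id)); the class name is invented for illustration:

    class NullCache(object):
        """Dictionary look-alike that stores nothing and always appears empty."""

        def has_key(self, id):
            return False

        def __contains__(self, id):
            return False

        def __setitem__(self, id, obj):
            pass                    # silently drop the object instead of caching it

        def __getitem__(self, id):
            raise KeyError(id)      # nothing is ever found

        def __delitem__(self, id):
            pass

        def keys(self):
            return []

If the constructor assigned self.cache = NullCache() when caching is disabled, purge(), expire() and clear() could touch self.cache unconditionally and the doCache check would live in one place.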
From: Matt G. <ma...@po...> - 2003-07-20 15:21:57
I just found an interesting problem due to programmer (i.e. me) error but it leaves the SQLObject caches damaged so I wondered if SQLObject should cope better.

The problem is probably best demonstrated by (untested) example with a description of what happens inside SQLObject's caches:

    id = Obj.new(col1='a', col2='b')
    # added to cache with key == int(1)

    o = Obj(id)
    # found in cache with key == int(1)

    o = Obj(str(id))  # note the cast to a string
    # reads from database, adds to cache with key == str(1)
    # Eek, two objects now in cache!

    o.col1, o.col2 = o.col2, o.col1
    # writes change to db, only object str(id) in cache matches

    o = Obj(str(id))
    # found in cache with key == str(1)

    o = Obj(id)
    # found in cache with key == int(1) but bad data

Lovely isn't it, and all because I stored the object's id in a hidden form field ;-). I won't tell you how long it took to work out what was happening! My app is working again now but should SQLObject have let this happen?

The obvious thing to do is to force the id to int() inside the Obj constructor but that would break any attempt to allow alphanumeric ids. So, perhaps a better solution is to always use a string as the cache keys?

There's a TestCase attached that I was using to help debug this. Some of it is already covered by existing tests but it could be useful anyway.

Cheers, Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenation.net
e: ma...@po...
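A minimal sketch of the str-keyed cache Matt suggests: normalize once at the cache boundary so int(1) and '1' land on the same entry. Class and method names here are hypothetical, not SQLObject's Cache.py:

    class KeyNormalizingCache(object):
        def __init__(self):
            self.cache = {}

        def _key(self, id):
            # Both Obj(1) and Obj('1') resolve to the same cache slot.
            return str(id)

        def get(self, id):
            return self.cache.get(self._key(id))

        def put(self, id, obj):
            self.cache[self._key(id)] = obj

        def purge(self, id):
            key = self._key(id)
            if key in self.cache:
                del self.cache[key]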
From: Brad B. <br...@bb...> - 2003-07-19 17:47:27
Hey all,

The slides of my SQLObject talk at EuroPython are now available online:

    http://www.europython.org/Talks/Slides/sqlobject_ep_talk.tar.gz

Now back to your regularly scheduled programming...

--
Brad Bollenbach
BBnet.ca
From: Frank B. <fb...@fo...> - 2003-07-18 23:02:20
Hello,

Jim Vickroy wrote:
> Is there any plan to support Microsoft SQL Server? If so, when? Any
> suggestions on how I might start to add this support?

Last time I looked, you just had to adapt DBMConnection to make SQLObject work with another backend. It's easy, just give it a go ;)

ciao
--
Frank Barknecht _ ______footils.org__
From: Jim V. <Jim...@no...> - 2003-07-18 16:30:04
Howdy,

Is there any plan to support Microsoft SQL Server? If so, when? Any suggestions on how I might start to add this support?

Thanks,
-- jv

P.S. Please excuse me if this issue has been previously addressed. A few attempts to access the mailing list archives produced this result:

    ERROR
    Either your mailing list name was misspelled or your mailing list has
    not been archived yet. If this list has just been created, please
    retry in 2-4 hours
From: Ian B. <ia...@co...> - 2003-07-18 02:46:51
On Thu, 2003-07-17 at 21:29, John A. Barbuto wrote:
> Hi,
>
> I noticed something strange when looking at the MySQL query log: after
> every query by SQLObject, a COMMIT was being issued. I tracked it down
> to the _runWithConnection method in the DBAPI class, which calls
> conn.commit() in line 3. I'm baffled as to why this is necessary. I'm
> not sure about Postgres, but in MySQL a COMMIT is ignored unless you're
> using InnoDB and AUTOCOMMIT=0. Any clues?

Huh... I feel like I must have had some reason for that, but I don't know what that might be.

> Speaking of InnoDB, it's becoming more and more popular, so shouldn't
> MySQL transactions be supported in SQLObject? It won't break things
> with non-InnoDB tables, it'll just be silently ignored. Would a patch
> for this be accepted?

Is there anything currently keeping you from using transactions in this situation? It's not tested, but so long as MySQLdb supports it, I don't see any reason SQLObject shouldn't already.

Ian
From: John A. B. <ja...@os...> - 2003-07-18 02:29:26
Hi,

I noticed something strange when looking at the MySQL query log: after every query by SQLObject, a COMMIT was being issued. I tracked it down to the _runWithConnection method in the DBAPI class, which calls conn.commit() in line 3. I'm baffled as to why this is necessary. I'm not sure about Postgres, but in MySQL a COMMIT is ignored unless you're using InnoDB and AUTOCOMMIT=0. Any clues?

Speaking of InnoDB, it's becoming more and more popular, so shouldn't MySQL transactions be supported in SQLObject? It won't break things with non-InnoDB tables, it'll just be silently ignored. Would a patch for this be accepted?

-jab

--
John A. Barbuto ja...@os...
Senior System Administrator, Open Source Development Network
http://www.osdn.com/
From: Ian B. <ia...@co...> - 2003-07-17 18:40:23
On Thu, 2003-07-17 at 11:09, François Girault wrote:
> Hi all,
>
> here's another feature I need for my project : I have several tables
> whose primary key is composed of two or three columns.

Well, if there's going to be composite keys, that might as well be generalized to all the columns (i.e., any attribute could be a composite of several columns).

The .id should remain largely the same, except it becomes a tuple. The individual columns should not be individually accessible from Python (except like .id[0], .id[1], etc).

Implementation-wise, I'm not sure, that's a messy issue.

Ian
From: Ian B. <ia...@co...> - 2003-07-17 18:04:19
On Thu, 2003-07-17 at 05:37, Matt Goodall wrote:
> Hi,
>
> Is there a good reason why DBConnection.py tries to load all supported
> modules at the top of the module rather than where they are actually
> required? i.e. why is psycopg imported for the whole module rather than
> just for PostgresConnection? It should be relatively easy to refactor
> the code so that the individual Connection classes load the DB-API
> module as needed.
>
> A change like this would certainly improve load time for the module (not
> really all that important) but I think it will also make it easier for
> the Connection classes to support multiple DB-API drivers, i.e.
> psycopg, pyPgSQL etc.
>
> Would you accept a patch for this?

Yes, that would be fine.

Ian
From: G. <fra...@cl...> - 2003-07-17 16:16:17
Hi all,

here's another feature I need for my project : I have several tables whose primary key is composed of two or three columns. You could tell me : add an id to your table and that's ok... but not for my needs (damned)!

So I'd like to implement support for that... As I want to do it clean, I think I'm going to realize a "Key" or "Id" object.

This object would contain a list of Column objects, which together are the primary key. Its SQLRepr would be (imo) simple to generate, just a 'col1=x AND col2=y AND col3=z' to replace in WHERE clauses for select, update and delete.

I will start this soon, so I'd like to have some suggestions from SQLObject's dev&users before diving in.

Best Regards,

François Girault
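A minimal sketch of the Key object François describes: it holds the columns that make up the primary key and renders the AND-joined WHERE fragment. All names here are illustrative, not actual SQLObject code:

    class Key(object):
        """Composite primary key: a list of columns rendered as a WHERE fragment."""

        def __init__(self, *column_names):
            self.column_names = list(column_names)

        def whereClause(self, values):
            # `values` maps column name -> SQL-quoted value, e.g. {'col1': "'x'"}
            parts = ["%s = %s" % (name, values[name]) for name in self.column_names]
            return " AND ".join(parts)

    # Key('col1', 'col2').whereClause({'col1': "'x'", 'col2': "'y'"})
    # -> "col1 = 'x' AND col2 = 'y'"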
From: Matt G. <ma...@po...> - 2003-07-17 10:27:42
Hi,

Is there a good reason why DBConnection.py tries to load all supported modules at the top of the module rather than where they are actually required? i.e. why is psycopg imported for the whole module rather than just for PostgresConnection? It should be relatively easy to refactor the code so that the individual Connection classes load the DB-API module as needed.

A change like this would certainly improve load time for the module (not really all that important) but I think it will also make it easier for the Connection classes to support multiple DB-API drivers, i.e. psycopg, pyPgSQL etc.

Would you accept a patch for this?

- Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenationinternet.com
e: ma...@po...
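A minimal sketch of the per-class lazy import Matt is proposing; the class layout is a simplified stand-in for DBConnection.py, not the real code:

    class PostgresConnection(object):
        def __init__(self, dsn):
            # The driver is imported only when a Postgres connection is
            # actually created, not when the DBConnection module is imported.
            import psycopg
            self.module = psycopg
            self.dsn = dsn

        def makeConnection(self):
            return self.module.connect(self.dsn)

Keeping the import inside each connection class also makes it easier to swap in an alternative driver (psycopg vs. pyPgSQL) without touching module-level imports.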
From: Ian B. <ia...@co...> - 2003-07-17 01:20:23
On Wed, 2003-07-16 at 20:07, Matt Goodall wrote:
> I'm fairly sure that SelectResults.__getitem__ is using wrong ops when
> asked to retrieve a single object from the "list". As it happens this
> only seems to matter with SQLite, possibly due to its different
> LIMIT-OFFSET behaviour. See attached patch.

No, I think there was weird stuff with all of them due to that. So thanks for the fix. Unit test much appreciated!

Ian
From: Matt G. <ma...@po...> - 2003-07-17 01:07:20
I'm fairly sure that SelectResults.__getitem__ is using wrong ops when asked to retrieve a single object from the "list". As it happens this only seems to matter with SQLite, possibly due to its different LIMIT-OFFSET behaviour. See attached patch.

The current code always returns the first row in the results no matter what index you asked for. Another effect of this is that an IndexError is never thrown, so any loops expecting the exception never end.

I have attached a quick unit test which performs a simple test of various iteration mechanisms.

- Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenation.net
e: ma...@po...
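For readers without the attached patch, a toy model of the behaviour a correct __getitem__ needs: translate the requested index into an absolute offset and raise IndexError past the end, so index-walking loops terminate. This is an illustration only, not SQLObject's SelectResults code:

    class FakeSelectResults(object):
        """List-backed stand-in showing the offset arithmetic and IndexError."""

        def __init__(self, rows, start=0, end=None):
            self.rows = rows
            self.start = start
            if end is None:
                end = len(rows)
            self.end = end

        def __getitem__(self, index):
            absolute = self.start + index      # index is relative to the slice
            if index < 0 or absolute >= self.end:
                raise IndexError(index)        # lets old-style for-loops stop
            return self.rows[absolute]

    # for row in FakeSelectResults(['a', 'b', 'c'], start=1):
    #     ...  yields 'b' then 'c', then stops when IndexError is raised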
From: Ian B. <ia...@co...> - 2003-07-17 00:51:45
On Wed, 2003-07-16 at 19:44, Matt Goodall wrote:
> The fact that creating the list did not fail implies that it is not
> trying to get the length of t. I dunno though, it's very late here so I
> may be missing something obvious.

list() gets the length if it can, but it doesn't cause a problem if it can't (it catches all exceptions). It took me a while to figure out what was happening, since I made the COUNT(*) raise an exception to see who was calling it, and the exception just disappeared.

Ian
From: Matt G. <ma...@po...> - 2003-07-17 00:45:03
On Thu, 2003-07-17 at 01:05, Ian Bicking wrote:
> On Wed, 2003-07-16 at 17:42, Matt Goodall wrote:
> > > It would, though, create a bunch of SELECTs, like SELECT blah
> > > LIMIT 1 OFFSET 0, SELECT blah LIMIT 1 OFFSET 1, etc.
> > >
> > > The problem is the way you're using it.
> >
> > Unfortunately, it's not my code - it's another package that is walking
> > the list using an index. I can probably change that package but
> > couldn't/shouldn't SQLObject be a bit more graceful under these
> > circumstances?
>
> I don't know. If the code is going to treat select results as a list, a
> list seems like a reasonable thing to use (i.e.,
> list(Whatever.select())).

Perhaps you're right. The best solution is probably to change the code I'm using to iterate over the results in the normal manner. The code in question could actually take a slice first, which would work really nicely with SQLObject. Something for me to look into.

> You could, generically, have a proxy class that batches the fetches, and
> caches the results temporarily (while it's waiting for your code to do
> another loop). There's no reason it has to have specific support for
> SQLObject either. Maybe such a thing already exists (probably does in
> *someone's* code... maybe on the ASPN recipe site).

Now that's not a bad idea, thanks.

> > To be honest, it's really the multiple COUNT calls that bother me as it
> > seems unnecessary. The multiple SELECT calls to retrieve the objects are
> > to be expected but can possibly be avoided.
>
> Yeah. That is weird. Ah... after some effort, I'm afraid I found why
> it happens. list() tries to get the length of the things it's
> converting.

Are you sure it's that simple? I was just playing with exactly this idea too and wrote the following code to experiment:

    class Test:
        def __init__(self):
            self.counter = 0

        def __iter__(self):
            return self

        def next(self):
            self.counter = self.counter + 1
            if self.counter > 5:
                raise StopIteration()
            return self.counter

    t = Test()
    a = list(t)
    print a        # ok up to here
    print len(t)   # this fails

The fact that creating the list did not fail implies that it is not trying to get the length of t. I dunno though, it's very late here so I may be missing something obvious.

- Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenation.net
e: ma...@po...