sqlobject-discuss Mailing List for SQLObject (Page 354)
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
---|---|---|---|---|---|---|---|---|---|---|---|---
2003 |  | 2 | 43 | 204 | 208 | 102 | 113 | 63 | 88 | 85 | 95 | 62
2004 | 38 | 93 | 125 | 89 | 66 | 65 | 53 | 65 | 79 | 60 | 171 | 176
2005 | 264 | 260 | 145 | 153 | 192 | 166 | 265 | 340 | 300 | 469 | 316 | 235
2006 | 236 | 156 | 229 | 221 | 257 | 161 | 97 | 169 | 159 | 400 | 136 | 134
2007 | 152 | 101 | 115 | 120 | 129 | 82 | 118 | 82 | 30 | 101 | 137 | 53
2008 | 83 | 139 | 55 | 69 | 82 | 31 | 66 | 30 | 21 | 37 | 41 | 65
2009 | 69 | 46 | 22 | 20 | 39 | 30 | 36 | 58 | 38 | 20 | 10 | 11
2010 | 24 | 63 | 22 | 72 | 8 | 13 | 35 | 23 | 12 | 26 | 11 | 30
2011 | 15 | 44 | 36 | 26 | 27 | 10 | 28 | 12 |  |  | 17 | 16
2012 | 12 | 31 | 23 | 14 | 10 | 26 |  | 2 | 2 | 1 |  | 6
2013 | 4 | 5 |  | 4 | 13 | 7 | 5 | 15 | 25 | 18 | 7 | 3
2014 | 1 | 5 |  | 3 | 3 | 2 | 4 | 5 |  | 11 |  | 62
2015 | 8 | 3 | 15 |  |  | 6 |  | 6 |  |  |  | 19
2016 | 2 |  | 2 | 4 | 3 | 7 | 14 | 13 | 6 | 2 | 3 |
2017 | 6 | 14 | 2 |  | 1 |  |  | 1 |  | 4 | 3 |
2018 |  | 1 |  |  |  | 1 |  |  |  |  |  |
2019 |  | 1 |  | 44 | 1 |  |  |  | 1 |  |  | 1
2020 |  |  |  |  |  |  |  |  |  | 1 |  | 1
2021 |  | 1 |  |  |  | 3 |  |  |  |  |  |
2022 |  |  |  |  |  |  |  |  | 1 |  |  | 1
2023 |  |  |  |  |  |  |  | 1 |  | 1 | 2 |
2024 |  |  |  | 4 |  |  |  |  |  |  |  | 1
2025 |  | 1 | 1 |  |  |  |  |  |  |  |  |
From: Oleg B. <ph...@ph...> - 2005-01-03 09:10:51
Hello! (CC'ing to the list.)

On Mon, Jan 03, 2005 at 10:43:10AM +0800, Hong Yuan wrote:
> IMHO IntCol should convert the string to integers, and only raise
> errors when the conversion cannot be made,

Why?! You've declared IntCol, an *integer* column. Of course it expects to be
passed only integers, not strings. If strings, why not other types, for
example objects that implement __int__()?

> on the following grounds:
>
> I have encountered this problem when I am passing the user input value
> collected in the GUI components (wxPython in particular) to the
> database. The GUI components return their values as strings, so I would
> either convert the strings to integers in my own program, or have

The logic is flawed. You are connecting the GUI and SQLObject in your
program. The job of connecting and converting belongs neither to the GUI nor
to SQLObject - it is the job of your program. It is easy to implement, by the
way. See below.

> SQLObject do this for me. Since I have already declared the column to be
> integer with SQLObject, it seems more natural that SQLObject would do
> this conversion for me, otherwise I have to declare the column type
> again elsewhere in my program.

No, you don't have to. You have to declare your own descendant of IntCol and
implement a validator that will convert values according to your rules. See
UnicodeCol for a good example.

> Second, I think this behavior is more compatible with the way that
> databases cope with type conversion. You can for example simply send
> a string in the SQL command to an int column in the database, and the
> database will perform the conversion whenever possible.

I have never understood that behaviour of databases. And of programming
languages. Why on Earth would one want a type system where you can add an
integer to a string and get a result instead of a TypeError?

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
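The "declare your own descendant of IntCol with a converting validator" approach Oleg describes might look roughly like the sketch below. The fromPython()/toPython() method names come from this thread; the CoerceInt/CoercingIntCol names and the createValidators() attachment point are assumptions, and depending on the SQLObject version the override may need to live on the paired SO*Col class instead.

    # Sketch only, not SQLObject's actual implementation.
    from sqlobject import col

    class CoerceInt:
        """Validator-like object that turns numeric strings into ints."""
        def fromPython(self, value, state=None):
            # Runs when a Python value is assigned to the column.
            if isinstance(value, str):
                return int(value)      # raises ValueError for non-numeric text
            return value

        def toPython(self, value, state=None):
            # Runs when a value comes back from the database.
            return value if value is None else int(value)

    class CoercingIntCol(col.IntCol):
        def createValidators(self):
            # Assumed hook: prepend the coercing validator to IntCol's own.
            return [CoerceInt()] + super(CoercingIntCol, self).createValidators()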
From: Oleg B. <ph...@ma...> - 2005-01-02 20:17:41
On Sun, Jan 02, 2005 at 11:03:40PM +0300, Oleg Broytmann wrote:
> On Mon, Jan 03, 2005 at 01:22:18AM +0800, Hong Yuan wrote:
> > i[0].number = '10'
> [skip]
> > 10 <type 'str'>
>
> What version are you using? The bug has been fixed in the Subversion
> repository and the fixed version will soon be released as SQLObject 0.6.1.

Sorry, I've spoken too fast. The bug was fixed only for columns that have a
validator - the value is passed through fromPython() and toPython() calls.
IntCol does not have from/toPython and hence doesn't convert a string value
to an int.

I am not sure in what way that should be fixed. Should IntCol silently accept
strings and convert them to integers, or should it raise TypeError?

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Oleg B. <ph...@ma...> - 2005-01-02 20:03:50
On Mon, Jan 03, 2005 at 01:22:18AM +0800, Hong Yuan wrote:
> i[0].number = '10'
[skip]
> 10 <type 'str'>

What version are you using? The bug has been fixed in the Subversion
repository and the fixed version will soon be released as SQLObject 0.6.1.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Hong Y. <hon...@ho...> - 2005-01-02 17:23:40
I find that under some circumstances a column declared as a certain type can
return a different type, as shown in the following sample code, tested with
SQLObject 0.6.0.

The test table, with PostgreSQL 7.4.6:

    CREATE TABLE test (
        id int4 NOT NULL DEFAULT nextval('test_id_seq'::text),
        number int2 NOT NULL,
        CONSTRAINT pkey PRIMARY KEY (id)
    ) WITHOUT OIDS;
    ALTER TABLE test OWNER TO test;

The test script:

    from sqlobject import *

    __connection__ = 'postgres://test:test@localhost/test?debug=0'

    class test(SQLObject):
        number = IntCol()

    for i in [3, 5, 8]:
        test(number = i)

    i = test.select(test.q.number == 8)
    i[0].number = '10'

    results = test.select()
    for r in results:
        print r.number, type(r.number)

While I would expect the result to be all <type 'int'>, the actual output is:

    3 <type 'int'>
    5 <type 'int'>
    10 <type 'str'>

It seems that sqlobject has remembered that the column 'number' was set to
'10', a string value, even though I thought IntCol should take care of
converting the string to int before sending it to the SQL database. I think
this kind of inconsistency in column return types will cause great confusion
in programs that use sqlobject and should be fixed.

Best Regards
Hong Yuan
From: Stuart B. <st...@st...> - 2005-01-02 04:04:12
Ian Bicking wrote:
>> Is there actually a use case for allowing each column to have a
>> different encoding?
>
> Probably not. Though there's probably a use case for every database
> connection to have a different encoding.

The use case is for non-Unicode-aware databases that want to store text in a
particular character set. I imagine the encoding would be specified as part
of the connection string.

There will be an efficiency advantage to specifying the encoding for
Unicode-aware databases too, if your Unicode-aware database supports multiple
client encodings (such as PostgreSQL), although I suspect it wouldn't be
noticeable unless SQLObject is modified to use bound parameters.

>> I know for PostgreSQL it is simply a matter of setting the database
>> encoding to Unicode and sending everything as UTF-8 by simply encoding
>> the entire query (which takes care of other issues like Unicode column
>> names as well). The only use cases I can come up with for your scenario
>> should be using BINARY columns instead of VARCHAR - in particular, since
>> the database doesn't know the encoding you are using, all your basic
>> string operations, sorting etc. are now broken.
>
> This suggests we should do it in a way that we allow Unicode-aware
> databases to get Unicode data directly, and other databases use
> transparent encoding.
>
>> Hmm... perhaps if you need to store text in some encoding that doesn't
>> contain the ASCII character set it might be necessary, but I don't know
>> what character sets these are or if any databases actually support them.
>> I've gone through the list of encodings PostgreSQL supports and they all
>> contain the basic latin letters and can be used to encode SQL
>> statements, so I suspect this is not a requirement.
>
> That seems overly aggressive. It just feels very wrong to encode the
> entire query.

Ideally, we just throw a Unicode SQL command at the database driver (which
for PostgreSQL is possible with psycopg2). For psycopg 1, you have to take
care of the encoding yourself, which is simply a matter of issuing a 'SET
client_encoding TO UNICODE' and then encoding all Unicode strings as UTF-8
<rant>(because PostgreSQL, like Java, seems to have decided Unicode ==
UTF-8)</rant>.

Encoding the entire query has the advantage that Unicode column names,
Unicode table names, Unicode in WHERE clauses etc. are all handled correctly,
e.g.:

    Foo.select(u"WHERE name >= '\N{LATIN CAPITAL LETTER A WITH GRAVE}'")

If we don't encode the entire query, developers have to worry about which
parts of SQLObject require ASCII-only strings and which parts accept Unicode
strings, which is really frustrating to those of us following the recommended
'Unicode everywhere' practice. It also will cause trouble with modern DB
drivers that happily accept Unicode strings and do the right thing, because
you have no idea what encoding the connection is set to use. It could be
argued that the correct thing for them to do if they receive a non-ASCII
traditional string is to raise an exception (since the encoding is not known,
they can't tell the backend what it is).

I don't see any advantage to only encoding portions of the query. Forcing
parts of the SQL statement to remain ASCII would be needlessly restrictive
and a source of bugs (since Unicode strings are viral in Python, you often
find them cropping up in places you didn't expect them).

Internally, we have patched SQLObject to *always* return Unicode strings and
transparently encode/decode. I'd say *this* might be overly aggressive
because it is not backwards compatible (and the reason we never pushed this
patch back upstream), but there needs to be an option to do it this way
because 'Unicode everywhere' has been recommended practice since Unicode
support was first bolted onto Python. It also means that when I'm wearing my
DBA hat I don't have to worry about other developers pissing in the pool and
polluting my nice clean database with meaningless bytestreams.

I think the following may be a good design, which maintains backwards
compatibility for people working with legacy systems or who are ideologically
opposed to working with Unicode strings. It isn't best practice, but might be
common ground. It also doesn't involve much work ;)

1) The bulk of SQLObject doesn't care what sort of strings it sees. It just
   works with strings as Python intended.

2) At the point of issuing the cursor.execute(), the query will be encoded
   into the encoding the database backend expects to see (or just passed
   through as a Unicode string if the driver supports that). For non-Unicode
   aware databases, the developer will need to specify the encoding when
   opening the connection (defaulting to ASCII).

3) When results are retrieved, they are returned as-is (traditional string,
   encoded) if the column type is StringCol, or decoded into Unicode if the
   column type is UnicodeCol. I don't think adding a 'unicode=True' parameter
   to StringCol would be good, as developers will forget to add it and we end
   up with hidden bugs again.

4) Docs are updated to use UnicodeCol rather than StringCol. Point 4 is
   actually important, as otherwise people will continue to use StringCol.
   This will cause them trouble when they throw Unicode at it (which stores
   correctly, but they get encoded strings back).

--
Stuart Bishop <st...@st...>
http://www.stuartbishop.net/
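Point 2 of the proposal above, encoding the whole query at the cursor.execute() boundary, might be sketched as follows. The wrapper class and its names are illustrative, not SQLObject code, and Python 2 string types are assumed to match the thread.

    # Illustrative only: encode the query just before it reaches the DB-API
    # cursor, using a per-connection encoding chosen at connect time.
    class EncodingCursor:
        def __init__(self, cursor, encoding='ascii'):
            self.cursor = cursor
            self.encoding = encoding      # e.g. 'utf-8' for PostgreSQL

        def execute(self, query, params=None):
            if isinstance(query, unicode):            # Python 2, as in the thread
                query = query.encode(self.encoding)
            return self.cursor.execute(query, params or ())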
From: Sidnei da S. <si...@aw...> - 2004-12-31 19:19:57
| It's compatible with threads, you just have to pass the connection in
| any time it might be ambiguous what transaction you are working in (i.e.,
| whenever you are dealing with class methods). Though I'm curious what
| people do in Zope 3, if there's anything special in sqlo... there
| transactions are often per-thread (which is actually a lot more
| convenient, since you don't have to pass the connection around, and
| usually what you really want anyway).

We keep a connection always open for each thread and use a descriptor for the
_connection attribute of SQLObject to fetch this open connection using the
thread id.

--
Sidnei da Silva <si...@aw...>
http://awkly.org - dreamcatching :: making your dreams come true
http://www.enfoldsystems.com
http://plone.org/about/team#dreamcatcher

Know Thy User.
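A sketch of the descriptor idea Sidnei describes is below; the real sqlos code differs, the class name is made up, and it keys the connection off the current thread via threading.local rather than a thread id, which amounts to the same thing.

    # Illustrative per-thread connection descriptor (not sqlos).
    import threading
    from sqlobject import connectionForURI

    class ThreadConnection:
        """Hands each thread its own database connection."""
        def __init__(self, uri):
            self.uri = uri
            self._local = threading.local()

        def __get__(self, obj, objtype=None):
            if not hasattr(self._local, 'conn'):
                self._local.conn = connectionForURI(self.uri)
            return self._local.conn

A class could then set _connection = ThreadConnection('postgres://...'), provided the SQLObject version in use resolves _connection through the normal descriptor protocol, as Sidnei's setup apparently does.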
From: Ian B. <ia...@co...> - 2004-12-31 19:09:34
Luke Opperman wrote:
> Quoting Oleg Broytmann <ph...@ph...>:
>> On Fri, Dec 31, 2004 at 11:05:41AM -0200, Carlos Ribeiro wrote:
>>> > conn = connectionForURI("...")
>>> > if conn.supportTransactions:
>>> >     conn = conn.transaction()
>>> >
>>> > class Person(SQLObject):
>>> >     _connection = conn
>>
>> Neither. You can call conn.begin(), conn.commit() and conn.rollback()
>> at your will. After calling conn.rollback() you have to call
>> conn.begin(). That's simple.
>
> Am I correct in thinking this is unusable in a multi-threaded environment,
> like Webware? The Person class shares one underlying connection to the
> database (pool is bypassed by Transaction), so if I have two interleaved
> requests to an UpdateUser process (whether for different user records or
> not) they will both try to begin(), and when one commits the other's
> changes up to that point go along, or a rollback in one ends the
> transaction for the other?

It's compatible with threads, you just have to pass the connection in any
time it might be ambiguous what transaction you are working in (i.e.,
whenever you are dealing with class methods). Though I'm curious what people
do in Zope 3, if there's anything special in sqlo... there transactions are
often per-thread (which is actually a lot more convenient, since you don't
have to pass the connection around, and usually what you really want anyway).

--
Ian Bicking / ia...@co... / http://blog.ianbicking.org
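Concretely, "passing the connection in" for class-level calls looks like the lines below. Person and its name column are stand-ins borrowed from the examples elsewhere in this thread, and conn is assumed to be an existing connection.

    # Pass the transaction explicitly wherever a class-level call is made,
    # so concurrent threads never share an ambiguous connection.
    trans = conn.transaction()
    brian = Person.get(1, connection=trans)
    results = Person.select(Person.q.name == 'Brian', connection=trans)
    Person(name='New Person', connection=trans)   # creation inside the transaction
    trans.commit()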
From: Oleg B. <ph...@ma...> - 2004-12-31 18:33:58
On Fri, Dec 31, 2004 at 11:57:55AM -0600, Luke Opperman wrote:
> >>> conn = connectionForURI("...")
> >>> if conn.supportTransactions:
> >>>     conn = conn.transaction()
> >>>
> >>> class Person(SQLObject):
> >>>     _connection = conn
> >>
> > Neither. You can call conn.begin(), conn.commit() and conn.rollback()
> > at your will. After calling conn.rollback() you have to call
> > conn.begin(). That's simple.
>
> Am I correct in thinking this is unusable in a multi-threaded environment

I don't know. I have never used threads and I hope I never will. The above
code works great in a forking server (Quixote+SCGI).

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Carlos R. <car...@gm...> - 2004-12-31 18:25:53
On Fri, 31 Dec 2004 11:57:55 -0600, Luke Opperman <lu...@me...> wrote:
> Quoting Oleg Broytmann <ph...@ph...>:
> > On Fri, Dec 31, 2004 at 11:05:41AM -0200, Carlos Ribeiro wrote:
> >> > conn = connectionForURI("...")
> >> > if conn.supportTransactions:
> >> >     conn = conn.transaction()
> >> >
> >> > class Person(SQLObject):
> >> >     _connection = conn
> >
> > Neither. You can call conn.begin(), conn.commit() and conn.rollback()
> > at your will. After calling conn.rollback() you have to call
> > conn.begin(). That's simple.
>
> Am I correct in thinking this is unusable in a multi-threaded environment,
> like Webware? The Person class shares one underlying connection to the
> database (pool is bypassed by Transaction), so if I have two interleaved
> requests to an UpdateUser process (whether for different user records or
> not) they will both try to begin(), and when one commits the other's
> changes up to that point go along, or a rollback in one ends the
> transaction for the other?
>
> Or am I missing something?

I may be wrong, but some kinds of connection objects are thread-safe. There's
some info on the threadsafety value that is exposed by the DBAPI 2.0... I
don't have enough experience to tell how it behaves in practice.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: car...@gm...
mail: car...@ya...
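The value Carlos refers to is the module-level threadsafety constant defined by the DB-API 2.0 specification; checking it for whichever driver is in use is a one-liner. psycopg2 below is only an example driver, not necessarily the one used in this thread.

    # DB-API 2.0: threadsafety is 0 (not thread-safe), 1 (module may be shared),
    # 2 (module and connections), or 3 (module, connections and cursors).
    import psycopg2                  # example driver; any DB-API module works
    print(psycopg2.threadsafety)     # psycopg2 reports 2: connections may be shared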
From: Luke O. <lu...@me...> - 2004-12-31 17:58:03
Quoting Oleg Broytmann <ph...@ph...>:
> On Fri, Dec 31, 2004 at 11:05:41AM -0200, Carlos Ribeiro wrote:
>> > conn = connectionForURI("...")
>> > if conn.supportTransactions:
>> >     conn = conn.transaction()
>> >
>> > class Person(SQLObject):
>> >     _connection = conn
>>
> Neither. You can call conn.begin(), conn.commit() and conn.rollback()
> at your will. After calling conn.rollback() you have to call
> conn.begin(). That's simple.

Am I correct in thinking this is unusable in a multi-threaded environment,
like Webware? The Person class shares one underlying connection to the
database (pool is bypassed by Transaction), so if I have two interleaved
requests to an UpdateUser process (whether for different user records or not)
they will both try to begin(), and when one commits the other's changes up to
that point go along, or a rollback in one ends the transaction for the other?

Or am I missing something?

- Luke
From: Oleg B. <ph...@ph...> - 2004-12-31 13:11:51
On Fri, Dec 31, 2004 at 11:05:41AM -0200, Carlos Ribeiro wrote:
> > conn = connectionForURI("...")
> > if conn.supportTransactions:
> >     conn = conn.transaction()
> >
> > class Person(SQLObject):
> >     _connection = conn
>
> Genuinely curious. What are the actual semantics of this? Does it open
> a transaction for the entire lifetime of the connection, or does it
> use a transaction for each operation? The first is not useful, and the
> latter seems like overkill (not to mention that the granularity of the
> transaction, in both cases, leaves a lot to be desired).

Neither. You can call conn.begin(), conn.commit() and conn.rollback() at your
will. After calling conn.rollback() you have to call conn.begin(). That's
simple.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
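In code, the cycle Oleg describes is roughly the following sketch. The URI is a placeholder and Person is assumed to be declared against this connection as in the snippet quoted above.

    # Sketch of the begin()/commit()/rollback() cycle on a transaction-wrapped
    # connection (URI and Person class are placeholders).
    from sqlobject import connectionForURI

    conn = connectionForURI("postgres://user:pass@localhost/db").transaction()

    try:
        Person(name='Brian', connection=conn)
        conn.commit()
    except Exception:
        conn.rollback()
        conn.begin()        # per Oleg: after rollback(), call begin() again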
From: Carlos R. <car...@gm...> - 2004-12-31 13:05:49
On Fri, 31 Dec 2004 15:59:15 +0300, Oleg Broytmann <ph...@ma...> wrote:
> On Fri, Dec 31, 2004 at 01:02:25AM -0500, Brian Beck wrote:
> > - Do only methods that are passed the transaction provided by
> > conn.transaction() take advantage of it? If so, how do you insert new
> > rows within a transaction? I can only see one way to create a new row,
> > i.e.: Person(name='Brian'), but no way to pass this a transaction
> > 'instance' (I tried a few different ways).
>
> The simplest method is to create a connection, wrap it into a
> transaction and after that declare your tables:
>
> conn = connectionForURI("...")
> if conn.supportTransactions:
>     conn = conn.transaction()
>
> class Person(SQLObject):
>     _connection = conn

Genuinely curious. What are the actual semantics of this? Does it open a
transaction for the entire lifetime of the connection, or does it use a
transaction for each operation? The first is not useful, and the latter seems
like overkill (not to mention that the granularity of the transaction, in
both cases, leaves a lot to be desired).

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: car...@gm...
mail: car...@ya...
From: Oleg B. <ph...@ma...> - 2004-12-31 12:59:25
On Fri, Dec 31, 2004 at 01:02:25AM -0500, Brian Beck wrote:
> - Do only methods that are passed the transaction provided by
> conn.transaction() take advantage of it? If so, how do you insert new
> rows within a transaction? I can only see one way to create a new row,
> i.e.: Person(name='Brian'), but no way to pass this a transaction
> 'instance' (I tried a few different ways).

The simplest method is to create a connection, wrap it into a transaction and
after that declare your tables:

    conn = connectionForURI("...")
    if conn.supportTransactions:
        conn = conn.transaction()

    class Person(SQLObject):
        _connection = conn

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Luke O. <lu...@me...> - 2004-12-31 08:39:28
Quoting Brian Beck <ex...@gm...>:
> - Do only methods that are passed the transaction provided by
> conn.transaction() take advantage of it? If so, how do you insert new
> rows within a transaction? I can only see one way to create a new row,
> i.e.: Person(name='Brian'), but no way to pass this a transaction
> 'instance' (I tried a few different ways).

    trans = conn.transaction()
    ...
    Person(name='Brian', connection=trans)

(To your question, yes, and the constructor/create method takes a connection
argument like all the others.)

On the topic of 0.6.1 and transactions, how are people dealing with
pre-existing objects that need to be updated in a transaction? In my base
class I add a method to return a copy of the object in a new transaction:

    def inConnection(self, connection):
        return self.__class__.get(self.id, connection=connection)

    brian = Person.get(1)
    ...  # use brian for display or whatever
    trans = conn.transaction()
    ...  # now need to update brian in trans
    brianForUpdate = brian.inConnection(trans)
    brianForUpdate.set(...)

Although I suppose nearly the same effect could be had by:

    brian._connection = trans

Although, will brian be moved out of the old connection's cache? Be added to
the new connection's (transaction's) cache? If I have other references to the
cached object, they'll suddenly be in the transaction, right? Seems to me
that's why I originally wrote inConnection to re-instantiate.

So how are other people dealing with this situation? I know way way back we
talked about a copyToConnection method which did a little more (would create
the object in the new connection if it didn't exist in that database, say).

- Luke

P.S. Speaking of local additions to all my SO classes, I have a method
getByID that wraps get() in a try-except and returns None if it wasn't found,
for code situations where it's less clear to use exceptions:

    person = Person.getByID(req.field('id', None))
    if not person:
        person = Person(...)

Useful?
From: Brian B. <ex...@gm...> - 2004-12-31 06:02:49
Oleg Broytmann wrote:
> Looks ok. What is wrong in your case?

I'm not sure if I have the specific code anymore, but I will try to reproduce
it if necessary. The issue is described below.

- Do only methods that are passed the transaction provided by
  conn.transaction() take advantage of it? If so, how do you insert new rows
  within a transaction? I can only see one way to create a new row, i.e.:
  Person(name='Brian'), but no way to pass this a transaction 'instance' (I
  tried a few different ways).

Given the above question, I can't remember exactly what code I settled on in
which I thought row creations were being done within a transaction. But I did
a time.clock() in 3 places: before create/insert (t1), after create/insert
(t2), and after commit (t3). The difference between t2 and t1 was very large,
and the difference between t3 and t2 was close to 0. So insertions were being
auto-committed and thus not within the transaction.

A better explanation of how to do this would be appreciated.

--
Brian Beck
Adventurer of the First Order
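Brian's measurement can be reproduced with something like the sketch below; the names, row count, and use of time.time() are arbitrary, and Person/conn are assumed to be set up as in the earlier examples in this thread. If the first interval dwarfs the second, the inserts were committed individually rather than at commit() time.

    # Rough timing of insert vs. commit, to see whether inserts happen inside
    # the transaction.
    import time

    trans = conn.transaction()
    t1 = time.time()
    for i in range(500):
        Person(name='person-%d' % i, connection=trans)
    t2 = time.time()
    trans.commit()
    t3 = time.time()
    print("inserts: %.2fs, commit: %.2fs" % (t2 - t1, t3 - t2))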
From: pkoelle <pk...@gm...> - 2004-12-30 18:46:53
Ian Bicking wrote:
> If you want something that gets triggered on every update, then
> overriding _SO_setValue() and set() would probably make sense. It's not
> a stable interface, in that it might get changed in the future (but not
> for a little while at least).

Thanks Ian and Oleg,

    def _SO_setValue(self, name, value, fromPython, toPython):
        ...my stuff goes here... and name is the name of the column ;)
        super(Father, self)._SO_setValue(name, value, fromPython, toPython)

seems to do the trick.

thanks again
Paul
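Fleshed out a little, the ini-file-syncing hook Paul describes might look like the sketch below, built on the _SO_setValue override shown above. The section name, file path, and columns are invented for illustration.

    # Illustrative only: mirror every column update into an ini file.
    import ConfigParser                      # Python 2, matching the thread
    from sqlobject import SQLObject, StringCol, IntCol

    class Father(SQLObject):
        name = StringCol()
        age = IntCol()

        def _SO_setValue(self, name, value, fromPython, toPython):
            super(Father, self)._SO_setValue(name, value, fromPython, toPython)
            cfg = ConfigParser.ConfigParser()
            cfg.read('father.ini')
            if not cfg.has_section('father'):
                cfg.add_section('father')
            cfg.set('father', name, str(value))   # column name becomes the option
            f = open('father.ini', 'w')
            try:
                cfg.write(f)
            finally:
                f.close()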
From: Ian B. <ia...@co...> - 2004-12-30 17:06:20
Brian Beck wrote:
> Starting with version 2.0 (a while ago), SQLite has supported
> transactions, so maybe the documentation can be updated, and code if
> necessary? Doing some timings with insert/update and commit with
> SQLObject seems to suggest that even though SQLite supports
> transactions, SQLObject doesn't really take advantage of them. Is this
> the case?

The documentation was inaccurate about SQLite and transactions; I've removed
that from the documentation.

--
Ian Bicking / ia...@co... / http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2004-12-30 17:04:09
Stuart Bishop wrote:
> Is there actually a use case for allowing each column to have a
> different encoding?

Probably not. Though there's probably a use case for every database
connection to have a different encoding.

> I know for PostgreSQL it is simply a matter of setting the database
> encoding to Unicode and sending everything as UTF-8 by simply encoding
> the entire query (which takes care of other issues like Unicode column
> names as well). The only use cases I can come up with for your scenario
> should be using BINARY columns instead of VARCHAR - in particular, since
> the database doesn't know the encoding you are using, all your basic
> string operations, sorting etc. are now broken.

This suggests we should do it in a way that we allow Unicode-aware databases
to get Unicode data directly, and other databases use transparent encoding.

> Hmm... perhaps if you need to store text in some encoding that doesn't
> contain the ASCII character set it might be necessary, but I don't know
> what character sets these are or if any databases actually support them.
> I've gone through the list of encodings PostgreSQL supports and they all
> contain the basic latin letters and can be used to encode SQL
> statements, so I suspect this is not a requirement.

That seems overly aggressive. It just feels very wrong to encode the entire
query.

--
Ian Bicking / ia...@co... / http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2004-12-30 17:01:15
pkoelle wrote:
> I am surely overlooking something simple, but I haven't found a way to
> override the setter without hardcoding the column's name. I can do
> something like:
>
>     def _setName(self, value):
>         blabla
>         self._SO_set_Name(value)
>
> but this applies to the Name column only, and I have to perform the
> *same* operation for *all* columns and would rather avoid writing the
> same code for each column. Basically I'd like to update an ini file
> through ConfigParser on writes, where the option (in terms of
> ConfigParser) should be the name attribute of the column being changed.
> I have no idea if messing with _SO_setValue() is a sane approach.
> Moreover, there seems to be no way to get the name attribute of the
> column at runtime... Any ideas?

If you want something that gets triggered on every update, then overriding
_SO_setValue() and set() would probably make sense. It's not a stable
interface, in that it might get changed in the future (but not for a little
while at least).

--
Ian Bicking / ia...@co... / http://blog.ianbicking.org
From: Oleg B. <ph...@ma...> - 2004-12-30 16:39:20
On Thu, Dec 30, 2004 at 05:28:41PM +0100, pkoelle wrote:
> def _setName(self, value):
>     blabla
>     self._SO_set_Name(value)

    def _setName(self, name, value):
        blabla
        setter = getattr(self, "_SO_set_%s" % name)
        setter(value)

> Moreover, there seems to be no way to get the name attribute of the
> column at runtime... Any ideas?

It is col.name.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: pkoelle <pk...@gm...> - 2004-12-30 16:30:09
Hi list,

I am surely overlooking something simple, but I haven't found a way to
override the setter without hardcoding the column's name. I can do something
like:

    def _setName(self, value):
        blabla
        self._SO_set_Name(value)

but this applies to the Name column only, and I have to perform the *same*
operation for *all* columns and would rather avoid writing the same code for
each column. Basically I'd like to update an ini file through ConfigParser on
writes, where the option (in terms of ConfigParser) should be the name
attribute of the column being changed. I have no idea if messing with
_SO_setValue() is a sane approach. Moreover, there seems to be no way to get
the name attribute of the column at runtime... Any ideas?

thanks
Paul
From: Oleg B. <ph...@ph...> - 2004-12-30 15:17:12
On Thu, Dec 30, 2004 at 05:06:24PM +0200, Max Ischenko wrote:
> > Try dbusers = User.selectBy(isGuest=False,
> >                             email=email.encode(dbEncoding))
>
> Hey, surely I could figure out that myself, that was not the point. ;-)

But currently it seems to be the only way.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Max I. <ma...@uc...> - 2004-12-30 15:08:56
Oleg Broytmann wrote:
> On Wed, Dec 29, 2004 at 10:19:17AM +0200, Max Ischenko wrote:
>
>> dbusers = User.selectBy(isGuest=False, email=email)
>> n = dbusers.count()
>
> Try dbusers = User.selectBy(isGuest=False, email=email.encode(dbEncoding))

Hey, surely I could figure out that myself, that was not the point. ;-)
From: Oleg B. <ph...@ph...> - 2004-12-30 09:15:40
Also it is a good time to overhaul Plan06.txt. Some things have already been
implemented, some plans have changed. I'm especially interested to hear your
plans about inheritance.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.
From: Oleg B. <ph...@ph...> - 2004-12-30 08:28:52
On Wed, Dec 29, 2004 at 11:19:45AM -0600, Ian Bicking wrote:
> Oleg Broytmann wrote:
> > I cannot build SQLObject.txt using rst2html.py - SQLObject.txt
> > references external code snippets that do not exist.
>
> Whoops, fixed in 505.

    $ ./build
    Traceback (most recent call last):
      File "./examplestripper.py", line 96, in ?
        snipAll(arg)
      File "./examplestripper.py", line 87, in snipAll
        snipAll(fn)
      File "./examplestripper.py", line 89, in snipAll
        snipFile(dir)
      File "./examplestripper.py", line 67, in snipFile
        f = open(fn + ".py.html")
    IOError: [Errno 2] No such file or directory: './snippets/slicing-batch.py.html'
    /home/phd/work/SQLObject/SQLObject/docs/FAQ.txt:26: (SEVERE/4) Problems with "raw" directive path:
    [Errno 2] No such file or directory: u'../examples/snippets/leftjoin-simple.html'.

        .. raw:: html
           :file: ../examples/snippets/leftjoin-simple.html

    Exiting due to level-4 (SEVERE) system message.

Oleg.
--
Oleg Broytmann   http://phd.pp.ru/   ph...@ph...
Programmers don't die, they just GOSUB without RETURN.