sqlobject-discuss Mailing List for SQLObject (Page 17)
SQLObject is a Python ORM.
Brought to you by:
ianbicking,
phd
From: Oleg B. <ph...@ph...> - 2013-04-24 08:09:20
|
Hi!

On Wed, Apr 24, 2013 at 08:51:57AM +0100, "Maciej (Matchek) Bliziński" <ma...@op...> wrote:
> 2013/4/16 Oleg Broytman <ph...@ph...>
> > On Tue, Apr 16, 2013 at 10:52:29AM +0100, "Maciej (Matchek) Bliziński" <ma...@op...> wrote:
> > > File "/opt/csw/lib/python/site-packages/sqlobject/mysql/mysqlconnection.py",
> > >   line 71, in makeConnection
> > >     conn.ping(True)  # Attempt to reconnect. This setting is persistent.
> > > ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
> >
> > "Commands out of sync" means the application calls functions in the wrong order:
> > https://dev.mysql.com/doc/refman/5.1/en/commands-out-of-sync.html
> > Is the app multithreaded? Could it be the app tries to reuse the same
> > transaction in different threads?
>
> That was my suspicion too, but it's a WSGI application which doesn't
> use threads. I experimented by reducing the number of WSGI application
> copies / threads in the Apache config to 1. It didn't help, so it's
> probably not that.
>
> I've got an update: I noticed that MySQLdb[1] has a new version (1.2.4),
> so I upgraded it and the problem seems to have gone away. Maybe it was
> some interplay between SQLObject and the MySQL driver for Python, or
> the driver simply had a problem.

I'm glad it's fixed. Good luck!

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Maciej (M. B. <ma...@op...> - 2013-04-24 07:52:45
|
2013/4/16 Oleg Broytman <ph...@ph...>
> On Tue, Apr 16, 2013 at 10:52:29AM +0100, "Maciej (Matchek) Bliziński" <ma...@op...> wrote:
> > File "/opt/csw/lib/python/site-packages/sqlobject/mysql/mysqlconnection.py",
> >   line 71, in makeConnection
> >     conn.ping(True)  # Attempt to reconnect. This setting is persistent.
> > ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
>
> conn.ping() is intended to reopen the connection after a timeout;
> AFAIR the default timeout is 3600 seconds, not a few minutes. There
> shouldn't be any problem with ping after a minute or two.
>
> "Commands out of sync" means the application calls functions in the wrong order:
> https://dev.mysql.com/doc/refman/5.1/en/commands-out-of-sync.html
> Is the app multithreaded? Could it be the app tries to reuse the same
> transaction in different threads?

That was my suspicion too, but it's a WSGI application which doesn't use
threads. I experimented by reducing the number of WSGI application
copies / threads in the Apache config to 1. It didn't help, so it's
probably not that.

I've got an update: I noticed that MySQLdb[1] has a new version (1.2.4),
so I upgraded it and the problem seems to have gone away. Maybe it was
some interplay between SQLObject and the MySQL driver for Python, or
the driver simply had a problem.

Maciej

[1] https://pypi.python.org/pypi/MySQL-python |
From: Oleg B. <ph...@ph...> - 2013-04-16 18:50:13
|
Hi!

On Tue, Apr 16, 2013 at 10:52:29AM +0100, "Maciej (Matchek) Bliziński" <ma...@op...> wrote:
> File "/opt/csw/lib/python/site-packages/sqlobject/mysql/mysqlconnection.py",
>   line 71, in makeConnection
>     conn.ping(True)  # Attempt to reconnect. This setting is persistent.
> ProgrammingError: (2014, "Commands out of sync; you can't run this command now")

conn.ping() is intended to reopen the connection after a timeout; AFAIR
the default timeout is 3600 seconds, not a few minutes. There shouldn't
be any problem with ping after a minute or two.

"Commands out of sync" means the application calls functions in the wrong order:
https://dev.mysql.com/doc/refman/5.1/en/commands-out-of-sync.html
Is the app multithreaded? Could it be the app tries to reuse the same
transaction in different threads?

> Does it look like a problem with my application, or does it look like
> something that should be handled on the SqlObject side? Or should I
> check for the state of the connection at the start of this function?

None of that, I'm sure. Unfortunately I cannot help further -- I seldom
use MySQL; I use Postgres for bigger projects and SQLite for smaller
ones, so I have to rely on other people's feedback.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Maciej (M. B. <ma...@op...> - 2013-04-16 09:53:18
|
Hello list,

I'm running into a problem when running a small web app in webpy with
SQLObject. The app is a REST interface which handles URLs such as:

GET /pkgdb/rest/srv4/7793c4a5ecd6494b25d475632618e44c/pkg-stats/ HTTP/1.1

and

PUT /releases/catalogs/unstable/i386/SunOS5.9/7793c4a5ecd6494b25d475632618e44c/ HTTP/1.1

When called with PUT, it does a few checks and inserts a row into a
table. When the application runs, it initially works for a minute or
two, but eventually gets into a state in which it always fails with this
exception:

Traceback (most recent call last):
  File "/opt/csw/lib/python/site-packages/web/application.py", line 239, in process
    return self.handle()
  File "/opt/csw/lib/python/site-packages/web/application.py", line 230, in handle
    return self._delegate(fn, self.fvars, args)
  File "/opt/csw/lib/python/site-packages/web/application.py", line 420, in _delegate
    return handle_class(cls)
  File "/opt/csw/lib/python/site-packages/web/application.py", line 396, in handle_class
    return tocall(*args)
  File "/home/maciej/src/opencsw-gar/lib/web/releases_web.py", line 187, in PUT
    srv4 = models.Srv4FileStats.selectBy(md5_sum=md5_sum).getOne()
  File "/opt/csw/lib/python/site-packages/sqlobject/sresults.py", line 277, in getOne
    results = list(self)
  File "/opt/csw/lib/python/site-packages/sqlobject/sresults.py", line 181, in __iter__
    return iter(list(self.lazyIter()))
  File "/opt/csw/lib/python/site-packages/sqlobject/sresults.py", line 189, in lazyIter
    return conn.iterSelect(self)
  File "/opt/csw/lib/python/site-packages/sqlobject/dbconnection.py", line 471, in iterSelect
    return select.IterationClass(self, self.getConnection(),
  File "/opt/csw/lib/python/site-packages/sqlobject/dbconnection.py", line 336, in getConnection
    conn = self.makeConnection()
  File "/opt/csw/lib/python/site-packages/sqlobject/mysql/mysqlconnection.py", line 71, in makeConnection
    conn.ping(True)  # Attempt to reconnect. This setting is persistent.
ProgrammingError: (2014, "Commands out of sync; you can't run this command now")

This state persists until I restart the web server ‒ any query which
requires a connection to the database will fail with the same error,
visible as 500 Internal Server Error on the HTTP client side.

From reading the mysqlconnection.py source code, this function only
intends to throw SQLObject exceptions, and not exceptions specific to
the database engine. I'm guessing that it isn't expected that
conn.ping() could throw an exception?

The failing function in my app starts here:
https://sourceforge.net/apps/trac/gar/browser/csw/mgar/gar/v2/lib/web/releases_web.py#L143

Does it look like a problem with my application, or does it look like
something that should be handled on the SqlObject side? Or should I
check for the state of the connection at the start of this function?

Versions of software I'm using:

SQLObject 1.3.2
Python 2.6.8
MySQL Server 5.5.30
MySQLdb python module 1.2.3
Apache 2.2.22
Oracle Solaris 10 9/10 s10x_u9wos_14a X86

Maciej |
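[Editor's note] Maciej's question — should makeConnection guard conn.ping()? — can be sketched in isolation. The wrapper below is a hypothetical illustration, not SQLObject's actual mysqlconnection.py code: it assumes a MySQLdb-style connection whose ping(True) enables the client's auto-reconnect, and it closes the connection if the ping itself raises, so a half-open handle never reaches the caller. The FakeConnection class is a made-up stand-in used only to exercise the wrapper without a server.

```python
# Hypothetical guard around the reconnect ping (an assumption, not the
# real sqlobject/mysql/mysqlconnection.py code): if ping() raises a
# driver-level error, close the half-open connection instead of
# leaking it to the caller.
def make_connection(connect):
    conn = connect()
    try:
        conn.ping(True)  # attempt to reconnect; the setting is persistent
    except Exception:
        conn.close()
        raise
    return conn

# Stand-in for MySQLdb.connect, used only to exercise the wrapper.
class FakeConnection(object):
    def __init__(self):
        self.ping_arg = None
        self.closed = False
    def ping(self, reconnect=False):
        self.ping_arg = reconnect
    def close(self):
        self.closed = True

conn = make_connection(FakeConnection)
print(conn.ping_arg, conn.closed)  # True False
```

On the happy path the connection is returned with reconnect enabled; on a ping failure the caller sees the driver's exception but no dangling socket.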
From: Andrew Z <ah...@gm...> - 2013-02-06 16:22:29
|
On Mon, Feb 4, 2013 at 2:40 PM, Oleg Broytman <ph...@ph...> wrote:
> On Mon, Feb 04, 2013 at 02:20:59PM -0700, Andrew Z <ah...@gm...> wrote:
>> With SQLObject 1.3.2 and the mssql backend, how do you specify the
>> schema, equivalent to the command 'use foo'? I want to use a schema
>> that is neither 'dbo' nor the default for the user. I don't see how to
>> do it using the URI after glancing over the documentation and code.
>
> Using a non-default schema is only implemented for Postgres. You can
> see how it's implemented in sqlobject/postgres/pgconnection.py and
> implement something similar for mssql.

Thank you for the tip. Postgres makes it easier because it has a
session-level command for specifying the schema search path, but
Microsoft SQL Server requires the schema to be prepended to every table
reference in every query. For now I will stick with the default schema.

Best regards,
Andrew |
From: Oleg B. <ph...@ph...> - 2013-02-04 21:40:37
|
On Mon, Feb 04, 2013 at 02:20:59PM -0700, Andrew Z <ah...@gm...> wrote:
> With SQLObject 1.3.2 and the mssql backend, how do you specify the
> schema, equivalent to the command 'use foo'? I want to use a schema
> that is neither 'dbo' nor the default for the user. I don't see how to
> do it using the URI after glancing over the documentation and code.

Using a non-default schema is only implemented for Postgres. You can
see how it's implemented in sqlobject/postgres/pgconnection.py and
implement something similar for mssql.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Andrew Z <ah...@gm...> - 2013-02-04 21:21:26
|
With SQLObject 1.3.2 and the mssql backend, how do you specify the
schema, equivalent to the command 'use foo'? I want to use a schema
that is neither 'dbo' nor the default for the user. I don't see how to
do it using the URI after glancing over the documentation and code.

Andrew |
From: Oleg B. <ph...@ph...> - 2013-02-03 10:07:19
|
Hello!

On Sat, Feb 02, 2013 at 07:44:23PM -0700, Andrew Z <ah...@gm...> wrote:
> I read http://www.sqlobject.org/DeveloperGuide.html#testing and am
> still unclear on exactly how to run a test. Sorry, this may be a
> basic question, but I hope the Developer Guide could be more explicit.
>
> After installing py.test, I basically did this:
>
> $ svn co http://svn.colorstudy.com/SQLObject/trunk SQLObject
> $ cd SQLObject
> $ python sqlobject/tests/test_unicode.py -D sqlite:///tmp/foo.db
>
> However:
> Traceback (most recent call last):
>   File "sqlobject/tests/test_unicode.py", line 1, in <module>
>     from sqlobject import *
> ImportError: No module named sqlobject
>
> I also tried something like this and got basically the same error:
>
> $ pytest sqlobject/tests/test_unicode.py -D sqlite:///tmp/foo.db

Use py.test:

$ py.test sqlobject/tests/test_unicode.py -D sqlite:///tmp/foo.db

py.test is a library from py.lib; pytest is from logilab -- a completely
different library.

During the years I've been working on SQLObject I developed a number of
shell scripts to run tests and collect reports. If you are interested I
can send them to you.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Andrew Z <ah...@gm...> - 2013-02-03 02:44:53
|
I read http://www.sqlobject.org/DeveloperGuide.html#testing and am
still unclear on exactly how to run a test. Sorry, this may be a basic
question, but I hope the Developer Guide could be more explicit.

After installing py.test, I basically did this:

$ svn co http://svn.colorstudy.com/SQLObject/trunk SQLObject
$ cd SQLObject
$ python sqlobject/tests/test_unicode.py -D sqlite:///tmp/foo.db

However:

Traceback (most recent call last):
  File "sqlobject/tests/test_unicode.py", line 1, in <module>
    from sqlobject import *
ImportError: No module named sqlobject

I also tried something like this and got basically the same error:

$ pytest sqlobject/tests/test_unicode.py -D sqlite:///tmp/foo.db

Andrew |
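[Editor's note] The ImportError above is a sys.path effect rather than a py.test quirk: running a file as a script puts the script's own directory (here sqlobject/tests), not the checkout root, at the front of sys.path, so the top-level sqlobject package is invisible; test runners such as py.test insert the rootdir themselves. A minimal reproduction with a throwaway package (the names 'pkg' and test_demo.py are made up for illustration):

```python
# Reproduce the "run a test file directly" ImportError with a dummy
# package tree; 'pkg' stands in for the sqlobject checkout.
import os
import subprocess
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkg', 'tests'))
open(os.path.join(root, 'pkg', '__init__.py'), 'w').close()
test_file = os.path.join(root, 'pkg', 'tests', 'test_demo.py')
with open(test_file, 'w') as f:
    f.write('import pkg\n')  # same shape as "from sqlobject import *"

# Run from the project root, the way the original poster did.
result = subprocess.run([sys.executable, test_file], cwd=root,
                        capture_output=True, text=True)
print('No module named' in result.stderr)  # True: 'pkg' is not on sys.path
```

The same script succeeds if the checkout root is added to PYTHONPATH or the package is installed, which is effectively what running it through a test runner does.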
From: Oleg B. <ph...@ph...> - 2013-01-22 20:33:11
|
Hi!

On Tue, Jan 22, 2013 at 03:21:06PM -0500, Markos Kapes <mk...@gm...> wrote:
> Any good reason why this code should fail to insert while the
> underlying query works in the mysql shell?
>
> cursor = cbx.conn.cursor()
> cursor.execute(u"INSERT INTO notes (message, author) VALUES ('test', 'test');")
>
> Furthermore, this fails silently, so there is no error to give me a clue.
> Selects still work as expected, but somehow all the old legacy code
> I've got that uses statements of the above type has stopped working.
> As always, thanks much for the advice.

If there is no error, in what way does it fail? It didn't insert the
row? Could it be an automatic rollback at the end of a transaction?

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
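[Editor's note] Oleg's automatic-rollback theory is easy to demonstrate with the stdlib sqlite3 driver (used here only because it needs no server; the same DB-API rules apply to MySQLdb with InnoDB tables): an INSERT issued through a cursor lives in an open transaction, and it silently vanishes if the connection rolls back before commit().

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE notes (message TEXT, author TEXT)")
conn.commit()

cur = conn.cursor()
cur.execute("INSERT INTO notes (message, author) VALUES ('test', 'test')")
conn.rollback()  # what an automatic rollback at transaction end would do
print(conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0])  # 0

cur.execute("INSERT INTO notes (message, author) VALUES ('test', 'test')")
conn.commit()    # the fix: commit before the transaction is discarded
print(conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0])  # 1
```

No exception is raised in the rollback case, which matches the "fails silently" symptom: the statement executes fine, the transaction is simply never committed.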
From: Markos K. <mk...@gm...> - 2013-01-22 20:21:16
|
Any good reason why this code should fail to insert while the underlying
query works in the mysql shell?

cursor = cbx.conn.cursor()
cursor.execute(u"INSERT INTO notes (message, author) VALUES ('test', 'test');")

Furthermore, this fails silently, so there is no error to give me a clue.
Selects still work as expected, but somehow all the old legacy code I've
got that uses statements of the above type has stopped working.

As always, thanks much for the advice,

--Markos Kapes |
From: Oleg B. <ph...@ph...> - 2013-01-18 20:46:28
|
Hello!

SourceForge insisted on upgrading registered projects to their new
backend called Allura, so I allowed them to upgrade the project. See
http://sourceforge.net/projects/sqlobject/

The most notable changes are:

administrative interface: https://sourceforge.net/p/sqlobject/admin/
tickets: http://sourceforge.net/p/sqlobject/_list/tickets
wiki: http://sourceforge.net/p/sqlobject/wiki/Home/

Not sure how to use the new wiki to the benefit of the community.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Oleg B. <ph...@ph...> - 2012-12-30 21:48:42
|
Hello and happy New Year!

On Sun, Dec 30, 2012 at 09:46:00PM +0100, Tomas Vondra <tv...@fu...> wrote:
> I'm learning SQLObject - checking if we could use it on our projects -
> and I got stuck at hashing passwords inside the database.
>
> Imagine a simple table with info about users:
>
> CREATE TABLE users (
>     id INT PRIMARY KEY,
>     login TEXT NOT NULL UNIQUE,
>     pwdhash TEXT NOT NULL
> )
>
> where "pwdhash" is a hashed password. We're using PostgreSQL and we
> usually handle this inside the database using the pgcrypto module,
> which provides various hash/crypto functions. An insert into the table
> then looks like this:
>
> INSERT INTO users VALUES (1, 'login', crypt('mypassword', gen_salt('bf')))
>
> which generates a salt, computes the hash and stores both in a single
> text column (salt+hash). The authentication then looks like this:
>
> SELECT id, login FROM users
>  WHERE login = 'login' AND pwdhash = crypt('mypassword', pwdhash)
>
> which reuses the salt stored in the column.
>
> I'm investigating if we could do this with SQLObject

I think it's possible with many lines of code. SQLObject doesn't send
raw values on INSERT/UPDATE -- it calls sqlrepr(value), which in turn
calls value.__sqlrepr__(dbname) if the value has a __sqlrepr__ method.
So you have to return a wrapper with a __sqlrepr__ method, and it can be
returned from a validator.

See the following program as a small example:

from formencode import validators
from sqlobject import SQLObject, StringCol
from sqlobject.col import Col, SOCol

class CryptValue(object):
    def __init__(self, value):
        self.value = value
    def __sqlrepr__(self, db):
        assert db == 'postgres'
        return "crypt('%s')" % self.value

class CryptValidator(validators.Validator):
    def from_python(self, value, state):
        return CryptValue(value)

class SOCryptCol(SOCol):
    def createValidators(self, dataType=None):
        return [CryptValidator()]
    def _sqlType(self):
        return 'TEXT NOT NULL'

class CryptCol(Col):
    baseClass = SOCryptCol

class Test(SQLObject):
    test1 = StringCol()
    test2 = CryptCol()

Test.createTable()
test = Test(test1='1', test2='2')
print test

It produces the following debugging output:

1/QueryR  : CREATE TABLE test (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    test1 TEXT,
    test2 TEXT NOT NULL
)
2/QueryIns: INSERT INTO test (test1, test2) VALUES ('1', crypt('2'))

I hope it'd be helpful as a starting point.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
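[Editor's note] The dispatch Oleg relies on can be sketched without a database or even SQLObject installed. The simplified sqlrepr below is an illustration, not the real converter (which lives in sqlobject/converters.py and handles many more types), and the crypt(..., gen_salt('bf')) rendering mirrors Tomas's original INSERT:

```python
# Simplified stand-in for SQLObject's sqlrepr dispatch: objects with a
# __sqlrepr__ method render themselves; strings get quoted literally.
def sqlrepr(value, db):
    if hasattr(value, '__sqlrepr__'):
        return value.__sqlrepr__(db)
    if isinstance(value, str):
        return "'%s'" % value.replace("'", "''")
    return str(value)

class CryptValue(object):
    """Render a password as a pgcrypto crypt() call instead of a literal."""
    def __init__(self, value):
        self.value = value
    def __sqlrepr__(self, db):
        assert db == 'postgres'
        return "crypt(%s, gen_salt('bf'))" % sqlrepr(self.value, db)

print(sqlrepr(CryptValue('mypassword'), 'postgres'))
# crypt('mypassword', gen_salt('bf'))
print(sqlrepr("o'hara", 'postgres'))  # 'o''hara'
```

Because the wrapper is produced by a validator's from_python(), ordinary attribute assignment in application code (user.pwdhash = 'mypassword') is all that is needed; the SQL rewriting happens at render time.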
From: Tomas V. <tv...@fu...> - 2012-12-30 20:59:12
|
Hi,

I'm learning SQLObject - checking if we could use it on our projects -
and I got stuck at hashing passwords inside the database.

Imagine a simple table with info about users:

CREATE TABLE users (
    id INT PRIMARY KEY,
    login TEXT NOT NULL UNIQUE,
    pwdhash TEXT NOT NULL
)

where "pwdhash" is a hashed password. We're using PostgreSQL and we
usually handle this inside the database using the pgcrypto module, which
provides various hash/crypto functions. An insert into the table then
looks like this:

INSERT INTO users VALUES (1, 'login', crypt('mypassword', gen_salt('bf')))

which generates a salt, computes the hash and stores both in a single
text column (salt+hash). The authentication then looks like this:

SELECT id, login FROM users
 WHERE login = 'login' AND pwdhash = crypt('mypassword', pwdhash)

which reuses the salt stored in the column.

I'm investigating if we could do this with SQLObject, but it seems to me
the answer is 'no'. I see it's possible to define magic attributes, but
that's not enough, as I need to rewrite the SQL (to add the calls to the
crypt/gen_salt functions). I've done similar evaluations with SQLAlchemy,
and it supports 'hybrid values' and 'type decorators' to do this. Is it
possible to do something similar in SQLObject, or do I have to move the
functionality to the application level?

regards
Tomas |
From: Sophana K <sop...@gm...> - 2012-12-05 22:34:34
|
I just found out that the script that launches my web server doesn't
correctly activate the Python virtualenv that I had set up. I was using
a very old version of SQLObject (0.10.3) and of MySQLdb. I just fixed
this, and will now wait for the next freeze to happen (if it ever
happens again)...

Thanks for your great support.

On Tue, Dec 4, 2012 at 6:47 PM, Oleg Broytman <ph...@ph...> wrote:
> Hi! Pity to hear you have problems.
>
> On Tue, Dec 04, 2012 at 01:02:52PM +0100, Sophana K <sop...@gm...> wrote:
> > Since about one year ago (maybe more...), from time to time (about
> > every week/month), the python process completely freezes under high load.
>
> It'd be helpful to find a version of SO that doesn't freeze.
> Unfortunately it requires rolling back your code and running a lot of
> experiments.
>
> > Reading the code, I don't understand the call path from dbConnection
> > to the SqlHub.
>
> There shouldn't be any path -- you use sqlhub as the connection:
>
> class MyClass(SQLObject):
>     _connection = sqlhub  # Actually, this is the default
>
> sqlhub.threadConnection = connectionForURI('...')
>
> Sqlhub's __get__ and __set__ methods return the real connection.
>
> > How is the connection pool managed?
>
> You can see the code in dbconnection.py starting at line 332: class
> DBAPI, method getConnection. You can explicitly disable the pool by
> setting dbConnection._pool = None.
>
> > Is it thread safe?
>
> Should be. The pool is protected by _poolLock. Does something in the
> code trigger your suspicions?
>
> Oleg.
> --
> Oleg Broytman http://phdru.name/ ph...@ph...
> Programmers don't die, they just GOSUB without RETURN. |
From: Oleg B. <ph...@ph...> - 2012-12-04 22:52:28
|
Hi! Pity to hear you have problems.

On Tue, Dec 04, 2012 at 01:02:52PM +0100, Sophana K <sop...@gm...> wrote:
> Since about one year ago (maybe more...), from time to time (about
> every week/month), the python process completely freezes under high load.

It'd be helpful to find a version of SO that doesn't freeze.
Unfortunately it requires rolling back your code and running a lot of
experiments.

> Reading the code, I don't understand the call path from dbConnection
> to the SqlHub.

There shouldn't be any path -- you use sqlhub as the connection:

class MyClass(SQLObject):
    _connection = sqlhub  # Actually, this is the default

sqlhub.threadConnection = connectionForURI('...')

Sqlhub's __get__ and __set__ methods return the real connection.

> How is the connection pool managed?

You can see the code in dbconnection.py starting at line 332: class
DBAPI, method getConnection. You can explicitly disable the pool by
setting dbConnection._pool = None.

> Is it thread safe?

Should be. The pool is protected by _poolLock. Does something in the
code trigger your suspicions?

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Sophana K <sop...@gm...> - 2012-12-04 12:03:04
|
Hi,

I'm using SQLObject with Webware for Python on my website, in production
since 2007. Since about one year ago (maybe more...), from time to time
(about every week/month), the python process completely freezes under
high load. I upgraded to SQLObject 1.3.2 with the latest MySQLdb, and
still have the problem.

All signals are ignored; it can only be killed with a kill -9. strace
shows the python process is stuck in a futex kernel call.

Recently, I tried to gdb the frozen python process: all threads are
waiting for a semaphore, except one, which is probably waiting for a
mysql response, in the _mysql.so object of python-mysql. Unfortunately,
the gdb pystack macro doesn't work for me, so I can't get the python
stack. It seems that gdb takes all the CPU during a very, very long
time; I haven't waited long enough...

All threads are waiting for a semaphore. I don't know if it is the
python GIL or another mysql- or sqlobject-related lock. The GIL would
explain that signals are ineffective.

#0  0x00110416 in __kernel_vsyscall ()
#1  0x005a8865 in sem_wait@@GLIBC_2.1 () from /lib/libpthread.so.0
#2  0x0067eafb in PyThread_acquire_lock (lock=0x8dc7028, waitflag=1) at Python/thread_pthread.h:349
#3  0x00682eb8 in lock_PyThread_acquire_lock (self=0x854c430, args=0xb7f3102c) at Modules/threadmodule.c:46
#4  0x0060d7ed in PyCFunction_Call (func=0x8eeb0ec, arg=0xb7f3102c, kw=0x0) at Objects/methodobject.c:108
#5  0x0065ac72 in PyEval_EvalFrameEx (f=0x93d7ff4, throwflag=0) at Python/ceval.c:3564
#6  0x00659fcd in PyEval_EvalFrameEx (f=0x8e2e53c, throwflag=0) at Python/ceval.c:3650
#7  0x0065b6bf in PyEval_EvalCodeEx (co=0x8922410, globals=0x891e3e4, locals=0x0, args=0x9cfa2dc, argcount=1, kws=0x9cfa2e0, kwcount=0, defs=0x891af18, defcount=1, closure=0x0) at Python/ceval.c:2831
#8  0x00659844 in PyEval_EvalFrameEx (f=0x9cfa12c, throwflag=0) at Python/ceval.c:3660
#9  0x00659fcd in PyEval_EvalFrameEx (f=0x8dccfa4, throwflag=0) at Python/ceval.c:3650
#10 0x0065b6bf in PyEval_EvalCodeEx (co=0x87ad848, globals=0x87a924c, locals=0x0, args=0x8d43818, argcount=2, kws=0x0, ...

except one thread (I still have to check all 25 threads...):

#0  0x00110416 in __kernel_vsyscall ()
#1  0x005a952b in read () from /lib/libpthread.so.0
#2  0x00295338 in vio_read () from /usr/lib/mysql/libmysqlclient_r.so.15
#3  0x002953ae in vio_read_buff () from /usr/lib/mysql/libmysqlclient_r.so.15
#4  0x002967ab in ?? () from /usr/lib/mysql/libmysqlclient_r.so.15
#5  0x00296b9b in my_net_read () from /usr/lib/mysql/libmysqlclient_r.so.15
#6  0x0028fe39 in cli_safe_read () from /usr/lib/mysql/libmysqlclient_r.so.15
#7  0x00290c35 in ?? () from /usr/lib/mysql/libmysqlclient_r.so.15
#8  0x0028f1e4 in mysql_real_query () from /usr/lib/mysql/libmysqlclient_r.so.15
#9  0x00243f53 in _mysql_ConnectionObject_query (self=0x941160c, args=0x978138c) at _mysql.c:2008
#10 0x0060d7ed in PyCFunction_Call (func=0x9c300cc, arg=0x978138c, kw=0x0) at Objects/methodobject.c:108
#11 0x0065ac72 in PyEval_EvalFrameEx (f=0x940d924, throwflag=0) at Python/ceval.c:3564
#12 0x00659fcd in PyEval_EvalFrameEx (f=0x9b12c64, throwflag=0) at Python/ceval.c:3650
#13 0x00659fcd in PyEval_EvalFrameEx (f=0x9b7cdf4, throwflag=0) at Python/ceval.c:3650
#14 0x0065b6bf in PyEval_EvalCodeEx (co=0x8d2ead0, globals=0x8d304f4, locals=0x0, args=0x9240fec, argcount=2, kws=0x9240ff4, kwcount=0, defs=0x8d37358, defcount=1, closure=0x0) at Python/ceval.c:2831
...

Note: I also get some "ProgrammingError: Commands out of sync; you can't
run this command now" errors from time to time. This is why I'm
suspecting wrong connection management between my threads. Could it be
related to the fact that I'm using the sqlHub.processConnection feature
of SqlObject?

Reading the code, I don't understand the call path from dbConnection to
the SqlHub. How is the connection pool managed? Is it thread safe?

Thanks in advance for your support.

Best regards |
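[Editor's note] The hub behaviour Sophana asks about can be sketched with a toy class. This is a simplified illustration of how sqlhub behaves, not the real ConnectionHub from sqlobject/dbconnection.py (which is a descriptor, so attribute access on the model class goes through it): threadConnection is thread-local storage, and processConnection is the shared fallback — which is exactly why a processConnection shared across threads can misbehave with a driver that is not thread-safe per connection.

```python
import threading

class ToyConnectionHub(object):
    """Simplified model of sqlhub: a per-thread connection with a
    process-wide fallback (not the real SQLObject class)."""
    def __init__(self):
        self._local = threading.local()
        self.processConnection = None

    @property
    def threadConnection(self):
        return getattr(self._local, 'connection', None)

    @threadConnection.setter
    def threadConnection(self, conn):
        self._local.connection = conn

    def getConnection(self):
        if self.threadConnection is not None:
            return self.threadConnection       # this thread's own connection
        if self.processConnection is not None:
            return self.processConnection      # shared across all threads
        raise AttributeError("No connection configured for this thread")

hub = ToyConnectionHub()
hub.processConnection = 'process-wide connection'
seen = []

def worker():
    hub.threadConnection = 'this thread only'
    seen.append(hub.getConnection())

t = threading.Thread(target=worker)
t.start()
t.join()
print(seen[0])              # this thread only
print(hub.getConnection())  # process-wide connection
```

Setting threadConnection per worker thread (rather than relying on processConnection) is the usual way to give each thread its own MySQL connection.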
From: Oleg B. <ph...@ph...> - 2012-10-20 09:39:13
|
Hello!

I'm pleased to announce versions 1.3.2 and 1.2.4, minor bugfix releases
of SQLObject.

What is SQLObject
=================

SQLObject is an object-relational mapper. Your database tables are
described as classes, and rows are instances of those classes. SQLObject
is meant to be easy to use and quick to get started with.

SQLObject supports a number of backends: MySQL, PostgreSQL, SQLite,
Firebird, Sybase, MSSQL and MaxDB (also known as SAPDB).

Where is SQLObject
==================

Site: http://sqlobject.org
Development: http://sqlobject.org/devel/
Mailing list: https://lists.sourceforge.net/mailman/listinfo/sqlobject-discuss
Archives: http://news.gmane.org/gmane.comp.python.sqlobject
Download:
http://pypi.python.org/pypi/SQLObject/1.3.2
http://pypi.python.org/pypi/SQLObject/1.2.4
News and changes: http://sqlobject.org/News.html

What's New
==========

* Fixed a bug in sqlbuilder.Select.filter - removed comparison with
  SQLTrueClause.
* Neil Muller fixed a number of tests.

For a more complete list, please see the news:
http://sqlobject.org/News.html

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
From: Oleg B. <ph...@ph...> - 2012-09-06 08:11:38
|
On Thu, Sep 06, 2012 at 09:41:46AM +0200, Gert Burger <ger...@gm...> wrote:
> Most of our inserts occur in small batches which won't benefit much
> from bypassing SQLO. From my profiling it doesn't seem like that is
> the problem.
>
> It seems like the expiredCache for each table is growing continuously.
> Since my last email the size of the expiredCache for some tables has
> grown to over a million entries and it never seems to decrease. In
> total the expired caches have about 4 million entries over all tables,
> and current memory usage (150MB RES) doesn't match that number of
> objects (of row data).
>
> So my next question is, how do I prevent the expired caches from
> growing uncontrollably? For some reason the weak references are
> staying active and therefore they are not removed from the caches.

Call connection.cache.clear() after a batch of insertions.

Oleg.
--
Oleg Broytman http://phdru.name/ ph...@ph...
Programmers don't die, they just GOSUB without RETURN. |
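[Editor's note] The behaviour Gert describes — reference wrappers piling up until something sweeps them — can be shown with the stdlib alone. This sketch mimics the bookkeeping, not SQLObject's actual CacheSet: a plain dict of weakref.ref objects keeps the dead ref wrappers (not the rows themselves, which explains the modest RES) until they are explicitly purged, which is what a periodic connection.cache.clear() does.

```python
import weakref

class Row(object):
    """Stand-in for a cached SQLObject row."""
    pass

expired_cache = {}

# Populate the cache, then drop all strong references, as a batch of
# short-lived inserts would.
for row_id in range(1000):
    row = Row()
    expired_cache[row_id] = weakref.ref(row)
del row

# The rows are gone, but the dict still holds 1000 dead ref wrappers --
# the kind of growth seen in cache.py's expiredCache.
dead = sum(1 for ref in expired_cache.values() if ref() is None)
print(len(expired_cache), dead)  # 1000 1000

# Oleg's advice in miniature: clear the bookkeeping after each batch.
expired_cache.clear()
print(len(expired_cache))  # 0
```

(The dead-count of 1000 relies on CPython's immediate refcount collection; other interpreters may collect later.)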
From: Gert B. <ger...@gm...> - 2012-09-06 07:42:00
|
Oleg Broytman wrote, On 27/08/2012 10:57:
> Hello!
>
> On Mon, Aug 27, 2012 at 10:32:50AM +0200, Gert Burger <ger...@gm...> wrote:
>> We have an RPC server whose CPU usage increases over time; after a week
>> it will max out one core.
>>
>> From my testing (using pyrasite's dump stack) it seems that all the time
>> is spent in:
>>
>> File "/usr/local/lib/python2.6/dist-packages/SQLObject-1.3.1-py2.6.egg/sqlobject/cache.py",
>> line 263, in allIDs
>>     for id, value in self.expiredCache.items():
>>
>> Currently the overhead for each commit and rollback is around 2-3 seconds.
>>
>> Almost all of our tables have caching disabled via 'sqlmeta: cacheValues = False'.
>
> sqlmeta.cacheValues disables caching of attributes -- with
> cacheValues=False every time a program reads an attribute
> (value = row.column) SQLObject issues a SELECT query.
> There is also row caching -- SQLObject caches rows in the connection
> cache. The code above is related to that cache. To disable the cache
> pass cache=False to the connection constructor or in the DB URI:
> schema://host:port/db?cache=0.

I have this already.

>> No state is kept in memory between requests, and we typically write
>> significantly more rows to the DB than we read, i.e. a lot of writes
>> which are not read often, and a few hundred rows which are read often
>> and can be changed by other processes.
>
> So you need mass insertion. See the related FAQ entry:
> http://sqlobject.org/FAQ.html#how-to-do-mass-insertion
>
> Oleg.

Most of our inserts occur in small batches which won't benefit much from
bypassing SQLO. From my profiling it doesn't seem like that is the problem.

It seems like the expiredCache for each table is growing continuously.
Since my last email the size of the expiredCache for some tables has grown
to over a million entries, and it never seems to decrease.
In total the expired caches have about 4 million entries across all tables,
and current memory usage (150MB RES) doesn't match that number of objects
(of row data).

So my next question is: how do I prevent the expired caches from growing
uncontrollably? For some reason the weak references are staying active
and therefore they are not removed from the caches.

Regards
Gert Burger
|
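Gert's observation — that live weak references keep entries in the cache — can be reproduced with a small stdlib sketch (the names are illustrative and this is not SQLObject's cache code, just the underlying weakref mechanics): any lingering strong reference to a row keeps its cache entry alive, and the entry disappears only once the last strong reference is gone.

```python
import weakref

class Row:
    """Illustrative stand-in for a cached row object."""
    def __init__(self, id):
        self.id = id

cache = {}

def remember(row):
    # WeakValueDictionary-style bookkeeping: the callback removes the
    # entry as soon as the last strong reference to the row goes away.
    cache[row.id] = weakref.ref(row, lambda _ref, id=row.id: cache.pop(id, None))

row = Row(1)
remember(row)
lingering = [row]           # e.g. a result list kept alive somewhere
del row

alive_before = 1 in cache   # True: the lingering reference keeps the entry
lingering.clear()           # drop the last strong reference
alive_after = 1 in cache    # False: the weakref callback removed the entry

print(alive_before, alive_after)
```

(Immediate removal after `lingering.clear()` relies on CPython's reference counting; tracking down what plays the role of `lingering` — a result set, a log, a thread-local — is the debugging task Gert describes.)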
From: Oleg B. <ph...@ph...> - 2012-08-27 09:12:55
|
Hello!

On Mon, Aug 27, 2012 at 10:32:50AM +0200, Gert Burger <ger...@gm...> wrote:
> We have an RPC server whose CPU usage increases over time; after a week
> it will max out one core.
>
> From my testing (using pyrasite's dump stack) it seems that all the time
> is spent in:
>
> File "/usr/local/lib/python2.6/dist-packages/SQLObject-1.3.1-py2.6.egg/sqlobject/cache.py",
> line 263, in allIDs
>     for id, value in self.expiredCache.items():
>
> Currently the overhead for each commit and rollback is around 2-3 seconds.
>
> Almost all of our tables have caching disabled via 'sqlmeta: cacheValues = False'.

   sqlmeta.cacheValues disables caching of attributes -- with
cacheValues=False every time a program reads an attribute
(value = row.column) SQLObject issues a SELECT query.
   There is also row caching -- SQLObject caches rows in the connection
cache. The code above is related to that cache. To disable the cache
pass cache=False to the connection constructor or in the DB URI:
schema://host:port/db?cache=0.

> No state is kept in memory between requests, and we typically write
> significantly more rows to the DB than we read, i.e. a lot of writes
> which are not read often, and a few hundred rows which are read often
> and can be changed by other processes.

   So you need mass insertion. See the related FAQ entry:
http://sqlobject.org/FAQ.html#how-to-do-mass-insertion

Oleg.
--
     Oleg Broytman            http://phdru.name/           ph...@ph...
           Programmers don't die, they just GOSUB without RETURN.
|
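The attribute-level cache Oleg distinguishes here can be modelled with a short stdlib sketch (names mirror the `cacheValues` option but this is an illustration, not SQLObject's code; the separate connection-level row cache, disabled with `?cache=0` in the URI, is not modelled):

```python
class FakeDB:
    """Counts SELECTs so the two behaviours can be compared."""
    def __init__(self):
        self.rows = {1: {"name": "alice"}}
        self.selects = 0

    def select(self, id, col):
        self.selects += 1
        return self.rows[id][col]

class Row:
    """Illustrative row with cacheValues-style attribute caching."""
    def __init__(self, db, id, cacheValues):
        self._db, self._id = db, id
        self._cacheValues = cacheValues
        self._values = {}

    def get(self, col):
        if not self._cacheValues:
            # cacheValues=False: every attribute read hits the database.
            return self._db.select(self._id, col)
        if col not in self._values:
            self._values[col] = self._db.select(self._id, col)
        return self._values[col]

db = FakeDB()
uncached = Row(db, 1, cacheValues=False)
uncached.get("name"); uncached.get("name")
uncached_selects = db.selects        # 2 -- one SELECT per read

db.selects = 0
cached = Row(db, 1, cacheValues=True)
cached.get("name"); cached.get("name")
cached_selects = db.selects          # 1 -- second read served from the value cache

print(uncached_selects, cached_selects)
```

This is why disabling `cacheValues` alone did not help Gert: his hotspot was in the connection's row cache (`expiredCache`), a different layer entirely.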
From: Gert B. <ger...@gm...> - 2012-08-27 08:33:06
|
Hi

We have an RPC server whose CPU usage increases over time; after a week
it will max out one core.

From my testing (using pyrasite's dump stack) it seems that all the time
is spent in:

  File "/usr/lib/pymodules/python2.6/turbogears/database.py", line 275, in commit
    self.threadingLocal.connection.commit()
  File "/usr/local/lib/python2.6/dist-packages/SQLObject-1.3.1-py2.6.egg/sqlobject/dbconnection.py", line 802, in commit
    subCaches = [(sub[0], sub[1].allIDs()) for sub in self.cache.allSubCachesByClassNames().items()]
  File "/usr/local/lib/python2.6/dist-packages/SQLObject-1.3.1-py2.6.egg/sqlobject/cache.py", line 263, in allIDs
    for id, value in self.expiredCache.items():

Currently the overhead for each commit and rollback is around 2-3 seconds.

Almost all of our tables have caching disabled via 'sqlmeta: cacheValues = False'.

No state is kept in memory between requests, and we typically write
significantly more rows to the DB than we read, i.e. a lot of writes
which are not read often, and a few hundred rows which are read often
and can be changed by other processes.

Does anyone have a suggestion on how to mitigate or further debug the problem?

Regards
Gert Burger
|
From: Oleg B. <ph...@ph...> - 2012-06-25 18:50:45
|
Hi!

On Mon, Jun 25, 2012 at 06:49:03AM +0200, Tom Coetser <su...@ic...> wrote:
> On Friday 22 June 2012 21:42:28 Oleg Broytman wrote:
>> Tom, can I see the values for the following configuration parameters:
>>
>> grep 'standard_conforming_strings\|escape_string_warning\|backslash_quote'
>> /etc/postgresql/9.1/main/postgresql.conf
>>
>> Mine are:
>>
>> #backslash_quote = safe_encoding # on, off, or safe_encoding
>> #escape_string_warning = on
>> #standard_conforming_strings = on
>
> I am still running PostgreSQL 8.4:
>
> $ grep 'standard_conforming_strings\|escape_string_warning\|backslash_quote'
> /etc/postgresql/8.4/main/postgresql.conf
>
> #backslash_quote = safe_encoding # on, off, or safe_encoding
> #escape_string_warning = on
> #standard_conforming_strings = off

   Thanks. Everything is default. Ok.

>> I'm trying to understand the difference in our setup...
>
> Could it be the PostgreSQL version?

   I am not sure. The problem seems to be that I need to double backslashes
when I use E'' escaped strings, and that backslash doubling takes place
long before SQLObject passes the string to the backend. Without backslash
doubling, Postgres reports problems with encodings -- all encodings I
tried: latin1, koi8-r, utf-8.
   Without E'' escaped strings -- i.e., with plain ''-quoted strings with
backslashes -- I do not need to double backslashes, and Pg doesn't
generate warnings about E'' escapes.
   It seems in your case backslashes must not be doubled, and E'' escapes
suppress warnings. Yes, it could be a difference in Pg versions.
   So for now my resolution is: I'm removing the patch from SQLObject
because without it SQLObject works fine with different versions of Pg; to
suppress warnings about E'' escapes, try setting escape_string_warning = off
and restarting Postgres. I'm saving the patch to a file for future pondering.

Oleg.
--
     Oleg Broytman            http://phdru.name/           ph...@ph...
           Programmers don't die, they just GOSUB without RETURN.
|
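The two quoting styles under discussion can be sketched in a few lines (illustrative only, not SQLObject's actual escaping code): in `E''` strings, backslashes are always escape characters and must be doubled, whereas in plain `''`-quoted strings with `standard_conforming_strings = on` they are literal.

```python
def quote_pg_string(s, e_prefix):
    """Sketch of the two PostgreSQL quoting styles discussed above.

    e_prefix=True  -> E'' escape-string syntax: backslashes are escapes,
                      so literal backslashes must be doubled.
    e_prefix=False -> standard '' syntax (standard_conforming_strings = on):
                      backslashes are ordinary characters.
    Single quotes are doubled in both styles.
    """
    body = s.replace("'", "''")
    if e_prefix:
        return "E'" + body.replace("\\", "\\\\") + "'"
    return "'" + body + "'"

print(quote_pg_string("C:\\temp", False))   # 'C:\temp'
print(quote_pg_string("C:\\temp", True))    # E'C:\\temp'
```

This is the doubling Oleg describes having to apply before SQLObject hands the string to the backend; on a server where `standard_conforming_strings` is off (as in Tom's 8.4 defaults), plain `''` strings also treat backslashes as escapes, which is one plausible source of the version-dependent behaviour.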