Simon Cross wrote:
> On 6/5/07, jonhattan <jonathan@...> wrote:
>> The memory grows and grows indefinitely. If I switch to Postgres the
>> problem disappears. I've searched a lot about sqlobject+sqlite and
>> threading without finding a solution.
> I imagine that the important difference between Postgres and Sqlite is
> that Sqlite connections are only usable on the thread they're created
> on. So if you're starting 10 000 threads, SQLObject has to create 10
> 000 connections to the SQLite database while in the Postgres case
> threads can share connections.
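To illustrate the thread affinity Simon describes, here is a minimal sketch using Python's standard sqlite3 module directly (not SQLObject itself): by default a connection refuses to be used from any thread other than the one that created it.

```python
import sqlite3
import threading

# By default sqlite3 enforces check_same_thread=True: a connection
# may only be used from the thread that created it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER)")

errors = []

def use_connection():
    try:
        conn.execute("INSERT INTO items VALUES (1)")
    except sqlite3.ProgrammingError as e:
        # "SQLite objects created in a thread can only be used in that
        # same thread" -- so each worker thread needs its own connection
        errors.append(e)

t = threading.Thread(target=use_connection)
t.start()
t.join()
```

This is why a pool of SQLite connections cannot simply be shared among worker threads the way Postgres connections can.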
I figured out that connections are destroyed when the thread finishes, so
that should increase the load on the system, not memory consumption.
Isn't that so?
> It's possible that this alone is causing your problems (tests in our
> work code reliably trigger Sqlite problems with only 20 threads
> writing concurrently).
Actually, I already have a limit of 20 threads:
while threading.activeCount() > 19:
What I see is that threads are not being 'freed' as they finish. Perhaps
the question is 'how can I destroy all references to a SQLObject?'
If I do:
o = mySQLObject(item = xxx)
sys.getrefcount(o) - 1  # the value is 2
del o  # still one reference, so the object is not removed from memory
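That second reference typically comes from a cache holding on to the object. A sketch of the mechanism, using a plain dict as a stand-in (this is not SQLObject's actual cache) and a weakref to observe when the object really dies; note that immediate collection on refcount zero is CPython behavior:

```python
import sys
import weakref

class Record(object):
    # stand-in for a SQLObject instance (hypothetical class)
    pass

cache = {}  # stand-in for an internal identity cache

o = Record()
cache[1] = o          # the cache now holds a second strong reference
ref = weakref.ref(o)  # a weakref does not add a strong reference

assert sys.getrefcount(o) - 1 == 2  # one from 'o', one from the cache
del o
assert ref() is not None  # still alive: the cache keeps it

cache.clear()             # drop the cached reference as well
assert ref() is None      # refcount hit zero; CPython collected it
```

So `del o` alone is not enough; whatever cache SQLObject keeps for the connection must also release the instance before memory is reclaimed.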
> You might also want to check whether turning off caching in SQLObject
> helps at all (it's vaguely possible that the extra SQLite connections
> result in many more objects being cached). See
I don't know if you mean something more than
cacheValues = False
It seems to make no difference.
> If turning off caching doesn't help, I suggest simply rate limiting
> the threads, storing the results in a temporary array and then having
> the main thread do all the writes to sqlite (this shouldn't be any
> slower than having lots of threads write). Something like:
> import time
> from threading import Thread
> from urllib import urlopen
>
> class get_page(Thread):
>     def __init__(self, id, results):
>         Thread.__init__(self)
>         self.id = id
>         self.results = results
>     def run(self):
>         # rate limit: wait while the result buffer is backed up
>         while len(self.results) > 50:
>             time.sleep(0.1)
>         f = urlopen("http://example.com/article/%d" % self.id)
>         self.results.append(f.read())
>
> # main
> tmpdata = []
> done = 0
> for i in xrange(10000):
>     t = get_page(i, tmpdata)
>     t.start()
> while done < 10000:
>     if tmpdata:
>         mySQLObject(item=tmpdata.pop())
>         done += 1
> This code is, of course, completely untested. :)
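For what it's worth, the same single-writer pattern can be sketched with the standard library Queue, which handles the locking and the rate limiting (via maxsize) for you. In this sketch get_page is a dummy fetch standing in for the real urlopen call, and the main-thread write is only a comment:

```python
import threading
import queue  # the module is named "Queue" on Python 2

results = queue.Queue(maxsize=50)  # put() blocks while 50 items are pending
N = 100  # number of pages; 10000 in the original scenario

def get_page(page_id):
    # dummy fetch standing in for
    # urlopen("http://example.com/article/%d" % page_id).read()
    results.put("page %d" % page_id)

threads = [threading.Thread(target=get_page, args=(i,)) for i in range(N)]
for t in threads:
    t.start()

# the main thread is the only writer, draining the queue
written = []
for _ in range(N):
    written.append(results.get())  # stand-in for mySQLObject(item=...)

for t in threads:
    t.join()
```

The maxsize bound means producers block instead of piling results up in memory, which is the rate limiting Simon suggests.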
I'll try implementing it that way and report back.
> I don't see why people are picky about it when the Banach-Tarski
> paradox is clearly a Biblical principle - look at Mark 6:38-44. What,
> you have a different interpretation of the loaves and fishes thing?
> -- Daniel Martin, snowplow.org