On Mon, 10 Jan 2005 17:35:28 -0600, Ian Bicking <ianb@...> wrote:
> Martin Blais wrote:
> >>.sync and .expire are meant to be used to ensure consistency. This
> >>doesn't allow for historical objects, e.g., an object that represented
> >>the database the-way-it-was. But I'm guessing that's not actually what
> >>you guys are trying to do...?
> > the current situation is (as far as i understand it): because of the
> > cache, using sqlobject with multiple processes will currently fail if
> > you expect consistency (i.e. you have to be aware that looking up an
> > object will possibly return an obsolete object). this is OK, this is
> > a property of caching. thus i disabled object-level caching.
> > the problem is, however, that disabling the cache does not work. the
> > disabled cache will serve stuff from its expired cache, and newly
> > looked up objects will be added in there as expired!?! (the cache
> > object system is always used, even when caching is disabled). i do
> > not understand why the cache looks up stuff from the expired cache
> > (code: file cache.py, lines 93-110 (the else part).
> The whole situation is complicated. To fully turn caching off and
> ensure complete cache consistency, you'd also have to turn off
> _cacheValues (which caches column values) and you'd kill performance.
> Or (with a small API addition) you could do that and change your
> programming style, and get OK performance.
Hmm... looking at the code, I am under the impression that _cacheValues
and object-level caching are orthogonal. What I need is to disable
object-level caching, but I do want value caching.
> What you want to do is *periodically* ensure consistency with the
> database. Right now there are some ways to do that, mostly with .sync()
> and .expire(). You are trying to do this with the cache, so that
> everytime you fetch an object from the database you get a new object
> with new values from the database. I don't think that's bad, per se
> (except insofar as you can't do it with SQLObject ;), but I'm not sure
> if it has any advantages over expiring all the instances? It also
> ensures internal consistency, since there will never be two objects that
> represent the same row, but have different sets of cached data.
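The expire/sync pattern described above can be sketched with toy stand-ins (the `Row` class and its backing `DATABASE` dict below are purely illustrative, not real SQLObject code):

```python
# Toy illustration of the expire()/sync() idea: cached column values
# are dropped on expire() and re-read from the database on next access.
# Every name here is a hypothetical stand-in, not SQLObject's API.

DATABASE = {7: {"name": "old"}}  # pretend table keyed by id

class Row:
    def __init__(self, id):
        self.id = id
        self._values = None  # cached column values (like _cacheValues)

    def expire(self):
        """Forget cached values; next access re-reads the database."""
        self._values = None

    def sync(self):
        """Immediately re-read this row's values from the database."""
        self._values = dict(DATABASE[self.id])

    @property
    def name(self):
        if self._values is None:  # expired or never loaded
            self.sync()
        return self._values["name"]

row = Row(7)
assert row.name == "old"

# Another process updates the database behind our back:
DATABASE[7]["name"] = "new"
assert row.name == "old"   # stale: cached values are still served

row.expire()               # or row.sync()
assert row.name == "new"   # fresh values after expiring
```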
OK, so if I understand correctly, you're recommending something like this::

    o = MyObj(id=7)
    # use o ...
    o = MyObj(id=7)  # fetch again to pick up fresh values
    # use o ...
I think that if I forget to call the required methods I will end up
with hard-to-find bugs. I would rather clear all the objects every
time I begin processing a request (this is an application that gets
called from mod_python/Apache); that makes more sense.
So I suppose I should call cache.clear() before every request; that
way, WITHIN a request I get object-level caching, which is OK.
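The per-request clearing idea can be sketched like this (the `_cache` dict, `MyObj`, and `handle_request` are invented for illustration; the real call would go through the connection's cache as discussed above):

```python
# Toy sketch of per-request cache clearing: within a request, looking up
# the same id twice returns the same object; clearing between requests
# guarantees each request starts from fresh database state.
# All names here are hypothetical stand-ins, not SQLObject's API.

_cache = {}  # id -> object: the object-level cache

class MyObj:
    def __new__(cls, id):
        if id in _cache:           # served from the cache within a request
            return _cache[id]
        obj = super().__new__(cls)
        obj.id = id
        _cache[id] = obj
        return obj

def handle_request():
    _cache.clear()                 # start each request with an empty cache
    a = MyObj(id=7)
    b = MyObj(id=7)
    assert a is b                  # object-level caching WITHIN the request
    return a

first = handle_request()
second = handle_request()
assert first is not second         # no stale objects ACROSS requests
```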
However, expireAll() doesn't help: if I understand correctly, expired
objects are still cached, just held by weak references instead of
strong ones. That still doesn't explain why expired objects are being
served by the cache when object-level caching is disabled (I really
think there is a bug there).
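The weak-reference behaviour described above can be demonstrated in plain Python (a WeakValueDictionary stands in for the expired cache; this is not SQLObject code):

```python
import weakref
import gc

# Expired objects move from a strong-reference dict to a weakref dict,
# so the cache can still serve them as long as anything else keeps them
# alive. The two dicts below are stand-ins for the real cache internals.

strong = {}                              # live cache: keeps objects alive
expired = weakref.WeakValueDictionary()  # "expired" cache: weak refs only

class Obj:
    pass

o = Obj()
strong[7] = o

# Expire: demote the entry to the weakref cache.
expired[7] = strong.pop(7)

# The object is still served from the expired cache while 'o' exists:
assert expired.get(7) is o

# Drop the last strong reference and the entry disappears:
del o
gc.collect()  # CPython frees it immediately; collect() for safety
assert expired.get(7) is None
```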
> Some degree of caching is just necessary for an ORM like SQLObject,
> since the caching makes up for the lack of joins. I.e., since you can't
> fetch data from several tables at a time, the caching at least makes the
> alternative operation (fetching several objects) fairly quick in a large
> number of cases.
> > (i think that such a special "cache" deserves a comment in the code by
> > whoever wrote it. also, the sequences of acquire/release of the locks
> > assumes a particular call pattern from the caller, which is waaay away
> > in the code, and that isn't documented either.)
> Hmm... the comment isn't really on the right side of those two cases, is
> it? I'll try to put some more docstrings in there.
Docstrings in the cache would be great, thanks!
If you have a minute, please tell me why the cache serves expired
objects when self.doCache is false...