From: Paul S. <pau...@ne...> - 2002-03-02 19:48:13

Gavin_King/Cirrus%CI...@ci... wrote:
>> What I was really curious about was what had changed that I now need
>> this, as I'm not doing anything XSL-specific in any of my code. I imagine
>> I've had the wrong JAR files there from day one, but that nothing in my
>> code caused them to be needed and I've only just found out.
>
> The session factory parses the generic XML -> custom XML stylesheet when
> it's instantiated. This is suboptimal, I suppose, since most people aren't
> using the XML generation (it's not even documented yet). You could parse it
> lazily, but then you would need a synchronized block, which I've been
> avoiding as much as possible. It would be okay in this case.

I think we need to document the dependencies a little better, in particular
which versions are required. Even better, which "spec" they need to
implement. I'm trying to deploy in a Turbine / Velocity environment on a
servlet 2.2 compliant app server. I have all kinds of dependencies, and some
of them conflict. Without knowing precisely what's where, it's a *lot* of
just messing around until I find some combo that works, without really being
sure why, short of trudging around in source code. To deploy this in any
kind of production environment you'd expect to know exactly what you're
doing, and we don't really specify things to that level at present.

PaulS :(
From: Doug C. <de...@fl...> - 2002-03-02 19:19:43

>> The database didn't "see" T2 read obj1 since it came from the cache.
>> So the database can't know that T2 and T3 interfere, and T2 must be
>> rolled back.
>
> Yes, I'm aware of this one. That's why we certainly need to check version
> numbers when we update.

But no other transaction wrote obj2, so its version number is OK. Only the
versions of objects it depends on have changed (if indeed they are versioned
at all). Perhaps this is why you say the isolation level will be weakened?

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 08:29:11

> The database didn't "see" T2 read obj1 since it came from the cache.
> So the database can't know that T2 and T3 interfere, and T2 must be
> rolled back.

Yes, I'm aware of this one. That's why we certainly need to check version
numbers when we update.
From: Doug C. <de...@fl...> - 2002-03-02 08:15:10

>> wouldn't know: the database could order these transactions either way.
>> It's an example of why I've been queasy about this cache all along.
>
> I believe my proposal avoids this problem by
> 1. Only caching stuff that came direct from a select. (important)

It does seem safer that way, but...

> 2. Never caching an instance until we know all updating transactions are
> finished

You can never know. The database has a lot of latitude, within the isolation
level model, to reorder things.

> You pointed out the problem that the select could return stale data, but
> it can never return data that was stale as of the start of the transaction.
> If we restrict it to only cache stuff that no one's _begun_ to modify at
> any time _after_ the _start_ of the transaction, it can't possibly add
> stale data to the cache, right?

Well, we could also break transaction isolation in another way:

    T1 starts
    T1 loads obj1 v1
    T1 caches obj1 v1
    T1 completes
    T2 starts                   (obj1 v1)
    T2 reads obj1 v1 from cache
    T3 starts                   (obj1 v1)
    T3 updates obj1 v2
    T3 commits obj1 v2
    T2 updates obj2             (based on obj1 v1)  <<< error
    T2 commits

The database didn't "see" T2 read obj1 since it came from the cache. So the
database can't know that T2 and T3 interfere, and T2 must be rolled back.

> I think it's possible to merge the good points of these proposals. So
> I'm very happy we are having this discussion.

Agreed. I think version columns are the safest route.

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 07:47:59

> If we restrict it to only cache stuff that no one's _begun_ to modify at
> any time _after_ the _start_ of the transaction, it can't possibly add
> stale data to the cache, right?

Uurrrghhhh, that was badly put. There are two rules:

1. if some session has started updating an instance, you can't cache it
2. if some session finished updating the instance _after_ the start of the
   current session, you can't cache it
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 07:37:56

>> transaction 1 updates foo
>> transaction 2 updates foo
>> transaction 1 commits
>> transaction 2 commits
>> transaction 2 grabs its timestamp
>> transaction 1 grabs its timestamp
>
> You wouldn't know (but it's no worse than the alternative, timestamp
> at update time, is it?). In fact, even if you synchronized, you
> wouldn't know: the database could order these transactions either way.
> It's an example of why I've been queasy about this cache all along.

I believe my proposal avoids this problem by:

1. Only caching stuff that came direct from a select. (important)
2. Never caching an instance until we know all updating transactions are
   finished.

You pointed out the problem that the select could return stale data, but it
can never return data that was stale as of the start of the transaction. If
we restrict it to only cache stuff that no one's _begun_ to modify at any
time _after_ the _start_ of the transaction, it can't possibly add stale
data to the cache, right?

> Well, if you are trying to get serializable transactions, you can't
> see any modifications made after you've begun. If we allow reading
> data written after session start, we've entered non-ACID territory.

Cool. That's an improvement on what I had in mind.

I think it's possible to merge the good points of these proposals. So I'm
very happy we are having this discussion.
From: Doug C. <de...@fl...> - 2002-03-02 07:17:12

>> Maintain (potentially) multiple versions of objects in the cache, each
>> timestamped with the commit time of the writing session. The session
>> enters the object into the cache when the session commits. Sessions
>> can also enter objects at load time (on a cache miss) with the
>> timestamp set to the session start time.
>
> Q1.
> The "commit time" would at best be a time "sometime after commit", unless
> we did something awful like synchronizing around the commit(). So how
> would you know which version actually represented the latest version in
> the following (suppose the isolation level is read committed or less):
>
> transaction 1 updates foo
> transaction 2 updates foo
> transaction 1 commits
> transaction 2 commits
> transaction 2 grabs its timestamp
> transaction 1 grabs its timestamp

You wouldn't know (but it's no worse than the alternative, timestamping at
update time, is it?). In fact, even if you synchronized, you wouldn't know:
the database could order these transactions either way. It's an example of
why I've been queasy about this cache all along. [Version info could be used
to force (adjust) the order of the timestamps.]

>> Then any read simply takes (a version of) an object from the cache if
>> the object's timestamp is less than the session start time. Of course,
>> it must take the newest such instance (older than the session).
>
> Q2.
> I think I see part of your reasoning for "less than the session start
> time", but could you please elaborate....

Well, if you are trying to get serializable transactions, you can't see any
modifications made after you've begun. If we allow reading data written
after session start, we've entered non-ACID territory.

> Q3. We still need to do the stale-check, right?

I believe so, both because of Q1, and to protect against people trying to
use this across multiple JVMs.

>> - there is no synchronization, except to maintain the integrity of the
>>   data structure (i.e., no lock counts, etc.)
>
> If by "synchronization" you mean blocking other threads, any proposed
> solution _must_ obey this, as far as I'm concerned.

Yes, I mean both thread synchronization, and cache "bookkeeping" or locking
to redirect loads to the database.

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 06:50:58

> Here's an alternate design...

I have some questions:

> Maintain (potentially) multiple versions of objects in the cache, each
> timestamped with the commit time of the writing session. The session
> enters the object into the cache when the session commits. Sessions
> can also enter objects at load time (on a cache miss) with the
> timestamp set to the session start time.

Q1.
The "commit time" would at best be a time "sometime after commit", unless we
did something awful like synchronizing around the commit(). So how would you
know which version actually represented the latest version in the following
(suppose the isolation level is read committed or less):

    transaction 1 updates foo
    transaction 2 updates foo
    transaction 1 commits
    transaction 2 commits
    transaction 2 grabs its timestamp
    transaction 1 grabs its timestamp

> Then any read simply takes (a version of) an object from the cache if
> the object's timestamp is less than the session start time. Of course,
> it must take the newest such instance (older than the session).

Q2.
I think I see part of your reasoning for "less than the session start time",
but could you please elaborate....

Q3. We still need to do the stale-check, right?

As I said to Doug in a private email, I've gone cold on the idea of doing
stale-checking using anything other than version numbers. The performance
cost is too high.... (I will elaborate on *this* later)

> - there is no synchronization, except to maintain the integrity of the
>   data structure (i.e., no lock counts, etc.)

If by "synchronization" you mean blocking other threads, any proposed
solution _must_ obey this, as far as I'm concerned.
From: Doug C. <de...@fl...> - 2002-03-02 06:29:58

> Aaahhhh, of course. Interesting. So we should use a timestamp from before
> the start of the session?

Here's an alternate design...

Maintain (potentially) multiple versions of objects in the cache, each
timestamped with the commit time of the writing session. The session enters
the object into the cache when the session commits. Sessions can also enter
objects at load time (on a cache miss) with the timestamp set to the session
start time.

Then any read simply takes (a version of) an object from the cache if the
object's timestamp is less than the session start time. Of course, it must
take the newest such instance (older than the session).

A hashbelt or similar technique can be used to delete objects from the cache
which have a newer instance which is itself older than all open sessions.

This cache could be implemented with a Map of SortedSets. The set of open
sessions (to find the oldest) could also be kept in a SortedSet.

It has the advantages:

- cached objects are only ever loaded once (updates are taken directly from
  the writer session)
- a session always gets a cached object once that object is loaded
- nothing is ever removed from the cache except for expired obsolete
  versions, by hashbelt
- there is no synchronization, except to maintain the integrity of the data
  structure (i.e., no lock counts, etc.)

e
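
A minimal sketch of this multi-version design, using a Map of SortedMaps
keyed by timestamp rather than Doug's Map of SortedSets (an equivalent
shape). All class and method names are invented for illustration, and modern
Java collections are used for brevity; this is not Hibernate code:

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.SortedMap;
    import java.util.TreeMap;

    public class MultiVersionCache {

        // id -> (commit or load timestamp -> state written at that time)
        private final Map<Object, SortedMap<Long, Object>> versions = new HashMap<>();

        // Writers enter state at commit time; readers may enter state on a
        // cache miss, timestamped with their session start time.
        public synchronized void put(Object id, long timestamp, Object state) {
            versions.computeIfAbsent(id, k -> new TreeMap<>()).put(timestamp, state);
        }

        // A read takes the newest version strictly older than the session
        // start, so a session never observes state committed after it began.
        public synchronized Object get(Object id, long sessionStartTime) {
            SortedMap<Long, Object> byTime = versions.get(id);
            if (byTime == null) return null;
            SortedMap<Long, Object> older = byTime.headMap(sessionStartTime);
            return older.isEmpty() ? null : older.get(older.lastKey());
        }

        // Expiry pass (a hashbelt would amortize this): a version may be
        // dropped once a newer version exists that is itself older than
        // every open session.
        public synchronized void expire(long oldestOpenSessionStart) {
            for (SortedMap<Long, Object> byTime : versions.values()) {
                while (byTime.size() > 1) {
                    Iterator<Long> it = byTime.keySet().iterator();
                    Long oldest = it.next();
                    if (it.next() < oldestOpenSessionStart) {
                        byTime.remove(oldest);
                    } else {
                        break;
                    }
                }
            }
        }
    }

The synchronized methods exist only to keep the data structure consistent,
matching the last advantage listed above: no lock counts, no blocking of
readers by writers beyond the map operations themselves.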
From: Doug C. <de...@fl...> - 2002-03-02 05:41:22

>> Hmm. The before-select timestamp is the local time? If the database
>> uses versioning (or caching) internally, the select could return stuff
>> as old as the start of the transaction... which could be earlier than
>> after-update even when before-select isn't.
>
> Aaahhhh, of course. Interesting. So we should use a timestamp from before
> the start of the session?

That would be better. I think the best you can do with local timestamps is
to record the commit time of the writer's transaction, and check that it
precedes the begin time of the reader's transaction.

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 05:30:19

> Hmm. The before-select timestamp is the local time? If the database
> uses versioning (or caching) internally, the select could return stuff
> as old as the start of the transaction... which could be earlier than
> after-update even when before-select isn't.

Aaahhhh, of course. Interesting. So we should use a timestamp from before
the start of the session?

PS: I should have mentioned two cases where we _know_ this algorithm breaks:

1. using JTA
2. non-exclusive access to the database

Any *other* problems?
From: Doug C. <de...@fl...> - 2002-03-02 05:21:10

> When Loading Objects
> ====================
> 1. grab the current timestamp (before-select)
> 2. issue SQL selects
> 3. hydrate objects
> 4. disassemble hydrated objects
> synchronized {
>     5. check that 'before-select' is greater than 'after-update' for that
>        instance
>     6. if so, add the disassembled state to the cache
> }

Hmm. The before-select timestamp is the local time? If the database uses
versioning (or caching) internally, the select could return stuff as old as
the start of the transaction... which could be earlier than after-update
even when before-select isn't.

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 05:03:55

> What I was really curious about was what had changed that I now need
> this, as I'm not doing anything XSL-specific in any of my code. I imagine
> I've had the wrong JAR files there from day one, but that nothing in my
> code caused them to be needed and I've only just found out.

The session factory parses the generic XML -> custom XML stylesheet when
it's instantiated. This is suboptimal, I suppose, since most people aren't
using the XML generation (it's not even documented yet). You could parse it
lazily, but then you would need a synchronized block, which I've been
avoiding as much as possible. It would be okay in this case.

I wish double-checked locking weren't broken....
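
A minimal sketch of the lazy alternative being described, assuming a
hypothetical holder class and stylesheet path (this is not the actual
SessionFactory code). Under the pre-Java-5 memory model, double-checked
locking is unreliable, so the whole accessor synchronizes:

    import javax.xml.transform.Templates;
    import javax.xml.transform.TransformerConfigurationException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamSource;

    public class LazyStylesheetHolder {

        private Templates templates; // parsed stylesheet, built on first use

        // Fully synchronized lazy initialization: slower than (broken)
        // double-checked locking, but correct under the old memory model.
        public synchronized Templates getTemplates()
                throws TransformerConfigurationException {
            if (templates == null) {
                // only users of the XML generation ever pay the parse cost
                // ("generic-to-custom.xsl" is an invented file name)
                templates = TransformerFactory.newInstance()
                        .newTemplates(new StreamSource("generic-to-custom.xsl"));
            }
            return templates;
        }
    }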
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 04:59:06

Alright.... I'm going to put my head on the block here. Here's the caching
algorithm:

When Loading Objects
====================
1. grab the current timestamp (before-select)
2. issue SQL selects
3. hydrate objects
4. disassemble hydrated objects
synchronized {
    5. check that 'before-select' is greater than 'after-update' for that
       instance
    6. if so, add the disassembled state to the cache
}

Before Updating or Deleting
===========================
synchronized {
    1. remove cached object from cache
    2. mark instance uncacheable, by incrementing a counter (update-count)
}
3. issue SQL UPDATE or DELETE, with a check for stale state

After Transaction Completion
============================
synchronized {
    1. decrement 'update-count' for the uncacheable instance
    2. if 'update-count' == 0, set 'after-update' to the system time
}

Before A Load by ID
===================
1. look for cached state in the cache
2. assemble object from the cached state
3. call PersistentLifecycle.load(), if necessary

(There is a separate cache per class; the synchronized blocks lock the cache
itself.)

Are there bugs in this??
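
A minimal sketch of the bookkeeping this algorithm implies, one cache per
persistent class. All names are invented for illustration; this is not the
actual code in CVS. The synchronized methods here correspond to the
synchronized blocks in the algorithm above:

    import java.util.HashMap;
    import java.util.Map;

    public class ClassCache {

        private static class Entry {
            Object state;      // disassembled state, or null if not cached
            int updateCount;   // sessions currently updating this instance
            long afterUpdate;  // time the last updating session completed
        }

        private final Map<Object, Entry> entries = new HashMap<>();

        private Entry entry(Object id) {
            Entry e = entries.get(id);
            if (e == null) entries.put(id, e = new Entry());
            return e;
        }

        // Loading, steps 5-6: cache only if no update is in progress and
        // the select began after the last update completed.
        public synchronized void maybeCache(Object id, long beforeSelect,
                                            Object state) {
            Entry e = entry(id);
            if (e.updateCount == 0 && beforeSelect > e.afterUpdate) {
                e.state = state;
            }
        }

        // Before updating or deleting: evict and mark uncacheable.
        public synchronized void beginUpdate(Object id) {
            Entry e = entry(id);
            e.state = null;
            e.updateCount++;
        }

        // After transaction completion: the instance becomes cacheable
        // again once the last updating session finishes.
        public synchronized void endUpdate(Object id) {
            Entry e = entry(id);
            if (--e.updateCount == 0) e.afterUpdate = System.currentTimeMillis();
        }

        // Before a load by id, step 1.
        public synchronized Object getCachedState(Object id) {
            Entry e = entries.get(id);
            return e == null ? null : e.state;
        }
    }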
From: Paul S. <pau...@ne...> - 2002-03-02 04:46:31

Hi Gavin, thanks for the prompt reply, as always.

I have a copy of JDOM and Xalan there, but I don't think they are the right
versions. I really think the versions of the external JARs need to be
specified, or at least the specification version each needs to implement. In
my case I had a copy of Xalan 1.2.1 sitting there that doesn't have the
class mentioned in the exception stack trace, so I imagine it's the wrong
version.

What I was really curious about was what had changed that I now need this,
as I'm not doing anything XSL-specific in any of my code. I imagine I've had
the wrong JAR files there from day one, but that nothing in my code caused
them to be needed and I've only just found out.

Regards, PaulS :)

Gavin_King/Cirrus%CI...@ci... wrote:
>> So what's changed? What is the list of external dependencies? I can't
>> find it anywhere.
>
> You need Xalan, and also JDOM if you want XML generation.
>
> Dependencies are mentioned here:
>
> http://hibernate.sourceforge.net/hibernate.html
>
> (The same document is also in your doc subdirectory.)
>
> We probably should start packaging the jars along with Hibernate (not
> sure of the licensing issues involved in this - I guess Xerces/Xalan are
> BSD license so it's OK).
>
> Somebody else mentioned that we should keep them in CVS also.
From: Doug C. <de...@fl...> - 2002-03-02 04:24:01

> Ah... there's one more element to this. We _copy_ things in and out of
> the cache. Sessions still have their own instances (always). The only
> point of the cache is to eliminate some SQL select statements. I should
> have made this clearer.

Oh. Now I am much more comfortable.

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 04:08:15

> This depends on user code telling the session of its intention to
> modify an object before it's first read. Otherwise, if I have a shared
> copy of something (i.e., from the cache) and decide I want to modify
> it, how can you prevent modification from affecting other holders of
> the object? The write "lock" has to be exclusive, and it can't be if
> there are already readers.

Ah... there's one more element to this. We _copy_ things in and out of the
cache. Sessions still have their own instances (always). The only point of
the cache is to eliminate some SQL select statements. I should have made
this clearer.

> The necessity of write-locking objects before reading them puts a
> significant burden on the user code that didn't exist before. It can
> also be the source of hard-to-locate bugs in application code. It's
> why JDO has JDO-identity and class "enhancement" (mangling).

Yes. I don't want to force applications to do a loadForUpdate().... If this
can't be done transparently, it shouldn't be done at all.

>> sessions which got the object from the cache will need to update it
>> using a stale-checking update (above).
>
> I don't see what problem this solves. I understand optimistic locking,
> and it has its place, but it assumes every transaction has its own
> copy of the object, not a shared copy from the cache.

Exactly. They have a copy of the cached state.

>> Once we know the outcome of all updating
>> sessions, we can mark the instance as cacheable again.
>
> A big synchronization bottleneck.

Not that big. Keep a counter in the cache. At the completion of each
session, decrement the counter; when it hits zero, we can start caching that
instance again. (Have a look at the code in CVS.)

>> This approach avoids redefining the transaction isolation level from how
>> it's defined by the database and still delegates all locking to the DB.
>> Well, actually it would weaken serializable isolation, but I *think* it
>> leaves repeatable read untouched.
>
> Depending on the user code obeying the new write-lock (read with
> intent to modify) rules. In any case it would break serializable
> isolation.

Absolutely, it will weaken serializable (to repeatable read, I *think*). If
it weakens the isolation further than this, I am happy to drop the whole
idea.... (It certainly doesn't weaken read committed.)

But think about the situations where this kind of caching is useful:

1. static data
2. 'timeout data'
3. data which is rarely updated (=> concurrent updates even rarer)

These are not the kinds of scenarios where we need serializable isolation.
Anyway, the cache is configured on a class-by-class basis, so if it would be
a bad thing for a class, don't use it there.

>> Do you see any obvious problems with this approach that makes it a waste
>> of time?
>
> We will probably all learn something!

No doubt.

> Overall, it makes me queasy. I'd stick to (a) cache per session, or
> (b) cache of read-only objects, and an error on any attempt to write
> to one.

I *am* inclined towards agreement here. Nevertheless, I would like to give
the implementation I've come up with a chance to prove itself. I may have a
completely flawed concept here and there's something I'm not seeing. If so,
I'm happy to drop it.

At present everything except collections is cached. It's been possible to do
this without major changes to existing code. (The changes I *have* made are
useful for other reasons and can be defended on that basis.) To cache
collections *will* require changes.

> One extension to (b) is to provide a mechanism to evict something from
> the cache so it can be written, but this means either waiting, or
> failure, if there are any readers, which means you need reference
> counts. It would also mean tracking evicted objects until they are
> recached. All of this sounds an awful lot like locking!

This again assumes we actually share an object instance between sessions. I
doubt very much that it would be possible to do this without totally
rewriting Hibernate. All code assumes that objects are non-shared.

Thanks for your input - it is very needed.

Gavin
From: Doug C. <de...@fl...> - 2002-03-02 02:37:57

> What I have in mind is something *very* moderate. Basically my notion
> of caching is to allow sessions to use state loaded by a previous session
> as long as everyone is only trying to _read_ the instance. As soon as
> some session wants to update it, we mark the instance uncacheable (in the
> cache) and force a return to the "normal" behaviour of Hibernate.

This depends on user code telling the session of its intention to modify an
object before it's first read. Otherwise, if I have a shared copy of
something (i.e., from the cache) and decide I want to modify it, how can you
prevent modification from affecting other holders of the object? The write
"lock" has to be exclusive, and it can't be if there are already readers.

The necessity of write-locking objects before reading them puts a
significant burden on the user code that didn't exist before. It can also be
the source of hard-to-locate bugs in application code. It's why JDO has
JDO-identity and class "enhancement" (mangling).

> sessions which got the object from the cache will need to update it
> using a stale-checking update (above).

I don't see what problem this solves. I understand optimistic locking, and
it has its place, but it assumes every transaction has its own copy of the
object, not a shared copy from the cache.

> Once we know the outcome of all updating
> sessions, we can mark the instance as cacheable again.

A big synchronization bottleneck.

> This approach avoids redefining the transaction isolation level from how
> it's defined by the database and still delegates all locking to the DB.
> Well, actually it would weaken serializable isolation, but I *think* it
> leaves repeatable read untouched.

Depending on the user code obeying the new write-lock (read with intent to
modify) rules. In any case it would break serializable isolation.

> Do you see any obvious problems with this approach that makes it a waste
> of time?

We will probably all learn something!

Overall, it makes me queasy. I'd stick to (a) cache per session, or (b) a
cache of read-only objects, and an error on any attempt to write to one.

One extension to (b) is to provide a mechanism to evict something from the
cache so it can be written, but this means either waiting, or failure, if
there are any readers, which means you need reference counts. It would also
mean tracking evicted objects until they are recached. All of this sounds an
awful lot like locking!

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-02 02:00:36

You are absolutely spot on in most of your points, Doug. But it was
something I did need to suck-and-see....

> I liked your shortcut method of dirty checking, also. It was a big
> performance win, I suspect, and I wonder if you are trading it away
> for a feature (caching) of dubious value for some types of collections.

Yeah, the shortcut method stays (at least as an optimisation).

> I hope you will keep "deep lazy collections" in mind in the design.
> Here are some things to consider...

Yeah, it's probably a bigger performance boost than caching for some
applications.

> Where are cloning and diffing necessary?
> Is it possible for an application to avoid it (e.g., by not assigning
> a collection to more than one persistent property, and/or by not
> caching the collection)?

I will need to keep working through this to really answer the question. But
what I know already is that fully diffing the object graphs is not going to
solve the problems I thought it would solve, so it's off the table for now.
After playing around with the code for a while, I've already realised that
radical changes won't be the answer.

What I *am* trying to do is implement stale checking for non-versioned
objects. For versioned objects, what you do is this:

    UPDATE some_table SET foo='foo' WHERE id=20002 AND version=69

For non-versioned objects, it would have to be:

    UPDATE some_table SET foo='foo' WHERE id=20002 AND foo='old value'

This is useful for more than caching. It's also useful for optimistic
locking, which is something I would like to provide better support for in
the future. Interestingly, it seems to be a *huge* negative performance hit.

> I have watched other object database projects founder on caching
> implementation problems. There is *no* easy way to do it. In order to
> support multiple transactions you *must* have multiple versions of
> objects materialized. ODMG pays lip service to this; JDO defines
> various kinds of object equality to support it.

Yeah, I have said earlier that one of the main aims of the project is to
*not* implement a database in RAM on top of the database on disk. That would
be the path to endless concurrency bugs, non-scalability and low performance.

> One of the reasons I was drawn to Hibernate was its simplicity, and
> its dependence on the JDBC and database layers for the capabilities
> they provide, such as transactions. JDBC and the database also (can
> and sometimes do) provide caching. I liked the fact that Hibernate
> "bit the bullet" and supported materialization of an object
> independently in several transactions (sessions). It seems to me that
> there are only two caching strategies consistent with this: (a) let
> the database do it, and/or (b) a separate cache per session.

What I have in mind is something *very* moderate. Basically my notion of
caching is to allow sessions to use state loaded by a previous session as
long as everyone is only trying to _read_ the instance. As soon as some
session wants to update it, we mark the instance uncacheable (in the cache)
and force a return to the "normal" behaviour of Hibernate. Sessions which
got the object from the cache will need to update it using a stale-checking
update (above). Once we know the outcome of all updating sessions, we can
mark the instance as cacheable again.

This approach avoids redefining the transaction isolation level from how
it's defined by the database, and still delegates all locking to the DB.
Well, actually it would weaken serializable isolation, but I *think* it
leaves repeatable read untouched.

Do you see any obvious problems with this approach that makes it a waste of
time?
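
A minimal JDBC sketch of the versioned stale-checking update (table and
column names follow the example above; the class and method names are
invented). Zero rows matched means some other transaction got there first:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class StaleCheckSketch {

        // The WHERE clause only matches if no other transaction has bumped
        // the version since we read the row; on success we bump it ourselves.
        static void updateWithVersionCheck(Connection con, long id,
                int expectedVersion, String newFoo) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                    "UPDATE some_table SET foo=?, version=? " +
                    "WHERE id=? AND version=?");
            try {
                ps.setString(1, newFoo);
                ps.setInt(2, expectedVersion + 1);
                ps.setLong(3, id);
                ps.setInt(4, expectedVersion);
                if (ps.executeUpdate() == 0) {
                    // our copy of the row is stale
                    throw new SQLException("stale state for id " + id);
                }
            } finally {
                ps.close();
            }
        }
    }

The non-versioned variant would bind the old column values in the WHERE
clause instead, which is where the performance hit Gavin mentions comes
from: every column must be compared on every update.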
From: Doug C. <de...@fl...> - 2002-03-01 18:17:29

> Well, I've decided I'm going to have to "refactor" the collection code.
> It turns out that we really *do* need to be able to do things like clone
> the entire state of a collection + fully diff the state of two
> collections. I liked my shortcut method of dirty checking for a while,
> but now it's becoming a real hassle to do other things where I really
> need to capture the whole state (e.g. caching). The codebase is now full
> of ugly workarounds for collections because they don't have this ability.

I liked your shortcut method of dirty checking, also. It was a big
performance win, I suspect, and I wonder if you are trading it away for a
feature (caching) of dubious value for some types of collections.

I hope you will keep "deep lazy collections" in mind in the design. Here are
some things to consider...

Where are cloning and diffing necessary? Is it possible for an application
to avoid them (e.g., by not assigning a collection to more than one
persistent property, and/or by not caching the collection)? Can a deep lazy
collection use other techniques to avoid fully materializing the collection,
e.g., version numbers?

> This means there will be no new features or bugfixes from me until this
> job is finished. Cross-transaction caching is sort of implemented, but
> I'm sure there are problems there.

I have watched other object database projects founder on caching
implementation problems. There is *no* easy way to do it. In order to
support multiple transactions you *must* have multiple versions of objects
materialized. ODMG pays lip service to this; JDO defines various kinds of
object equality to support it.

One of the reasons I was drawn to Hibernate was its simplicity, and its
dependence on the JDBC and database layers for the capabilities they
provide, such as transactions. JDBC and the database also (can and sometimes
do) provide caching. I liked the fact that Hibernate "bit the bullet" and
supported materialization of an object independently in several transactions
(sessions). It seems to me that there are only two caching strategies
consistent with this: (a) let the database do it, and/or (b) a separate
cache per session.

> What I will do is add a method clone(Object x) to the Type interface.
> Then all mutable types will be expected to implement this. (immutable
> types can just return x).

Hopefully, clone() can do a shallow clone for deep persistent collections?
(e.g., using "allocate on write") I guess this depends on what clone() is
being used for.

> I will change the implementation of equals() for collections to diff the
> object graph instead of doing ==.

Of course, == implies equals(). Then there's JDO equals(). But I assume this
is for dirty checking?

e
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-01 12:36:09

> Then I can simplify a heap of code in RelationalDatabaseSession and just
> generally look a helluva lot more elegant.

I mean the code ... not me ....
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-01 12:32:39

Well, I've decided I'm going to have to "refactor" the collection code. It
turns out that we really *do* need to be able to do things like clone the
entire state of a collection + fully diff the state of two collections. I
liked my shortcut method of dirty checking for a while, but now it's
becoming a real hassle to do other things where I really need to capture
the whole state (e.g. caching). The codebase is now full of ugly workarounds
for collections because they don't have this ability.

This means there will be no new features or bugfixes from me until this job
is finished. Cross-transaction caching is sort of implemented, but I'm sure
there are problems there.

What I will do is add a method clone(Object x) to the Type interface. All
mutable types will then be expected to implement this (immutable types can
just return x). I will change the implementation of equals() for collections
to diff the object graph instead of doing ==.

Then I can simplify a heap of code in RelationalDatabaseSession and just
generally look a helluva lot more elegant.
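
A minimal sketch of what the proposed change could look like. The real Type
interface has many more methods, and the two implementing classes here are
invented purely for illustration:

    import java.util.ArrayList;
    import java.util.List;

    // Simplified: only the proposed method is shown.
    interface Type {
        // Mutable types return a deep copy of x; immutable types may
        // simply return x itself.
        Object clone(Object x);
    }

    // An immutable type never needs copying.
    class StringType implements Type {
        public Object clone(Object x) {
            return x;
        }
    }

    // A mutable collection type snapshots its full state, so that two
    // snapshots can later be diffed for dirty checking.
    class ListType implements Type {
        public Object clone(Object x) {
            return x == null ? null : new ArrayList<Object>((List<?>) x);
        }
    }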
From: Son To <son...@ya...> - 2002-03-01 09:40:27

Or use JDK 1.4.0.

--- Gavin_King/Cirrus%CI...@ci... wrote:
>> So what's changed? What is the list of external dependencies? I can't
>> find it anywhere.
>
> You need Xalan, and also JDOM if you want XML generation.
>
> Dependencies are mentioned here:
>
> http://hibernate.sourceforge.net/hibernate.html
>
> (The same document is also in your doc subdirectory.)
>
> We probably should start packaging the jars along with Hibernate (not
> sure of the licensing issues involved in this - I guess Xerces/Xalan are
> BSD license so it's OK).
>
> Somebody else mentioned that we should keep them in CVS also.
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-03-01 09:05:00

> So what's changed? What is the list of external dependencies? I can't
> find it anywhere.

You need Xalan, and also JDOM if you want XML generation.

Dependencies are mentioned here:

http://hibernate.sourceforge.net/hibernate.html

(The same document is also in your doc subdirectory.)

We probably should start packaging the jars along with Hibernate (not sure
of the licensing issues involved in this - I guess Xerces/Xalan are BSD
license so it's OK).

Somebody else mentioned that we should keep them in CVS also.
From: Paul S. <pau...@ne...> - 2002-03-01 08:52:31

Hi,

I get an exception when trying to use 0.9.6 or 0.9.7 (moving up from 0.9.5):

    javax.xml.transform.TransformerFactoryConfigurationError:
    java.lang.ClassNotFoundException: org.apache.xalan.processor.TransformerFactoryImpl
        at javax.xml.transform.TransformerFactory.newInstance(TransformerFactory.java:121)
        at cirrus.hibernate.impl.RelationalDatabaseSessionFactory.<init>(RelationalDatabaseSessionFactory.java:90)
        at cirrus.hibernate.impl.RelationalDatastore.buildSessionFactory(RelationalDatastore.java:56)
        at com.nebulon.dm.hibernate.HibernateFactory.init(HibernateFactory.java:97)

So what's changed? What is the list of external dependencies? I can't find
it anywhere.

PaulS :(