From: Gavin K. <ga...@ap...> - 2002-09-11 07:49:09
> It appears to me that the onUpdate() method is not getting called when it should.

onUpdate() gets called only when you explicitly pass a transient object to Session.update(), i.e. it's called when a transient object becomes "sessional". It's not called when a dirty object is UPDATEd on the database. I'm not sure if that is the source of some confusion, or if there is an actual bug. I know it seems to make sense to have a callback just before changes are persisted, and I spent a lot of time thinking about that. However, I decided that this was actually not a good idea for a number of reasons (I will describe them if you want, but my girlie wants to use the computer now so I don't have time...)

> Before I go digging around and trying to fix this, I wanted to make sure no one else was working on it.

Yeah, could you please check that it's working as I described? There is a test for this, but tests are not infallible.

> PS - Is anyone working on adding outer join support for Oracle? If not, then I'd like to volunteer to do this.

Yes, please. No-one is working on this AFAIK and I'm not planning to implement it myself. I'm playing around with a little ODMG-compliant API adaptor at the moment. Much less powerful API than Hibernate, because ODMG was designed as a lowest-common-denominator thing. But it is an established standard. Doesn't hurt to support it as an alternative.
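The rule Gavin describes — onUpdate() fires when update() makes a transient object sessional, not when a dirty, already-sessional object is flushed — can be illustrated with a small mock. This is not Hibernate code; the class and method names below are hypothetical, purely to make the callback timing concrete:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical mock illustrating the callback rule described above:
// onUpdate() fires when a *transient* object is handed to update(),
// not when a dirty, already-sessional object is flushed to the db.
public class LifecycleSketch {

    interface Lifecycle { void onUpdate(); }

    static class MockSession {
        final Set<Object> sessional = new HashSet<>();
        final List<String> log = new ArrayList<>();

        void update(Object o) {
            if (!sessional.contains(o)) {        // transient -> sessional
                sessional.add(o);
                if (o instanceof Lifecycle) {
                    ((Lifecycle) o).onUpdate();  // callback fires here only
                }
            }
        }

        void flush() {
            // dirty objects are UPDATEd in the db here, but no
            // onUpdate() callback is invoked for them
            for (Object o : sessional) log.add("UPDATE " + o);
        }
    }

    public static void main(String[] args) {
        MockSession s = new MockSession();
        Lifecycle entity = () -> System.out.println("onUpdate called");
        s.update(entity);   // transient object: callback fires
        s.flush();          // issues the UPDATE, no callback
        s.update(entity);   // already sessional: no callback
    }
}
```

Under this reading, Jon's test (below) would see onUpdate only if he passed a fresh, never-saved object to update(), which matches "the callbacks get called for everything except onUpdate".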
From: Jon L. <jon...@xe...> - 2002-09-11 07:35:19
Hi Gavin (and all the rest), It appears to me that the onUpdate() method is not getting called when it should. I have an object that implements the Lifecycle interface, and I have run a simple test with it where I create an object, load an object, update an object, and delete an object. Just for testing I added "System.out" messages in each of the callbacks. The callbacks get called for everything except for "onUpdate". Has anyone else noticed this problem? Before I go digging around and trying to fix this, I wanted to make sure no one else was working on it. Jon... PS - Is anyone working on adding outer join support for Oracle? If not, then I'd like to volunteer to do this.
From: Gavin K. <ga...@ap...> - 2002-09-11 07:25:40
Anyone who wants to try out Sebastien Guimont's Hibernate XDoclet extension can get it from CVS in the new "Tools" module. Many thanks to Sebastien for this :) Enjoy!
From: Gavin K. <ga...@ap...> - 2002-09-10 10:33:53
How exactly are you meant to obtain a JNDI EventContext? Can you just cast the InitialContext to EventContext?
From: Daniel M. <da...@sp...> - 2002-09-10 02:05:52
The summary of the situation is that I think (thought) that Session.lockAll(Iterator) would be a useful addition to Hibernate. It would mean one trip to the db for the whole set of objects locked rather than one per object, and all objects would be locked atomically, (maybe) reducing deadlocks. The forum thread is at: https://sourceforge.net/forum/forum.php?thread_id=729431&forum_id=128638

> > Personally I'd rather introduce a lockAll(Iterator) method,
>
> Yes, you are perfectly correct, of course. I'm just feeling a bit queasy about adding a method to the session that takes an iterator when every other method takes a single object or a query.....

Agreed, it is a bit of an eyesore.

> Alright, I'll 'fess up. I don't really think people should be needing to think about locking for themselves. I added the existing functionality because users requested it and I sort of needed an answer to "can Hibernate do pessimistic locking?". However, it's not an approach I would personally use in very many situations. I'm a little worried that if we clutter the Session interface with too many methods dealing with concurrency, then people will think they need to manage concurrency themselves, whereas in most cases they don't.

Agreed. It's great not having to do anything special 90% of the time. What chance of factoring out a ConcurrencyManager retrievable from Session for all the 'black-art' methods? At least this way there is only one point to document with 'advanced use only'. I think there are probably more methods which could be added to this interface: checking object versions without locking, adding objects which need version checks at commit time, maybe others. However I'm mindful of the fact that these wouldn't be useful for most situations and could be classified as bloat. Also, if you want to remain database neutral, some of these things, including locking, are worth diddly-squat as you have to cater to the lowest common denominator.

> But if you want to implement a lock(Iterator), I'm happy to include that

Given that I've decided to try and remain db neutral, I'll implement the l.c.d. solution first. I'll have a go at it after that. Maybe the locking won't even be that much help. And maybe I'll find some other things that would fit into a ConcurrencyManager interface.

> (its only *one* extra method, after all).

Famous last words? :)

> I don't think we need to call it lockAll(), given Java's rules for resolving overloaded methods.

This would be determined by static type info though. I can't imagine anyone writing Object o = set.iterator(); but there might be more imaginative people out there. Peace of mind for three letters. Cheers Dan

> :)
>
> Gavin
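The one-round-trip idea is straightforward to sketch. Nothing below is Hibernate API — the class and method names are hypothetical, just illustrating how a lockAll(Iterator) might batch N pessimistic locks into a single lowest-common-denominator SELECT ... FOR UPDATE instead of one statement per object:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: batch pessimistic locks into one statement
// instead of issuing one "SELECT ... FOR UPDATE" per object.
public class LockAllSketch {

    // Builds a single locking query for a whole batch of ids
    // (one db round trip, atomic lock acquisition).
    static String buildLockAllSql(String table, String idColumn, int batchSize) {
        StringBuilder sql = new StringBuilder("SELECT ")
            .append(idColumn).append(" FROM ").append(table)
            .append(" WHERE ").append(idColumn).append(" IN (");
        for (int i = 0; i < batchSize; i++) {
            if (i > 0) sql.append(", ");
            sql.append('?');
        }
        return sql.append(") FOR UPDATE").toString();
    }

    public static void main(String[] args) {
        List<Long> ids = Arrays.asList(1L, 2L, 3L);
        Iterator<Long> it = ids.iterator();
        int n = 0;
        while (it.hasNext()) { it.next(); n++; }
        // one statement for the whole iterator, not n statements
        System.out.println(buildLockAllSql("Foo", "fooId", n));
        // -> SELECT fooId FROM Foo WHERE fooId IN (?, ?, ?) FOR UPDATE
    }
}
```

As the thread notes, staying database neutral means settling for whatever row-locking syntax the lowest common denominator supports; the IN-list form above is only one possibility.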
From: Gavin K. <ga...@ap...> - 2002-09-08 17:01:01
>> 2. Is this a good idea performance wise, and design wise.
>
> performance is not really the issue.

Actually, let me rephrase that. Performance is 100% the issue, in the sense that (in general) we can't tell if an object becomes dereferenced without going to the database. Under certain restrictive circumstances (for example, the restrictions we place upon collections) you might be able to tell. But Hibernate entities embody a far richer model than Hibernate collections.....
From: Gavin K. <ga...@ap...> - 2002-09-08 16:12:10
> 1. Is it feasible?

If detecting that an object had been *dereferenced* were a straightforward problem, then we could do something much better than a callback. The problem is that detection of unreferenced objects is a hard problem. On the other hand, if what you mean by "dereference" is just "removal from a collection", well, that would probably be trivial to implement. However, it seems of fairly minimal value, since the application already knows it just removed an object from a collection.

> 2. Is this a good idea performance wise, and design wise.

Performance is not really the issue. Defining exactly what the semantics of "dereferenced" are would be the real problem.
From: Gavin K. <ga...@ap...> - 2002-09-08 16:04:27
I wasn't following this thread very closely but I did sort of manage to pick up on the main ideas. Does someone want to write some kind of little informal summary of this discussion so I can add it to the documentation? TIA Gavin
From: Urberg, J. <ju...@ve...> - 2002-09-05 12:46:31
I like the idea of making it more consistent. That will make it easier to use. I personally would vote for defaulting cascade-to-save to "null", since I don't use primitives as ids. It might be worth finding out how many folks use primitives vs. objects as ids before choosing a default. Regards, John

-----Original Message-----
From: Gavin King [mailto:ga...@ap...]
Sent: Wednesday, September 04, 2002 9:11 PM
To: hibernate list
Subject: [Hibernate] Re: New functionality in CVS
From: Gavin K. <ga...@ap...> - 2002-09-05 12:25:01
Well, I suppose this would work:

<class name="FooBarProxy" table="FooBar">
    <id name="fooBarId" />
    <property name="bar" />
    <set role="baz" table="Baz" lazy="true" cascade="all">
        <key column="fooBarId" />
        <one-to-many class="FooBazProxy" /> <!-- changed -->
    </set>
</class>

<class name="FooBazProxy" table="FooBaz">
    <id name="fooBazId" />
    <one-to-one class="FooBaz"/> <!-- added -->
    <property name="baz" />
</class>

but it would impact your Java class... You won't have this problem once we implement normalized table mappings.
From: Mark W. <mor...@SM...> - 2002-09-05 10:35:51
Hi all... I'm having a little problem trying to map my objects. My mapping looks like this:

<class name="Foo" table="Foo" discriminator-value="0">
    <id name="fooId" />
    <property name="name" />
    <subclass name="FooBar" discriminator-value="1">
        <one-to-one name="proxy" class="FooBarProxy" constrained="false" cascade="all" />
    </subclass>
    <subclass name="FooBaz" discriminator-value="2">
        <one-to-many name="proxy" class="FooBazProxy" constrained="false" cascade="all" />
    </subclass>
</class>

<class name="FooBarProxy" table="FooBar">
    <id name="fooBarId" />
    <property name="bar" />
    <set role="baz" table="Baz" lazy="true" cascade="all">
        <key column="fooBarId" />
        <one-to-many class="FooBaz" />
    </set>
</class>

<class name="FooBazProxy" table="FooBaz">
    <id name="fooBazId" />
    <property name="baz" />
</class>

The db looks like this:

create table Foo (
    fooId bigint not null primary key,
    discriminator bigint not null,
    name varchar(256)
)

create table FooBar (
    fooBarId bigint not null primary key,
    name varchar(256)
)

create table FooBaz (
    fooBazId bigint not null primary key,
    name varchar(256),
    fooBarId bigint
)

This is what I'm trying to do:

FooBar fb = new FooBar();
Set set = new HashSet();
set.add(new FooBaz());
fb.setBaz(set);
session.save(fb);

FooBar and FooBaz are both subclasses of Foo. Here's the error log:

Hibernate: insert into Foo ( name, fooId, discriminator ) values ( ?, ?, 1 )
Hibernate: insert into FooBar ( bar, fooBarId ) values ( ?, ? )
Hibernate: insert into Foo ( name, fooId, discriminator ) values ( ?, ?, 2 )
Hibernate: insert into FooBaz ( baz, fooBazId ) values ( ?, ? )
Hibernate: update Foo set fooBarId = ? where fooId = ?
Sep 5, 2002 2:10:56 AM cirrus.hibernate.helpers.JDBCExceptionReporter logExceptions
WARNING: SQL Error: 904, SQLState: 42000
Sep 5, 2002 2:10:56 AM cirrus.hibernate.helpers.JDBCExceptionReporter logExceptions
SEVERE: ORA-00904: "FOOBARID": invalid identifier

I understand why Hibernate tries to set fooBarId in table Foo instead of table FooBaz. What I want to know is if there is another way I can define the mapping and/or the classes to achieve what I want to do. I guess I could use a many-to-many association instead, but I'd really rather not introduce another table if it's at all avoidable. Does anyone have any suggestions? Thanks, -Mark
From: Christian M. <vc...@cl...> - 2002-09-05 03:26:01
I vote for this, it will be so much cleaner that way and, as you pointed out, fixing current code is only a matter of editing metadata files. Regards Christian Meunier

----- Original Message -----
From: "Gavin King" <ga...@ap...>
To: "hibernate list" <hib...@li...>
Sent: Thursday, September 05, 2002 4:10 AM
Subject: [Hibernate] Re: New functionality in CVS
From: Gavin K. <ga...@ap...> - 2002-09-05 02:30:55
> I dont understand why in a read only cache ( what Christoph is trying to achieve) you need a transaction aware distributed caching.

For a read-only cache, we can just use a JCS distributed cache.

P.S. In my discussions of LockServers, I assumed that everyone would realise that the LockServer doesn't become a central point of failure. In case some people didn't realise: well, it doesn't. Failure of the LockServer would mean that servers were no longer able to access the cache. It would not stop them reading data directly from the database. The "lock" doesn't represent a lock upon a data item. It represents a lock upon the *cached* data item. No transaction may ever block another transaction inside the persistence layer. The database itself manages concurrency issues.
From: Gavin K. <ga...@ap...> - 2002-09-05 02:12:47
> (3) After fielding so much user confusion over the semantics of cascade="all", I found a way to extend the functionality consistent with the current semantics and (hopefully) without any risk of breaking existing code. I will need some feedback about the details here, though:
>
> There are still some remaining wrinkles here:
> * objects with primitive ids can't have null ids.
> * the treatment of a transient object with a non-null id is different between save() and flush().

Okay, how's this for an approach to unifying the semantics of cascade and, at the same time, adding some extra flexibility:

* No cascades ever occur from an update(), as is the case in CVS now.

* Cascades that occur from a save() and cascades that occur from a flush() should behave *exactly* the same as far as the user is concerned (there is a very slight difference in implementation because of the foreign key constraint issue).

* The decision of when to save() a child and when to update() it is made by the user in the mapping file. The user would specify an id value that indicates "save" in the id tag (or somewhere), eg.

<id name="id" type="string" cascade-to-save="null">
    <generator class="uuid.hex"/>
</id>

<id name="id" type="string" cascade-to-save="any">
    <generator class="assigned"/>
</id>

<id name="id" type="long" cascade-to-save="-1">
    <generator class="native"/>
</id>

The allowed values of the cascade-to-save attribute would be:

"null" (cascade to save when id is null, cascade to update otherwise)
"any" (always cascade to save)
a value (cascade to save when id has that value, update otherwise)
"none" (always cascade to update)

Two choices: if not specified in the mapping file, we would default to "any". This change would break some existing systems that are using update() to save() newly instantiated children. However, it would be a very simple mapping file change to make those systems work again (add cascade-to-save="null").

How does everyone feel about this? I am aware that this is a fairly significant change, so if anyone thinks I'm going off the deep end with this, please speak up. My justification for all this is:

* it feels more like an extension of existing functionality rather than a complete discontinuity.
* to the extent that it is discontinuous, it is a normalization of existing semantics.
* this is in response to user feedback.

The end result of these changes should be, as far as the user is concerned: if you want a child object to be persisted, you simply reference it from the parent. You don't need to worry anymore about which circumstances result in an implicit save(), an implicit update() or nothing. On the other hand, if you want a child to be deleted, you either delete the parent while it holds a reference to the child OR you delete the child itself.

My view is that's a big improvement as far as understandability is concerned. The fact that I could express the expected behaviour in one paragraph is evidence for that assertion.

Opinions?

Gavin
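The proposed decision table is compact enough to pin down in code. This is a hedged sketch, not Hibernate code — the class, enum, and method below are hypothetical, just encoding the four cascade-to-save values described in the proposal:

```java
import java.io.Serializable;

// Hypothetical sketch of the proposed cascade-to-save decision:
// given the mapping attribute and a child's id, does a cascade
// turn into a save() or an update()?
public class CascadeToSave {

    enum Action { SAVE, UPDATE }

    // attr is the mapping-file value: "null", "any", "none",
    // or a literal "unsaved" id value such as "-1".
    static Action decide(String attr, Serializable id) {
        switch (attr) {
            case "any":  return Action.SAVE;    // always cascade to save
            case "none": return Action.UPDATE;  // always cascade to update
            case "null": return id == null ? Action.SAVE : Action.UPDATE;
            default:     // literal value: save when the id matches it
                return attr.equals(String.valueOf(id)) ? Action.SAVE : Action.UPDATE;
        }
    }

    public static void main(String[] args) {
        System.out.println(decide("null", null));  // SAVE (unsaved child)
        System.out.println(decide("null", 42L));   // UPDATE (already has an id)
        System.out.println(decide("-1", -1L));     // SAVE (matches unsaved value)
        System.out.println(decide("any", 42L));    // SAVE (always)
    }
}
```

The "-1" branch is what makes the scheme work for primitive ids, which can never be null — the wrinkle the proposal calls out.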
From: Christian M. <vc...@cl...> - 2002-09-04 16:13:16
How you described the cache behaviour you wanted ("I want each node to have its own cache, but if one node writes to its db the data should be invalidated in all caches") led me to think we were talking about a read-only cache ;) Now it's clear ;) Thanks Chris

----- Original Message -----
From: "Christoph Sturm" <ch...@mc...>
To: "Christian Meunier" <vc...@cl...>; <Gavin_King/Cirrus%CI...@ci...>
Cc: <hib...@li...>
Sent: Wednesday, September 04, 2002 5:57 PM
Subject: Re: [Hibernate] Re: [Hibernate-devel] distributed caching
From: Christoph S. <ch...@mc...> - 2002-09-04 15:59:43
Hey Christian! What I am trying to achieve is a distributed read-write cache. I didn't explicitly mention it in my post, but from the context (I was mentioning writes) it could be guessed. I think Gavin understood me better because we had a conversation about distributed read-write caches before. Sorry if I wasn't precise enough. regards christoph

----- Original Message -----
From: "Christian Meunier" <vc...@cl...>
To: "Christoph Sturm" <ch...@mc...>; <Gavin_King/Cirrus%CI...@ci...>
Cc: <hib...@li...>
Sent: Wednesday, September 04, 2002 5:50 PM
Subject: Re: [Hibernate] Re: [Hibernate-devel] distributed caching
From: Christian M. <vc...@cl...> - 2002-09-04 15:50:25
I must be missing something here.... I don't understand why, in a read-only cache (what Christoph is trying to achieve), you need transaction-aware distributed caching. As I understand read-only cache behaviour: when you read an object from the database, you cache it; when you update/delete an object, you flush it from the cache, right? Gavin, could you please show me an example where the isolation of a transaction could be broken without a transaction-aware distributed cache?

Regards
Christian Meunier

----- Original Message -----
From: <Gavin_King/Cirrus%CI...@ci...>
To: "Christoph Sturm" <ch...@mc...>
Cc: <hib...@li...>
Sent: Wednesday, September 04, 2002 5:03 PM
Subject: [Hibernate] Re: [Hibernate-devel] distributed caching

> Hi Christoph sorry about the slow ping time. It takes a while to collect
> thoughts and write a response to some of these things.....
> [...]
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-09-04 15:21:14
|
Hi Christoph, sorry about the slow ping time. It takes a while to collect thoughts and write a response to some of these things.....

>>> Yesterday I was thinking about implementing a distributed cache for Hibernate. I want each node to have its own cache, but if one node writes to its db the data should be invalidated in all caches. You once mentioned that hibernate would need a transaction aware distributed cache to support distributed caching.
I dont get why this is necessary. Can you tell me what kind of problems you think i will run into when trying to implement such a beast, and where you think I could start?
I was thinking about using jcs as cache, and when a session is committed, just invalidate all written objects in the cache. <<<

If you have a look over cirrus.hibernate.ReadWriteCache you'll see that there's some interesting logic that ensures transaction isolation is preserved.

A cache entry carries around with it:

(0) the cached data item, if the item is fresh
(1) the time it was cached
(2) a lock count, if any transactions are currently attempting to update the item
(3) the time at which all locks had been released, for a stale item

All transactions lock an item before attempting to update it and unlock it after transaction completion.

ie. the item has a lifecycle like this:

                   lock
               <---------
  ------> fresh ------> locked ---------> stale
   put            lock          release

(actually the item may be locked and released multiple times while in the "locked" state until the lock count hits zero, but the difficulty of representing that surpasses my minimal ascii-art skills.)

A transaction may read an item of data from the cache if the transaction start time is AFTER the time at which the item was cached. (If not, the transaction must go to the database to see what state the database thinks that transaction should see.)

A transaction may put an item into the cache if

(a) there is no item in the cache for that id
OR
(b) the item is not fresh AND
(c) the item in the cache with that id is unlocked AND
(d) the time it was unlocked is BEFORE the transaction start time

So what all this means is that when doing a put, when locking, and when releasing, the transaction has to grab the current cache entry, modify it, and put it back in the cache _as_an_atomic_operation_.

If you look at ReadWriteCache, atomicity is enforced by making each of these methods synchronized (a rare use of synchronized blocks in Hibernate). However, in a distributed environment you would need some other kind of method of synchronizing access from multiple servers.

I imagine you would implement this using something like the following:

* Create a new implementation of CacheConcurrencyStrategy - DistributedCacheConcurrencyStrategy
* DistributedCacheConcurrencyStrategy would delegate its functionality to ReadWriteCache, which in turn delegates to JCSCache (which must be a distributed JCS cache, so all servers see the same lock count + timestamps)
* Implement a LockServer process that would sit somewhere on the network and hand out very-short-duration locks on a particular id.
* DistributedCacheConcurrencyStrategy would use the LockServer to synchronize access to the JCS cache between multiple servers.

Locks would be expired on the same timescale as the cache timeout (which is assumed in all this to be >> the transaction timeout) to allow for misbehaving processes, server failures, etc.

Of course, any kind of distributed synchronization has a *very* major impact upon system scalability.

I think this would be a very *fun* kind of thing to implement and would be practical for some systems. It would also be a great demonstration of the flexibility of our approach, because clearly this is exactly the kind of thing that Hibernate was never meant to be good for!

:)

Gavin
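[Editor's note] The entry lifecycle described in this message can be sketched as a small state machine. This is purely illustrative: the class, field and method names below are assumptions for the sake of the example, not the actual cirrus.hibernate.ReadWriteCache implementation.

```java
// Sketch of a cache entry following the fresh -> locked -> stale lifecycle.
// put() caches an item; lock() marks it stale and bumps the lock count;
// release() timestamps the moment all locks are gone. All methods are
// synchronized because get/modify/put-back must be atomic.
class CachedEntry {
    private Object item;          // cached data; only meaningful while fresh
    private long cachedAt;        // time the item was put in the cache
    private int lockCount;        // > 0 while transactions are updating it
    private long unlockedAt = -1; // time all locks were released (stale item)

    public synchronized boolean isFresh()  { return item != null && lockCount == 0; }
    public synchronized boolean isLocked() { return lockCount > 0; }

    // A transaction may read only if it started AFTER the item was cached.
    public synchronized Object get(long txStartTime) {
        if (isFresh() && txStartTime > cachedAt) return item;
        return null; // caller must go to the database
    }

    // A transaction may put if the slot is empty, or the stale item was
    // fully unlocked BEFORE the transaction started.
    public synchronized boolean put(Object newItem, long txStartTime, long now) {
        boolean empty = item == null && lockCount == 0 && unlockedAt < 0;
        boolean staleAndUnlocked = !isFresh() && lockCount == 0
                && unlockedAt >= 0 && unlockedAt < txStartTime;
        if (empty || staleAndUnlocked) {
            item = newItem;
            cachedAt = now;
            unlockedAt = -1;
            return true;
        }
        return false;
    }

    public synchronized void lock() { item = null; lockCount++; } // now stale

    public synchronized void release(long now) {
        if (--lockCount == 0) unlockedAt = now; // fully unlocked, timestamped
    }
}
```

Note how a transaction that started before the last release() is refused the put: it may be holding state older than what a concurrent transaction just wrote, which is exactly the isolation leak the timestamps guard against.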
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-09-04 10:38:56
|
Just been playing with the brand new Hibernate XDoclet extension written by Sébastien Guimont. Very very nice. I think there's a bit of work still to go before it's all fully documented and in CVS, but I'm really looking forward to using the finished product. I'm most impressed by just how concise it is.

A little tag in the class comment (approximately) like:

@hibernate.bean table-name="TABLE" discriminator-value="X"

and one for each property, something like:

@hibernate.field column-name="COL" column-length="20"

It's actually a really nice way to handle metadata.

I know a lot of people have asked about this functionality, so many thanks to Sébastien for going off and doing all the work on this.
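[Editor's note] Extrapolating from the two tags quoted above, a tagged class might look roughly like this. The attribute names follow the message; the User class and its column names are invented for illustration, and the extension was still unreleased at the time, so the exact syntax may have differed.

```java
/**
 * A hypothetical persistent class carrying Hibernate XDoclet tags.
 *
 * @hibernate.bean table-name="USERS" discriminator-value="U"
 */
public class User {
    private String name;

    /**
     * @hibernate.field column-name="NAME" column-length="20"
     */
    public String getName() { return name; }

    public void setName(String name) { this.name = name; }
}
```

The tags live in ordinary Javadoc comments, so the class compiles like any other JavaBean; XDoclet reads them at build time to generate the mapping file.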
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-09-04 07:05:19
|
(1) I implemented Validatable. It would be cool if people could try this out and see if anything extra is required here.

(2) You may now use the <composite-id> declaration to specify properties of the persistent class itself as the id properties. The form of the declaration is:

<composite-id> <!-- name and class attributes are missing -->
    <property name="foo"/>
    <property name="bar"/>
</composite-id>

To load() an instance, simply instantiate an instance of the class, set its id properties and call:

session.load(object, object);

(3) After fielding so much user confusion over the semantics of cascade="all", I found a way to extend the functionality consistent with the current semantics and (hopefully) without any risk of breaking existing code. I will need some feedback about the details here, though:

Essentially I have scrapped the notion of a cascaded update(). The functionality performed by cascaded updates is now performed every time the session is flushed. This means that any lifecycle objects are automatically updated or saved as soon as they are detected (in a collection or many-to-one association). The choice of update() or save() is still made on the basis of whether or not the id property is null.

There are still some remaining wrinkles here:

* Objects with primitive ids can't have null ids. Should we interpret 0 as null?
* The treatment of a transient object with a non-null id is different between save() and flush(). A cascaded save() would cause the object to be save()d, whereas if discovered at flush-time, it would be update()d. Should we tweak the semantics of cascaded saves? (Such a change has a fairly large potential to break existing code.) Alternatively, should we reintroduce cascaded update as a concept and tweak the new behaviour of flush() to be consistent with save()? Or is the new behaviour actually fine?

Everyone should be aware that, even though this looks a lot like "persistence by reachability", it still isn't, because delete() remains a manual process. I have very many misgivings about introducing any kind of reference-counting scheme to solve the garbage collection problem...

Gavin

P.S. After this email, I will be receiving mail mainly at ga...@ap... (thanks to Anton + to Christoph, who also offered me use of a mail server)
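[Editor's note] A minimal sketch of what implementing the Validatable callback from point (1) might look like. The interface and exception are modelled on the signature quoted later in this thread (validate() throwing ValidationException); the Account class and its invariant are invented for the example and are not Hibernate API.

```java
// Stand-ins for the callback types described on the list.
interface Validatable {
    void validate() throws ValidationException;
}

class ValidationException extends Exception {
    ValidationException(String message) { super(message); }
}

// A hypothetical application class. validate() only checks invariants and
// throws on violation; per the proposal, it must never adjust its own state.
class Account implements Validatable {
    private final long balance;
    private final long creditLimit;

    Account(long balance, long creditLimit) {
        this.balance = balance;
        this.creditLimit = creditLimit;
    }

    public void validate() throws ValidationException {
        if (balance < -creditLimit)
            throw new ValidationException("balance exceeds credit limit");
    }
}
```

The persistence layer would call validate() just before INSERTing or UPDATEing the object's state, aborting the operation if the exception propagates.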
From: Christian M. <vc...@cl...> - 2002-09-03 22:03:43
|
Hi Christoph,

you already have distributed caching with JCS ;) Just use a lateral cache.

Regards
Jelan

----- Original Message -----
From: Christoph Sturm
To: hib...@li...
Sent: Tuesday, September 03, 2002 5:46 PM
Subject: [Hibernate-devel] distributed caching

  Hi Gavin and others!

  Yesterday I was thinking about implementing a distributed cache for Hibernate. I want each node to have its own cache, but if one node writes to its db the data should be invalidated in all caches. You once mentioned that hibernate would need a transaction aware distributed cache to support distributed caching.

  I don't get why this is necessary. Can you tell me what kind of problems you think I will run into when trying to implement such a beast, and where you think I could start?

  I was thinking about using JCS as the cache, and when a session is committed, just invalidate all written objects in the cache.

  regards
  chris
From: Christoph S. <ch...@mc...> - 2002-09-03 15:48:06
|
Hi Gavin and others!

Yesterday I was thinking about implementing a distributed cache for Hibernate. I want each node to have its own cache, but if one node writes to its db the data should be invalidated in all caches. You once mentioned that hibernate would need a transaction aware distributed cache to support distributed caching.

I don't get why this is necessary. Can you tell me what kind of problems you think I will run into when trying to implement such a beast, and where you think I could start?

I was thinking about using JCS as the cache, and when a session is committed, just invalidate all written objects in the cache.

regards
chris
From: Jon L. <jon...@xe...> - 2002-09-03 15:41:49
|
Hi Chris,

Yes, you are correct... I'm not quite sure what I was thinking when I put that in there... :-) This was a simplified version of my real code, to be used as an example for all of you, and I guess I made a mistake when I was doing that.

What I really meant to express is that you might want to generate an ID that is used as the key to store the Hibernate session in the request. By doing this, you can use multiple instances of the filter to support multiple Hibernate sessions to different databases.

Thanks for finding my mistake so I could clarify my original intention of that "vSessionId" variable!

Jon...

----- Original Message -----
From: "Christoph Sturm" <ch...@mc...>
To: "Jon Lipsky" <jon...@xe...>; <hib...@li...>
Sent: Tuesday, September 03, 2002 12:21 PM
Subject: Re: [Hibernate-devel] mvc & lazy loading

> Hey Jon!
>
> I also think that this is a great idea :)
>
> What i dont like about your approach is that it always creates a session.
>
> > String vSessionId =
> >     ((HttpServletRequest)request).getSession(true).getId();
> > Session vSession = (Session)request.getAttribute(vSessionId);
>
> Why did you do it this way? Is it not ok to use a string constant as
> attribute name?
>
> I think my approach will be to open the session in the controller, and close
> it only in the filter, if there is an open session.
>
> regards
> chris
>
> ----- Original Message -----
> From: "Jon Lipsky" <jon...@xe...>
> To: "Christoph Sturm" <ch...@mc...>; "Brad Clow"
> <bra...@wo...>; <hib...@li...>
> Sent: Tuesday, August 27, 2002 11:46 AM
> Subject: Re: [Hibernate-devel] mvc & lazy loading
>
> > Hi All,
> >
> > I use a Filter (a new addition in the 2.3 servlet spec) to open and close my
> > Hibernate sessions. By doing it this way it doesn't matter if I am using
> > Velocity or JSP or something else to access Hibernate. As far as the "view"
> > is concerned the Hibernate session just exists, and only the Filter has to
> > worry about opening and closing it.
> >
> > I was looking at the examples included with Hibernate and I was thinking that
> > maybe an example should be added of using a Filter, since it's a good way to
> > cleanly separate the creation and closing of the sessions for a web
> > application.
> >
> > Jon...
> >
> > PS - Here is a code snippet to get you started if you want to do it this
> > way:
> >
> > package example;
> >
> > import cirrus.hibernate.Datastore;
> > import cirrus.hibernate.Session;
> > import cirrus.hibernate.SessionFactory;
> > import cirrus.hibernate.Hibernate;
> >
> > import javax.servlet.*;
> > import javax.servlet.http.HttpServletRequest;
> > import java.io.IOException;
> >
> > public class HibernateFilter implements Filter
> > {
> >     static org.apache.log4j.Category log =
> >         org.apache.log4j.Category.getInstance(HibernateFilter.class.getName());
> >
> >     private Datastore datastore;
> >     private SessionFactory sessions;
> >
> >     public void doFilter(ServletRequest request, ServletResponse response,
> >         FilterChain chain) throws IOException, ServletException
> >     {
> >         try
> >         {
> >             // Get the http session id from the request, then we will try to get
> >             // the Hibernate Session from the request. If it doesn't exist, then
> >             // we will create it, otherwise we will use the one that already exists.
> >             String vSessionId =
> >                 ((HttpServletRequest)request).getSession(true).getId();
> >             Session vSession = (Session)request.getAttribute(vSessionId);
> >
> >             if (vSession == null)
> >             {
> >                 vSession = sessions.openSession();
> >                 request.setAttribute(vSessionId, vSession);
> >
> >                 if (log.isDebugEnabled())
> >                 {
> >                     log.debug("Opened hibernate session.");
> >                 }
> >             }
> >         }
> >         catch (Exception exc)
> >         {
> >             log.error("Error opening Hibernate session.", exc);
> >         }
> >
> >         try
> >         {
> >             chain.doFilter(request, response);
> >         }
> >         finally
> >         {
> >             try
> >             {
> >                 String vSessionId =
> >                     ((HttpServletRequest)request).getSession().getId();
> >                 Session vSession = (Session)request.getAttribute(vSessionId);
> >
> >                 // Only try to close the session if it is open, since it might
> >                 // have been closed somewhere else by mistake.
> >                 if (vSession.isOpen())
> >                 {
> >                     vSession.close();
> >
> >                     if (log.isDebugEnabled())
> >                     {
> >                         log.debug("Closed hibernate session.");
> >                     }
> >                 }
> >             }
> >             catch (Exception exc)
> >             {
> >                 log.error("Error closing Hibernate session.", exc);
> >             }
> >         }
> >     }
> >
> >     public void init(FilterConfig aConfig) throws ServletException
> >     {
> >         // Initialize your datastore
> >         datastore = Hibernate.createDatastore();
> >
> >         // Initialize your object -> db mappings
> >         // ...
> >
> >         // Initialize your session factory
> >         sessions = datastore.buildSessionFactory();
> >     }
> >
> >     public void destroy()
> >     {
> >     }
> > }
> >
> > ----- Original Message -----
> > From: "Christoph Sturm" <ch...@mc...>
> > To: "Brad Clow" <bra...@wo...>;
> > <hib...@li...>
> > Sent: Tuesday, August 27, 2002 10:47 AM
> > Subject: Re: [Hibernate-devel] mvc & lazy loading
> >
> > > Hi Brad!
> > >
> > > This subject is an interesting one that I was also thinking of lately.
> > > I did a test app with maverick (mav.sourceforge.net), and there it was
> > > really easy. If the controller (=model) implements ModelLifetime, a discard
> > > function is called when the views are finished and the model is discarded.
> > > There I closed my session. Other frameworks that just forward to the view
> > > dont offer this functionality.
> > > For most of my stuff I use webwork, so I'd like a solution that works there
> > > too. I was thinking of closing the session in the finalize method of my
> > > controller, but then I dont really know when the session will be closed.
> > > Another possibility would be to pass the session to velocity, and close
> > > it in the velocity view servlet after all is rendered.
> > >
> > > How did you implement it?
> > >
> > > regards
> > > chris
> > >
> > > ----- Original Message -----
> > > From: "Brad Clow" <bra...@wo...>
> > > To: <hib...@li...>
> > > Sent: Tuesday, August 27, 2002 12:38 AM
> > > Subject: [Hibernate-devel] mvc & lazy loading
> > >
> > > > To date, we have avoided using lazy loading when writing a web app in
> > > > one of the standard mvc type frameworks (eg. struts, webwork, etc).
> > > > This is because objects are typically retrieved, placed in the request
> > > > attributes, the session closed, and control then passed to the view (JSP,
> > > > velocity, etc). If the view attempts to access a lazy loaded collection
> > > > in one of the objects, an exception is thrown, as the associated session
> > > > is closed.
> > > >
> > > > What do other people do?
> > > >
> > > > Yesterday, I spent a few hours writing a very simple webapp framework
> > > > that uses velocity for the view. It enables the velocity rendering to
> > > > be done while a session is open. So far this is working quite well for
> > > > us.
> > > >
> > > > Comments?
> > > >
> > > > brad
> > > >
> > > > _______________________________
> > > > brad clow
> > > > chief technical officer
> > > > workingmouse
> > > >
> > > > email: bra...@wo...
> > > > web: http://www.workingmouse.com
From: Yaron Z. <ya...@id...> - 2002-09-03 15:08:11
|
Hi,

I would like to propose a new callback method:

OnDereference(Session session, Owner object);

This would be called for objects which are deleted from collections, and would give the application an easy way to decide whether to delete the object from the database or not. The owner object is the object which has a reference (Java wise) to the collection that includes this object.

Three questions:

1. Is it feasible?
2. Is this a good idea, performance wise and design wise?
3. Would it be useful to add a third parameter - the collection which referenced the object till now (for non-arrays only...)?

Thanks,
Yaron.
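[Editor's note] A hypothetical sketch of how the proposed callback might look from the application side. Nothing here is real Hibernate API: the callback was only a proposal, Session below is a stand-in type, and LineItem is invented to illustrate the delete-or-unlink decision the proposal wants to delegate to the application.

```java
// Stand-in for cirrus.hibernate.Session; only its identity matters here.
class Session { }

// Hypothetical callback interface: the return value answers the question
// "should the dereferenced object be deleted from the database?".
interface Dereferenceable {
    boolean onDereference(Session session, Object owner);
}

// Invented example class: an item removed from its owner's collection is
// deleted only if no other owner still references it.
class LineItem implements Dereferenceable {
    private final boolean shared; // referenced by other owners?

    LineItem(boolean shared) { this.shared = shared; }

    public boolean onDereference(Session session, Object owner) {
        return !shared; // private to this owner -> safe to delete
    }
}
```

This also shows why the proposal matters: without such a hook, the framework would need reference counting to know whether a dereferenced object is garbage, which is exactly the scheme Gavin expresses misgivings about elsewhere in this archive.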
From: Yaron Z. <ya...@id...> - 2002-09-03 15:00:21
|
I'm voting for this callback.

-----Original Message-----
From: Gavin_King/Cirrus%CI...@ci... [mailto:Gavin_King/Cirrus%CI...@ci...]
Sent: Monday, September 02, 2002 5:59 AM
To: hib...@li...
Subject: [Hibernate-devel] Validation interface?

Would anyone have a use for a new callback interface

public interface Validatable {
    public void validate() throws ValidationException;
}

that would be called just before saving or updating an object's state? The object wouldn't be allowed to adjust its own state from inside validate(), just throw an exception if its invariants were violated.

I've wondered about this a few times, but it never passed the bloat test really.... (Remember, I'm trying to avoid turning Hibernate into a framework.) However, if it's a thing that would be used, it's sooo easy to implement.