From: Jon L. <jon...@xe...> - 2002-06-07 07:32:42
|
I think it might make sense to enable this by default since doing it this way should support every database. Jon... P.S. - Sorry about the spaces, I didn't know I did it... I ask my developers to do the same, so hopefully they won't find out I didn't follow my own rules. :-) ----- Original Message ----- From: <Gavin_King/Cirrus%CI...@ci...> To: "Jon Lipsky" <jon...@xe...> Cc: <hib...@li...> Sent: Friday, June 07, 2002 8:51 AM Subject: Re: [Hibernate-devel] Using streams for binary types > > > I just checked in a change to add support for using streams when setting > > binary types. > > Thanks Jon. :) I wonder if this option should be enabled by default..... > > Somewhere back in the dim dark past I did some experiments with stuff like > this but I forget the details. > > Gavin > > P.S. Sorry to be a whingy nitpicker, but can we please keep using tabs > instead of spaces for indenting the sourcecode. > |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-07 07:05:31
|
> I just checked in a change to add support for using streams when setting > binary types. Thanks Jon. :) I wonder if this option should be enabled by default..... Somewhere back in the dim dark past I did some experiments with stuff like this but I forget the details. Gavin P.S. Sorry to be a whingy nitpicker, but can we please keep using tabs instead of spaces for indenting the sourcecode. |
From: Jon L. <jon...@xe...> - 2002-06-07 06:53:14
|
Hi, I just checked in a change to add support for using streams when setting binary types. I don't know about other databases, but Oracle has a limitation on the number of bytes you can set when you use setBytes to update a LONG RAW column. If you want to store more data than this limit, then you need to use streams, otherwise you will get an Oracle exception. I added a new Hibernate property that allows you to toggle this functionality on and off, since other databases don't have this limitation. The system property that I added is hibernate.use_streams_for_binary. By setting this to true, Hibernate will use streams to both get and set binary types. Cheers, Jon... |
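The JDBC-level difference this patch works around, as a minimal sketch; the DOCUMENT table and its columns are made-up names for illustration, and only setBytes()/setBinaryStream() themselves are standard JDBC calls:

    import java.io.ByteArrayInputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BinaryStreamSketch {
        // Writes binary data to a LONG RAW / BLOB-style column.
        static void write(Connection con, long id, byte[] data) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "update DOCUMENT set CONTENT = ? where ID = ?"); // hypothetical table
            try {
                // ps.setBytes(1, data);  // subject to driver limits, e.g. Oracle LONG RAW
                ps.setBinaryStream(1, new ByteArrayInputStream(data), data.length); // what the new property switches on
                ps.setLong(2, id);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }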
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-06 13:43:58
|
>For multiple datasources, use multiple instances of SessionFactory. Each >>instance of SessionFactory may be configured using a different set of >>properties by using
>>
>> Datastore datastore = Hibernate.createDatastore()........;
>> Properties props = .... ;
>> SessionFactory sf = datastore.buildSessionFactory(props);
>>
>sorry, i was not aware of this (as i said, i'm new to hibernate :). so, >if you specify a hibernate.session_factory_name at this level, it takes >precedence over whatever VM level properties you might have?
Exactly :) The properties contain all the database connection parameters, etc, so this is the generic way of accessing multiple datastores.
==========================
What I just finished doing (right now) was implementing Serialization for Sessions. This turned out to be much easier than I expected (I had been putting it off for ages because I had been expecting pain) but it still isn't heavily tested. What this new feature means, though, is that you can maintain an open, disconnected Session in a session bean or servlet with HttpSession failover. The only requirement is that your business objects also be serializable. If you have more than one SessionFactory in the application and you want to be able to migrate between servers, you are going to need to provide names for the SessionFactories, unfortunately. (The Session needs to know which is "its" Factory - even if it moves from one machine to another.) What I've got at the moment is this:
1. If no name is specified in hibernate.session_factory_name, the SessionFactory gets a name like "hibernate/session_factory/default/1". This name is just used internally by the serialization mechanism. The SessionFactory is NOT bound into JNDI.
2. If a name is specified, hibernate attempts to bind the SessionFactory to that name (using hibernate.jndi.class + hibernate.jndi.url OR VM defaults if these are not specified).
3. If we fail to bind, report that in the log, and carry on, using the name internally for the serialization mechanism.
Does that sound reasonable? |
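A sketch of the multiple-datastore usage quoted above, filled out just enough to compile; the connection property names and values are illustrative assumptions, the mapping-registration step is only indicated by a comment because its exact API isn't shown in the thread, and the imports assume the cirrus.hibernate package used elsewhere here:

    import java.util.Properties;
    import cirrus.hibernate.Datastore;
    import cirrus.hibernate.Hibernate;
    import cirrus.hibernate.SessionFactory;

    public class TwoDatastores {
        public static void main(String[] args) throws Exception {
            Datastore datastore = Hibernate.createDatastore();
            // ... register mapped classes / mapping files with the datastore here ...

            Properties ordersProps = new Properties();
            ordersProps.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@dbhost:1521:ORDERS"); // assumed property name
            Properties reportsProps = new Properties();
            reportsProps.setProperty("hibernate.connection.url", "jdbc:mysql://dbhost/reports");         // assumed property name

            // One SessionFactory per datasource, each built from its own properties:
            SessionFactory orders  = datastore.buildSessionFactory(ordersProps);
            SessionFactory reports = datastore.buildSessionFactory(reportsProps);
        }
    }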
From: Viktor S. <phr...@im...> - 2002-06-06 13:16:40
|
hi, Gavin_King/Cirrus%CI...@ci... wrote: >For multiple datasources, use multiple instances of SessionFactory. Each >instance of SessionFactory may be configured using a different set of >properties by using > > Datastore datastore = Hibernate.createDatastore()........; > Properties props = .... ; > SessionFactory sf = datastore.buildSessionFactory(props); > sorry, i was not aware of this (as i said, i'm new to hibernate :). so, if you specify a hibernate.session_factory_name at this level, it takes precedence over whatever VM level properties you might have? viktor |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-06 06:22:40
|
>Based on what you described, i can't really see how hibernate would work >with multiple datasources in a j2ee environment (which i think is really >important: the reasons for using J2EE are less, if you dont have to >juggle multiple datasources, etc).
For multiple datasources, use multiple instances of SessionFactory. Each instance of SessionFactory may be configured using a different set of properties by using
Datastore datastore = Hibernate.createDatastore()........;
Properties props = .... ;
SessionFactory sf = datastore.buildSessionFactory(props);
>It seems to me that putting the JNDI binding stuff in hibernate is >probably a bad idea, _especially_ using VM level settings.
i.e. they need not be VM-level settings. But in the common case of a single database they _can_ be.
> - and just put a resource ref (or env-ref?) in the beans' descriptor >pointing to the sessionfactory it's going to use (or more, if you happen >to need more in the same bean)
That's the intended usage. You couldn't do that before because the SessionFactory didn't implement javax.naming.Referenceable.
>of course, nothing prevents me from doing this anyway, i guess :)
As currently implemented, you wouldn't be able to bind the SessionFactory yourself, because the instance is only registered with a javax.naming.spi.ObjectFactory if a session_factory_name is specified. I could change this so that every instance is registered with the ObjectFactory class and then it's the application's responsibility to bind it to the name server. However, I can't really see that this provides that much extra flexibility......
> but in >what you described i don't really see the point in having all these jndi >features in the hibernate core. what i really like about hibernate so >far, is that it tries to do only one thing, and does that in an elegant >fashion. i think the jndi stuff is just bloat, that in a simple case you >don't need, in a complex case you can't use.
I think you can still use it in the complex case you describe. I think it _would_ be bloat if it had taken me more than a few lines of code to implement or if you would be forced to use this functionality, which you certainly _aren't_. I appreciate your input..... peace Gavin |
From: Viktor S. <phr...@im...> - 2002-06-06 05:26:36
|
hi, Gavin_King/Cirrus%CI...@ci... wrote: >The code I just committed to CVS makes the SessionFactory bind itself to >JNDI when the user specifies the property hibernate.session_factory_name. >This allows BMP EJBs, for example, to obtain their SessionFactory with a >JNDI lookup. I have a question about this: the way I've done it, we use the >same properties hibernate.jndi.url + hibernate.jndi.class for >DatasourceCOnnectionProvider (which looks up a Datasource in JNDI) as for >SessionFactory. Is there a strong reason to decouple these? ie. would it be >normal to have the Datasource registered with a different JNDI provider >from the SessionFactory? > >I want to make this reasonably flexible without having an explosion of >these hibernate.xxxx properties..... > > i'm new to hibernate, i haven't actually used it for much yet, so excuse my ignorance :) Based on what you described, i can't really see how hibernate would work with multiple datasources in a j2ee environment (which i think is really important: the reasons for using J2EE are less, if you dont have to juggle multiple datasources, etc). It seems to me that putting the JNDI binding stuff in hibernate is probably a bad idea, _especially_ using VM level settings. Something like this would be more flexible i think: - configure the SessionFactories you need in one (or more) startup class and bind them in JNDI. take the parameters from wherever you like (eg. in weblogic you can pass params to a startup class) - and just put a resource ref (or env-ref?) in the beans' descriptor pointing to the sessionfactory it's going to use (or more, if you happen to need more in the same bean) of course, nothing prevents me from doing this anyway, i guess :) but in what you described i don't really see the point in having all these jndi features in the hibernate core. what i really like about hibernate so far, is that it tries to do only one thing, and does that in an elegant fashion. i think the jndi stuff is just bloat, that in a simple case you don't need, in a complex case you can't use. best regards, viktor |
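A rough sketch of the startup-class approach suggested here, assuming a container that runs a registered class at boot; the JNDI name, the empty Properties, and the cirrus.hibernate import locations are all assumptions, not an established Hibernate API:

    import java.util.Properties;
    import javax.naming.InitialContext;
    import cirrus.hibernate.Hibernate;
    import cirrus.hibernate.SessionFactory;

    public class HibernateStartup {
        // Called once at server startup (e.g. a WebLogic startup class).
        public void start() throws Exception {
            Properties props = new Properties();
            // ... connection settings taken from container startup parameters ...
            SessionFactory factory = Hibernate.createDatastore().buildSessionFactory(props);

            // Bind it ourselves rather than relying on hibernate.session_factory_name;
            // beans then reference it through a resource-ref / env-ref.
            new InitialContext().rebind("app/hibernate/MainFactory", factory);
        }
    }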
From: Jon L. <jon...@xe...> - 2002-06-05 14:13:59
|
I think this makes sense... Doing it this way means you have everything related to persistence (Connections and the SessionFactory) all bound to the same JNDI tree. The only example I can think of where you would have a datasource registered with a different JNDI provider is if you look up a remote data source from an external J2EE container, and I think that is not exactly an everyday occurrence. ----- Original Message ----- From: <Gavin_King/Cirrus%CI...@ci...> To: <hib...@li...> Sent: Wednesday, June 05, 2002 2:07 PM Subject: [Hibernate-devel] Hibernate + JNDI > The code I just committed to CVS makes the SessionFactory bind itself to > JNDI when the user specifies the property hibernate.session_factory_name. > This allows BMP EJBs, for example, to obtain their SessionFactory with a > JNDI lookup. I have a question about this: the way I've done it, we use the > same properties hibernate.jndi.url + hibernate.jndi.class for > DatasourceConnectionProvider (which looks up a Datasource in JNDI) as for > SessionFactory. Is there a strong reason to decouple these? ie. would it be > normal to have the Datasource registered with a different JNDI provider > from the SessionFactory? > > I want to make this reasonably flexible without having an explosion of > these hibernate.xxxx properties..... > > > _______________________________________________________________ > > Don't miss the 2002 Sprint PCS Application Developer's Conference > August 25-28 in Las Vegas -- http://devcon.sprintpcs.com/adp/index.cfm > > _______________________________________________ > Hibernate-devel mailing list > Hib...@li... > https://lists.sourceforge.net/lists/listinfo/hibernate-devel > |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-05 12:22:19
|
The code I just committed to CVS makes the SessionFactory bind itself to JNDI when the user specifies the property hibernate.session_factory_name. This allows BMP EJBs, for example, to obtain their SessionFactory with a JNDI lookup. I have a question about this: the way I've done it, we use the same properties hibernate.jndi.url + hibernate.jndi.class for DatasourceConnectionProvider (which looks up a Datasource in JNDI) as for SessionFactory. Is there a strong reason to decouple these? i.e. would it be normal to have the Datasource registered with a different JNDI provider from the SessionFactory? I want to make this reasonably flexible without having an explosion of these hibernate.xxxx properties..... |
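The lookup side of the feature being described, sketched for a BMP-style bean; the JNDI name is a placeholder standing in for whatever hibernate.session_factory_name was set to, and openSession()/close() are used as ordinary Session operations mentioned elsewhere in this thread:

    import javax.naming.InitialContext;
    import cirrus.hibernate.Session;
    import cirrus.hibernate.SessionFactory;

    public class AccountBean {
        public void ejbLoad() throws Exception {
            SessionFactory factory = (SessionFactory)
                new InitialContext().lookup("hibernate/session_factory/accounts"); // placeholder name
            Session session = factory.openSession();
            try {
                // ... load this bean's state through the session ...
            } finally {
                session.close();
            }
        }
    }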
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-05 06:18:46
|
I released a new version last night, just to get a bunch of changes out there and into use.... There are changes to the exceptions model that could potentially break existing application code, so heads up on that. (I doubt this will be a problem for many people.) Check out all the new Exception subclasses in the API docs. Other new features are lifecycle objects (cascade="all" only, so far), a proper ConnectionProvider framework and Jon Lipsky's composite key query patch. Performance enhancements include less automatic flushing and improved deep fetching. For version 1.0 I am waiting on:
1. finished Transaction API
2. Referenceable SessionFactory (I have just about finished this .. will commit it in a few hours probably)
3. opinions on my proposal to replace PersistentLifecycle (see my last post on the lifecycle objects thread)
I figure if we are going to make any changes to major APIs in the near future, they should be done before we call something 1.0. That's why I'm keen to resolve the question of a new lifecycle interface at this stage. (To recap this issue, I'm thinking of providing the following:)
package cirrus.hibernate;
interface Lifecycle { // replaces PersistentLifecycle
 public boolean save(Session s) throws ....;
 public boolean update(Session s) throws ....;
 public boolean delete(Session s) throws ....;
 public void load(Session s);
}
The new interface lets the object veto operations (which might have occurred by cascade, for example) and adds a specific callback for update(). I've removed the store() callback because I can't think of any particularly good use for it (does anyone use this??). Please let me know what you think. |
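How a domain class might use the proposed callbacks; the throws clauses are elided in the proposal so a plain Exception is assumed, the child field and read-only flag are invented for illustration, and which boolean value signals a veto is also an assumption:

    import cirrus.hibernate.Session;

    public class Parent /* implements cirrus.hibernate.Lifecycle (proposed) */ {

        private Child child;        // hypothetical association
        private boolean readOnly;   // hypothetical flag used to veto updates

        public boolean save(Session s) throws Exception {
            if (child != null) s.save(child);   // cascade the save by hand
            return false;                       // false = do not veto (assumed convention)
        }

        public boolean update(Session s) throws Exception {
            return readOnly;                    // veto updates of read-only instances
        }

        public boolean delete(Session s) throws Exception {
            if (child != null) s.delete(child); // cascade the delete by hand
            return false;
        }

        public void load(Session s) {
            // e.g. remember the session for later lazy work
        }

        static class Child {}
    }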
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-06-04 04:05:24
|
> I think the meaning got lost in my verbiage here. What I meant to suggest > was that cascading be available via overloaded methods such as: > > save/update(Object o, boolean cascade); //I wouldn't include cascade delete
But how would you know where to stop? (is it a shallow or deep cascade?) How would you know _which_ properties to cascade and which properties to ignore? I think this would be useful only for object models with a very tree-like structure where you can always cascade a whole "branch". The mapping-file and callback approaches both allow "selective" cascades to some relationships but not others. PersistentLifecycle already provides callbacks that let the application cascade save() and delete(). It seems clear to me now that we also need to provide an update() callback. I'm thinking of deprecating PersistentLifecycle and replacing it with:
package cirrus.hibernate;
public interface Lifecycle {
 public boolean save(Session s) throws .....; // used to be create()
 public boolean update(Session s) throws ....; // called from update()
 public boolean delete(Session s) throws ....;
 public void load(Session s); // notice I got rid of store()
}
(The return value allows the object to veto the requested operation transparently to the rest of the application - it's like a short circuit.) What do people think of this suggestion? I think it's a big improvement.
> The other problem I see with the cascade all option is that cascade delete > is only viable in some cases, whereas cascade save and update are viable in > almost all cases.
Fair enough, it's clear that we do need the various cascade options that I was originally thinking of: cascade="none" cascade="all" cascade="delete" cascade="update" cascade="save" cascade="save|update" cascade="save|delete"
> What I am really suggesting here is that cascade delete be the *only* > cascade type tag - all other cascades to be determined by the overloaded > method calls. This is because delete is the only operation where semantic > knowledge of the Parent-Child dependencies determines how the operation is > executed.
That's fair enough, but not everyone would agree that the mapping file may only contain stuff that's part of the object model. I'm happy to provide maximum flexibility here and let people use exactly which subset of functionality fits their philosophical approach.
> Personally, I like the idea of this being a separate method (persist), > since save/update/insert all lose their meaning when they cascade to other > operations. From your example where foo and baz need updating and bar > needs saving. > > foo -> bar -> baz > > By calling update(foo), I implicitly call update(bar). Instead of an > error, the method is redirected to save(bar). Yet if I call save(bar) > directly I would get an error. Why? Instead, I would push > save/update/insert off center stage - people can use them if they know > exactly what operation they want to perform - and instead use persist() as > the normal case, which doesn't imply any one operation. Save/update/insert > would then cascade to only their own operations, allowing optimization > since they don't have to figure out what operation to perform.
Actually update() in its current implementation has semantics so close to your persist() that it's not worth providing two separate methods. Arguably update() is badly named, but let's make the question of naming a separate question. e.g. session.update(newObject) actually calls session.save(newObject) if the id is null. It's silly to have two separate methods that 1.
update the object if its id is non-null otherwise save it 2. update the object if its id is non-null otherwise throw an exception > Insert is a tough one, since it is difficult to tell if a referenced object > that has an assigned id was meant to be inserted or updated. The only way > to tell is an SQL error or EXISTS check. You could stipulate that > persist() only does saves and updates, no inserts. Yeah - basically insert() has turned out to be an operation available at the "top" level only. I doubt that this will really be a huge issue to most users. Gavin. |
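The null-id rule being discussed, restated as application-side logic rather than Hibernate internals; Parent and its getId() accessor are hypothetical, and a wrapper-typed identifier is used to sidestep the primitive-id question raised above:

    import cirrus.hibernate.Session;

    public class SaveOrUpdateSketch {
        static void persist(Session session, Parent entity) throws Exception {
            if (entity.getId() == null) {
                session.save(entity);    // null id: treat as new
            } else {
                session.update(entity);  // assigned id: treat as existing
            }
        }

        static class Parent {
            private Long id;             // nullable identifier
            Long getId() { return id; }
        }
    }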
From: Eric E. <ev...@pr...> - 2002-06-03 21:43:29
|
Gavin- > >The decision to cascade saves and updates is really based on efficiency, > >and only the application can know when to take advantage of persisting > >select portions of the graph - thus I would not want to make the decision > >static by placing it in the mapping file. > >In my opinion, the current design gives maximum flexibility here. There >is a tension between two conflicting goals: > > * dont want to update() objects unecessarily or delete() objects >incorrectly > * dont want to force the application to have to spell out every single >save/update/delete in code > >I think what we have is a good balance. Hibernate never does an update or >delete unless you explicitly request it *somewhere*. However, by specifying >it in the mapping file, you can save having to request it *every time*. I think the meaning got lost in my verbiage here. What I meant to suggest was that cascading be available via overloaded methods such as: save/update(Object o, boolean cascade); //I wouldn't include cascade delete This saves the application from having to call these operations for each object in the graph and allows the application to decide when it can optimize by acting on only a portion of the graph. IMHO, the key problem with placing the cascade all option in the mapping file is that once this option is set, the application has no choice but to allow the cascade to happen. If the application knows that only one object out of a graph of a hundred changed, shouldn't it be able to optimize? The other problem I see with the cascade all option is that cascade delete is only viable in some cases, whereas cascade save and update are viable in almost all cases. For instances, an Organization-Organization relation (where one entity is not strictly dependant on the other) should cascade create and save, but not delete. As a result, cascade all can't be used in these instances and the application will *still* end up spelling out all of the calls. This could be addressed by splitting up the cascade option into save/update/delete as you had suggested, but why make it so complicated? > >Second, after taking the other cascading operations out of the mapping > >file, I would use the tag 'dependant' to indicate a relationship in which > >deleting the parent results in the child being deleted. Also, since the > >child lives and dies with the parent, I would allow de-referencing by the > >parent to implicitly delete the child if the call to parent.update() is > >allowing cascades. > >The first part of this (cascading deletes without cascading save + update) >can be addressed by allowing cascade="delete" in the mapping file. This was >something I was prepared to add anyway. Your cascade="delete" tag is the same as my "dependant" tag, so I'll just call it cascade delete. What I am really suggesting here is that cascade delete be the *only* cascade type tag - all other cascades to be determined by the overloaded method calls. This is because delete is the only operation where semantic knowledge of the Parent-Child dependencies determines how the operation is executed. If Pet is dependant on Person such that Pet should be deleted with Person, Pet must be deleted with Person every time- to not do so would be an error for the model. This type of relationship can be put in the mapping file and enforced by having delete act as the mapping file dictates. 
But if Person is modified and Pet is not, not updating Pet is an optimization that the application can choose, not an error in the same way that failing to delete would be. >The second part - deleting objects dereferenced by the parent - is in >general a Hard Problem. You would probably be able to make it work for >children one level deep fairly easily - but anything beyond that would have >very deep implications for the complexity + efficiency of Hibernate. I >already >implemented something like this for collections. The only way I could >really make it work with any expectation of efficiency was with the >following restrictions: > > * collections may be referenced by exactly one parent object/collection > * collections may only be retrieved by retrieving their parent > >Even then, the code for collections got real hairy, as anyone who has >looked at RelationalDatabaseSession can testify. Basically I regularly >start itching to rip out support for subcollections/toplevel-collections >purely for the purpose of simplifying understandability of that code. Yeah, I knew this was a tall order :) Maybe I'll work on it someday. . . > > Finally, my most questionable idea - add a PersistentLifcycle method: > > boolean impliciteDelete(Session, ParentEntity) > > to allow dependant entities to determine if they should be deleted. > > This would give dependant entities a chance to examine their own state > > for flags(Company issue or non-Company issue) and possibly check for > > other parents who might be holding a reference to them. Checking for > > parent references could get messy. . . I don't have a good answer for > > this. > >If this would be useful, It would be trivial to do. I would add a new >interface: > >interface CascadeVetoer { > public boolean vetoCascade(Object parent, Operation action); >} > >or something...... > >But would it really be _that_ useful? > > >Third, merge the functionality of insert(), save(), and update() into one > >new method (persist?). Assuming that these operations cascade, it becomes > >a chore to keep track of which portions of your graph are new (save) or > >existing (update). For a cascading persist operation, persist() could >walk > >the tree and assume that entities with assigned IDs are existing and those > >without IDs are new. > >This would be very nice. I think we can actually just add this behaviour >straight into update(), with rules like: > >1. if reached object is associated with session, ignore it (currently >throws exception) >2. if reached object has no id property, throw exception (current >behaviour) >3. if reached object has a null id, save it (currently throws exception) >4. if reached object has non-null id, update it (current behaviour) > >The most problematic is (3). How do we detect null values for primitive >ids? I guess we just consider that an unsupported case.... > >One further complication: > >foo -> bar -> baz > >foo, baz need updating. bar needs saving. Under the semantics I just >proposed, session.update(foo) would cascade to session.save(bar). I suspect >that what you would really want is for the next cascade to be >session.update(baz), not session.save(baz) so we would need to remember >that we are cascading an update, not just a save. > >This way, we would support two distinct programming styles > > * find/update/delete (with versioned data) > * find/load/save/delete (with versioned or unversioned data) > >Oh, I just realised something I forgot about before. At present insert() >cascades to save(). Is this best? 
Should insert() cascade to insert()? >Don't know what is the best semantics here.... Personally, I like the idea of this being a separate method (persist), since save/update/insert all lose their meaning when they cascade to other operations. From your example where foo and baz need updating and bar needs saving. foo -> bar -> baz By calling update(foo), I implicitly call update(bar). Instead of an error, the method is redirected to save(bar). Yet if I call save(bar) directly I would get an error. Why? Instead, I would push save/update/insert off center stage - people can use them if they know exactly what operation they want to perform - and instead use persist() as the normal case, which doesn't imply any one operation. Save/update/insert would then cascade to only their own operations, allowing optimization since they don't have to figure out what operation to perform. And, of course, persist would be overloaded to allow cascading! :) Insert is a tough one, since it is difficult to tell if a referenced object that has an assigned id was meant to be inserted or updated. The only way to tell is an SQL error or EXISTS check. You could stipulate that persist() only does saves and updates, no inserts. Or leave it as an option in the persist method as to whether or not persist is supposed to figure out if an object needs to be inserted or updated. Hopefully, people would use the option only if they really needed to. . . Best wishes, Eric Everman |
From: Anton v. S. <an...@ap...> - 2002-05-31 04:20:41
|
> >> because of the rule which reads <include name="**/*.properties"/>. > > Could you do me a favor + remove that include from build.xml + > then test the build Anton? TIA... That worked - after a full clean build, the jar no longer contains hibernate.properties. I've commented out that rule, and checked build.xml back in. FWIW, I also changed the version number in build.xml to 0.9.13. > I use Eclipse. Shhhhhhh..... Heh, me too. Except I haven't gotten around to trying to get Hibernate in Eclipse to generate a jar, so I just right-clicked on build.xml, in Eclipse, and selected "Run Ant". I was pleasantly surprised that it worked first time... Anton |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-31 03:42:48
|
>> When I use build.xml to build the "dist" target, the resulting >> .jar ends up containing a copy of hibernate.properties, I assume >> because of the rule which reads <include name="**/*.properties"/>. Could you do me a favor + remove that include from build.xml + then test the build Anton? TIA... >> However, the distribution jars on SourceForge don't contain >> hibernate.properties. Are they built some other way? >> Not a problem, I'm just wondering what secret technique is being >> used to build the distribution jars. I use Eclipse. Shhhhhhh..... (I actually never ever used build.xml; other people contributed it. Unfortunately that means it gets out of date.) |
From: Anton v. S. <an...@ap...> - 2002-05-31 00:56:19
|
This is just an idle question: When I use build.xml to build the "dist" target, the resulting .jar ends up containing a copy of hibernate.properties, I assume because of the rule which reads <include name="**/*.properties"/>. However, the distribution jars on SourceForge don't contain hibernate.properties. Are they built some other way? The reason I noticed this is that after switching to calling buildSessionFactory() with no arguments, when hibernate.properties is in the jar, it is loaded by default. When I got a message about a missing com.ibm.db2.jdbc.app.DB2Driver, I knew something was wrong, since I don't use DB2... Removing the file from the jar allows my own external properties file to be loaded. Not a problem, I'm just wondering what secret technique is being used to build the distribution jars. Anton |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-30 01:57:40
|
QUOTE: ====================================================================== Gavin - I readily admit that the following are random thoughts which are only somewhat related to lifecycle objects. . . First off, the relationship between the layers of real world business objects, OO modeling of business objects, and relational database persistence is murky at best. Hibernate does a great job between the bottom two layers, but there are naturally a lot of places where the mappings between the layers is difficult. A change in state of business objects should represent a *real* change in state, and that change should be persisted (assuming the change is committed, etc) regardless of the dependency or hierarchy of those business objects. For example, if a Person is loaded with a many-to-one child entity of Shoes, changes made to the Person's Shoes are no more or less real then changes made to the Person. The same would be true if a new Person were created with Shoes. These types of changes are examples where the real world and the OO world mesh really well. Things get more interesting when business objects are removed (avoiding the term delete). If a Person is removed, he will take his Shoes with him, but he will leave behind his Projects. In this case, the dependency / hierarchy of the business objects determines how their persistence is handled. Even more complex, Shoes may or may not be company issue - Person's take non-Company issue Shoes with them when they leave, but Company issue Shoes stay with the Company. In this case its a flag in the child entity or the fact that the child has multiple parents that prevents it from being removed. Here the real world and the persistence layer mesh nicely allowing Shoes to persist without a Person, while the OO world would have GC'ed the Shoes. To completely blur the lines and dive into some of the implementation details, conceptually what should happen when an existing (persisted) Person is given new Shoes to replace his old ones? What if the Person gives his old Shoes to a different Person? . . . . The first & second design goals of Hibernate - Support for Java programming constructs & natural OO and declarative programming model - really imply the idea of a transparent persistence service, persistence that just *there* without having to worry about it, "Just use objects the way you always have." The key exception would be the concept of explicit deletes, which is actually an improvement on OO for many business purposes. Towards those goals, I would assume that all entities recursively cascade save() and update() unless told not to by an overloaded method (or visa versa for backwards compatibility). In the real world and the OO world, the state of a parent includes the state of its children, so a cascading, recursive behavior is intuitive. The decision to cascade saves and updates is really based on efficiency, and only the application can know when to take advantage of persisting select portions of the graph - thus I would not want to make the decision static by placing it in the mapping file. One other reason to *not* cascade saves and updates would be architecture (such as ejb), but the original implementation of save and update would still be available. Second, after taking the other cascading operations out of the mapping file, I would use the tag 'dependant' to indicate a relationship in which deleting the parent results in the child being deleted. 
Also, since the child lives and dies with the parent, I would allow de-referencing by the parent to implicitly delete the child if the call to parent.update() is allowing cascades. Third, merge the functionality of insert(), save(), and update() into one new method (persist?). Assuming that these operations cascade, it becomes a chore to keep track of which portions of your graph are new (save) or existing (update). For a cascading persist operation, persist() could walk the tree and assume that entities with assigned IDs are existing and those without IDs are new. One place where this could be a problem is an implicit delete. If PersonA gives her dependent Shoes to PersonB, Shoes will be deleted. When PersonB is persisted, Shoes will already have an ID but will not be in the database. I would be prepared to catch this type of SQL error and assume the above scenario, recovering by using an INSERT instead of an UPDATE. Finally, my most questionable idea - add a PersistentLifcycle method: boolean impliciteDelete(Session, ParentEntity) to allow dependant entities to determine if they should be deleted. This would give dependant entities a chance to examine their own state for flags (Company issue or non-Company issue) and possibly check for other parents who might be holding a reference to them. Checking for parent references could get messy. . . I don't have a good answer for this. And there you have it. I'm interested to know if you think this is a valid direction for Hibernate. Apologies if I paraphrased your design goals to much for my own purposes :) If you think this will generate useful discussion, feel free to post it to the public or dev forum. Thanks for your ear, Eric Everman ================================================================== >The decision to cascade saves and updates is really based on efficiency, >and only the application can know when to take advantage of persisting >select portions of the graph - thus I would not want to make the decision >static by placing it in the mapping file. In my opinion, the current design gives maximum flexibility here. There is a tension between two conflicting goals: * dont want to update() objects unecessarily or delete() objects incorrectly * dont want to force the application to have to spell out every single save/update/delete in code I think what we have is a good balance. Hibernate never does an update or delete unless you explicitly request it *somewhere*. However, by specifying it in the mapping file, you can save having to request it *every time*. >Second, after taking the other cascading operations out of the mapping >file, I would use the tag 'dependant' to indicate a relationship in which >deleting the parent results in the child being deleted. Also, since the >child lives and dies with the parent, I would allow de-referencing by the >parent to implicitly delete the child if the call to parent.update() is >allowing cascades. The first part of this (cascading deletes without cascading save + update) can be addressed by allowing cascade="delete" in the mapping file. This was something I was prepared to add anyway. The second part - deleting objects dereferenced by the parent - is in general a Hard Problem. You would probably be able to make it work for children one level deep fairly easily - but anything beyond that would have very deep implications for the complexity + efficiency of Hibernate. I already implemented something like this for collections. 
The only way I could really make it work with any expectation of efficiency was with the following restrictions: * collections may be referenced by exactly one parent object/collection * collections may only be retrieved by retrieving their parent Even then, the code for collections got real hairy, as anyone who has looked at RelationalDatabaseSession can testify. Basically I regularly start itching to rip out support for subcollections/toplevel-collections purely for the purpose of simplifying understandability of that code. > Finally, my most questionable idea - add a PersistentLifcycle method: > boolean impliciteDelete(Session, ParentEntity) > to allow dependant entities to determine if they should be deleted. > This would give dependant entities a chance to examine their own state > for flags(Company issue or non-Company issue) and possibly check for > other parents who might be holding a reference to them. Checking for > parent references could get messy. . . I don't have a good answer for > this. If this would be useful, It would be trivial to do. I would add a new interface: interface CascadeVetoer { public boolean vetoCascade(Object parent, Operation action); } or something...... But would it really be _that_ useful? >Third, merge the functionality of insert(), save(), and update() into one >new method (persist?). Assuming that these operations cascade, it becomes >a chore to keep track of which portions of your graph are new (save) or >existing (update). For a cascading persist operation, persist() could walk >the tree and assume that entities with assigned IDs are existing and those >without IDs are new. This would be very nice. I think we can actually just add this behaviour straight into update(), with rules like: 1. if reached object is associated with session, ignore it (currently throws exception) 2. if reached object has no id property, throw exception (current behaviour) 3. if reached object has a null id, save it (currently throws exception) 4. if reached object has non-null id, update it (current behaviour) The most problematic is (3). How do we detect null values for primitive ids? I guess we just consider that an unsupported case.... One further complication: foo -> bar -> baz foo, baz need updating. bar needs saving. Under the semantics I just proposed, session.update(foo) would cascade to session.save(bar). I suspect that what you would really want is for the next cascade to be session.update(baz), not session.save(baz) so we would need to remember that we are cascading an update, not just a save. This way, we would support two distinct programming styles * find/update/delete (with versioned data) * find/load/save/delete (with versioned or unversioned data) Oh, I just realised something I forgot about before. At present insert() cascades to save(). Is this best? Should insert() cascade to insert()? Don't know what is the best semantics here.... peace Gavin |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-29 11:26:28
|
> I can quite easily implement arbitrarily-deep fetching so that we reduce > this to 1 + 5 = 6 SELECTs. (And I might do this tonight, actually.) > Optimally we could do it in a single select. That would require some > refactoring (unifying Query and ByIDQuery).
Yeah, I just implemented deeper fetching, so now, potentially, you can get a whole tree of objects in one go (but only if you have ANSI-style outerjoins). i.e.
* Session.load() will grab the whole tree in one SELECT (excluding collections)
* Session.find() grabs root objects only in the first SELECT, then there's another SELECT for each immediate child of a root object.
This is an improvement. An even bigger improvement would be for find() to grab everything in one SELECT. |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-29 08:13:32
|
Now that 1.0 is functionally complete, I'll just outline the kind of things I'm thinking will go into 1.1.
* Serializable Session
* Referenceable SessionFactory
* normalized table per subclass relational model
* smarter fetching
We need serializable sessions to support a nice programming model when using hibernate long transactions in
- servlets configured for http session failover or
- stateful session beans.
Unfortunately this will be quite tricky to implement because we need to reconstruct all the Session's references to instances of ClassPersister, CollectionPersister at deserialization. *yick* We need a referenceable SessionFactory to support the kosher J2EE programming model. (i.e. the application obtains the SessionFactory from JNDI.) I AM LOOKING FOR A VOLUNTEER TO START IMPLEMENTING THIS...... The last two items should fall out fairly easily from a general refactor / remodel of the various bits of code that render SQL SELECTs. There are four places this is done at present
- directly on cirrus.hibernate.impl.ClassPersister
- directly on cirrus.hibernate.impl.CollectionPersister
- on cirrus.hibernate.query.Query
- on cirrus.hibernate.impl.ByIDQuery
By default hibernate "fetches" one object in each row of a ResultSet. If you happen to be using a database which supports ANSI-style outerjoins, you may force hibernate to fetch a one-level-deep object graph in a single fetch (by setting hibernate.use_outer_join). But not from a query. Urrrghhh. Did that make any sense? Consider:
s.find("select foo.bar.baz.qux from foo in class Foo where foo.count=1")
if there were 5 instances of Foo with count==1, this query would require 1 SELECT statement. However:
s.find("select foo from foo in class Foo where foo.count=1")
would require 1 + 5 + 5 = 11 SELECT statements! The first select would fetch the root instances of Foo. The next five selects would fetch instances of Bar and Baz (i.e. children one and two levels deep). The next five selects would fetch instances of Qux (which have no children). On the other hand, if hibernate.use_outer_join=false, 1 + 5 + 5 + 5 = 16 SELECTs would be required. I can quite easily implement arbitrarily-deep fetching so that we reduce this to 1 + 5 = 6 SELECTs. (And I might do this tonight, actually.) Optimally we could do it in a single select. That would require some refactoring (unifying Query and ByIDQuery). Now, if all this sounds like a mess ... well .... it _is_ ..... but it's not as bad as it sounds. The way Hibernate is structured, it's a simple matter of rendering the right SQL, which is a fairly easy problem. The actual code that loads objects from ResultSets is quite capable of accepting more complicated fetches. In my defence, I would like to point out that my aims for performance (in the early stages of the project) were only to compare favorably with the performance of CMP entity beans or handcoded JDBC. Clever fetching algorithms were a little beyond scope. There are advantages to this approach. Currently people have a nice API and programming model to develop applications with - and they can expect performance similar to what they are used to from other common approaches. As Hibernate develops, they can expect their applications to get faster, with no extra work on their part! (A couple of people already saw this happen in the last release.) Still, this is probably the messiest part of the project at present and the time is ripe for improvements. Anyone want to sign up for either of the first two tasks? Gavin |
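The two queries from the example, as they would be issued from application code; the comments only restate the SELECT counts given above and assume hibernate.use_outer_join is enabled:

    import cirrus.hibernate.Session;

    public class FetchDepthSketch {
        static void run(Session s) throws Exception {
            // Navigates to the leaves in the query itself: 1 SELECT for the five matches.
            s.find("select foo.bar.baz.qux from foo in class Foo where foo.count=1");

            // Returns root objects: 1 SELECT for the roots, then 5 for Bar/Baz and
            // 5 for Qux = 11 SELECTs (16 with hibernate.use_outer_join=false).
            s.find("select foo from foo in class Foo where foo.count=1");
        }
    }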
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-29 07:14:38
|
> I assume calling save() early on isn't practical, before initialization of > the object is complete. Calling save() "early on" is _exactly_ the same as calling create(). It's inefficient, however, because you end up INSERTing a row full of nulls that must then be immediately UPDATEd. So you accomplish in two SQL statements what would normally be doable in one. (That's why I'm deprecating create(), incidentally.) Somewhere back in the dim dark past this was how hibernate avoided violating foreign key constraints. Now it has a much cleverer approach.... |
From: Anton v. S. <an...@ap...> - 2002-05-29 07:01:34
|
Would it be OK to make a change to allow Hibernate to optionally autoclose externally provided connections? I guess it could be done with this in RelationalDatabaseSessionFactory: public Session openSession(java.sql.Connection connection, boolean autoClose) { return openSession(connection, autoClose, 0); //prevents this session from adding things to cache } ...which of course would have to be added to the Session interface. The Resin connections I'm using are wrapped, so if Hibernate were to call close() on them, it ought to simply return them to the Resin pool. I haven't yet tried this out - maybe I'll do that tomorrow. In a related and even more minor request, would it be OK to change the dependence on C3P0 to a dynamic one, so that the JAR isn't needed if you're not using it? I got the impression this would only require a change in one location - I don't remember the details now, but again, I can take a stab at this tomorrow. Anton |
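How the proposed overload would look in use with a container-managed DataSource; note that openSession(Connection, boolean) is only being proposed in this message, and the JNDI name of the pool is a placeholder:

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import cirrus.hibernate.Session;
    import cirrus.hibernate.SessionFactory;

    public class AutoCloseSketch {
        static void work(SessionFactory factory) throws Exception {
            DataSource pool = (DataSource)
                new InitialContext().lookup("java:comp/env/jdbc/main"); // placeholder pool name
            Connection wrapped = pool.getConnection(); // Resin hands back a pooled wrapper

            // Proposed overload: autoClose=true lets Hibernate call close() on the
            // wrapper when it is done, returning it to the container's pool.
            Session session = factory.openSession(wrapped, true);
            try {
                // ... persistence work ...
            } finally {
                session.close();
            }
        }
    }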
From: Anton v. S. <an...@ap...> - 2002-05-29 06:45:09
|
> to force associated objects to be saved/deleted/updated along with the > parent object. This should save a lot of application code. Sounds great! > (b) would it be useful to support cascade="save", > cascade="delete", cascade > ="update" along with cascade="all", cascade="none"? > > I can't really see any great use to the extra stuff in (b). I can't think of *any* uses for that, but perhaps that's just my limited imagination. If you don't support the options, perhaps use cascade="true" or "false"? > In other news, I decided to deprecate Session.create(). Does anyone have > any objections to this? I ran across something that might have some relevance here: I implemented PersistentLifecycle in all of my classes, primarily to save a reference to the session in each object, since I find passing the session around between objects too intrusive. When you create an object using an ordinary constructor, though, it obviously can't automatically get a reference to the session, which leads to problems if the methods in that class expect to be able to access the session. So some mechanism is needed to set the session for objects created directly by constructor: passing the session to the constructor, or using a setSession() method, perhaps. I assume calling save() early on isn't practical, before initialization of the object is complete. I just mention this because I assume the issue wouldn't arise with Session.create(). I haven't done enough actual application coding yet to know how much of an issue this will be in practice, though, and I haven't yet chosen a solution. Having the constructors accept a session may be OK. Please let me know if I'm missing a better way. Anton |
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-29 02:49:14
|
I have finished the lifecycle objects feature (finally!) which I suspect a lot of people will be very happy about. I'm looking for a bit of input on details: Basically, at the moment, you can do things like:
<many-to-one name="child" cascade="all"/>
<set role="children" cascade="all">
 <key column="parent_id"/>
 <one-to-many class="ChildClass"/>
</set>
to force associated objects to be saved/deleted/updated along with the parent object. This should save a lot of application code. I was wondering:
(a) is 'dependant' better than 'cascade'?
(b) would it be useful to support cascade="save", cascade="delete", cascade="update" along with cascade="all", cascade="none"?
I can't really see any great use to the extra stuff in (b). In other news, I decided to deprecate Session.create(). Does anyone have any objections to this? I am now working on documentation / tests / examples .... no new features for now. Gavin. |
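What the cascade="all" mapping buys in application code, assuming a Parent/ChildClass pair mapped as in the fragment above; the class shapes here are invented for illustration:

    import java.util.HashSet;
    import java.util.Set;
    import cirrus.hibernate.Session;

    public class CascadeSketch {
        static void createFamily(Session session) throws Exception {
            Parent parent = new Parent();
            parent.getChildren().add(new ChildClass());
            parent.getChildren().add(new ChildClass());

            // With cascade="all" on the set, this single call also saves both
            // children; without it, each child needs its own session.save().
            session.save(parent);
        }

        static class Parent {
            private Set children = new HashSet();
            Set getChildren() { return children; }
        }
        static class ChildClass {}
    }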
From: Anton v. S. <an...@ap...> - 2002-05-28 21:48:11
|
I've checked in the transaction changes previously discussed. The major changes are that the hibernate.transaction_strategy property is now used to select an appropriate TransactionFactory; and that beginTransaction was added to the Session interface. A summary of the code changes: cirrus.hibernate: * Hibernate.java - removed buildTransactionFactory (no longer needed, afaict) * Session.java - added beginTransaction to Session interface * RelationalDatabaseSession.java - added beginTransaction method * RelationalDatabaseSessionFactory.java - added transactionFactory variable and private buildTransactionFactory method. cirrus.hibernate.transaction: TransactionFactoryImpl changed to JDBCTransactionFactory ...plus necessary refactoring. I have yet to do any significant Javadocs, but I'll get to that shortly. Anton |
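Typical use of the API described in this change set; beginTransaction() is the new Session method, while commit() and rollback() on the returned Transaction are assumed, as is the cirrus.hibernate location of the Transaction interface:

    import cirrus.hibernate.Session;
    import cirrus.hibernate.SessionFactory;
    import cirrus.hibernate.Transaction;

    public class TransactionSketch {
        static void doWork(SessionFactory factory) throws Exception {
            Session session = factory.openSession();
            // The concrete implementation behind this call is picked by the
            // hibernate.transaction_strategy property (e.g. JDBCTransactionFactory).
            Transaction tx = session.beginTransaction();
            try {
                // ... save/update/delete work ...
                tx.commit();    // assumed Transaction method
            } catch (Exception e) {
                tx.rollback();  // assumed Transaction method
                throw e;
            } finally {
                session.close();
            }
        }
    }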
From: Gavin_King/Cirrus%<CI...@ci...> - 2002-05-28 03:15:16
|
Just added support for queries like:
from foo in class Foo where foo.barSet.size = 1
I would have done this ages ago if I had realized how easy it would be...... Also I removed the requirement for the brackets in
from foo in class Foo where exists(foo.barSet.elements) and 1.0 > all(foo.barSet.elements)
so now you can just write
from foo in class Foo where exists foo.barSet.elements and 1.0 > all foo.barSet.elements
which is arguably neater. :) |
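The new constructs issued from application code; the query strings are copied from the message, only the surrounding find() calls are added:

    import cirrus.hibernate.Session;

    public class CollectionQuerySketch {
        static void run(Session s) throws Exception {
            // Collection size in the where clause:
            s.find("from foo in class Foo where foo.barSet.size = 1");

            // exists / all over collection elements, now legal without brackets:
            s.find("from foo in class Foo where exists foo.barSet.elements"
                + " and 1.0 > all foo.barSet.elements");
        }
    }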