From: Dan C. <da...@ch...> - 2006-06-07 22:20:57
Wondering if anybody else would like to get rid of XORM's dependency on JDOM. I'd like to just use javax.xml.parsers.DocumentBuilder and org.w3c.dom stuff. I volunteer to do it.

)_( Dan
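For reference, the JDK's built-in JAXP API can cover the same ground as JDOM for reading a mapping file. A minimal sketch, assuming a made-up `<mapping>`/`<table>` document shape for illustration (not XORM's actual schema):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class JaxpSketch {
    // Collect the name attribute of every <table> element in the document.
    // The element/attribute names are illustrative, not XORM's real schema.
    static List<String> tableNames(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        List<String> names = new ArrayList<String>();
        NodeList tables = doc.getElementsByTagName("table");
        for (int i = 0; i < tables.getLength(); i++) {
            names.add(((Element) tables.item(i)).getAttribute("name"));
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<mapping><table name=\"person\"/><table name=\"address\"/></mapping>";
        System.out.println(tableNames(xml)); // [person, address]
    }
}
```

Everything here ships with the JDK, so the JDOM jar could be dropped without adding a new dependency.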
From: Harry E. <ha...@gm...> - 2006-02-09 22:18:58
Not super hip on JDK 1.5, in general (God, I'm becoming a curmudgeon), but other than that, this sounds like the right way to go. I think the switch to JDK 1.5 is right as well, but I don't have to like it. +1

On 2/9/06, Dan Checkoway <da...@ch...> wrote:
> +1
>
> )_( Dan
>
> ----- Original Message -----
> From: "Wes Biggs" <we...@ca...>
> To: <xor...@li...>
> Sent: Thursday, February 09, 2006 8:11 AM
> Subject: [Xorm-devel] XORM major revisions
>
> > I'm thinking of switching XORM future development to have the following
> > dependencies:
> >
> > * Add requirement for JDK 1.5 and clean up code in ways this allows
> > * Add requirement for JDO 2.0, now that there is an API JAR available.
> > Most of the 2.0 methods would be initially non-functional, but some key
> > ones, like PersistenceManager.newInstance(), would replace XORM
> > proprietary methods where applicable.
> > * Include the POJQ project for compilable queries. This would replace the
> > experimental CodeQuery framework in XORM today, and remove the need for
> > BCEL.
> >
> > These changes should all be backward compatible for users, provided that
> > they recompile against the JDO2 jar (which alters the argument types and
> > return types of some methods).
> >
> > I don't see XORM being a particularly active project, or hoping to surpass
> > much larger open source projects like JPOX (the JDO 2.0 reference
> > implementation) in terms of overall implementation of the JDO spec, but
> > this allows us to align its usage with the features that have been
> > standardized in JDO 2 and maintain focus on implementing those features
> > which we feel are most important and applicable to our projects, or those
> > that we're interested in looking at from an experimental point of view.
> >
> > Thoughts?
> >
> > Wes
>
> _______________________________________________
> Xorm-devel mailing list
> Xor...@li...
> https://lists.sourceforge.net/lists/listinfo/xorm-devel
From: Dan C. <da...@ch...> - 2006-02-09 16:14:28
+1

)_( Dan

----- Original Message -----
From: "Wes Biggs" <we...@ca...>
To: <xor...@li...>
Sent: Thursday, February 09, 2006 8:11 AM
Subject: [Xorm-devel] XORM major revisions

> I'm thinking of switching XORM future development to have the following
> dependencies:
>
> * Add requirement for JDK 1.5 and clean up code in ways this allows
> * Add requirement for JDO 2.0, now that there is an API JAR available.
> Most of the 2.0 methods would be initially non-functional, but some key
> ones, like PersistenceManager.newInstance(), would replace XORM
> proprietary methods where applicable.
> * Include the POJQ project for compilable queries. This would replace the
> experimental CodeQuery framework in XORM today, and remove the need for
> BCEL.
>
> These changes should all be backward compatible for users, provided that
> they recompile against the JDO2 jar (which alters the argument types and
> return types of some methods).
>
> I don't see XORM being a particularly active project, or hoping to surpass
> much larger open source projects like JPOX (the JDO 2.0 reference
> implementation) in terms of overall implementation of the JDO spec, but
> this allows us to align its usage with the features that have been
> standardized in JDO 2 and maintain focus on implementing those features
> which we feel are most important and applicable to our projects, or those
> that we're interested in looking at from an experimental point of view.
>
> Thoughts?
>
> Wes
From: Wes B. <we...@ca...> - 2006-02-09 16:11:36
I'm thinking of switching XORM future development to have the following dependencies:

* Add requirement for JDK 1.5 and clean up code in ways this allows
* Add requirement for JDO 2.0, now that there is an API JAR available. Most of the 2.0 methods would be initially non-functional, but some key ones, like PersistenceManager.newInstance(), would replace XORM proprietary methods where applicable.
* Include the POJQ project for compilable queries. This would replace the experimental CodeQuery framework in XORM today, and remove the need for BCEL.

These changes should all be backward compatible for users, provided that they recompile against the JDO2 jar (which alters the argument types and return types of some methods).

I don't see XORM being a particularly active project, or hoping to surpass much larger open source projects like JPOX (the JDO 2.0 reference implementation) in terms of overall implementation of the JDO spec, but this allows us to align its usage with the features that have been standardized in JDO 2 and maintain focus on implementing those features which we feel are most important and applicable to our projects, or those that we're interested in looking at from an experimental point of view.

Thoughts?

Wes
From: Wes B. <we...@ca...> - 2005-10-10 09:55:08
Harry Evans wrote:

> The JCache version gives you 1 big cache, with objects being copied
> around as needed.

That's sort of annoying, especially if you have applications that use partially (but not fully) overlapping sets of tables. I suppose it's an implementation detail on the cache layer, but I agree it doesn't sound that flexible for all kinds of distributed JDO systems.

> I could really use some input on the approach to take, as I can see
> the value in each, and not enough discriminating factors to choose
> either one.

Thanks for doing the analysis. I think that an eviction-notice-only approach would be a good lightweight starting point. XORM's all about doing things differently, anyway.

Wes
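The eviction-notice-only layer can be prototyped on top of a plain LRU map: local puts and removes fire a listener that a transport (JGroups, raw multicast) would broadcast, while notices received from peers evict silently so they don't echo back. This is only a sketch under those assumptions; the class and method names are illustrative, not XORM's actual LRUCache/DataCache API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of an eviction-notice-only distributed cache layer.
 * Names here are hypothetical, not XORM's real API.
 */
public class NotifyingLruCache<K, V> {
    /** Transport hook; a real implementation would broadcast over the wire. */
    public interface EvictionNotifier<K> { void notifyPeers(K key); }

    private final Map<K, V> map;
    private final EvictionNotifier<K> notifier;

    public NotifyingLruCache(final int capacity, EvictionNotifier<K> notifier) {
        this.notifier = notifier;
        // An access-ordered LinkedHashMap gives LRU eviction for free.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public V get(K key) { return map.get(key); }

    // A local change: update the cache and tell peers to drop their copy,
    // so their next read misses and goes back to the DB.
    public void put(K key, V value) {
        map.put(key, value);
        notifier.notifyPeers(key);
    }

    // A notice from a peer: evict locally but do NOT re-broadcast,
    // otherwise two caches would ping-pong notices forever.
    public void evictFromPeerNotice(K key) {
        map.remove(key);
    }
}
```

The asymmetry between `put` and `evictFromPeerNotice` is the whole design: only locally originated changes generate network traffic, which is what makes this lighter than a JCache-style copy-on-miss cluster.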
From: Harry E. <ha...@gm...> - 2005-10-10 01:20:55
I took a look at JCache. It seems to do object coordination in the cache. In other words, it actually moves objects around from cache instance to cache instance, based on demand. The version I was looking at doing for Xorm would be coordinated eviction only.

The differences:

1. JCache gives a distributed hashtable view of the cache. If one VM's cache doesn't have a particular key, but another one does, it will copy the object over and give it to the requester. JCache also gives you a strategy to load an object from a datasource if the requested key is not currently cached (think Xorm lookups by primary key for non-cached objects).

2. The approach I was taking was eviction based. If an object is changed in VM 1, it would disappear from VM 2's cache, such that a request from VM 2 would cause a miss, and it would go back to the DB directly.

The JCache version gives you 1 big cache, with objects being copied around as needed. The approach I was taking doesn't pass around objects, only local cache eviction notices. The only big worry I have is the amount of data being shipped around by JCache for all those miss / copy commands. The equivalent in my approach is all the eviction notices being shipped around.

I am open to either approach. I can see a JCache-like approach having the benefit that it is only steps away from also being able to hold locks on the same objects. However, I doubt the current implementation would be malleable enough to do that with on its own. My version doesn't get you any closer to locking, but "might" be more network friendly.

I could really use some input on the approach to take, as I can see the value in each, and not enough discriminating factors to choose either one.

Harry

On 10/5/05, Wes Biggs <we...@ca...> wrote:
> Yes, collection relationships are always thrown away currently. That's
> because XORM is not smart about bidirectional relationships. So to add
> distributed caching on top of current functionality, they don't need
> to be propagated.
>
> Wes
>
> Harry Evans wrote:
>
> > NO! to caching persistence capable objects directly. That is too big
> > of a change for me to even contemplate right now.
> >
> > I will take a look at JCache (it seems like an awful lot of folks are
> > doing cool things with JavaGroups. Where have I been?). I think a
> > map based api would most likely be pretty easy to integrate into LRU
> > or some derivative, given that it is implemented using a map.
> >
> > On the Collections front, when I scanned through the code, I think I
> > found all the places in InterfaceInvocationHandler, InterfaceManager,
> > and TransactionImpl where xorm was interacting with the DataCache. I
> > saw where row structures were being put in, updated and taken out of
> > the cache that backed the InterfaceInvocationHandlers (TransactionImpl
> > lines 247-282), but the same areas of code also update Collection
> > Relationship proxies, and don't seem to make any cache calls
> > (TransactionImpl lines 225-240).
> >
> > Even if it did, they probably wouldn't be in cache, given the fact
> > that even the DataCache interface specifies that anything without a
> > primary key that you call add() with is, by contract, thrown away.
> > However, it might be worth making the calls for Collection mods, even
> > if they don't end up getting stored, to give the cache layer a chance
> > to deal with propagating the changes. I just need to get a little bit
> > better understanding of where this info is being held onto (if at
> > all), if it isn't the DataCache layer. (Maybe collections are always
> > thrown away with the InterfaceInvocationHandler, and then requeried?)
> >
> > Harry
> >
> > On 10/5/05, Wes Biggs <we...@ca...> wrote:
> >
> >> Changes to collections are always accomplished by changes to the data
> >> model. That being said, I forget if XORM is keeping many-to-many table
> >> Rows in cache currently -- it would need to for this scheme to work,
> >> unless you want to change the whole thing and cache the
> >> persistence-capable objects directly (I don't think this is a good idea).
> >>
> >> You might look at JCache.sourceforge.net which uses JavaGroups. This is
> >> the approach that some of the commercial JDO vendors have taken --
> >> integrate a third party distributed cache management solution. The
> >> JCache API (it's a JSR) is basically a Map, and the implementation
> >> supposedly takes care of all the notification and sync behind the
> >> scenes. But that might be harder to integrate with the existing
> >> LRUCache, etc.
> >>
> >> Wes
> >>
> >> Harry Evans wrote:
> >>
> >>> Having spoken with Doug, I am looking at using JGroups (or
> >>> JavaGroups, gotta love that Sun trademark enforcement) as the
> >>> mechanism for transporting the notifications.
> >>>
> >>> Thoughts:
> >>> 1. Modify LRUCache to optionally notify a listener when a Row is
> >>> added, updated, or removed.
> >>> 2. Subclass LRUCache to use (most likely) a JGroups NotificationBus
> >>> for cache events.
> >>> 3. Set up JGroups to manage this notification, and provide properties
> >>> the subclass can read to allow for enhanced settings (UDP versus TCP
> >>> notifications, etc).
> >>> 4. I walked through some xorm code, and it doesn't appear that
> >>> relationship proxy information is ever directly conveyed to the cache
> >>> level, which means that there is no way, at that level, to invalidate
> >>> the Collection case I listed previously. I could really use some
> >>> insight into an approach for this area (*cough* Wes *cough*) as this
> >>> seems like a kinda important part. I might have this wrong, but it
> >>> appears xorm manages all collection level information (12M and M2M) at
> >>> the ObjectProxy level, and not at all in the Row DataCache level. Is
> >>> that correct?
> >>> 5. I am unclear whether InterfaceInvocationHandlers need to have
> >>> their Row cleared in this scheme, or if they generally pass out of
> >>> scope frequently enough that the Row is the main thing to focus on.
> >>> This has to do with the concern that they hold a direct hard reference
> >>> to a Row object, so throwing that out of cache isn't going to
> >>> accomplish much if they are still pointing to it.
> >>>
> >>> I am really interested in the JGroups DistributedLockManager class,
> >>> but it would have to be combined / enhanced with cache management to
> >>> use it, and it seems like there should be an option, regardless, to
> >>> only do cache management without the overhead of distributed locks.
> >>> The JBossCache stuff also might be more appropriate in the long term,
> >>> as it uses JGroups stuff, but seems like it can handle both locks and
> >>> caching. I will have to do more research.
> >>>
> >>> Input very much appreciated.
> >>>
> >>> Harry
> >>>
> >>> On 10/4/05, *Harry Evans* <ha...@gm... <mailto:ha...@gm...>> wrote:
> >>>
> >>> Optimistic Distributed Cache Management
> >>>
> >>> This is a proposal for a simple distributed cache invalidation
> >>> strategy for xorm. It might be implementable at only the cache
> >>> manager layer, or might require integration at a higher layer. It
> >>> does not do coordinated lock management, but would do reasonable cache
> >>> invalidation in the cases where the same entity was not modified in
> >>> multiple instances in rapid succession. This is seen as the first
> >>> phase in a more pessimistic strategy that would do actual lock
> >>> management.
> >>>
> >>> It uses the multicast transmission facilities in Java to set up a peer
> >>> to peer local network on which cache invalidation messages are sent to
> >>> other servers using the same datastore. Detected events are broadcast
> >>> to other servers, allowing them to either hollow the referenced object
> >>> or remove them completely from cache. Below are the major pieces of
> >>> the implementation as I see them.
> >>>
> >>> I am looking at using the JRMS (Java Reliable Multicast Service) jar
> >>> (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to
> >>> reasonably guarantee the receipt of packets to each server
> >>> (specifically the LRMP variety, with late-join catchup disabled).
> >>> Basic description of major areas below, followed by questions.
> >>>
> >>> I would really really like to find a way to do this at the cache
> >>> manager level, as it would be a simple properties change to enable it,
> >>> by using a version of the LRU cache that implemented the additional
> >>> functionality, but I am not sure this is possible. Feedback
> >>> appreciated.
> >>>
> >>> Harry
> >>>
> >>> Detection:
> >>> "Broadcast" means notification sent out over multicast
> >>> Objects that are deleted must be broadcast to be removed
> >>> Objects that have simple attributes or one way references updated must
> >>> be broadcast to be hollowed or removed
> >>> Changes to collections must be broadcast:
> >>> ObjParent has a collection that contains objects, including ObjChild
> >>> ObjChild has a direct reference to ObjParent
> >>> ObjChild has its reference to ObjParent removed (or reassigned to
> >>> ObjNewParent)
> >>> ObjChild must be broadcast to be hollowed or removed
> >>> ObjParent must either be broadcast to be hollowed or removed or must
> >>> be broadcast to have its collection set to unresolved
> >>> ObjNewParent must either be broadcast to be hollowed or removed or
> >>> must be broadcast to have its collection set to unresolved
> >>>
> >>> Transmission:
> >>> Initial transmission is seen as non-locking multicast packets
> >>> All cache instances are both transmitters and receivers of packets
> >>> (there is no central server)
> >>> When a cache instance detects a broadcast event:
> >>> It formulates a broadcast packet, containing the table name and
> >>> primary key of the Object
> >>> (Should this be class instead of table?)
> >>> It transmits the packet on the multicast address for the group
> >>> Each cache instance (except the sender) receives the packet
> >>> Each cache instance purges (or makes hollow?) the referenced object
> >>> (Does this need to distinguish between hollow vs removal for change
> >>> vs delete, respectively?)
> >>> (How are collections handled?)
> >>>
> >>> Packet format:
> >>> Packets are datagrams, possibly of fixed length:
> >>> Format: (Action (Remove vs hollow?) + ) Primary key + Hash of
> >>> (Class | table name?)
> >>> (Can we use fixed length for packet? ie can we reliably hash class
> >>> or table name?)
> >>> Packet is fixed length to minimize message size. Variable length is
> >>> possible, but seems bad.
> >>> If using a hash, each cache manager calculates hash value upon entry
> >>> of (class | table) into cache space. Unknown hashes are ignored (as
> >>> they aren't in cache).
> >>>
> >>> Questions:
> >>> How do we detect the appropriate broadcast events at the cache
> >>> manager level?
> >>> Can multiple VMs on the same host bind to the same multicast group
> >>> address? If not, do we need some form of local relay? (Tested to be
> >>> okay on windows)
> >>> Do we use table name, class name, or something else for entity
> >>> representation?
> >>> Can we use a hash of the entity representation (class or table name)
> >>> to allow for fixed length packets?
> >>> Which hash formula do we use?
> >>> Is there too much multicast traffic on a network of 60 machines? (we
> >>> can use separate addresses for separate clusters)
> >>> Are networks (or switches) commonly configured to support local
> >>> multicast packets?
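The fixed-length packet layout sketched in the quoted proposal (action + primary key + hash of table name) is easy to prototype with java.nio. This is a sketch of one possible encoding, not a settled design: the action codes are invented, and the thread's open questions (remove vs. hollow, class vs. table name, which hash) are simply picked one way here. `String.hashCode()` is specified by the Java Language Specification, so every VM computes the same value for the same table name, which is what makes a hash-based fixed-length packet workable.

```java
import java.nio.ByteBuffer;

/**
 * One possible encoding of the eviction-notice datagram:
 * 1 action byte + 8-byte primary key + 4-byte hash of the table name.
 * Action codes and the use of String.hashCode() are assumptions for
 * illustration, not decisions made on the list.
 */
public class EvictionPacket {
    public static final byte HOLLOW = 0;
    public static final byte REMOVE = 1;
    public static final int LENGTH = 1 + 8 + 4; // fixed length, as proposed

    public static byte[] encode(byte action, long primaryKey, String tableName) {
        ByteBuffer buf = ByteBuffer.allocate(LENGTH);
        buf.put(action);
        buf.putLong(primaryKey);
        // String.hashCode() is defined by the JLS, so it is stable across VMs.
        buf.putInt(tableName.hashCode());
        return buf.array();
    }

    public static byte action(byte[] packet)       { return ByteBuffer.wrap(packet).get(0); }
    public static long primaryKey(byte[] packet)   { return ByteBuffer.wrap(packet).getLong(1); }
    public static int tableNameHash(byte[] packet) { return ByteBuffer.wrap(packet).getInt(9); }
}
```

A receiver would look the decoded hash up against the tables it has locally cached and ignore unknown hashes, matching the "unknown hashes are ignored" rule in the proposal.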
From: Wes B. <we...@ca...> - 2005-10-05 15:15:28
Yes, collection relationships are always thrown away currently. That's because XORM is not smart about bidirectional relationships. So to add distributed caching on top of current functionality, they don't need to be propagated.

Wes

Harry Evans wrote:

> NO! to caching persistence capable objects directly. That is too big
> of a change for me to even contemplate right now.
>
> I will take a look at JCache (it seems like an awful lot of folks are
> doing cool things with JavaGroups. Where have I been?). I think a
> map based api would most likely be pretty easy to integrate into LRU
> or some derivative, given that it is implemented using a map.
>
> On the Collections front, when I scanned through the code, I think I
> found all the places in InterfaceInvocationHandler, InterfaceManager,
> and TransactionImpl where xorm was interacting with the DataCache. I
> saw where row structures were being put in, updated and taken out of
> the cache that backed the InterfaceInvocationHandlers (TransactionImpl
> lines 247-282), but the same areas of code also update Collection
> Relationship proxies, and don't seem to make any cache calls
> (TransactionImpl lines 225-240).
>
> Even if it did, they probably wouldn't be in cache, given the fact
> that even the DataCache interface specifies that anything without a
> primary key that you call add() with is, by contract, thrown away.
> However, it might be worth making the calls for Collection mods, even
> if they don't end up getting stored, to give the cache layer a chance
> to deal with propagating the changes. I just need to get a little bit
> better understanding of where this info is being held onto (if at
> all), if it isn't the DataCache layer. (Maybe collections are always
> thrown away with the InterfaceInvocationHandler, and then requeried?)
>
> Harry
>
> On 10/5/05, Wes Biggs <we...@ca...> wrote:
>
> > Changes to collections are always accomplished by changes to the data
> > model. That being said, I forget if XORM is keeping many-to-many table
> > Rows in cache currently -- it would need to for this scheme to work,
> > unless you want to change the whole thing and cache the
> > persistence-capable objects directly (I don't think this is a good idea).
> >
> > You might look at JCache.sourceforge.net which uses JavaGroups. This is
> > the approach that some of the commercial JDO vendors have taken --
> > integrate a third party distributed cache management solution. The
> > JCache API (it's a JSR) is basically a Map, and the implementation
> > supposedly takes care of all the notification and sync behind the
> > scenes. But that might be harder to integrate with the existing
> > LRUCache, etc.
> >
> > Wes
> >
> > Harry Evans wrote:
> >
> >> Having spoken with Doug, I am looking at using JGroups (or
> >> JavaGroups, gotta love that Sun trademark enforcement) as the
> >> mechanism for transporting the notifications.
> >>
> >> Thoughts:
> >> 1. Modify LRUCache to optionally notify a listener when a Row is
> >> added, updated, or removed.
> >> 2. Subclass LRUCache to use (most likely) a JGroups NotificationBus
> >> for cache events.
> >> 3. Set up JGroups to manage this notification, and provide properties
> >> the subclass can read to allow for enhanced settings (UDP versus TCP
> >> notifications, etc).
> >> 4. I walked through some xorm code, and it doesn't appear that
> >> relationship proxy information is ever directly conveyed to the cache
> >> level, which means that there is no way, at that level, to invalidate
> >> the Collection case I listed previously. I could really use some
> >> insight into an approach for this area (*cough* Wes *cough*) as this
> >> seems like a kinda important part. I might have this wrong, but it
> >> appears xorm manages all collection level information (12M and M2M) at
> >> the ObjectProxy level, and not at all in the Row DataCache level. Is
> >> that correct?
> >> 5. I am unclear whether InterfaceInvocationHandlers need to have
> >> their Row cleared in this scheme, or if they generally pass out of
> >> scope frequently enough that the Row is the main thing to focus on.
> >> This has to do with the concern that they hold a direct hard reference
> >> to a Row object, so throwing that out of cache isn't going to
> >> accomplish much if they are still pointing to it.
> >>
> >> I am really interested in the JGroups DistributedLockManager class,
> >> but it would have to be combined / enhanced with cache management to
> >> use it, and it seems like there should be an option, regardless, to
> >> only do cache management without the overhead of distributed locks.
> >> The JBossCache stuff also might be more appropriate in the long term,
> >> as it uses JGroups stuff, but seems like it can handle both locks and
> >> caching. I will have to do more research.
> >>
> >> Input very much appreciated.
> >>
> >> Harry
> >>
> >> On 10/4/05, *Harry Evans* <ha...@gm... <mailto:ha...@gm...>> wrote:
> >>
> >> Optimistic Distributed Cache Management
> >>
> >> This is a proposal for a simple distributed cache invalidation
> >> strategy for xorm. It might be implementable at only the cache
> >> manager layer, or might require integration at a higher layer. It
> >> does not do coordinated lock management, but would do reasonable cache
> >> invalidation in the cases where the same entity was not modified in
> >> multiple instances in rapid succession. This is seen as the first
> >> phase in a more pessimistic strategy that would do actual lock
> >> management.
> >>
> >> It uses the multicast transmission facilities in Java to set up a peer
> >> to peer local network on which cache invalidation messages are sent to
> >> other servers using the same datastore. Detected events are broadcast
> >> to other servers, allowing them to either hollow the referenced object
> >> or remove them completely from cache. Below are the major pieces of
> >> the implementation as I see them.
> >>
> >> I am looking at using the JRMS (Java Reliable Multicast Service) jar
> >> (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to
> >> reasonably guarantee the receipt of packets to each server
> >> (specifically the LRMP variety, with late-join catchup disabled).
> >> Basic description of major areas below, followed by questions.
> >>
> >> I would really really like to find a way to do this at the cache
> >> manager level, as it would be a simple properties change to enable it,
> >> by using a version of the LRU cache that implemented the additional
> >> functionality, but I am not sure this is possible. Feedback
> >> appreciated.
> >>
> >> Harry
> >>
> >> Detection:
> >> "Broadcast" means notification sent out over multicast
> >> Objects that are deleted must be broadcast to be removed
> >> Objects that have simple attributes or one way references updated must
> >> be broadcast to be hollowed or removed
> >> Changes to collections must be broadcast:
> >> ObjParent has a collection that contains objects, including ObjChild
> >> ObjChild has a direct reference to ObjParent
> >> ObjChild has its reference to ObjParent removed (or reassigned to
> >> ObjNewParent)
> >> ObjChild must be broadcast to be hollowed or removed
> >> ObjParent must either be broadcast to be hollowed or removed or must
> >> be broadcast to have its collection set to unresolved
> >> ObjNewParent must either be broadcast to be hollowed or removed or
> >> must be broadcast to have its collection set to unresolved
> >>
> >> Transmission:
> >> Initial transmission is seen as non-locking multicast packets
> >> All cache instances are both transmitters and receivers of packets
> >> (there is no central server)
> >> When a cache instance detects a broadcast event:
> >> It formulates a broadcast packet, containing the table name and
> >> primary key of the Object
> >> (Should this be class instead of table?)
> >> It transmits the packet on the multicast address for the group
> >> Each cache instance (except the sender) receives the packet
> >> Each cache instance purges (or makes hollow?) the referenced object
> >> (Does this need to distinguish between hollow vs removal for change
> >> vs delete, respectively?)
> >> (How are collections handled?)
> >>
> >> Packet format:
> >> Packets are datagrams, possibly of fixed length:
> >> Format: (Action (Remove vs hollow?) + ) Primary key + Hash of
> >> (Class | table name?)
> >> (Can we use fixed length for packet? ie can we reliably hash class
> >> or table name?)
> >> Packet is fixed length to minimize message size. Variable length is
> >> possible, but seems bad.
> >> If using a hash, each cache manager calculates hash value upon entry
> >> of (class | table) into cache space. Unknown hashes are ignored (as
> >> they aren't in cache).
> >>
> >> Questions:
> >> How do we detect the appropriate broadcast events at the cache
> >> manager level?
> >> Can multiple VMs on the same host bind to the same multicast group
> >> address? If not, do we need some form of local relay? (Tested to be
> >> okay on windows)
> >> Do we use table name, class name, or something else for entity
> >> representation?
> >> Can we use a hash of the entity representation (class or table name)
> >> to allow for fixed length packets?
> >> Which hash formula do we use?
> >> Is there too much multicast traffic on a network of 60 machines? (we
> >> can use separate addresses for separate clusters)
> >> Are networks (or switches) commonly configured to support local
> >> multicast packets?
From: Harry E. <ha...@gm...> - 2005-10-05 14:26:08
|
NO! to caching persistence capable objects directly. That is too big of a change for me to even contemplate right now. I will take a look at JCache (it seems like an awful lot of folks are doing cool things with JavaGroups. Where have I been?). I think a map based api would most likely be pretty easy to integrate into LRU or some derivative, given that it is implemented using a map. On the Collections front, when I scanned through the code, I think I found all the places in InterfaceInvocationHandler, InterfaceManager, and TransactionImpl where xorm was interacting with the DataCache. I saw where row structures were being put in, updated and taken out of the cache that backed the InterfaceInvocationHandlers (TransactionImpl lines 247-282), but the same areas of code also update Collection Relationship proxies, and don't seem to make any cache calls (TransactionImpl lines 225-240). Even if it did, they probably wouldn't be in cache, given the fact that even the DataCache interface specifies that anything without a primary key that you call add() with is, by contract, thrown away.=20 However, it might be worth making the calls for Collection mods, even if they don't end up getting stored, to give the cache layer a chance to deal with propagating the changes. I just need to get a little bit better understanding of where this info is being held onto in (if at all), if it isn't the DataCache layer. (Maybe collections are always thrown away with the InterfaceInvocationHandler, and then requeried?) Harry On 10/5/05, Wes Biggs <we...@ca...> wrote: > Changes to collections are always accomplished by changes to the data > model. That being said, I forget if XORM is keeping many-to-many table > Rows in cache currently -- it would need to for this scheme to work, > unless you want to change the whole thing and cache the > persistence-capable objects directly (I don't think this is a good idea). > > You might look at JCache.sourceforge.net which uses JavaGroups. 
This is > the approach that some of the commercial JDO vendors have taken -- > integrate a third party distributed cache management solution. The > JCache API (it's a JSR) is basically a Map, and the implementation > supposedly takes care of all the notification and sync behind the > scenes. But that might be harder to integrate with the existing > LRUCache, etc. > > Wes > > Harry Evans wrote: > > > Having spoken with Doug, I am looking at using the JGroups (or > > JavaGroups, gotta love that Sun trademark enforcement) as the > > mechanism for transporting the notifications. > > > > Thoughts: > > 1. Modify LRUCache to optionally notify a listener when a Row is > > added, updated, or removed. > > 2. Subclass LRUCache to use (most likely) a JGroups NotificationBus > > for cache events. > > 3. Setup JGroups to manage this notification, and provide properties > > the subclass can read to allow for enhanced settings (UDP versus TCP > > notifications, etc). > > 4. I walked through some xorm code, and it doesn't appear that > > relationship proxy information is ever directly conveyed to the cache > > level, which means that there is no way, at that level, to invalidate > > the Collection case I listed previously. I could really use some > > insight into an approach for this area (*cough* Wes *cough*) as this > > seems like a kinda important part. I might have this wrong, but it > > appears xorm manages all collection level information (12M and M2M) at > > the ObjectProxy level, and not at all in the Row DataCache level. Is > > that correct? > > 5. I am unclear whether InterfaceInvocationHandlers need to have > > their Row cleared in this scheme, or if they generally pass out of > > scope frequently enough that the Row is the main thing to focus on. > > This has to do with the concern that they hold a direct hard reference > > to a Row object, so throwing that out of cache isn't going to > > accomplish much if they are still pointing to it. 
> > > > I am really interested in the JGroups DistributedLockManager class, > > but it would have to be combined / enhanced with cache management to > > use it, and it seems like there should be an option, regardless, to > > only do cache management without the overhead of distributed locks. > > The JBossCache stuff also might be more appropriate in the long term, > > as it uses JGroups stuff, but seems like it can handle both locks and > > caching. I will have to do more research. > > > > Input very much appreciated. > > > > Harry > > > > On 10/4/05, *Harry Evans* <ha...@gm... <mailto:ha...@gm...>> > > wrote: > > > > Optimistic Distributed Cache Management > > > > This is a proposal for a simple distributed cache invalidation > > strategy for xorm. It might be implementable at only the cache > > manager layer, or might require integration at a higher layer. It > > does not do coordinated lock management, but would do reasonable cache > > invalidation in the cases where the same entity was not modified in > > multiple instances in rapid succession. This is seen as the first > > phase in a more pessimistic strategy that would do actual lock > > management. > > > > It uses the multicast transmission facilities in Java to set up a peer > > to peer local network on which cache invalidation messages are sent to > > other servers using the same datastore. Detected events are > > broadcast > > to other servers, allowing them to either hollow the referenced object > > or remove them completely from cache. Below are the major pieces of > > the implementation as I see them. > > > > I am looking at using JRMS (Java Reliable Multicast Service) jar > > (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to > > reasonably guarantee the receipt of packets to each server > > (specifically the LRMP variety, with late-join catchup disabled). > > Basic description of major areas below, followed by questions.
> > > > I would really really like to find a way to do this at the cache > > manager level, as it would be a simple properties change to enable it, > > by using a version of the LRU cache that implemented the additional > > functionality, but I am not sure this is possible. Feedback > > appreciated. > > > > Harry > > > > Detection: > > "Broadcast" means notification sent out over multicast > > Objects that are deleted must be broadcast to be removed > > Objects that have simple attributes or one-way references updated must > > be broadcast to be hollowed or removed > > Changes to collections must be broadcast: > > ObjParent has a collection that contains objects, including > > ObjChild > > ObjChild has a direct reference to ObjParent > > ObjChild has its reference to ObjParent removed (or reassigned to > > ObjNewParent) > > ObjChild must be broadcast to be hollowed or removed > > ObjParent must either be broadcast to be hollowed or removed or > > must > > be broadcast to have its collection set to unresolved > > ObjNewParent must either be broadcast to be hollowed or removed or > > must be broadcast to have its collection set to unresolved > > > > Transmission: > > Initial transmission is seen as non-locking multicast packets > > All cache instances are both transmitters and receivers of packets > > (there is no central server) > > When a cache instance detects a broadcast event: > > It formulates a broadcast packet, containing the table name and > > primary key of the Object > > (Should this be class instead of table?) > > It transmits the packet on the multicast address for the group > > Each cache instance (except the sender) receives the packet > > Each cache instance purges (or makes hollow?) the referenced object > > (Does this need to distinguish between hollow vs removal for change > > vs delete, respectively?) > > (How are collections handled?)
> > > > Packet format: > > Packets are datagrams, possibly of fixed length: > > Format: (Action ( Remove vs hollow?) + ) Primary key + Hash of > > (Class | table name?) > > (Can we use fixed length for packet? i.e. can we reliably hash class > > or table name?) > > Packet is fixed length to minimize message size. Variable length is > > possible, but seems bad. > > If using a hash, each cache manager calculates hash value upon entry > > of (class | table) into cache space. Unknown hashes are ignored (as > > they aren't in cache). > > > > Questions: > > How do we detect the appropriate broadcast events at the cache > > manager level? > > Can multiple VMs on the same host bind to the same multicast group > > address? If not, do we need some form of local relay? (Tested to be > > okay on windows) > > Do we use table name, class name, or something else for entity > > representation? > > Can we use a hash of the entity representation (class or table name) > > to allow for fixed length packets? > > Which hash formula do we use? > > Is there too much multicast traffic on a network of 60 machines? (we > > can use separate addresses for separate clusters) > > Are networks (or switches) commonly configured to support local > > multicast packets? > > > > > > > _______________________________________________ > Xorm-devel mailing list > Xor...@li... > https://lists.sourceforge.net/lists/listinfo/xorm-devel > |
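Harry's "Thought 1" above — modifying LRUCache to optionally notify a listener when a Row is added, updated, or removed — could be sketched roughly as below. This is a hypothetical illustration: `CacheListener` and `NotifyingLRUCache` are invented names, not XORM's actual classes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Listener contract for cache mutation events (illustrative). */
interface CacheListener<K, V> {
    void added(K key, V value);
    void updated(K key, V value);
    void removed(K key);
}

/** An LRU map that optionally notifies a listener on add/update/remove. */
class NotifyingLRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    private CacheListener<K, V> listener;  // may be null (no notification)

    NotifyingLRUCache(int maxEntries) {
        super(16, 0.75f, true);  // access-order gives LRU semantics
        this.maxEntries = maxEntries;
    }

    void setListener(CacheListener<K, V> l) { listener = l; }

    @Override
    public V put(K key, V value) {
        V old = super.put(key, value);
        if (listener != null) {
            if (old == null) listener.added(key, value);
            else listener.updated(key, value);
        }
        return old;
    }

    @Override
    @SuppressWarnings("unchecked")
    public V remove(Object key) {
        V old = super.remove(key);
        if (old != null && listener != null) listener.removed((K) key);
        return old;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // LRU eviction is local housekeeping: it bypasses our remove()
        // and so is deliberately not broadcast -- an eviction does not
        // invalidate other nodes' copies.
        return size() > maxEntries;
    }
}
```

The distributed subclass of Thought 2 could then implement `CacheListener` and push the events onto whatever transport (NotificationBus, raw multicast) is configured.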
From: Wes B. <we...@ca...> - 2005-10-05 13:59:31
|
Changes to collections are always accomplished by changes to the data model. That being said, I forget if XORM is keeping many-to-many table Rows in cache currently -- it would need to for this scheme to work, unless you want to change the whole thing and cache the persistence-capable objects directly (I don't think this is a good idea). You might look at JCache.sourceforge.net which uses JavaGroups. This is the approach that some of the commercial JDO vendors have taken -- integrate a third party distributed cache management solution. The JCache API (it's a JSR) is basically a Map, and the implementation supposedly takes care of all the notification and sync behind the scenes. But that might be harder to integrate with the existing LRUCache, etc. Wes Harry Evans wrote: > Having spoken with Doug, I am looking at using the JGroups (or > JavaGroups, gotta love that Sun trademark enforcement) as the > mechanism for transporting the notifications. > > Thoughts: > 1. Modify LRUCache to optionally notify a listener when a Row is > added, updated, or removed. > 2. Subclass LRUCache to use (most likely) a JGroups NotificationBus > for cache events. > 3. Setup JGroups to manage this notification, and provide properties > the subclass can read to allow for enhanced settings (UDP versus TCP > notifications, etc). > 4. I walked through some xorm code, and it doesn't appear that > relationship proxy information is ever directly conveyed to the cache > level, which means that there is no way, at that level, to invalidate > the Collection case I listed previously. I could really use some > insight into an approach for this area (*cough* Wes *cough*) as this > seems like a kinda important part. I might have this wrong, but it > appears xorm manages all collection level information (12M and M2M) at > the ObjectProxy level, and not at all in the Row DataCache level. Is > that correct? > 5. 
I am unclear whether InterfaceInvocationHandlers need to have > their Row cleared in this scheme, or if they generally pass out of > scope frequently enough that the Row is the main thing to focus on. > This has to do with the concern that they hold a direct hard reference > to a Row object, so throwing that out of cache isn't going to > accomplish much if they are still pointing to it. > > I am really interested in the JGroups DistributedLockManager class, > but it would have to be combined / enhanced with cache management to > use it, and it seems like there should be an option, regardless, to > only do cache management without the overhead of distributed locks. > The JBossCache stuff also might be more appropriate in the long term, > as it uses JGroups stuff, but seems like it can handle both locks and > caching. I will have to do more research. > > Input very much appreciated. > > Harry > > On 10/4/05, *Harry Evans* <ha...@gm... <mailto:ha...@gm...>> > wrote: > > Optimistic Distributed Cache Management > > This is a proposal for a simple distributed cache invalidation > strategy for xorm. It might be implementable at only the cache > manager layer, or might require integration at a higher layer. It > does not do coordinated lock management, but would do reasonable cache > invalidation in the cases where the same entity was not modified in > multiple instances in rapid succession. This is seen as the first > phase in a more pessimistic strategy that would do actual lock > management. > > It uses the multicast transmission facilities in Java to set up a peer > to peer local network on which cache invalidation messages are sent to > other servers using the same datastore. Detected events are > broadcast > to other servers, allowing them to either hollow the referenced object > or remove them completely from cache. Below are the major pieces of > the implementation as I see them.
> > I am looking at using JRMS (Java Reliable Multicast Service) jar > (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to > reasonably guarantee the receipt of packets to each server > (specifically the LRMP variety, with late-join catchup disabled). > Basic description of major areas below, followed by questions. > > I would really really like to find a way to do this at the cache > manager level, as it would be a simple properties change to enable it, > by using a version of the LRU cache that implemented the additional > functionality, but I am not sure this is possible. Feedback > appreciated. > > Harry > > Detection: > "Broadcast" means notification sent out over multicast > Objects that are deleted must be broadcast to be removed > Objects that have simple attributes or one-way references updated must > be broadcast to be hollowed or removed > Changes to collections must be broadcast: > ObjParent has a collection that contains objects, including > ObjChild > ObjChild has a direct reference to ObjParent > ObjChild has its reference to ObjParent removed (or reassigned to > ObjNewParent) > ObjChild must be broadcast to be hollowed or removed > ObjParent must either be broadcast to be hollowed or removed or > must > be broadcast to have its collection set to unresolved > ObjNewParent must either be broadcast to be hollowed or removed or > must be broadcast to have its collection set to unresolved > > Transmission: > Initial transmission is seen as non-locking multicast packets > All cache instances are both transmitters and receivers of packets > (there is no central server) > When a cache instance detects a broadcast event: > It formulates a broadcast packet, containing the table name and > primary key of the Object > (Should this be class instead of table?) > It transmits the packet on the multicast address for the group > Each cache instance (except the sender) receives the packet > Each cache instance purges (or makes hollow?)
the referenced object > (Does this need to distinguish between hollow vs removal for change > vs delete, respectively?) > (How are collections handled?) > > Packet format: > Packets are datagrams, possibly of fixed length: > Format: (Action ( Remove vs hollow?) + ) Primary key + Hash of > (Class | table name?) > (Can we use fixed length for packet? i.e. can we reliably hash class > or table name?) > Packet is fixed length to minimize message size. Variable length is > possible, but seems bad. > If using a hash, each cache manager calculates hash value upon entry > of (class | table) into cache space. Unknown hashes are ignored (as > they aren't in cache). > > Questions: > How do we detect the appropriate broadcast events at the cache > manager level? > Can multiple VMs on the same host bind to the same multicast group > address? If not, do we need some form of local relay? (Tested to be > okay on windows) > Do we use table name, class name, or something else for entity > representation? > Can we use a hash of the entity representation (class or table name) > to allow for fixed length packets? > Which hash formula do we use? > Is there too much multicast traffic on a network of 60 machines? (we > can use separate addresses for separate clusters) > Are networks (or switches) commonly configured to support local > multicast packets? > > |
From: Harry E. <ha...@gm...> - 2005-10-05 10:58:20
|
Having spoken with Doug, I am looking at using the JGroups (or JavaGroups, gotta love that Sun trademark enforcement) as the mechanism for transporting the notifications. Thoughts: 1. Modify LRUCache to optionally notify a listener when a Row is added, updated, or removed. 2. Subclass LRUCache to use (most likely) a JGroups NotificationBus for cache events. 3. Set up JGroups to manage this notification, and provide properties the subclass can read to allow for enhanced settings (UDP versus TCP notifications, etc). 4. I walked through some xorm code, and it doesn't appear that relationship proxy information is ever directly conveyed to the cache level, which means that there is no way, at that level, to invalidate the Collection case I listed previously. I could really use some insight into an approach for this area (*cough* Wes *cough*) as this seems like a kinda important part. I might have this wrong, but it appears xorm manages all collection level information (12M and M2M) at the ObjectProxy level, and not at all in the Row DataCache level. Is that correct? 5. I am unclear whether InterfaceInvocationHandlers need to have their Row cleared in this scheme, or if they generally pass out of scope frequently enough that the Row is the main thing to focus on. This has to do with the concern that they hold a direct hard reference to a Row object, so throwing that out of cache isn't going to accomplish much if they are still pointing to it. I am really interested in the JGroups DistributedLockManager class, but it would have to be combined / enhanced with cache management to use it, and it seems like there should be an option, regardless, to only do cache management without the overhead of distributed locks. The JBossCache stuff also might be more appropriate in the long term, as it uses JGroups stuff, but seems like it can handle both locks and caching. I will have to do more research. Input very much appreciated.
Harry On 10/4/05, Harry Evans <ha...@gm...> wrote: > > Optimistic Distributed Cache Management > > This is a proposal for a simple distributed cache invalidation > strategy for xorm. It might be implementable at only the cache > manager layer, or might require integration at a higher layer. It > does not do coordinated lock management, but would do reasonable cache > invalidation in the cases where the same entity was not modified in > multiple instances in rapid succession. This is seen as the first > phase in a more pessimistic strategy that would do actual lock > management. > > It uses the multicast transmission facilities in Java to set up a peer > to peer local network on which cache invalidation messages are sent to > other servers using the same datastore. Detected events are broadcast > to other servers, allowing them to either hollow the referenced object > or remove them completely from cache. Below are the major pieces of > the implementation as I see them. > > I am looking at using JRMS (Java Reliable Multicast Service) jar > (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to > reasonably guarantee the receipt of packets to each server > (specifically the LRMP variety, with late-join catchup disabled). > Basic description of major areas below, followed by questions. > > I would really really like to find a way to do this at the cache > manager level, as it would be a simple properties change to enable it, > by using a version of the LRU cache that implemented the additional > functionality, but I am not sure this is possible. Feedback > appreciated.
> > Harry > > Detection: > "Broadcast" means notification sent out over multicast > Objects that are deleted must be broadcast to be removed > Objects that have simple attributes or one-way references updated must > be broadcast to be hollowed or removed > Changes to collections must be broadcast: > ObjParent has a collection that contains objects, including ObjChild > ObjChild has a direct reference to ObjParent > ObjChild has its reference to ObjParent removed (or reassigned to > ObjNewParent) > ObjChild must be broadcast to be hollowed or removed > ObjParent must either be broadcast to be hollowed or removed or must > be broadcast to have its collection set to unresolved > ObjNewParent must either be broadcast to be hollowed or removed or > must be broadcast to have its collection set to unresolved > > Transmission: > Initial transmission is seen as non-locking multicast packets > All cache instances are both transmitters and receivers of packets > (there is no central server) > When a cache instance detects a broadcast event: > It formulates a broadcast packet, containing the table name and > primary key of the Object > (Should this be class instead of table?) > It transmits the packet on the multicast address for the group > Each cache instance (except the sender) receives the packet > Each cache instance purges (or makes hollow?) the referenced object > (Does this need to distinguish between hollow vs removal for change > vs delete, respectively?) > (How are collections handled?) > > Packet format: > Packets are datagrams, possibly of fixed length: > Format: (Action ( Remove vs hollow?) + ) Primary key + Hash of > (Class | table name?) > (Can we use fixed length for packet? i.e. can we reliably hash class > or table name?) > Packet is fixed length to minimize message size. Variable length is > possible, but seems bad. > If using a hash, each cache manager calculates hash value upon entry > of (class | table) into cache space.
Unknown hashes are ignored (as > they aren't in cache). > > Questions: > How do we detect the appropriate broadcast events at the cache manager > level? > Can multiple VMs on the same host bind to the same multicast group > address? If not, do we need some form of local relay? (Tested to be > okay on windows) > Do we use table name, class name, or something else for entity > representation? > Can we use a hash of the entity representation (class or table name) > to allow for fixed length packets? > Which hash formula do we use? > Is there too much multicast traffic on a network of 60 machines? (we > can use separate addresses for separate clusters) > Are networks (or switches) commonly configured to support local > multicast packets? > |
From: Harry E. <ha...@gm...> - 2005-10-04 09:22:49
|
Optimistic Distributed Cache Management This is a proposal for a simple distributed cache invalidation strategy for xorm. It might be implementable at only the cache manager layer, or might require integration at a higher layer. It does not do coordinated lock management, but would do reasonable cache invalidation in the cases where the same entity was not modified in multiple instances in rapid succession. This is seen as the first phase in a more pessimistic strategy that would do actual lock management. It uses the multicast transmission facilities in Java to set up a peer to peer local network on which cache invalidation messages are sent to other servers using the same datastore. Detected events are broadcast to other servers, allowing them to either hollow the referenced object or remove them completely from cache. Below are the major pieces of the implementation as I see them. I am looking at using JRMS (Java Reliable Multicast Service) jar (http://www.experimentalstuff.com/Technologies/JRMS/index.html) to reasonably guarantee the receipt of packets to each server (specifically the LRMP variety, with late-join catchup disabled). Basic description of major areas below, followed by questions. I would really really like to find a way to do this at the cache manager level, as it would be a simple properties change to enable it, by using a version of the LRU cache that implemented the additional functionality, but I am not sure this is possible. Feedback appreciated.
Harry Detection: "Broadcast" means notification sent out over multicast Objects that are deleted must be broadcast to be removed Objects that have simple attributes or one-way references updated must be broadcast to be hollowed or removed Changes to collections must be broadcast: ObjParent has a collection that contains objects, including ObjChild ObjChild has a direct reference to ObjParent ObjChild has its reference to ObjParent removed (or reassigned to ObjNewParent) ObjChild must be broadcast to be hollowed or removed ObjParent must either be broadcast to be hollowed or removed or must be broadcast to have its collection set to unresolved ObjNewParent must either be broadcast to be hollowed or removed or must be broadcast to have its collection set to unresolved Transmission: Initial transmission is seen as non-locking multicast packets All cache instances are both transmitters and receivers of packets (there is no central server) When a cache instance detects a broadcast event: It formulates a broadcast packet, containing the table name and primary key of the Object (Should this be class instead of table?) It transmits the packet on the multicast address for the group Each cache instance (except the sender) receives the packet Each cache instance purges (or makes hollow?) the referenced object (Does this need to distinguish between hollow vs removal for change vs delete, respectively?) (How are collections handled?) Packet format: Packets are datagrams, possibly of fixed length: Format: (Action ( Remove vs hollow?) + ) Primary key + Hash of (Class | table name?) (Can we use fixed length for packet? i.e. can we reliably hash class or table name?) Packet is fixed length to minimize message size. Variable length is possible, but seems bad. If using a hash, each cache manager calculates hash value upon entry of (class | table) into cache space. Unknown hashes are ignored (as they aren't in cache).
Questions: How do we detect the appropriate broadcast events at the cache manager level? Can multiple VMs on the same host bind to the same multicast group address? If not, do we need some form of local relay? (Tested to be okay on windows) Do we use table name, class name, or something else for entity representation? Can we use a hash of the entity representation (class or table name) to allow for fixed length packets? Which hash formula do we use? Is there too much multicast traffic on a network of 60 machines? (we can use separate addresses for separate clusters) Are networks (or switches) commonly configured to support local multicast packets? |
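The fixed-length packet sketched in the proposal above (action + primary key + hash of the table name) could be encoded like this. The field widths, the `long` primary key, and the choice of CRC32 as the hash are assumptions for illustration, not a settled wire format:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Illustrative fixed-length cache-invalidation datagram payload:
 *  1 action byte + 8-byte primary key + 8-byte table-name hash. */
class InvalidationPacket {
    static final byte HOLLOW = 0, REMOVE = 1;
    static final int LENGTH = 1 + 8 + 8;  // always 17 bytes

    final byte action;
    final long primaryKey;
    final long tableHash;

    InvalidationPacket(byte action, long primaryKey, String tableName) {
        this(action, primaryKey, hash(tableName));
    }

    private InvalidationPacket(byte action, long primaryKey, long tableHash) {
        this.action = action;
        this.primaryKey = primaryKey;
        this.tableHash = tableHash;
    }

    /** CRC32 keeps the packet fixed-length regardless of table name
     *  length; receivers ignore hashes they haven't registered. */
    static long hash(String tableName) {
        CRC32 crc = new CRC32();
        crc.update(tableName.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    byte[] encode() {
        return ByteBuffer.allocate(LENGTH)
                .put(action).putLong(primaryKey).putLong(tableHash)
                .array();
    }

    static InvalidationPacket decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new InvalidationPacket(buf.get(), buf.getLong(), buf.getLong());
    }
}
```

Under this scheme each cache manager would compute the hash of every table it caches as entries arrive, and drop any received packet whose `tableHash` it hasn't registered, as the proposal suggests.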
From: Wes B. <we...@ca...> - 2004-07-08 18:26:29
|
Dimitri, Thanks for the reply. The support for regular JDO enhancement is one reason I think we should work closely with the reference implementation project, as it will provide this feature. I agree that this is important to drive further adoption of XORM. Wes Dimitri Ognibene wrote: >>That is, >>I will invest in making the RI an open, >>pluggable architecture that >>allows us to do some of the more interesting >>things that have evolved in >>XORM, such as mapping abstract/interface >>properties and query methods, >>writing code-based queries, and using runtime >>instead of compile-time >>class enhancement. The overall goal would be >>to provide a GPLed product >>built on top of the ASL core that is both JDO >>2.0 compliant (passes the >>TCK) and includes unique features. >> >>If you could respond with a vote on (a) whether >>you think this is a >>reasonable approach and (b) whether you would >>be willing to commit to >>helping with a JDO 2.0 RI, that would be great. >> >> >Hi Wes, >I vote yes on (b). >About (a), I think that XORM is only a starting >point, because it requires us to write wrapping >classes to properly map objects in many >situations. >And we must define when runtime enhancement is >more useful versus more efficient compile-time >class enhancement. >Interface enhancement, in XORM's view, is not the >enhancement of all classes which implement that >interface but the creation at runtime of a >class that implements that interface and has no >particular behaviour of its own. >I think we must still work on this. >However, I'm still interested in helping the >development of the RI. >Regards > Dimitri Ognibene > > > > |
From: <ogn...@ya...> - 2004-07-08 10:03:12
|
> That is, > I will invest in making the RI an open, > pluggable architecture that > allows us to do some of the more interesting > things that have evolved in > XORM, such as mapping abstract/interface > properties and query methods, > writing code-based queries, and using runtime > instead of compile-time > class enhancement. The overall goal would be > to provide a GPLed product > built on top of the ASL core that is both JDO > 2.0 compliant (passes the > TCK) and includes unique features. > > If you could respond with a vote on (a) whether > you think this is a > reasonable approach and (b) whether you would > be willing to commit to > helping with a JDO 2.0 RI, that would be great. Hi Wes, I vote yes on (b). About (a), I think that XORM is only a starting point, because it requires us to write wrapping classes to properly map objects in many situations. And we must define when runtime enhancement is more useful versus more efficient compile-time class enhancement. Interface enhancement, in XORM's view, is not the enhancement of all classes which implement that interface, but the creation at runtime of a class that implements that interface and has no particular behaviour of its own. I think we must still work on this. However, I'm still interested in helping the development of the RI. Regards Dimitri Ognibene |
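The runtime approach being debated here — generating, at runtime, a class that implements a mapped interface rather than enhancing user-written classes — is essentially what `java.lang.reflect.Proxy` provides. A minimal sketch follows; the `Person` interface and the map-backed property storage are illustrations, not XORM's actual `InterfaceInvocationHandler`:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

/** A persistence-capable interface as a user would declare it. */
interface Person {
    String getName();
    void setName(String name);
}

/** Backs getX()/setX() calls with a property map -- the way a runtime
 *  proxy can "implement" an interface with no user-written class. */
class PropertyMapHandler implements InvocationHandler {
    private final Map<String, Object> props = new HashMap<>();

    public Object invoke(Object proxy, Method m, Object[] args) {
        String name = m.getName();
        if (name.startsWith("get")) {
            return props.get(name.substring(3));   // getName -> "Name"
        } else if (name.startsWith("set")) {
            props.put(name.substring(3), args[0]); // setName -> "Name"
            return null;
        }
        throw new UnsupportedOperationException(name);
    }

    @SuppressWarnings("unchecked")
    static <T> T newInstance(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                new PropertyMapHandler());
    }
}
```

With this, `PropertyMapHandler.newInstance(Person.class)` yields a working `Person` without any concrete class ever being written or compiled — the trade-off against compile-time enhancement being per-call reflection overhead.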
From: Wes B. <we...@ca...> - 2004-07-06 07:06:53
|
XORM developers, JDO 2.0 has published an early draft that I encourage you to check out (see the news link I posted on sourceforge for the URL to the download). There are concurrent plans for an open source reference implementation of JDO 2.0, incubated within the Apache Java project and released under an Apache/BSD style license. It will be a separate project from OJB which specifically implements the JDO 2.0 requirements, and will probably use the JDO 1.0 RI as its foundation (though I'm not convinced we shouldn't build from scratch) -- Sun is working on the legalities of freeing that source at the moment. So what role does XORM have to play in this emerging environment? What I would like to do as a developer is contribute both code and architecture to the JDO 2.0 reference implementation that ensures that the value added or non-spec features of XORM can be included. That is, I will invest in making the RI an open, pluggable architecture that allows us to do some of the more interesting things that have evolved in XORM, such as mapping abstract/interface properties and query methods, writing code-based queries, and using runtime instead of compile-time class enhancement. The overall goal would be to provide a GPLed product built on top of the ASL core that is both JDO 2.0 compliant (passes the TCK) and includes unique features. If you could respond with a vote on (a) whether you think this is a reasonable approach and (b) whether you would be willing to commit to helping with a JDO 2.0 RI, that would be great. If I can go back to the JDO Expert Group and say "X number of developers from XORM will be involved in the RI" (where X > just me), the team here can probably have significant input into its design. Regards, Wes |
From: Wes B. <we...@ca...> - 2004-07-06 06:51:29
|
Harry Evans wrote: > For objects that have been seen, but are not currently locked, txnId > and expires would be null. For objects that are locked, all values > would be non null. Locks can be removed lazily, by checking a lock > when needed, and if it is expired, nulling txnId and expires. It's OK for the lock manager to not keep strong references to every object it has seen, right? Also, should the lock manager be informed of newly persisted objects (i.e. creates) or is it only concerned with updates? I think it's the latter but wanted to make sure I was understanding the whole scheme. Some good news for caching in general -- JDO 2.0 will provide a PersistenceManagerFactory.getDataStoreCache() method that can be used to pin or evict objects, extents, or all cached data. I think the next thing to document (again, in PDF would be best) is the full interaction diagram between a user-level transaction, XORM's DataCache (LRUCache) and the LockManager. Wes |
From: Wes B. <we...@ca...> - 2004-06-18 06:15:26
|
This looks really good, thanks for the writeup. Two quick comments: 1) This isn't cache management, it's lock management. I'm guessing the existing CacheManager implementation would be the component that interacts with this, though. 2) We might want to see if the javax.transaction.xa interfaces map well to this in terms of a standardized API. I'll look at the Javadoc and see if I can find a fit. Cheers, Wes Harry Evans wrote: > I have a simple design for distributed cache management. It has a > couple of holes, but doesn't seem like a bad first pass at it. I > would like to get feedback from the group to see what others think. > > CacheManager methods (don't worry about local vs. remote right now) > -lock(Object o, transactionId) : Lock > Checks version of object, and attempts to create a lock object for > it. Version is either some kind of change counter, or some kind > of timestamp of the last update time in the datastore. > --o > The xorm managed object to lock > --transactionId > a relatively unique id for the transaction obtaining the lock. Used > to allow the same transaction to acquire a lock on the same object > more than once. > --returns > If the object is already locked, returns the special LOCK_FAILED Lock > object. (An overridden version of the method might allow the caller to > specify waiting until the lock is obtained) > If a lock is granted, returns a Lock object with status LOCK_SUCCESS, > and a valid lock expiration time. > If a lock is granted, but the object is stale, returns a Lock object > with LOCK_REFRESH, and a valid lock expiration time. This means that > the caller has obtained a lock on the object, but needs to refresh the > object with the data store. > > > -commit(Object o, transactionId) : Lock > Verifies that this object has a lock held by the transactionId. If > so, it will update the version in the Cache Manager to match the > version held by the object.
This method should be called AFTER a > change to the datastore, but before committing the changes to the > datastore. This method also depends on the version of the object > being updated based on the changes to the datastore, even though it is > not yet committed. Regardless of outcome, this method will result in > releasing the lock on this object in the CacheManager. > --o > The xorm managed object that a lock was previously obtained for > --transactionId > the transactionId that was used to obtain the lock for o > --returns > If there is a lock for this object, and the transactionId matches, and > the lock has not expired, returns COMMIT_SUCCESS > Otherwise if the lock does not exist, is not for this transactionId, > or has expired, returns COMMIT_FAILURE. The proper action on a > COMMIT_FAILURE might be a retry, or a rollback. > > -check(Object o) : boolean > Checks whether the object is the most current version of the object > seen by the CacheManager. > --o > The object to check > --returns > Returns true if this version of the object is the most recent seen by > the CacheManager > otherwise false. > > -refreshLock(Object o, transactionId) : LOCK > Checks for a valid lock on the object, and if all is good, extends the > expiration of the Lock. Might not be needed (could just be rolled > into lock method). > --return values as per lock() method. > > > The concept behind all this is that the CacheManager object does not > talk to the db, but simply deals with objects as it sees them. Given > a compliant local implementation of XORM, all objects would be seen > whenever a transaction was involved, so there shouldn't be a race > condition issue. 
> A simple sequence diagram follows:
>
> XORM                          CacheManager         DataStore
> ----                          ------------         ---------
>  |                                  |                  |
>  |---startTrans()--                 |                  |
>  |<----------------                 |                  |
>  |                                  |                  |
>  |---lock(o, tId)------------------>|                  |
>  |<----------Lock object------------|                  |
>  |                                  |                  |
>  |---refreshObject(o) (if stale)---------------------->|
>  |<----------------------------------refreshedObject---|
>  |                                  |                  |
>  |---make changes--                 |                  |
>  |<----------------                 |                  |
>  |                                  |                  |
>  |---commitTrans() (begin)--        |                  |
>  |<-------------------------        |                  |
>  |                                  |                  |
>  |---makeUpdates (don't commit)----------------------->|
>  |<------------------------------------------success---|
>  |                                  |                  |
>  |---commit(o, tid)---------------->|                  |
>  |<--------------success------------|                  |
>  |                                  |                  |
>  |---commitTransaction--------------------------------->|
>  |<------------------------------------------success---|
>  |                                  |                  |
>  |---commitTrans() (end)--          |                  |
>  |<----------------------           |                  |
>  |                                  |                  |
>
> I think the whole scheme might work, if you assume that no one
> operates outside of it. I don't know if that is a reasonable caveat
> or not. Obviously, it is possible for the distributed cache manager
> to provide methods for locking or committing more than one object at
> the same time. Also, it assumes that the distributed cache manager
> would most likely be "remote" from the XORM instance using it, so the
> "local" methods I showed should be assumed to be local facades to a
> remote distributed cache manager. It is probably obvious (but worth
> stating) that all XORM instances dealing with a particular datastore
> would use the same CacheManager, possibly with CacheManager
> replication for failover. Persistent connections, and good
> communications design should make it relatively low overhead data-wise.
> The Lock object as it is internally held by the CacheManager might
> look something like this:
>
> Lock
> ----
> class  : Class
> id     : objectPrimaryKey
> version: Version (int, timestamp, etc)
> txnId  : TransactionId (int, long, etc)
> expires: Time (long, Date, etc)
>
> For objects that have been seen, but are not currently locked, txnId
> and expires would be null. For objects that are locked, all values
> would be non null. Locks can be removed lazily, by checking a lock
> when needed, and if it is expired, nulling txnId and expires.
>
> This might be way off base, but it seemed like a good first pass. I
> am not trying to reinvent the wheel here, but it seems like something
> like this might solve most of the "coordinated" or "distributed" cache
> management issues we see with xorm right now.
>
> Thoughts / comments / questions?
>
> Harry |
From: Harry E. <ha...@tr...> - 2004-06-16 04:03:38
|
I have a simple design for distributed cache management. It has a couple of holes, but doesn't seem like a bad first pass at it. I would like to get feedback from the group to see what others think.

CacheManager methods (don't worry about local vs. remote right now)

-lock(Object o, transactionId) : Lock
Checks the version of the object, and attempts to create a lock object for it. Version is either some kind of change counter, or some kind of time stamp of the last update time in the datastore.
--o
The xorm managed object to lock
--transactionId
a relatively unique id for the transaction obtaining the lock. Used to allow the same transaction to acquire a lock on the same object more than once.
--returns
If the object is already locked, returns the special LOCK_FAILED Lock object. (An overridden version of the method might allow the caller to specify waiting until the lock is obtained.)
If a lock is granted, returns a Lock object with status LOCK_SUCCESS, and a valid lock expiration time.
If a lock is granted, but the object is stale, returns a Lock object with LOCK_REFRESH, and a valid lock expiration time. This means that the caller has obtained a lock on the object, but needs to refresh the object with the data store.

-commit(Object o, transactionId) : Lock
Verifies that this object has a lock held by the transactionId. If so, it will update the version in the Cache Manager to match the version held by the object. This method should be called AFTER a change to the datastore, but before committing the changes to the datastore. This method also depends on the version of the object being updated based on the changes to the datastore, even though it is not yet committed. Regardless of outcome, this method will result in releasing the lock on this object in the CacheManager.
--o
The xorm managed object that a lock was previously obtained for
--transactionId
the transactionId that was used to obtain the lock for o
--returns
If there is a lock for this object, and the transactionId matches, and the lock has not expired, returns COMMIT_SUCCESS.
Otherwise, if the lock does not exist, is not for this transactionId, or has expired, returns COMMIT_FAILURE. The proper action on a COMMIT_FAILURE might be a retry, or a rollback.

-check(Object o) : boolean
Checks whether the object is the most current version of the object seen by the CacheManager.
--o
The object to check
--returns
Returns true if this version of the object is the most recent seen by the CacheManager, otherwise false.

-refreshLock(Object o, transactionId) : Lock
Checks for a valid lock on the object, and if all is good, extends the expiration of the Lock. Might not be needed (could just be rolled into the lock method).
--return values as per the lock() method.

The concept behind all this is that the CacheManager object does not talk to the db, but simply deals with objects as it sees them. Given a compliant local implementation of XORM, all objects would be seen whenever a transaction was involved, so there shouldn't be a race condition issue.
A simple sequence diagram follows:

XORM                          CacheManager         DataStore
----                          ------------         ---------
 |                                  |                  |
 |---startTrans()--                 |                  |
 |<----------------                 |                  |
 |                                  |                  |
 |---lock(o, tId)------------------>|                  |
 |<----------Lock object------------|                  |
 |                                  |                  |
 |---refreshObject(o) (if stale)---------------------->|
 |<----------------------------------refreshedObject---|
 |                                  |                  |
 |---make changes--                 |                  |
 |<----------------                 |                  |
 |                                  |                  |
 |---commitTrans() (begin)--        |                  |
 |<-------------------------        |                  |
 |                                  |                  |
 |---makeUpdates (don't commit)----------------------->|
 |<------------------------------------------success---|
 |                                  |                  |
 |---commit(o, tid)---------------->|                  |
 |<--------------success------------|                  |
 |                                  |                  |
 |---commitTransaction--------------------------------->|
 |<------------------------------------------success---|
 |                                  |                  |
 |---commitTrans() (end)--          |                  |
 |<----------------------           |                  |
 |                                  |                  |

I think the whole scheme might work, if you assume that no one operates outside of it. I don't know if that is a reasonable caveat or not. Obviously, it is possible for the distributed cache manager to provide methods for locking or committing more than one object at the same time. Also, it assumes that the distributed cache manager would most likely be "remote" from the XORM instance using it, so the "local" methods I showed should be assumed to be local facades to a remote distributed cache manager. It is probably obvious (but worth stating) that all XORM instances dealing with a particular datastore would use the same CacheManager, possibly with CacheManager replication for failover. Persistent connections, and good communications design should make it relatively low overhead data-wise.
The Lock object as it is internally held by the CacheManager might look something like this:

Lock
----
class  : Class
id     : objectPrimaryKey
version: Version (int, timestamp, etc)
txnId  : TransactionId (int, long, etc)
expires: Time (long, Date, etc)

For objects that have been seen, but are not currently locked, txnId and expires would be null. For objects that are locked, all values would be non null. Locks can be removed lazily, by checking a lock when needed, and if it is expired, nulling txnId and expires.

This might be way off base, but it seemed like a good first pass. I am not trying to reinvent the wheel here, but it seems like something like this might solve most of the "coordinated" or "distributed" cache management issues we see with xorm right now.

Thoughts / comments / questions?

Harry |
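The proposal above can be sketched in Java. This is a minimal, single-JVM sketch under assumptions not stated in the thread (a long version counter, millisecond lease times, a plain Object key); none of these classes exist in XORM, and a real implementation would key on (class, primary key) and sit behind a remote facade as Harry describes.

```java
import java.util.HashMap;
import java.util.Map;

// Status constants for a granted/denied lock, per the proposal.
class Lock {
    static final int LOCK_FAILED = 0;   // object already locked by another txn
    static final int LOCK_SUCCESS = 1;  // lock granted, caller's copy is current
    static final int LOCK_REFRESH = 2;  // lock granted, caller must refresh first

    final int status;
    final long expires;  // lock expiration time (millis), 0 if not granted

    Lock(int status, long expires) {
        this.status = status;
        this.expires = expires;
    }
}

// Single-JVM sketch of the proposed CacheManager. Re-locking by the same
// transaction simply extends the lease, which also covers refreshLock().
class CacheManager {
    private static final long LEASE_MS = 30_000;

    private final Map<Object, Long> versions = new HashMap<>();     // last committed version seen
    private final Map<Object, Object> lockOwners = new HashMap<>(); // key -> txnId
    private final Map<Object, Long> lockExpiry = new HashMap<>();

    // lock(): grant a lock unless another transaction holds a live one.
    synchronized Lock lock(Object key, long version, Object txnId) {
        Object owner = lockOwners.get(key);
        Long until = lockExpiry.get(key);
        boolean held = owner != null && until != null && until > now();
        if (held && !owner.equals(txnId)) {
            return new Lock(Lock.LOCK_FAILED, 0);
        }
        lockOwners.put(key, txnId);
        long expires = now() + LEASE_MS;
        lockExpiry.put(key, expires);
        Long seen = versions.get(key);
        // Stale if the CacheManager has seen a newer version than the caller's.
        if (seen != null && seen > version) {
            return new Lock(Lock.LOCK_REFRESH, expires);
        }
        return new Lock(Lock.LOCK_SUCCESS, expires);
    }

    // commit(): record the new version; the lock is released either way.
    synchronized boolean commit(Object key, long newVersion, Object txnId) {
        Object owner = lockOwners.get(key);
        Long until = lockExpiry.get(key);
        boolean ok = txnId.equals(owner) && until != null && until > now();
        if (ok) {
            versions.put(key, newVersion);
        }
        lockOwners.remove(key);
        lockExpiry.remove(key);
        return ok;  // COMMIT_SUCCESS / COMMIT_FAILURE
    }

    // check(): is the caller's version the most recent this manager has seen?
    synchronized boolean check(Object key, long version) {
        Long seen = versions.get(key);
        return seen == null || seen <= version;
    }

    private long now() { return System.currentTimeMillis(); }
}
```

Note that releasing the lock on COMMIT_FAILURE matches the "regardless of outcome" wording above; a variant could keep the lock so the caller can retry under it.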
From: <fi...@so...> - 2004-06-12 22:02:19
|
Hi Wes, often flags are mapped using numeric SQL types. How can this be implemented in your type converter? I thought about a new "format" parameter for field type interpretation. It could contain the default true numeric value in string form. Do you like it? This is the simplest patch, but it might be better to add a property with a datasource-wide definition. Or, better, move the conversion inside BaseSqlDriver. We can add the fieldClass to the Column class and use the field class and the format to choose the right ResultSet.getXXXX method. And we can add a method ReadToColumn(ResultSet, Column) to BaseSqlDriver; inside it we can use a switch or mapping to choose the right ResultSet.getXXXX method. The question is: do you see any drawback? Can user-defined classes be a problem?

bye
sj777 |
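The proposed "format" parameter does not exist in XORM; as a sketch of the interpretation described above (the parameter carries the numeric value meaning true, in string form), a standalone converter might look like this, with all names made up for illustration:

```java
// Hypothetical converter for boolean fields stored in numeric SQL columns.
// The "trueValue" string plays the role of the proposed format parameter
// (e.g. format="1"); anything else maps to false.
class BooleanFlagConverter {
    private final String trueValue;  // default true value, in string form

    BooleanFlagConverter(String trueValue) {
        this.trueValue = trueValue;
    }

    // datastore -> field: a numeric equal to the configured value is true
    boolean toField(Number dbValue) {
        return dbValue != null
            && dbValue.longValue() == Long.parseLong(trueValue);
    }

    // field -> datastore: write the configured value for true, 0 for false
    Number toColumn(boolean fieldValue) {
        return fieldValue ? Long.valueOf(trueValue) : Long.valueOf(0);
    }
}
```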
From: <fi...@so...> - 2004-06-11 09:50:36
|
hi, I'm interested in implementing data-change-sensitive objects, and I'll post a class diagram as soon as possible. Is someone interested in the project? We can simply use makeChangeSensitive and makeNotChangeSensitive methods. My questions are: Can this object listen for changes inside a transaction? I don't think so, but that's my humble opinion. Can this object notify for external (operated on the datastore) and internal (simply an invocation of a setXXX method) changes? And if so, it must notify only on true data changes, not on refreshing, so in that case we must check new and old values. And the best one: how can collections and extents, like query results, be managed? If they are only used as a model for GUI display one cannot check for real change, but if there is change-sensitive code.... Does someone know of some JDO implementation, or similar project, that has this feature? I'll be glad of any suggestion you can send.

bye
Dimitri Ognibene

P.S. Wes, can I commit in CVS the changes that I posted to make the InterfaceManager preserve JDO instances' identity outside transactions? |
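For the internal (setXXX) case, the "notify only on true data changes" rule can be sketched with the standard java.beans listener machinery: compare old and new values and stay silent when a refresh rereads the same data. The class and method names are made up for illustration, not proposed XORM API:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.Objects;

// Sketch of a change-sensitive object: an event fires only when the new
// value actually differs from the old one, so a refresh that produces
// identical data does not notify listeners.
class ChangeSensitiveBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name;

    void addListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    void setName(String newName) {
        if (Objects.equals(name, newName)) return;  // same data: no event
        String old = name;
        name = newName;
        pcs.firePropertyChange("name", old, newName);
    }
}
```

The open questions in the mail (transactions, external datastore changes, collections and extents) are untouched here; this only shows the old/new-value check.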
From: <fi...@so...> - 2004-06-08 02:27:19
|
problems I've encountered using XORM:
1) case of property names in XML
2) determining whether the proxy intercepts the right properties
3) field type depending on the datastore without a check
4) refresh and concurrency
5) implementing class behaviour like an observer model pattern

my todos are:
1) a simple utility that, from an interface and the jdo property and .jdo files, shows the user which methods XORM will upgrade and, by now, the actual type of the property; it must check that primary and foreign keys which refer to them have the same type.
2) modify the select method in BaseSqlDriver, substituting the line
// TODO will this be flexible enough?
row.setValue(c, rs.getObject(++pos));
with something like
row.setValue(c, c.readFromDB(rs, pos));
The Column.readFromDB(ResultSet, int) method would be type aware. I'll be looking for more docs on data conversion. We have the type of the db value; the type of the property could be useful to choose the right rs.getXXXX method, but I think this is not a new problem, there must be a lot of docs, and I have some (j2sdk docs/guide/jdbc/getstart/mapping.html for example).
3) can we verify compliance with the JDO instance lifecycle specs? It could be useful to document the XORM vision and present a property to increase compliance (and usability) at the expense of efficiency. It could be useful in a multiuser environment...
3.a) add a property to enable the select ... for update syntax, for the same reason
4) FIRST OF ALL finish the project for my last university examination, where I'm still using XORM. I had to encapsulate XORM in a wrapper and define interfaces from my behavioural objects to XORM-enhanced instances, and sometimes I had to add abstract classes for complex storage and type conversion... if I document it in English... sobbb :( it can be useful. But it could be much simpler if XORM could work on behavioural classes... if I have time I'll look at AOP.

the only reason for this posting is... I could make use of some help

bye
sj777 |
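Item 2's Column.readFromDB idea could be sketched as a standalone helper (XORM's Column class has no such method; the class and method names here are assumptions). The switch follows the standard JDBC mapping between java.sql.Types codes and ResultSet accessors that the mail's j2sdk reference describes:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

// Type-aware replacement for the generic rs.getObject(++pos) call:
// pick the ResultSet accessor from the column's declared SQL type.
class TypeAwareReader {
    static Object read(ResultSet rs, int pos, int sqlType) throws SQLException {
        Object value;
        switch (sqlType) {
            case Types.TINYINT:
            case Types.SMALLINT:
            case Types.INTEGER:   value = rs.getInt(pos);       break;
            case Types.BIGINT:    value = rs.getLong(pos);      break;
            case Types.FLOAT:
            case Types.DOUBLE:    value = rs.getDouble(pos);    break;
            case Types.BIT:       value = rs.getBoolean(pos);   break;
            case Types.CHAR:
            case Types.VARCHAR:   value = rs.getString(pos);    break;
            case Types.TIMESTAMP: value = rs.getTimestamp(pos); break;
            default:              value = rs.getObject(pos);    break;
        }
        // Primitive accessors return 0/false for SQL NULL, so check wasNull().
        return rs.wasNull() ? null : value;
    }
}
```

This would also pin down problem 3 (field type depending on the datastore), since an INTEGER column can no longer come back as a Long just because the driver's getObject() says so.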
From: <fi...@so...> - 2004-06-08 02:25:04
|
Hi this are the changes I made to org.xorm.InterfaceInvocationHandler.java to preserve the identity outside transactions and prevents duplication of instances there is some other changes to make the system more compliant jdo specs but i still have problems in modifing this code, we must separe jdo status managing from row management and interface handling... one way could be use the state pattern... there are some todo ad some question(usually marked with "sj777 ??") in this lines.. I'll remove them before commit in cvs but some answer will be helpfull Comparing: C:\Progetti\xorm-beta6\src\org\xorm\InterfaceInvocationHandler.java(old) To: C:\Program Files\xorm-beta6\src\org\xorm\InterfaceInvocationHandler.java(new) ==== ==== 127 <! // The transaction this object is operating within, or null 128 <! private TransactionImpl txn; !> //sj777 An object doesn't need anymore a reference to the transaction !> //sj777 now it is managed by interfacemanager, it sounds like better too !> // The transaction this object is operating within, or null//sj777 commented out !> //private TransactionImpl txn;//sj777 commented out !> !> //added relationship with InterfaceManager to //sj777 !> // preserve PersistenceManager wide uniquiness of objects with the same id //sj777 !> // despite Transactional property and Transaction life span//sj777 !> private InterfaceManager im; //sj777 commented out !> //sj777 new constructor added !> //sj777 this constructor is used attach retrieved persistent instance to InterfaceManager !> //sj777 if one creates an object with a non null row it must be an persistent object becouse !> //sj777 it is retrivied from datastore, !> //sj777 this method is usually called from when navigating a collection !> //sj777 or iteratig troug a a query results !> //sj777 so the objects must be persistent, and so related to PersistenceManager. 
!> public InterfaceInvocationHandler(InterfaceManagerFactory factory, ClassMapping mapping, Row row,InterfaceManager _im){ !> //sj777 calls default constructor !> this(factory, mapping, row); !> //sj777 sets the InterfaceManger that manages this instance !> im=_im; !> //notifies the InterfaceManger to manage this instance !> im.manage((InterfaceInvocationHandler)this); !> } 172 <! if (txn != null) { !> //if (txn != null) {//sj777 old one !> //sj777 what's happen if this method is called on a transient instance?? wich isn't related !> //sj777 to an instance Manager? !> //sj777 i think that i'll move any change, and possibly any access to !> //sj777 jdostate out of non jpo spec methods... (TODO) !> //sj777 this method should only be used to the values, the state must !> //sj777 be changed elsewhere, more clean and consistently !> if (im!=null&&im.currentTransaction()!= null&&im.currentTransaction().isActive()) {//sj777 added to be more jdo compliant 181 <! return (txn == null) ? null : (InterfaceManager) txn.getPersistenceManager(); !> //return (txn == null) ? null : (InterfaceManager) txn.getPersistenceManager();//sj777 anything to ask? :) !> //sj777 we have passed from relationship with TransactionImpl to relationship with IntrfaceManager !> //sj777 so we can mantain identity outside transcation scope, Any question? any suggestion? any sleep(1.35 am)? !> return im; //sj777 added: now InterfaceInvocationHandelr is !> //directly associated to the PersistenceManager alias InterfaceManager !> if ((im!=null) &&(im!=mgr))throw new JDOUserException();//sj777An object is bound to its PersistentManager.. !> //sj777 todo implement Persistent non-transactional behaveour 205 <! * This is the only method where the txn field gets set. !> //sj777 outdated comment* This is the only method where the txn field gets set. 211 <! this.txn = txn; 212 <! 
txn.attach(proxy); !> if ((im!=null)&&txn!=im.currentTransaction()) throw new JDOUserException();//sj777 transaction and persistence manager are strictly bound in jdo specs !> //this.txn = txn;//sj777 commented out !> txn.attach(proxy);//sj777 this is right a transaction must know wich objects she uses and it can also attach transient objects !> //sj777 todo must refresh !> //sj777 but we have maked reacheble objects transactional.. !> //sj777 it's consistent?!?.. wes help !> else throw new JDOUserException(); //sj777 see JDO spec 239 <! boolean retainValues = txn.getRetainValues(); !> //boolean retainValues = txn.getRetainValues();//sj777 old line !> boolean retainValues = getInterfaceManager().currentTransaction().getRetainValues();//sj777 270 <! txn = null; !> im.stopManaging(this);//sj777 !> im=null;//sj777 !> // txn = null;//sj777 297 <! txn = null; !> im.stopManaging(this); !> im=null; !> ///txn = null;//sj777 !> case STATUS_PERSISTENT_NONTRANSACTIONAL://sj777 todo we must still put somewhere refresh.. !> if (isTransactional()&&im!= null && im.currentTransaction()!=null &&im.currentTransaction().isActive()) //sj777 !> enterTransaction((TransactionImpl) im.currentTransaction());//sj777 336 <! if (txn != null && txn.isActive()) { !> if (isTransactional()&&im!= null && im.currentTransaction()!=null &&im.currentTransaction().isActive()) {//sj777 are these controls redundat?? 338 <! enterTransaction(txn); !> enterTransaction((TransactionImpl) im.currentTransaction());//sj777 346 <! if (txn != null) { !> //if (txn != null) {//sj777 old !> !> if (isTransactional()&&im!= null //sj777 new !> && im.currentTransaction()!=null //sj777 new !> &&im.currentTransaction().isActive()){//sj777 new 356 <! public void makePersistent(InterfaceManager mgr) { 357 <! if (txn != null) { 358 <! // Object was transactional already 359 <! if (mgr == txn.getInterfaceManager()) return; !> void makePersistent(InterfaceManager mgr) {//sj777 this was public why.. 
it looks like better now !> //if (txn != null) {//sj777 !> if (isPersistent()&&im != null) {//sj777 new !> // Object was transactional already//sj777?? old comment... transactional != persistent !> //objcet was persistent already//sj777 more exact(persistent non transactional) !> if (mgr == im) return; !> //sj777 makePersistent can be called only inside a transaction !> if (mgr.currentTransaction()==null //sj777 new !> ||!mgr.currentTransaction().isActive())//sj777 new !> throw new JDOUserException("makePersistent called outside an active transaction context");//sj777 !> im=mgr; !> mgr.manage(this);//sj777 and so the Persistence manager can know who he manages !> //sj777 todo the relationship must be connected to the same mgr !> //sj777 if it was created at transient time it can be connected to !> //sj777 null Manager !> im.stopManaging(this);//sj777 !> im.manage(this);//sj777 !> return true;//sj777 it's my interpretation of jdo specs !> /** 572 <! refresh(txn.getInterfaceManager()); !> refresh(txn.getInterfaceManager());//sj777 !> */ //sj777 !> ///sj777 todo how tp implement for transient objects without primary key? !> return false; 667 <! refresh(txn.getInterfaceManager()); !> refresh(im);//sj777 670 <! txn = (TransactionImpl) txn.getInterfaceManager().currentTransaction(); !> TransactionImpl txn = (TransactionImpl) im.currentTransaction();//sj777 !> //sj777 what is the problem with this? !> //sj777 why don't add a property for having refresh only !> //sj777 in concurrent enviroment !> // !> //sj777 i think that this must be optimized, !> //sj777 too many get and put in maps.... TODOOOOOOOOOOOOO 735 <! other.makePersistent(txn.getInterfaceManager()); !> other.makePersistent(im);//sj777 767 <! if (txn != null && txn.isActive() && !isDirty()) { !> //if (txn != null && txn.isActive() && !isDirty()) {//sj777 old !> if (im!=null&&im.currentTransaction()!= null && im.currentTransaction().isActive() && !isDirty()&&isTransactional()) {//sj777 new 785 <! 
InterfaceManager mgr = null; !> //sj777 now we have the InterfaceManager directly why don't use it !> /*InterfaceManager mgr = null; 788 <! } 789 <! rp = new RelationshipProxy(mgr, mapping, this, !> }*/ !> //rp = new RelationshipProxy(mgr, mapping, this, ///sj777 old !> rp = new RelationshipProxy(im, mapping, this, ///sj777 new 798 <! if (txn != null && txn.isActive()) { 799 <! txn.attachRelationship(rp); !> //sj777 TODO move to before of internal cache use to manage makePersistent calls !> //sj777 and WES it look like that the check must be if (isTransactional())??? !> //if (txn != null && txn.isActive()) {//sj777 old !> if (im!=null&&im.currentTransaction()!= null && im.currentTransaction().isActive()) {//sj777 new !> ((TransactionImpl)im.currentTransaction()).attachRelationship(rp);//sj777 new 863 <! txn.getInterfaceManager().refreshColumns(getRow(), dfg); !> //txn.getInterfaceManager().refreshColumns(getRow(), dfg);//sj777 !> im.refreshColumns(getRow(), dfg);//sj777 873 <! value = txn.getInterfaceManager() 874 <! 
.lookup(returnTypeMapping, value); !> //value = txn.getInterfaceManager()///sj777 !> value = im.lookup(returnTypeMapping, value);//sj777 !> private void check(String currentMethod){ !> //return; !> if(!lastCheck&&(primaryKey!=null&&(row==null||!primaryKey.equals(row.getPrim aryKeyValue()))|| !> (row!=null&&row.getPrimaryKeyValue()!=null&&!row.getPrimaryKeyValue().equals (primaryKey)))){ !> synchronized(System.out){ !> System.out.println("ATTENZIONE"); !> System.out.println("row.getPrimaryKeyValue !=priamaryKey:"+ mapping.getMappedClass()); !> if(primaryKey!=null) !> System.out.println("primaryKey "+ primaryKey+" classe: "+primaryKey.getClass().getName()); !> else !> System.out.println("primaryKey "+ primaryKey+" "); !> !> if(row!=null) !> if (row.getPrimaryKeyValue()!=null) !> System.out.println("row.getPrimaryKeyValue "+ row.getPrimaryKeyValue()+" nuova classe: "+row.getPrimaryKeyValue().getClass().getName()); !> else !> System.out.println("row.getPrimaryKeyValue==null"); !> else !> System.out.println("row==null"); !> !> System.out.println("metodo precedente: "+methodName); !> System.out.println("metodo attuale: "+currentMethod); !> } !> } !> lastCheck=(primaryKey!=null&&(row==null||!primaryKey.equals(row.getPrimaryKe yValue()))||(row!=null&&row.getPrimaryKeyValue()!=null&&!row.getPrimaryKeyVa lue().equals(primaryKey))); !> if (row!=null&&row.getPrimaryKeyValue()!=null&&(!row.getPrimaryKeyValue().equal s(lastRowVal))){ !> synchronized(System.out){ !> System.out.println("ATTENZIONE"); !> System.out.println("row.getPrimaryKeyValue cambiata in:"+ mapping.getMappedClass()); !> if (lastRow==null)System.out.println("lastRow==null"); !> else if (lastRowVal!=null) !> System.out.println("row.getPrimaryKeyValue vecchia"+ lastRowVal+" vecchia classe: "+lastRowVal.getClass().getName()); !> else !> System.out.println("primaryKey vecchia"+ lastRowVal+" "); !> !> if(row!=null) !> if (row.getPrimaryKeyValue()!=null) !> System.out.println("row.getPrimaryKeyValue nuova"+ 
row.getPrimaryKeyValue()+" nuova classe: "+row.getPrimaryKeyValue().getClass().getName()); !> else !> System.out.println("row.getPrimaryKeyValue==null"); !> else !> System.out.println("row==null"); !> System.out.println("metodo precedente: "+methodName); !> System.out.println("metodo attuale: "+currentMethod); !> } !> } !> if (primaryKey!=null&&!primaryKey.equals(lastVal)){ !> synchronized(System.out){ !> System.out.println("ATTENZIONE"); !> System.out.println("primaryKey cambiata in:"+ mapping.getMappedClass()); !> if (lastVal!=null) !> System.out.println("primaryKey vecchia"+ lastVal+" vecchia classe: "+lastVal.getClass().getName()); !> else !> System.out.println("primaryKey vecchia"+ lastVal+" "); !> if(primaryKey!=null) !> System.out.println("primaryKey nuova"+ primaryKey+" nuova classe: "+primaryKey.getClass().getName()); !> else !> System.out.println("primaryKey nuova"+ primaryKey+" "); !> System.out.println("metodo precedente: "+methodName); !> System.out.println("metodo attuale: "+currentMethod); !> } !> !> } !> methodName=currentMethod; !> lastVal=primaryKey; !> lastRowVal=(row==null)?null:row.getPrimaryKeyValue(); !> lastRow=row; !> !> } !> !> private String methodName=""; !> private Object lastVal=null; !> private Object lastRowVal=null; !> private Object lastRow=null; !> private boolean lastCheck=false; |
From: <fi...@so...> - 2004-06-08 00:48:57
|
hi these are the changes I made in InterfaceManager to track managed instances and make sure of unique identity. it is a useful spec of jdo Comparing: C:\Program Files\xorm-beta6\src\org\xorm\InterfaceManager.java To: C:\Progetti\xorm-beta6\src\org\xorm\InterfaceManager.java ==== ==== 27 <! import java.util.HashMap;//sj777 used to track managed instances 64 <! import java.lang.ref.SoftReference; //sj777 used to track managed instances 65 <! import java.lang.ref.Reference;//sj777 used to track managed instances 94 <! //sj777 added class 95 <! //sj777 this class is used to mantain a soft reference to 96 <! //sj777 all InterfaceInvocationHandler(IIH) managed by this 97 <! //sj777 InterfaceManager 98 <! //sj777 I think it is cleaner than efficient but 99 <! //sj777 for the moment it is not bad 100 <! //sj777 objects must be retrieved by using the same syntax 101 <! //sj777 that was used using the TransactionImpl.getMethod 102 <! //sj777 so the keys are the mapped class of the object 103 <! //sj777 and the primarykey of the IIH 104 <! //sj777 the objects are referenced through a map of maps 105 <! //sj777 there are some simple optimizations we can do like 106 <! //sj777 1. add a buffer that olds the last used mapped class key 107 <! //sj777 and its associated class, this would be very cheap 108 <! //sj777 to implement and can prove useful expecially inside queries 109 <! //sj777 2. add a method to retrieve more than one object... 110 <! //sj777 of the same class if it sort the list and 111 <! //sj777 it can make more efficient check and request 112 <! //sj777 to load the data from datastore by primarykey 113 <! //sj777 ?? should I move this to a stand alone class? 114 <! //sj777 ?? is this much different for your row factory wide cache? 115 <! //sj777 ?? with your implementation of factory cache can 2 instances 116 <! //sj777 of IIH with the same jdo identinty, and so perhaps the same row 117 <! //sj777 interfere ? 118 <! */ 119 <! 120 <! 121 <! 
private class DoubleKeyWeakMap{//sj777 122 <! //sj777 TODO can we know a value to init Map size from InterfaceManagerFactory,//sj777 123 <! //sj777 in ex: how many mappings... i'll see//sj777 124 <! Map primaryMap;//sj777 125 <! 126 <! public DoubleKeyWeakMap(){//sj777 127 <! primaryMap=new HashMap();//sj777 128 <! 129 <! } 130 <! public DoubleKeyWeakMap(Map _primaryMap){//sj777 131 <! primaryMap =_primaryMap; 132 <! 133 <! } 134 <! //sj777 addedmethod 135 <! public synchronized Object get(Object key1,Object key2){ 136 <! 137 <! Map wm = (HashMap)primaryMap.get(key1); 138 <! if (wm!=null){ 139 <! Reference r =(Reference )wm.get(key2); 140 <! if (r!=null) return r.get(); 141 <! } 142 <! return null; 143 <! }; 144 <! public synchronized Object getProxy(Object key1,Object key2){ 145 <! Map wm = (HashMap)primaryMap.get(key1); 146 <! if (wm!=null) synchronized(wm){ 147 <! boolean myflag =wm.containsKey(key2); 148 <! 149 <! Iterator i =wm.keySet().iterator(); 150 <! while (i.hasNext()){ 151 <! Object ks=i.next(); 152 <! 153 <! } 154 <! Reference r =(Reference )wm.get(key2); 155 <! if (r!=null) { 156 <! Object o=r.get(); 157 <! if(o!=null&&o instanceof InterfaceInvocationHandler) 158 <! return ((InterfaceInvocationHandler)o).getProxy(); 159 <! }}return null; 160 <! }; 161 <! 162 <! public synchronized void remove(Object key1,Object key2){ 163 <! Map wm = (HashMap)primaryMap.get(key1); 164 <! if (wm!=null) wm.remove(key2); 165 <! 166 <! 167 <! }; 168 <! //sj777 169 <! public synchronized void put(Object key1,Object key2,Object value){ 170 <! Map wm = (Map)primaryMap.get(key1); 171 <! if (wm==null) { 172 <! wm= new HashMap(); 173 <! primaryMap.put(key1,wm); 174 <! } 175 <! wm.put(key2,new SoftReference(value)); 176 <! } 177 <! } 178 <! 179 <! private DoubleKeyWeakMap managedInstances; 180 <! 181 <! /** 194 <! managedInstances=new InterfaceManager.DoubleKeyWeakMap();//sj777 start to take track of managed instances 291 <! 
//TransactionImpl txn = (TransactionImpl) currentTransaction();//sj777 !> TransactionImpl txn = (TransactionImpl) currentTransaction(); 293 <! //Object proxy = txn.get(classMapping.getMappedClass(), primaryKey);//sj777 old line 294 <! Object proxy =managedInstances.getProxy(classMapping.getMappedClass(),primaryKey); 295 <! !> Object proxy = txn.get(classMapping.getMappedClass(), primaryKey); 305 <! //proxy = rowToProxy(classMapping, row);//sj777 old 306 <! //sj777 InterfaceInvocationHandler will automatically add itself to this interfacemanager, 307 <! //sj777 todo but not to the transaction if there is an active one 308 <! //sj777 the problem is that 309 <! InterfaceInvocationHandler handler = new InterfaceInvocationHandler(factory, classMapping, row,this);//sj777 310 <! proxy = handler.newProxy();//sj777 I hope that the state of the object can be hollow... jdo spec seem complex !> proxy = rowToProxy(classMapping, row); 316 <! * sj777 no more use for this... problems? 485 <! * sj777 : and add its to the instances managed by this InterfaceManager 486 <! * sj777: and gives an identity to it and do love it forever too 490 <! //if (!currentTransaction().isActive()) {//sj777 old 491 <! if (currentTransaction()==null||!currentTransaction().isActive()) {//sj777 new !> if (!currentTransaction().isActive()) { 496 <! //sj777 todo jdo specs says that there are 2 types of 497 <! //sj777 persitent new objects : 498 <! //sj777 1)persistent new 499 <! //sj777 2)persistent new provisionally... i think this is the right 500 <! //sj777 place to mark that distinction, any suggestion? 501 <! 508 <! if (mgr == this) return; // already managed//sj777 ?? are you sure !> if (mgr == this) return; // already managed 614 <! if (!txn.contains(object)) {//sj777 TODO if it's not transactional.. this line is wrong !> if (!txn.contains(object)) { 618 <! ///ObjectState handler = getObjectState(object);//sj777 old 619 <! InterfaceInvocationHandler handler =//sj777 620 <! 
InterfaceInvocationHandler.getHandler(object);//sj777 !> ObjectState handler = getObjectState(object); 622 <! stopManaging(handler);//sj777 setStatus i'd like that the handler itself could ask 623 <! //sj777 to be not managed but it actually the status of 624 <! //sj777 an handler is modified directly by other classes 625 <! //sj777 i think we must refactor 626 <! 1103 <! } 1104 <! 1105 <! //sj777 1106 <! void stopManaging(InterfaceInvocationHandler handler){ 1107 <! managedInstances.remove(handler.getClassMapping().getMappedClass(),handler.g etObjectId()); 1108 <! //sj777 todo make sure that the handler clears its reference to the interfacemanager 1109 <! } 1110 <! //sj777 todo doc 1111 <! void manage(InterfaceInvocationHandler obj){ 1112 <! managedInstances.put(obj.getClassMapping().getMappedClass(),obj.getObjectId( ),obj); 1113 <! //sj777 todo make sure that the handler sets its reference to the interfacemanager |
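The DoubleKeyWeakMap in the patch above (despite the name, it holds SoftReferences) can be restated as a small self-contained class. This is a cleaned-up sketch of the same idea, not the patched XORM code: a map of maps keyed by (mapped class, primary key) whose values are softly referenced, so the identity cache never prevents garbage collection.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Identity cache keyed by two objects (e.g. mapped class + primary key).
// Values are held via SoftReference, so get() may return null once the
// referent has been collected; callers must treat that as a cache miss.
class DoubleKeySoftMap {
    private final Map<Object, Map<Object, SoftReference<Object>>> primaryMap =
        new HashMap<>();

    synchronized Object get(Object key1, Object key2) {
        Map<Object, SoftReference<Object>> inner = primaryMap.get(key1);
        if (inner == null) return null;
        SoftReference<Object> ref = inner.get(key2);
        return ref == null ? null : ref.get();  // null if already collected
    }

    synchronized void put(Object key1, Object key2, Object value) {
        primaryMap.computeIfAbsent(key1, k -> new HashMap<>())
                  .put(key2, new SoftReference<>(value));
    }

    synchronized void remove(Object key1, Object key2) {
        Map<Object, SoftReference<Object>> inner = primaryMap.get(key1);
        if (inner != null) inner.remove(key2);
    }
}
```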
From: <fi...@so...> - 2004-06-08 00:45:08
|
this is the change I made in RelationshipProxy... it's only a "transient patch" but it seems to work. These are the diffs; I can't get access to CVS today.

Comparing: C:\Program Files\xorm-beta6\src\org\xorm\RelationshipProxy.java (new)
To: C:\Progetti\xorm-beta6\src\org\xorm\RelationshipProxy.java (old)
==== ====
308 <! //handler.makePersistent(mgr);//sj777 old
309 <! handler.makePersistent(owner.getInterfaceManager());//sj777 momentary patch
310 <! //sj777 needed because an instance of this class can be obtained
311 <! //sj777 outside a transaction from a new transient InterfaceInvocatinHandler
312 <! //sj777 and the mgr field is null in such condition
313 <! //sj777 we must work to make this field depend on the owner one
314 <! //sj777 any suggestion?
!> handler.makePersistent(mgr); |
From: Wes B. <we...@ca...> - 2004-06-05 16:34:28
|
I got both copies you sent, but sometimes SourceForge lists are slow.

Specifically for primary key mappings there is an extension you can use in your .jdo file:

<class name="MyPersistentClass">
  <extension vendor-name="XORM" key="table" value="my_table" />
  <extension vendor-name="XORM" key="datastore-identity-type" value="java.lang.Long" />
  <!-- fields etc -->
</class>

Looks like this never made it into the documentation. Anyway, this will basically cast all your primary keys to java.lang.Long, since that is what they look like coming back from the database.

Wes

fi...@so... wrote:
> Sorry if this is a copy:
> Did anyone know why objects changes their mapped field type apparently
> without reason?
> u can try, under mysql if it can matter, to call refresh on an object
> with an integer primary key.
> Magically it will be casted to java.lang.Long..
> Is this a nice nice spell?
> Solving this problem is fundamental to preserve the coherence of
> various instances mapping the same data on the datastore...
> it' about 6.22 am... zzzz |
From: <fi...@so...> - 2004-06-05 11:39:58
|
Sorry if this is a copy: does anyone know why objects change their mapped field type apparently without reason? You can try, under MySQL if that matters, to call refresh on an object with an integer primary key. Magically it will be cast to java.lang.Long. Is this a nice spell? Solving this problem is fundamental to preserving the coherence of the various instances mapping the same data on the datastore... it's about 6.22 am... zzzz |