From: Thompson, B. B. <BRY...@sa...> - 2006-04-11 18:35:19
|
Kevin,

That is interesting feedback. I personally do not think of COM when I think about GOM. I am not actually very read in on the Microsoft architectures and have not looked at them in several years, but I do not consider them to be persistent object database systems, data extensible, etc.

In any case, you can get strongly typed behavior from GOM by using property and association classes to define property and link constraints. I do not find much overhead in the property access and type checking mechanisms. 2/3rds of the current time for our application on a bulk load operation is the serialization of objects and laying them down onto page images. If we use a hard reference cache so that no cache evictions occur until the tx commit, then the processing time drops by 2/3rds. This is why one of my goals has been a rewrite of the transaction system and physical row allocation systems of jdbm.

I certainly agree that handling index updates transparently is difficult for "standard" Java objects. I think that an interesting approach could be done with Java 5 annotations and a BCE package. From what I have been able to see about JDO, there are very few implementations that provide what I might call a native Java persistent object system that is performance oriented. E.g., JDO achieved through an O-R mapper and stored in an RDBMS appears to be a common strategy.

What I have seen is pretty non-transparent. You write your native Java objects to be explicitly persistence aware and do things like indicate when a property value change has invalidated an index entry for an object. I take it that this solution is not satisfying for you and that you are looking for a means to achieve transparent persistence for native Java?

What I tend to do with GOM is write "skins" which implement application interfaces and are backed by the generic data object. That gives me the application semantics for the persistent objects and strong type checking, as well as interoperability across GOM implementations. I don't find this to be that much effort, but it does smack of something that would benefit from automation. You could also subclass the generic object for a specific implementation and then operate in a type-safe traditional OO manner, but that locks you into a specific object hierarchy and backend.

I am noodling with both UML-driven authoring of the class, property, and association constraints and BCE-based optimizations for skins over live generic objects, but I do not have anything concrete to offer in either case.

-bryan

________________________________________
From: jdb...@li... [mailto:jdb...@li...] On Behalf Of Kevin Day
Sent: Tuesday, April 11, 2006 12:49 PM
To: JDBM Developer listserv
Subject: re[2]: [Jdbm-developer] primary and secondary indexes in the record manager

Bryan-

Yes - I've looked at the GOM implementation (and have worked extensively with similar approaches - Microsoft COM, Mozilla cross-platform COM, etc.). I can see the utility in some situations, but for the development I do, losing the runtime type checking would be a massive problem. I'm trying to think of jdbm as a true object database, and not just a way of storing key/value pairs. Certainly, I could create wrapper classes that call the GOM framework, then work in an object-oriented manner, but that is a heck of a lot of work (and boring work at that).

As a user of an oodbm, I expect to be able to work with objects the way I normally work with objects, but have the persistence capabilities. The solution I'm going for is much closer to JDO than COM... So, the idea of using events and listeners is fine - but we have to have a true object-oriented mechanism for driving those events. Short of code injection, aspect weaving, or some ugly notification requirement imposed by the API, I just don't see how to achieve this. The alternative, then, is to provide some sort of reverse lookup method that either caches index info for all retrieved records or provides some sort of view into the index keys (or node locations) for each recid.

Expanding on my comment re: blink trees: We could certainly store the index pages on DATA pages, if desired - but there may be advantages (locality again!) to storing the index pages in their own page thread. Given that index page objects are likely to change much, much more frequently than data pages, it may make sense from a caching and free page perspective to keep them in a separate page thread.

Likewise, in my proof of concept, I could have stored the version tables on DATA pages, but I chose not to because version tables are extremely short lived and will undergo a lot of churn. If the version tables were interspersed with the data, we'd wind up with a huge amount of fragmentation. I think (but cannot say with absolute certainty) that maintaining these data structures in a separate page thread should optimize caching behavior, allocation, etc.

- K

> Kevin,
>
> Can you expand on your comment about blink trees? Why do you see that they need to be outside of the record management API? I was hoping that some data structures persisted as records could opt out of the default concurrency control mechanisms and use consistency-based mechanisms, but that does not mean that we need to store them directly on pages, does it?
>
> I see index maintenance as the role of a framework layer over the record management API. Have you looked at the framework that I have implemented? [1] It handles index maintenance by catching property updates and generating events. Indices register as listeners for events and handle the removal under the old key and the insert under the new key (the old key, new key, and the object are part of the event). This framework is designed around generic data rather than explicitly declared Java fields and reflection. If you want to manage indices, and your classes have fields that are getting serialized and serve as your index keys, and you have setters for those fields, then you need to generate notification events from those setters and have your indices registered as listeners.
>
> -bryan
>
> [1] http://proto.cognitiveweb.org/projects/cweb/multiproject/cweb-generic-native/index.html

From: jdb...@li... [mailto:jdb...@li...] On Behalf Of Kevin Day
Sent: Monday, April 10, 2006 9:46 PM
To: JDBM Developer listserv
Subject: [Jdbm-developer] primary and secondary indexes in the record manager

One thing that I've been thinking about in a background thread:

We've discussed the possibility of making indexes a first-order citizen of jdbm (instead of storing a BTree in the record manager itself, on the data pages). This has lots of advantages from a concurrency perspective (we can use BLink trees). It also creates a programming paradigm that is much closer to how people are used to working with databases.

This is where secondary indexes come in.

My experience so far with implementing multi-index constructs in jdbm has been that keeping the indexes in sync is an absolute bugger. I've had to resort to special serializers that get the serialized form of the keys used by a given record when that record is initially retrieved, then use those keys to remove and re-insert values during any update of the record.

The problem with this is that you wind up having to deserialize the full record, then turn around and serialize keys for each index that record is a part of. And you have to do it for every single read (it isn't possible to determine the index key values during the update itself, because the object itself has already been changed).

Another approach to this problem would be to maintain index metadata in a supplementary index, keyed by recid. Whenever an update is performed, the recid would be retrieved, and the page and node of the object could be retrieved. This would allow for efficient updates, but changes to the main index tree (like a re-balance operation) could require a significant number of changes in the supplementary index.

A compromise would be to store only the page where the node for a given record exists, then do a linear search for the recid during updates.

I'm really not sure where to go with all of this, but I think it's appropriate to start talking about it. If we have a supplementary index, then it may make sense to toss the physical row location into it and ditch the translation pages entirely. Individual record lookups would require a tree search, but in 99% of my uses of jdbm this is the case already. This is closer to the primary key index idea that Alex floated awhile back.

I'm not at all saying this is the way to go, but I would like to begin getting a handle on how we can maintain indexes for objects stored in jdbm.

- K <
|
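The "skin" pattern Bryan describes - a typed application interface backed by an untyped generic data object - might be sketched as follows. All class and property names here are hypothetical illustrations, not the actual cweb/GOM API:

```java
import java.util.HashMap;
import java.util.Map;

// The generic data object: an untyped property bag (stand-in for a GOM object).
class GenericObject {
    private final Map<String, Object> props = new HashMap<String, Object>();
    public Object get(String name) { return props.get(name); }
    public void set(String name, Object value) { props.put(name, value); }
}

// The application interface: what client code programs against, with strong types.
interface Person {
    String getName();
    void setName(String name);
}

// The skin: implements the application interface by delegating every property
// access to the backing generic object.
class PersonSkin implements Person {
    private final GenericObject data;
    PersonSkin(GenericObject data) { this.data = data; }
    public String getName() { return (String) data.get("name"); }
    public void setName(String name) { data.set("name", name); }
}

class SkinDemo {
    public static void main(String[] args) {
        GenericObject backing = new GenericObject();
        Person p = new PersonSkin(backing);      // typed view over generic data
        p.setName("Alice");
        System.out.println(backing.get("name")); // the skin wrote through
    }
}
```

Because the skin holds only an interface-to-generic-object mapping, the same `Person` code can run over any GOM implementation, which is the interoperability point Bryan makes; it is also mechanical enough that bytecode generation could plausibly automate it.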
From: Thompson, B. B. <BRY...@sa...> - 2006-04-11 18:49:33
|
Kevin,

I can definitely see keeping btree node allocations to their own page pool. It would make sense to introduce a container concept for allocations. A btree is a natural case where you (a) want the nodes allocated together; and (b) you don't want other records allocated with the btree nodes. Hash tables would be another. However, the same concept could be reused for application collections.

-bryan

-------------------------------------------------------
Jdbm-developer mailing list
Jdb...@li...
https://lists.sourceforge.net/lists/listinfo/jdbm-developer
|
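The event-and-listener index maintenance described in this thread - setters fire an event carrying the old key, the new key, and the object, and each index registers as a listener that removes the entry under the old key and inserts under the new key - could be sketched like this. Class names are hypothetical, not the cweb framework's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Event carrying the old key, new key, and the changed object.
class KeyChangeEvent {
    final String oldKey, newKey;
    final Object source;
    KeyChangeEvent(String oldKey, String newKey, Object source) {
        this.oldKey = oldKey; this.newKey = newKey; this.source = source;
    }
}

interface KeyChangeListener { void keyChanged(KeyChangeEvent e); }

// A secondary index (in-memory stand-in for a persistent BTree) that keeps
// itself in sync by listening for key changes.
class SecondaryIndex implements KeyChangeListener {
    private final TreeMap<String, Object> entries = new TreeMap<String, Object>();
    public void keyChanged(KeyChangeEvent e) {
        if (e.oldKey != null) entries.remove(e.oldKey);        // remove under the old key
        if (e.newKey != null) entries.put(e.newKey, e.source); // insert under the new key
    }
    public Object lookup(String key) { return entries.get(key); }
}

// A persistence-aware record whose setter notifies registered indices.
class Record {
    private String email;
    private final List<KeyChangeListener> listeners = new ArrayList<KeyChangeListener>();
    public void addListener(KeyChangeListener l) { listeners.add(l); }
    public void setEmail(String email) {
        String old = this.email;
        this.email = email;
        for (KeyChangeListener l : listeners)
            l.keyChanged(new KeyChangeEvent(old, email, this));
    }
}

class IndexEventDemo {
    public static void main(String[] args) {
        SecondaryIndex byEmail = new SecondaryIndex();
        Record r = new Record();
        r.addListener(byEmail);
        r.setEmail("a@example.com");
        r.setEmail("b@example.com");  // the index follows the change
        System.out.println(byEmail.lookup("a@example.com")); // old key is gone
        System.out.println(byEmail.lookup("b@example.com") == r);
    }
}
```

This also illustrates Kevin's objection: for the events to fire, the setter must be written (or rewritten via bytecode engineering / aspect weaving) to be persistence aware, which is exactly the non-transparency being debated.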
From: Kevin D. <ke...@tr...> - 2006-04-11 19:29:16
|
Bryan-

MS COM and GOM have a lot in common, actually - but that's a conversation for another day after we ship jdbm2 :-)

I think that you may have a misperception of how JDO works. To the actual user of the system, it's completely transparent. There is work that has to be done in an external configuration file to define the O/R mapping, and setting up the indexes in the underlying datastore, but once that's done, using JDO is almost exactly what we are talking about for jdbm - you start a transaction, you do stuff, and you call commit (the biggest difference is that in JDO I don't think you have to call update() when you change an object).

With the GOM approach, the overhead of property access, etc. is not the concern for me - it's the impact on the developer and the style of coding. It certainly would be possible to use annotations and automatically generate objects that conform to the described interface, but it moves far afield from the concept of POJOs.

Run-time overriding (for example using AspectJ) would be another way to get at this - but you force the user to declare which calls can change the object state, etc. - it's just a ton of extra work just to get persistence.

Anyway, I think that it is best if we focus on the jdbm subsystem itself at this stage - we can certainly include hooks for making the higher-level stuff happen - but I'd like to focus on getting the core functionality of a highly concurrent, fully functional oodbm up and running. One of the core features of most oodbms is the ability to navigate (and query) the object store using indexes, and I think we need to seriously consider if/how to do this in jdbm. If a higher-level system doesn't take advantage of those indexes, then that's fine - no harm done.

The vast majority of jdbm users are using it as a persistent tree map, which implies index-based access. For high concurrency, we need to offer these users map capability that is concurrency aware, and if we are going down that path, then we should at least talk about secondary indexes, because having multi-keyed maps is useful for a huge number of development situations.

BTW - another reason why index trees shouldn't reside in the regular record manager is concurrent updates - if tx1 causes the tree to re-balance, we need to have a mechanism for ensuring that tx2's logical view of the tree stays the same, but that any changes that tx2 wanted to make will still work. That's why I'm proposing a b-link structure that holds recid references, and those recids are resolved in the context of whatever transaction happens to be active.

This approach (I think) implies a certain amount of garbage collection functionality, and the GC operation requires visibility to both the versioned and non-versioned logical store. This also implies that the GC must *know* which objects are index objects, which in turn implies that the indexes are in some way managed in jdbm (something akin to SQL CREATE INDEX, DROP INDEX, etc. commands).

- K
|
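Kevin's point that index nodes should hold stable recid references, resolved against whatever transaction is active, can be sketched minimally as follows. The `Map`-based store and all names are hypothetical stand-ins for illustration, not the jdbm API:

```java
import java.util.HashMap;
import java.util.Map;

// A transaction's view of the store: recid -> serialized record image.
// Index nodes hold recids, never physical locations; a recid is resolved
// here, so each transaction sees its own version of a record while the
// index structure itself can stay logically unchanged for other readers.
class TxView {
    private final Map<Long, String> committed;  // shared committed store
    private final Map<Long, String> txLocal = new HashMap<Long, String>(); // this tx's writes

    TxView(Map<Long, String> committed) { this.committed = committed; }

    // Resolve a recid reference from an index node: local updates win,
    // otherwise fall back to the committed image.
    String fetch(long recid) {
        String local = txLocal.get(recid);
        return local != null ? local : committed.get(recid);
    }

    void update(long recid, String data) { txLocal.put(recid, data); }
}

class RecidDemo {
    public static void main(String[] args) {
        Map<Long, String> store = new HashMap<Long, String>();
        store.put(1L, "v1");
        TxView tx1 = new TxView(store), tx2 = new TxView(store);
        tx2.update(1L, "v2");
        // tx2 sees its own update; tx1's logical view is unchanged.
        System.out.println(tx1.fetch(1L)); // v1
        System.out.println(tx2.fetch(1L)); // v2
    }
}
```

The garbage collection Kevin mentions would fall out of this indirection: once no active transaction can resolve an old record version, that version's storage can be reclaimed, which is why the GC needs to distinguish index objects from data records.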