Thread: [Modeling-users] Consistency among different processes?
From: Federico H. <fh...@vi...> - 2003-09-24 20:50:40
Modeling's manual seems to imply that it caches objects in memory indefinitely, and it discusses a few issues of cache consistency among EditingContexts, as well as strategies to work around them. The workarounds, and even the "to-be-implemented clean solution" of creating a mechanism for ECs to broadcast changes, all seem to assume, however, that all ECs reside in the same address space (a Zope server, for instance).

This need not be the case, though: imagine a typical client-server application that runs in a single address space on each client machine. If one machine fetches some rows and another one then changes them, won't the first machine ever see the changes? Might it even overwrite them? This doesn't seem like something that can be fixed by broadcasting...

Fede
From: Sebastien B. <sbi...@us...> - 2003-09-25 13:18:27
Federico Heinz <fh...@vi...> wrote:
> Modeling's manual seems to imply that it caches objects in memory
> indefinitely, and it discusses a few issues of cache consistency among
> EditingContexts, as well as strategies to work around them.

To be precise, an object's row is cached as long as one instance for this
row is registered in one EC.

> The workarounds, and even the "to-be-implemented clean solution" of
> creating a mechanism for ECs to broadcast changes, all seem to assume,
> however, that all ECs reside in the same address space (a Zope server,
> for instance). This need not be the case, though: imagine a typical
> client-server application that runs in a single address space on each
> client machine. If one machine fetches some rows and another one then
> changes them, won't the first machine ever see the changes? Might it
> even overwrite them? This doesn't seem like something that can be fixed
> by broadcasting...

I see the point. This is the current status. Say you have two instances
(so, two address spaces) with two ECs, ec1 and ec2. Both query and update
an object obj1.

- if ec1 saves its changes, ec2 will not see the changes, because it has
  already fetched (and cached) obj1. We'll need the 'refresh' parameter
  for fetch() to actually see the changes,

- if ec1 saves its changes, then ec2 saves its changes: ec2 will silently
  override the changes made by ec1, and won't even notice this.

(In fact, as the documentation says and as you noted, we currently also
have this problem between two different ECs in the same address space
--but this will be solved by delivering notifications to the appropriate
objects)

Now imagine that optimistic locking is implemented. Optimistic locking
examines the data when it is about to be saved in the db. When the data
in the db differs from the cached data, an error is raised. This general
mechanism can be used to make different address spaces notice the changes
when they save their own (while the 'refresh' parameter of fetch() would
enable an EC to discard its cache and its changes, and to refetch the
data). Of course, optimistic locking will allow you to supply your own
delegate to handle such situations: for example, you might decide to
override the changes for a given entity, or to mix the changes observed
in the db with the ones made on the object, etc. This would probably
address most problems, I think.

Now if you want two different address spaces to be notified of changes
made by the other before any attempt to save the changes in the db, we
would need a more general notification mechanism able to broadcast
changes through the network --but even then, I suspect this is a hard
problem to solve.

Another, cleaner solution (and maybe the only one that can be guaranteed
to be 100% safe) would be to explicitly lock the appropriate row before
any attempt to modify an object, and to release the lock only after the
changes have been saved --this is the so-called pessimistic locking
strategy.

-- Sébastien.
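To make the lost-update scenario concrete, here is a minimal sketch -- it
assumes the usual EditingContext API (fetch() and saveChanges()) plus a
hypothetical 'Author' entity with a 'name' attribute and generated
getName()/setName() accessors, so adapt the names to your own model:

    from Modeling.EditingContext import EditingContext

    # Pretend ec1 and ec2 live in two different processes/address spaces.
    ec1 = EditingContext()
    ec2 = EditingContext()

    # Both contexts fetch (and cache) the same row.
    a1 = ec1.fetch('Author', qualifier="name == 'Hugo'")[0]
    a2 = ec2.fetch('Author', qualifier="name == 'Hugo'")[0]

    a1.setName('Victor Hugo')
    ec1.saveChanges()        # committed to the database

    print a2.getName()       # ec2 still shows 'Hugo': the change is unseen

    a2.setName('V. Hugo')
    ec2.saveChanges()        # silently overwrites ec1's update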
From: Federico H. <fh...@vi...> - 2003-09-25 16:41:19
On Thu, 2003-09-25 at 10:18, Sebastien Bigaret wrote:
> To be precise, an object's row is cached as long as one instance for
> this row is registered in one EC.

OK... So the question w.r.t. cache lifetime seems to be "when does a row
get deregistered from an EC?". It would seem reasonable to think that
every time the EC gets a user-level fetch request (as opposed to a fetch
request due to accessing a fault object), it clears its cache, since the
application is now obviously interested in another set of objects instead
of the ones already in memory. This could create problems if the
application kept references to objects between fetches, but I'd argue
that doing so is a Bad Thing to boot. Should the need arise for this kind
of functionality (kind of hard to imagine, but life is weird), a method
cumulativeFetch() could be added to the EditingContext, which fetches
without clearing the cache first.

> I see the point. This is the current status. Say you have two instances
> (so, two address spaces) with two ECs, ec1 and ec2. Both query and
> update an object obj1.

Your description matches what I figured, and I like the optimistic
locking idea. Is the implementation of optimistic locking scheduled any
time soon? Any idea of how much effort it would entail to implement?

> (In fact, as the documentation says and as you noted, we currently also
> have this problem between two different ECs in the same address space
> --but this will be solved by delivering notifications to the
> appropriate objects)

I must admit I'm kinda skeptical about the notification idea... Assume
ec1 and ec2 above are in the same address space now. When ec1 commits
changes to object x, it can notify ec2 of this... but what is ec2 going
to do with this information? If ec2 has uncommitted changes to x, it has
to resort to the same kind of logic that we'd use in the optimistic
locking case. In the end, then, the only thing we gain is that we skip a
fetch. Not that this isn't important performance-wise; the point I want
to make is that notification alone does not solve the problem, we also
need the optimistic lock resolution for it to work.

> Now if you want two different address spaces to be notified of changes
> made by the other before any attempt to save the changes in the db, we
> would need a more general notification mechanism able to broadcast
> changes through the network --but even then, I suspect this is a hard
> problem to solve.

This would be a nightmare: gazillions of things could go wrong, and they
would certainly do so in the worst possible sequence. We don't want to
pursue this.

> Another, cleaner solution (and maybe the only one that can be
> guaranteed to be 100% safe) would be to explicitly lock the appropriate
> row before any attempt to modify an object, and to release the lock
> only after the changes have been saved --this is the so-called
> pessimistic locking strategy.

We could implement this as a method of persistent objects, so that
x.lock() would perform a locking read on the row until the transaction is
committed or rolled back. Of course, this means that the programmer will
have to take care of which objects to lock, but such is the fate of the
pessimistic locking programmer :-)

Fede
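For what it's worth, a rough sketch of what such an x.lock() could do
underneath -- a plain locking read (SELECT ... FOR UPDATE) through the
DB-API connection; the helper, table and column names are made up for the
example:

    # Hypothetical helper: lock an object's row until the enclosing
    # transaction is committed or rolled back. 'connection' is a DB-API
    # connection with a 'format' paramstyle (e.g. psycopg); names are
    # illustrative only.
    def lock_row(connection, table, pk_column, pk_value):
        cursor = connection.cursor()
        sql = ("SELECT %(pk)s FROM %(table)s WHERE %(pk)s = %%s FOR UPDATE"
               % {'pk': pk_column, 'table': table})
        cursor.execute(sql, (pk_value,))   # blocks other writers
        return cursor.fetchone()

    # e.g. lock_row(conn, 'AUTHOR', 'ID', author_id), modify, then commit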
From: Sebastien B. <sbi...@us...> - 2003-09-25 18:40:49
Federico Heinz <fh...@vi...> wrote:
> On Thu, 2003-09-25 at 10:18, Sebastien Bigaret wrote:
> > To be precise, an object's row is cached as long as one instance for
> > this row is registered in one EC.
>
> OK... So the question w.r.t. cache lifetime seems to be "when does a
> row get deregistered from an EC?". It would seem reasonable to think
> that every time the EC gets a user-level fetch request (as opposed to a
> fetch request due to accessing a fault object), it clears its cache,
> since the application is now obviously interested in another set of
> objects instead of the ones already in memory.

In fact, there are two levels of caching.

1. Within the EC:
   - you fetch an object obj1, and modify it,
   - then you submit another query, which returns obj1 as well: there,
     you don't want to override the modifications you've made but not
     saved yet.

2. The database's rows cache, held by Database, to which the framework
   refers for various tasks, such as: building fetched objects, computing
   the changes that need to be forwarded to the database, etc.

> This could create problems if the application kept references to
> objects between fetches, but I'd argue that doing so is a Bad Thing to
> boot. Should the need arise for this kind of functionality (kind of
> hard to imagine, but life is weird), a method cumulativeFetch() could
> be added to the EditingContext, which fetches without clearing the
> cache first.

When calling fetch(), both mechanisms can be triggered:

1. -> if the object already exists in the EC, possibly modified, you'll
      get that one;

2. -> otherwise, the database cache is searched for the row, and if
      found, that one will be used instead.

(1.) is probably something you do not want to change; (2.) can be
annoying, and the situations where it is annoying are the ones 'refresh'
will address (and in addition to the default mechanism, it will allow you
to do whatever you want through a specific delegate if the object
actually changed, just like with optimistic locking).

In fact, clearing the cache cannot be the default, simply because you
probably do not want the framework to modify the data behind your back.
Suppose, for example, that a previously fetched object has been deleted
in the meantime (by another application): what should the framework do
when fetching? Should it take the responsibility of deleting the object
in the EC that fetched the data? Or suppose you modified the
relationships in the EC, and that these relationships have changed in the
database by the time you fetch: discarding your data could lead to
inconsistencies in the graph of objects, since most of the time
relationships have an inverse (and constitute bi-directional UML
associations). When you say that it is a bad thing to hold references to
objects between fetches, you forget that the objects themselves actually
hold references to the others they are in relation with.

Now if you want to be sure to get fresh data (until 'refresh' for fetch()
is available), you can make sure that the rows are deregistered by
calling ec.dispose() on (each of) your ECs. Be aware that this method
invalidates any object the EC holds and that it discards any
updates/deletes/etc. that are not saved yet. This also has a significant
impact on performance, since every object will need to be refetched and
rebuilt.

If it's not clear enough, feel free to ask for more ;) Maybe I'm not
thinking/answering the right way, so if you have a specific example in
mind, that could help.
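For illustration, the dispose() approach in a nutshell -- assuming a
hypothetical 'Author' entity; the point is simply that dispose() drops
the cached rows together with any unsaved changes, so that the next fetch
goes back to the database:

    from Modeling.EditingContext import EditingContext

    ec = EditingContext()
    authors = ec.fetch('Author')   # rows are now cached
    # ... modify some authors ...
    ec.saveChanges()

    # Done with this working set: invalidate the EC and deregister its
    # cached rows (any unsaved changes would be discarded here too).
    ec.dispose()

    ec = EditingContext()          # start afresh
    authors = ec.fetch('Author')   # refetched from the db, not the cache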
> > I see the point. This is the current status. Say you have two
> > instances (so, two address spaces) with two ECs, ec1 and ec2. Both
> > query and update an object obj1.
>
> Your description matches what I figured, and I like the optimistic
> locking idea. Is the implementation of optimistic locking scheduled any
> time soon? Any idea of how much effort it would entail to implement?

Not right now, but I can make a plan for it, say, this week-end if you
wish.

> > (In fact, as the documentation says and as you noted, we currently
> > also have this problem between two different ECs in the same address
> > space --but this will be solved by delivering notifications to the
> > appropriate objects)
>
> I must admit I'm kinda skeptical about the notification idea... Assume
> ec1 and ec2 above are in the same address space now. When ec1 commits
> changes to object x, it can notify ec2 of this... but what is ec2 going
> to do with this information? If ec2 has uncommitted changes to x, it
> has to resort to the same kind of logic that we'd use in the optimistic
> locking case. In the end, then, the only thing we gain is that we skip
> a fetch. Not that this isn't important performance-wise; the point I
> want to make is that notification alone does not solve the problem, we
> also need the optimistic lock resolution for it to work.

Agreed, if only because the modifications could have been made by any
bash/perl/... script which won't post any notifications ;) Back to the
notifications: at least they could solve the case where the framework
runs in a single address space (this is the case in Zope, for example, or
in any threaded application) and an EC saves changes that you'd like to
see appear in the other ECs.

> > Now if you want two different address spaces to be notified of
> > changes made by the other before any attempt to save the changes in
> > the db, we would need a more general notification mechanism able to
> > broadcast changes through the network --but even then, I suspect this
> > is a hard problem to solve.
>
> This would be a nightmare: gazillions of things could go wrong, and
> they would certainly do so in the worst possible sequence. We don't
> want to pursue this.

I really like the way you put it ;) and totally agree. In fact, this also
applies to particular situations where the data can be changed by any
means outside the framework. Such situations require specific and
specialized use-cases and actions, so it makes sense, I guess, to leave
this open (but we still need to provide the tools for handling it, such
as 'refresh' and optimistic locking).

> > Another, cleaner solution (and maybe the only one that can be
> > guaranteed to be 100% safe) would be to explicitly lock the
> > appropriate row before any attempt to modify an object, and to
> > release the lock only after the changes have been saved --this is the
> > so-called pessimistic locking strategy.
>
> We could implement this as a method of persistent objects, so that
> x.lock() would perform a locking read on the row until the transaction
> is committed or rolled back. Of course, this means that the programmer
> will have to take care of which objects to lock, but such is the fate
> of the pessimistic locking programmer :-)

Yes, that could be done; this is in fact the very basis for automatic
pessimistic locking: lock the object as soon as it is modified (by
binding lock() to willChange()), and release the lock when it is saved
and/or refaulted.

-- Sébastien.
From: Federico H. <fh...@vi...> - 2003-09-25 21:22:08
I think I have not explained myself well enough, and you're answering
just what I said, not what I meant :-)

The point I'm trying to make is that applications usually let the user
perform transactions on a certain part of the database, and there's
little point in keeping the data from previous transactions around. For
example, imagine a simple app that allows you to delete, insert or modify
an Author's info. The user of such an app will repeatedly look up an
Author, change an attribute or two, commit the change, and start over
again. When the user looks up the second Author, it doesn't make a lot of
sense to keep the first Author's object around, does it? Yes, it is
possible that the application will let the user navigate the second
Author's books, and if the first Author has co-authored a book with the
second, it might also let the user navigate back to the first Author...
but I think it's affordable (and in some cases even desirable) that this
navigation results in the first Author being fetched again from the
database, instead of just fishing a likely stale copy out of the cache.

On Thu, 2003-09-25 at 15:40, Sebastien Bigaret wrote:
> In fact, there are two levels of caching.
> 1. Within the EC:
>    - you fetch an object obj1, and modify it,
>    - then you submit another query, which returns obj1 as well: there,
>      you don't want to override the modifications you've made but not
>      saved yet.

Hmmm... Well, this is the thing. I mean, what are you doing submitting
another query while you still have uncommitted changes? I understand this
is exactly the right behavior if you are traversing a series of
relationships that brings you back to the original table/entity, but my
understanding is that when the program says ec.fetch(...), it is actually
stating that it's done with the results of the last fetch and wants to
start anew. So if you try to fetch while changes are pending, the EC
should either commit the changes (I don't like it) or raise an exception
(much better).

> 2. The database's rows cache, held by Database, to which the framework
>    refers for various tasks, such as: building fetched objects,
>    computing the changes that need to be forwarded to the database,
>    etc.

I'm arguing that for most applications this cache ought to be flushed
with each user-level fetch. I understand that, for single-user
applications, longer-term caching can be a significant performance boost
(although it may, as noted in the documentation, lead to memory footprint
bloat if the application is not restarted regularly). In any other
environment, I feel that the risk of the cache becoming stale seriously
outweighs the performance concerns.

> When calling fetch(), both mechanisms can be triggered:
> 1. -> if the object already exists in the EC, possibly modified, you'll
>       get that one;

I've argued above that fetch() should fail if there are pending changes;
the sketch below shows the kind of guard I have in mind.

> 2. -> otherwise, the database cache is searched for the row, and if
>       found, that one will be used instead.
>    (2.) can be annoying, and the situations where it is annoying are
>    the ones 'refresh' will address (and in addition to the default
>    mechanism, it will allow you to do whatever you want through a
>    specific delegate if the object actually changed, just like with
>    optimistic locking)

I agree that optimistic locking could make the long-term caching of rows
workable. It will also, however, make conflicts more likely, because rows
that have been sitting longer in the database cache have a larger
probability of becoming stale.
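That guard would look roughly like this -- only a sketch, and hasChanges()
is an assumption about how the EC reports pending modifications:

    class PendingChangesError(Exception):
        pass

    def strict_fetch(ec, entity_name, qualifier=None):
        # Refuse a user-level fetch while modifications are uncommitted;
        # the caller must saveChanges() or dispose() first.
        if ec.hasChanges():
            raise PendingChangesError("uncommitted changes: save or "
                                      "dispose before fetching again")
        return ec.fetch(entity_name, qualifier=qualifier)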
> In fact, clearing the cache cannot be the default, simply because you
> probably do not want the framework to modify the data behind your back.
> Suppose, for example, that a previously fetched object has been deleted
> in the meantime (by another application): what should the framework do
> when fetching? Should it take the responsibility of deleting the object
> in the EC that fetched the data?

If the database doesn't keep a long-term cache, the call to fetch() will
simply not return the deleted row. If the deletion takes place after the
fetch(), of course, we'll have to resort to the whole optimistic locking
thing. Come to think of it, I think most of my argument rests upon the
idea that it's desirable to minimize the likelihood of optimistic locking
conflicts, which sounds intuitively right to me, but I don't have any
hard data to back it up.

> Not right now, but I can make a plan for it, say, this week-end if you
> wish.

Well, that would be great! I'm trying (and still failing :-) ) to figure
out which module does what in the framework, so an expert opinion on what
would need to be done to get optimistic locking and vertical mapping
working would be a wonderful thing.

> > I must admit I'm kinda skeptical about the notification idea...
>
> Agreed, if only because the modifications could have been made by any
> bash/perl/... script which won't post any notifications ;) Back to the
> notifications: at least they could solve the case where the framework
> runs in a single address space (this is the case in Zope, for example,
> or in any threaded application) and an EC saves changes that you'd like
> to see appear in the other ECs.

I'm saying that I don't see how notification could solve conflicts even
in a single address space. If both ec1 and ec2 have pending modifications
for obj1, and ec1 commits the change and notifies ec2... what will ec2 do
with its changes?

Fede
From: Sebastien B. <sbi...@us...> - 2003-09-25 22:08:56
Federico Heinz <fh...@vi...> wrote:
> The point I'm trying to make is that applications usually let the user
> perform transactions on a certain part of the database, and there's
> little point in keeping the data from previous transactions around.

Let me rephrase your idea, so that we're sure we're speaking of the same
thing: you consider that fetching is the preliminary phase, then you
modify objects (without fetching explicitly anymore), then you save the
changes and finally discard the object(s) you fetched.

If this is what you want, you can simply ec.dispose() after you're done
with the changes, and the objects will be invalidated and the
corresponding cached rows removed (I assume here that you only have one
EC at a time). This way, you'll get the exact behaviour you're asking
for.

> For example, imagine a simple app that allows you to delete, insert or
> modify an Author's info. The user of such an app will repeatedly look
> up an Author, change an attribute or two, commit the change, and start
> over again. When the user looks up the second Author, it doesn't make a
> lot of sense to keep the first Author's object around, does it? Yes, it
> is possible that the application will let the user navigate the second
> Author's books, and if the first Author has co-authored a book with the
> second, it might also let the user navigate back to the first Author...
> but I think it's affordable (and in some cases even desirable) that
> this navigation results in the first Author being fetched again from
> the database, instead of just fishing a likely stale copy out of the
> cache.

Agreed, this is possible --but you must also understand that other people
might think differently, esp. w.r.t. performance issues. Here we just
have a couple of objects, so this does not make much difference, but if
you need to work on a bigger set of objects that's another story.
Speaking of performance, here are some figures I've just produced on my
installation:

- fetching 5000 simple objects (one attribute, a to-one and a to-many
  relationship, plus an ID) -> 4.77s

- fetching the same 5000 objects while they are already fetched in the
  EC: 798 ms

That's a gain factor of ~6x --just because in the 2nd case, objects were
not re-created and re-populated with their values.

> On Thu, 2003-09-25 at 15:40, Sebastien Bigaret wrote:
> > In fact, there are two levels of caching.
> > 1. Within the EC:
> >    - you fetch an object obj1, and modify it,
> >    - then you submit another query, which returns obj1 as well:
> >      there, you don't want to override the modifications you've made
> >      but not saved yet.
>
> Hmmm... Well, this is the thing. I mean, what are you doing submitting
> another query while you still have uncommitted changes? I understand
> this is exactly the right behavior if you are traversing a series of
> relationships that brings you back to the original table/entity, but my
> understanding is that when the program says ec.fetch(...), it is
> actually stating that it's done with the results of the last fetch and
> wants to start anew. So if you try to fetch while changes are pending,
> the EC should either commit the changes (I don't like it) or raise an
> exception (much better).

Okay, let's go for a little illustration! I want to be able to fetch
while some changes are uncommitted, because I may be in the middle of a
modification process and now, for example, I need to fetch some other
objects to add them to my initial object's relationships. And I want both
the initial changes and the ones I make after the subsequent fetch(es) to
be saved atomically w.r.t. the db.

For example, I may want to fetch an author, get its books, remove all
books from the author's list, delete the author, fetch another author
based on some user-supplied criteria (no traversal here, but a real
fetch), assign the former author's books to the latter's list of books,
and then, and only then, save the changes.
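In code, that sequence might look like the sketch below. The Author/Book
model, the qualifiers, and the accessor names (getBooks(), deleteObject(),
the ...BothSidesOfRelationshipWithKey() calls) are assumptions to be
adapted to the actual model and generated code:

    from Modeling.EditingContext import EditingContext

    ec = EditingContext()

    old = ec.fetch('Author', qualifier="name == 'Old Author'")[0]
    books = list(old.getBooks())

    # Detach the books from the author we are about to delete.
    for book in books:
        old.removeObjectFromBothSidesOfRelationshipWithKey(book, 'books')
    ec.deleteObject(old)

    # A real fetch, not a traversal -- the changes above are still pending.
    new = ec.fetch('Author', qualifier="name == 'New Author'")[0]
    for book in books:
        new.addObjectToBothSidesOfRelationshipWithKey(book, 'books')

    ec.saveChanges()   # everything above reaches the db atomically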
> > 2. The database's rows cache, held by Database, to which the
> >    framework refers for various tasks, such as: building fetched
> >    objects, computing the changes that need to be forwarded to the
> >    database, etc.
>
> I'm arguing that for most applications this cache ought to be flushed
> with each user-level fetch. I understand that, for single-user
> applications, longer-term caching can be a significant performance
> boost (although it may, as noted in the documentation, lead to memory
> footprint bloat if the application is not restarted regularly). In any
> other environment, I feel that the risk of the cache becoming stale
> seriously outweighs the performance concerns.

I think we both could argue endlessly ;) but I do understand your point.
All that I say is that both mechanisms should be supported. And they are,
actually.

[...]

> > 2. -> otherwise, the database cache is searched for the row, and if
> >       found, that one will be used instead.
> >    (2.) can be annoying, and the situations where it is annoying are
> >    the ones 'refresh' will address (and in addition to the default
> >    mechanism, it will allow you to do whatever you want through a
> >    specific delegate if the object actually changed, just like with
> >    optimistic locking)
>
> I agree that optimistic locking could make the long-term caching of
> rows workable. It will also, however, make conflicts more likely,
> because rows that have been sitting longer in the database cache have a
> larger probability of becoming stale.
>
> > In fact, clearing the cache cannot be the default, simply because you
> > probably do not want the framework to modify the data behind your
> > back. Suppose, for example, that a previously fetched object has been
> > deleted in the meantime (by another application): what should the
> > framework do when fetching? Should it take the responsibility of
> > deleting the object in the EC that fetched the data?
>
> If the database doesn't keep a long-term cache, the call to fetch()
> will simply not return the deleted row. If the deletion takes place
> after the fetch(), of course, we'll have to resort to the whole
> optimistic locking thing. Come to think of it, I think most of my
> argument rests upon the idea that it's desirable to minimize the
> likelihood of optimistic locking conflicts, which sounds intuitively
> right to me, but I don't have any hard data to back it up.

That sounds like a reasonable goal, but others will argue that this is
not their priority number one. In other words, that's an
application-design decision, and I do not want to make this decision
within the framework; again, I think we'd better offer developers the
choice by giving them the tools. Agreed, however, some of these tools are
still missing.

> > Not right now, but I can make a plan for it, say, this week-end if
> > you wish.
>
> Well, that would be great! I'm trying (and still failing :-) ) to
> figure out which module does what in the framework, so an expert
> opinion on what would need to be done to get optimistic locking and
> vertical mapping working would be a wonderful thing.

Okay, I'll try hard to make this happen this week-end, then. BTW, I know
there should be documentation for the framework's architecture. Hopefully
this will be done one day, but the todo list is sooo long...

> > > I must admit I'm kinda skeptical about the notification idea...
> >
> > Agreed, if only because the modifications could have been made by any
> > bash/perl/... script which won't post any notifications ;) Back to
> > the notifications: at least they could solve the case where the
> > framework runs in a single address space (this is the case in Zope,
> > for example, or in any threaded application) and an EC saves changes
> > that you'd like to see appear in the other ECs.
>
> I'm saying that I don't see how notification could solve conflicts even
> in a single address space. If both ec1 and ec2 have pending
> modifications for obj1, and ec1 commits the change and notifies ec2...
> what will ec2 do with its changes?

You can, for example:

- either ignore the changes and rely on optimistic locking (this will
  probably be the default),

- or decide to examine the objects, apply the saved changes, then
  re-apply the uncommitted changes. (Example: you modify a person's phone
  number; at this point you get a notification saying that the phone
  number and the middle name have changed: you apply those changes, then
  re-apply your uncommitted change to the phone number -- a sketch of
  this merge strategy follows below.) [This can maybe be done
  automatically, although there are some subtle points when it comes to
  relationships --this needs to be investigated]

- or ask the user,

- ... add your application requirements here ;)
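To make that second option concrete, here is a tiny sketch of the merge
logic on plain attribute dictionaries; how the framework would actually
hand these values to a delegate is still an open question, so the data
layout here is an assumption:

    def merge_changes(saved_values, our_snapshot, our_current_values):
        # Keep only what we changed locally and haven't committed yet.
        local_changes = dict([(key, value)
                              for key, value in our_current_values.items()
                              if our_snapshot.get(key) != value])
        merged = saved_values.copy()   # start from the committed state...
        merged.update(local_changes)   # ...and re-apply our pending edits
        return merged

    saved    = {'phone': '555-0199', 'middleName': 'J.'}  # saved by ec1
    snapshot = {'phone': '555-0100', 'middleName': None}  # what ec2 fetched
    current  = {'phone': '555-0123', 'middleName': None}  # ec2's pending edit
    print merge_changes(saved, snapshot, current)
    # phone keeps ec2's pending edit, middleName picks up ec1's save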
You can think of these notifications as a means to provide specific,
application-dependent behaviour for minimizing failures under optimistic
locking strategies, at least within a single address space. Moreover,
these notifications are really needed if for any reason you ''choose''
the no-locking policy (which is the only supported policy for now, and
the reason why the User's Guide details the problem when using one EC per
session).

Does all this make sense w.r.t. your own claims & requirements?

-- Sébastien.
From: Federico H. <fh...@vi...> - 2003-09-25 23:59:35
On Thu, 2003-09-25 at 19:08, Sebastien Bigaret wrote:
> If this is what you want, you can simply ec.dispose() after you're done
> with the changes, and the objects will be invalidated and the
> corresponding cached rows removed (I assume here that you only have one
> EC at a time). This way, you'll get the exact behaviour you're asking
> for.

OK, sold! :-)

> Okay, I'll try hard to make this happen this week-end, then. BTW, I
> know there should be documentation for the framework's architecture.
> Hopefully this will be done one day, but the todo list is sooo long...

Sounds familiar... :-) We'll have to communicate this stuff internally,
so we're likely to produce *some* form of representation of the
architecture. If we do, we'll definitely share it.

> You can think of these notifications as a means to provide specific,
> application-dependent behaviour for minimizing failures under
> optimistic locking strategies, at least within a single address space.

It sounds to me that what you'd be getting is more like an early
announcement that a conflict is coming, but the conflict-resolution logic
will still have to be there.

> Moreover, these notifications are really needed if for any reason you
> ''choose'' the no-locking policy (which is the only supported policy
> for now, and the reason why the User's Guide details the problem when
> using one EC per session).

This is, of course, true. But if we're talking priorities, I think
optimistic locking would come first, because you'll pretty much need its
infrastructure to resolve the conflict you've just been informed of.

> Does all this make sense w.r.t. your own claims & requirements?

It does. And I'm not making claims or requiring things... I'm just
exploring possibilities.

By the way, I *like* this whole exchange! It makes me feel good about
working with you.

Fede
From: Sebastien B. <sbi...@us...> - 2003-09-26 22:34:13
Federico Heinz <fh...@vi...> wrote:
[...]
> > Okay, I'll try hard to make this happen this week-end, then. BTW, I
> > know there should be documentation for the framework's architecture.
> > Hopefully this will be done one day, but the todo list is sooo
> > long...
>
> Sounds familiar... :-) We'll have to communicate this stuff internally,
> so we're likely to produce *some* form of representation of the
> architecture. If we do, we'll definitely share it.

That would be great. Do not hesitate to share it early, this is something
we can work on together if you wish.

> > You can think of these notifications as a means to provide specific,
> > application-dependent behaviour for minimizing failures under
> > optimistic locking strategies, at least within a single address
> > space.
>
> It sounds to me that what you'd be getting is more like an early
> announcement that a conflict is coming, but the conflict-resolution
> logic will still have to be there.

Right, the conflict-resolution logic still needs to be there, but that's
an announcement that a "conflict" has already occurred: an object is now
out-of-sync because another EC saved its changes (inside a single address
space, we rely on the same db-cache). On the other hand, we'll also need
another conflict resolution (possibly the same) for when another process
has modified the data --that's when optimistic locking fails.

> > Moreover, these notifications are really needed if for any reason you
> > ''choose'' the no-locking policy (which is the only supported policy
> > for now, and the reason why the User's Guide details the problem when
> > using one EC per session).
>
> This is, of course, true. But if we're talking priorities, I think
> optimistic locking would come first, because you'll pretty much need
> its infrastructure to resolve the conflict you've just been informed
> of.

My ideas are not really clear right now, so I'll wait a little before
stating anything here --I suspect both are a little more than just
complementary, but I can't find a clear explanation now.

> > Does all this make sense w.r.t. your own claims & requirements?
>
> It does. And I'm not making claims or requiring things... I'm just
> exploring possibilities.
>
> By the way, I *like* this whole exchange! It makes me feel good about
> working with you.

Kewl ;) And I'm sure good things are to happen --and esp. your experience
with vertical mapping can be quite a driving force for implementing it.

My turn to have some rest now. Regards,

-- Sébastien.