pypersyst-devel Mailing List for Pypersyst - Python Persistence System
Brought to you by: pobrien
2002: Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct (33) | Nov (1) | Dec (22)
2003: Jan (38) | Feb (4) | Mar (3) | Apr (3) | May | Jun | Jul | Aug (1) | Sep (60) | Oct (35) | Nov | Dec
2004: Jan | Feb (2) | Mar | Apr | May (2) | Jun (17) | Jul (1) | Aug | Sep | Oct | Nov | Dec
From: Matthew S. <gld...@us...> - 2004-07-03 20:01:47
Pat, and whoever is following PyPerSyst CVS:

The commits I made at 14:16 today are apparently just a temporary patch :-) I just ran into the same problem with what I am working on now, which is how to declare the adaptation from a specific type of display supplier and its corresponding update or delete suppliers.

I have prior obligations this afternoon/evening so I regret that I have to stop development on this for the day ****agggh!**** but I will resume development on this late tonight or tomorrow morning.

I will be reading this to see if it helps: http://peak.telecommunity.com/protocol_ref/protocols-context.html

And I will also be considering ripping out some of what I've been doing and seeing if I can figure out a cleaner approach that doesn't involve so much magic :-)

- Matthew
From: <po...@or...> - 2004-06-25 22:20:41
Matthew Scott <gld...@us...> writes:

> After I took this out since unit tests passed for both PyPerSyst and
> the app using it, I remembered why I put it in -- it's so that the
> extent-aware Grid/Table classes know how to pretty up column headings
> :)
>
> Index: extent.py
> ===================================================================
> RCS file: /cvsroot/pypersyst/pypersyst/pypersyst/entity/extent.py,v
> retrieving revision 1.29
> diff -a -u -r1.29 extent.py
> --- extent.py	19 Jun 2004 21:29:39 -0000	1.29
> +++ extent.py	25 Jun 2004 21:34:50 -0000
> @@ -325,6 +325,8 @@
>          """Synchronize with the current EntityClass definition."""
>          self._attrSpec = EntityClass._attrSpec[:]
>          self._altkeySpec = EntityClass._altkeySpec[:]
> +        if hasattr(EntityClass, '_fieldSpec'):
> +            self._fieldSpec = EntityClass._fieldSpec.copy()
>          self._forClassName = EntityClass.__name__
>          self._refreshkeys()
>          self._refreshlinks()

I checked this in. I was kind of holding off on all the field-related stuff until I had time to really give them the attention they need. But this change looks pretty reasonable and would likely stay no matter what happens to fields, so it's in. Thanks.

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
From: Matthew S. <gld...@us...> - 2004-06-25 21:37:29
After I took this out since unit tests passed for both PyPerSyst and the app using it, I remembered why I put it in -- it's so that the extent-aware Grid/Table classes know how to pretty up column headings :)

Index: extent.py
===================================================================
RCS file: /cvsroot/pypersyst/pypersyst/pypersyst/entity/extent.py,v
retrieving revision 1.29
diff -a -u -r1.29 extent.py
--- extent.py	19 Jun 2004 21:29:39 -0000	1.29
+++ extent.py	25 Jun 2004 21:34:50 -0000
@@ -325,6 +325,8 @@
         """Synchronize with the current EntityClass definition."""
         self._attrSpec = EntityClass._attrSpec[:]
         self._altkeySpec = EntityClass._altkeySpec[:]
+        if hasattr(EntityClass, '_fieldSpec'):
+            self._fieldSpec = EntityClass._fieldSpec.copy()
         self._forClassName = EntityClass.__name__
         self._refreshkeys()
         self._refreshlinks()
From: Matthew S. <gld...@us...> - 2004-06-25 21:30:44
Is the attached patch closer to what you had in mind? It's smaller and simpler and probably more efficient :-) The changes it makes are:

- pypersyst.field.field.Field already had a _bypass attribute that could be set to True to keep __init__ from validating the value. __init__ now has a 'bypass' argument defaulting to None; if set to something other than None, it will override the _bypass attribute.

- pypersyst.entity.base.Entity.__init__ now accepts a 'bypass' argument defaulting to None. If it is set to True or False, it is passed along to the field constructor, which will act as described above. Additionally, if it is set to True, any field for which there is no corresponding value in 'attrs' will be given the value None.

- pypersyst.entity.entity.Entity.txb_create sets bypass=True when creating its surrogate instance. Any custom txb_create in someone's code would need to do this as well.

Matthew Scott wrote:
> Patrick K. O'Brien wrote:
>
>> So instead of Field.__init__() calling self.set(value), the value
>> would not be passed as an argument to __init__, and the user of the
>> field would have to call field.set(value) explicitly. That would
>> allow us to construct a field with no value, which is what you want
>> for a create surrogate.
>
> I agree that having this kind of ability in Field rather than hacked
> into Entity would be smarter though. Perhaps it already supports this
> and I didn't look closely enough :) I'll investigate and see what I
> discover.
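For anyone skimming the archive, the bypass pattern described above can be sketched roughly like this. This is a hypothetical simplification for illustration, not the actual PyPerSyst code; the validation rule and names here are invented:

```python
class Field:
    _bypass = False  # class-level default: validate on construction

    def __init__(self, value=None, bypass=None):
        # An explicit bypass argument (True or False) overrides the
        # class-level _bypass attribute; None means "use the default".
        if bypass is None:
            bypass = self._bypass
        if bypass:
            self._value = value  # skip validation entirely
        else:
            self.set(value)

    def set(self, value):
        self.validate(value)
        self._value = value

    def validate(self, value):
        # Stand-in rule so the sketch has something to validate.
        if value is None:
            raise ValueError('a value is required')

empty = Field(bypass=True)   # what a create surrogate would do
normal = Field(42)           # normal validating path
```

A create surrogate can then construct all of its fields with bypass=True and fill in real values later, without tripping validation on construction.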
From: Matthew S. <gld...@us...> - 2004-06-25 20:50:06
Patrick K. O'Brien wrote:

> So instead of Field.__init__() calling self.set(value), the value
> would not be passed as an argument to __init__, and the user of the
> field would have to call field.set(value) explicitly. That would
> allow us to construct a field with no value, which is what you want
> for a create surrogate.

The reason I created the subclass on-the-fly was that it was a quick way to allow the None value to be set on a field in a Create surrogate. I agree that having this kind of ability in Field rather than hacked into Entity would be smarter though. Perhaps it already supports this and I didn't look closely enough :) I'll investigate and see what I discover.

- Matthew
From: <po...@or...> - 2004-06-25 18:01:47
po...@or... (Patrick K. O'Brien) writes:

> Matthew Scott <gld...@us...> writes:
>
>> Matthew Scott wrote:
>>> Matthew Scott wrote:
>>>> Attached is said patch.
>>> Hmm... not quite. Updated patch coming soon.
>> Pretty sure this one will do the trick.
>
> My instincts are telling me that I'd rather not instantiate the fields
> themselves on a create builder. I think I'd prefer to have the ui
> construct itself based on the attributes for the field class, rather
> than instantiate a field with a Null/None value, specialized flag,
> etc.
>
> I'll look into this more closely so we can discuss the alternatives.

Having said that, I'm now thinking about the asymmetry that would result -- update and delete surrogates would have field instances, but create surrogates would not. Yuck. So now I'm thinking that perhaps a lazy construction approach would be appropriate for field classes. So instead of Field.__init__() calling self.set(value), the value would not be passed as an argument to __init__, and the user of the field would have to call field.set(value) explicitly. That would allow us to construct a field with no value, which is what you want for a create surrogate.

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
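The lazy-construction idea can be sketched as follows. Again a hypothetical simplification with invented names, not the real implementation:

```python
class LazyField:
    """Sketch of a field that is constructed without a value."""

    def __init__(self):
        # No value argument: construction never validates anything.
        self._value = None
        self._isset = False

    def set(self, value):
        # Validation happens only when a value is explicitly set.
        self.validate(value)
        self._value = value
        self._isset = True

    def validate(self, value):
        pass  # subclasses would add real checks

# Update/delete surrogates would call set() with an existing value;
# a create surrogate simply never calls set(), so the field exists
# but holds no value yet -- no bypass flag needed.
f = LazyField()
f.set(42)
```

The design trade-off under discussion: this keeps construction and validation symmetric across create, update, and delete surrogates, at the cost of making every caller responsible for calling set().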
From: <po...@or...> - 2004-06-25 17:37:53
Matthew Scott <gld...@us...> writes:

> Matthew Scott wrote:
>> Matthew Scott wrote:
>>> Attached is said patch.
>> Hmm... not quite. Updated patch coming soon.
> Pretty sure this one will do the trick.

My instincts are telling me that I'd rather not instantiate the fields themselves on a create builder. I think I'd prefer to have the ui construct itself based on the attributes for the field class, rather than instantiate a field with a Null/None value, specialized flag, etc.

I'll look into this more closely so we can discuss the alternatives.

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
From: <po...@or...> - 2004-06-25 17:31:16
Matthew Scott <gld...@us...> writes:

> Hey Pat,
>
> I know you have some changes in mind based on what we've been
> discussing about transaction builders/"actions", but I was curious
> about what you thought of adding in a txb_delete() as a corollary to
> txb_create() and txb_update().

That was my plan all along. I just considered it trivial compared to create and update, so I put it off until I figured out how I wanted to handle those other two, then promptly forgot to complete the set. That's what happens when you get old. :-) I'll add it today and check it in.

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
From: <po...@or...> - 2004-06-25 17:20:36
Matthew Scott <gld...@us...> writes:

> Hi Donnal,
>
> There may be other use cases that Pat can point out, but one that
> comes to mind that I've made use of a few days ago is this: a
> transaction that wants to use the time that its execution occurred to
> set an attribute of some object.
>
> Since setting the attribute of said object is based on when the
> transaction was executed, the transaction itself must store the time
> at which it was executed in order to maintain the deterministic nature
> of transactions if they are replayed from a log.
>
> Donnal Walter wrote:
>> Hi all,
>>
>> The Clock and EngineClock classes obviously play central roles in
>> PyPerSyst, and at one time I was certain that I understood those
>> roles. Months later, however, I am having a hard time seeing why
>> the time stamp is needed. If the transaction log stores
>> transactions in the order in which they were executed, why is this
>> sequential order not sufficient for reconstructing the database if
>> a crash occurs?
>>
>> Regards,
>> Donnal

Matt's explanation is correct. Here are some additional points.

First, the clock module was recently simplified a good deal. So now there is only one Clock class (EngineClock went away). And it is no longer a shared-state Borg thingie, because that made it difficult to have more than one database open at the same time, since each database is controlled by its own clock.

Actually, it's really the engine that is responsible for the clock. Each database has one engine, and each engine has a clock. (Maybe I should have named it the "timing belt" instead of the clock.)

Having said all that, the clock internals shouldn't be of concern to most developers. The engine is responsible for the clock. The engine ensures that timestamps are deterministic when replaying transactions from a log. The transaction's execute method now takes a time value as a required parameter. The engine provides this value when it executes a transaction.

So now only the engine should be importing the clock module. If you want access to the database time elsewhere, just ask the engine for it: engine.time(). Or, if you use the database class, just ask it, which will ask the engine, which will ask the clock, etc. Clear as mud?

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
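The engine/clock arrangement described above can be sketched as a toy model. This is not the real PyPerSyst engine; apart from engine.time() and the execute(time) signature mentioned in the thread, the class and method names here are invented for illustration:

```python
import time

class Clock:
    def now(self):
        return time.time()

class Engine:
    def __init__(self):
        self._clock = Clock()   # only the engine touches the clock
        self._log = []

    def time(self):
        return self._clock.now()

    def execute(self, txn):
        # The engine stamps the transaction and records the stamp in
        # the log, so the timestamp is part of the durable history.
        t = self.time()
        self._log.append((txn, t))
        return txn.execute(t)

    def replay(self):
        # On recovery, the *logged* time is passed back in, so each
        # transaction sees exactly the time it originally saw --
        # this is what keeps replay deterministic.
        return [txn.execute(t) for txn, t in self._log]

class StampTxn:
    """A transaction that uses its execution time, per Matt's example."""
    def execute(self, t):
        self.stamped_at = t
        return t

engine = Engine()
first = engine.execute(StampTxn())
```

The point of the design: because the transaction receives its time rather than reading the wall clock, replaying the log after a crash reproduces the original timestamps exactly.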
From: Matthew S. <gld...@us...> - 2004-06-23 01:54:14
...of course, on my last message, disregard the subject line because I meant to say txb_delete() instead :)
From: Matthew S. <gld...@us...> - 2004-06-23 01:38:44
Hey Pat,

I know you have some changes in mind based on what we've been discussing about transaction builders/"actions", but I was curious about what you thought of adding in a txb_delete() as a corollary to txb_create() and txb_update(). I'll just use the standard Delete transaction for now but I'm just posting this here in case the idea pops off my brain-stack before more attention is given to it :-)

- Matthew
From: Matthew S. <gld...@us...> - 2004-06-21 22:07:40
Yet another tiny patch for Pat to peruse when he has a chance this week :-)

In pypersyst.entity.extent._sync, does it make sense to add the following to go along with the _attrSpec and _altkeySpec copying?

    if hasattr(EntityClass, '_fieldSpec'):
        self._fieldSpec = EntityClass._fieldSpec.copy()

For now, I'm patching this in to my local copy so that some code based on the wx.Table subclass you made for sandbox/pobrien/browser/browser.py can know how to display "human friendly" labels for fields when possible.

...and I'm posting this kind of stuff here so that other subscribers to this list can have some introspection into PyPerSyst changes beyond the pypersyst-cvs commit messages, however descriptive they may be. :)

- Matthew
From: Matthew S. <gld...@us...> - 2004-06-21 20:22:21
Hi Donnal,

There may be other use cases that Pat can point out, but one that comes to mind that I've made use of a few days ago is this: a transaction that wants to use the time that its execution occurred to set an attribute of some object.

Since setting the attribute of said object is based on when the transaction was executed, the transaction itself must store the time at which it was executed in order to maintain the deterministic nature of transactions if they are replayed from a log.

Donnal Walter wrote:
> Hi all,
>
> The Clock and EngineClock classes obviously play central roles in
> PyPerSyst, and at one time I was certain that I understood those
> roles. Months later, however, I am having a hard time seeing why
> the time stamp is needed. If the transaction log stores
> transactions in the order in which they were executed, why is this
> sequential order not sufficient for reconstructing the database if
> a crash occurs?
>
> Regards,
> Donnal
From: Donnal W. <don...@ya...> - 2004-06-21 20:02:03
Hi all,

The Clock and EngineClock classes obviously play central roles in PyPerSyst, and at one time I was certain that I understood those roles. Months later, however, I am having a hard time seeing why the time stamp is needed. If the transaction log stores transactions in the order in which they were executed, why is this sequential order not sufficient for reconstructing the database if a crash occurs?

Regards,
Donnal
From: Matthew S. <gld...@us...> - 2004-06-21 18:16:47
Matthew Scott wrote:
> Matthew Scott wrote:
>> Attached is said patch.
>
> Hmm... not quite. Updated patch coming soon.

Pretty sure this one will do the trick.
From: Matthew S. <gld...@us...> - 2004-06-21 17:59:05
Matthew Scott wrote:
> Attached is said patch.

Hmm... not quite. Updated patch coming soon.
From: Matthew S. <gld...@us...> - 2004-06-21 17:40:15
> I think I have a general idea of how to solve this problem based on what
> I've done for 'Syst-o-matic so I will post a patch to fix this sometime
> today.

Attached is said patch. Comments in the patch make it pretty self-explanatory.

Oh, and I was able to use the pop() method on a dictionary for the first time, to avoid wrapping something in a try/except block. That was fun :)
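For readers who haven't run across it, dict.pop() with a default is the trick being referred to: it removes and returns a key's value, and the optional second argument replaces the usual try/except KeyError dance.

```python
d = {'name': 'foo'}

# Without a default, pop() raises KeyError for a missing key.
value = d.pop('name')         # removes the key and returns 'foo'

# With a default, no exception handling is needed:
missing = d.pop('name', None)  # key is already gone, returns None
```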
From: Matthew S. <gld...@us...> - 2004-06-21 17:12:34
    def testFieldBuilder(self):
        db = self.db
        b = db.root._classes['Field'].txb_create()
        b.new.name = 'foo'
        db.execute(b.transaction())

The result of this test is as follows:

ERROR: testFieldBuilder (__main__.TestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 23, in testFieldBuilder
    b = db.root._classes['Field'].txb_create()
  File "/home/gldnspud/p/PYTHONPATH/pypersyst/entity/entity.py", line 179, in txb_create
    new = cls._SurrogateClass(attrs={})
  File "/home/gldnspud/p/PYTHONPATH/pypersyst/entity/base.py", line 21, in __init__
    value = attrs[name]
KeyError: 'name'

I haven't thought through a solution to this yet, but here is my understanding of the symptoms.

When a surrogate is created using the txb_create classmethod of an entity class, the class returned by cls._SurrogateClass is a subclass of pypersyst.entity.base.Entity.

If a _fieldSpec is specified for the Entity class for which the surrogate is being created, ...base.Entity.__init__ iterates through all of the _fieldSpec items and tries to create field instances for them, initializing them with the values passed in via the "attrs" argument.

The problem, though, is that with the surrogate created by txb_create, an empty dictionary is passed in as "attrs", thus causing the KeyError exception above.

I think I have a general idea of how to solve this problem based on what I've done for 'Syst-o-matic so I will post a patch to fix this sometime today.
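The failure mode is easy to reproduce in isolation. Here is a sketch with an invented two-field spec standing in for the real _fieldSpec machinery:

```python
_fieldSpec = {'name': str, 'notes': str}

def init_fields(attrs):
    # Mirrors what base.Entity.__init__ does: look up a value in
    # attrs for every declared field.  With attrs={}, the very
    # first lookup raises KeyError -- the txb_create symptom above.
    return {name: attrs[name] for name in _fieldSpec}

ok = init_fields({'name': 'foo', 'notes': ''})  # fine
failed = False
try:
    init_fields({})   # what txb_create effectively does
except KeyError:
    failed = True
```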
From: Matthew S. <gld...@us...> - 2004-05-31 20:52:56
Matthew Scott wrote:
> Will commit updated tests in a few minutes...

Tests pass 100% now. Rollback engine seems to work as advertised so far :)

If using the "Entity smarts" of PyPerSyst, be sure to use pypersyst.entity.rollback.Engine and not pypersyst.engine.rollback.Engine -- this bit me recently, which prompted my writing of a few rollback engine tests...
From: Matthew S. <gld...@us...> - 2004-05-31 20:40:41
Matthew R. Scott wrote:
> from pypersyst.engine.rollback import Engine as RollbackEngine

Ack... imported the wrong rollback engine :) Will commit updated tests in a few minutes...
From: <po...@or...> - 2004-02-22 22:21:48
Norbert Ferchen <nfe...@ba...> writes:

> Hello!
>
> I've spent some time on PyPerSyst and was a bit frustrated about the
> lack of examples. Are there any that I haven't found?

The sandbox is your best source. Take a look at sandbox/pobrien/university or sandbox/gldnspud/djlib for examples.

> But I've tried my very best and stumbled around the
> Root class. After subclassing it, I couldn't find a method to add
> my stuff to it. Why is the branch method hidden by the prefix '_'?

Because it should only be used in a very careful manner. The philosophy of pypersyst is to make things as declarative as possible, and to hide things that shouldn't be publicly accessible, in order to protect the integrity of the data/objects.

> I would like to add a method for __setitem__ to make the behavior
> more like a dictionary.

I don't think that would fit with the intent of the Root class. Take a look at the entity.root.Root class. That might help explain things.

> The last thing I stumbled around with is the Transaction
> class. Is there a pretty idiom for writing efficient transactions?

entity.transaction has generic transactions that can handle all simple inserts, updates, and deletes.

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com
Blog http://www.sum-ergo-cogito.com
From: Norbert F. <nfe...@ba...> - 2004-02-20 15:57:43
Hello!

I've spent some time on PyPerSyst and was a bit frustrated about the lack of examples. Are there any that I haven't found?

But I've tried my very best and stumbled around the Root class. After subclassing it, I couldn't find a method to add my stuff to it. Why is the branch method hidden by the prefix '_'? I would like to add a method for __setitem__ to make the behavior more like a dictionary.

Without adding a branch you get into real trouble trying to close the db. The reason was hard to find and is well documented in the Python Library Reference: '*Warning:* For new-style classes, if __getstate__() returns a false value, the __setstate__() method will not be called.' The __getstate__ will return an empty dictionary after startup, so the first close of an empty db will crash. Without subclassing the Root class the problem does not occur, but I'm wondering about the purpose of this class.

The last thing I stumbled around with is the Transaction class. Is there a pretty idiom for writing efficient transactions?

Regards,
/Norbert Ferchen
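Norbert's pitfall is easy to demonstrate. With pickle protocols 0 and 1 (the defaults in the Python 2 era this thread dates from), a falsy __getstate__ result causes the state to be dropped entirely, so __setstate__ never runs on unpickling. A minimal sketch, where the Root class is a stand-in rather than PyPerSyst's actual Root:

```python
import pickle

class Root:
    def __init__(self):
        self.data = {}          # empty dict right after startup

    def __getstate__(self):
        return self.data        # an empty dict is falsy!

    def __setstate__(self, state):
        self.data = state

# Empty at pickling time: the state is dropped, __setstate__ is
# skipped, and the unpickled object has no 'data' attribute at all.
empty = pickle.loads(pickle.dumps(Root(), 0))

# Non-empty: round-trips normally through __setstate__.
full = Root()
full.data['x'] = 1
restored = pickle.loads(pickle.dumps(full, 0))
```

The usual fix is to have __getstate__ return something that is never falsy, e.g. a one-entry dict like {'data': self.data}.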
From: Donnal W. <don...@ya...> - 2003-10-28 16:06:19
Patrick K. O'Brien:
> Donnal Walter:
>> The domain I have in mind is clinical, and multiple clinicians
>> will need to update the patient information. But for any given
>> patient at any given time, *only one* clinician will typically
>> need to update the information. ...
>
> I understand. Other domains have a similar need to regulate
> access to information according to who is the responsible party
> for updates.

Ok, but what I was suggesting is that READ access be regulated for groups of clinicians, but that UPDATE access be controlled by the client application itself, based on the patients that a clinician *assumes* responsibility for (not is *assigned*). (See comments below on eliminating the server, as well.)

>> PyPerSyst, if I understand correctly, keeps the entire database
>> in memory, but I don't see why there couldn't be multiple
>> "documents" in place of a single "root". Each document would
>> have a "snapshot" and possibly a "log file". Any valid user
>> should be allowed to open a snapshot for reading at any time.
>> But if the user wants to check out a document for updating, the
>> application should first check to see if there is a "log file"
>> for that document. If so, it assumes that another user is
>> updating the document and leaves it alone. If not, it creates
>> a log file: (1) for logging transactions, and (2) for letting
>> other users know that they cannot have write access at
>> present. When the editing is finished, a new snapshot is saved
>> and the log file is deleted (letting another user know that it
>> is safe to open it in update mode).
>
> I think it is important to separate the logical requirements
> from any physical implementation that could support those
> requirements.

As a statement of principle I would agree with this. Nevertheless, the physical implementation is of some importance. First, let me state up front that I am not trying to build a complete patient record system. A lot of patient information will be collected, but many of the applications could easily be envisioned as single user. Second, our current system (which is admittedly inadequate, but for other reasons) is document-based, with clinicians accessing documents on a shared network drive. I was hoping to leverage this type of simplistic shared access, but with a more robust persistence mechanism and much, MUCH more effective custom applications. Part of my goal was to simulate a form of multi-user access WITHOUT needing separate server software.

> Your requirement is to restrict access to an entity. For that
> you need to know who the user is (name and password, perhaps)
> and then be able to assign entities to that user. If an entity
> is already assigned, another user can only have read access to
> the data, but no update capabilities. How this gets implemented
> depends on other requirements.

General access to an entity could be restricted via the operating system. What I was suggesting was that UPDATE access be restricted only via the client software, based on whether it detects that the document is already in use or not. The presence or absence of the log file seemed like a convenient flag (for free). :-)

> In particular I think it needs to be established whether
> information needs to be shared between any of these patient
> documents, or whether they are completely independent of each
> other. If they are as independent as you seem to suggest, then
> your implementation (separate databases for each) would be one
> way to handle things. But if there is a need to share
> information, or look at all patient documents at the same time,
> your implementation might not be the best.

All things being equal, I'd prefer a separate server to control access to various components of the record system. For purposes of proving a concept, however, I only have available a shared network drive. (I'm not sure that I would be allowed to install executable software such as Twisted, for example.) I'm sure what I suggested above is limited both in its power and its flexibility, but if it allows me to demonstrate the superiority of custom applications built with Python, that may be the best I can hope for at present.

> Do you have the need to support any of the following:
>
> <snip>
>
> Please don't laugh if any of those examples is completely
> ridiculous. I'm not a doctor, I'm only pretending to be one
> for Halloween. (Doctor Frankenstein, at your service.)

No, I am not laughing, although we don't have need for the kinds of things you mentioned. For the most part these patient documents *really are* that independent. There actually is not a need for sharing information between documents. This is not to say that we would never want to prepare certain summary reports that draw from a large number of documents, but I have always viewed these as being doable in batch mode.

Incidentally, the need for ad hoc queries is the best argument I can think of for "relational-like" entities. Ad hoc queries are one of the classic problems for object-oriented databases, and setting things up to be more relationally oriented could be a benefit. On the other hand, my applications don't need real-time query capability other than what is provided by the application itself. Aggregated reports can be done in a non-interactive (batch) mode.

Regards

=====
Donnal Walter
Arkansas Children's Hospital
From: <po...@or...> - 2003-10-28 14:07:01
Donnal Walter <don...@ya...> writes:

> --- "Patrick K. O'Brien" <po...@or...> wrote:
>> Did that help answer your questions?
>
> Yes it did, thanks. I realized later, however, that my subject line
> is somewhat misleading. The apps I hope to develop are *multi-user*
> in a sense, but probably not truly *concurrent* users. If you will
> indulge me, let me try to describe the situation.
>
> The domain I have in mind is clinical, and multiple clinicians will
> need to update the patient information. But for any given patient
> at any given time, *only one* clinician will typically need to
> update the information. It would work well for a clinician to be
> able to check out a patient record, and until that record is
> checked back in, other users could *see* the record but would not
> be allowed to modify it. This is not an artificial scenario made up
> to oversimplify the problem. It is the typical situation. In the
> case where two clinicians want to update information at the same
> time, the second could simply be informed (before the fact) that
> the record is checked out temporarily.

I understand. Other domains have a similar need to regulate access to information according to who is the responsible party for updates.

> It might seem like Twisted would be a good solution here, since
> there would be relatively few update conflicts, but IMHO it would
> be better to inform a user that a given record is in use *before*
> editing the data rather than during an attempted transaction.

That could be handled any number of ways.

> PyPerSyst, if I understand correctly, keeps the entire database in
> memory, but I don't see why there couldn't be multiple "documents"
> in place of a single "root". Each document would have a "snapshot"
> and possibly a "log file". Any valid user should be allowed to open
> a snapshot for reading at any time. But if the user wants to check
> out a document for updating, the application should first check to
> see if there is a "log file" for that document. If so, it assumes
> that another user is updating the document and leaves it alone. If
> not, it creates a log file: (1) for logging transactions, and (2)
> for letting other users know that they cannot have write access at
> present. When the editing is finished, a new snapshot is saved and
> the log file is deleted (letting another user know that it is safe
> to open it in update mode).

I think it is important to separate the logical requirements from any physical implementation that could support those requirements. Your requirement is to restrict access to an entity. For that you need to know who the user is (name and password, perhaps) and then be able to assign entities to that user. If an entity is already assigned, another user can only have read access to the data, but no update capabilities. How this gets implemented depends on other requirements.

In particular I think it needs to be established whether information needs to be shared between any of these patient documents, or whether they are completely independent of each other. If they are as independent as you seem to suggest, then your implementation (separate databases for each) would be one way to handle things. But if there is a need to share information, or look at all patient documents at the same time, your implementation might not be the best.

Do you have the need to support any of the following:

* List all the clinicians and the patients they have been assigned.
* List all the patients and their current clinician.
* Tell me which patient was in exam room 3, yesterday at 3:00pm.
* List all the patients who were given EKGs last Friday.
* Calculate the clinician/patient ratios by week for the past 3 months.
* List all the patients treated by Clinician X last week.

Please don't laugh if any of those examples is completely ridiculous. I'm not a doctor, I'm only pretending to be one for Halloween. (Doctor Frankenstein, at your service.)

--
Patrick K. O'Brien
Orbtech http://www.orbtech.com/web/pobrien
-----------------------------------------------
"Your source for Python programming expertise."
-----------------------------------------------
From: Donnal W. <don...@ya...> - 2003-10-28 11:26:17
--- "Patrick K. O'Brien" <po...@or...> wrote:
> How this UpdateRevisionMismatch exception should be handled
> by the user interface is up to the client application. In
> some cases the changes should simply be dropped. In other
> cases, the user may have spent a lot of time typing in the
> changes and would lose a lot of work if you didn't give them
> the option to resolve any anomalies and create a new
> transaction. A lot depends on the particular domain. The
> issues are basically the same as with any multi-user database.
> I'm open to any suggestions if there are other mechanisms
> that would be useful to have support for in PyPerSyst.
> Did that help answer your questions?

Yes it did, thanks. I realized later, however, that my subject line is somewhat misleading. The apps I hope to develop are *multi-user* in a sense, but probably not truly *concurrent* users. If you will indulge me, let me try to describe the situation.

The domain I have in mind is clinical, and multiple clinicians will need to update the patient information. But for any given patient at any given time, *only one* clinician will typically need to update the information. It would work well for a clinician to be able to check out a patient record, and until that record is checked back in, other users could *see* the record but would not be allowed to modify it. This is not an artificial scenario made up to oversimplify the problem. It is the typical situation. In the case where two clinicians want to update information at the same time, the second could simply be informed (before the fact) that the record is checked out temporarily.

It might seem like Twisted would be a good solution here, since there would be relatively few update conflicts, but IMHO it would be better to inform a user that a given record is in use *before* editing the data rather than during an attempted transaction.

PyPerSyst, if I understand correctly, keeps the entire database in memory, but I don't see why there couldn't be multiple "documents" in place of a single "root". Each document would have a "snapshot" and possibly a "log file". Any valid user should be allowed to open a snapshot for reading at any time. But if the user wants to check out a document for updating, the application should first check to see if there is a "log file" for that document. If so, it assumes that another user is updating the document and leaves it alone. If not, it creates a log file: (1) for logging transactions, and (2) for letting other users know that they cannot have write access at present. When the editing is finished, a new snapshot is saved and the log file is deleted (letting another user know that it is safe to open it in update mode).

Regards,

=====
Donnal Walter
Arkansas Children's Hospital