From: Jon M. <jo...@te...> - 2006-06-15 13:11:26
Matthew Buckett wrote:
> Jon Maber wrote:
>> Matthew Buckett wrote:
>>
>> Did you say if your code directly calls the SQL or wraps records in
>> the quotas table in a Bodington PersistentObject subclass?
>>
>> If the latter then database reading and writing will operate through a
>> single instance of that class so you could add a method like this;
>
> Which reminds me we currently have a bit of a problem with SoftCache and
> transactions.
>
> If I edit a user (loading the User object then changing the surname)
> then attempt to save the user, which fails (and the DB transaction rolls
> back), SoftCache will still return the modified user.

Just the kind of problem that you are infinitely qualified to sort out. ;-)

Throwing in my tuppence worth, I think it might take some fairly serious
work to get this right. The SQLDatabase class would have to keep track of
all the PersistentObjects involved in the transaction. At the moment
calling set methods on PersistentObjects doesn't count as part of a
transaction - only saving to the database does - so a really thorough
working over would have to find a solution to that. Perhaps the
PersistentObject set methods could throw exceptions if calling them caused
a constraint to be violated? In some circumstances you need to make two or
more calls to various setter methods and the constraints are unavoidably
broken during the intermediate stages - that would be an area to work on
in terms of transactions. (There's a rough sketch of the setter idea in
the P.S. below.)

In my Vivida project I'm working to a much simpler data model, which
imposes limitations but makes things easier for the programmer and, I
hope, more reliable. None of the data is in a relational database; it's
all in XML files, and a fairly simple file locking scheme is implemented.
At present none of the data is cached, except via the OS file system
cache, but caching would be fairly simple to add. All data processing is
done via XSLT (Saxon), which is forced to access the file system via
URIResolvers, and these can be subclassed. The return value from a
resolver can reference a stream source, a DOM source or other kinds of
source, so while at present my customised resolvers simply implement some
read and write file locking (so that each XSLT 'transaction' can be made
atomic), in the future XML files could be cached as DOM representations.

Rolling back a transaction in the event of an error is quite simple:
output resolvers direct output to temporary files, and these are only
copied over the top of the target output files at the end of a successful
transaction (perhaps after they have all been validated). Of course,
unlike an RDB, there are no referential integrity checks between XML files
and only limited validation within them, but that need not be a huge
drawback. Success of the transaction is fairly simply judged: did the
transaction manage to get write locks for all the output files, did it put
well formed XML into all the output files, and (optionally) did the output
files validate against their schemas?
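To make that concrete, the shape of it is something like the following.
This is a simplified sketch rather than the actual Vivida code - the
class name and file handling are invented for illustration, and the real
locking is reduced to comments:

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import javax.xml.transform.Source;
import javax.xml.transform.TransformerException;
import javax.xml.transform.URIResolver;
import javax.xml.transform.stream.StreamSource;

/*
 * Sketch of the scheme: reads go through a URIResolver (so locks
 * can be taken before the stylesheet ever sees the file) and
 * writes go to temp files, renamed over the real files only once
 * the whole 'transaction' has succeeded.
 */
public class XmlTransaction implements URIResolver
{
    private final File baseDir;
    private final List pending = new ArrayList(); // File[]{temp, target}

    public XmlTransaction( File baseDir )
    {
        this.baseDir = baseDir;
    }

    /** Input side: Saxon calls this for every document read. */
    public Source resolve( String href, String base )
        throws TransformerException
    {
        File f = new File( baseDir, href );
        // Real code would take a read lock on f here and hold it
        // until commit() or rollback(); omitted for brevity.
        return new StreamSource( f ); // could equally be a cached DOMSource
    }

    /** Output side: give the transform a temp file to write into. */
    public File outputFor( String href ) throws IOException
    {
        File target = new File( baseDir, href );
        File temp = File.createTempFile( "vivida", ".xml", baseDir );
        pending.add( new File[] { temp, target } );
        return temp;
    }

    /** Called only when every output was written (and validated). */
    public void commit() throws IOException
    {
        for ( Iterator i = pending.iterator(); i.hasNext(); )
        {
            File[] pair = (File[]) i.next();
            pair[1].delete(); // renameTo won't overwrite on all platforms
            if ( !pair[0].renameTo( pair[1] ) )
            {
                throw new IOException( "commit failed for " + pair[1] );
            }
        }
        pending.clear();
    }

    /** On any error, discard the temp files; targets are untouched. */
    public void rollback()
    {
        for ( Iterator i = pending.iterator(); i.hasNext(); )
        {
            ( (File[]) i.next() )[0].delete();
        }
        pending.clear();
    }
}

The useful property is that because every document access is routed
through the resolver, the locking (and later caching) policy lives in one
place rather than being scattered through the stylesheets.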
At present Vivida is intended for a very small web site with no particular
requirement to do complex or deep queries of the data, so applying XPath
expressions to individual XML files via XSLT is fine. In the future an XML
indexer could be tacked on to index the XML files, allowing
higher-performance queries that span multiple files. This could also help
cope with the obvious problem that would occur in a large installation:
massive XML files. For example, how would the Bodington users table be
implemented in XML? It could be maintained conceptually as a single XML
file but stored within an XML indexing store capable of locking portions
of the file independently, or it could be split into many XML files with
the indexer used to find the right file for a given user. This kind of
basic user data is probably the least suitable for storage in XML files
when there are large numbers of users.

However, the data that is most suitable is the tool content data: MCQ,
questionnaire and short answer paper questions etc. would probably be
better stored as IMS QTI files, and resource-specific properties would
probably be best as XML files of some kind. In fact the resources table
would probably be better as one XML file per resource - instead of using
index fields in the database to store the tree structure, Bodington could
simply use the host file system, with one sub-directory per resource. The
XMLRepository is already used to find resources via XML representations of
their metadata, so why bother with the resources table at all?

Just a thought.

Jon
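P.S. On the setter-exception idea above, this is roughly what I had in
mind. It's only a sketch - ConstraintViolationException, checkConstraints()
and the enlistment hook are invented names, not anything in the current
code base:

/*
 * Sketch only: a PersistentObject whose setters validate
 * immediately and enlist the object with the current
 * transaction, so a rollback can undo or evict it.
 */
public abstract class PersistentObject
{
    /** Thrown when a setter would violate a constraint. */
    public static class ConstraintViolationException extends Exception
    {
        public ConstraintViolationException( String message )
        {
            super( message );
        }
    }

    /** Subclasses know their own constraints. */
    protected abstract void checkConstraints()
        throws ConstraintViolationException;

    /** Hypothetical hook into SQLDatabase's transaction tracking. */
    protected void enlistInTransaction()
    {
        // e.g. SQLDatabase.currentTransaction().enlist( this );
        // On rollback the transaction would re-load this object's
        // fields (or evict it from SoftCache) so a stale modified
        // copy is never handed out again.
    }
}

class User extends PersistentObject
{
    private String surname;

    public void setSurname( String surname )
        throws ConstraintViolationException
    {
        String old = this.surname;
        this.surname = surname;
        try
        {
            checkConstraints(); // throws if the new value is invalid
        }
        catch ( ConstraintViolationException e )
        {
            this.surname = old; // undo the local change
            throw e;
        }
        enlistInTransaction();
    }

    protected void checkConstraints()
        throws ConstraintViolationException
    {
        if ( surname == null || surname.length() > 64 )
        {
            throw new ConstraintViolationException( "bad surname" );
        }
    }
}

The intermediate-state problem would then need something like a
beginEdit()/endEdit() pair that defers checkConstraints() until endEdit(),
and rollback would mean SQLDatabase re-loading (or evicting from SoftCache)
every enlisted object.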