Hi Richard,
I've looked in the code for the optimistic concurrency procedure and I couldn't find it.
I could only find the method for getting the timestamp of an object, but I couldn't find a method that calls it.
Is it working, or do I need to write some code for it?
Dan
P.S.
What about pessimistic concurrency? Are there any plans for this?
The reason I'm asking is that I need it for the project I'm involved in.
Thanks.
Hi Dan,
Optimistic concurrency is built into the framework through the use of timestamps.
When you save an object with a modified timestamp field, the timestamp of the object in the database is compared with the original timestamp of the object being saved. If they match, the database update occurs; if they differ, someone else has modified the object in the database, and the save method will raise an exception indicating that the object has been modified elsewhere.
It works quite well and does not rely on keeping ADO recordsets accurate. You could conceivably retrieve an object in one program, pass it to another atomsframework-based application, and modify and save it in that application without a problem.
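The timestamp check described above can be sketched outside the framework. The following is a hypothetical Python/sqlite3 illustration of a timestamp-guarded UPDATE, not the framework's actual code; the table, columns, and function names are all invented:

```python
import sqlite3

# Invented example schema: "widget" rows carry a "ts" timestamp column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widget (oid INTEGER PRIMARY KEY, name TEXT, ts INTEGER)")
conn.execute("INSERT INTO widget VALUES (1, 'original', 100)")

def save(conn, oid, name, original_ts, new_ts):
    """Update only if the row's timestamp still matches the one we read."""
    cur = conn.execute(
        "UPDATE widget SET name = ?, ts = ? WHERE oid = ? AND ts = ?",
        (name, new_ts, oid, original_ts),
    )
    if cur.rowcount == 0:
        # Nobody matched the WHERE clause: the row changed since we read it.
        raise RuntimeError("object has been modified elsewhere")

save(conn, 1, "mine", original_ts=100, new_ts=101)       # succeeds
try:
    save(conn, 1, "stale", original_ts=100, new_ts=102)  # stale timestamp
except RuntimeError as e:
    print(e)
```

The key point is that the original timestamp goes into the WHERE clause, so the "compare" and the "update" are a single atomic statement and no lock is held between read and write.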
Pessimistic locking is unlikely to be implemented any time soon. To do it in ADO.NET you have to start and maintain a transaction from the time you retrieve a record to the time you update it.
This goes against the standard behaviour of the AF, which is to minimise connection time and coupling to the database.
Why do you need pessimistic locking? Almost all situations that have traditionally used it can also be handled with optimistic locking.
- Richard.
Hi Richard,
I've checked it, and I saw that there is no need to use the method getSelectTimeStamlSqlFor() at all.
Just a comment, I saw that there is some unused code in the framework. Are there any plans to clean it?
The reason for using pessimistic locking is that the application is part of a system where users perform tasks. The tasks include creation, modification, and deletion of objects that relate to other objects, all of which are persisted in a database.
If another application were to edit the same objects, the application wouldn't be able to persist its modifications to the database.
We'd have to write a piece of code that handles that situation.
Furthermore, operationally, we do not want to allow an application to modify objects that are being edited by another application. That would cause serious problems in the system, operationally speaking, and we don't want to have to issue an operational instruction telling users not to edit the same objects at the same time.
I hope this explains it.
Dan
Hi Dan,
Code cleanup: Eventually - but it's last on the priority list. Let's get it working first and worry about that later :-)
Locking: Obviously you could put a transaction around the multiple object creation if they should be handled as a logical unit of work, but it sounds like records are created in the database from other programs (is that right?).
I'll have a think about what is needed, but don't get your hopes up. I'm travelling to Thailand for work next week and then I am going on holidays for 2 weeks and will be away from the 'net most of the time. If anything happens it will be in about a month or so... sorry.
- Richard.
Hi Richard,
You're correct.
The database is the central database of the entire system. Several applications read and write to the database.
A transaction won't be good enough because a user's task takes a long time to finish. It's a manual task, and at the same time another user might accidentally modify the same objects.
I need a way to lock objects in the database. That way, when a user starts working, the application will try to lock the requested objects, and if they are already locked in the database, a message is displayed telling the user that the requested objects are locked and cannot be edited.
I was thinking of a lock mechanism based on the idea of a unique lock ID.
Basically, when a user wants to work on some objects, the application will first try to lock them. To do that, it checks that they are not already locked, generates a unique lock ID, and assigns that ID to each object it needs to lock. For each object that is assigned the lock ID, a row is inserted into a lock table; the table holds the object's table name, its OID, and the unique lock ID. This way, when another application tries to lock one of those objects, it will be rejected.
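The mechanism described above might be sketched like this. This is a hypothetical Python/sqlite3 illustration, not part of the framework; a UNIQUE constraint on (table_name, oid) makes the "check that they are not already locked" and the insert atomic, avoiding a check-then-insert race:

```python
import sqlite3
import uuid

# Invented lock-table schema matching the description above.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE obj_lock (
    table_name TEXT, oid INTEGER, lock_id TEXT,
    UNIQUE (table_name, oid))""")

def try_lock(db, objects):
    """Try to lock every (table_name, oid) pair under one new lock ID.
    Returns the lock ID on success, or None if any object is already locked."""
    lock_id = str(uuid.uuid4())
    try:
        with db:  # one transaction: all locks succeed or none do
            db.executemany(
                "INSERT INTO obj_lock (table_name, oid, lock_id) VALUES (?, ?, ?)",
                [(t, o, lock_id) for t, o in objects])
        return lock_id
    except sqlite3.IntegrityError:
        # UNIQUE violation: some object is already locked; the transaction
        # rolled back, so no partial locks are left behind.
        return None

def release(db, lock_id):
    with db:
        db.execute("DELETE FROM obj_lock WHERE lock_id = ?", (lock_id,))

first = try_lock(db, [("widget", 1), ("widget", 2)])
second = try_lock(db, [("widget", 2)])  # rejected: widget 2 is already locked
```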
The only problem I can find with this approach is that if an application doesn't release the objects, they are left locked, and the only way to free them is to remove the lock manually.
I'd appreciate any ideas.
Dan
Hi Dan,
The lock table is a bad idea, simply because client disconnects, PC crashes, and the like mean you could end up with records that never get released.
Also, in terms of how long an object stays locked, there is little difference between a pessimistic transaction and the lock table: either will leave objects locked for a long period of time (the duration of the manual process). The advantage of the transaction method is that you don't have to worry about disconnects; the SQL server takes care of that for you.
Now, I've had a bit of a think, and pessimistic locking could be fairly simple to implement. I haven't changed any code myself (lack of time at the moment), but could you try the following:
1. In CMSSqlConnection.startTransaction change the line that does BeginTransaction to be BeginTransaction(IsolationLevel.RepeatableRead). This will start a pessimistic locking transaction.
2. In your code (or a test program)
2a. Start a transaction using CPersistenceBroker.startTransaction()
2b. Retrieve an object
2c. Mark it as dirty and save it immediately. This will cause the object to have an updated modified date.
2d. Do _not_ commit the transaction yet.
3. Start a second instance of your code (or test program).
3a. When you try to retrieve the object you should get an error as the object is locked by the first instance of the program.
4. In instance 1, commit the transaction. See if the object can now be read correctly in the second instance of your app. (You could also check this by closing instance 1 without committing the transaction.)
If it all works (or doesn't) let me know. I will try to get some time soon to get this tested myself, but I've got a couple of deadlines to meet and I'm running around like a headless chicken :-)
- Richard.
Hi Richard,
Thanks for the idea.
It makes more sense this way.
So, for every object that the user wants to edit, I need to add it to the transaction by setting the dirty flag to true and updating it.
And, to release the lock I need to commit the transaction.
I'll try it and I'll let you know.
Dan.
Yes, that's correct.
I've done a little bit of testing over the weekend on pessimistic locking.
What you will find is that the initial read works fine in both instances of the application, but only one instance will be able to successfully do the save. The other one will time out on the save (the timeout depends on the SQL server, I think) and report an error.
By catching the error in the save you should be able to tell if a lock timeout occurs and report this to the users.
I would suggest that you roll your own pessimisticLoad function for the moment that calls the retrieve/find, setDirtyFlag and save methods for an object.
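The suggested wrapper could look something like the sketch below. It is hypothetical Python, not the framework's API: the method names mimic the thread (retrieve/find, setDirtyFlag, save), the exception types are stand-ins, and StubObject exists only to make the control flow runnable:

```python
class LockTimeout(Exception):
    """Raised when another instance already holds the lock on the object."""

def pessimistic_load(find, *args):
    """Retrieve an object inside the current transaction and immediately
    re-save it, so its row stays write-locked until commit/rollback."""
    obj = find(*args)        # retrieve/find the object
    obj.setDirtyFlag()       # mark it dirty so save() actually writes
    try:
        obj.save()           # the write takes the row lock
    except RuntimeError as exc:  # stand-in for the framework's save exception
        raise LockTimeout(f"object is locked elsewhere: {exc}") from exc
    return obj

# Minimal stub to exercise the flow; a real call would pass the
# framework's retrieve/find method instead.
class StubObject:
    def __init__(self, locked_elsewhere):
        self.locked_elsewhere = locked_elsewhere
        self.dirty = False
    def setDirtyFlag(self):
        self.dirty = True
    def save(self):
        if self.locked_elsewhere:
            raise RuntimeError("lock timeout")

free = pessimistic_load(lambda oid: StubObject(False), 1)   # lock acquired
try:
    pessimistic_load(lambda oid: StubObject(True), 1)       # lock held elsewhere
except LockTimeout as e:
    print("locked:", e)
```

Catching the save error inside the wrapper is what turns the database timeout into the "object is locked, cannot be edited" message shown to the user.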
Also, you might want to get the latest version of the code from CVS since I've updated a number of files to add support for pessimistic locking, as well as some improvements in error handling.
- Richard.