From: Carlos H. C. <li...@wa...> - 2014-04-28 15:28:37
Here is my list:

1) I would still like to see this implemented (without the need for external mechanisms), but I'm not sure if it is possible at all: http://tracker.firebirdsql.org/browse/CORE-1738

2) I second Jesus Garcia's request for native replication, since people at FDD and on FireBase's list are always complaining about the lack of such a feature.

3) A way to schedule tasks directly in FB, for example running procedures, sweeps, statistics recalculations, etc. at specific scheduled days/times. What is accepted as a task could be limited to whatever you guys think would be "safe" to implement (a hypothetical syntax sketch follows this message): http://tracker.firebirdsql.org/browse/CORE-743

4) A way to access data in external files / non-FB databases. AFAIR, this was planned as an extension of the current "execute statement".

5) An embedded Firebird version for Android (even if only "basic" server features were available): http://tracker.firebirdsql.org/browse/CORE-3885

6) Custom messages for FK exceptions (and check constraints too): http://tracker.firebirdsql.org/browse/CORE-736

7) It is known that mixing DDL/DML in the same transaction can cause corruption in DBs. If this cannot be 100% fixed, I suggest introducing some mechanism to detect such a situation and refuse to execute the statement before corruption happens.

8) Enhancements to numeric calculations. The current rule of summing the scales of the types involved in muls/divs is very bad and can easily cause overflow. Maybe FB could automatically truncate the result to the maximum precision possible to accommodate the final value. Even better if the truncation happens only in the final result (not during the internal calculations).

After your comments, I can create the missing tickets in the tracker, but only for the features that have any chance of being implemented. I suggest that if some of these features are impossible to implement, their existing tickets should be marked as "Won't Fix" and the details of "why" added to the comments.

[]s
Carlos
http://www.firebirdnews.org
FireBase - http://www.FireBase.com.br
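To make item 3 concrete, here is a sketch of what scheduled-task DDL could look like. The syntax is purely hypothetical: neither CREATE TASK nor a SCHEDULE clause exists in Firebird, and the task and procedure names are invented for illustration.

  -- Hypothetical syntax only: nothing like CREATE TASK exists in Firebird.
  -- The idea is that the engine itself would run the body at the scheduled
  -- times, with the allowed actions limited to whatever is considered safe
  -- (procedures, sweep, statistics recalculation).
  CREATE TASK NIGHTLY_MAINTENANCE
    SCHEDULE EVERY DAY AT '02:30'
  AS
  BEGIN
    EXECUTE PROCEDURE RECALC_AGGREGATES;
  END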
From: Mark R. <ma...@la...> - 2014-04-28 15:48:12
On 28-4-2014 16:29, Carlos H. Cantu wrote:
> 8) Enhancements to numeric calculations. The current rule of
> summing the scales of the types involved in muls/divs is very bad and
> can easily cause overflow. Maybe FB could automatically truncate the
> result to the maximum precision possible to accommodate the final
> value. Even better if the truncation happens only in the final result
> (not during the internal calculations).

This behavior is defined in the SQL standard (SQL:2011 Foundation, section 6.27) for multiplication (see sub c); for division we can choose (see sub d):

1) If the declared type of both operands of a dyadic arithmetic operator is exact numeric, then the declared type of the result is an implementation-defined exact numeric type, with precision and scale determined as follows:
a) Let S1 and S2 be the scale of the first and second operands respectively.
b) The precision of the result of addition and subtraction is implementation-defined, and the scale is the maximum of S1 and S2.
c) **The precision of the result of multiplication is implementation-defined, and the scale is S1 + S2.**
d) The precision and scale of the result of division are implementation-defined.

I'd suggest that instead the maximum precision of DECIMAL and NUMERIC should be increased.

Mark
--
Mark Rotteveel
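A small illustration of rule (c) above, using Firebird dialect 3 syntax (the values are arbitrary):

  -- Scale of the product = S1 + S2: a scale-2 operand times a scale-3
  -- operand yields a scale-5 result.
  -- 12.34 * 2.500 = 30.85000
  SELECT CAST(12.34 AS NUMERIC(9,2)) * CAST(2.500 AS NUMERIC(9,3))
  FROM RDB$DATABASE;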
From: Dmitry Y. <fir...@ya...> - 2014-04-28 18:21:09
28.04.2014 18:29, Carlos H. Cantu wrote:
> Here is my list:

Arrrgh! :-) I've skipped the items I have nothing to say about (mostly due to lack of interest, sorry to be frank).

> 2) I second Jesus Garcia's request for native replication, since people at
> FDD and on FireBase's list are always complaining about the lack of such
> a feature.

Replication is too broad a term. What do they really need? Warm standby? Hot standby? Sync replication? Async replication? Multi-master? Maybe point-in-time recovery? Maybe FB->Oracle or vice versa?

> 3) A way to schedule tasks directly in FB, for example
> running procedures, sweeps, statistics recalculations, etc. at specific
> scheduled days/times. What is accepted as a task could be
> limited to whatever you guys think would be "safe" to implement:
> http://tracker.firebirdsql.org/browse/CORE-743

See my other post about this.

> 4) A way to access data in external files / non-FB databases. AFAIR,
> this was planned as an extension of the current "execute statement".

I don't remember promises for external files, although it might be doable. I'd rather settle on non-FB databases for the time being. It means we need someone with non-FB database API knowledge to implement a plugin. Volunteers, anyone? Or we may create some common (ODBC / JDBC / ADO.NET) plugins and not care about the rest. But again, volunteers, anyone?

> 7) It is known that mixing DDL/DML in the same transaction can cause
> corruption in DBs. If this cannot be 100% fixed, I suggest
> introducing some mechanism to detect such a situation and refuse to
> execute the statement before corruption happens.

Maybe my memory is failing, but I don't remember corruptions for this reason. v1.0 and v1.5 don't count.

> 8) Enhancements to numeric calculations. The current rule of
> summing the scales of the types involved in muls/divs is very bad and
> can easily cause overflow. Maybe FB could automatically truncate the
> result to the maximum precision possible to accommodate the final
> value. Even better if the truncation happens only in the final result
> (not during the internal calculations).

Blame me for delaying it again and again. But I don't agree with truncation; Mark is right that longer precision is the better way to go. However, longer NUMERIC/DECIMAL data types and longer intermediate results during calculations are two different things, even if related. Please create a ticket so that I can have it assigned.

> I suggest that if some of these features are impossible to
> implement, their existing tickets should be marked as "Won't Fix" and
> the details of "why" added to the comments.

Usually the problem is not implementation possibility but need and reasonableness.

Dmitry
From: Carlos H. C. <li...@wa...> - 2014-04-28 19:05:42
>> 2) I second Jesus Garcia's request for native replication, since people at
>> FDD and on FireBase's list are always complaining about the lack of such
>> a feature.

DY> Replication is too broad a term. What do they really need? Warm
DY> standby? Hot standby? Sync replication? Async replication? Multi-master?
DY> Maybe point-in-time recovery? Maybe FB->Oracle or vice versa?

I think most of them need basic asynchronous replication, covering single- and multi-master scenarios. For those who need more complex scenarios, there are third-party commercial tools. Anyway, I'm not the right person to answer, since I haven't needed replication in my projects so far, so I'll leave this for Jesus or anyone else to answer. If you are interested, I can run a poll at firebirdnews.org and FireBase about what users would expect from "native" FB replication.

>> 4) A way to access data in external files / non-FB databases. AFAIR,
>> this was planned as an extension of the current "execute statement".

DY> I don't remember promises for external files, although it might be
DY> doable. I'd rather settle on non-FB databases for the time being.
DY> It means we need someone with non-FB database API knowledge to implement
DY> a plugin. Volunteers, anyone? Or we may create some common (ODBC / JDBC /
DY> ADO.NET) plugins and not care about the rest. But again, volunteers,
DY> anyone?

AFAIR, the idea was to use ODBC/JDBC as the source. My chat about this was with Vlad, a long time ago. He can probably give more details about what he had in mind.

>> 7) It is known that mixing DDL/DML in the same transaction can cause
>> corruption in DBs. If this cannot be 100% fixed, I suggest
>> introducing some mechanism to detect such a situation and refuse to
>> execute the statement before corruption happens.

DY> Maybe my memory is failing, but I don't remember corruptions for
DY> this reason. v1.0 and v1.5 don't count.

I asked Alexey, and it seems that most of the previous problems are fixed, but dropping tables with active connections still causes corruption. Maybe he can give more details.

>> 8) Enhancements to numeric calculations. The current rule of
>> summing the scales of the types involved in muls/divs is very bad and
>> can easily cause overflow. Maybe FB could automatically truncate the
>> result to the maximum precision possible to accommodate the final
>> value. Even better if the truncation happens only in the final result
>> (not during the internal calculations).

DY> Blame me for delaying it again and again. But I don't agree with
DY> truncation; Mark is right that longer precision is the better way to
DY> go. However, longer NUMERIC/DECIMAL data types and longer intermediate
DY> results during calculations are two different things, even if related.
DY> Please create a ticket so that I can have it assigned.

Done: http://tracker.firebirdsql.org/browse/CORE-4409

>> I suggest that if some of these features are impossible to
>> implement, their existing tickets should be marked as "Won't Fix" and
>> the details of "why" added to the comments.

DY> Usually the problem is not implementation possibility but need and
DY> reasonableness.

Whatever the reason is, if it is not going to be done, it should not be left "open".

[]s
Carlos
http://www.firebirdnews.org
FireBase - http://www.FireBase.com.br
From: Thomas B. <tho...@as...> - 2014-04-28 22:13:21
We are right now discussing this in our (new) company again. We implemented replication mechanisms similar to IBReplicator (and probably others) on our own, as many people probably did. Yes, having this kind of feature built in would be a big help. Current application design leads to more and more diverse platforms that still cannot handle online data access all the time, which creates the need for some kind of replication.

The main focus should be on asynchronous multi-master scenarios, as Carlos pointed out. Everything else seems to be a specialization...

Thomas

On 28.04.2014 21:05, Carlos H. Cantu wrote:
>>> 2) I second Jesus Garcia's request for native replication, since people at
>>> FDD and on FireBase's list are always complaining about the lack of such
>>> a feature.
>
> DY> Replication is too broad a term. What do they really need? Warm
> DY> standby? Hot standby? Sync replication? Async replication? Multi-master?
> DY> Maybe point-in-time recovery? Maybe FB->Oracle or vice versa?
>
> I think most of them need basic asynchronous replication, covering
> single- and multi-master scenarios. For those who need more complex
> scenarios, there are third-party commercial tools. Anyway, I'm not the
> right person to answer, since I haven't needed replication in my projects
> so far, so I'll leave this for Jesus or anyone else to answer. If you
> are interested, I can run a poll at firebirdnews.org and FireBase
> about what users would expect from "native" FB replication.
From: Dmitry Y. <fir...@ya...> - 2014-04-29 07:17:36
29.04.2014 02:13, Thomas Beckmann wrote:
> The main focus should be on asynchronous multi-master scenarios, as Carlos
> pointed out. Everything else seems to be a specialization...

Dimitry Sibiryakov will surely correct me, but I always thought that multi-master replication can hardly work without special attention to the database design (e.g. surrogate-only PKs with dedicated per-node ranges). Conflict resolution can also be tricky. I really doubt that 99% of existing databases are prepared for that "out of the box".

Dmitry
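For readers unfamiliar with the per-node-range technique mentioned above, a minimal sketch in Firebird SQL; the generator and table names are invented, and the range split is arbitrary:

  -- Each node seeds the same generator with a node-specific offset, so
  -- primary keys created on different nodes can never collide. Node A
  -- owns IDs 1..999,999,999; node B starts at 1,000,000,000.

  -- On node A:
  CREATE GENERATOR GEN_CUSTOMER_ID;
  SET GENERATOR GEN_CUSTOMER_ID TO 0;

  -- On node B:
  CREATE GENERATOR GEN_CUSTOMER_ID;
  SET GENERATOR GEN_CUSTOMER_ID TO 1000000000;

  -- Identical table definition on both nodes:
  CREATE TABLE CUSTOMER (
    ID   BIGINT NOT NULL PRIMARY KEY,
    NAME VARCHAR(60)
  );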
From: Carlos H. C. <li...@wa...> - 2014-04-29 12:24:29
>> The main focus should be on asynchronous multi-master scenarios, as Carlos
>> pointed out. Everything else seems to be a specialization...

DY> Dimitry Sibiryakov will surely correct me, but I always thought that
DY> multi-master replication can hardly work without special attention to
DY> the database design (e.g. surrogate-only PKs with dedicated per-node
DY> ranges). Conflict resolution can also be tricky. I really doubt that 99%
DY> of existing databases are prepared for that "out of the box".

You are right about the DB design, but that's up to the user. You don't, and shouldn't, need to worry about this.

[]s
Carlos
http://www.firebirdnews.org
FireBase - http://www.FireBase.com.br
From: Dimitry S. <sd...@ib...> - 2014-04-29 12:38:42
29.04.2014 14:24, Carlos H. Cantu wrote:
> You are right about the DB design, but that's up to the user. You
> don't, and shouldn't, need to worry about this.

But whoever implements built-in replication will have to worry about conflict handling, constraint failures and so on. In the case of synchronous replication it is simple: you just raise an error and the rest is up to the user. Asynchronous replication is going to be trickier.

--
WBR, SD.
From: Jim S. <ji...@ji...> - 2014-04-29 22:46:36
On 4/29/2014 8:38 AM, Dimitry Sibiryakov wrote:
> 29.04.2014 14:24, Carlos H. Cantu wrote:
>> You are right about the DB design, but that's up to the user. You
>> don't, and shouldn't, need to worry about this.
> But whoever implements built-in replication will have to worry about conflict
> handling, constraint failures and so on.
> In the case of synchronous replication it is simple: you just raise an error
> and the rest is up to the user. Asynchronous replication is going to be trickier.

It's a pity, but the journaling code that Borland ripped out in their failed attempt at a write-ahead log would have been ideal for master/slave replication. In a nutshell, when a page was changed, a second BDB was allocated to hold the page changes; if the buffer filled, then the page itself was used. Before the actual page was written, either the page or the set of page change deltas was sent to the journal (which would be the slave in replication). As all of the semantic checking had already been performed, application at the slave was simple, straightforward, and automatically followed careful write ordering.

But I suspect the code is long gone and probably next to impossible to reconstruct, as it was part and parcel of the original InterBase design and implementation.
From: Claudio V. C. <cv...@us...> - 2014-04-28 21:20:17
> -----Original Message-----
> From: Dmitry Yemanov [mailto:fir...@ya...]
> Sent: Monday, 28 April 2014 14:21
>
>> 7) It is known that mixing DDL/DML in the same transaction can cause
>> corruption in DBs. If this cannot be 100% fixed, I suggest
>> introducing some mechanism to detect such a situation and refuse to
>> execute the statement before corruption happens.
>
> Maybe my memory is failing, but I don't remember corruptions for
> this reason. v1.0 and v1.5 don't count.

I know people will feel outraged by my opinion, but anyway: make DDL operations atomic and immediate. Uncommitted DDL and DML working in unison with a stable DB structure is a naive dream, period.

>> 8) Enhancements to numeric calculations. The current rule of
>> summing the scales of the types involved in muls/divs is very bad and
>> can easily cause overflow. Maybe FB could automatically truncate the
>> result to the maximum precision possible to accommodate the final
>> value. Even better if the truncation happens only in the final result
>> (not during the internal calculations).
>
> Blame me for delaying it again and again. But I don't agree with
> truncation; Mark is right that longer precision is the better way to
> go. However, longer NUMERIC/DECIMAL data types and longer intermediate
> results during calculations are two different things, even if related.

This is a very important limitation, IMHO. SQL Server goes up to numeric(38,0), for example. I'm not sure why longer data types and longer intermediate results are very different things; probably my neurones are becoming rusty. The only difference I see is that intermediate large results can be volatile things inside the server code but can't be declared as proper data types nor stored in the DB.

C.
From: Dimitry S. <sd...@ib...> - 2014-04-28 22:03:05
28.04.2014 23:21, Claudio Valderrama C. wrote:
> I know people will feel outraged by my opinion, but anyway: make DDL
> operations atomic and immediate. Uncommitted DDL and DML working in unison
> with a stable DB structure is a naive dream, period.

I may be wrong, as often, but AFAIU this dream may become reality if we:

1) Eliminate DFW
2) Perform DDL (operations on system tables) in the user transaction
3) Make the garbage collector handle system tables well

MVCC will do the rest automagically.

--
WBR, SD.
From: Claudio V. C. <cv...@us...> - 2014-04-28 22:15:56
> -----Original Message-----
> From: Dimitry Sibiryakov [mailto:sd...@ib...]
> Sent: Monday, 28 April 2014 18:03
>
> 28.04.2014 23:21, Claudio Valderrama C. wrote:
>> I know people will feel outraged by my opinion, but anyway: make DDL
>> operations atomic and immediate. Uncommitted DDL and DML working in unison
>> with a stable DB structure is a naive dream, period.
>
> I may be wrong, as often, but AFAIU this dream may become reality if we:
>
> 1) Eliminate DFW

I'm not sure "eliminate" would be the goal. I would say "simplify" that egregious piece of code with its enigmatic levels.

> 2) Perform DDL (operations on system tables) in the user transaction

Are you referring to the server's internal code, or to isql, for example?

> 3) Make the garbage collector handle system tables well

What problems do you see now? Old records not collected and disposed of?

> MVCC will do the rest automagically.

C.
From: Dimitry S. <sd...@ib...> - 2014-04-29 08:43:11
29.04.2014 0:16, Claudio Valderrama C. wrote:
>> -----Original Message-----
>> From: Dimitry Sibiryakov [mailto:sd...@ib...]
>> Sent: Monday, 28 April 2014 18:03
>>
>> I may be wrong, as often, but AFAIU this dream may become reality if we:
>>
>> 1) Eliminate DFW
>
> I'm not sure "eliminate" would be the goal. I would say "simplify" that
> egregious piece of code with its enigmatic levels.

You are right. I meant: do not use it for DDL.

>> 2) Perform DDL (operations on system tables) in the user transaction
>
> Are you referring to the server's internal code, or to isql, for example?

Both. As Vlad said, the system transaction had better be read-only.

>> 3) Make the garbage collector handle system tables well
>
> What problems do you see now? Old records not collected and disposed of?

It doesn't do the "special cleanup" on system tables during version collection. For example, when the garbage collector expunges a record in RDB$RELATIONS, it should free the table's pages, which is currently done by DFW.

--
WBR, SD.
From: Adriano d. S. F. <adr...@gm...> - 2014-04-29 00:24:15
On 28-04-2014 19:02, Dimitry Sibiryakov wrote:
> 28.04.2014 23:21, Claudio Valderrama C. wrote:
>> I know people will feel outraged by my opinion, but anyway: make DDL
>> operations atomic and immediate. Uncommitted DDL and DML working in unison
>> with a stable DB structure is a naive dream, period.
>
> I may be wrong, as often, but AFAIU this dream may become reality if we:
>
> 1) Eliminate DFW
> 2) Perform DDL (operations on system tables) in the user transaction
> 3) Make the garbage collector handle system tables well
>
> MVCC will do the rest automagically.

Not so simple. With the current way that Firebird validates objects, not allowing (or trying not to allow) invalid interdependencies after commit, you won't be able to make changes. Auto-committed DDL requires object invalidation.

Adriano
From: Dmitry Y. <fir...@ya...> - 2014-04-29 08:03:11
29.04.2014 02:02, Dimitry Sibiryakov wrote:
> I may be wrong, as often, but AFAIU this dream may become reality if we:
>
> 1) Eliminate DFW
> 2) Perform DDL (operations on system tables) in the user transaction
> 3) Make the garbage collector handle system tables well

And reimplement the undo log to handle page-level operations, to roll back physical DDL changes. And reimplement the metadata cache to become two-level, global and per-transaction, with objects in the latter overriding objects in the former. I'm pretty sure I'm missing a dozen other issues as well.

Dmitry
From: Dimitry S. <sd...@ib...> - 2014-04-29 08:50:02
29.04.2014 10:03, Dmitry Yemanov wrote:
> 29.04.2014 02:02, Dimitry Sibiryakov wrote:
>
>> I may be wrong, as often, but AFAIU this dream may become reality if we:
>>
>> 1) Eliminate DFW
>> 2) Perform DDL (operations on system tables) in the user transaction
>> 3) Make the garbage collector handle system tables well
>
> And reimplement the undo log to handle page-level operations, to roll back
> physical DDL changes.

Not the undo log, but the garbage collector. It is its duty to clean up the objects referenced by outgoing record versions (currently index nodes and BLOBs).

> And reimplement the metadata cache to become two-level, global and
> per-transaction, with objects in the latter overriding objects in the
> former.

Maybe stop keeping a separate metadata cache at all, and use the ordinary data cache to read system tables directly every time. It could help implement hot standby as well.

> I'm pretty sure I'm missing a dozen other issues as well.

I'm also sure that I missed a lot of things, such as record formats. But that is exactly the purpose of a feature discussion: to point out potential problems.

--
WBR, SD.
From: Alex <pes...@ma...> - 2014-04-29 09:08:08
On 04/29/2014 12:49 PM, Dimitry Sibiryakov wrote:
>> And reimplement the metadata cache to become two-level, global and
>> per-transaction, with objects in the latter overriding objects in the
>> former.
> Maybe stop keeping a separate metadata cache at all, and use the ordinary
> data cache to read system tables directly every time.

And have prepare time increase many times over... Not an option, for performance reasons.
From: Dimitry S. <sd...@ib...> - 2014-04-29 09:49:57
29.04.2014 11:07, Alex wrote:
> On 04/29/2014 12:49 PM, Dimitry Sibiryakov wrote:
>
>> Maybe stop keeping a separate metadata cache at all, and use the ordinary
>> data cache to read system tables directly every time.
>
> And have prepare time increase many times over...
> Not an option, for performance reasons.

System tables are likely to be in the cache all the time. Besides, it is enough to check whether RDB$RECORD_VERSION still matches the value remembered in the transaction's metadata cache object.

--
WBR, SD.
From: Alex P. <pes...@ma...> - 2014-04-29 10:48:50
On 04/29/14 13:49, Dimitry Sibiryakov wrote:
> 29.04.2014 11:07, Alex wrote:
>> On 04/29/2014 12:49 PM, Dimitry Sibiryakov wrote:
>>
>>> Maybe stop keeping a separate metadata cache at all, and use the ordinary
>>> data cache to read system tables directly every time.
>> And have prepare time increase many times over...
>> Not an option, for performance reasons.
> System tables are likely to be in the cache all the time. Besides, it is
> enough to check whether RDB$RECORD_VERSION still matches the value
> remembered in the transaction's metadata cache object.

It's reading table data through the page cache that makes prepare many times slower. If a system table is not in the page cache, it will be hundreds of times slower.

If you think I'm giving too pessimistic an estimate here, please compare searching for data in a btree index plus analyzing the data page, with record version checks and GC analysis (all of this done while locking the appropriate pages for read), on one side, against finding a record in an array by index on the other side (that's how the metadata cache works). You will see that the estimate is correct.
From: Dimitry S. <sd...@ib...> - 2014-05-02 09:10:11
29.04.2014 12:48, Alex Peshkoff wrote:
> It's reading table data through the page cache that makes prepare many
> times slower. If a system table is not in the page cache, it will be
> hundreds of times slower.

Yes, this particular piece of code. But how big a percentage of execution time does it take in the bigger picture?

> If you think I'm giving too pessimistic an estimate here, please compare
> searching for data in a btree index plus analyzing the data page, with
> record version checks and GC analysis (all of this done while locking the
> appropriate pages for read), on one side, against finding a record in an
> array by index on the other side (that's how the metadata cache works).

Of course finding a record in an array by index is faster. But the array is not versioned. The current metadata cache scheme can happily be used at the transaction level if the transaction has the snapshot isolation level. Only read committed is going to be a problem.

> You will see that the estimate is correct.

Look at list_staying(). I'm afraid that with this monster in the background, all other expenses are negligible.

--
WBR, SD.
From: Dmitry Y. <fir...@ya...> - 2014-04-29 07:43:53
29.04.2014 01:21, Claudio Valderrama C. wrote:
> I know people will feel outraged by my opinion, but anyway: make DDL
> operations atomic and immediate.

Does atomic and immediate mean autocommitted, or always executed in a separate (e.g. system) transaction?

> Uncommitted DDL and DML working in unison
> with a stable DB structure is a naive dream, period.

Maybe true for the current FB code, but not in general. Other databases can handle this reliably.

> I'm not sure why longer data types and longer intermediate
> results are very different things; probably my neurones are becoming rusty.

The implementation complexity is different: a few adjustments in the former evl.cpp vs polluting the whole codebase with a new datatype plus an ODS change. And the semantics are also different: a calculation may require longer numerics internally while the result perfectly fits NUMERIC(18) / BIGINT. Now we get a false overflow in these cases.

Dmitry
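A minimal illustration of the false overflow described above, assuming dialect 3 semantics (the values are arbitrary):

  -- Both operands fit NUMERIC(18,4), and so does the mathematical result
  -- (2,000,000,000,000). But the S1 + S2 rule gives the product scale 8,
  -- and representing 2e12 at scale 8 needs the raw integer 2e20, which
  -- exceeds the 64-bit backing store, so the statement fails with an
  -- overflow error even though the value itself fits NUMERIC(18).
  SELECT CAST(2000000000.0000 AS NUMERIC(18,4)) *
         CAST(1000.0000 AS NUMERIC(18,4))
  FROM RDB$DATABASE;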
From: Vlad K. <hv...@us...> - 2014-04-29 08:03:51
>> I know people will feel outraged by my opinion, but anyway: make DDL
>> operations atomic and immediate.

This is the "Oracle way".

> Does atomic and immediate mean autocommitted, or always executed in a
> separate (e.g. system) transaction?

I have a strong opinion that the system transaction must be read-only. The current way of undoing actions made by the system transaction is fragile (to put it softly). So I see autocommit as the only possibility if we choose the "Oracle way".

>> Uncommitted DDL and DML working in unison
>> with a stable DB structure is a naive dream, period.
>
> Maybe true for the current FB code, but not in general. Other databases
> can handle this reliably.

MSSQL, for example.

Regards,
Vlad
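For context, a sketch of the behavioral difference being debated, using isql; the table name is invented:

  -- Firebird today: with AUTODDL off, DDL runs inside the user
  -- transaction and can be rolled back.
  SET AUTODDL OFF;
  CREATE TABLE TEST_T (ID INTEGER);
  ROLLBACK;  -- TEST_T is gone: the CREATE TABLE was undone

  -- Under the "Oracle way" discussed here, the CREATE TABLE would be
  -- committed immediately and the ROLLBACK would not remove TEST_T.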
From: Alex <pes...@ma...> - 2014-04-29 08:26:12
On 04/29/2014 12:03 PM, Vlad Khorsun wrote:
>>> I know people will feel outraged by my opinion, but anyway: make DDL
>>> operations atomic and immediate.
> This is the "Oracle way".
>
>> Does atomic and immediate mean autocommitted, or always executed in a
>> separate (e.g. system) transaction?
> I have a strong opinion that the system transaction must be read-only.

One exception: when we need to reflect changes in the underlying file system in the system tables. I mean, first of all, additional files (like shadows). Here an automatic rollback of the user transaction may do bad things, i.e. lose sync between the system tables and the file system.
From: Dmitry Y. <fir...@ya...> - 2014-04-29 08:42:54
29.04.2014 12:03, Vlad Khorsun wrote:
> So I see autocommit as the only possibility if we choose the "Oracle way".

Out of curiosity, why can't it be done in a separate *non-system* transaction? I.e. instead of committing the user transaction, we start and immediately commit a new one.

Dmitry
From: Vlad K. <hv...@us...> - 2014-04-29 09:05:39
> 29.04.2014 12:03, Vlad Khorsun wrote:
>
>> So I see autocommit as the only possibility if we choose the "Oracle way".
>
> Out of curiosity, why can't it be done in a separate *non-system*
> transaction? I.e. instead of committing the user transaction, we start and
> immediately commit a new one.

Technically, it is possible. Practically: which generation of the altered object should the user transaction work with?

Regards,
Vlad