From: Jason L. W. <jwh...@jw...> - 2007-08-06 09:23:04
Olivier,

> On 25-Jul-07 at 22:14, Jason L. Wharton wrote:
>
> > When working with a joined result set there are potentially multiple
> > records involved in the cursor on the server. Each table involved in
> > the cursor should have a table alias associated to it so that I can
> > send multiple UPDATE statements, one for each relation alias that has
> > changes. This way, it will be quite easy to support cursor based
> > updates of joined datasets.
>
> What if the XSQLDA description did properly return the TABLE/VIEW
> name for each column, in addition to the column name, of the result
> set? Wouldn't this be sufficient for a client layer to deduce how to
> update what it wants by constructing UPDATE ... WHERE CURRENT OF ...
> statements using these relation and column names? Am I missing
> something fundamental here? (If so, just tell me and I won't take
> more of your time in this thread.)

It does. Well, it doesn't give a relation alias name, but it does give the
owner name, relname, sqlname and aliasname, which covers all that you
mentioned.

The problem I see, or what's missing I should say, is that this forces the
client to handle the fetching of all the necessary key values or DB_KEY
values along with the data, and the client has to know how to put it all
together in the WHERE clause so that it works. It may well be that this
internal information, such as the key values or db_keys, is data that
should never leave the system. There is also the fact that when a FOR
UPDATE clause is placed on the SELECT, it causes the cursor on the server
to step one record at a time and flush to the client on each fetch. There
are other reasons...

This brings up the idea that it would be useful to have a token or some
kind of command to flush records to the client. Something like a SUSPEND
WITH FLUSH instead of just SUSPEND. That way, the server could control, to
some extent, how records are sent to the client.
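As a rough illustration of the client-side approach Olivier describes, here is a sketch of how a client layer might group changed columns by base table and emit one positioned UPDATE per relation. This is hypothetical helper code, not the Firebird API; only the XSQLDA-style field names (relname, sqlname) come from the discussion above.

```python
# Sketch: build "UPDATE ... WHERE CURRENT OF <cursor>" statements from
# per-column metadata of a joined result set, one statement per base table.
# The function name and inputs are illustrative, not part of any real API.

from collections import defaultdict

def positioned_updates(cursor_name, changed_columns):
    """changed_columns: list of (relname, sqlname) pairs for modified fields.

    Returns one positioned UPDATE statement per base table, with '?'
    placeholders for the new values.
    """
    by_table = defaultdict(list)
    for relname, sqlname in changed_columns:
        by_table[relname].append(sqlname)

    stmts = []
    for table, cols in by_table.items():
        assignments = ", ".join(f"{c} = ?" for c in cols)
        stmts.append(
            f"UPDATE {table} SET {assignments} WHERE CURRENT OF {cursor_name}"
        )
    return stmts

for s in positioned_updates("CUR1", [
    ("EMPLOYEE", "LAST_NAME"),
    ("EMPLOYEE", "SALARY"),
    ("DEPARTMENT", "BUDGET"),
]):
    print(s)
```

Note this only works when the cursor was opened FOR UPDATE, which is exactly the record-at-a-time fetching behavior discussed above.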
This would really only have a practical use if you were using a stored
procedure for selection of multiple records and there was DML executed
inside the body of the stored procedure. If that is the case, then the
transaction should be flagged as active-pending (as far as the client is
concerned), giving the client an opportunity to respond immediately,
instead of the server scanning through multiple records (and executing
lots of DML) just to fill a network packet. Otherwise, a user might do a
commit based on what they see in the first fetch, but want to do a
rollback based on what they see in a subsequent fetch. When they did the
commit retaining after the first fetch, however, it also committed the DML
that was executed while the 20 or so records were processed to fill the
network packet.

I hope this makes sense. It surely isn't a high priority item, but it is
interesting in that it gives more control over a critical part of how the
engine works.

Jason L Wharton
www.ibobjects.com
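The buffering hazard described above can be modeled with a toy simulation: a "selectable procedure" that performs DML per row, and a wire protocol that buffers a packet of rows per round trip. All names here are hypothetical; the point is only that by the time the client reacts to the first row, the DML for a whole packet has already run.

```python
# Toy model of packet-filling vs. per-row flushing. Not Firebird code;
# just illustrates why a commit-retaining after the first fetch also
# commits DML executed for every row buffered into the network packet.

dml_log = []  # side effects performed inside the "stored procedure" body

def selectable_proc(rows):
    """Simulates a selectable stored procedure: DML, then SUSPEND per row."""
    for r in rows:
        dml_log.append(f"updated row {r}")  # DML inside the procedure body
        yield r                             # SUSPEND

def fetch_packet(gen, packet_size):
    """Simulates the wire protocol buffering packet_size rows per fetch."""
    packet = []
    for row in gen:
        packet.append(row)
        if len(packet) == packet_size:
            break
    return packet

gen = selectable_proc(range(100))
first_packet = fetch_packet(gen, 20)

# The user has only looked at first_packet[0], yet DML ran for 20 rows:
print(len(dml_log))  # 20
```

A SUSPEND WITH FLUSH, as proposed above, would amount to forcing `packet_size` down to 1 at points the procedure author chooses, so the client sees (and can commit or roll back) each row's side effects before the next row's DML executes.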