From: Adriano d. S. F. <adr...@gm...> - 2011-03-25 17:07:21
On 25-03-2011 13:48, Alex Peshkoff wrote:
> On 03/25/11 18:59, Adriano dos Santos Fernandes wrote:
>> On 25-03-2011 05:27, Alex Peshkoff wrote:
>>> On 03/24/11 20:28, Adriano dos Santos Fernandes wrote:
>>>> On 24-03-2011 12:53, Alex Peshkoff wrote:
>>>>> On 03/24/11 18:09, Adriano dos Santos Fernandes wrote:
>>>>>> On 24-03-2011 11:59, Alex Peshkoff wrote:
>>>>>>> Maybe you have forgotten - one of my proposals is to not pass external
>>>>>>> handles in the attach/startTransaction call, and to not pass them into
>>>>>>> the engine in any way, because with the new API we have a much better
>>>>>>> way to access the current context.
>>>>>>>
>>>>>> If your approach means not being able to use the current Firebird API
>>>>>> with external attachments and transactions (initiated by the client or
>>>>>> internally), I'm sorry, but I can't consider it usable.
>>>>> Maybe you missed this in the thread 'Refcounted API objects':
>>>>>
>>>>> We really need a converter from the ISC API to the new interface. If in
>>>>> external engines we add 2 special handle values - current connection and
>>>>> current transaction - this will solve the backward compatibility problem
>>>>> in all places.
>>>>>
>>>> A single handle meaning different things when used at different moments
>>>> smells like fire to me.
>>> Yes, probably you are right. Even if we take into account the existence
>>> of the current_user and current_role variables - I agree that this is
>>> a different kind of usage.
>>>
>>> But regardless of that, delivering the original ISC handles into the
>>> engine is also a bad idea. One of the reasons: in many cases there will
>>> be nothing to deliver. A server with the new interface will work without
>>> ISC handles at all, so to support the old API in external engines you
>>> will have to create a kind of pseudo ISC handle anyway. There is
>>> absolutely no crime in it - just make it possible to ask the API
>>> converter: I have an IAttachment* (or ITransaction*), please create a
>>> handle for it.
>>>
>> This seems ok.
>>
>> What I had in mind:
>> - Yvalve handles are mapped to Yvalve objects (YAttachment, YTransaction).
>> - We add a new API function that translates a Yvalve pointer to a handle:
>>   fb_get_handle(IInterface*, int type)
>>
>> And with your suggestion, we then add a way to make the Yvalve create a
>> Yvalve object from a provider object, so external code always accesses
>> provider objects via Yvalve objects. And since external code now receives
>> a Yvalve object, it can get its legacy handle with fb_get_handle.
>
> Now (I talk about SVN state) external code receives an ISC handle. But I
> suggest you remove this hack from the code, and when external engines
> need to access the current attachment/transaction (let me call this pair
> the context later), make them work using the engine directly, without
> yValve.
>
> I know your opinion that all calls must pass through yValve. But I have
> not seen any explanation of this requirement. The only reason I remember
> is 'other types of providers need access to these calls'. But this
> requirement is rather strange. Let's take a look at an SP. It executes
> some SQL operators, and (I hope) nobody requires these calls to go
> through yValve.

Yvalve defines the Firebird API requirements, may check handles (and objects
too), and may do extra things. It doesn't make sense to move this type of
functionality into each provider.

Also, let's say external code receives an engine IAttachment which was
registered in yvalve and now has a handle. Then the external code starts to
use this handle and starts a new transaction with the legacy API.
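Concretely, a rough sketch of that scenario - not working code: fb_get_handle
is the converter proposed above, not an existing function, and its return
type, the FB_HANDLE_ATTACHMENT constant and the header name are only
placeholders for illustration:

#include <ibase.h>
#include <firebird/Interface.h>

// Hypothetical prototype of the proposed converter: map a new-API object
// back to a legacy handle. The signature is an assumption, not a real API.
extern "C" FB_API_HANDLE fb_get_handle(void* apiObject, int type);

static const int FB_HANDLE_ATTACHMENT = 1;  // invented constant

void startTransactionViaLegacyApi(Firebird::IAttachment* att)
{
    // Ask yValve for the legacy handle mapped to this attachment object.
    isc_db_handle dbHandle = fb_get_handle(att, FB_HANDLE_ATTACHMENT);

    // From here on, plain ISC API calls work against that handle.
    ISC_STATUS_ARRAY status;
    isc_tr_handle traHandle = 0;
    static char tpb[] = { isc_tpb_version3, isc_tpb_write,
                          isc_tpb_concurrency, isc_tpb_wait };

    isc_start_transaction(status, &traHandle, 1, &dbHandle,
                          (unsigned short) sizeof(tpb), tpb);
}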
If this IAttachment is not from yvalve, yvalve has now gained another
responsibility: coordinating engine provider objects. Let yvalve coordinate
yvalve objects.

In my new version of why.cpp, things like the CAttachment (C*) classes are
gone in favor of YAttachment (Y*) objects. Things are much simpler and more
effective: they are directly usable without legacy handles and can use the
same API (with semantics defined in yvalve) for client code and for external
code running on the server.

> Now imagine a direct
>
> INSERT INTO SOME_TABLES(SOME_INT) VALUES(123);
>
> Now let's rewrite this slightly:
>
> EXECUTE STATEMENT 'INSERT INTO SOME_TABLES(SOME_INT) VALUES(123)';
>
> Should it pass through yValve? What's the difference with the previous
> case? And finally, let's imagine that some external engine executes the
> same statement in the current context. What's the difference between a
> stored procedure, EXECUTE STATEMENT and an external engine (working with
> the current context), such that in some cases the SQL operator must go
> through yValve, but in others not?
>

This is a pure internal command; it has nothing to do with the API, so it may
execute without yvalve like any other internal command. If it's an external
data source, it must be created by yvalve, because it may go to another
provider.

Adriano
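PS: As an illustration of the last point, a minimal sketch of an external
routine running a statement directly in the current context - no legacy
handles, no yValve round trip. The interface and method names used here
(IExternalContext::getAttachment, IAttachment::execute, ThrowStatusWrapper)
are assumptions about how such an API could look, not the draft under
discussion:

#include <firebird/Interface.h>

using namespace Firebird;

void insertInCurrentContext(ThrowStatusWrapper* status, IExternalContext* context)
{
    // The current attachment and transaction come straight from the engine.
    IAttachment* att = context->getAttachment(status);
    ITransaction* tra = context->getTransaction(status);

    // Plain internal DML, executed by the engine provider itself.
    att->execute(status, tra, 0,
        "INSERT INTO SOME_TABLES(SOME_INT) VALUES(123)",
        3 /* SQL dialect */, NULL, NULL, NULL, NULL);

    // The context returned referenced objects, so release them here.
    tra->release();
    att->release();
}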