From: Amit K. <ami...@en...> - 2013-04-17 03:53:06
Commit id 9b6d7fa8859f03334192b13736deaf335fd8ceed addresses the review comments. Details below.

On 12 April 2013 16:15, Ashutosh Bapat <ash...@en...> wrote:

> Hi Amit,
>
> Here are comments for commit
>
> commit badf4c31edfd3d72a38a348523f5e05730374700
> Author: Amit Khandekar <ami...@en...>
> Date:   Thu Apr 4 12:17:09 2013 +0545
>
>     The core BEFORE ROW trigger-support related changes.
>
>     Because the tupleId parameter in the trigger functions is not valid,
>     we need to accept the OLD row through the oldtuple parameter of the
>     trigger functions. We then craft the NEW row using the modified
>     values slot plus the OLD values in the OLD tuple.
>
> Comments
> ---------------
> In ExecBRDeleteTriggers() and all the functions executing before row
> triggers, I see this code:
>
> 2210 #ifdef PGXC
> 2211     if ((IS_PGXC_COORDINATOR && !IsConnFromCoord()))
> 2212         trigtuple = pgxc_get_trigger_tuple(datanode_tuphead);
> 2213     else /* On datanode, do the usual way */
> 2214     {
>
> This code will be applicable to all the deletes/modifications at the
> coordinator. The updates to catalogs happen locally on the coordinator
> and thus should call GetTupleForTrigger() instead of
> pgxc_get_trigger_tuple(). I am not sure if there can be triggers on
> catalog tables (mostly not), but it will be better to check whether the
> tuple is local or global.

Corrected the IS_PGXC_COORDINATOR condition check. Used RelationGetLocInfo().

> Right now we are passing both the old tuple and the tupleid to
> ExecBRDeleteTriggers(). Depending upon the tuplestore implementation we
> do for storing the old tuples, we may be able to retrieve the old tuple
> from the tuplestore given some index into it. Can we use the ItemPointer
> to store this index?

I have given quite a bit of thought to reusing the tupleid parameter for a dual purpose. We could use the same argument to pass either the tupleid or the tuple header, since both ItemPointer and HeapTupleHeader are defined as pointer types.
But we should not assume their definitions. We could perhaps define a compile-time check:

#if sizeof(HeapTupleHeader) > sizeof(ItemPointer) /* (can we even do this?) */
#error
#endif

so that if, in the future, a HeapTupleHeader value can no longer be accommodated in an ItemPointer, the build fails, and at that point we would change the prototypes of all the Exec[AB]R*Trigger() functions. Still, I think this is not the right thing to do. We should keep the code in its current condition, and only if we start getting too many PG merge issues should we consider the above method.

> I am coming back to functions pgxc_form_trigger_tuple() and
> pgxc_get_trigger_tuple(). In these functions there is nothing XC
> specific, so my first thought was whether there are functions already in
> PG code which would serve the functionality that these two functions
> serve. I tried to search but without success. Can you find any PG
> function which has this functionality? Should these functions be in
> heaptuple.c or such a file instead of trigger.c? Also, there is nothing
> specific to triggers in these functions, so their names had better not
> contain "trigger".

Removed pgxc_form_trigger_tuple(), but kept pgxc_get_trigger_tuple() because it might be needed to serve the purpose of GetTupleForTrigger() later when we fix the concurrent update issue. I have added a PGXCTODO for the same.

> Regarding trigger firing, I am assuming that all the BR triggers are
> fired on a single node: either the coordinator (if there is at least one
> non-shippable trigger) or the datanode (if all the triggers are
> shippable). We cannot fire some at the coordinator and some at the
> datanode because that might change the order of firing the triggers. PG
> has documented that the order of firing triggers is alphabetical. With
> this context, in function pgxc_is_trigger_firable(), what happens if a
> trigger is shippable but needs to be fired on the coordinator?
> From the code it looks like we won't fire the trigger. At line 5037, you
> have used IsConnFromCoord(), which is true even for a
> coordinator-to-coordinator connection. Do you want to specifically check
> for a datanode here? It will be helpful if the function prologue
> contains a truth table with the shippability, the node where the firing
> will happen, and the connection origination. This is not your change,
> but whoever implemented triggers last time has left a serious bug here.

I had initially implemented the support to ensure that all triggers are fired either on the coordinator or on the datanode, but as I mentioned in the initial patch email, I removed that part when I sent the BR trigger patch, because this change is better done after we have both BR and AR triggers. The reason is that we need to selectively ship both AR and BR triggers, or ship only AR but not BR triggers, depending on some specific shippability conditions. This logic will be handled in a different commit. I have added a PGXCTODO in pgxc_is_trigger_firable().

> Everywhere I see that we have used checks for the coordinator to execute
> things the XC way or the PG way. I think we had better have the PG way
> for local modifications and the XC way for remote modifications.

Corrected the conditions. I guess this is the same issue as issue #1 you mentioned above?

> The declaration for function fill_slot_with_oldvals() should have the
> return type, function name, everything on the same line, as per PG
> standards.

Corrected.

> The function fill_slot_with_oldvals() is being called before the actual
> update, or in fact before any of the trigger processing. What's the
> purpose of this function? The comment there tells what the function
> does, but not WHY.

For remote tables, the plan slot does not have all the NEW tuple values. (Right now it does, but after the fetch-only-required-columns patch it won't.) If oldtuple is supplied, we would also need a complete NEW tuple.
Currently for remote tables, triggers are the only case where oldtuple is passed. So we should craft the NEW tuple whenever we have the OLD tuple. I have updated the above comments in the code.

> I think I will need to revisit this commit again at the time of
> reviewing the after row trigger implementation, since this commit has
> modified some of that area as well.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company