From: Amit K. <ami...@en...> - 2013-05-10 05:04:22
We can consider applying the usual shippability rules of row triggers to statement triggers: if the trigger function is shippable, execute the trigger on the datanode, else on the coordinator. It is not as trivial as it sounds, though. For a non-FQS'ed DML, a DML statement is executed on the datanode for each row to be processed. So if a user updates 10 rows with a non-shippable query, the coordinator will execute a parameterized remote UPDATE on the datanode for each of the 10 ctids found using the quals. And if we execute shippable statement triggers on the datanode, the statement trigger will be executed 10 times there. Is this what the user expects? From the user's perspective, the statement is executed once, so the statement trigger should fire only once. A typical use case is that user queries need to be logged/audited. So we need to prevent firing statement triggers on the datanode for a non-FQS'ed query.

But should the user define the statement trigger function as immutable in such a case? Maybe not in this auditing scenario. And it is not very clear what a shippable statement trigger would mean to the user exactly. If the function really does not access the database, as the immutable definition implies, then it anyway does not matter how many times it gets executed on the datanode for a given statement.

I think the solution is to *always* fire statement triggers on the *coordinator*, regardless of shippability or whether the query is FQS or non-FQS. For an FQS'ed query, we would need to explicitly fire the statement trigger before/after the FQS'ed query node is executed, maybe inside the ExecRemoteQuery() function itself.

Comments?