From: Alan S. <st...@ro...> - 2006-05-05 00:48:50
On Thu, 4 May 2006, Chandra Seetharaman wrote:

> > > unregistering from the callout might still be a problem, since we
> > > cannot be sure that the next pointer in the freed notifier_block
> > > will be sane.
> >
> > If you reference-count the entire chain, then any subsequently removed
> > notifier_block is guaranteed to remain sane.  Splitting the reference
> > count across multiple CPUs keeps the cost of reference counting quite
> > low.
>
> Paul,
>
> I am not able to get my head around the reference count logic.
>
> call_chain is defined as:
>
> static int __kprobes notifier_call_chain(struct notifier_block **nl,
> 		unsigned long val, void *v)
> {
> 1:	int ret = NOTIFY_DONE;
> 2:	struct notifier_block *nb;
> 3:
> 4:	nb = rcu_dereference(*nl);
> 5:	while (nb) {
> 6:		ret = nb->notifier_call(nb, val, v);
> 7:		if ((ret & NOTIFY_STOP_MASK) == NOTIFY_STOP_MASK)
> 8:			break;
> 9:		nb = rcu_dereference(nb->next);
> a:	}
> b:	return ret;
> }
>
> Assume that the user of the notifier chain kmalloc'd a notifier_block
> and registered it initially.
>
> When the user's notifier_call is called from line 6 above and the user
> kfrees the notifier_block, how would having a reference count for the
> whole chain help in making sure that nb->next is sane in line 9?

We can always change the code:

	while (nb) {
		next_nb = rcu_dereference(nb->next);
		ret = nb->notifier_call(nb, val, v);
		if ((ret & NOTIFY_STOP_MASK) == NOTIFY_STOP_MASK)
			break;
		nb = next_nb;
	}

Alan Stern