From: Manfred S. <ma...@co...> - 2003-12-20 18:20:44
Attachments:
patch-fasync-rcu
Hi,

kill_fasync and fasync_helper were intended for mice and similar rare users, so they use a simple rwlock for the locking. This is no longer true: e.g. every pipe read and write operation calls kill_fasync, which must acquire the rwlock before handling the fasync list.

What about switching to RCU? I did a reaim run on a 4-way PIII with STP, and it reduced the time spent in kill_fasync by 80%:

    diffprofile reaim_End_stock reaim_End_rcu
      21166    1.2% default_idle
      18882    0.9% total
        290   12.8% page_address
        269   23.5% group_send_sig_info
        259   41.1% do_brk
        244    6.3% current_kernel_time
      [ delta < 200: skipped ]
       -205  -16.1% get_signal_to_deliver
       -240   -3.7% page_add_rmap
       -364   -4.7% __might_sleep
       -369   -8.4% page_remove_rmap
       -975  -81.2% kill_fasync

What do you think? Patch against 2.6.0 is attached.

--
Manfred
From: Stephen H. <she...@os...> - 2003-12-20 21:11:08
Manfred Spraul wrote:
> Hi,
>
> kill_fasync and fasync_helper were intended for mice and similar rare
> users, so they use a simple rwlock for the locking. This is no longer
> true: e.g. every pipe read and write operation calls kill_fasync,
> which must acquire the rwlock before handling the fasync list.
> What about switching to RCU? I did a reaim run on a 4-way PIII with
> STP, and it reduced the time spent in kill_fasync by 80%:
>
> diffprofile reaim_End_stock reaim_End_rcu
>   21166    1.2% default_idle
>   18882    0.9% total
>     290   12.8% page_address
>     269   23.5% group_send_sig_info
>     259   41.1% do_brk
>     244    6.3% current_kernel_time
>   [ delta < 200: skipped ]
>    -205  -16.1% get_signal_to_deliver
>    -240   -3.7% page_add_rmap
>    -364   -4.7% __might_sleep
>    -369   -8.4% page_remove_rmap
>    -975  -81.2% kill_fasync
>
> What do you think? Patch against 2.6.0 is attached.
>
> ------------------------------------------------------------------------
>
> --- 2.6/fs/fcntl.c	2003-12-04 19:44:38.000000000 +0100
> +++ build-2.6/fs/fcntl.c	2003-12-20 10:56:23.344256035 +0100
> @@ -537,9 +537,19 @@
>  	return ret;
>  }
>
> -static rwlock_t fasync_lock = RW_LOCK_UNLOCKED;
> +static spinlock_t fasync_lock = SPIN_LOCK_UNLOCKED;
>  static kmem_cache_t *fasync_cache;
>
> +struct fasync_rcu_struct {
> +	struct fasync_struct data;
> +	struct rcu_head rcu;
> +};

Why needlessly wrap the existing structure? Just add an rcu element to it!
From: Manfred S. <ma...@co...> - 2003-12-20 21:35:09
Stephen Hemminger wrote:
>> +struct fasync_rcu_struct {
>> +	struct fasync_struct data;
>> +	struct rcu_head rcu;
>> +};
>
> Why needlessly wrap the existing structure? Just add an rcu element
> to it!

There are two independent users of fasync_struct:

- networking does its own locking and allocation and uses __kill_fasync directly.
- everyone else uses fasync_helper and calls kill_fasync, with the locking logic in fcntl.c.

I didn't convert the network code, thus I couldn't add the rcu member to fasync_struct.

--
Manfred
From: Jamie L. <ja...@sh...> - 2003-12-21 11:36:55
Manfred Spraul wrote:
> What about switching to RCU?

What about killing fasync_helper altogether and using the method that epoll uses to register "listeners" which send a signal when the poll state of a device changes?

That would trim off code all over the place, make the fast paths a little bit faster (in the case that there aren't any listeners), and most importantly make SIGIO reliable for every kind of file descriptor, instead of the pot luck you get now.

Just an idea :)

--
Jamie
From: Manfred S. <ma...@co...> - 2003-12-21 12:40:54
Jamie Lokier wrote:
> Manfred Spraul wrote:
>> What about switching to RCU?
>
> What about killing fasync_helper altogether and using the method that
> epoll uses to register "listeners" which send a signal when the poll
> state of a device changes?

I think it would be a step in the wrong direction: poll should move away from a simple wake-up to an interface that transfers the band info (POLL_IN, POLL_OUT, etc.). Right now at least two passes over the f_poll functions are necessary, because the info about which event actually triggered is lost. kill_fasync transfers the band info, thus I don't want to remove it.

> That would trim off code all over the place, make the fast paths a
> little bit faster (in the case that there aren't any listeners), and
> most importantly make SIGIO reliable for every kind of file descriptor,
> instead of the pot luck you get now.
>
> Just an idea :)

It's a good idea, but it requires lots of changes - perhaps it will be necessary to change the pollwait and f_poll prototypes.

--
Manfred
From: Jamie L. <ja...@sh...> - 2003-12-21 14:15:12
Manfred Spraul wrote:
>> What about killing fasync_helper altogether and using the method that
>> epoll uses to register "listeners" which send a signal when the poll
>> state of a device changes?
>
> I think it would be a step in the wrong direction: poll should move
> away from a simple wake-up to an interface that transfers the band info
> (POLL_IN, POLL_OUT, etc.). Right now at least two passes over the f_poll
> functions are necessary, because the info about which event actually
> triggered is lost. kill_fasync transfers the band info, thus I don't
> want to remove it.

I agree with the principle of the poll wakeup passing the event information to the wakeup function - that would make select, poll, epoll _and_ this new version of fasync faster. That may be easier to implement now than it was in 2.4, because we have wakeup functions, although it is still a big change and it would be hard to get right in some drivers. Perhaps very hard.

We have found the performance impact of the extra ->poll calls negligible with epoll. They're simply not slow calls. It's only when you're doing select() or poll() of many descriptors repeatedly that you notice, and that's already poor usage in other ways. So I am not convinced that such an invasive change is worthwhile, particularly as drivers would become more complicated. (Those drivers which already call kill_fasync have the right logic, assuming there are no bugs, but many don't, and a big ->poll interface change implies they all have to have it.)

> It's a good idea, but it requires lots of changes - perhaps it will be
> necessary to change the pollwait and f_poll prototypes.

However, the two changes - fasync -> eventpoll-like waiter, and poll -> fewer function calls - are really quite orthogonal. The fasync change is best done separately, with no changes to pollwait and f_poll and virtually no changes to the drivers except to remove calls to kill_fasync.

I don't think you need to change pollwait or ->poll, because the band information for the signal is available, as you say, by calling ->poll after the wakeup.

Put it this way: Davide thought epoll needed special hooks in all the devices, until I convinced him they weren't needed. He tried it, and not only did all the hooks go away, epoll became simpler and smaller, it worked with every pollable fd instead of just the ones useful for web servers, and surprisingly ran a bit faster too.

--
Jamie
From: Manfred S. <ma...@co...> - 2003-12-21 15:00:03
Jamie Lokier wrote:
> I don't think you need to change pollwait or ->poll, because the band
> information for the signal is available, as you say, by calling ->poll
> after the wakeup.

I'm not convinced: the wakeup happens at irq time, and the band info is necessary for send_sigio(). Calling f_poll at irq time is not an option - it will definitely cause breakage. schedule_work() for every call is IMHO not an option either. And even that is not reliable: fasync users might expect separate POLL_OUT and POLL_IN signals.

--
Manfred
From: Jamie L. <ja...@sh...> - 2003-12-21 15:08:28
Manfred Spraul wrote:
> The wakeup happens at irq time, and the band info is necessary for
> send_sigio(). Calling f_poll at irq time is not an option - it will
> definitely cause breakage.

Agree * 3.

> schedule_work() for every call is IMHO not an option either.

Agree, the latency would suck and it wouldn't even work for RT processes.

> And even that is not reliable: fasync users might expect separate
> POLL_OUT and POLL_IN signals.

They might, although they probably shouldn't (band is a bitmask for a reason). Anyway, you can handle all these problems by computing the band at signal delivery time. Yes, it sounds like it would complicate the signal delivery code, but SIGIO should really be handled specially anyway, so that a signal queue entry for every fd is guaranteed and queue overflow is not possible. Somebody already has a patch for that; it might be worth working from.

--
Jamie
From: Bill D. <dav...@tm...> - 2004-01-02 21:31:58
Jamie Lokier wrote:
> We have found the performance impact of the extra ->poll calls
> negligible with epoll. They're simply not slow calls. It's
> only when you're doing select() or poll() of many descriptors
> repeatedly that you notice, and that's already poor usage in other
> ways.

I do agree with you, but there is a lot of old software, and software written on/for BSD, which does do this. I'm not prepared to say that BSD does it better, but it's easier to fix in one place, the kernel, than in many other places.

Your point about the complexity is also correct, but perhaps someone will offer a better solution to speeding up select(). I think anything as major as this might be better off in a development series, and that's a clear prod for someone to find a simpler way to do it ;-)

Old programs grow; INN uses select and worked fine with 10-20 peers, but with 200 peers sharing 2M articles and 1 TB of data it seems to work less well on Linux than on BSD or Solaris. I'd love to see it faster; there are lots of other servers out there as well.

--
bill davidsen <dav...@tm...>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979
From: Jamie L. <ja...@sh...> - 2004-01-02 22:42:12
Bill Davidsen wrote:
> Jamie Lokier wrote:
>> We have found the performance impact of the extra ->poll calls
>> negligible with epoll. They're simply not slow calls. It's
>> only when you're doing select() or poll() of many descriptors
>> repeatedly that you notice, and that's already poor usage in other
>> ways.
>
> I do agree with you, but there is a lot of old software, and software
> written on/for BSD, which does do this. I'm not prepared to say that BSD
> does it better, but it's easier to fix in one place, the kernel, than
> in many other places.
>
> Your point about the complexity is also correct, but perhaps someone
> will offer a better solution to speeding up select(). I think anything
> as major as this might be better off in a development series, and that's
> a clear prod for someone to find a simpler way to do it ;-)

Eliminating up to half of the ->poll calls using wake_up_info(), and reducing the number of wakeups using an event mask argument to ->poll, are not the best ways to speed up select() or poll() for large numbers of descriptors.

The best way is to maintain poll state in each "struct file". The order of complexity for the bitmap scan is still significant, but ->poll calls are limited to the number of transitions which actually happen.

I think somebody, maybe Richard Gooch, has a patch to do this that's several years old by now.

--
Jamie
From: Mike F. <mf...@ma...> - 2004-03-29 15:59:56
On Fri, Jan 02, 2004 at 10:41:50PM +0000, Jamie Lokier wrote:
> The best way is to maintain poll state in each "struct file". The
> order of complexity for the bitmap scan is still significant, but
> ->poll calls are limited to the number of transitions which actually
> happen.

What's the drawback to this approach? Where is the poll state kept now?

> I think somebody, maybe Richard Gooch, has a patch to do this that's
> several years old by now.

Why wasn't it merged? Implementation issues?

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to maj...@vg...
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Davide L. <da...@xm...> - 2003-12-21 15:14:19
On Sun, 21 Dec 2003, Manfred Spraul wrote:
>> What about killing fasync_helper altogether and using the method that
>> epoll uses to register "listeners" which send a signal when the poll
>> state of a device changes?
>
> I think it would be a step in the wrong direction: poll should move
> away from a simple wake-up to an interface that transfers the band info
> (POLL_IN, POLL_OUT, etc.). Right now at least two passes over the f_poll
> functions are necessary, because the info about which event actually
> triggered is lost. kill_fasync transfers the band info, thus I don't
> want to remove it.

It is my plan to propose (Linus is not contrary, in principle) a change of the poll/wake infrastructure for 2.7. There are two areas that can be improved. First, f_op->poll() does not allow you to send an event mask, and this requires the driver to indiscriminately wake up both IN and OUT waiters. The second area is to let the driver specify some "info" for the wake up. Something like:

	wake_up_info(&wq, XXXX);

and add to the wait queue item storage for the passed info, where "info" could be anything from an event mask up to an allocated object with its own destructor. In this way the woken-up callback will have the "info" ready w/out issuing an extra f_op->poll(). The code is pretty much trivial, even if the changes will touch a bunch of code. The good thing is that the migration can be gradual, besides the initial dumb compile fixing to suit the new f_op->poll() interface.

- Davide
From: Davide L. <da...@xm...> - 2003-12-21 15:17:55
On Sun, 21 Dec 2003, Davide Libenzi wrote:
> On Sun, 21 Dec 2003, Manfred Spraul wrote:
>>> What about killing fasync_helper altogether and using the method that
>>> epoll uses to register "listeners" which send a signal when the poll
>>> state of a device changes?
>>
>> I think it would be a step in the wrong direction: poll should move
>> away from a simple wake-up to an interface that transfers the band info
>> (POLL_IN, POLL_OUT, etc.). Right now at least two passes over the f_poll
>> functions are necessary, because the info about which event actually
>> triggered is lost. kill_fasync transfers the band info, thus I don't
>> want to remove it.
>
> It is my plan to propose (Linus is not contrary, in principle) a change of
> the poll/wake infrastructure for 2.7. There are two areas that can be
> improved. First, f_op->poll() does not allow you to send an event mask,

Sorry, poll_wait() does not allow you to specify an event mask ...

- Davide
From: Jamie L. <ja...@sh...> - 2003-12-21 15:28:35
Davide Libenzi wrote:
> First, f_op->poll() does not allow you to send an event mask,
> and this requires the driver to indiscriminately wake up both IN and OUT
> waiters. The second area is to let the driver specify some "info"
>
> 	wake_up_info(&wq, XXXX);

I agree totally; both of these are (and always were - isn't it amazing how long these things take) the way to do it "properly".

> The good thing is that the migration can be gradual, besides the initial
> dumb compile fixing to suit the new f_op->poll() interface.

Even that's trivial, if a little time consuming, as it's only a function signature change. Actually using the extra argument is optional for each driver.

--
Jamie
From: OGAWA H. <hir...@ma...> - 2003-12-21 18:39:53
Manfred Spraul <ma...@co...> writes:

>  void kill_fasync(struct fasync_struct **fp, int sig, int band)
>  {
> -	read_lock(&fasync_lock);
> +	rcu_read_lock();
>  	__kill_fasync(*fp, sig, band);
> -	read_unlock(&fasync_lock);
> +	rcu_read_unlock();
>  }

Usually *fp is NULL, I think. So what about the following test?

void kill_fasync(struct fasync_struct **fp, int sig, int band)
{
	if (*fp) {
		rcu_read_lock();
		__kill_fasync(*fp, sig, band);
		rcu_read_unlock();
	}
}

Or use an inline function for testing *fp.

--
OGAWA Hirofumi <hir...@ma...>
From: Manfred S. <ma...@co...> - 2003-12-21 19:14:41
Attachments:
patch-fasync-rcu
OGAWA Hirofumi wrote:
> Or use an inline function for testing *fp.

Initially I tried to keep the patch as tiny as possible, thus I avoided adding an inline function. But Stephen Hemminger convinced me to update the network code, and thus it didn't matter and I've switched to an inline function.

What do you think about the attached patch?

--
Manfred
From: Linus T. <tor...@os...> - 2003-12-21 20:51:53
On Sun, 21 Dec 2003, Manfred Spraul wrote:
> Initially I tried to keep the patch as tiny as possible, thus I avoided
> adding an inline function. But Stephen Hemminger convinced me to update
> the network code, and thus it didn't matter and I've switched to an
> inline function.
> What do you think about the attached patch?

Please, NO!

Stuff like this:

	-	write_lock_irq(&fasync_lock);
	+	if (s)
	+		lock_sock(s);
	+	else
	+		spin_lock(&fasync_lock);
	+

should not be allowed. That's especially true since the choice really is a static one depending on the caller.

Just make the caller do the locking.

		Linus
From: Manfred S. <ma...@co...> - 2003-12-21 21:08:38
Linus Torvalds wrote:
> Stuff like this:
>
> 	-	write_lock_irq(&fasync_lock);
> 	+	if (s)
> 	+		lock_sock(s);
> 	+	else
> 	+		spin_lock(&fasync_lock);
> 	+
>
> should not be allowed. That's especially true since the choice really is a
> static one depending on the caller.
>
> Just make the caller do the locking.

It's not that simple: the function does

	kmalloc();
	spin_lock();
	use_allocation;

If the caller does the locking, then the kmalloc would have to use GFP_ATOMIC, or the caller would have to do the alloc.

But: as far as I can see, these lines usually run under lock_kernel(). If this is true, then spin_lock(&fasync_lock) won't cause any scalability regression, and I'll use that lock instead of lock_sock, even for network sockets.

--
Manfred
From: Linus T. <tor...@os...> - 2003-12-21 21:19:40
On Sun, 21 Dec 2003, Manfred Spraul wrote:
>> Just make the caller do the locking.
>
> It's not that simple:

It _is_ that simple. The choices are:

- let the caller do the locking
- make the callee locking be statically determinable

Those are the choices. Your kind of code is not going to be integrated.

> the function does
>
> 	kmalloc();
> 	spin_lock();
> 	use_allocation;

This is trivially handled by splitting out the allocation as a separate phase. Yes, it requires that the caller be changed, but if the choice is between insane locking and making a caller change, then the choice is very very clear.

> But: as far as I can see, these lines usually run under lock_kernel().
> If this is true, then spin_lock(&fasync_lock) won't cause any
> scalability regression, and I'll use that lock instead of lock_sock,
> even for network sockets.

Don't.

Here's a big clue: if you make code worse than it is today, it won't be accepted. I don't even see why you'd bother in the first place.

So go back to the drawing board, and just do it _right_. Or don't do it at all. There's no point in making the code look and behave worse than it does today.

		Linus
From: Manfred S. <ma...@co...> - 2003-12-21 21:55:00
Linus Torvalds wrote:
> Here's a big clue: if you make code worse than it is today, it won't be
> accepted. I don't even see why you'd bother in the first place.

fasync_helper != kill_fasync. fasync_helper is rare, and usually runs under lock_kernel(). kill_fasync is far more common (every pipe_read and _write); I want to remove the unconditional read_lock(&global_lock).

> So go back to the drawing board, and just do it _right_. Or don't do it at
> all. There's no point in making the code look and behave worse than it
> does today.

Today's solution is two copies of fasync_helper: one with lock_sock in net/socket.c, one with write_lock_irq(&fasync_lock) in fs/fcntl.c.

Perhaps just an "if (*fp == NULL) return;" before grabbing the read_lock in kill_fasync, without touching fasync_helper - that would be sufficient to fix pipe_read and _write.

--
Manfred
From: Linus T. <tor...@os...> - 2003-12-21 22:05:30
On Sun, 21 Dec 2003, Manfred Spraul wrote:
>> Here's a big clue: if you make code worse than it is today, it won't be
>> accepted. I don't even see why you'd bother in the first place.
>
> fasync_helper != kill_fasync.
> fasync_helper is rare, and usually runs under lock_kernel().

But we want to get rid of lock_kernel(), not create new code that depends on it. And _especially_ if fasync_helper() is rarely used, that means that changing the callers to have a nicer calling convention would not be painful.

> kill_fasync is far more common (every pipe_read and _write); I want to
> remove the unconditional read_lock(&global_lock).

Note that my personal preference would be to kill off "kill_fasync()" entirely.

We actually have almost all the infrastructure in place already: it's called a "wait queue". In 2.5.x it took a callback function, and the only thing missing is really the "band" information at wakeup time.

So if we instead made the whole fasync infrastructure use the existing wait queues, and made wakeup() say what kind of wakeup it is, we could probably get rid of the specific fasync data structures entirely. And we'd only take locks that we take _anyway_.

I dunno. But to me that at least sounds like a real cleanup.

> Today's solution is two copies of fasync_helper: one with lock_sock in
> net/socket.c, one with write_lock_irq(&fasync_lock) in fs/fcntl.c.

And two functions that statically do something different is actually _better_ than one function that does two different things dynamically. And if the two cases have different locking, then they should remain as two separate cases.

		Linus
From: Manfred S. <ma...@co...> - 2003-12-25 01:23:24
On Sun, 21 Dec 2003, Linus Torvalds wrote:
>> kill_fasync is far more common (every pipe_read and _write); I want to
>> remove the unconditional read_lock(&global_lock).
>
> Note that my personal preference would be to kill off "kill_fasync()"
> entirely.

We've discussed that earlier in the thread, and came to the same conclusion. Unfortunately it touches several drivers, and is not a simple patch. Viro's summary of fasync in Documentation/filesystems/Locking is "fasync is a mess" - converting kill_fasync to wake_up_band() is 2.7 stuff.

What about this minimal approach:

<<<<
--- 2.6/fs/fcntl.c	2003-12-04 19:44:38.000000000 +0100
+++ build-2.6/fs/fcntl.c	2003-12-24 00:15:16.000000000 +0100
@@ -609,9 +609,15 @@
 
 void kill_fasync(struct fasync_struct **fp, int sig, int band)
 {
-	read_lock(&fasync_lock);
-	__kill_fasync(*fp, sig, band);
-	read_unlock(&fasync_lock);
+	/* First a quick test without locking: usually
+	 * the list is empty.
+	 */
+	if (*fp) {
+		read_lock(&fasync_lock);
+		/* reread *fp after obtaining the lock */
+		__kill_fasync(*fp, sig, band);
+		read_unlock(&fasync_lock);
+	}
 }
 
 EXPORT_SYMBOL(kill_fasync);
<<<<

--
Manfred