From: Ludovic R. <lud...@gm...> - 2008-06-20 12:22:45
Daniel,

Your patch does not work: you forgot to call
pthread_mutex_unlock(&events_lock); when the thread has died.

I am also not sure that protecting the updates of event_handler with
event_handler_lock is necessary. event_handler is a pthread_t, which is an
"unsigned long int" on my AMD64 Linux, so updating the value should already
be atomic. A lock-less algorithm is:
- faster
- more robust

If the thread doing:

    pthread_mutex_lock(&event_handler_lock);
    event_handler = 0;
    pthread_mutex_unlock(&event_handler_lock);

is killed between the lock and the unlock, then libusb is locked forever.
You could write a libusb_event_handler_LOCK_active() function (similar to
libusb_event_handler_active()) to check for this case, with yet another
thread ID to pass to pthread_kill(). But I guess you would then protect the
updates of that new thread ID with another lock, etc., etc.

Any thread can be killed at any time; using lock/unlock is not a safe way
to ensure atomicity.

Regards,

On Fri, Jun 20, 2008 at 6:34 AM, Daniel Drake <ds...@ge...> wrote:
> Ludovic Rousseau wrote:
>>> Maybe a simple kill(thread_pid, 0) can be used. But that may not be
>>> portable. And the pid of the thread holding the mutex must be stored.
>>>
>>> I will try this idea and report back here.
>>
>> It does work. Patch attached.
>>
>> Comments:
>> - event_handler_active is now a pthread_t (instead of just a boolean)
>>   and contains a thread ID or 0 if no thread is in the event loop
>> - in libusb_event_handler_active() I used a local variable "copy" to
>>   hold the value of event_handler_active so the value is not set to 0
>>   between the if() and the pthread_kill(). Calling pthread_kill() with
>>   a thread ID of 0 causes a segmentation fault.
>
> Excellent, I was not aware of the kill trick!
>
> I've brushed it up with some proper locking which avoids the seg fault
> issue in a more reliable fashion. Could you test it please?
>
> The 2nd half of the fix will involve capping the timeout on
> libusb_wait_for_event() so that the existence of the event handler is
> checked regularly. 2 seconds is probably appropriate - not too
> unresponsive when the event handling thread dies or is killed, and not
> too heavy on CPU wakeups/battery life.
>
> Thanks!
> Daniel
>
>
> From 77c4a514c4c6034085e97ab9c293af1c3dfd4b8e Mon Sep 17 00:00:00 2001
> From: Daniel Drake <ds...@ge...>
> Date: Thu, 19 Jun 2008 23:26:54 -0500
> Subject: [PATCH] event threading stuff
>
> ---
>  libusb/io.c |   44 +++++++++++++++++++++++++++-----------------
>  1 files changed, 27 insertions(+), 17 deletions(-)
>
> diff --git a/libusb/io.c b/libusb/io.c
> index f659d24..bad83f8 100644
> --- a/libusb/io.c
> +++ b/libusb/io.c
> @@ -50,8 +50,9 @@ static libusb_pollfd_removed_cb fd_removed_cb = NULL;
>  /* this lock ensures that only one thread is handling events at any one
>   * time */
>  static pthread_mutex_t events_lock = PTHREAD_MUTEX_INITIALIZER;
>
> -/* used to see if there is an active thread doing event handling */
> -static int event_handler_active = 0;
> +/* used to keep track of the thread doing event handling */
> +static pthread_t event_handler = 0;
> +static pthread_mutex_t event_handler_lock = PTHREAD_MUTEX_INITIALIZER;
>
>  /* used to wait for event completion in threads other than the one that is
>   * event handling */
> @@ -726,7 +727,6 @@ void myfunc() {
>  retry:
>  if (libusb_try_lock_events() == 0) {
>    // we obtained the event lock: do our own event handling
> -  libusb_lock_events();
>    while (!completed) {
>      poll(libusb file descriptors, 120*1000);
>      if (poll indicates activity)
> @@ -1091,7 +1091,9 @@ API_EXPORTED int libusb_try_lock_events(void)
>  	if (r)
>  		return 1;
>
> -	event_handler_active = 1;
> +	pthread_mutex_lock(&event_handler_lock);
> +	event_handler = pthread_self();
> +	pthread_mutex_unlock(&event_handler_lock);
>  	return 0;
>  }
>
> @@ -1115,7 +1117,9 @@ API_EXPORTED int libusb_try_lock_events(void)
>  API_EXPORTED void libusb_lock_events(void)
>  {
>  	pthread_mutex_lock(&events_lock);
> -	event_handler_active = 1;
> +	pthread_mutex_lock(&event_handler_lock);
> +	event_handler = pthread_self();
> +	pthread_mutex_unlock(&event_handler_lock);
>  }
>
>  /** \ingroup poll
> @@ -1127,7 +1131,9 @@ API_EXPORTED void libusb_lock_events(void)
>   */
>  API_EXPORTED void libusb_unlock_events(void)
>  {
> -	event_handler_active = 0;
> +	pthread_mutex_lock(&event_handler_lock);
> +	event_handler = 0;
> +	pthread_mutex_unlock(&event_handler_lock);
>  	pthread_mutex_unlock(&events_lock);
>
>  	pthread_mutex_lock(&event_waiters_lock);
> @@ -1145,21 +1151,25 @@ API_EXPORTED void libusb_unlock_events(void)
>   */
>  API_EXPORTED int libusb_event_handler_active(void)
>  {
> -	int r;
> +	int r = 0;
> +	int tmp;
>
> -	if (!event_handler_active)
> -		return 0;
> +	pthread_mutex_lock(&event_handler_lock);
> +	if (!event_handler)
> +		goto out;
>
> -	/* FIXME: temporary hack to ensure thread didn't quit (e.g. due to signal)
> -	 * without libusb_unlock_events being triggered */
> -	r = pthread_mutex_trylock(&events_lock);
> -	if (r == 0) {
> -		event_handler_active = 0;
> -		pthread_mutex_unlock(&events_lock);
> -		return 0;
> +	/* confirm the event handling thread is still alive */
> +	tmp = pthread_kill(event_handler, 0);
> +	if (tmp == ESRCH) {
> +		/* it went away without properly releasing the lock */

Add a "pthread_mutex_unlock(&events_lock);" here.

> +		event_handler = 0;
> +		goto out;
>  	}
>
> -	return 1;
> +	r = 1;
> +out:
> +	pthread_mutex_unlock(&event_handler_lock);
> +	return r;
>  }

-- 
Dr. Ludovic Rousseau
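[Editor's note: the lock-less scheme Ludovic argues for could look roughly like the
sketch below. All names are illustrative, not libusb's actual API, and it assumes
pthread_t is an integer type (as on the AMD64 Linux system discussed above), which
POSIX does not guarantee. C11 atomics are used to make the "updates are atomic"
assumption explicit rather than relying on plain stores.]

```c
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>

/* The handler's thread ID lives in a single atomic variable, so a thread
 * that dies can never leave a mutex locked behind it. */
static _Atomic pthread_t event_handler;		/* 0 = no handler active */

static void claim_event_handler(void)
{
	atomic_store(&event_handler, pthread_self());
}

static void release_event_handler(void)
{
	atomic_store(&event_handler, (pthread_t)0);
}

static int event_handler_active(void)
{
	/* take a snapshot, as in Ludovic's patch, so the value cannot be
	 * cleared between the check and the pthread_kill() call */
	pthread_t copy = atomic_load(&event_handler);

	if (!copy)
		return 0;

	/* signal 0 delivers nothing; it only checks that the thread exists */
	if (pthread_kill(copy, 0) == ESRCH) {
		/* the handler died without cleaning up: clear the stale ID,
		 * but only if no other thread claimed the slot meanwhile */
		atomic_compare_exchange_strong(&event_handler, &copy,
					       (pthread_t)0);
		return 0;
	}
	return 1;
}
```

Note that the ESRCH check inherits the portability caveat raised earlier in the
thread: POSIX only defines pthread_kill() for thread IDs whose lifetime has not
ended, so relying on ESRCH for a dead thread is an implementation-specific trick.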
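[Editor's note: the "2nd half of the fix" Daniel describes, capping the wait so a
blocked waiter wakes up periodically and can re-check whether the event handling
thread is still alive, could be sketched as below. The function and variable names
are hypothetical, not libusb's.]

```c
#include <errno.h>
#include <pthread.h>
#include <time.h>

static pthread_mutex_t waiters_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t waiters_cond = PTHREAD_COND_INITIALIZER;

/* Wait at most cap_sec seconds for an event notification.
 * Returns 1 if the cap expired, 0 if the condition was signalled.
 * (A real implementation would also recheck a completion predicate to
 * guard against spurious wakeups.) */
static int wait_for_event_capped(int cap_sec)
{
	struct timespec ts;
	int r;

	/* pthread_cond_timedwait() takes an absolute deadline */
	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec += cap_sec;

	pthread_mutex_lock(&waiters_lock);
	r = pthread_cond_timedwait(&waiters_cond, &waiters_lock, &ts);
	pthread_mutex_unlock(&waiters_lock);

	return r == ETIMEDOUT;
}
```

A waiter would call this in a loop with a 2-second cap, calling
libusb_event_handler_active() after each timeout so a dead handler is noticed
within at most 2 seconds, matching the responsiveness/battery trade-off Daniel
mentions.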