From: Tracker i. u. n. <pup...@li...> - 2008-10-15 03:48:29
Bugs item #2167374, was opened at 2008-10-14 20:48
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=841026&aid=2167374&group_id=166957
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Alexander Faucher (executedata)
Assigned to: Nobody/Anonymous (nobody)
Summary: Race condition in genaNotifyThread

Initial Comment:
When the HandleLock was changed to a read/write lock, the lock was dropped inside genaNotifyThread around the call to genaNotify. genaNotify touches the network, so it is preferable that no lock be held while it runs. However, the nature of read/write locks sometimes causes the write lock, which is taken after genaNotify returns, to be starved while genaNotifyThread is invoked over and over.

What I see is that an event is sent, and that thread then waits on the write lock. While it waits, another event is dispatched on a second thread, which hits the "in->eventKey != sub->ToSendEventKey" case and reschedules itself. That condition cannot clear until the first thread acquires the write lock. Until then, the second thread is free to enter genaNotifyThread again with the next event; the next event has in->eventKey + 1, so it also hits "in->eventKey != sub->ToSendEventKey". The race continues until the threads happen to be scheduled so that nobody is holding the read lock; the starved write-lock thread then finally runs and clears the condition. The race can be triggered by eventing many variables sequentially.

The easy fix appears to be taking the write lock for the entire genaNotifyThread. A more involved fix would be to not run the next event job until ToSendEventKey has been incremented.

-Alexander Faucher
----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=841026&aid=2167374&group_id=166957
From: Tracker i. u. n. <pup...@li...> - 2009-02-19 09:24:52
Bugs item #2167374: Message generated for change (Comment added) made by executedata.
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=841026&aid=2167374&group_id=166957

----------------------------------------------------------------------
>Comment By: Alexander Faucher (executedata) Date: 2009-02-19 01:24

Message:
It appears that simply dropping the strict ordering check "in->eventKey != sub->ToSendEventKey" clears the problem. This might break the UPnP specification, however; the check isn't justified anywhere in the code.
----------------------------------------------------------------------
From: Tracker i. u. n. <pup...@li...> - 2010-03-22 00:56:27
Bugs item #2167374: Message generated for change (Comment added) made by mroberto.
>Priority: 6
>Assigned to: Marcelo Roberto Jimenez (mroberto)

----------------------------------------------------------------------
>Comment By: Marcelo Roberto Jimenez (mroberto) Date: 2010-03-21 21:56

Message:
Hi Alexander,

Do you have a patch? That would make things simpler to analyze.

Regards,
Marcelo.
----------------------------------------------------------------------
From: Tracker i. u. n. <pup...@li...> - 2010-03-23 16:53:22
Bugs item #2167374: Message generated for change (Comment added) made by executedata.

----------------------------------------------------------------------
Comment By: Alexander Faucher (executedata) Date: 2010-03-23 09:53

Message:
Sorry, I don't; I went with the fast and easy solution. The proper fix is to keep a queue of events for each subscription and create a job that services that subscriber's events in order.
----------------------------------------------------------------------
From: Tracker i. u. n. <pup...@li...> - 2010-03-27 15:53:01
Bugs item #2167374: Message generated for change (Comment added) made by mroberto.

----------------------------------------------------------------------
>Comment By: Marcelo Roberto Jimenez (mroberto) Date: 2010-03-27 12:53

Message:
Hi Alexander,

I know I am asking a lot, but would it be possible for you to post here a small program that reproduces the bug, and to explain the setup? If I could reproduce the problem here, I could try to deal with this issue.

Regards,
Marcelo.
----------------------------------------------------------------------
From: Tracker i. u. n. <pup...@li...> - 2010-09-28 23:51:26
Bugs item #2167374: Message generated for change (Comment added) made by mroberto.
>Status: Closed
>Resolution: Fixed

----------------------------------------------------------------------
>Comment By: Marcelo Roberto Jimenez (mroberto) Date: 2010-09-28 20:51

Message:
A bug fix has been committed by Fabrice Fontaine; please check.
----------------------------------------------------------------------