When HandleLock was converted to a read/write lock, the lock was made to be dropped inside genaNotifyThread while genaNotify runs.
genaNotify touches the network, so it is preferable that no lock be held while it is running.
However, in some cases the nature of read/write locks causes the write lock taken after genaNotify returns to be starved while genaNotifyThread is invoked over and over.
What I see is that an event is sent, and that thread then waits on the write lock. While it waits, another event is dispatched on another thread; that thread hits the "in->eventKey != sub->ToSendEventKey" case and reschedules itself.
The "in->eventKey != sub->ToSendEventKey" condition will not clear until the first thread can take the write lock. Until that happens, the second thread is free to call genaNotifyThread again with the next event. The next event has in->eventKey + 1, so it also hits "in->eventKey != sub->ToSendEventKey".
This race continues until the threads happen to be scheduled such that nobody is holding the read lock at a scheduling point. The starved write-lock thread is then dispatched and clears the condition.
This race can be triggered by eventing many variables sequentially.
It appears the easy fix for this would be to take the write lock for the entire genaNotifyThread.
A more involved fix would be to not schedule the next event job until ToSendEventKey has been incremented.