Latest version of opal 3.18.3 from git:
/lib64/libpthread.so.0(+0x7dc5) [0x7f892bc81dc5]
/lib64/libc.so.6(clone
1:28:14.379 RTP-Report:8418 osutils.cxx(3759) PTLib Assertion fail: Released phantom deadlock, held from 14:10:02.437 to 14:10:03.962 (1.525s), in read/write mutex 0x7f41140cdc98 (mediasession.h(650) OpalMediaSession,1000ms) at rtp_session.cxx(2038)
/lib64/libpt.so.2.18.3(PPlatformWalkStack(std::ostream&, unsigned long, int, unsigned int, bool)+0xd4) [0x7f414b02dc94]
/lib64/libpt.so.2.18.3(PTrace::WalkStack(std::ostream&, unsigned long, int, bool)+0x48) [0x7f414b048448]
/lib64/libpt.so.2.18.3(PMutexExcessiveLockInfo::ReleasedLock(PObject const&, unsigned long, bool, PDebugLocation const&)+0x2f3) [0x7f414b013ce3]
/lib64/libpt.so.2.18.3(PReadWriteMutex::InternalEndWrite(PDebugLocation const*)+0x7d) [0x7f414b014f9d]
/lib64/libpt.so.2.18.3(PSafeObject::InternalUnlockReadWrite(PDebugLocation const*) const+0xc2) [0x7f414affddb2]
/lib64/libopal.so.3.18.3(OpalRTPSession::SendReport(unsigned int, bool, PTime const&)+0x29c) [0x7f414a5e114c]
/lib64/libopal.so.3.18.3(OpalRTPSession::TimedSendReport(PTimer&, long)+0xbb) [0x7f414a5d55bb]
/lib64/libpt.so.2.18.3(PTimer::OnTimeout()+0xfe) [0x7f414b01659e]
/lib64/libpt.so.2.18.3(PTimer::List::OnTimeout(unsigned int)+0xe8) [0x7f414b013578]
/lib64/libpt.so.2.18.3(PTimer::List::Timeout::Work()+0x2c) [0x7f414b0136fc]
/lib64/libpt.so.2.18.3(PQueuedThreadPool&lt;PTimer::List::Timeout&gt;::QueuedWorkerThread::Work()+0x10e) [0x7f414b02bcfe]
/lib64/libpt.so.2.18.3(PThreadPoolBase::WorkerThreadBase::Main()+0x7c) [0x7f414aff9e5c]
/lib64/libpt.so.2.18.3(PThread::InternalThreadMain()+0x3d) [0x7f414b01858d]
/lib64/libpt.so.2.18.3(PThread::PX_ThreadMain(void*)+0x30) [0x7f414afee100]
/lib64/libpthread.so.0(+0x7dc5) [0x7f4148087dc5]
/lib64/libc.so.6(clone
1:28:14.855 OpalMixer:1251 osutils.cxx(3738) PTLib Assertion fail: Phantom deadlock in read/write mutex 0x7f41140cdc98 (mediasession.h(650) OpalMediaSession,1000ms)
1:28:14.856 RTP-1-media:1241 osutils.cxx(3738) PTLib Assertion fail: Phantom deadlock in read/write mutex 0x7f41140cdc98 (mediasession.h(650) OpalMediaSession,1000ms)
1:28:14.848 RTP-Report:8242 osutils.cxx(3759) PTLib Assertion fail: Released phantom deadlock, held from 14:10:02.243 to 14:10:04.210 (1.966s), in read/write mutex 0x7f88cc0dcc78 (mediasession.h(650) OpalMediaSession,1000ms) at rtp_session.cxx(2038)
/lib64/libpt.so.2.18.3(PPlatformWalkStack(std::ostream&, unsigned long, int, unsigned int, bool)+0xd4) [0x7f892ec27c94]
/lib64/libpt.so.2.18.3(PTrace::WalkStack(std::ostream&, unsigned long, int, bool)+0x48) [0x7f892ec42448]
/lib64/libpt.so.2.18.3(PMutexExcessiveLockInfo::ReleasedLock(PObject const&, unsigned long, bool, PDebugLocation const&)+0x2f3) [0x7f892ec0dce3]
/lib64/libpt.so.2.18.3(PReadWriteMutex::InternalEndWrite(PDebugLocation const*)+0x7d) [0x7f892ec0ef9d]
/lib64/libpt.so.2.18.3(PSafeObject::InternalUnlockReadWrite(PDebugLocation const*) const+0xc2) [0x7f892ebf7db2]
/lib64/libopal.so.3.18.3(OpalRTPSession::SendReport(unsigned int, bool, PTime const&)+0x29c) [0x7f892e1db14c]
/lib64/libopal.so.3.18.3(OpalRTPSession::TimedSendReport(PTimer&, long)+0xbb) [0x7f892e1cf5bb]
/lib64/libpt.so.2.18.3(PTimer::OnTimeout()+0xfe) [0x7f892ec1059e]
/lib64/libpt.so.2.18.3(PTimer::List::OnTimeout(unsigned int)+0xe8) [0x7f892ec0d578]
/lib64/libpt.so.2.18.3(PTimer::List::Timeout::Work()+0x2c) [0x7f892ec0d6fc]
/lib64/libpt.so.2.18.3(PQueuedThreadPool&lt;PTimer::List::Timeout&gt;::QueuedWorkerThread::Work()+0x10e) [0x7f892ec25cfe]
/lib64/libpt.so.2.18.3(PThreadPoolBase::WorkerThreadBase::Main()+0x7c) [0x7f892ebf3e5c]
/lib64/libpt.so.2.18.3(PThread::InternalThreadMain()+0x3d) [0x7f892ec1258d]
/lib64/libpt.so.2.18.3(PThread::PX_ThreadMain(void*)+0x30) [0x7f892ebe8100]
/lib64/libpthread.so.0(+0x7dc5) [0x7f892bc81dc5]
There is not much I can do with such limited information.
Phantom deadlocks just mean that some operations took longer than expected. The usual cause is that your machine has become too busy; was this under heavy load?
Last edit: Robert Jongbloed 2020-04-11
I have the same problem. It is not related to all CPUs being busy; CPU contention shouldn't matter anyway, because t38modem runs at nice -10. It happens often enough that I can capture traces of it.