From: <js...@ac...> - 2007-09-30 13:51:11
Nightly build on minnie (SuSE 10.0, ppc32) started at 2007-09-30 09:00:01 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 220 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
From: <sv...@va...> - 2007-09-30 10:48:33
Author: sewardj
Date: 2007-09-30 11:48:32 +0100 (Sun, 30 Sep 2007)
New Revision: 6926
Log:
More suppression wibbles.
Modified:
branches/THRCHECK/glibc-2.X-thrcheck.supp
Modified: branches/THRCHECK/glibc-2.X-thrcheck.supp
===================================================================
--- branches/THRCHECK/glibc-2.X-thrcheck.supp 2007-09-30 10:47:58 UTC (rev 6925)
+++ branches/THRCHECK/glibc-2.X-thrcheck.supp 2007-09-30 10:48:32 UTC (rev 6926)
@@ -199,14 +199,14 @@
fun:pthread_cond_destroy@@GLIBC_2.3.2
}
-#z###--- pthread_mutex_trylock ---###
-#z# ditto
-#z{
-#z thrcheck-glibc2X-pthmxtrylock-1
-#z Thrcheck:Race
-#z fun:pthread_mutex_trylock
-#z}
-#z
+###--- pthread_mutex_trylock ---###
+{
+ thrcheck-glibc2X-pthmxtrylock-1
+ Thrcheck:Race
+ fun:pthread_mutex_trylock
+ fun:pthread_mutex_trylock
+}
+
#z###--- pthread_cond_timedwait ---###
#z{
#z thrcheck-glibc2X-pthmxtimedwait-1
@@ -256,13 +256,14 @@
fun:mythread_wrapper
fun:start_thread
}
+{
+ thrcheck-glibc2X-libpthread-6
+ Thrcheck:Race
+ fun:__deallocate_stack
+ fun:start_thread
+ fun:*clone*
+}
#z{
-#z thrcheck-glibc2X-libpthread-6
-#z Thrcheck:Race
-#z fun:__deallocate_stack
-#z fun:start_thread
-#z}
-#z{
#z thrcheck-glibc2X-libpthread-7
#z Thrcheck:Race
#z fun:__deallocate_stack
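For reference, each block being toggled in the diff above (the `#z` prefix is what comments an entry out) follows Valgrind's standard suppression-file shape: a free-form name, a `tool:error-kind` line, and a top-down list of `fun:`/`obj:` frame patterns, in which `*` and `?` wildcards are allowed. A minimal sketch of that shape, with a hypothetical entry name:

```
{
   thrcheck-glibc2X-example-1
   Thrcheck:Race
   fun:pthread_mutex_trylock
   fun:*
}
```

An entry matches, and the error is suppressed, when the error kind matches and the patterns match the innermost frames of the error's stack trace.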
From: <sv...@va...> - 2007-09-30 10:47:59
Author: sewardj
Date: 2007-09-30 11:47:58 +0100 (Sun, 30 Sep 2007)
New Revision: 6925
Log:
* complete first pass at support for reader-writer locks
* tidy up evim__ vs ev__ functions
* initial support for Qt4 QReadWriteLock class
* probably many other small changes
Modified:
branches/THRCHECK/thrcheck/tc_intercepts.c
branches/THRCHECK/thrcheck/tc_main.c
branches/THRCHECK/thrcheck/tc_wordset.c
branches/THRCHECK/thrcheck/tc_wordset.h
branches/THRCHECK/thrcheck/thrcheck.h
Modified: branches/THRCHECK/thrcheck/tc_intercepts.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_intercepts.c 2007-09-27 19:09:01 UTC (rev 6924)
+++ branches/THRCHECK/thrcheck/tc_intercepts.c 2007-09-30 10:47:58 UTC (rev 6925)
@@ -273,9 +273,19 @@
/*----------------------------------------------------------------*/
-/*--- pthread_mutex_lock, pthread_mutex_unlock, et al ---*/
+/*--- pthread_mutex_t functions ---*/
/*----------------------------------------------------------------*/
+/* Handled: pthread_mutex_init pthread_mutex_destroy
+ pthread_mutex_lock pthread_mutex_trylock
+ pthread_mutex_unlock
+
+ Unhandled: pthread_mutex_timedlock -- FIXME
+
+ FIXME: pthread_spin_init pthread_spin_destroy
+ pthread_spin_lock pthread_spin_unlock pthread_spin_trylock
+*/
+
// pthread_mutex_init
PTH_FUNC(int, pthreadZumutexZuinit, // pthread_mutex_init
pthread_mutex_t *mutex,
@@ -300,7 +310,7 @@
CALL_FN_W_WW(ret, fn, mutex,attr);
if (ret == 0 /*success*/) {
- DO_CREQ_v_WW(_VG_USERREQ__tc_PTHREAD_MUTEX_INIT_POST,
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_INIT_POST,
pthread_mutex_t*,mutex, long,mbRec);
} else {
DO_PthAPIerror( "pthread_mutex_init", ret );
@@ -324,12 +334,12 @@
fprintf(stderr, "<< pthread_mxdestroy %p", mutex); fflush(stderr);
}
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_DESTROY_PRE,
+ pthread_mutex_t*,mutex);
+
CALL_FN_W_W(ret, fn, mutex);
- if (ret == 0 /*success*/) {
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_DESTROY_POST,
- pthread_mutex_t*,mutex);
- } else {
+ if (ret != 0) {
DO_PthAPIerror( "pthread_mutex_destroy", ret );
}
@@ -351,7 +361,7 @@
fprintf(stderr, "<< pthread_mxlock %p", mutex); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_LOCK_PRE,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
pthread_mutex_t*,mutex);
CALL_FN_W_W(ret, fn, mutex);
@@ -387,7 +397,7 @@
fprintf(stderr, "<< pthread_mxtrylock %p", mutex); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_LOCK_PRE,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
pthread_mutex_t*,mutex);
CALL_FN_W_W(ret, fn, mutex);
@@ -430,7 +440,7 @@
CALL_FN_W_W(ret, fn, mutex);
if (ret == 0 /*success*/) {
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_UNLOCK_POST,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_UNLOCK_POST,
pthread_mutex_t*,mutex);
} else {
DO_PthAPIerror( "pthread_mutex_unlock", ret );
@@ -444,9 +454,16 @@
/*----------------------------------------------------------------*/
-/*--- pthread_cond_wait, pthread_cond_signal, et al ---*/
+/*--- pthread_cond_t functions ---*/
/*----------------------------------------------------------------*/
+/* Handled: pthread_cond_wait pthread_cond_timedwait
+ pthread_cond_signal pthread_cond_broadcast
+
+ Unhandled: pthread_cond_init pthread_cond_destroy
+ -- are these important?
+*/
+
// pthread_cond_wait
PTH_FUNC(int, pthreadZucondZuwaitZAZa, // pthread_cond_wait@*
pthread_cond_t* cond, pthread_mutex_t* mutex)
@@ -469,13 +486,16 @@
CALL_FN_W_WW(ret, fn, cond,mutex);
+ /* And now we have the mutex again, regardless of the error code
+ returned. */
+ // FIXME: but only if we actually had it before the call
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST,
+ pthread_mutex_t*,mutex);
+
if (ret == 0) {
DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_COND_WAIT_POST,
pthread_cond_t*,cond, pthread_mutex_t*,mutex);
- /* And now we have the mutex again. */
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST,
- pthread_mutex_t*,mutex);
} else {
DO_PthAPIerror( "pthread_cond_wait", ret );
}
@@ -512,13 +532,17 @@
CALL_FN_W_WWW(ret, fn, cond,mutex,abstime);
+ /* And now we have the mutex again, regardless of the error code
+ returned. In particular we still have it even if
+ ret==ETIMEDOUT. */
+ // FIXME: but only if we actually had it before the call
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST,
+ pthread_mutex_t*,mutex);
+
if (ret == 0) {
DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_COND_WAIT_POST,
pthread_cond_t*,cond, pthread_mutex_t*,mutex);
- /* And now we have the mutex again. */
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST,
- pthread_mutex_t*,mutex);
} else {
DO_PthAPIerror( "pthread_cond_timedwait", ret );
}
@@ -595,14 +619,193 @@
/*----------------------------------------------------------------*/
+/*--- pthread_rwlock_t functions ---*/
+/*----------------------------------------------------------------*/
+
+/* Handled: pthread_rwlock_init pthread_rwlock_destroy
+ pthread_rwlock_rdlock
+ pthread_rwlock_wrlock
+ pthread_rwlock_unlock
+
+ Unhandled: pthread_rwlock_timedrdlock
+ pthread_rwlock_tryrdlock
+
+ pthread_rwlock_timedwrlock
+ pthread_rwlock_trywrlock
+*/
+
+// pthread_rwlock_init
+PTH_FUNC(int, pthreadZurwlockZuinit, // pthread_rwlock_init
+ pthread_rwlock_t *rwl,
+ pthread_rwlockattr_t* attr)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_rwl_init %p", rwl); fflush(stderr);
+ }
+
+ CALL_FN_W_WW(ret, fn, rwl,attr);
+
+ if (ret == 0 /*success*/) {
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_INIT_POST,
+ pthread_rwlock_t*,rwl);
+ } else {
+ DO_PthAPIerror( "pthread_rwlock_init", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: rwl_init -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
+// pthread_rwlock_destroy
+PTH_FUNC(int, pthreadZurwlockZudestroy, // pthread_rwlock_destroy
+ pthread_rwlock_t *rwl)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_rwl_destroy %p", rwl); fflush(stderr);
+ }
+
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_DESTROY_PRE,
+ pthread_rwlock_t*,rwl);
+
+ CALL_FN_W_W(ret, fn, rwl);
+
+ if (ret != 0) {
+ DO_PthAPIerror( "pthread_rwlock_destroy", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: rwl_destroy -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
+PTH_FUNC(int, pthreadZurwlockZuwrlock, // pthread_rwlock_wrlock
+ pthread_rwlock_t* rwlock)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_rwl_wlk %p", rwlock); fflush(stderr);
+ }
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_PRE,
+ pthread_rwlock_t*,rwlock, long,1/*isW*/);
+
+ CALL_FN_W_W(ret, fn, rwlock);
+
+ if (ret == 0 /*success*/) {
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_POST,
+ pthread_rwlock_t*,rwlock, long,1/*isW*/);
+ } else {
+ DO_PthAPIerror( "pthread_rwlock_wrlock", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: rwl_wlk -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
+PTH_FUNC(int, pthreadZurwlockZurdlock, // pthread_rwlock_rdlock
+ pthread_rwlock_t* rwlock)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_rwl_rlk %p", rwlock); fflush(stderr);
+ }
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_PRE,
+ pthread_rwlock_t*,rwlock, long,0/*!isW*/);
+
+ CALL_FN_W_W(ret, fn, rwlock);
+
+ if (ret == 0 /*success*/) {
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_POST,
+ pthread_rwlock_t*,rwlock, long,0/*!isW*/);
+ } else {
+ DO_PthAPIerror( "pthread_rwlock_rdlock", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: rwl_rlk -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
+PTH_FUNC(int, pthreadZurwlockZuunlock, // pthread_rwlock_unlock
+ pthread_rwlock_t* rwlock)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_rwl_unlk %p", rwlock); fflush(stderr);
+ }
+
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_UNLOCK_PRE,
+ pthread_rwlock_t*,rwlock);
+
+ CALL_FN_W_W(ret, fn, rwlock);
+
+ if (ret == 0 /*success*/) {
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_UNLOCK_POST,
+ pthread_rwlock_t*,rwlock);
+ } else {
+ DO_PthAPIerror( "pthread_rwlock_unlock", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: rwl_unlk -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
+/*----------------------------------------------------------------*/
/*--- ---*/
/*----------------------------------------------------------------*/
+/* Handled: QMutex::lock()
+ QMutex::unlock()
+ QMutex::tryLock
+ QReadWriteLock::lockForRead()
+ QReadWriteLock::lockForWrite()
+ QReadWriteLock::unlock()
+
+ Unhandled: QMutex::tryLock(int)
+ QReadWriteLock::tryLockForRead(int)
+ QReadWriteLock::tryLockForRead()
+ QReadWriteLock::tryLockForWrite(int)
+ QReadWriteLock::tryLockForWrite()
+
+ maybe not the next 3; qt-4.3.1 on Unix merely
+ implements QWaitCondition using pthread_cond_t
+ QWaitCondition::wait(QMutex*, unsigned long)
+ QWaitCondition::wakeAll()
+ QWaitCondition::wakeOne()
+*/
+
// soname is libQtCore.so.4 ; match against libQtCore.so*
#define QT4_FUNC(ret_ty, f, args...) \
ret_ty I_WRAP_SONAME_FNNAME_ZZ(libQtCoreZdsoZa,f)(args); \
ret_ty I_WRAP_SONAME_FNNAME_ZZ(libQtCoreZdsoZa,f)(args)
+// QMutex::lock()
QT4_FUNC(void, ZuZZN6QMutex4lockEv, // _ZN6QMutex4lockEv == QMutex::lock()
void* self)
{
@@ -612,7 +815,7 @@
fprintf(stderr, "<< QMutex::lock %p", self); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_LOCK_PRE,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
void*, self);
CALL_FN_v_W(fn, self);
@@ -621,10 +824,11 @@
void*, self);
if (TRACE_QT4_FNS) {
- fprintf(stderr, " :: QMutex::lock done >>\n");
+ fprintf(stderr, " :: Q::lock done >>\n");
}
}
+// QMutex::unlock()
QT4_FUNC(void, ZuZZN6QMutex6unlockEv, // _ZN6QMutex6unlockEv == QMutex::unlock()
void* self)
{
@@ -640,14 +844,15 @@
CALL_FN_v_W(fn, self);
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_UNLOCK_POST,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_UNLOCK_POST,
void*, self);
if (TRACE_QT4_FNS) {
- fprintf(stderr, " QMutex::unlock done >>\n");
+ fprintf(stderr, " Q::unlock done >>\n");
}
}
+// QMutex::tryLock
// _ZN6QMutex7tryLockEv == bool QMutex::tryLock()
// using 'long' to mimic C++ 'bool'
QT4_FUNC(long, ZuZZN6QMutex7tryLockEv,
@@ -656,11 +861,11 @@
OrigFn fn;
long ret;
VALGRIND_GET_ORIG_FN(fn);
- if (1|| TRACE_QT4_FNS) {
+ if (TRACE_QT4_FNS) {
fprintf(stderr, "<< QMutex::tryLock %p", self); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__tc_PTHREAD_MUTEX_LOCK_PRE,
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
void*, self);
CALL_FN_W_W(ret, fn, self);
@@ -671,19 +876,93 @@
void*, self);
}
- if (1|| TRACE_QT4_FNS) {
- fprintf(stderr, " :: QMutex::tryLock -> %lu >>\n", ret);
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, " :: Q::tryLock -> %lu >>\n", ret);
}
return ret;
}
-/*
-bool QMutex::tryLock(int timeout) _ZN6QMutex7tryLockEi
- _ZN6QMutex7tryLockEv
-*/
+// QReadWriteLock::lockForRead()
+// _ZN14QReadWriteLock11lockForReadEv == QReadWriteLock::lockForRead()
+QT4_FUNC(void, ZuZZN14QReadWriteLock11lockForReadEv,
+ // _ZN14QReadWriteLock11lockForReadEv
+ void* self)
+{
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, "<< QReadWriteLock::lockForRead %p", self);
+ fflush(stderr);
+ }
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_PRE,
+ void*,self, long,0/*!isW*/);
+
+ CALL_FN_v_W(fn, self);
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_POST,
+ void*,self, long,0/*!isW*/);
+
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, " :: Q::lockForRead :: done >>\n");
+ }
+}
+
+// QReadWriteLock::lockForWrite()
+// _ZN14QReadWriteLock12lockForWriteEv == QReadWriteLock::lockForWrite()
+QT4_FUNC(void, ZuZZN14QReadWriteLock12lockForWriteEv,
+ // _ZN14QReadWriteLock12lockForWriteEv
+ void* self)
+{
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, "<< QReadWriteLock::lockForWrite %p", self);
+ fflush(stderr);
+ }
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_PRE,
+ void*,self, long,1/*isW*/);
+
+ CALL_FN_v_W(fn, self);
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_RWLOCK_LOCK_POST,
+ void*,self, long,1/*isW*/);
+
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, " :: Q::lockForWrite :: done >>\n");
+ }
+}
+
+// QReadWriteLock::unlock()
+// _ZN14QReadWriteLock6unlockEv == QReadWriteLock::unlock()
+QT4_FUNC(void, ZuZZN14QReadWriteLock6unlockEv,
+ // _ZN14QReadWriteLock6unlockEv
+ void* self)
+{
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, "<< QReadWriteLock::unlock %p", self);
+ fflush(stderr);
+ }
+
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_UNLOCK_PRE,
+ void*,self);
+
+ CALL_FN_v_W(fn, self);
+
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_RWLOCK_UNLOCK_POST,
+ void*,self);
+
+ if (TRACE_QT4_FNS) {
+ fprintf(stderr, " :: Q::unlock :: done >>\n");
+ }
+}
+
+
/*--------------------------------------------------------------------*/
/*--- end tc_intercepts.c ---*/
/*--------------------------------------------------------------------*/
Modified: branches/THRCHECK/thrcheck/tc_main.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_main.c 2007-09-27 19:09:01 UTC (rev 6924)
+++ branches/THRCHECK/thrcheck/tc_main.c 2007-09-30 10:47:58 UTC (rev 6925)
@@ -69,6 +69,11 @@
// FIXME accesses to NoAccess areas: change state to Excl?
+// FIXME report errors for accesses of NoAccess memory?
+
+// FIXME pth_cond_wait/timedwait wrappers. Even if these fail,
+// the thread still holds the lock.
+
// this is:
// shadow_mem_make_NoAccess: 29156 SMs, 1728 scanned
// happens_before_wrk: 1000
@@ -190,7 +195,7 @@
/* These are handles for Word sets. CONSTRAINTS: must be (very) small
- ints numbered from zero, since 15-bit versions of them are used to
+ ints numbered from zero, since < 30-bit versions of them are used to
encode thread-sets and lock-sets in 32-bit shadow words. */
typedef WordSet WordSetID;
@@ -204,7 +209,8 @@
struct _Thread* admin;
UInt magic;
/* USEFUL */
- WordSetID lockset; /* WordSet of Lock* currently held by thread */
+ WordSetID locksetA; /* WordSet of Lock* currently held by thread */
+ WordSetID locksetW; /* subset of locksetA held in w-mode */
SegmentID csegid; /* current thread segment for thread */
/* EXPOSITION */
/* Place where parent was when this thread was created. */
@@ -252,8 +258,8 @@
/* .heldBy is NULL: lock is unheld, and .heldW is meaningless
but arbitrarily set to False
.heldBy is non-NULL:
- .heldW is True: lock is w-held by threads in heldBy
- .heldR is False: lock is r-held by threads in heldBy
+ .heldW is True: lock is w-held by threads in heldBy
+ .heldW is False: lock is r-held by threads in heldBy
Either way, heldBy may not validly be an empty Bag.
for LK_nonRec, r-holdings are not allowed, and w-holdings may
@@ -339,10 +345,11 @@
static inline Bool is_sane_LockN ( Lock* lock ); /* fwds */
-static Thread* mk_Thread ( WordSetID lockset, SegmentID csegid ) {
+static Thread* mk_Thread ( SegmentID csegid ) {
static Int indx = 1;
Thread* thread = tc_zalloc( sizeof(Lock) );
- thread->lockset = lockset;
+ thread->locksetA = TC_(emptyWS)( univ_lsets );
+ thread->locksetW = TC_(emptyWS)( univ_lsets );
thread->csegid = csegid;
thread->magic = Thread_MAGIC;
thread->created_at = NULL;
@@ -480,9 +487,9 @@
goto case_LK_nonRec;
/* 2nd and subsequent locking of a lock by its owner */
tl_assert(lk->heldW);
- /* assert: lk is only held by one thread .. */
- tl_assert(TC_(sizeUniqueBag(lk->heldBy)) == 1);
- /* assert: .. and that thread is 'thr'. */
+ /* assert: lk is only held by one thread .. */
+ tl_assert(TC_(sizeUniqueBag(lk->heldBy)) == 1);
+ /* assert: .. and that thread is 'thr'. */
tl_assert(TC_(elemBag)(lk->heldBy, (Word)thr)
== TC_(sizeTotalBag)(lk->heldBy));
TC_(addToBag)(lk->heldBy, (Word)thr);
@@ -544,14 +551,10 @@
/* Proposal (for debugging sanity):
- WordSetIDs from 0 .. 0x7FFF (32768)
SegmentIDs from 0x1000000 .. 0x1FFFFFF (16777216)
All other xxxID handles are invalid.
*/
-static inline Bool is_sane_WordSetID ( WordSetID wset ) {
- return wset >= 0 && wset <= 0x7FFF;
-}
static inline Bool is_sane_SegmentID ( SegmentID tseg ) {
return tseg >= 0x1000000 && tseg <= 0x1FFFFFF;
}
@@ -587,25 +590,49 @@
}
/* Shadow value encodings:
- 11 WordSetID:15 WordSetID:15 ShM thread-set lock-set
- 10 WordSetID:15 WordSetID:15 ShR thread-set lock-set
- 01 TSegmentID:30 Excl thread-segment
- 00 0--(20)--0 10 0000 0000 New
- 00 0--(20)--0 01 0000 0000 NoAccess
- Note that the elements in thread sets are TUniqueIDs and not
- ThreadIds. */
+ 11 WordSetID:TSID_BITS WordSetID:LSID_BITS ShM thread-set lock-set
+ 10 WordSetID:TSID_BITS WordSetID:LSID_BITS ShR thread-set lock-set
+ 01 TSegmentID:30 Excl thread-segment
+ 00 0--(20)--0 10 0000 0000 New
+ 00 0--(20)--0 01 0000 0000 NoAccess
+
+ TSID_BITS + LSID_BITS must equal 30.
+ The elements in thread sets are Thread*, casted to Word.
+ The elements in lock sets are Lock*, casted to Word.
+*/
+
+#define N_LSID_BITS 16
+#define N_LSID_MASK ((1 << (N_LSID_BITS)) - 1)
+#define N_LSID_SHIFT 0
+
+#define N_TSID_BITS (30 - (N_LSID_BITS))
+#define N_TSID_MASK ((1 << (N_TSID_BITS)) - 1)
+#define N_TSID_SHIFT (N_LSID_BITS)
+
+static inline Bool is_sane_WordSetID_LSet ( WordSetID wset ) {
+ return wset >= 0 && wset <= N_LSID_MASK;
+}
+static inline Bool is_sane_WordSetID_TSet ( WordSetID wset ) {
+ return wset >= 0 && wset <= N_TSID_MASK;
+}
+
+
#define SHMEM_New ((UInt)(2<<8))
#define SHMEM_NoAccess ((UInt)(1<<8))
static inline UInt mk_SHMEM_ShM ( WordSetID tset, WordSetID lset ) {
- tl_assert(is_sane_WordSetID(tset));
- tl_assert(is_sane_WordSetID(lset));
- return (UInt)( (3<<30) | (tset << 15) | (lset << 0));
+ tl_assert(is_sane_WordSetID_TSet(tset));
+ tl_assert(is_sane_WordSetID_LSet(lset));
+ return (UInt)( (3<<30) | (tset << N_TSID_SHIFT)
+ | (lset << N_LSID_SHIFT));
}
static inline UInt mk_SHMEM_ShR ( WordSetID tset, WordSetID lset ) {
- tl_assert(is_sane_WordSetID(tset));
- tl_assert(is_sane_WordSetID(lset));
- return (UInt)( (2<<30) | (tset << 15) | (lset << 0));
+ //if ((!is_sane_WordSetID(tset)) || (!is_sane_WordSetID(lset)))
+ // VG_(printf)("XXXXXXXXXX %d %d\n", (Int)tset, (Int)lset);
+ tl_assert(is_sane_WordSetID_TSet(tset));
+ tl_assert(is_sane_WordSetID_LSet(lset));
+ return (UInt)( (2<<30) | (tset << N_TSID_SHIFT)
+ | (lset << N_LSID_SHIFT));
}
static inline UInt mk_SHMEM_Excl ( SegmentID tseg ) {
tl_assert(is_sane_SegmentID(tseg));
@@ -637,27 +664,27 @@
}
static inline WordSetID un_SHMEM_ShR_tset ( UInt w32 ) {
tl_assert(is_SHMEM_ShR(w32));
- return (w32 >> 15) & 0x7FFF;
+ return (w32 >> N_TSID_SHIFT) & N_TSID_MASK;
}
static inline WordSetID un_SHMEM_ShR_lset ( UInt w32 ) {
tl_assert(is_SHMEM_ShR(w32));
- return (w32 >> 0) & 0x7FFF;
+ return (w32 >> N_LSID_SHIFT) & N_LSID_MASK;
}
static inline WordSetID un_SHMEM_ShM_tset ( UInt w32 ) {
tl_assert(is_SHMEM_ShM(w32));
- return (w32 >> 15) & 0x7FFF;
+ return (w32 >> N_TSID_SHIFT) & N_TSID_MASK;
}
static inline WordSetID un_SHMEM_ShM_lset ( UInt w32 ) {
tl_assert(is_SHMEM_ShM(w32));
- return (w32 >> 0) & 0x7FFF;
+ return (w32 >> N_LSID_SHIFT) & N_LSID_MASK;
}
static inline WordSetID un_SHMEM_Sh_tset ( UInt w32 ) {
tl_assert(is_SHMEM_Sh(w32));
- return (w32 >> 15) & 0x7FFF;
+ return (w32 >> N_TSID_SHIFT) & N_TSID_MASK;
}
static inline WordSetID un_SHMEM_Sh_lset ( UInt w32 ) {
tl_assert(is_SHMEM_Sh(w32));
- return (w32 >> 0) & 0x7FFF;
+ return (w32 >> N_LSID_SHIFT) & N_LSID_MASK;
}
@@ -692,11 +719,12 @@
{
space(d+0); VG_(printf)("Thread %p {\n", t);
if (sHOW_ADMIN) {
- space(d+3); VG_(printf)("admin %p\n", t->admin);
- space(d+3); VG_(printf)("magic 0x%x\n", (UInt)t->magic);
+ space(d+3); VG_(printf)("admin %p\n", t->admin);
+ space(d+3); VG_(printf)("magic 0x%x\n", (UInt)t->magic);
}
- space(d+3); VG_(printf)("lockset %d\n", (Int)t->lockset);
- space(d+3); VG_(printf)("csegid 0x%x\n", (UInt)t->csegid);
+ space(d+3); VG_(printf)("locksetA %d\n", (Int)t->locksetA);
+ space(d+3); VG_(printf)("locksetW %d\n", (Int)t->locksetW);
+ space(d+3); VG_(printf)("csegid 0x%x\n", (UInt)t->csegid);
space(d+0); VG_(printf)("}\n");
}
@@ -966,7 +994,6 @@
{
SegmentID segid;
Segment* seg;
- WordSetID empty;
Thread* thr;
/* Get everything initialised and zeroed. */
@@ -1019,8 +1046,7 @@
map_segments_add( segid, seg );
/* a Thread for the new thread ... */
- empty = TC_(emptyWS)( univ_lsets );
- thr = mk_Thread( empty, segid );
+ thr = mk_Thread( segid );
seg->thr = thr;
/* and bind it in the thread-map table */
@@ -1778,15 +1804,18 @@
#define BAD(_str) do { how = (_str); goto bad; } while (0)
Char* how = "no error";
Thread* thr;
- WordSetID ws;
+ WordSetID wsA, wsW;
Word* ls_words;
Word ls_size, i;
Lock* lk;
Segment* seg;
for (thr = admin_threads; thr; thr = thr->admin) {
- if (!is_sane_Thread(thr)) BAD("1");
- ws = thr->lockset;
- TC_(getPayloadWS)( &ls_words, &ls_size, univ_lsets, ws );
+ if (!is_sane_Thread(thr)) BAD("1");
+ wsA = thr->locksetA;
+ wsW = thr->locksetW;
+ // locks held in W mode are a subset of all locks held
+ if (!TC_(isSubsetOf)( univ_lsets, wsW, wsA )) BAD("7");
+ TC_(getPayloadWS)( &ls_words, &ls_size, univ_lsets, wsA );
for (i = 0; i < ls_size; i++) {
lk = (Lock*)ls_words[i];
// Thread.lockset: each element is really a valid Lock
@@ -1851,18 +1880,25 @@
// is_sane_LockN above ensures these
tl_assert(count >= 1);
tl_assert(is_sane_Thread(thr));
- if (!TC_(elemWS)(univ_lsets, thr->lockset, (Word)lk))
+ if (!TC_(elemWS)(univ_lsets, thr->locksetA, (Word)lk))
+ BAD("6");
+ // also check the w-only lockset
+ if (lk->heldW
+ && !TC_(elemWS)(univ_lsets, thr->locksetW, (Word)lk))
BAD("7");
+ if ((!lk->heldW)
+ && TC_(elemWS)(univ_lsets, thr->locksetW, (Word)lk))
+ BAD("8");
}
TC_(doneIterBag)( lk->heldBy );
} else {
/* lock not held by anybody */
- if (lk->heldW) BAD("7a"); /* should be False if !heldBy */
+ if (lk->heldW) BAD("9"); /* should be False if !heldBy */
// since lk is unheld, then (no lockset contains lk)
// hmm, this is really too expensive to check. Hmm.
}
// secmaps for lk has .anyLocks == True
- if (!shmem__get_anyLocks(lk->guestaddr)) BAD("12");
+ if (!shmem__get_anyLocks(lk->guestaddr)) BAD("10");
}
return;
@@ -2203,7 +2239,7 @@
Segment* seg_old = map_segments_lookup( segid_old );
Thread* thr_old = seg_old->thr;
tset = TC_(doubletonWS)( univ_tsets, (Word)thr_old, (Word)thr_acc );
- lset = add_BHL( thr_acc->lockset );
+ lset = add_BHL( thr_acc->locksetA ); /* read ==> use all locks */
wnew = mk_SHMEM_ShR( tset, lset );
*swordP = wnew;
sm->anyShared = True;
@@ -2222,7 +2258,8 @@
tset_old, (Word)thr_acc );
WordSetID lset_new = TC_(intersectWS)( univ_lsets,
lset_old,
- add_BHL(thr_acc->lockset) );
+ add_BHL(thr_acc->locksetA)
+ /* read ==> use all locks */ );
UInt wnew = mk_SHMEM_ShR( tset_new, lset_new );
if (lset_old != lset_new)
record_last_lock_lossage(a,lset_old,lset_new);
@@ -2242,7 +2279,8 @@
tset_old, (Word)thr_acc );
WordSetID lset_new = TC_(intersectWS)( univ_lsets,
lset_old,
- add_BHL(thr_acc->lockset) );
+ add_BHL(thr_acc->locksetA)
+ /* read ==> use all locks */ );
UInt wnew = mk_SHMEM_ShM( tset_new, lset_new );
if (lset_old != lset_new)
record_last_lock_lossage(a,lset_old,lset_new);
@@ -2334,7 +2372,7 @@
Segment* seg_old = map_segments_lookup( segid_old );
Thread* thr_old = seg_old->thr;
tset = TC_(doubletonWS)( univ_tsets, (Word)thr_old, (Word)thr_acc );
- lset = thr_acc->lockset;
+ lset = thr_acc->locksetW; /* write ==> use only w-held locks */
wnew = mk_SHMEM_ShM( tset, lset );
*swordP = wnew;
sm->anyShared = True;
@@ -2356,8 +2394,12 @@
WordSetID lset_old = un_SHMEM_ShR_lset(wold);
WordSetID tset_new = TC_(addToWS)( univ_tsets,
tset_old, (Word)thr_acc );
- WordSetID lset_new = TC_(intersectWS)( univ_lsets,
- lset_old, thr_acc->lockset );
+ WordSetID lset_new = TC_(intersectWS)(
+ univ_lsets,
+ lset_old,
+ thr_acc->locksetW
+ /* write ==> use only w-held locks */
+ );
UInt wnew = mk_SHMEM_ShM( tset_new, lset_new );
if (lset_old != lset_new)
record_last_lock_lossage(a,lset_old,lset_new);
@@ -2380,8 +2422,12 @@
WordSetID lset_old = un_SHMEM_ShM_lset(wold);
WordSetID tset_new = TC_(addToWS)( univ_tsets,
tset_old, (Word)thr_acc );
- WordSetID lset_new = TC_(intersectWS)( univ_lsets,
- lset_old, thr_acc->lockset );
+ WordSetID lset_new = TC_(intersectWS)(
+ univ_lsets,
+ lset_old,
+ thr_acc->locksetW
+ /* write ==> use only w-held locks */
+ );
UInt wnew = mk_SHMEM_ShM( tset_new, lset_new );
if (lset_old != lset_new)
record_last_lock_lossage(a,lset_old,lset_new);
@@ -2425,13 +2471,21 @@
tl_assert(!lk->heldW);
return;
}
+ /* for each thread that holds this lock do ... */
TC_(initIterBag)( lk->heldBy );
while (TC_(nextIterBag)( lk->heldBy, (Word*)&thr, NULL )) {
tl_assert(is_sane_Thread(thr));
tl_assert(TC_(elemWS)( univ_lsets,
- thr->lockset, (Word)lk ));
- thr->lockset
- = TC_(delFromWS)( univ_lsets, thr->lockset, (Word)lk );
+ thr->locksetA, (Word)lk ));
+ thr->locksetA
+ = TC_(delFromWS)( univ_lsets, thr->locksetA, (Word)lk );
+
+ if (lk->heldW) {
+ tl_assert(TC_(elemWS)( univ_lsets,
+ thr->locksetW, (Word)lk ));
+ thr->locksetW
+ = TC_(delFromWS)( univ_lsets, thr->locksetW, (Word)lk );
+ }
}
TC_(doneIterBag)( lk->heldBy );
}
@@ -2683,19 +2737,20 @@
/*----------------------------------------------------------------*/
-/*--- Event handlers (ev__* functions) ---*/
-/*--- These handle the main significant events, and all take ---*/
-/*--- a parameter indicating which thread is doing the event. ---*/
+/*--- Event handlers (evh__* functions) ---*/
+/*--- plus helpers (evhH__* functions) ---*/
/*----------------------------------------------------------------*/
+/*--------- Event handler helpers (evhH__* functions) ---------*/
+
/* Create a new segment for 'thr', making it depend (.prev) on its
existing segment, bind together the SegmentID and Segment, and
return both of them. Also update 'thr' so it references the new
Segment. */
static
-void start_new_segment_for_thread ( /*OUT*/SegmentID* new_segidP,
- /*OUT*/Segment** new_segP,
- Thread* thr )
+void evhH__start_new_segment_for_thread ( /*OUT*/SegmentID* new_segidP,
+ /*OUT*/Segment** new_segP,
+ Thread* thr )
{
Segment* cur_seg;
tl_assert(new_segP);
@@ -2711,12 +2766,26 @@
thr->csegid = *new_segidP;
}
+
+/* The lock at 'lock_ga' has acquired a writer. Make all necessary
+ updates, and also do all possible error checks. */
static
-void ev__post_thread_w_acquires_lock ( Thread* thr,
- LockKind lkk, Addr lock_ga )
+void evhH__post_thread_w_acquires_lock ( Thread* thr,
+ LockKind lkk, Addr lock_ga )
{
Lock* lk;
+ /* Basically what we need to do is call lockN_acquire_writer.
+ However, that will barf if any 'invalid' lock states would
+ result. Therefore check before calling. Side effect is that
+ 'is_sane_LockN(lk)' is both a pre- and post-condition of this
+ routine.
+
+ Because this routine is only called after successful lock
+ acquisition, we should not be asked to move the lock into any
+ invalid states. Requests to do so are bugs in libpthread, since
+ it should have rejected any such requests. */
+
/* be paranoid w.r.t hint bits, even if lock_ga is complete
nonsense */
shmem__set_anyLocks( lock_ga, True );
@@ -2729,17 +2798,6 @@
tl_assert( is_sane_LockN(lk) );
shmem__set_anyLocks( lock_ga, True );
- /* Basically what we need to do is call lockN_acquire_writer.
- However, that will barf if any 'invalid' lock states would
- result. Therefore check before calling. Side effect is that
- 'is_sane_LockN(lk)' is both a pre- and post-condition of this
- routine.
-
- Because this routine is only called after successful lock
- acquisition, we should not be asked to move the lock into any
- invalid states. Requests to do so are bugs in libpthread, since
- that should have rejected any such requests. */
-
if (lk->heldBy == NULL) {
/* the lock isn't held. Simple. */
tl_assert(!lk->heldW);
@@ -2752,7 +2810,7 @@
tl_assert(lk->heldBy);
if (!lk->heldW) {
record_error_Misc( thr, "Bug in libpthread: write lock "
- "acquired on lock which has read locks");
+ "granted on rwlock which is currently rd-held");
goto error;
}
@@ -2762,8 +2820,8 @@
if (thr != (Thread*)TC_(anyElementOfBag)(lk->heldBy)) {
record_error_Misc( thr, "Bug in libpthread: write lock "
- "acquired on lock which is w-locked by "
- "a different thread");
+ "granted on mutex/rwlock which is currently "
+ "wr-held by a different thread");
goto error;
}
@@ -2773,8 +2831,9 @@
once the lock has been acquired, this must also be a libpthread
bug. */
if (lk->kind != LK_mbRec) {
- record_error_Misc( thr, "Bug in libpthread: recursive w-lock "
- "was unexpectedly allowed");
+ record_error_Misc( thr, "Bug in libpthread: recursive write lock "
+ "granted on mutex/wrlock which does not "
+ "support recursion");
goto error;
}
@@ -2784,23 +2843,99 @@
noerror:
/* update the thread's held-locks set */
- thr->lockset = TC_(addToWS)( univ_lsets, thr->lockset, (Word)lk );
+ thr->locksetA = TC_(addToWS)( univ_lsets, thr->locksetA, (Word)lk );
+ thr->locksetW = TC_(addToWS)( univ_lsets, thr->locksetW, (Word)lk );
/* fall through */
error:
tl_assert(is_sane_LockN(lk));
}
-static void ev__pre_thread_releases_lock ( Thread* thr, Addr lock_ga )
+
+/* The lock at 'lock_ga' has acquired a reader. Make all necessary
+ updates, and also do all possible error checks. */
+static
+void evhH__post_thread_r_acquires_lock ( Thread* thr,
+ LockKind lkk, Addr lock_ga )
{
+ Lock* lk;
+
+ /* Basically what we need to do is call lockN_acquire_reader.
+ However, that will barf if any 'invalid' lock states would
+ result. Therefore check before calling. Side effect is that
+ 'is_sane_LockN(lk)' is both a pre- and post-condition of this
+ routine.
+
+ Because this routine is only called after successful lock
+ acquisition, we should not be asked to move the lock into any
+ invalid states. Requests to do so are bugs in libpthread, since
+ it should have rejected any such requests. */
+
+ /* be paranoid w.r.t hint bits, even if lock_ga is complete
+ nonsense */
+ shmem__set_anyLocks( lock_ga, True );
+
+ tl_assert(is_sane_Thread(thr));
+ /* Try to find the lock. If we can't, then create a new one with
+ kind 'lkk'. Only a reader-writer lock can be read-locked,
+ hence the first assertion. */
+ tl_assert(lkk == LK_rdwr);
+ lk = map_locks_lookup_or_create(
+ lkk, lock_ga, map_threads_reverse_lookup_SLOW(thr) );
+ tl_assert( is_sane_LockN(lk) );
+ shmem__set_anyLocks( lock_ga, True );
+
+ if (lk->heldBy == NULL) {
+ /* the lock isn't held. Simple. */
+ tl_assert(!lk->heldW);
+ lockN_acquire_reader( lk, thr );
+ goto noerror;
+ }
+
+ /* So the lock is already held. If held as a w-lock then
+ libpthread must be buggy. */
+ tl_assert(lk->heldBy);
+ if (lk->heldW) {
+ record_error_Misc( thr, "Bug in libpthread: read lock "
+ "granted on rwlock which is "
+ "currently wr-held");
+ goto error;
+ }
+
+ /* Easy enough. In short anybody can get a read-lock on a rwlock
+ provided it is either unlocked or already rd-held. */
+ lockN_acquire_reader( lk, thr );
+ goto noerror;
+
+ noerror:
+ /* update the thread's held-locks set */
+ thr->locksetA = TC_(addToWS)( univ_lsets, thr->locksetA, (Word)lk );
+ /* but don't update thr->locksetW, since lk is only rd-held */
+ /* fall through */
+
+ error:
+ tl_assert(is_sane_LockN(lk));
+}
+
+
+/* The lock at 'lock_ga' is just about to be unlocked. Make all
+ necessary updates, and also do all possible error checks. */
+static
+void evhH__pre_thread_releases_lock ( Thread* thr,
+ Addr lock_ga, Bool isRDWR )
+{
Lock* lock;
Word n;
/* This routine is called prior to a lock release, before
libpthread has had a chance to validate the call. Hence we need
to detect and reject any attempts to move the lock into an
- invalid state. Such attempts are bugs in the client. */
+ invalid state. Such attempts are bugs in the client.
+ isRDWR is True if we know from the wrapper context that lock_ga
+ should refer to a reader-writer lock, and False if it should
+ refer to a standard mutex. */
+
/* be paranoid w.r.t hint bits, even if lock_ga is complete
nonsense */
shmem__set_anyLocks( lock_ga, True );
@@ -2819,12 +2954,22 @@
tl_assert(lock->guestaddr == lock_ga);
tl_assert(is_sane_LockN(lock));
+ if (isRDWR && lock->kind != LK_rdwr) {
+ record_error_Misc( thr, "pthread_rwlock_unlock with a "
+ "pthread_mutex_t* argument " );
+ }
+ if ((!isRDWR) && lock->kind == LK_rdwr) {
+ record_error_Misc( thr, "pthread_mutex_unlock with a "
+ "pthread_rwlock_t* argument " );
+ }
+
if (!lock->heldBy) {
/* The lock is not held. This indicates a serious bug in the
client. */
tl_assert(!lock->heldW);
record_error_UnlockUnlocked( thr, lock );
- tl_assert(!TC_(elemWS)( univ_lsets, thr->lockset, (Word)lock ));
+ tl_assert(!TC_(elemWS)( univ_lsets, thr->locksetA, (Word)lock ));
+ tl_assert(!TC_(elemWS)( univ_lsets, thr->locksetW, (Word)lock ));
goto error;
}
@@ -2840,7 +2985,8 @@
Thread* realOwner = (Thread*)TC_(anyElementOfBag)( lock->heldBy );
tl_assert(is_sane_Thread(realOwner));
tl_assert(realOwner != thr);
- tl_assert(!TC_(elemWS)( univ_lsets, thr->lockset, (Word)lock ));
+ tl_assert(!TC_(elemWS)( univ_lsets, thr->locksetA, (Word)lock ));
+ tl_assert(!TC_(elemWS)( univ_lsets, thr->locksetW, (Word)lock ));
record_error_UnlockForeign( thr, realOwner, lock );
goto error;
}
@@ -2860,15 +3006,21 @@
or a rwlock which is currently r-held. */
tl_assert(lock->kind == LK_mbRec
|| (lock->kind == LK_rdwr && !lock->heldW));
- tl_assert(TC_(elemWS)( univ_lsets, thr->lockset, (Word)lock ));
+ tl_assert(TC_(elemWS)( univ_lsets, thr->locksetA, (Word)lock ));
+ if (lock->heldW)
+ tl_assert(TC_(elemWS)( univ_lsets, thr->locksetW, (Word)lock ));
+ else
+ tl_assert(!TC_(elemWS)( univ_lsets, thr->locksetW, (Word)lock ));
} else {
/* We no longer hold the lock. */
if (lock->heldBy) {
tl_assert(0 == TC_(elemBag)( lock->heldBy, (Word)thr ));
}
/* update this thread's lockset accordingly. */
- thr->lockset
- = TC_(delFromWS)( univ_lsets, thr->lockset, (Word)lock );
+ thr->locksetA
+ = TC_(delFromWS)( univ_lsets, thr->locksetA, (Word)lock );
+ thr->locksetW
+ = TC_(delFromWS)( univ_lsets, thr->locksetW, (Word)lock );
}
/* fall through */
@@ -2876,74 +3028,151 @@
tl_assert(is_sane_LockN(lock));
}
-static void ev__pre_thread_create ( ThreadId parent, ThreadId child )
+
+/*--------- Event handlers proper (evh__* functions) ---------*/
+
+/* FIXME: Horrible inefficient hack. Get rid of it somehow. */
+// FIXME: get rid of the "if .." hack. It exists because evh__new_mem
+// is called during initialisation (as notification of initial memory
+// layout) and VG_(get_running_tid)() returns VG_INVALID_THREADID at
+// that point.
+static inline Thread* get_current_Thread ( void ) {
+ ThreadId coretid;
+ Thread* thr;
+ coretid = VG_(get_running_tid)();
+ if (coretid == VG_INVALID_THREADID)
+ coretid = 1; /* KLUDGE */
+ thr = map_threads_lookup( coretid );
+ return thr;
+}
+
+static
+void evh__new_mem ( Addr a, SizeT len ) {
+ if (SHOW_EVENTS >= 2)
+ VG_(printf)("evh__new_mem(%p, %lu)\n", (void*)a, len );
+ shadow_mem_make_New( get_current_Thread(), a, len );
+ if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
+ all__sanity_check("evh__new_mem-post");
+}
+
+static
+void evh__new_mem_w_perms ( Addr a, SizeT len,
+ Bool rr, Bool ww, Bool xx ) {
+ if (SHOW_EVENTS >= 1)
+ VG_(printf)("evh__new_mem_w_perms(%p, %lu, %d,%d,%d)\n",
+ (void*)a, len, (Int)rr, (Int)ww, (Int)xx );
+ if (rr || ww || xx)
+ shadow_mem_make_New( get_current_Thread(), a, len );
+ if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
+ all__sanity_check("evh__new_mem_w_perms-post");
+}
+
+static
+void evh__set_perms ( Addr a, SizeT len,
+ Bool rr, Bool ww, Bool xx ) {
+ if (SHOW_EVENTS >= 1)
+ VG_(printf)("evh__set_perms(%p, %lu, %d,%d,%d)\n",
+ (void*)a, len, (Int)rr, (Int)ww, (Int)xx );
+ /* Hmm. What should we do here, that actually makes any sense?
+ Let's say: if neither readable nor writable, then declare it
+ NoAccess, else leave it alone. */
+ if (!(rr || ww))
+ shadow_mem_make_NoAccess( get_current_Thread(), a, len );
+ if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
+ all__sanity_check("evh__set_perms-post");
+}
+
+static
+void evh__die_mem ( Addr a, SizeT len ) {
+ if (SHOW_EVENTS >= 2)
+ VG_(printf)("evh__die_mem(%p, %lu)\n", (void*)a, len );
+ shadow_mem_make_NoAccess( get_current_Thread(), a, len );
+ if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
+ all__sanity_check("evh__die_mem-post");
+}
+
+static
+void evh__pre_thread_ll_create ( ThreadId parent, ThreadId child )
{
- Thread* thr_p;
- Thread* thr_c;
- SegmentID segid_c;
- Segment* seg_c;
- WordSetID empty;
+ if (SHOW_EVENTS >= 1)
+ VG_(printf)("evh__pre_thread_ll_create(p=%d, c=%d)\n",
+ (Int)parent, (Int)child );
- tl_assert(is_sane_ThreadId(parent));
- tl_assert(is_sane_ThreadId(child));
- tl_assert(parent != child);
+ if (parent != VG_INVALID_THREADID) {
+ Thread* thr_p;
+ Thread* thr_c;
+ SegmentID segid_c;
+ Segment* seg_c;
- thr_p = map_threads_maybe_lookup( parent );
- thr_c = map_threads_maybe_lookup( child );
+ tl_assert(is_sane_ThreadId(parent));
+ tl_assert(is_sane_ThreadId(child));
+ tl_assert(parent != child);
- tl_assert(thr_p != NULL);
- tl_assert(thr_c == NULL);
+ thr_p = map_threads_maybe_lookup( parent );
+ thr_c = map_threads_maybe_lookup( child );
- /* Create a new thread record for the child. */
- // FIXME: code duplication from init_data_structures
- segid_c = alloc_SegmentID();
- seg_c = mk_Segment( NULL/*thr*/, NULL/*prev*/, NULL/*other*/ );
- map_segments_add( segid_c, seg_c );
+ tl_assert(thr_p != NULL);
+ tl_assert(thr_c == NULL);
- /* a Thread for the new thread ... */
- empty = TC_(emptyWS)( univ_lsets );
- thr_c = mk_Thread( empty, segid_c );
- seg_c->thr = thr_c;
+ /* Create a new thread record for the child. */
+ // FIXME: code duplication from init_data_structures
+ segid_c = alloc_SegmentID();
+ seg_c = mk_Segment( NULL/*thr*/, NULL/*prev*/, NULL/*other*/ );
+ map_segments_add( segid_c, seg_c );
- /* and bind it in the thread-map table */
- map_threads[child] = thr_c;
+ /* a Thread for the new thread ... */
+ thr_c = mk_Thread( segid_c );
+ seg_c->thr = thr_c;
- /* Record where the parent is so we can later refer to this in
- error messages.
+ /* and bind it in the thread-map table */
+ map_threads[child] = thr_c;
- On amd64-linux, this entails a nasty glibc-2.5 specific hack.
- The stack snapshot is taken immediately after the parent has
- returned from its sys_clone call. Unfortunately there is no
- unwind info for the insn following "syscall" - reading the glibc
- sources confirms this. So we ask for a snapshot to be taken as
- if RIP was 3 bytes earlier, in a place where there is unwind
- info. Sigh.
- */
- { Word first_ip_delta = 0;
-# if defined(VGP_amd64_linux)
- first_ip_delta = -3;
-# endif
- thr_c->created_at = VG_(record_ExeContext)(parent, first_ip_delta);
- }
+ /* Record where the parent is so we can later refer to this in
+ error messages.
- /* Now, mess with segments. */
- if (clo_happens_before >= 1) {
- /* Make the child's new segment depend on the parent */
- seg_c->other = map_segments_lookup( thr_p->csegid );
- seg_c->other_hint = 'c';
- /* and start a new segment for the parent. */
- { SegmentID new_segid = 0; /* bogus */
- Segment* new_seg = NULL;
- start_new_segment_for_thread( &new_segid, &new_seg, thr_p );
- tl_assert(is_sane_SegmentID(new_segid));
- tl_assert(is_sane_Segment(new_seg));
+ On amd64-linux, this entails a nasty glibc-2.5 specific hack.
+ The stack snapshot is taken immediately after the parent has
+ returned from its sys_clone call. Unfortunately there is no
+ unwind info for the insn following "syscall" - reading the
+ glibc sources confirms this. So we ask for a snapshot to be
+ taken as if RIP was 3 bytes earlier, in a place where there
+ is unwind info. Sigh.
+ */
+ { Word first_ip_delta = 0;
+# if defined(VGP_amd64_linux)
+ first_ip_delta = -3;
+# endif
+ thr_c->created_at = VG_(record_ExeContext)(parent, first_ip_delta);
}
+
+ /* Now, mess with segments. */
+ if (clo_happens_before >= 1) {
+ /* Make the child's new segment depend on the parent */
+ seg_c->other = map_segments_lookup( thr_p->csegid );
+ seg_c->other_hint = 'c';
+ /* and start a new segment for the parent. */
+ { SegmentID new_segid = 0; /* bogus */
+ Segment* new_seg = NULL;
+ evhH__start_new_segment_for_thread( &new_segid, &new_seg,
+ thr_p );
+ tl_assert(is_sane_SegmentID(new_segid));
+ tl_assert(is_sane_Segment(new_seg));
+ }
+ }
}
+
+ if (sanity_flags & SCE_THREADS)
+ all__sanity_check("evh__pre_thread_ll_create-post");
}
-static void ev__post_thread_async_exit ( ThreadId quit_tid )
+static
+void evh__pre_thread_ll_exit ( ThreadId quit_tid )
{
Thread* thr_q;
+ if (SHOW_EVENTS >= 1)
+ VG_(printf)("evh__pre_thread_ll_exit(thr=%d)\n",
+ (Int)quit_tid );
+
/* quit_tid has disappeared without joining to any other thread.
Therefore there is no synchronisation event associated with its
exit and so we have to pretty much treat it as if it was still
@@ -2956,13 +3185,17 @@
tl_assert(is_sane_ThreadId(quit_tid));
thr_q = map_threads_maybe_lookup( quit_tid );
tl_assert(thr_q != NULL);
- // FIXME: if exiting thread holds any locks, complain
+ // FIXME: error-if: exiting thread holds any locks
/* About the only thing we do need to do is clear the map_threads
entry, in order that the Valgrind core can re-use it. */
map_threads_delete( quit_tid );
+
+ if (sanity_flags & SCE_THREADS)
+ all__sanity_check("evh__pre_thread_ll_exit-post");
}
-static void ev__post_thread_join ( ThreadId stay_tid, Thread* quit_thr )
+static
+void evh__TC_PTHREAD_JOIN_POST ( ThreadId stay_tid, Thread* quit_thr )
{
Int i, stats_SMs, stats_SMs_scanned, stats_reExcls;
Addr ga;
@@ -2970,6 +3203,10 @@
Thread* thr_s;
Thread* thr_q;
+ if (SHOW_EVENTS >= 1)
+ VG_(printf)("evh__post_thread_join(stayer=%d, quitter=%p)\n",
+ (Int)stay_tid, quit_thr );
+
tl_assert(is_sane_ThreadId(stay_tid));
thr_s = map_threads_maybe_lookup( stay_tid );
@@ -2982,7 +3219,7 @@
/* Start a new segment for the stayer */
SegmentID new_segid = 0; /* bogus */
Segment* new_seg = NULL;
- start_new_segment_for_thread( &new_segid, &new_seg, thr_s );
+ evhH__start_new_segment_for_thread( &new_segid, &new_seg, thr_s );
tl_assert(is_sane_SegmentID(new_segid));
tl_assert(is_sane_Segment(new_seg));
/* and make it depend on the quitter's last segment */
@@ -2991,7 +3228,8 @@
new_seg->other_hint = 'j';
}
- // FIXME: if exiting thread holds any locks, complain
+ // FIXME: error-if: exiting thread holds any locks
+ // or should evh__pre_thread_ll_exit do that?
/* Delete thread from ShM/ShR thread sets and restore Excl states
where appropriate */
@@ -3073,184 +3311,64 @@
TC_(doneIterFM)( map_shmem );
if (SHOW_EXPENSIVE_STUFF)
- VG_(printf)("ev__post_thread_join: %d SMs, "
+ VG_(printf)("evh__post_thread_join: %d SMs, "
"%d scanned, %d re-Excls\n",
stats_SMs, stats_SMs_scanned, stats_reExcls);
/* This holds because, at least when using NPTL as the thread
library, we should be notified the low level thread exit before
we hear of any join event on it. The low level exit
- notification feeds through into ev__post_thread_async_exit,
+ notification feeds through into evh__pre_thread_ll_exit,
which should clear the map_threads entry for it. Hence we
expect there to be no map_threads entry at this point. */
tl_assert( map_threads_maybe_reverse_lookup_SLOW(thr_q)
== VG_INVALID_THREADID);
-}
-
-/*----------------------------------------------------------------*/
-/*--- Event handlers -- impedance matchers (evim__* functions) ---*/
-/*----------------------------------------------------------------*/
-
-/* FIXME: Horrible inefficient hack. Get rid of it somehow. */
-// FIXME: get rid of the "if .." hack. It exists because evim__new_mem
-// is called during initialisation (as notification of initial memory
-// layout) and VG_(get_running_tid)() returns VG_INVALID_THREADID at
-// that point.
-static inline Thread* get_current_Thread ( void ) {
- ThreadId coretid;
- Thread* thr;
- coretid = VG_(get_running_tid)();
- if (coretid == VG_INVALID_THREADID)
- coretid = 1; /* KLUDGE */
- thr = map_threads_lookup( coretid );
- return thr;
-}
-
-static
-void evim__new_mem ( Addr a, SizeT len ) {
- if (SHOW_EVENTS >= 2)
- VG_(printf)("evim__new_mem(%p, %lu)\n", (void*)a, len );
- shadow_mem_make_New( get_current_Thread(), a, len );
- if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__new_mem-post");
-}
-
-static
-void evim__new_mem_w_perms ( Addr a, SizeT len,
- Bool rr, Bool ww, Bool xx ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__new_mem_w_perms(%p, %lu, %d,%d,%d)\n",
- (void*)a, len, (Int)rr, (Int)ww, (Int)xx );
- if (rr || ww || xx)
- shadow_mem_make_New( get_current_Thread(), a, len );
- if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__new_mem_w_perms-post");
-}
-
-static
-void evim__set_perms ( Addr a, SizeT len,
- Bool rr, Bool ww, Bool xx ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__set_perms(%p, %lu, %d,%d,%d)\n",
- (void*)a, len, (Int)rr, (Int)ww, (Int)xx );
- /* Hmm. What should we do here, that actually makes any sense?
- Let's say: if neither readable nor writable, then declare it
- NoAccess, else leave it alone. */
- if (!(rr || ww))
- shadow_mem_make_NoAccess( get_current_Thread(), a, len );
- if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__set_perms-post");
-}
-
-static
-void evim__die_mem ( Addr a, SizeT len ) {
- if (SHOW_EVENTS >= 2)
- VG_(printf)("evim__die_mem(%p, %lu)\n", (void*)a, len );
- shadow_mem_make_NoAccess( get_current_Thread(), a, len );
- if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__die_mem-post");
-}
-
-static
-void evim__pre_thread_ll_create ( ThreadId parent, ThreadId child ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__pre_thread_ll_create(p=%d, c=%d)\n",
- (Int)parent, (Int)child );
- if (parent != VG_INVALID_THREADID)
- ev__pre_thread_create ( parent, child );
if (sanity_flags & SCE_THREADS)
- all__sanity_check("evim__pre_thread_create-post");
+ all__sanity_check("evh__post_thread_join-post");
}
static
-void evim__pre_thread_ll_exit ( ThreadId tid ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__pre_thread_ll_exit(thr=%d)\n",
- (Int)tid );
- ev__post_thread_async_exit( tid );
- if (sanity_flags & SCE_THREADS)
- all__sanity_check("evim__pre_thread_ll_exit-post");
-}
-
-static
-void evim__post_thread_join ( ThreadId stayer, Thread* quitter )
-{
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__post_thread_join(stayer=%d, quitter=%p)\n",
- (Int)stayer, quitter );
- ev__post_thread_join( stayer, quitter );
- if (sanity_flags & SCE_THREADS)
- all__sanity_check("evim__post_thread_join-post");
-}
-
-static
-void evim__post_mutex_lock ( ThreadId tid, void* mutex ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__post_mutex_lock(ctid=%d, %p)\n",
- (Int)tid, (void*)mutex );
- if (sanity_flags & SCE_LOCKS)
- all__sanity_check("evim__post_mutex_lock-pre");
- ev__post_thread_w_acquires_lock(
- map_threads_lookup(tid),
- LK_mbRec, /* if not known, create new lock with this LockKind */
- (Addr)mutex
- );
- if (sanity_flags & SCE_LOCKS)
- all__sanity_check("evim__post_mutex_lock-post");
-}
-
-static
-void evim__post_mutex_unlock ( ThreadId tid, void* mutex ) {
- if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__post_mutex_unlock(ctid=%d, %p)\n",
- (Int)tid, (void*)mutex );
- ev__pre_thread_releases_lock( map_threads_lookup(tid), (Addr)mutex );
- if (sanity_flags & SCE_LOCKS)
- all__sanity_check("evim__post_mutex_unlock-post");
-}
-
-static
-void evim__pre_mem_read ( CorePart part, ThreadId tid, Char* s,
- Addr a, SizeT size) {
+void evh__pre_mem_read ( CorePart part, ThreadId tid, Char* s,
+ Addr a, SizeT size) {
if (SHOW_EVENTS >= 2
|| (SHOW_EVENTS >= 1 && size != 1))
- VG_(printf)("evim__pre_mem_read(ctid=%d, \"%s\", %p, %lu)\n",
+ VG_(printf)("evh__pre_mem_read(ctid=%d, \"%s\", %p, %lu)\n",
(Int)tid, s, (void*)a, size );
shadow_mem_read_range( map_threads_lookup(tid), a, size);
if (size >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__pre_mem_read-post");
+ all__sanity_check("evh__pre_mem_read-post");
}
static
-void evim__pre_mem_read_asciiz ( CorePart part, ThreadId tid,
- Char* s, Addr a ) {
+void evh__pre_mem_read_asciiz ( CorePart part, ThreadId tid,
+ Char* s, Addr a ) {
Int len;
if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__pre_mem_asciiz(ctid=%d, \"%s\", %p)\n",
+ VG_(printf)("evh__pre_mem_asciiz(ctid=%d, \"%s\", %p)\n",
(Int)tid, s, (void*)a );
// FIXME: think of a less ugly hack
len = VG_(strlen)( (Char*) a );
- shadow_mem_read_range( map_threads_lookup(tid), a, len );
+ shadow_mem_read_range( map_threads_lookup(tid), a, len+1 );
if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__pre_mem_read_asciiz-post");
+ all__sanity_check("evh__pre_mem_read_asciiz-post");
}
static
-void evim__pre_mem_write ( CorePart part, ThreadId tid, Char* s,
- Addr a, SizeT size ) {
+void evh__pre_mem_write ( CorePart part, ThreadId tid, Char* s,
+ Addr a, SizeT size ) {
if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__pre_mem_write(ctid=%d, \"%s\", %p, %lu)\n",
+ VG_(printf)("evh__pre_mem_write(ctid=%d, \"%s\", %p, %lu)\n",
(Int)tid, s, (void*)a, size );
shadow_mem_write_range( map_threads_lookup(tid), a, size);
if (size >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__pre_mem_write-post");
+ all__sanity_check("evh__pre_mem_write-post");
}
static
-void evim__new_mem_heap ( Addr a, SizeT len, Bool is_inited ) {
+void evh__new_mem_heap ( Addr a, SizeT len, Bool is_inited ) {
if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__new_mem_heap(%p, %lu, inited=%d)\n",
+ VG_(printf)("evh__new_mem_heap(%p, %lu, inited=%d)\n",
(void*)a, len, (Int)is_inited );
// FIXME: this is kinda stupid
if (is_inited) {
@@ -3259,40 +3377,40 @@
shadow_mem_make_New(get_current_Thread(), a, len);
}
if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__pre_mem_read-post");
+ all__sanity_check("evh__new_mem_heap-post");
}
static
-void evim__die_mem_heap ( Addr a, SizeT len ) {
+void evh__die_mem_heap ( Addr a, SizeT len ) {
if (SHOW_EVENTS >= 1)
- VG_(printf)("evim__die_mem_heap(%p, %lu)\n", (void*)a, len );
+ VG_(printf)("evh__die_mem_heap(%p, %lu)\n", (void*)a, len );
shadow_mem_make_NoAccess( get_current_Thread(), a, len );
if (len >= SCE_BIGRANGE_T && (sanity_flags & SCE_BIGRANGE))
- all__sanity_check("evim__pre_mem_read-post");
+ all__sanity_check("evh__die_mem_heap-post");
}
// thread async exit?
static VG_REGPARM(1)
-void evim__mem_help_read_1(Addr a) {
+void evh__mem_help_read_1(Addr a) {
msm__handle_read_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_read_2(Addr a) {
+void evh__mem_help_read_2(Addr a) {
msm__handle_read_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_read_4(Addr a) {
+void evh__mem_help_read_4(Addr a) {
msm__handle_read_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_read_8(Addr a) {
+void evh__mem_help_read_8(Addr a) {
Thread* thr = get_current_Thread();
msm__handle_read_aligned_32( thr, ROUNDDN(a+0,4) );
msm__handle_read_aligned_32( thr, ROUNDDN(a+4,4) );
}
static VG_REGPARM(2)
-void evim__mem_help_read_N(Addr a, SizeT size) {
+void evh__mem_help_read_N(Addr a, SizeT size) {
Thread* thr = get_current_Thread();
a = ROUNDDN(a,4);
size = ROUNDDN(size, 4);
@@ -3303,25 +3421,25 @@
}
}
static VG_REGPARM(1)
-void evim__mem_help_write_1(Addr a) {
+void evh__mem_help_write_1(Addr a) {
msm__handle_write_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_write_2(Addr a) {
+void evh__mem_help_write_2(Addr a) {
msm__handle_write_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_write_4(Addr a) {
+void evh__mem_help_write_4(Addr a) {
msm__handle_write_aligned_32( get_current_Thread(), ROUNDDN(a,4) );
}
static VG_REGPARM(1)
-void evim__mem_help_write_8(Addr a) {
+void evh__mem_help_write_8(Addr a) {
Thread* thr = get_current_Thread();
msm__handle_write_aligned_32( thr, ROUNDDN(a+0,4) );
msm__handle_write_aligned_32( thr, ROUNDDN(a+4,4) );
}
static VG_R...
[truncated message content]

From: Tom H. <th...@cy...> - 2007-09-30 02:30:57

Nightly build on alvis (i686, Red Hat 7.3) started at 2007-09-30 03:15:01 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 256 tests, 27 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: Tom H. <th...@cy...> - 2007-09-30 02:23:20

Nightly build on dellow (x86_64, Fedora 7) started at 2007-09-30 03:10:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 293 tests, 4 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: Tom H. <th...@cy...> - 2007-09-30 02:17:52

Nightly build on lloyd (x86_64, Fedora Core 3) started at 2007-09-30 03:05:07 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 293 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: Tom H. <th...@cy...> - 2007-09-30 02:10:58

Nightly build on gill (x86_64, Fedora Core 2) started at 2007-09-30 03:00:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 295 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: <js...@ac...> - 2007-09-30 00:38:20

Nightly build on g5 (SuSE 10.1, ppc970) started at 2007-09-30 02:00:01 CEST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 228 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)