From: <sv...@va...> - 2007-10-16 23:18:04
Author: njn
Date: 2007-10-17 00:18:06 +0100 (Wed, 17 Oct 2007)
New Revision: 7009
Log:
Add a comment.
Modified:
trunk/coregrind/m_mallocfree.c
Modified: trunk/coregrind/m_mallocfree.c
===================================================================
--- trunk/coregrind/m_mallocfree.c 2007-10-16 21:07:43 UTC (rev 7008)
+++ trunk/coregrind/m_mallocfree.c 2007-10-16 23:18:06 UTC (rev 7009)
@@ -1104,6 +1104,24 @@
req_bszB = pszB_to_bszB(a, req_pszB);
// Scan through all the big-enough freelists for a block.
+ //
+ // Nb: this scanning might be expensive in some cases. Eg. if you
+ // allocate lots of small objects without freeing them, but no
+ // medium-sized objects, it will repeatedly scan through the whole
+ // list, and each time not find any free blocks until the last element.
+ //
+ // If this becomes a noticeable problem... the loop answers the question
+ // "where is the first nonempty list above me?" And most of the time,
+ // you ask the same question and get the same answer. So it would be
+ // good to somehow cache the results of previous searches.
+ // One possibility is an array (with N_MALLOC_LISTS elements) of
+ // shortcuts. shortcut[i] would give the index number of the nearest
+ // larger list above list i which is non-empty. Then this loop isn't
+ // necessary. However, we'd have to modify some section [ .. i-1] of the
+ // shortcut array every time a list [i] changes from empty to nonempty or
+ // back. This would require care to avoid pathological worst-case
+ // behaviour.
+ //
for (lno = pszB_to_listNo(req_pszB); lno < N_MALLOC_LISTS; lno++) {
b = a->freelist[lno];
if (NULL == b) continue; // If this list is empty, try the next one.
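The shortcut-array idea described in the comment above can be sketched as follows. This is a hypothetical illustration, not Valgrind code: the names (`list_nonempty`, `rebuild_shortcuts`, `N_LISTS`) are invented, and a real implementation would update only the affected prefix of the array when a list flips between empty and non-empty, rather than rebuilding the whole thing.

```c
#include <assert.h>

#define N_LISTS 8

/* Sketch of the shortcut-array caching idea: shortcut[i] caches the
   index of the nearest non-empty freelist at or above i, so the
   allocator can answer "where is the first non-empty list above me?"
   in O(1) instead of scanning.  All names here are invented. */

static int list_nonempty[N_LISTS]; /* 1 if freelist i has a free block */
static int shortcut[N_LISTS];      /* nearest non-empty list >= i, or N_LISTS */

/* Rebuild the whole shortcut array in O(N) by scanning from the top. */
static void rebuild_shortcuts(void)
{
   int next = N_LISTS; /* sentinel: "no non-empty list above" */
   for (int i = N_LISTS - 1; i >= 0; i--) {
      if (list_nonempty[i]) next = i;
      shortcut[i] = next;
   }
}

/* O(1) replacement for the scanning loop. */
static int first_nonempty_from(int lno)
{
   return shortcut[lno];
}
```

As the comment warns, the hard part is keeping the cache coherent cheaply: setting list `i` non-empty only affects `shortcut[0..i]`, but clearing it may require a scan upward, which is where pathological worst cases can creep in.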
From: <sv...@va...> - 2007-10-16 21:07:41
Author: sewardj
Date: 2007-10-16 22:07:43 +0100 (Tue, 16 Oct 2007)
New Revision: 7008
Log:
Various small changes aimed at improving usability:
* make VALGRIND_TC_CLEAN_MEMORY work, for the benefit of recycling
allocators
* handle pthread_mutex_trylock more correctly
* handle pthread_mutex_timedlock at all
* don't crash after reporting an error of destroying a freed lock
* a bunch more suppressions
Modified:
branches/THRCHECK/glibc-2.X-thrcheck.supp
branches/THRCHECK/thrcheck/tc_intercepts.c
branches/THRCHECK/thrcheck/tc_main.c
branches/THRCHECK/thrcheck/thrcheck.h
Modified: branches/THRCHECK/glibc-2.X-thrcheck.supp
===================================================================
--- branches/THRCHECK/glibc-2.X-thrcheck.supp 2007-10-16 20:59:56 UTC (rev 7007)
+++ branches/THRCHECK/glibc-2.X-thrcheck.supp 2007-10-16 21:07:43 UTC (rev 7008)
@@ -54,14 +54,22 @@
fun:pthread_join
fun:pthread_join
}
-#z{
-#z thrcheck-glibc2X-pthjoin-2
-#z Thrcheck:Race
-#z fun:__free_tcb
-#z fun:pthread_join
-#z fun:pthread_join
-#z}
-#z
+{
+ thrcheck-glibc2X-pthjoin-2
+ Thrcheck:Race
+ fun:__deallocate_stack
+ fun:pthread_join
+ fun:pthread_join
+}
+{
+ thrcheck-glibc2X-pthjoin-3
+ Thrcheck:Race
+ fun:free_stacks
+ fun:__deallocate_stack
+ fun:pthread_join
+ fun:pthread_join
+}
+
#z###--- IO_file ---###
#z{
#z thrcheck-glibc2X-IOfile-1
@@ -131,6 +139,15 @@
fun:pthread_mutex_lock
}
+###--- pthread_mutex_unlock ---###
+{
+ thrcheck-glibc2X-pthmxunlock-1
+ Thrcheck:Race
+ fun:__lll_mutex_unlock_wake
+ fun:pthread_mutex_unlock
+ fun:pthread_mutex_unlock
+}
+
###--- pthread_mutex_destroy ---###
{
thrcheck-glibc2X-pthmxdestroy-1
@@ -225,20 +242,19 @@
fun:pthread_mutex_trylock
}
-#z###--- pthread_cond_timedwait ---###
-#z{
-#z thrcheck-glibc2X-pthmxtimedwait-1
-#z Thrcheck:Race
-#z fun:pthread_cond_timedwait@@GLIBC_*
-#z fun:pthread_cond_timedwait*
-#z}
-#z{
-#z thrcheck-glibc2X-pthmxtimedwait-2
-#z Thrcheck:Race
-#z fun:__lll_mutex_lock_wait
-#z fun:pthread_cond_timedwait@@GLIBC_*
-#z fun:pthread_cond_timedwait*
-#z}
+###--- pthread_cond_timedwait ---###
+# ditto
+{
+ thrcheck-glibc2X-pthmxtimedwait-1
+ Thrcheck:Race
+ fun:pthread_cond_timedwait@@GLIBC_2.3.2
+}
+{
+ thrcheck-glibc2X-pthmxtimedwait-2
+ Thrcheck:Race
+ fun:__lll_mutex_unlock_wake
+ fun:pthread_cond_timedwait@@GLIBC_*
+}
###--- libpthread internal stuff ---###
{
@@ -251,8 +267,7 @@
thrcheck-glibc2X-libpthread-2
Thrcheck:Race
fun:__lll_mutex_unlock_wake
- fun:_L_mutex_unlock_*
- fun:__pthread_mutex_unlock_usercnt
+ fun:_L_*unlock_*
}
{
thrcheck-glibc2X-libpthread-3
@@ -261,14 +276,14 @@
fun:_L_mutex_lock_*
fun:start_thread
}
-#z{
-#z thrcheck-glibc2X-libpthread-4
-#z Thrcheck:Race
-#z fun:__lll_mutex_lock_wait
-#z fun:_L_mutex_lock_*
-#z fun:pthread_mutex_lock
-#z}
{
+ thrcheck-glibc2X-libpthread-4
+ Thrcheck:Race
+ fun:__lll_mutex_lock_wait
+ fun:_L_mutex_lock_*
+ fun:pthread_mutex_lock
+}
+{
thrcheck-glibc2X-libpthread-5
Thrcheck:Race
fun:mythread_wrapper
@@ -281,29 +296,29 @@
fun:start_thread
fun:*clone*
}
-#z{
-#z thrcheck-glibc2X-libpthread-7
-#z Thrcheck:Race
-#z fun:__deallocate_stack
-#z fun:__free_tcb
-#z fun:start_thread
-#z}
-#z{
-#z thrcheck-glibc2X-libpthread-8
-#z Thrcheck:Race
-#z fun:__deallocate_stack
-#z fun:pthread_join
-#z fun:pthread_join
-#z}
-#z
-#z###--- fork ---###
-#z{
-#z thrcheck-glibc2X-fork-1
-#z Thrcheck:Race
-#z fun:__reclaim_stacks
-#z fun:fork
-#z}
+{
+ thrcheck-glibc2X-libpthread-7
+ Thrcheck:Race
+ fun:__deallocate_stack
+ fun:__free_tcb
+ fun:start_thread
+}
+{
+ thrcheck-glibc2X-libpthread-8
+ Thrcheck:Race
+ fun:__deallocate_stack
+ fun:__free_tcb
+ fun:pthread_join
+}
+###--- fork ---###
+{
+ thrcheck-glibc2X-fork-1
+ Thrcheck:Race
+ fun:__reclaim_stacks
+ fun:fork
+}
+
###--- glibc-2.5 specific ---###
{
thrcheck-glibc25-ld25-64bit-1
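For readers unfamiliar with the file being edited, the entries toggled above all share one shape: a free-form name (shown when the suppression fires), the tool and error kind to suppress, and a stack of `fun:` frame patterns, innermost first, with `*` wildcards allowed. A hypothetical entry in that format (the frame names here are invented, not from this file):

```
# Free-form name, then tool:errkind, then the matching call stack
# (innermost frame first; '*' wildcards allowed in fun: patterns).
{
   my-app-worker-race
   Thrcheck:Race
   fun:worker_loop
   fun:start_thread
}
```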
Modified: branches/THRCHECK/thrcheck/tc_intercepts.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_intercepts.c 2007-10-16 20:59:56 UTC (rev 7007)
+++ branches/THRCHECK/thrcheck/tc_intercepts.c 2007-10-16 21:07:43 UTC (rev 7008)
@@ -281,13 +281,14 @@
/*----------------------------------------------------------------*/
/* Handled: pthread_mutex_init pthread_mutex_destroy
- pthread_mutex_lock pthread_mutex_trylock
+ pthread_mutex_lock
+ pthread_mutex_trylock pthread_mutex_timedlock
pthread_mutex_unlock
- Unhandled: pthread_mutex_timedlock -- FIXME
-
- FIXME: pthread_spin_init pthread_spin_destroy
- pthread_spin_lock pthread_spin_unlock pthread_spin_trylock
+ Unhandled: pthread_spin_init pthread_spin_destroy
+ pthread_spin_lock
+ pthread_spin_trylock
+ pthread_spin_unlock
*/
// pthread_mutex_init
@@ -365,8 +366,8 @@
fprintf(stderr, "<< pthread_mxlock %p", mutex); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
- pthread_mutex_t*,mutex);
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
+ pthread_mutex_t*,mutex, long,0/*!isTryLock*/);
CALL_FN_W_W(ret, fn, mutex);
@@ -389,8 +390,12 @@
}
-// pthread_mutex_trylock. AFAICS the handling needed here is identical
-// to that for pthread_mutex_lock.
+// pthread_mutex_trylock. The handling needed here is very similar
+// to that for pthread_mutex_lock, except that we need to tell
+// the pre-lock creq that this is a trylock-style operation, and
+// therefore not to complain if the lock is nonrecursive and
+// already locked by this thread -- because then it'll just fail
+// immediately with EBUSY.
PTH_FUNC(int, pthreadZumutexZutrylock, // pthread_mutex_trylock
pthread_mutex_t *mutex)
{
@@ -401,8 +406,8 @@
fprintf(stderr, "<< pthread_mxtrylock %p", mutex); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
- pthread_mutex_t*,mutex);
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
+ pthread_mutex_t*,mutex, long,1/*isTryLock*/);
CALL_FN_W_W(ret, fn, mutex);
@@ -426,6 +431,44 @@
}
+// pthread_mutex_timedlock. Identical logic to pthread_mutex_trylock.
+PTH_FUNC(int, pthreadZumutexZutimedlock, // pthread_mutex_timedlock
+ pthread_mutex_t *mutex,
+ void* timeout)
+{
+ int ret;
+ OrigFn fn;
+ VALGRIND_GET_ORIG_FN(fn);
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, "<< pthread_mxtimedlock %p %p", mutex, timeout);
+ fflush(stderr);
+ }
+
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
+ pthread_mutex_t*,mutex, long,1/*isTryLock-ish*/);
+
+ CALL_FN_W_WW(ret, fn, mutex,timeout);
+
+ /* There's a hole here: libpthread now knows the lock is locked,
+ but the tool doesn't, so some other thread could run and detect
+ that the lock has been acquired by someone (this thread). Does
+ this matter? Not sure, but I don't think so. */
+
+ if (ret == 0 /*success*/) {
+ DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST,
+ pthread_mutex_t*,mutex);
+ } else {
+ if (ret != ETIMEDOUT)
+ DO_PthAPIerror( "pthread_mutex_timedlock", ret );
+ }
+
+ if (TRACE_PTH_FNS) {
+ fprintf(stderr, " :: mxtimedlock -> %d >>\n", ret);
+ }
+ return ret;
+}
+
+
// pthread_mutex_unlock
PTH_FUNC(int, pthreadZumutexZuunlock, // pthread_mutex_unlock
pthread_mutex_t *mutex)
@@ -547,8 +590,9 @@
DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_COND_WAIT_POST,
pthread_cond_t*,cond, pthread_mutex_t*,mutex);
- } else {
- DO_PthAPIerror( "pthread_cond_timedwait", ret );
+ } else {
+ if (ret != ETIMEDOUT)
+ DO_PthAPIerror( "pthread_cond_timedwait", ret );
}
if (TRACE_PTH_FNS) {
@@ -819,8 +863,8 @@
fprintf(stderr, "<< QMutex::lock %p", self); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
- void*, self);
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
+ void*,self, long,0/*!isTryLock*/);
CALL_FN_v_W(fn, self);
@@ -869,8 +913,8 @@
fprintf(stderr, "<< QMutex::tryLock %p", self); fflush(stderr);
}
- DO_CREQ_v_W(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
- void*, self);
+ DO_CREQ_v_WW(_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE,
+ void*,self, long,1/*isTryLock*/);
CALL_FN_W_W(ret, fn, self);
Modified: branches/THRCHECK/thrcheck/tc_main.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_main.c 2007-10-16 20:59:56 UTC (rev 7007)
+++ branches/THRCHECK/thrcheck/tc_main.c 2007-10-16 21:07:43 UTC (rev 7008)
@@ -2241,7 +2241,6 @@
static void record_error_UnlockUnlocked ( Thread*, Lock* );
static void record_error_UnlockForeign ( Thread*, Thread*, Lock* );
static void record_error_UnlockBogus ( Thread*, Addr );
-static void record_error_DestroyLocked ( Thread*, Lock* );
static void record_error_PthAPIerror ( Thread*, HChar*, Word, HChar* );
static void record_error_LockOrder ( Thread*, Lock*, Lock* );
@@ -4805,7 +4804,7 @@
tl_assert( lk->guestaddr == (Addr)mutex );
if (lk->heldBy) {
/* Basically act like we unlocked the lock */
- record_error_DestroyLocked( thr, lk );
+ record_error_Misc( thr, "pthread_mutex_destroy of a locked mutex" );
/* remove lock from locksets of all owning threads */
remove_Lock_from_locksets_of_all_owning_Threads( lk );
TC_(deleteBag)( lk->heldBy );
@@ -4820,7 +4819,8 @@
all__sanity_check("evh__tc_PTHREAD_MUTEX_DESTROY_PRE");
}
-static void evh__TC_PTHREAD_MUTEX_LOCK_PRE ( ThreadId tid, void* mutex )
+static void evh__TC_PTHREAD_MUTEX_LOCK_PRE ( ThreadId tid,
+ void* mutex, Word isTryLock )
{
/* Just check the mutex is sane; nothing else to do. */
// 'mutex' may be invalid - not checked by wrapper
@@ -4830,6 +4830,7 @@
VG_(printf)("evh__tc_PTHREAD_MUTEX_LOCK_PRE(ctid=%d, mutex=%p)\n",
(Int)tid, (void*)mutex );
+ tl_assert(isTryLock == 0 || isTryLock == 1);
thr = map_threads_maybe_lookup( tid );
tl_assert(thr); /* cannot fail - Thread* must already exist */
@@ -4841,12 +4842,15 @@
}
if ( lk
+ && isTryLock == 0
&& (lk->kind == LK_nonRec || lk->kind == LK_rdwr)
&& lk->heldBy
&& lk->heldW
&& TC_(elemBag)( lk->heldBy, (Word)thr ) > 0 ) {
- /* uh, it's a non-recursive lock and we already w-hold it. Duh.
- Deadlock coming up; but at least produce an error message. */
+ /* uh, it's a non-recursive lock and we already w-hold it, and
+ this is a real lock operation (not a speculative "tryLock"
+ kind of thing). Duh. Deadlock coming up; but at least
+ produce an error message. */
record_error_Misc( thr, "Attempt to re-lock a "
"non-recursive lock I already hold" );
}
@@ -5067,7 +5071,7 @@
tl_assert( lk->guestaddr == (Addr)rwl );
if (lk->heldBy) {
/* Basically act like we unlocked the lock */
- record_error_DestroyLocked( thr, lk );
+ record_error_Misc( thr, "pthread_rwlock_destroy of a locked mutex" );
/* remove lock from locksets of all owning threads */
remove_Lock_from_locksets_of_all_owning_Threads( lk );
TC_(deleteBag)( lk->heldBy );
@@ -5961,14 +5965,15 @@
/* --- --- User-visible client requests --- --- */
case VG_USERREQ__TC_CLEAN_MEMORY:
- if (1) VG_(printf)("VG_USERREQ__TC_CLEAN_MEMORY(%p,%d)\n",
+ if (0) VG_(printf)("VG_USERREQ__TC_CLEAN_MEMORY(%p,%d)\n",
args[1], args[2]);
/* Call die_mem to (expensively) tidy up properly, if there
are any held locks etc in the area */
- // FIXME: next line causes firefox to stall - no idea why
- // evh__die_mem(args[1], args[2]);
- /* and then set it to New */
- evh__new_mem(args[1], args[2]);
+ if (args[2] > 0) { /* length */
+ evh__die_mem(args[1], args[2]);
+ /* and then set it to New */
+ evh__new_mem(args[1], args[2]);
+ }
break;
/* --- --- Client requests for Thrcheck's use only --- --- */
@@ -6059,8 +6064,8 @@
evh__TC_PTHREAD_MUTEX_UNLOCK_POST( tid, (void*)args[1] );
break;
- case _VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE: // pth_mx_t*
- evh__TC_PTHREAD_MUTEX_LOCK_PRE( tid, (void*)args[1] );
+ case _VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE: // pth_mx_t*, Word
+ evh__TC_PTHREAD_MUTEX_LOCK_PRE( tid, (void*)args[1], args[2] );
break;
case _VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST: // pth_mx_t*
@@ -6214,7 +6219,6 @@
XE_UnlockUnlocked, // unlocking a not-locked lock
XE_UnlockForeign, // unlocking a lock held by some other thread
XE_UnlockBogus, // unlocking an address not known to be a lock
- XE_DestroyLocked, // pth_mx_destroy on locked lock
XE_PthAPIerror, // error from the POSIX pthreads API
XE_LockOrder, // lock order error
XE_Misc // misc other error (w/ string to describe it)
@@ -6253,10 +6257,6 @@
Addr lock_ga; /* purported address of the lock */
} UnlockBogus;
struct {
- Thread* thr; /* doing the unlocking */
- Lock* lock; /* lock (that is locked and now destroyed) */
- } DestroyLocked;
- struct {
Thread* thr;
HChar* fnname; /* persistent, in tool-arena */
Word err; /* pth error code */
@@ -6289,7 +6289,6 @@
XS_UnlockUnlocked,
XS_UnlockForeign,
XS_UnlockBogus,
- XS_DestroyLocked,
XS_PthAPIerror,
XS_LockOrder,
XS_Misc
@@ -6382,19 +6381,6 @@
XE_UnlockBogus, 0, NULL, &xe );
}
-static void record_error_DestroyLocked ( Thread* thr, Lock* lk ) {
- XError xe;
- tl_assert( is_sane_Thread(thr) );
- tl_assert( is_sane_LockN(lk) );
- init_XError(&xe);
- xe.tag = XE_DestroyLocked;
- xe.XE.DestroyLocked.thr = thr;
- xe.XE.DestroyLocked.lock = mk_LockP_from_LockN(lk);
- // FIXME: tid vs thr
- VG_(maybe_record_error)( map_threads_reverse_lookup_SLOW(thr),
- XE_DestroyLocked, 0, NULL, &xe );
-}
-
static
void record_error_LockOrder ( Thread* thr, Lock* before, Lock* after ) {
XError xe;
@@ -6470,18 +6456,15 @@
case XE_UnlockBogus:
return xe1->XE.UnlockBogus.thr == xe2->XE.UnlockBogus.thr
&& xe1->XE.UnlockBogus.lock_ga == xe2->XE.UnlockBogus.lock_ga;
- case XE_DestroyLocked:
- return xe1->XE.DestroyLocked.thr == xe2->XE.DestroyLocked.thr
- && xe1->XE.DestroyLocked.lock == xe2->XE.DestroyLocked.lock;
case XE_PthAPIerror:
return xe1->XE.PthAPIerror.thr == xe2->XE.PthAPIerror.thr
&& 0==VG_(strcmp)(xe1->XE.PthAPIerror.fnname,
xe2->XE.PthAPIerror.fnname)
&& xe1->XE.PthAPIerror.err == xe2->XE.PthAPIerror.err;
case XE_LockOrder:
- return xe1->XE.LockOrder.thr == xe2->XE.LockOrder.thr
- && xe1->XE.LockOrder.before == xe2->XE.LockOrder.before
- && xe1->XE.LockOrder.after == xe2->XE.LockOrder.after;
+ return xe1->XE.LockOrder.thr == xe2->XE.LockOrder.thr;
+ /* && xe1->XE.LockOrder.before == xe2->XE.LockOrder.before
+ && xe1->XE.LockOrder.after == xe2->XE.LockOrder.after; */
case XE_Misc:
return xe1->XE.Misc.thr == xe2->XE.Misc.thr
&& 0==VG_(strcmp)(xe1->XE.Misc.errstr, xe2->XE.Misc.errstr);
@@ -6847,7 +6830,6 @@
case XE_UnlockUnlocked: return "UnlockUnlocked";
case XE_UnlockForeign: return "UnlockForeign";
case XE_UnlockBogus: return "UnlockBogus";
- case XE_DestroyLocked: return "DestroyLocked";
case XE_PthAPIerror: return "PthAPIerror";
case XE_LockOrder: return "LockOrder";
case XE_Misc: return "Misc";
@@ -6867,7 +6849,6 @@
TRY("UnlockUnlocked", XS_UnlockUnlocked);
TRY("UnlockForeign", XS_UnlockForeign);
TRY("UnlockBogus", XS_UnlockBogus);
- TRY("DestroyLocked", XS_DestroyLocked);
TRY("PthAPIerror", XS_PthAPIerror);
TRY("LockOrder", XS_LockOrder);
TRY("Misc", XS_Misc);
@@ -6891,7 +6872,6 @@
case XS_UnlockUnlocked: return VG_(get_error_kind)(err) == XE_UnlockUnlocked;
case XS_UnlockForeign: return VG_(get_error_kind)(err) == XE_UnlockForeign;
case XS_UnlockBogus: return VG_(get_error_kind)(err) == XE_UnlockBogus;
- case XS_DestroyLocked: return VG_(get_error_kind)(err) == XE_DestroyLocked;
case XS_PthAPIerror: return VG_(get_error_kind)(err) == XE_PthAPIerror;
case XS_LockOrder: return VG_(get_error_kind)(err) == XE_LockOrder;
case XS_Misc: return VG_(get_error_kind)(err) == XE_Misc;
Modified: branches/THRCHECK/thrcheck/thrcheck.h
===================================================================
--- branches/THRCHECK/thrcheck/thrcheck.h 2007-10-16 20:59:56 UTC (rev 7007)
+++ branches/THRCHECK/thrcheck/thrcheck.h 2007-10-16 21:07:43 UTC (rev 7008)
@@ -76,7 +76,7 @@
_VG_USERREQ__TC_PTHREAD_MUTEX_DESTROY_PRE, // pth_mx_t*
_VG_USERREQ__TC_PTHREAD_MUTEX_UNLOCK_PRE, // pth_mx_t*
_VG_USERREQ__TC_PTHREAD_MUTEX_UNLOCK_POST, // pth_mx_t*
- _VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE, // pth_mx_t*
+ _VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_PRE, // pth_mx_t*, long isTryLock
_VG_USERREQ__TC_PTHREAD_MUTEX_LOCK_POST, // pth_mx_t*
_VG_USERREQ__TC_PTHREAD_COND_SIGNAL_PRE, // pth_cond_t*
_VG_USERREQ__TC_PTHREAD_COND_BROADCAST_PRE, // pth_cond_t*
@@ -94,12 +94,12 @@
about the specified memory range, and resets it to New. This is
particularly useful for memory allocators that wish to recycle
memory. */
-#define VALGRIND_TC_CLEAN_MEMORY(_qzz_start, _qzz_len) \
- do { \
- unsigned long _qzz_res; \
- VALGRIND_MAGIC_SEQUENCE(_qzz_res, 0, VG_USERREQ__TC_CLEAN_MEMORY, \
- _qzz_start, _qzz_len, 0, 0); \
- (void)0; \
+#define VALGRIND_TC_CLEAN_MEMORY(_qzz_start, _qzz_len) \
+ do { \
+ unsigned long _qzz_res; \
+ VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, VG_USERREQ__TC_CLEAN_MEMORY, \
+ _qzz_start, _qzz_len, 0, 0, 0); \
+ (void)0; \
} while(0)
#endif /* __THRCHECK_H */
From: <sv...@va...> - 2007-10-16 20:59:53
Author: sewardj
Date: 2007-10-16 21:59:56 +0100 (Tue, 16 Oct 2007)
New Revision: 7007
Log:
Be even more paranoid.
Modified:
branches/THRCHECK/coregrind/m_scheduler/scheduler.c
Modified: branches/THRCHECK/coregrind/m_scheduler/scheduler.c
===================================================================
--- branches/THRCHECK/coregrind/m_scheduler/scheduler.c 2007-10-16 20:59:33 UTC (rev 7006)
+++ branches/THRCHECK/coregrind/m_scheduler/scheduler.c 2007-10-16 20:59:56 UTC (rev 7007)
@@ -640,6 +640,7 @@
VG_(clo_profile_flags) > 0 ? 1 : 0 )
);
+ vg_assert(VG_(in_generated_code) == True);
VG_(in_generated_code) = False;
if (jumped) {
From: <sv...@va...> - 2007-10-16 20:59:33
Author: sewardj
Date: 2007-10-16 21:59:33 +0100 (Tue, 16 Oct 2007)
New Revision: 7006
Log:
Comment-only change
Modified:
branches/THRCHECK/include/pub_tool_tooliface.h
Modified: branches/THRCHECK/include/pub_tool_tooliface.h
===================================================================
--- branches/THRCHECK/include/pub_tool_tooliface.h 2007-10-16 16:04:02 UTC (rev 7005)
+++ branches/THRCHECK/include/pub_tool_tooliface.h 2007-10-16 20:59:33 UTC (rev 7006)
@@ -547,13 +547,14 @@
/* Scheduler events (not exhaustive) */
/* Called when 'tid' starts or stops running client code blocks.
- Gives the total dispatched block count at that event. Note, this is
- not the same as 'tid' holding the BigLock (the lock that ensures that
- only one thread runs at a time): a thread can hold the lock for other
- purposes (making translations, etc) yet not be running client blocks.
- Obviously though, a thread must hold the lock in order to run client
- code blocks, so the times bracketed by 'thread_run'..'thread_runstate'
- are a subset of the times when thread 'tid' holds the cpu lock.
+ Gives the total dispatched block count at that event. Note, this
+ is not the same as 'tid' holding the BigLock (the lock that ensures
+ that only one thread runs at a time): a thread can hold the lock
+ for other purposes (making translations, etc) yet not be running
+ client blocks. Obviously though, a thread must hold the lock in
+ order to run client code blocks, so the times bracketed by
+ 'start_client_code'..'stop_client_code' are a subset of the times
+ when thread 'tid' holds the cpu lock.
*/
void VG_(track_start_client_code)(
void(*f)(ThreadId tid, ULong blocks_dispatched)
From: <sv...@va...> - 2007-10-16 16:04:03
Author: sewardj
Date: 2007-10-16 17:04:02 +0100 (Tue, 16 Oct 2007)
New Revision: 7005
Log:
Improve error messages:
* in race errors, show the memory access size (1, 2, 4 or 8)
* in lock order errors, show enough info that users have at
least some hope of figuring out what's going on
Modified:
branches/THRCHECK/thrcheck/tc_main.c
Modified: branches/THRCHECK/thrcheck/tc_main.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_main.c 2007-10-16 08:09:10 UTC (rev 7004)
+++ branches/THRCHECK/thrcheck/tc_main.c 2007-10-16 16:04:02 UTC (rev 7005)
@@ -250,6 +250,8 @@
/* EXPOSITION */
/* Place where lock first came to the attention of Thrcheck. */
ExeContext* appeared_at;
+ /* Place where the lock was first locked. */
+ ExeContext* first_locked_at;
/* USEFUL-STATIC */
Addr guestaddr; /* Guest address of lock */
LockKind kind; /* what kind of lock this is */
@@ -490,6 +492,8 @@
/*--- Simple helpers for the data structures ---*/
/*----------------------------------------------------------------*/
+static ThreadId map_threads_maybe_reverse_lookup_SLOW ( Thread* ); /*fwds*/
+
#define Thread_MAGIC 0x504fc5e5
#define LockN_MAGIC 0x6545b557 /* normal nonpersistent locks */
#define LockP_MAGIC 0x755b5456 /* persistent (copied) locks */
@@ -524,6 +528,7 @@
lock->unique = unique++;
lock->magic = LockN_MAGIC;
lock->appeared_at = NULL;
+ lock->first_locked_at = NULL;
lock->guestaddr = guestaddr;
lock->kind = kind;
lock->heldW = False;
@@ -630,6 +635,18 @@
{
tl_assert(is_sane_LockN(lk));
tl_assert(is_sane_Thread(thr));
+
+ /* If it's never been locked before, note the first person to lock
+ it. This is so as to produce better lock-order error
+ messages. */
+ if (lk->first_locked_at == NULL) {
+ ThreadId tid = map_threads_maybe_reverse_lookup_SLOW(thr);
+ if (tid != VG_INVALID_THREADID) {
+ lk->first_locked_at
+ = VG_(record_ExeContext(tid, 0/*first_ip_delta*/));
+ }
+ }
+
switch (lk->kind) {
case LK_nonRec:
case_LK_nonRec:
@@ -669,6 +686,18 @@
/* lk must be free or already r-held. */
tl_assert(lk->heldBy == NULL
|| (lk->heldBy != NULL && !lk->heldW));
+
+ /* If it's never been locked before, note the first person to lock
+ it. This is so as to produce better lock-order error
+ messages. */
+ if (lk->first_locked_at == NULL) {
+ ThreadId tid = map_threads_maybe_reverse_lookup_SLOW(thr);
+ if (tid != VG_INVALID_THREADID) {
+ lk->first_locked_at
+ = VG_(record_ExeContext(tid, 0/*first_ip_delta*/));
+ }
+ }
+
if (lk->heldBy) {
TC_(addToBag)(lk->heldBy, (Word)thr);
} else {
@@ -2214,6 +2243,7 @@
static void record_error_UnlockBogus ( Thread*, Addr );
static void record_error_DestroyLocked ( Thread*, Lock* );
static void record_error_PthAPIerror ( Thread*, HChar*, Word, HChar* );
+static void record_error_LockOrder ( Thread*, Lock*, Lock* );
static void record_error_Misc ( Thread*, HChar* );
@@ -2346,7 +2376,7 @@
from old. 'thr_acc' and 'a' are supplied only so it can produce
coherent error messages if necessary. */
static
-UInt msm__handle_read ( Thread* thr_acc, Addr a, UInt wold )
+UInt msm__handle_read ( Thread* thr_acc, Addr a, UInt wold, Int szB )
{
tl_assert(is_sane_Thread(thr_acc));
@@ -2435,7 +2465,7 @@
if (TC_(isEmptyWS)(univ_lsets, lset_new)
&& !TC_(isEmptyWS)(univ_lsets, lset_old)) {
record_error_Race( thr_acc, a,
- False/*isWrite*/, 4/*szB*/, wold, wnew,
+ False/*isWrite*/, szB, wold, wnew,
maybe_get_lastlock_initpoint(a) );
}
stats__msm_r32_ShM_to_ShM++;
@@ -2470,7 +2500,7 @@
resulting from a write to a location, and report any errors
necessary on the way. */
static
-UInt msm__handle_write ( Thread* thr_acc, Addr a, UInt wold )
+UInt msm__handle_write ( Thread* thr_acc, Addr a, UInt wold, Int szB )
{
tl_assert(is_sane_Thread(thr_acc));
@@ -2522,7 +2552,7 @@
wnew = mk_SHVAL_ShM( tset, lset );
if (TC_(isEmptyWS)(univ_lsets, lset)) {
record_error_Race( thr_acc,
- a, True/*isWrite*/, 4/*szB*/, wold, wnew,
+ a, True/*isWrite*/, szB, wold, wnew,
maybe_get_lastlock_initpoint(a) );
}
stats__msm_w32_Excl_to_ShM++;
@@ -2552,7 +2582,7 @@
record_last_lock_lossage(a,lset_old,lset_new);
if (TC_(isEmptyWS)(univ_lsets, lset_new)) {
record_error_Race( thr_acc, a,
- True/*isWrite*/, 4/*szB*/, wold, wnew,
+ True/*isWrite*/, szB, wold, wnew,
maybe_get_lastlock_initpoint(a) );
}
stats__msm_w32_ShR_to_ShM++;
@@ -2581,7 +2611,7 @@
if (TC_(isEmptyWS)(univ_lsets, lset_new)
&& !TC_(isEmptyWS)(univ_lsets, lset_old)) {
record_error_Race( thr_acc, a,
- True/*isWrite*/, 4/*szB*/, wold, wnew,
+ True/*isWrite*/, szB, wold, wnew,
maybe_get_lastlock_initpoint(a) );
}
stats__msm_w32_ShM_to_ShM++;
@@ -3296,7 +3326,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w8[ix8];
}
- svNew = msm__handle_read( thr_acc, a, svOld );
+ svNew = msm__handle_read( thr_acc, a, svOld, 1 );
cl->w8[ix8] = svNew;
}
static void shadow_mem_read16 ( Thread* thr_acc, Addr a, UInt uuOpaque ) {
@@ -3314,7 +3344,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w16[ix16];
}
- svNew = msm__handle_read( thr_acc, a, svOld );
+ svNew = msm__handle_read( thr_acc, a, svOld, 2 );
cl->w16[ix16] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3338,7 +3368,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w32[ix32];
}
- svNew = msm__handle_read( thr_acc, a, svOld );
+ svNew = msm__handle_read( thr_acc, a, svOld, 4 );
cl->w32[ix32] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3360,7 +3390,7 @@
tl_assert(svOld == SHVAL_InvalidD);
goto slowcase;
}
- svNew = msm__handle_read( thr_acc, a, svOld );
+ svNew = msm__handle_read( thr_acc, a, svOld, 8 );
cl->w64[ix64] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3383,7 +3413,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w8[ix8];
}
- svNew = msm__handle_write( thr_acc, a, svOld );
+ svNew = msm__handle_write( thr_acc, a, svOld, 1 );
cl->w8[ix8] = svNew;
}
static void shadow_mem_write16 ( Thread* thr_acc, Addr a, UInt uuOpaque ) {
@@ -3401,7 +3431,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w16[ix16];
}
- svNew = msm__handle_write( thr_acc, a, svOld );
+ svNew = msm__handle_write( thr_acc, a, svOld, 2 );
cl->w16[ix16] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3409,7 +3439,6 @@
shadow_mem_write8( thr_acc, a + 0, 0/*unused*/ );
shadow_mem_write8( thr_acc, a + 1, 0/*unused*/ );
}
-/* inline */
static void shadow_mem_write32 ( Thread* thr_acc, Addr a, UInt uuOpaque ) {
CacheLine* cl;
UWord ix32;
@@ -3425,7 +3454,7 @@
/* EXPENSIVE: tl_assert(is_sane_CacheLine(cl)); */
svOld = cl->w32[ix32];
}
- svNew = msm__handle_write( thr_acc, a, svOld );
+ svNew = msm__handle_write( thr_acc, a, svOld, 4 );
cl->w32[ix32] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3446,7 +3475,7 @@
tl_assert(svOld == SHVAL_InvalidD);
goto slowcase;
}
- svNew = msm__handle_write( thr_acc, a, svOld );
+ svNew = msm__handle_write( thr_acc, a, svOld, 8 );
cl->w64[ix64] = svNew;
return;
slowcase: /* misaligned, or must go further down the tree */
@@ -3523,6 +3552,7 @@
shadow_mem_set16( NULL/*unused*/, a + 0, svNew );
shadow_mem_set16( NULL/*unused*/, a + 2, svNew );
}
+inline
static void shadow_mem_set64 ( Thread* uu_thr_acc, Addr a, UInt svNew ) {
CacheLine* cl;
UWord ix64, ix32, ix16, ix8;
@@ -5132,6 +5162,9 @@
/* FIXME: here are some optimisations still to do in
laog__pre_thread_acquires_lock.
+ The graph is structured so that if L1 --*--> L2 then L1 must be
+ acquired before L2.
+
The common case is that some thread T holds (eg) L1 L2 and L3 and
is repeatedly acquiring and releasing Ln, and there is no ordering
error in what it is doing. Hence it repeatedly:
@@ -5283,7 +5316,7 @@
TC_(initIterFM)( laog );
me = NULL;
links = NULL;
-VG_(printf)("laog sanity check\n");
+ if (0) VG_(printf)("laog sanity check\n");
while (TC_(nextIterFM)( laog, (Word*)&me, (Word*)&links )) {
tl_assert(me);
tl_assert(links);
@@ -5312,13 +5345,15 @@
tl_assert(0);
}
-/* Return True iff there is a path in laog from 'src' to any of the
- elements in 'dst'. */
+/* If there is a path in laog from 'src' to any of the elements in
+ 'dst', return an arbitrarily chosen element of 'dst' reachable from
+ 'src'. If no path exists from 'src' to any element in 'dst', return
+ NULL. */
__attribute__((noinline))
static
-Bool laog__do_dfs_from_to ( Lock* src, WordSetID dsts /* univ_lsets */ )
+Lock* laog__do_dfs_from_to ( Lock* src, WordSetID dsts /* univ_lsets */ )
{
- Bool ret;
+ Lock* ret;
Word i, ssz;
XArray* stack; /* of Lock* */
WordFM* visited; /* Lock* -> void, iow, Set(Lock*) */
@@ -5331,8 +5366,9 @@
/* If the destination set is empty, we can never get there from
'src' :-), so don't bother to try */
if (TC_(isEmptyWS)( univ_lsets, dsts ))
- return False;
+ return NULL;
+ ret = NULL;
stack = VG_(newXA)( tc_zalloc, tc_free, sizeof(Lock*) );
visited = TC_(newFM)( tc_zalloc, tc_free, NULL/*unboxedcmp*/ );
@@ -5342,12 +5378,12 @@
ssz = VG_(sizeXA)( stack );
- if (ssz == 0) { ret = False; break; }
+ if (ssz == 0) { ret = NULL; break; }
here = *(Lock**) VG_(indexXA)( stack, ssz-1 );
VG_(dropTailXA)( stack, 1 );
- if (TC_(elemWS)( univ_lsets, dsts, (Word)here )) { ret = True; break; }
+ if (TC_(elemWS)( univ_lsets, dsts, (Word)here )) { ret = here; break; }
if (TC_(lookupFM)( visited, NULL, NULL, (Word)here ))
continue;
@@ -5378,6 +5414,7 @@
{
Word* ls_words;
Word ls_size, i;
+ Lock* other;
/* It may be that 'thr' already holds 'lk' and is recursively
relocking in. In this case we just ignore the call. */
@@ -5394,9 +5431,14 @@
(rather than after, as we are doing here) at least one of those
locks.
*/
- if (laog__do_dfs_from_to(lk, thr->locksetA)) {
- record_error_Misc( thr, "Lock acquisition order is inconsistent "
- "with previously observed ordering" );
+ other = laog__do_dfs_from_to(lk, thr->locksetA);
+ if (other) {
+ /* So we managed to find a path lk --*--> other in the graph,
+ which implies that 'lk' should have been acquired before
+ 'other' but is in fact being acquired afterwards. We present
+ the lk/other arguments to record_error_LockOrder in the order
+ in which they should have been acquired. */
+ record_error_LockOrder( thr, lk, other );
}
/* Second, add to laog the pairs
@@ -6174,6 +6216,7 @@
XE_UnlockBogus, // unlocking an address not known to be a lock
XE_DestroyLocked, // pth_mx_destroy on locked lock
XE_PthAPIerror, // error from the POSIX pthreads API
+ XE_LockOrder, // lock order error
XE_Misc // misc other error (w/ string to describe it)
}
XErrorTag;
@@ -6221,6 +6264,11 @@
} PthAPIerror;
struct {
Thread* thr;
+ Lock* before; /* always locked first in prog. history */
+ Lock* after; /* was erroneously locked before 'before' */
+ } LockOrder;
+ struct {
+ Thread* thr;
HChar* errstr; /* persistent, in tool-arena */
} Misc;
} XE;
@@ -6243,6 +6291,7 @@
XS_UnlockBogus,
XS_DestroyLocked,
XS_PthAPIerror,
+ XS_LockOrder,
XS_Misc
}
XSuppTag;
@@ -6276,9 +6325,7 @@
xe.XE.Race.thr = thr;
// FIXME: tid vs thr
VG_(maybe_record_error)( map_threads_reverse_lookup_SLOW(thr),
- XE_Race, data_addr,
- (isWrite ? "write to" : "read from"),
- &xe);
+ XE_Race, data_addr, NULL, &xe );
}
static void record_error_FreeMemLock ( Thread* thr, Lock* lk ) {
@@ -6349,6 +6396,22 @@
}
static
+void record_error_LockOrder ( Thread* thr, Lock* before, Lock* after ) {
+ XError xe;
+ tl_assert( is_sane_Thread(thr) );
+ tl_assert( is_sane_LockN(after) );
+ tl_assert( is_sane_LockN(before) );
+ init_XError(&xe);
+ xe.tag = XE_LockOrder;
+ xe.XE.LockOrder.thr = thr;
+ xe.XE.LockOrder.before = mk_LockP_from_LockN(before);
+ xe.XE.LockOrder.after = mk_LockP_from_LockN(after);
+ // FIXME: tid vs thr
+ VG_(maybe_record_error)( map_threads_reverse_lookup_SLOW(thr),
+ XE_LockOrder, 0, NULL, &xe );
+}
+
+static
void record_error_PthAPIerror ( Thread* thr, HChar* fnname,
Word err, HChar* errstr ) {
XError xe;
@@ -6381,7 +6444,6 @@
static Bool tc_eq_Error ( VgRes not_used, Error* e1, Error* e2 )
{
- Char *e1s, *e2s;
XError *xe1, *xe2;
tl_assert(VG_(get_error_kind)(e1) == VG_(get_error_kind)(e2));
@@ -6393,9 +6455,8 @@
switch (VG_(get_error_kind)(e1)) {
case XE_Race:
- //return VG_(get_error_tid)(e1) == VG_(get_error_tid)(e2);
- break;
- //return VG_(get_error_address)(e1) == VG_(get_error_address)(e2);
+ return xe1->XE.Race.szB == xe2->XE.Race.szB
+ && xe1->XE.Race.isWrite == xe2->XE.Race.isWrite;
case XE_FreeMemLock:
return xe1->XE.FreeMemLock.thr == xe2->XE.FreeMemLock.thr
&& xe1->XE.FreeMemLock.lock == xe2->XE.FreeMemLock.lock;
@@ -6417,6 +6478,10 @@
&& 0==VG_(strcmp)(xe1->XE.PthAPIerror.fnname,
xe2->XE.PthAPIerror.fnname)
&& xe1->XE.PthAPIerror.err == xe2->XE.PthAPIerror.err;
+ case XE_LockOrder:
+ return xe1->XE.LockOrder.thr == xe2->XE.LockOrder.thr
+ && xe1->XE.LockOrder.before == xe2->XE.LockOrder.before
+ && xe1->XE.LockOrder.after == xe2->XE.LockOrder.after;
case XE_Misc:
return xe1->XE.Misc.thr == xe2->XE.Misc.thr
&& 0==VG_(strcmp)(xe1->XE.Misc.errstr, xe2->XE.Misc.errstr);
@@ -6424,11 +6489,8 @@
tl_assert(0);
}
- e1s = VG_(get_error_string)(e1);
- e2s = VG_(get_error_string)(e2);
- if (e1s != e2s) return False;
- if (0 != VG_(strcmp)(e1s, e2s)) return False;
- return True;
+ /*NOTREACHED*/
+ tl_assert(0);
}
/* Announce (that is, print the point-of-creation) of the threads in
@@ -6528,6 +6590,32 @@
break;
}
+ case XE_LockOrder: {
+ tl_assert(xe);
+ tl_assert( is_sane_Thread( xe->XE.LockOrder.thr ) );
+ tl_assert( is_sane_LockP( xe->XE.LockOrder.after ) );
+ tl_assert( is_sane_LockP( xe->XE.LockOrder.before ) );
+ announce_one_thread( xe->XE.LockOrder.thr );
+ VG_(message)(Vg_UserMsg,
+ "Thread #%d: lock order \"%p before %p\" violated",
+ (Int)xe->XE.LockOrder.thr->errmsg_index,
+ (void*)xe->XE.LockOrder.before->guestaddr,
+ (void*)xe->XE.LockOrder.after->guestaddr);
+ VG_(pp_ExeContext)( VG_(get_error_where)(err) );
+ if (xe->XE.LockOrder.before->first_locked_at
+ && xe->XE.LockOrder.after->first_locked_at) {
+ VG_(message)(Vg_UserMsg,
+ " Required order was established by acquisition of lock at %p",
+ (void*)xe->XE.LockOrder.before->guestaddr);
+ VG_(pp_ExeContext)( xe->XE.LockOrder.before->first_locked_at );
+ VG_(message)(Vg_UserMsg,
+ " followed by a later acquisition of lock at %p",
+ (void*)xe->XE.LockOrder.after->guestaddr);
+ VG_(pp_ExeContext)( xe->XE.LockOrder.after->first_locked_at );
+ }
+ break;
+ }
+
case XE_PthAPIerror: {
tl_assert(xe);
tl_assert( is_sane_Thread( xe->XE.PthAPIerror.thr ) );
@@ -6624,6 +6712,8 @@
Char old_tset_buf[140], new_tset_buf[140];
UInt old_state, new_state;
Thread* thr_acc;
+ HChar* what;
+ Int szB;
WordSetID tset_to_announce = TC_(emptyWS)( univ_tsets );
/* First extract some essential info */
@@ -6631,6 +6721,8 @@
old_state = xe->XE.Race.old_state;
new_state = xe->XE.Race.new_state;
thr_acc = xe->XE.Race.thr;
+ what = xe->XE.Race.isWrite ? "write" : "read";
+ szB = xe->XE.Race.szB;
tl_assert(is_sane_Thread(thr_acc));
err_ga = VG_(get_error_address)(err);
@@ -6663,8 +6755,9 @@
new_tset, (Word)old_thr );
announce_threadset( tset_to_announce );
- VG_(message)(Vg_UserMsg, "Possible data race during %s %p %(y",
- VG_(get_error_string)(err), err_ga, err_ga);
+ VG_(message)(Vg_UserMsg,
+ "Possible data race during %s of size %d at %p",
+ what, szB, err_ga);
VG_(pp_ExeContext)( VG_(get_error_where)(err) );
/* pp_AddrInfo(err_addr, &extra->addrinfo); */
if (show_raw_states)
@@ -6692,8 +6785,9 @@
tset_to_announce = TC_(unionWS)( univ_tsets, old_tset, new_tset );
announce_threadset( tset_to_announce );
- VG_(message)(Vg_UserMsg, "Possible data race during %s %p %(y",
- VG_(get_error_string)(err), err_ga, err_ga);
+ VG_(message)(Vg_UserMsg,
+ "Possible data race during %s of size %d at %p",
+ what, szB, err_ga);
VG_(pp_ExeContext)( VG_(get_error_where)(err) );
/* pp_AddrInfo(err_addr, &extra->addrinfo); */
if (show_raw_states)
@@ -6726,8 +6820,9 @@
}
/* Hmm, unknown transition. Just print what we do know. */
else {
- VG_(message)(Vg_UserMsg, "Possible data race during %s %p %(y",
- VG_(get_error_string)(err), err_ga, err_ga);
+ VG_(message)(Vg_UserMsg,
+ "Possible data race during %s of size %d at %p",
+ what, szB, err_ga);
VG_(pp_ExeContext)( VG_(get_error_where)(err) );
//pp_AddrInfo(err_addr, &extra->addrinfo);
@@ -6754,6 +6849,7 @@
case XE_UnlockBogus: return "UnlockBogus";
case XE_DestroyLocked: return "DestroyLocked";
case XE_PthAPIerror: return "PthAPIerror";
+ case XE_LockOrder: return "LockOrder";
case XE_Misc: return "Misc";
default: tl_assert(0); /* fill in missing case */
}
@@ -6773,6 +6869,7 @@
TRY("UnlockBogus", XS_UnlockBogus);
TRY("DestroyLocked", XS_DestroyLocked);
TRY("PthAPIerror", XS_PthAPIerror);
+ TRY("LockOrder", XS_LockOrder);
TRY("Misc", XS_Misc);
return False;
# undef TRY
@@ -6796,6 +6893,7 @@
case XS_UnlockBogus: return VG_(get_error_kind)(err) == XE_UnlockBogus;
case XS_DestroyLocked: return VG_(get_error_kind)(err) == XE_DestroyLocked;
case XS_PthAPIerror: return VG_(get_error_kind)(err) == XE_PthAPIerror;
+ case XS_LockOrder: return VG_(get_error_kind)(err) == XE_LockOrder;
case XS_Misc: return VG_(get_error_kind)(err) == XE_Misc;
//case XS_: return VG_(get_error_kind)(err) == XE_;
default: tl_assert(0); /* fill in missing cases */
From: <sv...@va...> - 2007-10-16 08:09:09
Author: njn
Date: 2007-10-16 09:09:10 +0100 (Tue, 16 Oct 2007)
New Revision: 7004
Log:
Minimised the number of XPts dup'd by introducing SXPts, and doing fully
accurate significance tests at duplication time.
Modified:
branches/MASSIF2/massif/ms_main.c
branches/MASSIF2/massif/tests/culling1.stderr.exp
branches/MASSIF2/massif/tests/culling2.stderr.exp
branches/MASSIF2/massif/tests/deep-B.stderr.exp
branches/MASSIF2/massif/tests/deep-C.stderr.exp
branches/MASSIF2/massif/tests/realloc.post.exp
branches/MASSIF2/massif/tests/realloc.stderr.exp
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 08:09:10 UTC (rev 7004)
@@ -33,6 +33,7 @@
// Performance:
//
// perl perf/vg_perf --tools=massif --reps=3 perf/{heap,tinycc} massif
+// time valgrind --tool=massif --depth=100 konqueror
//
// The other benchmarks don't do much allocation, and so give similar speeds
// to Nulgrind.
@@ -66,6 +67,13 @@
// many-xpts 0.13s ma: 2.8s (21.6x, -----)
// konqueror 4:37 real 4:14 user
//
+// Minimised the number of dup'd XPts by introducing SXPts (r7004):
+// heap 0.56s ma:20.8s (37.2x, -----)
+// tinycc 0.45s ma: 7.1s (15.7x, -----)
+// many-xpts 0.05s ma: 1.6s (33.0x, -----)
+// konqueror 3:45 real 3:35 user
+//
+//
// Todo:
// - for regtests, need to filter out code addresses in *.post.* files
// - do snapshots on client requests
@@ -250,25 +258,23 @@
// - 15,000 XPts 800,000 XPts
// - 1,800 top-XPts
-static UInt n_xpts = 0;
-static UInt n_dupd_xpts = 0;
-static UInt n_dupd_xpts_freed = 0;
static UInt n_heap_allocs = 0;
-static UInt n_heap_zero_allocs = 0;
static UInt n_heap_reallocs = 0;
static UInt n_heap_frees = 0;
static UInt n_stack_allocs = 0;
static UInt n_stack_frees = 0;
+static UInt n_xpts = 0;
static UInt n_xpt_init_expansions = 0;
static UInt n_xpt_later_expansions = 0;
-static UInt n_getXCon_redo = 0;
-static UInt n_cullings = 0;
+static UInt n_sxpt_allocs = 0;
+static UInt n_sxpt_frees = 0;
+static UInt n_skipped_snapshots = 0;
static UInt n_real_snapshots = 0;
static UInt n_detailed_snapshots = 0;
static UInt n_peak_snapshots = 0;
-static UInt n_skipped_snapshots = 0;
+static UInt n_cullings = 0;
+static UInt n_XCon_redos = 0;
-
//------------------------------------------------------------//
//--- Globals ---//
//------------------------------------------------------------//
@@ -472,7 +478,16 @@
// | / \ | the XTree will look like this.
// | v v |
// child1 child2
+//
+// XTrees and XPts are mirrored by SXTrees and SXPts, where the 'S' is short
+// for "saved". When the XTree is duplicated for a snapshot, we duplicate
+// it as an SXTree, which is similar but omits some things it does not need,
+// and aggregates up insignificant nodes. This is important as an SXTree is
+// typically much smaller than an XTree.
+// XXX: make XPt and SXPt extensible arrays, to avoid having to do two
+// allocations per Pt.
+
typedef struct _XPt XPt;
struct _XPt {
Addr ip; // code address
@@ -493,6 +508,36 @@
XPt** children; // pointers to children XPts
};
+typedef
+ enum {
+ SigSXPt,
+ InsigSXPt
+ }
+ SXPtTag;
+
+typedef struct _SXPt SXPt;
+struct _SXPt {
+ SXPtTag tag;
+ SizeT szB; // memory size for the node, be it Sig or Insig
+ union {
+ // An SXPt representing a single significant code location. Much like
+ // an XPt, minus the fields that aren't necessary.
+ struct {
+ Addr ip;
+ UInt n_children;
+ SXPt** children;
+ }
+ Sig;
+
+ // An SXPt representing one or more code locations, all below the
+ // significance threshold.
+ struct {
+ Int n_xpts; // number of aggregated XPts
+ }
+ Insig;
+ };
+};
+
// Fake XPt representing all allocation functions like malloc(). Acts as
// parent node to all top-XPts.
static XPt* alloc_xpt;
@@ -519,31 +564,15 @@
return (void*)(hp - n_bytes);
}
-__attribute__((unused))
-static void pp_XPt(XPt* xpt)
-{
- Int i;
-
- VG_(printf)("XPt (%p):\n", xpt);
- VG_(printf)("- ip: : %p\n", (void*)xpt->ip);
- VG_(printf)("- szB : %ld\n", xpt->szB);
- VG_(printf)("- parent : %p\n", xpt->parent);
- VG_(printf)("- n_children : %d\n", xpt->n_children);
- VG_(printf)("- max_children: %d\n", xpt->max_children);
- for (i = 0; i < xpt->n_children; i++) {
- VG_(printf)("- children[%2d]: %p\n", i, xpt->children[i]);
- }
-}
-
static XPt* new_XPt(Addr ip, XPt* parent)
{
// XPts are never freed, so we can use perm_malloc to allocate them.
// Note that we cannot use perm_malloc for the 'children' array, because
// that needs to be resizable.
- XPt* xpt = perm_malloc(sizeof(XPt));
- xpt->ip = ip;
- xpt->szB = 0;
- xpt->parent = parent;
+ XPt* xpt = perm_malloc(sizeof(XPt));
+ xpt->ip = ip;
+ xpt->szB = 0;
+ xpt->parent = parent;
// We don't initially allocate any space for children. We let that
// happen on demand. Many XPts (ie. all the bottom-XPts) don't have any
@@ -604,64 +633,104 @@
xpt->szB * 10000ULL / total_szB >= clo_threshold);
}
-
//------------------------------------------------------------//
//--- XTree Operations ---//
//------------------------------------------------------------//
-static XPt* dup_XTree(XPt* xpt, XPt* parent, SizeT total_szB)
+// Duplicates an XTree as an SXTree.
+static SXPt* dup_XTree(XPt* xpt, SizeT total_szB)
{
- Int i;
- XPt* dup_xpt = VG_(malloc)(sizeof(XPt));
- dup_xpt->ip = xpt->ip;
- dup_xpt->szB = xpt->szB;
- dup_xpt->parent = parent; // Nb: not xpt->children!
- // If this node is not significant, there's no point duplicating its
- // children. And not doing so can make a huge difference, eg.
- // it speeds up massif/perf/many-xpts by over 10x.
- if (!is_significant_XPt(xpt, total_szB)) {
- dup_xpt->n_children = 0;
- dup_xpt->max_children = 0;
- dup_xpt->children = NULL;
+ Int i, n_sig_children, n_insig_children, n_child_sxpts;
+ SizeT insig_children_szB;
+ SXPt* sxpt;
+
+ // Sort XPt's children by szB (reverse order: biggest to smallest).
+ VG_(ssort)(xpt->children, xpt->n_children, sizeof(XPt*), XPt_revcmp_szB);
+
+ // Number of XPt children Action for SXPT
+ // ------------------ ---------------
+ // 0 sig, 0 insig alloc 0 children
+ // N sig, 0 insig alloc N children, dup all
+ // N sig, M insig alloc N+1, dup first N, aggregate remaining M
+ // 0 sig, M insig alloc 1, aggregate M
+
+ // How many children are significant? And do we need an aggregate SXPt?
+ n_sig_children = 0;
+ while (n_sig_children < xpt->n_children &&
+ is_significant_XPt(xpt->children[n_sig_children], total_szB))
+ {
+ n_sig_children++;
+ }
+ n_insig_children = xpt->n_children - n_sig_children;
+ n_child_sxpts = n_sig_children + ( n_insig_children > 0 ? 1 : 0 );
+
+ // Duplicate the XPt.
+ sxpt = VG_(malloc)(sizeof(SXPt));
+ n_sxpt_allocs++;
+ sxpt->tag = SigSXPt;
+ sxpt->szB = xpt->szB;
+ sxpt->Sig.ip = xpt->ip;
+ sxpt->Sig.n_children = n_child_sxpts;
+
+ // Create the SXPt's children.
+ if (n_child_sxpts > 0) {
+ SizeT sig_children_szB = 0;
+ sxpt->Sig.children = VG_(malloc)(n_child_sxpts * sizeof(SXPt*));
+
+ // Duplicate the significant children.
+ for (i = 0; i < n_sig_children; i++) {
+ sxpt->Sig.children[i] = dup_XTree(xpt->children[i], total_szB);
+ sig_children_szB += sxpt->Sig.children[i]->szB;
+ }
+
+ // Create the SXPt for the insignificant children, if any, and put it
+ // in the last child entry.
+ insig_children_szB = sxpt->szB - sig_children_szB;
+ if (n_insig_children > 0) {
+ // Nb: We increment 'n_sxpt_allocs' here because creating an Insig SXPt
+ // doesn't involve a call to dup_XTree().
+ SXPt* insig_sxpt = VG_(malloc)(sizeof(SXPt));
+ n_sxpt_allocs++;
+ insig_sxpt->tag = InsigSXPt;
+ insig_sxpt->szB = insig_children_szB;
+ insig_sxpt->Insig.n_xpts = n_insig_children;
+ sxpt->Sig.children[n_sig_children] = insig_sxpt;
+ }
} else {
- dup_xpt->n_children = xpt->n_children;
- dup_xpt->max_children = xpt->max_children;
- // We copy n_children children (not max_children). If n_children==0,
- // don't bother allocating an 'children' array in the dup.
- if (xpt->n_children > 0) {
- dup_xpt->children = VG_(malloc)(dup_xpt->n_children * sizeof(XPt*));
- for (i = 0; i < xpt->n_children; i++) {
- dup_xpt->children[i] =
- dup_XTree(xpt->children[i], dup_xpt, total_szB);
- }
- } else {
- dup_xpt->children = NULL;
- }
+ sxpt->Sig.children = NULL;
}
- n_dupd_xpts++;
- return dup_xpt;
+ return sxpt;
}
-static void free_XTree(XPt* xpt)
+static void free_SXTree(SXPt* sxpt)
{
Int i;
- // Free all children XPts, then the children array, then the XPt itself.
- tl_assert(xpt != NULL);
- for (i = 0; i < xpt->n_children; i++) {
- XPt* child = xpt->children[i];
- free_XTree(child);
- xpt->children[i] = NULL;
+ tl_assert(sxpt != NULL);
+
+ switch (sxpt->tag) {
+ case SigSXPt:
+ // Free all children SXPts, then the children array.
+ for (i = 0; i < sxpt->Sig.n_children; i++) {
+ free_SXTree(sxpt->Sig.children[i]);
+ sxpt->Sig.children[i] = NULL;
+ }
+ VG_(free)(sxpt->Sig.children); sxpt->Sig.children = NULL;
+ break;
+
+ case InsigSXPt:
+ break;
+
+ default: tl_assert2(0, "free_SXTree: unknown SXPt tag");
}
- VG_(free)(xpt->children); xpt->children = NULL;
- VG_(free)(xpt); xpt = NULL;
-
- n_dupd_xpts_freed++;
+
+ // Free the SXPt itself.
+ VG_(free)(sxpt); sxpt = NULL;
+ n_sxpt_frees++;
}
-// Sanity checking: we check snapshot XTrees after they are taken, before
-// they are deleted, and before they are printed. We also periodically
-// check the main heap XTree with ms_expensive_sanity_check.
+// Sanity checking: we periodically check the heap XTree with
+// ms_expensive_sanity_check.
static void sanity_check_XTree(XPt* xpt, XPt* parent)
{
Int i;
@@ -687,7 +756,37 @@
}
}
+// Sanity checking: we check SXTrees (which are in snapshots) after
+// snapshots are created, before they are deleted, and before they are
+// printed.
+static void sanity_check_SXTree(SXPt* sxpt)
+{
+ Int i;
+ tl_assert(sxpt != NULL);
+
+ // Check the sum of any children szBs equals the SXPt's szB. Check the
+ // children at the same time.
+ switch (sxpt->tag) {
+ case SigSXPt: {
+ if (sxpt->Sig.n_children > 0) {
+ SizeT children_sum_szB = 0;
+ for (i = 0; i < sxpt->Sig.n_children; i++) {
+ sanity_check_SXTree(sxpt->Sig.children[i]);
+ children_sum_szB += sxpt->Sig.children[i]->szB;
+ }
+ tl_assert(children_sum_szB == sxpt->szB);
+ }
+ break;
+ }
+ case InsigSXPt:
+ break; // do nothing
+
+ default: tl_assert2(0, "sanity_check_SXTree: unknown SXPt tag");
+ }
+}
+
+
//------------------------------------------------------------//
//--- XCon Operations ---//
//------------------------------------------------------------//
@@ -825,7 +924,7 @@
{
return n_ips;
} else {
- n_getXCon_redo++;
+ n_XCon_redos++;
}
}
}
@@ -920,7 +1019,7 @@
SizeT heap_szB;
SizeT heap_admin_szB;
SizeT stacks_szB;
- XPt* alloc_xpt; // Heap XTree root, if a detailed snapshot,
+ SXPt* alloc_sxpt; // Heap XTree root, if a detailed snapshot,
} // otherwise NULL
Snapshot;
@@ -935,7 +1034,7 @@
tl_assert(snapshot->heap_admin_szB == 0);
tl_assert(snapshot->heap_szB == 0);
tl_assert(snapshot->stacks_szB == 0);
- tl_assert(snapshot->alloc_xpt == NULL);
+ tl_assert(snapshot->alloc_sxpt == NULL);
return False;
} else {
tl_assert(snapshot->time != UNUSED_SNAPSHOT_TIME);
@@ -945,7 +1044,7 @@
static Bool is_detailed_snapshot(Snapshot* snapshot)
{
- return (snapshot->alloc_xpt ? True : False);
+ return (snapshot->alloc_sxpt ? True : False);
}
static Bool is_uncullable_snapshot(Snapshot* snapshot)
@@ -957,8 +1056,8 @@
static void sanity_check_snapshot(Snapshot* snapshot)
{
- if (snapshot->alloc_xpt) {
- sanity_check_XTree(snapshot->alloc_xpt, /*parent*/NULL);
+ if (snapshot->alloc_sxpt) {
+ sanity_check_SXTree(snapshot->alloc_sxpt);
}
}
@@ -984,7 +1083,7 @@
snapshot->heap_admin_szB = 0;
snapshot->heap_szB = 0;
snapshot->stacks_szB = 0;
- snapshot->alloc_xpt = NULL;
+ snapshot->alloc_sxpt = NULL;
}
// This zeroes all the fields in the snapshot, and frees the heap XTree if
@@ -994,10 +1093,10 @@
// Nb: if there's an XTree, we free it after calling clear_snapshot,
// because clear_snapshot does a sanity check which includes checking the
// XTree.
- XPt* tmp_xpt = snapshot->alloc_xpt;
+ SXPt* tmp_sxpt = snapshot->alloc_sxpt;
clear_snapshot(snapshot);
- if (tmp_xpt) {
- free_XTree(tmp_xpt);
+ if (tmp_sxpt) {
+ free_SXTree(tmp_sxpt);
}
}
@@ -1204,8 +1303,9 @@
if (is_detailed) {
// XXX: total_szB computed in various places -- factor it out
SizeT total_szB = heap_szB + clo_heap_admin*n_heap_blocks + stacks_szB;
- snapshot->alloc_xpt = dup_XTree(alloc_xpt, /*parent*/NULL, total_szB);
- tl_assert(snapshot->alloc_xpt->szB == heap_szB);
+ snapshot->alloc_sxpt = dup_XTree(alloc_xpt, total_szB);
+ tl_assert( alloc_xpt->szB == heap_szB);
+ tl_assert(snapshot->alloc_sxpt->szB == heap_szB);
}
snapshot->heap_admin_szB = clo_heap_admin * n_heap_blocks;
}
@@ -1412,7 +1512,6 @@
// Update statistics.
n_heap_allocs++;
- if (0 == szB) n_heap_zero_allocs++;
// Update heap stats.
update_heap_stats(hc->szB, /*n_heap_blocks_delta*/1);
@@ -1740,7 +1839,7 @@
return mbuf;
}
-static void pp_snapshot_XPt(Int fd, XPt* xpt, Int depth, Char* depth_str,
+static void pp_snapshot_SXPt(Int fd, SXPt* sxpt, Int depth, Char* depth_str,
Int depth_str_len,
SizeT snapshot_heap_szB, SizeT snapshot_total_szB)
{
@@ -1749,63 +1848,64 @@
Char* perc;
Char ip_desc_array[BUF_LEN];
Char* ip_desc = ip_desc_array;
- SizeT printed_children_szB = 0;
- Int n_sig_children;
- Int n_insig_children;
- Int n_child_entries;
+ SXPt* pred = NULL;
+ SXPt* child = NULL;
- // Sort XPt's children by szB (reverse order: biggest to smallest)
- VG_(ssort)(xpt->children, xpt->n_children, sizeof(XPt*),
- XPt_revcmp_szB);
+ switch (sxpt->tag) {
+ case SigSXPt:
+ // Print the SXPt itself.
+ if (sxpt->Sig.ip == 0) {
+ ip_desc =
+ "(heap allocation functions) malloc/new/new[], --alloc-fns, etc.";
+ } else {
+ // XXX: why the -1?
+ ip_desc = VG_(describe_IP)(sxpt->Sig.ip-1, ip_desc, BUF_LEN);
+ }
+ perc = make_perc(sxpt->szB, snapshot_total_szB);
+ FP("%sn%d: %lu %s\n",
+ depth_str, sxpt->Sig.n_children, sxpt->szB, ip_desc);
- // How many children are significant? Also calculate the number of child
- // entries to print -- there may be a need for an "in N places" line.
- n_sig_children = 0;
- while (n_sig_children < xpt->n_children &&
- is_significant_XPt(xpt->children[n_sig_children],
- snapshot_total_szB))
- {
- n_sig_children++;
- }
- n_insig_children = xpt->n_children - n_sig_children;
- n_child_entries = n_sig_children + ( n_insig_children > 0 ? 1 : 0 );
+ // Indent.
+ tl_assert(depth+1 < depth_str_len-1); // -1 for end NUL char
+ depth_str[depth+0] = ' ';
+ depth_str[depth+1] = '\0';
- // Print the XPt entry.
- if (xpt->ip == 0) {
- ip_desc =
- "(heap allocation functions) malloc/new/new[], --alloc-fns, etc.";
- } else {
- ip_desc = VG_(describe_IP)(xpt->ip-1, ip_desc, BUF_LEN);
- }
- perc = make_perc(xpt->szB, snapshot_total_szB);
- FP("%sn%d: %lu %s\n", depth_str, n_child_entries, xpt->szB, ip_desc);
+ // Print the SXPt's children. They should already be in sorted order.
+ for (i = 0; i < sxpt->Sig.n_children; i++) {
+ pred = child;
+ child = sxpt->Sig.children[i];
- // Indent.
- tl_assert(depth+1 < depth_str_len-1); // -1 for end NUL char
- depth_str[depth+0] = ' ';
- depth_str[depth+1] = '\0';
+ // Only the last child can be an Insig SXPt.
+ if (i < sxpt->Sig.n_children-1)
+ tl_assert(SigSXPt == child->tag);
- // Print the children.
- for (i = 0; i < n_sig_children; i++) {
- XPt* child = xpt->children[i];
- pp_snapshot_XPt(fd, child, depth+1, depth_str, depth_str_len,
- snapshot_heap_szB, snapshot_total_szB);
- printed_children_szB += child->szB;
- }
+ // Sortedness check: if this child is a normal SXPt, check it's not
+ // bigger than its predecessor.
+ if (pred && SigSXPt == child->tag)
+ tl_assert(child->szB <= pred->szB);
- // Print the extra "in N places" line, if any children were insignificant.
- if (n_insig_children > 0) {
- Char* s = ( n_insig_children == 1 ? "," : "s, all" );
- SizeT total_insig_children_szB = xpt->szB - printed_children_szB;
- perc = make_perc(total_insig_children_szB, snapshot_total_szB);
+ // Ok, print the child.
+ pp_snapshot_SXPt(fd, child, depth+1, depth_str, depth_str_len,
+ snapshot_heap_szB, snapshot_total_szB);
+
+ // Unindent.
+ depth_str[depth+0] = '\0';
+ depth_str[depth+1] = '\0';
+ }
+ break;
+
+ case InsigSXPt: {
+ Char* s = ( sxpt->Insig.n_xpts == 1 ? "," : "s, all" );
+ perc = make_perc(sxpt->szB, snapshot_total_szB);
FP("%sn0: %lu in %d place%s below massif's threshold (%s)\n",
- depth_str, total_insig_children_szB, n_insig_children, s,
+ depth_str, sxpt->szB, sxpt->Insig.n_xpts, s,
make_perc(clo_threshold, 10000));
+ break;
+ }
+
+ default:
+ tl_assert2(0, "pp_snapshot_SXPt: unrecognised SXPt tag");
}
-
- // Unindent.
- depth_str[depth+0] = '\0';
- depth_str[depth+1] = '\0';
}
static void pp_snapshot(Int fd, Snapshot* snapshot, Int snapshot_n)
@@ -1829,9 +1929,9 @@
depth_str[0] = '\0'; // Initialise depth_str to "".
FP("heap_tree=%s\n", ( Peak == snapshot->kind ? "peak" : "detailed" ));
- pp_snapshot_XPt(fd, snapshot->alloc_xpt, 0, depth_str,
- depth_str_len, snapshot->heap_szB,
- snapshot_total_szB);
+ pp_snapshot_SXPt(fd, snapshot->alloc_sxpt, 0, depth_str,
+ depth_str_len, snapshot->heap_szB,
+ snapshot_total_szB);
VG_(free)(depth_str);
@@ -1906,9 +2006,6 @@
// Stats
tl_assert(n_xpts > 0); // always have alloc_xpt
VERB(1, "heap allocs: %u", n_heap_allocs);
- VERB(1, "heap zero allocs: %u (%d%%)",
- n_heap_zero_allocs,
- ( n_heap_allocs ? n_heap_zero_allocs * 100 / n_heap_allocs : 0 ));
VERB(1, "heap reallocs: %u", n_heap_reallocs);
VERB(1, "heap frees: %u", n_heap_frees);
VERB(1, "stack allocs: %u", n_stack_allocs);
@@ -1917,16 +2014,16 @@
VERB(1, "top-XPts: %u (%d%%)",
alloc_xpt->n_children,
( n_xpts ? alloc_xpt->n_children * 100 / n_xpts : 0));
- VERB(1, "dup'd XPts: %u", n_dupd_xpts);
- VERB(1, "dup'd/freed XPts: %u", n_dupd_xpts_freed);
VERB(1, "XPt-init-expansions: %u", n_xpt_init_expansions);
VERB(1, "XPt-later-expansions: %u", n_xpt_later_expansions);
+ VERB(1, "SXPt allocs: %u", n_sxpt_allocs);
+ VERB(1, "SXPt frees: %u", n_sxpt_frees);
VERB(1, "skipped snapshots: %u", n_skipped_snapshots);
VERB(1, "real snapshots: %u", n_real_snapshots);
VERB(1, "detailed snapshots: %u", n_detailed_snapshots);
VERB(1, "peak snapshots: %u", n_peak_snapshots);
VERB(1, "cullings: %u", n_cullings);
- VERB(1, "XCon_redos: %u", n_getXCon_redo);
+ VERB(1, "XCon_redos: %u", n_XCon_redos);
}
Modified: branches/MASSIF2/massif/tests/culling1.stderr.exp
===================================================================
--- branches/MASSIF2/massif/tests/culling1.stderr.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/culling1.stderr.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -420,17 +420,16 @@
Massif: post-cull Sd 49 (t:3582, hp:1990, ad:1592, st:0)
Massif: New time interval = 72 (between snapshots 0 and 1)
Massif: heap allocs: 200
-Massif: heap zero allocs: 0 (0%)
Massif: heap reallocs: 0
Massif: heap frees: 0
Massif: stack allocs: 0
Massif: stack frees: 0
Massif: XPts: 2
Massif: top-XPts: 1 (50%)
-Massif: dup'd XPts: 30
-Massif: dup'd/freed XPts: 18
Massif: XPt-init-expansions: 1
Massif: XPt-later-expansions: 0
+Massif: SXPt allocs: 30
+Massif: SXPt frees: 18
Massif: skipped snapshots: 51
Massif: real snapshots: 150
Massif: detailed snapshots: 15
Modified: branches/MASSIF2/massif/tests/culling2.stderr.exp
===================================================================
--- branches/MASSIF2/massif/tests/culling2.stderr.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/culling2.stderr.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -523,17 +523,16 @@
Massif: post-cull Sd 49 (t:21293, hp:19701, ad:1592, st:0)
Massif: New time interval = 321 (between snapshots 26 and 27)
Massif: heap allocs: 200
-Massif: heap zero allocs: 1 (0%)
Massif: heap reallocs: 0
Massif: heap frees: 0
Massif: stack allocs: 0
Massif: stack frees: 0
Massif: XPts: 2
Massif: top-XPts: 1 (50%)
-Massif: dup'd XPts: 40
-Massif: dup'd/freed XPts: 38
Massif: XPt-init-expansions: 1
Massif: XPt-later-expansions: 0
+Massif: SXPt allocs: 40
+Massif: SXPt frees: 38
Massif: skipped snapshots: 1
Massif: real snapshots: 200
Massif: detailed snapshots: 20
Modified: branches/MASSIF2/massif/tests/deep-B.stderr.exp
===================================================================
--- branches/MASSIF2/massif/tests/deep-B.stderr.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/deep-B.stderr.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -32,17 +32,16 @@
Massif: alloc Sd 9 (t:972, hp:900, ad:72, st:0)
Massif: alloc S. 10 (t:1080, hp:1000, ad:80, st:0)
Massif: heap allocs: 10
-Massif: heap zero allocs: 0 (0%)
Massif: heap reallocs: 0
Massif: heap frees: 0
Massif: stack allocs: 0
Massif: stack frees: 0
Massif: XPts: 7
Massif: top-XPts: 1 (14%)
-Massif: dup'd XPts: 7
-Massif: dup'd/freed XPts: 0
Massif: XPt-init-expansions: 6
Massif: XPt-later-expansions: 0
+Massif: SXPt allocs: 7
+Massif: SXPt frees: 0
Massif: skipped snapshots: 0
Massif: real snapshots: 11
Massif: detailed snapshots: 1
Modified: branches/MASSIF2/massif/tests/deep-C.stderr.exp
===================================================================
--- branches/MASSIF2/massif/tests/deep-C.stderr.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/deep-C.stderr.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -35,17 +35,16 @@
Massif: alloc Sd 9 (t:972, hp:900, ad:72, st:0)
Massif: alloc S. 10 (t:1080, hp:1000, ad:80, st:0)
Massif: heap allocs: 10
-Massif: heap zero allocs: 0 (0%)
Massif: heap reallocs: 0
Massif: heap frees: 0
Massif: stack allocs: 0
Massif: stack frees: 0
Massif: XPts: 4
Massif: top-XPts: 1 (25%)
-Massif: dup'd XPts: 4
-Massif: dup'd/freed XPts: 0
Massif: XPt-init-expansions: 3
Massif: XPt-later-expansions: 0
+Massif: SXPt allocs: 4
+Massif: SXPt frees: 0
Massif: skipped snapshots: 0
Massif: real snapshots: 11
Massif: detailed snapshots: 1
Modified: branches/MASSIF2/massif/tests/realloc.post.exp
===================================================================
--- branches/MASSIF2/massif/tests/realloc.post.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/realloc.post.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -52,10 +52,10 @@
100.00% (150B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->100.00% (150B) 0x80483E3: main (realloc.c:12)
|
+->00.00% (0B) 0x80483BA: main (realloc.c:8)
+|
->00.00% (0B) 0x80483A7: main (realloc.c:5)
|
-->00.00% (0B) 0x80483BA: main (realloc.c:8)
-|
->00.00% (0B) 0x80483CD: main (realloc.c:10)
--------------------------------------------------------------------------------
Modified: branches/MASSIF2/massif/tests/realloc.stderr.exp
===================================================================
--- branches/MASSIF2/massif/tests/realloc.stderr.exp 2007-10-16 07:42:54 UTC (rev 7003)
+++ branches/MASSIF2/massif/tests/realloc.stderr.exp 2007-10-16 08:09:10 UTC (rev 7004)
@@ -22,17 +22,16 @@
Massif: de-PEAK Sp 6 (t:250, hp:150, ad:0, st:0)
Massif: dealloc S. 7 (t:400, hp:0, ad:0, st:0)
Massif: heap allocs: 1
-Massif: heap zero allocs: 0 (0%)
Massif: heap reallocs: 3
Massif: heap frees: 1
Massif: stack allocs: 0
Massif: stack frees: 0
Massif: XPts: 5
Massif: top-XPts: 4 (80%)
-Massif: dup'd XPts: 8
-Massif: dup'd/freed XPts: 0
Massif: XPt-init-expansions: 1
Massif: XPt-later-expansions: 0
+Massif: SXPt allocs: 8
+Massif: SXPt frees: 0
Massif: skipped snapshots: 0
Massif: real snapshots: 8
Massif: detailed snapshots: 2
From: <sv...@va...> - 2007-10-16 07:42:53
Author: njn
Date: 2007-10-16 08:42:54 +0100 (Tue, 16 Oct 2007)
New Revision: 7003
Log:
remove unused function
Modified:
branches/MASSIF2/massif/ms_main.c
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-16 07:38:18 UTC (rev 7002)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 07:42:54 UTC (rev 7003)
@@ -697,16 +697,6 @@
#define MAX_OVERESTIMATE 50
#define MAX_IPS (MAX_DEPTH + MAX_OVERESTIMATE)
-// XXX: look at the "(below main)"/"__libc_start_main" mess (m_stacktrace.c
-// and m_demangle.c). Don't hard-code "(below main)" in here.
-// [Nb: Josef wants --show-below-main to work for his fn entry/exit tracing]
-static Bool is_main_or_below_main(Char* fnname)
-{
- if (VG_STREQ(fnname, "main")) return True;
- if (VG_STREQ(fnname, "(below main)")) return True;
- return False;
-}
-
// Get the stack trace for an XCon, filtering out uninteresting entries:
// alloc-fns and entries above alloc-fns, and entries below main-or-below-main.
// Eg: alloc-fn1 / alloc-fn2 / a / b / main / (below main) / c
@@ -764,6 +754,11 @@
// Filter out entries that are below main, if necessary.
// XXX: stats -- should record how often this happens.
+ // XXX: look at the "(below main)"/"__libc_start_main" mess
+ // (m_stacktrace.c and m_demangle.c). Don't hard-code "(below
+ // main)" in here.
+ // [Nb: Josef wants --show-below-main to work for his fn entry/exit
+ // tracing]
if (should_hide_below_main) {
for (i = n_ips-1; i >= 0; i--) {
if (VG_(get_fnname)(ips[i], buf, BUF_LEN)) {
From: <sv...@va...> - 2007-10-16 07:38:16
Author: njn
Date: 2007-10-16 08:38:18 +0100 (Tue, 16 Oct 2007)
New Revision: 7002
Log:
Add a third verbosity level, and make verb=1 less verbose.
Modified:
branches/MASSIF2/massif/ms_main.c
branches/MASSIF2/massif/tests/culling1.vgtest
branches/MASSIF2/massif/tests/culling2.vgtest
branches/MASSIF2/massif/tests/deep-B.vgtest
branches/MASSIF2/massif/tests/deep-C.vgtest
branches/MASSIF2/massif/tests/realloc.vgtest
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 07:38:18 UTC (rev 7002)
@@ -64,6 +64,7 @@
// heap 0.59s ma:20.3s (34.5x, -----)
// tinycc 0.49s ma: 7.6s (15.4x, -----)
// many-xpts 0.13s ma: 2.8s (21.6x, -----)
+// konqueror 4:37 real 4:14 user
//
// Todo:
// - for regtests, need to filter out code addresses in *.post.* files
@@ -1052,7 +1053,7 @@
j < clo_max_snapshots && !is_snapshot_in_use(&snapshots[j]); \
j++) { }
- VERB(1, "Culling...");
+ VERB(2, "Culling...");
// First we remove enough snapshots by clearing them in-place. Once
// that's done, we can slide the remaining ones down.
@@ -1091,7 +1092,7 @@
if (VG_(clo_verbosity) > 1) {
Char buf[64];
VG_(snprintf)(buf, 64, " %3d (t-span = %lld)", i, min_timespan);
- VERB_snapshot(1, buf, min_j);
+ VERB_snapshot(2, buf, min_j);
}
delete_snapshot(min_snapshot);
n_deleted++;
@@ -1135,7 +1136,7 @@
if (is_uncullable_snapshot(&snapshots[i]) &&
is_uncullable_snapshot(&snapshots[i-1]))
{
- VERB(1, "(Ignoring interval %d--%d when computing minimum)", i-1, i);
+ VERB(2, "(Ignoring interval %d--%d when computing minimum)", i-1, i);
} else {
Time timespan = snapshots[i].time - snapshots[i-1].time;
tl_assert(timespan >= 0);
@@ -1149,12 +1150,12 @@
// Print remaining snapshots, if necessary.
if (VG_(clo_verbosity) > 1) {
- VERB(1, "Finished culling (%3d of %3d deleted)",
+ VERB(2, "Finished culling (%3d of %3d deleted)",
n_deleted, clo_max_snapshots);
for (i = 0; i < next_snapshot_i; i++) {
- VERB_snapshot(1, " post-cull", i);
+ VERB_snapshot(2, " post-cull", i);
}
- VERB(1, "New time interval = %lld (between snapshots %d and %d)",
+ VERB(2, "New time interval = %lld (between snapshots %d and %d)",
min_timespan, min_timespan_i-1, min_timespan_i);
}
@@ -1314,11 +1315,11 @@
// Finish up verbosity and stats stuff.
if (n_skipped_snapshots_since_last_snapshot > 0) {
- VERB(1, " (skipped %d snapshot%s)",
+ VERB(2, " (skipped %d snapshot%s)",
n_skipped_snapshots_since_last_snapshot,
( n_skipped_snapshots_since_last_snapshot == 1 ? "" : "s") );
}
- VERB_snapshot(1, what, next_snapshot_i);
+ VERB_snapshot(2, what, next_snapshot_i);
n_skipped_snapshots_since_last_snapshot = 0;
// Cull the entries, if our snapshot table is full.
@@ -1412,7 +1413,7 @@
VG_(HT_add_node)(malloc_list, hc);
if (clo_heap) {
- VERB(2, "<<< new_mem_heap (%lu)", szB);
+ VERB(3, "<<< new_mem_heap (%lu)", szB);
// Update statistics.
n_heap_allocs++;
@@ -1428,7 +1429,7 @@
// Maybe take a snapshot.
maybe_take_snapshot(Normal, " alloc");
- VERB(2, ">>>");
+ VERB(3, ">>>");
}
return p;
@@ -1448,7 +1449,7 @@
die_szB = hc->szB;
if (clo_heap) {
- VERB(2, "<<< die_mem_heap");
+ VERB(3, "<<< die_mem_heap");
// Update statistics
n_heap_frees++;
@@ -1465,7 +1466,7 @@
// Maybe take a snapshot.
maybe_take_snapshot(Normal, "dealloc");
- VERB(2, ">>> (-%lu)", die_szB);
+ VERB(3, ">>> (-%lu)", die_szB);
}
// Actually free the chunk, and the heap block (if necessary)
@@ -1491,7 +1492,7 @@
old_szB = hc->szB;
if (clo_heap) {
- VERB(2, "<<< renew_mem_heap (%lu)", new_szB);
+ VERB(3, "<<< renew_mem_heap (%lu)", new_szB);
// Update statistics
n_heap_reallocs++;
@@ -1546,7 +1547,7 @@
if (clo_heap) {
maybe_take_snapshot(Normal, "realloc");
- VERB(2, ">>> (%ld)", new_szB - old_szB);
+ VERB(3, ">>> (%ld)", new_szB - old_szB);
}
return p_new;
@@ -1621,23 +1622,23 @@
static INLINE void new_mem_stack_2(Addr a, SizeT len, Char* what)
{
if (have_started_executing_code) {
- VERB(2, "<<< new_mem_stack (%ld)", len);
+ VERB(3, "<<< new_mem_stack (%ld)", len);
n_stack_allocs++;
update_stack_stats(len);
maybe_take_snapshot(Normal, what);
- VERB(2, ">>>");
+ VERB(3, ">>>");
}
}
static INLINE void die_mem_stack_2(Addr a, SizeT len, Char* what)
{
if (have_started_executing_code) {
- VERB(2, "<<< die_mem_stack (%ld)", -len);
+ VERB(3, "<<< die_mem_stack (%ld)", -len);
n_stack_frees++;
maybe_take_snapshot(Peak, "stkPEAK");
update_stack_stats(-len);
maybe_take_snapshot(Normal, what);
- VERB(2, ">>>");
+ VERB(3, ">>>");
}
}
Modified: branches/MASSIF2/massif/tests/culling1.vgtest
===================================================================
--- branches/MASSIF2/massif/tests/culling1.vgtest 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/tests/culling1.vgtest 2007-10-16 07:38:18 UTC (rev 7002)
@@ -1,4 +1,4 @@
prog: culling1
-vgopts: -v --stacks=no --time-unit=B
+vgopts: -v -v --stacks=no --time-unit=B
stderr_filter: filter_verbose
cleanup: rm massif.out
Modified: branches/MASSIF2/massif/tests/culling2.vgtest
===================================================================
--- branches/MASSIF2/massif/tests/culling2.vgtest 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/tests/culling2.vgtest 2007-10-16 07:38:18 UTC (rev 7002)
@@ -1,4 +1,4 @@
prog: culling2
-vgopts: -v --stacks=no --time-unit=B
+vgopts: -v -v --stacks=no --time-unit=B
stderr_filter: filter_verbose
cleanup: rm massif.out
Modified: branches/MASSIF2/massif/tests/deep-B.vgtest
===================================================================
--- branches/MASSIF2/massif/tests/deep-B.vgtest 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/tests/deep-B.vgtest 2007-10-16 07:38:18 UTC (rev 7002)
@@ -1,5 +1,5 @@
prog: deep
-vgopts: --stacks=no --time-unit=B --alloc-fn=a6 --alloc-fn=a7 --alloc-fn=a8 --alloc-fn=a9 --alloc-fn=a10 --alloc-fn=a11 --alloc-fn=a12 -v
+vgopts: --stacks=no --time-unit=B --alloc-fn=a6 --alloc-fn=a7 --alloc-fn=a8 --alloc-fn=a9 --alloc-fn=a10 --alloc-fn=a11 --alloc-fn=a12 -v -v
stderr_filter: filter_verbose
post: perl ../../massif/ms_print massif.out
cleanup: rm massif.out
Modified: branches/MASSIF2/massif/tests/deep-C.vgtest
===================================================================
--- branches/MASSIF2/massif/tests/deep-C.vgtest 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/tests/deep-C.vgtest 2007-10-16 07:38:18 UTC (rev 7002)
@@ -1,5 +1,5 @@
prog: deep
-vgopts: --stacks=no --time-unit=B --alloc-fn=a3 --alloc-fn=a4 --alloc-fn=a5 --alloc-fn=a6 --alloc-fn=a7 --alloc-fn=a8 --alloc-fn=a9 --alloc-fn=a10 --alloc-fn=a11 --alloc-fn=a12 -v
+vgopts: --stacks=no --time-unit=B --alloc-fn=a3 --alloc-fn=a4 --alloc-fn=a5 --alloc-fn=a6 --alloc-fn=a7 --alloc-fn=a8 --alloc-fn=a9 --alloc-fn=a10 --alloc-fn=a11 --alloc-fn=a12 -v -v
stderr_filter: filter_verbose
post: perl ../../massif/ms_print massif.out
cleanup: rm massif.out
Modified: branches/MASSIF2/massif/tests/realloc.vgtest
===================================================================
--- branches/MASSIF2/massif/tests/realloc.vgtest 2007-10-16 07:25:53 UTC (rev 7001)
+++ branches/MASSIF2/massif/tests/realloc.vgtest 2007-10-16 07:38:18 UTC (rev 7002)
@@ -1,5 +1,5 @@
prog: realloc
-vgopts: -v --stacks=no --heap-admin=0 --time-unit=B --threshold=0
+vgopts: -v -v --stacks=no --heap-admin=0 --time-unit=B --threshold=0
stderr_filter: filter_verbose
post: perl ../../massif/ms_print --threshold=0 massif.out
cleanup: rm massif.out
From: <sv...@va...> - 2007-10-16 07:25:53
Author: njn
Date: 2007-10-16 08:25:53 +0100 (Tue, 16 Oct 2007)
New Revision: 7001
Log:
Made many-xpts run for longer.
Modified:
branches/MASSIF2/massif/ms_main.c
branches/MASSIF2/massif/perf/many-xpts.c
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-16 02:48:57 UTC (rev 7000)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 07:25:53 UTC (rev 7001)
@@ -60,6 +60,11 @@
// tinycc 0.49s ma: 7.6s (15.4x, -----)
// many-xpts 0.04s ma: 1.9s (46.2x, -----)
//
+// Made many-xpts run for longer (r7001):
+// heap 0.59s ma:20.3s (34.5x, -----)
+// tinycc 0.49s ma: 7.6s (15.4x, -----)
+// many-xpts 0.13s ma: 2.8s (21.6x, -----)
+//
// Todo:
// - for regtests, need to filter out code addresses in *.post.* files
// - do snapshots on client requests
Modified: branches/MASSIF2/massif/perf/many-xpts.c
===================================================================
--- branches/MASSIF2/massif/perf/many-xpts.c 2007-10-16 02:48:57 UTC (rev 7000)
+++ branches/MASSIF2/massif/perf/many-xpts.c 2007-10-16 07:25:53 UTC (rev 7001)
@@ -2,7 +2,7 @@
#define nth_bit(x, n) ((x >> n) & 1)
#define Fn(N, Np1) \
- void a##N(int x) { if (nth_bit(x, N)) a##Np1(x); else a##Np1(x); }
+ void* a##N(int x) { return ( nth_bit(x, N) ? a##Np1(x) : a##Np1(x) ); }
// This test allocates a lot of heap memory, and every allocation features a
// different stack trace -- the stack traces are effectively a
@@ -10,9 +10,9 @@
// 'i', and if it's a 1 the first function is called, and if it's a 0 the
// second function is called.
-void a999(int x)
+void* a999(int x)
{
- malloc(100);
+ return malloc(100);
}
Fn(17, 999)
@@ -43,8 +43,9 @@
a0(i);
// Do a lot of allocations so it gets dup'd a lot of times.
- for (i = 0; i < 3000; i++) {
- free(malloc(20000));
+ for (i = 0; i < 100000; i++) {
+ free(a1(234));
+ free(a2(111));
}
return 0;
From: <sv...@va...> - 2007-10-16 02:48:55
Author: njn
Date: 2007-10-16 03:48:57 +0100 (Tue, 16 Oct 2007)
New Revision: 7000
Log:
comment
Modified:
branches/MASSIF2/massif/ms_main.c
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-16 02:46:07 UTC (rev 6999)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 02:48:57 UTC (rev 7000)
@@ -61,6 +61,7 @@
// many-xpts 0.04s ma: 1.9s (46.2x, -----)
//
// Todo:
+// - for regtests, need to filter out code addresses in *.post.* files
// - do snapshots on client requests
// - C++ tests -- for each of the allocators, and overloaded versions of
// them (see 'init_alloc_fns').
From: <sv...@va...> - 2007-10-16 02:46:08
Author: njn
Date: 2007-10-16 03:46:07 +0100 (Tue, 16 Oct 2007)
New Revision: 6999
Log:
move a variable
Modified:
branches/MASSIF2/massif/ms_main.c
Modified: branches/MASSIF2/massif/ms_main.c
===================================================================
--- branches/MASSIF2/massif/ms_main.c 2007-10-15 21:58:54 UTC (rev 6998)
+++ branches/MASSIF2/massif/ms_main.c 2007-10-16 02:46:07 UTC (rev 6999)
@@ -260,7 +260,6 @@
static UInt n_detailed_snapshots = 0;
static UInt n_peak_snapshots = 0;
static UInt n_skipped_snapshots = 0;
-static UInt n_skipped_snapshots_since_last_snapshot = 0;
//------------------------------------------------------------//
@@ -1238,7 +1237,8 @@
static Time min_time_interval = 0;
// Zero allows startup snapshot.
static Time earliest_possible_time_of_next_snapshot = 0;
- static Int n_snapshots_since_last_detailed = 0;
+ static Int n_snapshots_since_last_detailed = 0;
+ static Int n_skipped_snapshots_since_last_snapshot = 0;
Snapshot* snapshot;
Bool is_detailed;
From: Tom H. <th...@cy...> - 2007-10-16 02:31:36
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-10-16 03:15:01 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 260 tests, 27 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-10-16 02:23:39
Nightly build on dellow ( x86_64, Fedora 7 ) started at 2007-10-16 03:10:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 293 tests, 4 stderr failures, 3 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_detached (stdout)
From: Tom H. <th...@cy...> - 2007-10-16 02:23:38
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2007-10-16 03:05:06 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 293 tests, 4 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-10-16 02:15:44
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-10-16 03:00:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 295 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: <js...@ac...> - 2007-10-16 00:09:01
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-10-16 02:00:01 CEST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 228 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... failed
Last 20 lines of verbose log follow
echo Checking out valgrind source tree ...
svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-10-15T02:00:01} valgrind
svn: Unknown hostname 'svn.valgrind.org'
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Tue Oct 16 02:00:47 2007
--- new.short Tue Oct 16 02:09:02 2007
***************
*** 1,7 ****
! Checking out valgrind source tree ... failed
! Last 20 lines of verbose log follow
  echo Checking out valgrind source tree ...
- svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-10-15T02:00:01} valgrind
- svn: Unknown hostname 'svn.valgrind.org'
--- 1,18 ----
! Checking out valgrind source tree ... done
! Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
! Regression test results follow
!
! == 228 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
! memcheck/tests/deep_templates (stdout)
! memcheck/tests/leak-cycle (stderr)
! memcheck/tests/leak-tree (stderr)
! memcheck/tests/pointer-trace (stderr)
! none/tests/faultstatus (stderr)
! none/tests/fdleak_cmsg (stderr)
! none/tests/mremap (stderr)
! none/tests/mremap2 (stdout)