From: Tom H. <th...@cy...> - 2005-02-25 03:12:25

Nightly build on ginetta (Red Hat 8.0) started at 2005-02-25 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow:

make[4]: Entering directory `/tmp/valgrind.29762/valgrind/cachegrind/tests'
source='chdir.c' object='chdir.o' libtool=no \
  depfile='.deps/chdir.Po' tmpdepfile='.deps/chdir.TPo' \
  depmode=gcc3 /bin/sh ../../depcomp \
  gcc -DHAVE_CONFIG_H -I. -I. -I../.. -Winline -Wall -Wshadow -g -c `test -f 'chdir.c' || echo './'`chdir.c
gcc -Winline -Wall -Wshadow -g -o chdir chdir.o
source='dlclose.c' object='dlclose.o' libtool=no \
  depfile='.deps/dlclose.Po' tmpdepfile='.deps/dlclose.TPo' \
  depmode=gcc3 /bin/sh ../../depcomp \
  gcc -DHAVE_CONFIG_H -I. -I. -I../.. -Winline -Wall -Wshadow -g -c `test -f 'dlclose.c' || echo './'`dlclose.c
cc1: No space left on device: error writing to /tmp/ccit44vK.s
make[4]: *** [dlclose.o] Error 1
make[4]: Leaving directory `/tmp/valgrind.29762/valgrind/cachegrind/tests'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/tmp/valgrind.29762/valgrind/cachegrind/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.29762/valgrind/cachegrind/tests'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.29762/valgrind/cachegrind'
make: *** [check-recursive] Error 1

From: Nicholas N. <nj...@cs...> - 2005-02-25 03:10:18

On Thu, 24 Feb 2005, Jeremy Fitzhardinge wrote:
> ==18124== 128+4014 bytes in 1 blocks are definitely lost in loss record 461 of 483

It's good that the new reported sizes are more accurate, but I predict
one thousand users will immediately scream "what does 128+4014 mean?"
It's not at all obvious.  Is there any simpler/more intuitive way to
summarise, perhaps by changing the wording of the sentence?

N

From: Nicholas N. <nj...@cs...> - 2005-02-25 03:06:48

On Thu, 24 Feb 2005, Julian Seward wrote:
>> > - In general, on a 32-bit machine, because memory is allocated
>> >   in 64M superblocks to either shadow, client or V-internal, we get
>> >   rid of all problems associated with the current hard partitioning
>> >   scheme between client and shadow memory.  Big-bang allocation is
>> >   done away with.  We know we can still protect V from wild writes
>> >   by the client at fairly minimal expense.
>> >
>> What problems are there, and how are they avoided by this scheme?
>
> * big-bang shadow allocation causes problems on kernels that don't
>   do overcommitment
>
> * a fixed partitioning scheme is less appropriate if we move towards
>   compressed representations of shadow memory, since that compression
>   ratio could be variable

Plus all the other problems mentioned in bug #82301, which we've talked
about at length multiple times in the past.

N

From: Jeremy F. <je...@go...> - 2005-02-25 01:56:26
CVS commit by fitzhardinge:
Add test cases for the leak checker, and update existing tests.
A addrcheck/tests/leak-0.stderr.exp 1.1
A addrcheck/tests/leak-0.vgtest 1.1
A addrcheck/tests/leak-cycle.stderr.exp 1.1
A addrcheck/tests/leak-cycle.vgtest 1.1
A addrcheck/tests/leak-regroot.stderr.exp 1.1
A addrcheck/tests/leak-regroot.vgtest 1.1
A addrcheck/tests/leak-tree.stderr.exp 1.1
A addrcheck/tests/leak-tree.vgtest 1.1
A memcheck/tests/leak-0.c 1.1 [no copyright]
A memcheck/tests/leak-0.stderr.exp 1.1
A memcheck/tests/leak-0.vgtest 1.1
A memcheck/tests/leak-cycle.c 1.1 [no copyright]
A memcheck/tests/leak-cycle.stderr.exp 1.1
A memcheck/tests/leak-cycle.vgtest 1.1
A memcheck/tests/leak-regroot.c 1.1 [no copyright]
A memcheck/tests/leak-regroot.stderr.exp 1.1
A memcheck/tests/leak-regroot.vgtest 1.1
A memcheck/tests/leak-tree.c 1.1 [no copyright]
A memcheck/tests/leak-tree.stderr.exp 1.1
A memcheck/tests/leak-tree.vgtest 1.1
M +9 -0 memcheck/tests/Makefile.am 1.68
M +1 -1 memcheck/tests/mempool.stderr.exp 1.2
M +2 -1 memcheck/tests/pointer-trace.c 1.2
M +5 -3 memcheck/tests/pointer-trace.stderr.exp 1.3
--- valgrind/memcheck/tests/Makefile.am #1.67:1.68
@@ -33,4 +33,8 @@
inits.stderr.exp inits.vgtest \
inline.stderr.exp inline.stdout.exp inline.vgtest \
+ leak-0.vgtest leak-0.stderr.exp \
+ leak-cycle.vgtest leak-cycle.stderr.exp \
+ leak-tree.vgtest leak-tree.stderr.exp \
+ leak-regroot.vgtest leak-regroot.stderr.exp \
malloc1.stderr.exp malloc1.vgtest \
malloc2.stderr.exp malloc2.vgtest \
@@ -87,4 +91,5 @@
doublefree error_counts errs1 exitprog execve execve2 \
fprw fwrite hello inits inline \
+ leak-0 leak-cycle leak-tree leak-regroot \
malloc1 malloc2 malloc3 manuel1 manuel2 manuel3 \
memalign_test memalign2 memcmptest mempool mmaptest \
@@ -131,4 +136,8 @@
inits_SOURCES = inits.c
inline_SOURCES = inline.c
+leak_0_SOURCES = leak-0.c
+leak_cycle_SOURCES = leak-cycle.c
+leak_tree_SOURCES = leak-tree.c
+leak_regroot_SOURCES = leak-regroot.c
malloc1_SOURCES = malloc1.c
malloc2_SOURCES = malloc2.c
--- valgrind/memcheck/tests/pointer-trace.c #1.1:1.2
@@ -39,7 +39,8 @@ int main()
unlink("./pointer-trace-test-file");
- map = mmap(0, stepsize * 2, PROT_READ, MAP_PRIVATE, fd, 0);
+ map = mmap(0, stepsize * 2, PROT_WRITE|PROT_READ, MAP_PRIVATE, fd, 0);
if (map == (char *)-1)
perror("trap 3 failed");
+ //printf("trap 3 = %p-%p\n", map, map+stepsize*2);
map = mmap(0, 256*1024, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE|MAP_ANONYMOUS, -1, 0);
--- valgrind/memcheck/tests/mempool.stderr.exp #1.1:1.2
@@ -32,5 +32,5 @@
-20 bytes in 1 blocks are definitely lost in loss record 2 of 3
+100028 (20+100008) bytes in 1 blocks are definitely lost in loss record 2 of 3
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: make_pool (mempool.c:37)
--- valgrind/memcheck/tests/pointer-trace.stderr.exp #1.2:1.3
@@ -5,4 +5,5 @@
LEAK SUMMARY:
definitely lost: 0 bytes in 0 blocks.
+ indirectly lost: 0 bytes in 0 blocks.
possibly lost: 0 bytes in 0 blocks.
still reachable: 1048576 bytes in 1 blocks.
@@ -18,11 +19,12 @@
checked ... bytes.
-1048576 bytes in 1 blocks are possibly lost in loss record 1 of 1
+1048576 bytes in 1 blocks are definitely lost in loss record 1 of 1
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: main (pointer-trace.c:24)
LEAK SUMMARY:
- definitely lost: 0 bytes in 0 blocks.
- possibly lost: 1048576 bytes in 1 blocks.
+ definitely lost: 1048576 bytes in 1 blocks.
+ indirectly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
From: Jeremy F. <je...@go...> - 2005-02-25 01:54:13
CVS commit by fitzhardinge:
Group leaked blocks, so that only the "head" of linked blocks is reported.
For each of those, it also reports the amount of indirect leaking, so
you can tell what the total amount of leaked memory due to a particular
allocation site is. --show-reachable=yes will show both reachable allocated
blocks and indirectly leaked blocks.
M +109 -48 mac_leakcheck.c 1.20
--- valgrind/memcheck/mac_leakcheck.c #1.19:1.20
@@ -35,5 +35,6 @@
/* Define to debug the memory-leak-detector. */
-/* #define VG_DEBUG_LEAKCHECK */
+#define VG_DEBUG_LEAKCHECK 0
+#define VG_DEBUG_CLIQUE 0
#define ROUNDDN(p, a) ((Addr)(p) & ~((a)-1))
@@ -104,5 +105,5 @@ typedef
has been sorted on the ->data field. */
-#ifdef VG_DEBUG_LEAKCHECK
+#if VG_DEBUG_LEAKCHECK
/* Used to sanity-check the fast binary-search mechanism. */
static
@@ -159,5 +160,5 @@ Int find_shadow_for ( Addr ptr,
}
-# ifdef VG_DEBUG_LEAKCHECK
+# if VG_DEBUG_LEAKCHECK
sk_assert(retVal == find_shadow_for_OLD ( ptr, shadows, n_shadows ));
# endif
@@ -173,16 +174,14 @@ static Int lc_markstack_top;
static Addr lc_min_mallocd_addr;
static Addr lc_max_mallocd_addr;
+static SizeT lc_scanned;
static Bool (*lc_is_valid_chunk) (UInt chunk);
static Bool (*lc_is_valid_address)(Addr addr);
-/* Used for printing leak errors, avoids exposing the LossRecord type (which
- comes in as void*, requiring a cast. */
-void MAC_(pp_LeakError)(void* vl, UInt n_this_record, UInt n_total_records)
+static const Char *pp_lossmode(Reachedness lossmode)
{
- LossRecord* l = (LossRecord*)vl;
const Char *loss = "?";
- switch(l->loss_mode) {
+ switch(lossmode) {
case Unreached: loss = "definitely lost"; break;
case IndirectLeak: loss = "indirectly lost"; break;
@@ -191,9 +190,27 @@ void MAC_(pp_LeakError)(void* vl, UInt n
}
+ return loss;
+}
+
+/* Used for printing leak errors, avoids exposing the LossRecord type (which
+ comes in as void*, requiring a cast. */
+void MAC_(pp_LeakError)(void* vl, UInt n_this_record, UInt n_total_records)
+{
+ LossRecord* l = (LossRecord*)vl;
+ const Char *loss = pp_lossmode(l->loss_mode);
+
VG_(message)(Vg_UserMsg, "");
+ if (l->indirect_bytes) {
VG_(message)(Vg_UserMsg,
- "%d+%d bytes in %d blocks are %s in loss record %d of %d",
+ "%d (%d+%d) bytes in %d blocks are %s in loss record %d of %d",
+ l->total_bytes + l->indirect_bytes,
l->total_bytes, l->indirect_bytes, l->num_blocks,
loss, n_this_record, n_total_records);
+ } else {
+ VG_(message)(Vg_UserMsg,
+ "%d bytes in %d blocks are %s in loss record %d of %d",
+ l->total_bytes, l->num_blocks,
+ loss, n_this_record, n_total_records);
+ }
VG_(pp_ExeContext)(l->allocated_at);
}
@@ -213,8 +230,17 @@ static Int lc_compar(void* n1, void* n2)
/* If ptr is pointing to a heap-allocated block which hasn't been seen
- before, push it onto the mark stack. */
-static void _lc_markstack_push(Addr ptr, Bool mopup)
+ before, push it onto the mark stack. Clique is the index of the
+ clique leader; -1 if none. */
+static void _lc_markstack_push(Addr ptr, Int clique)
{
- Int sh_no = find_shadow_for(ptr, lc_shadows, lc_n_shadows);
+ Int sh_no;
+
+ if (!VG_(is_client_addr)(ptr)) /* quick filter */
+ return;
+
+ sh_no = find_shadow_for(ptr, lc_shadows, lc_n_shadows);
+
+ if (VG_DEBUG_LEAKCHECK)
+ VG_(printf)("ptr=%p -> block %d\n", ptr, sh_no);
if (sh_no == -1)
@@ -234,11 +260,35 @@ static void _lc_markstack_push(Addr ptr,
}
- if (mopup) {
+ if (clique != -1) {
if (0)
VG_(printf)("mopup: %d: %p is %d\n",
sh_no, lc_shadows[sh_no]->data, lc_markstack[sh_no].state);
- if (lc_markstack[sh_no].state == Unreached)
+ /* An unmarked block - add it to the clique. Add its size to
+ the clique-leader's indirect size. If the new block was
+ itself a clique leader, it isn't any more, so add its
+ indirect to the new clique leader.
+
+ If this block *is* the clique leader, it means this is a
+ cyclic structure, so none of this applies. */
+ if (lc_markstack[sh_no].state == Unreached) {
lc_markstack[sh_no].state = IndirectLeak;
+
+ if (sh_no != clique) {
+ if (VG_DEBUG_CLIQUE) {
+ if (lc_markstack[sh_no].indirect)
+ VG_(printf)(" clique %d joining clique %d adding %d+%d bytes\n",
+ sh_no, clique,
+ lc_shadows[sh_no]->size, lc_markstack[sh_no].indirect);
+ else
+ VG_(printf)(" %d joining %d adding %d\n",
+ sh_no, clique, lc_shadows[sh_no]->size);
+ }
+
+ lc_markstack[clique].indirect += lc_shadows[sh_no]->size;
+ lc_markstack[clique].indirect += lc_markstack[sh_no].indirect;
+ lc_markstack[sh_no].indirect = 0; /* shouldn't matter */
+ }
+ }
} else if (ptr == lc_shadows[sh_no]->data) {
lc_markstack[sh_no].state = Proper;
@@ -251,5 +301,5 @@ static void _lc_markstack_push(Addr ptr,
static void lc_markstack_push(Addr ptr)
{
- _lc_markstack_push(ptr, False);
+ _lc_markstack_push(ptr, -1);
}
@@ -268,6 +318,9 @@ static Int lc_markstack_pop(void)
/* Scan a block of memory between [start, start+len). This range may
- be bogus, inaccessable, or otherwise strange; we deal with it. */
-static void _lc_scan_memory(Addr start, SizeT len, Bool mopup)
+ be bogus, inaccessable, or otherwise strange; we deal with it.
+
+ If clique != -1, it means we're gathering leaked memory into
+ cliques, and clique is the index of the current clique leader. */
+static void _lc_scan_memory(Addr start, SizeT len, Int clique)
{
Addr ptr = ROUNDUP(start, sizeof(Addr));
@@ -275,7 +328,11 @@ static void _lc_scan_memory(Addr start,
vki_sigset_t sigmask;
+ if (VG_DEBUG_LEAKCHECK)
+ VG_(printf)("scan %p-%p\n", start, len);
VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &sigmask);
VG_(set_fault_catcher)(vg_scan_all_valid_memory_catcher);
+ lc_scanned += end-ptr;
+
if (!VG_(is_client_addr)(ptr) ||
!VG_(is_addressable)(ptr, sizeof(Addr), VKI_PROT_READ))
@@ -301,6 +358,7 @@ static void _lc_scan_memory(Addr start,
if ((*lc_is_valid_address)(ptr)) {
addr = *(Addr *)ptr;
- _lc_markstack_push(addr, mopup);
- }
+ _lc_markstack_push(addr, clique);
+ } else if (0 && VG_DEBUG_LEAKCHECK)
+ VG_(printf)("%p not valid\n", ptr);
ptr += sizeof(Addr);
} else {
@@ -309,5 +367,5 @@ static void _lc_scan_memory(Addr start,
VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
- ptr = PGROUNDUP(ptr); /* bad page - skip it */
+ ptr = PGROUNDUP(ptr+1); /* bad page - skip it */
}
}
@@ -319,5 +377,5 @@ static void _lc_scan_memory(Addr start,
static void lc_scan_memory(Addr start, SizeT len)
{
- _lc_scan_memory(start, len, False);
+ _lc_scan_memory(start, len, -1);
}
@@ -325,7 +383,6 @@ static void lc_scan_memory(Addr start, S
actually gathering leaked blocks, so they should be marked
IndirectLeak. */
-static SizeT lc_do_leakcheck(Bool mopup)
+static void lc_do_leakcheck(Int clique)
{
- Int scanned = 0;
Int top;
@@ -334,10 +391,6 @@ static SizeT lc_do_leakcheck(Bool mopup)
sk_assert(lc_markstack[top].state != Unreached);
- scanned += lc_shadows[top]->size;
-
- _lc_scan_memory(lc_shadows[top]->data, lc_shadows[top]->size, mopup);
+ _lc_scan_memory(lc_shadows[top]->data, lc_shadows[top]->size, clique);
}
-
- return scanned;
}
@@ -363,5 +416,4 @@ void MAC_(do_detect_memory_leaks) (
Int blocks_suppressed;
Int n_lossrecords;
- UInt bytes_notified;
Bool is_suppressed;
@@ -416,4 +468,6 @@ void MAC_(do_detect_memory_leaks) (
lc_is_valid_address = is_valid_address;
+ lc_scanned = 0;
+
/* Do the scan of memory, pushing any pointers onto the mark stack */
VG_(find_root_memory)(lc_scan_memory);
@@ -423,14 +477,19 @@ void MAC_(do_detect_memory_leaks) (
/* Keep walking the heap until everything is found */
- bytes_notified = lc_do_leakcheck(False);
+ lc_do_leakcheck(-1);
if (VG_(clo_verbosity) > 0)
- VG_(message)(Vg_UserMsg, "checked %d bytes.", bytes_notified);
+ VG_(message)(Vg_UserMsg, "checked %d bytes.", lc_scanned);
- /* Go through and find the heads of lost data-structures; we don't
- want to report every single lost block individually. */
+ /* Go through and group lost structures into cliques. For each
+ Unreached block, push it onto the mark stack, and find all the
+ blocks linked to it. These are marked IndirectLeak, and their
+ size is added to the clique leader's indirect size. If one of
+ the found blocks was itself a clique leader (from a previous
+ pass), then the cliques are merged. */
for (i = 0; i < lc_n_shadows; i++) {
- SizeT indirect = 0;
-
+ if (VG_DEBUG_CLIQUE)
+ VG_(printf)("cliques: %d at %p -> %s\n",
+ i, lc_shadows[i]->data, pp_lossmode(lc_markstack[i].state));
if (lc_markstack[i].state != Unreached)
continue;
@@ -438,16 +497,17 @@ void MAC_(do_detect_memory_leaks) (
sk_assert(lc_markstack_top == -1);
- if (0)
- VG_(printf)("%d: mopping up from %p\n", i, lc_shadows[i]->data);
+ if (VG_DEBUG_CLIQUE)
+ VG_(printf)("%d: gathering clique %p\n", i, lc_shadows[i]->data);
- _lc_markstack_push(lc_shadows[i]->data, True);
+ _lc_markstack_push(lc_shadows[i]->data, i);
- indirect = lc_do_leakcheck(True);
+ lc_do_leakcheck(i);
sk_assert(lc_markstack_top == -1);
sk_assert(lc_markstack[i].state == IndirectLeak);
- lc_markstack[i].state = Unreached; /* return to unreached state */
- lc_markstack[i].indirect = indirect;
+ lc_markstack[i].state = Unreached; /* Return to unreached state,
+ to indicate its a clique
+ leader */
}
@@ -496,5 +556,5 @@ void MAC_(do_detect_memory_leaks) (
for (p = errlist; p != NULL; p = p->next) {
if (p->num_blocks > 0 && p->total_bytes < n_min) {
- n_min = p->total_bytes;
+ n_min = p->total_bytes + p->indirect_bytes;
p_min = p;
}
@@ -505,8 +565,9 @@ void MAC_(do_detect_memory_leaks) (
we disallow that when --leak-check=yes.
- Prints the error if not suppressed, unless it's reachable (Proper)
+ Prints the error if not suppressed, unless it's reachable (Proper or IndirectLeak)
and --show-reachable=no */
- print_record = ( MAC_(clo_show_reachable) || Proper != p_min->loss_mode );
+ print_record = ( MAC_(clo_show_reachable) ||
+ Unreached == p_min->loss_mode || Interior == p_min->loss_mode );
is_suppressed =
VG_(unique_error) ( VG_(get_VCPU_tid)(), LeakErr, (UInt)i+1,
From: Jeremy F. <je...@go...> - 2005-02-25 01:52:33
CVS commit by fitzhardinge:
Change memcheck/addrcheck's leak detector to use a mark-sweep algorithm.
This allows it to detect more kinds of leaks; in particular, it isn't
confused by cycles, and it can report all leaked memory, rather than
just unreferenced heads of data structures.
M +3 -0 coregrind/core.h 1.90
M +20 -0 coregrind/vg_memory.c 1.91
M +10 -0 coregrind/vg_scheduler.c 1.224
M +15 -0 coregrind/x86/state.c 1.18
M +9 -0 include/tool.h.base 1.24
M +223 -177 memcheck/mac_leakcheck.c 1.19
--- valgrind/include/tool.h.base #1.23:1.24
@@ -516,4 +516,13 @@
extern void VG_(init_shadow_range)(Addr p, UInt sz, Bool call_init);
+/* Calls into the core used by leak-checking */
+
+/* Calls "add_rootrange" with each range of memory which looks like a
+ plausible source of root pointers. */
+extern void VG_(find_root_memory)(void (*add_rootrange)(Addr addr, SizeT sz));
+
+/* Calls "mark_addr" with register values (which may or may not be pointers) */
+extern void VG_(mark_from_registers)(void (*mark_addr)(Addr addr));
+
/* ------------------------------------------------------------------ */
/* signal.h.
--- valgrind/coregrind/x86/state.c #1.17:1.18
@@ -160,4 +160,19 @@ void VGA_(thread_initial_stack)(ThreadId
}
+void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr))
+{
+ ThreadState *tst = VG_(get_ThreadState)(tid);
+ arch_thread_t *arch = &tst->arch;
+
+ /* XXX ask tool about validity? */
+ (*marker)(arch->m_eax);
+ (*marker)(arch->m_ebx);
+ (*marker)(arch->m_ecx);
+ (*marker)(arch->m_edx);
+ (*marker)(arch->m_esi);
+ (*marker)(arch->m_edi);
+ (*marker)(arch->m_esp);
+ (*marker)(arch->m_ebp);
+}
/*------------------------------------------------------------*/
--- valgrind/coregrind/core.h #1.89:1.90
@@ -1777,4 +1777,7 @@ extern void VGA_(fill_elffpxregs_from_ts
const arch_thread_t* arch );
+// Used by leakcheck
+extern void VGA_(mark_from_registers)(ThreadId tid, void (*marker)(Addr));
+
// Signal stuff
extern void VGA_(push_signal_frame) ( ThreadId tid, Addr sp_top_of_frame,
--- valgrind/coregrind/vg_memory.c #1.90:1.91
@@ -834,4 +834,24 @@ void VG_(client_free)(Addr addr)
}
+/* We'll call any RW mmaped memory segment, within the client address
+ range, which isn't SF_CORE, a root. */
+void VG_(find_root_memory)(void (*add_rootrange)(Addr a, SizeT sz))
+{
+ Segment *s;
+
+ for(s = VG_(first_segment)(); s != NULL; s = VG_(next_segment)(s)) {
+ UInt flags = s->flags & (SF_SHARED|SF_MMAP|SF_VALGRIND|SF_CORE|SF_STACK);
+ if (flags != SF_MMAP && flags != SF_STACK)
+ continue;
+ if ((s->prot & (VKI_PROT_READ|VKI_PROT_WRITE)) != (VKI_PROT_READ|VKI_PROT_WRITE))
+ continue;
+ if (!VG_(is_client_addr)(s->addr) ||
+ !VG_(is_client_addr)(s->addr+s->len))
+ continue;
+
+ (*add_rootrange)(s->addr, s->len);
+ }
+}
+
/*--------------------------------------------------------------------*/
/*--- Querying memory layout ---*/
--- valgrind/coregrind/vg_scheduler.c #1.223:1.224
@@ -144,4 +144,14 @@ ThreadId VG_(first_matching_thread_stack
}
+void VG_(mark_from_registers)(void (*mark_addr)(Addr))
+{
+ ThreadId tid;
+
+ for(tid = 1; tid < VG_N_THREADS; tid++) {
+ if (!VG_(is_valid_tid)(tid))
+ continue;
+ VGA_(mark_from_registers)(tid, mark_addr);
+ }
+}
/* Print the scheduler status. */
--- valgrind/memcheck/mac_leakcheck.c #1.18:1.19
@@ -37,4 +37,9 @@
/* #define VG_DEBUG_LEAKCHECK */
+#define ROUNDDN(p, a) ((Addr)(p) & ~((a)-1))
+#define ROUNDUP(p, a) ROUNDDN((p)+(a)-1, (a))
+#define PGROUNDDN(p) ROUNDDN(p, VKI_PAGE_SIZE)
+#define PGROUNDUP(p) ROUNDUP(p, VKI_PAGE_SIZE)
+
/*------------------------------------------------------------*/
/*--- Low-level address-space scanning, for the leak ---*/
@@ -55,104 +60,4 @@ void vg_scan_all_valid_memory_catcher (
}
-/* Safely (avoiding SIGSEGV / SIGBUS) scan the entire valid address
- space and pass the addresses and values of all addressible,
- defined, aligned words to notify_word. This is the basis for the
- leak detector. Returns the number of calls made to notify_word.
-
- Addresses are validated 3 ways. First we enquire whether (addr >>
- 16) denotes a 64k chunk in use, by asking is_valid_64k_chunk(). If
- so, we decide for ourselves whether each x86-level (4 K) page in
- the chunk is safe to inspect. If yes, we enquire with
- is_valid_address() whether or not each of the 1024 word-locations
- on the page is valid. Only if so are that address and its contents
- passed to notify_word.
-
- This is all to avoid duplication of this machinery between
- Memcheck and Addrcheck.
-*/
-static
-UInt vg_scan_all_valid_memory ( Bool is_valid_64k_chunk ( UInt ),
- Bool is_valid_address ( Addr ),
- void (*notify_word)( Addr, UInt ) )
-{
- /* All volatile, because some gccs seem paranoid about longjmp(). */
- volatile Bool anyValid;
- volatile Addr pageBase, addr;
- volatile UInt numPages, page, primaryMapNo;
- volatile UInt page_first_word, nWordsNotified;
- vki_sigset_t sigmask;
-
- VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &sigmask);
- VG_(set_fault_catcher)(vg_scan_all_valid_memory_catcher);
-
- numPages = 1 << (32-VKI_PAGE_SHIFT);
- sk_assert(numPages == 1048576);
- sk_assert(4096 == (1 << VKI_PAGE_SHIFT));
-
- nWordsNotified = 0;
-
- for (page = 0; page < numPages; page++) {
-
- /* Base address of this 4k page. */
- pageBase = page << VKI_PAGE_SHIFT;
-
- /* Skip if this page is in an unused 64k chunk. */
- primaryMapNo = pageBase >> 16;
- if (!is_valid_64k_chunk(primaryMapNo))
- continue;
-
- /* Next, establish whether or not we want to consider any
- locations on this page. We need to do so before actually
- prodding it, because prodding it when in fact it is not
- needed can cause a page fault which under some rare
- circumstances can cause the kernel to extend the stack
- segment all the way down to here, which is seriously bad.
- Hence: */
- anyValid = False;
- for (addr = pageBase; addr < pageBase+VKI_PAGE_SIZE; addr += 4) {
- if (is_valid_address(addr)) {
- anyValid = True;
- break;
- }
- }
-
- if (!anyValid)
- continue; /* nothing interesting here .. move to the next page */
-
- /* Ok, we have to prod cautiously at the page and see if it
- explodes or not. */
- if (VG_(is_addressable)(pageBase, sizeof(UInt), VKI_PROT_READ) &&
- __builtin_setjmp(memscan_jmpbuf) == 0) {
- page_first_word = * (volatile UInt*)pageBase;
- /* we get here if we didn't get a fault */
- /* Scan the page */
- for (addr = pageBase; addr < pageBase+VKI_PAGE_SIZE; addr += sizeof(void *)) {
- if (is_valid_address(addr)) {
- nWordsNotified++;
- notify_word ( addr, *(UInt*)addr );
- }
- }
- } else {
- /* We get here if reading the first word of the page caused a
- fault, which in turn caused the signal handler to longjmp.
- Ignore this page. */
-
- /* We need to restore the signal mask, because we were
- longjmped out of a signal handler. */
- VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
- if (0)
- VG_(printf)(
- "vg_scan_all_valid_memory_sighandler: ignoring page at %p\n",
- (void*)pageBase
- );
- }
- }
-
- VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
- VG_(set_fault_catcher)(NULL);
-
- return nWordsNotified;
-}
-
/*------------------------------------------------------------*/
/*--- Detecting leaked (unreachable) malloc'd blocks. ---*/
@@ -163,8 +68,19 @@ UInt vg_scan_all_valid_memory ( Bool is_
-- Interior-ly reached; only an interior pointer to it has been found
-- Unreached; so far, no pointers to any part of it have been found.
+ -- IndirectLeak; leaked, but referred to by another leaked block
*/
-typedef
- enum { Unreached, Interior, Proper }
- Reachedness;
+typedef enum {
+ Unreached,
+ IndirectLeak,
+ Interior,
+ Proper
+ } Reachedness;
+
+/* An entry in the mark stack */
+typedef struct {
+ Int next:30; /* Index of next in mark stack */
+ UInt state:2; /* Reachedness */
+ SizeT indirect; /* if Unreached, how much is unreachable from here */
+} MarkStack;
/* A block record, used for generating err msgs. */
@@ -178,4 +94,5 @@ typedef
/* Number of blocks and total # bytes involved. */
UInt total_bytes;
+ UInt indirect_bytes;
UInt num_blocks;
}
@@ -252,57 +169,11 @@ Int find_shadow_for ( Addr ptr,
static MAC_Chunk** lc_shadows;
static Int lc_n_shadows;
-static Reachedness* lc_reachedness;
+static MarkStack* lc_markstack;
+static Int lc_markstack_top;
static Addr lc_min_mallocd_addr;
static Addr lc_max_mallocd_addr;
-static
-void vg_detect_memory_leaks_notify_addr ( Addr a, UInt word_at_a )
-{
- Int sh_no;
- Addr ptr;
-
- /* Rule out some known causes of bogus pointers. Mostly these do
- not cause much trouble because only a few false pointers can
- ever lurk in these places. This mainly stops it reporting that
- blocks are still reachable in stupid test programs like this
-
- int main (void) { char* a = malloc(100); return 0; }
-
- which people seem inordinately fond of writing, for some reason.
-
- Note that this is a complete kludge. It would be better to
- ignore any addresses corresponding to valgrind.so's .bss and
- .data segments, but I cannot think of a reliable way to identify
- where the .bss segment has been put. If you can, drop me a
- line.
- */
- if (!VG_(is_client_addr)(a))
- return;
-
- /* OK, let's get on and do something Useful for a change. */
-
- ptr = (Addr)word_at_a;
- if (0)
- VG_(printf)("notify %p -> %x; min=%p max=%p\n", a, word_at_a,
- lc_min_mallocd_addr, lc_max_mallocd_addr);
- if (ptr >= lc_min_mallocd_addr && ptr <= lc_max_mallocd_addr) {
- /* Might be legitimate; we'll have to investigate further. */
- sh_no = find_shadow_for ( ptr, lc_shadows, lc_n_shadows );
- if (0) VG_(printf)("find_shadow_for(%p) -> %d\n", ptr, sh_no);
- if (sh_no != -1) {
- /* Found a block at/into which ptr points. */
- sk_assert(sh_no >= 0 && sh_no < lc_n_shadows);
- sk_assert(ptr <= lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
- /* Decide whether Proper-ly or Interior-ly reached. */
- if (ptr == lc_shadows[sh_no]->data) {
- if (0) VG_(printf)("pointer at %p to %p\n", a, word_at_a );
- lc_reachedness[sh_no] = Proper;
- } else {
- if (lc_reachedness[sh_no] == Unreached)
- lc_reachedness[sh_no] = Interior;
- }
- }
- }
-}
+static Bool (*lc_is_valid_chunk) (UInt chunk);
+static Bool (*lc_is_valid_address)(Addr addr);
/* Used for printing leak errors, avoids exposing the LossRecord type (which
@@ -311,18 +182,23 @@ void MAC_(pp_LeakError)(void* vl, UInt n
{
LossRecord* l = (LossRecord*)vl;
+ const Char *loss = "?";
+
+ switch(l->loss_mode) {
+ case Unreached: loss = "definitely lost"; break;
+ case IndirectLeak: loss = "indirectly lost"; break;
+ case Interior: loss = "possibly lost"; break;
+ case Proper: loss = "still reachable"; break;
+ }
VG_(message)(Vg_UserMsg, "");
VG_(message)(Vg_UserMsg,
- "%d bytes in %d blocks are %s in loss record %d of %d",
- l->total_bytes, l->num_blocks,
- l->loss_mode==Unreached ? "definitely lost"
- : (l->loss_mode==Interior ? "possibly lost"
- : "still reachable"),
- n_this_record, n_total_records
- );
+ "%d+%d bytes in %d blocks are %s in loss record %d of %d",
+ l->total_bytes, l->indirect_bytes, l->num_blocks,
+ loss, n_this_record, n_total_records);
VG_(pp_ExeContext)(l->allocated_at);
}
Int MAC_(bytes_leaked) = 0;
+Int MAC_(bytes_indirect) = 0;
Int MAC_(bytes_dubious) = 0;
Int MAC_(bytes_reachable) = 0;
@@ -336,4 +212,134 @@ static Int lc_compar(void* n1, void* n2)
}
+/* If ptr is pointing to a heap-allocated block which hasn't been seen
+ before, push it onto the mark stack. */
+static void _lc_markstack_push(Addr ptr, Bool mopup)
+{
+ Int sh_no = find_shadow_for(ptr, lc_shadows, lc_n_shadows);
+
+ if (sh_no == -1)
+ return;
+
+ sk_assert(sh_no >= 0 && sh_no < lc_n_shadows);
+ sk_assert(ptr <= lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
+
+ if (lc_markstack[sh_no].state == Unreached) {
+ if (0)
+ VG_(printf)("pushing %p-%p\n", lc_shadows[sh_no]->data,
+ lc_shadows[sh_no]->data + lc_shadows[sh_no]->size);
+
+ sk_assert(lc_markstack[sh_no].next == -1);
+ lc_markstack[sh_no].next = lc_markstack_top;
+ lc_markstack_top = sh_no;
+ }
+
+ if (mopup) {
+ if (0)
+ VG_(printf)("mopup: %d: %p is %d\n",
+ sh_no, lc_shadows[sh_no]->data, lc_markstack[sh_no].state);
+
+ if (lc_markstack[sh_no].state == Unreached)
+ lc_markstack[sh_no].state = IndirectLeak;
+ } else if (ptr == lc_shadows[sh_no]->data) {
+ lc_markstack[sh_no].state = Proper;
+ } else {
+ if (lc_markstack[sh_no].state == Unreached)
+ lc_markstack[sh_no].state = Interior;
+ }
+}
+
+static void lc_markstack_push(Addr ptr)
+{
+ _lc_markstack_push(ptr, False);
+}
+
+/* Pop and return the index on top of the mark stack, or -1 if empty. */
+static Int lc_markstack_pop(void)
+{
+ Int ret = lc_markstack_top;
+
+ if (ret != -1) {
+ lc_markstack_top = lc_markstack[ret].next;
+ lc_markstack[ret].next = -1;
+ }
+
+ return ret;
+}
+
+/* Scan a block of memory between [start, start+len). This range may
+ be bogus, inaccessible, or otherwise strange; we deal with it. */
+static void _lc_scan_memory(Addr start, SizeT len, Bool mopup)
+{
+ Addr ptr = ROUNDUP(start, sizeof(Addr));
+ Addr end = ROUNDDN(start+len, sizeof(Addr));
+ vki_sigset_t sigmask;
+
+ VG_(sigprocmask)(VKI_SIG_SETMASK, NULL, &sigmask);
+ VG_(set_fault_catcher)(vg_scan_all_valid_memory_catcher);
+
+ if (!VG_(is_client_addr)(ptr) ||
+ !VG_(is_addressable)(ptr, sizeof(Addr), VKI_PROT_READ))
+ ptr = PGROUNDUP(ptr+1); /* first page bad */
+
+ while(ptr < end) {
+ Addr addr;
+
+ /* Skip invalid chunks */
+ if (!(*lc_is_valid_chunk)(PM_IDX(ptr))) {
+ ptr = ROUNDUP(ptr+1, SECONDARY_SIZE);
+ continue;
+ }
+
+ /* Look to see if this page seems reasonable */
+ if ((ptr % VKI_PAGE_SIZE) == 0) {
+ if (!VG_(is_client_addr)(ptr) ||
+ !VG_(is_addressable)(ptr, sizeof(Addr), VKI_PROT_READ))
+ ptr += VKI_PAGE_SIZE; /* bad page - skip it */
+ }
+
+ if (__builtin_setjmp(memscan_jmpbuf) == 0) {
+ if ((*lc_is_valid_address)(ptr)) {
+ addr = *(Addr *)ptr;
+ _lc_markstack_push(addr, mopup);
+ }
+ ptr += sizeof(Addr);
+ } else {
+ /* We need to restore the signal mask, because we were
+ longjmped out of a signal handler. */
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
+
+ ptr = PGROUNDUP(ptr); /* bad page - skip it */
+ }
+ }
+
+ VG_(sigprocmask)(VKI_SIG_SETMASK, &sigmask, NULL);
+ VG_(set_fault_catcher)(NULL);
+}
+
+static void lc_scan_memory(Addr start, SizeT len)
+{
+ _lc_scan_memory(start, len, False);
+}
+
+/* Process the mark stack until empty. If mopup is true, then we're
+ actually gathering leaked blocks, so they should be marked
+ IndirectLeak. */
+static SizeT lc_do_leakcheck(Bool mopup)
+{
+ Int scanned = 0;
+ Int top;
+
+ while((top = lc_markstack_pop()) != -1) {
+ sk_assert(top >= 0 && top < lc_n_shadows);
+ sk_assert(lc_markstack[top].state != Unreached);
+
+ scanned += lc_shadows[top]->size;
+
+ _lc_scan_memory(lc_shadows[top]->data, lc_shadows[top]->size, mopup);
+ }
+
+ return scanned;
+}
+
/* Top level entry point to leak detector. Call here, passing in
suitable address-validating functions (see comment at top of
@@ -352,4 +358,5 @@ void MAC_(do_detect_memory_leaks) (
Int i;
Int blocks_leaked;
+ Int blocks_indirect;
Int blocks_dubious;
Int blocks_reachable;
@@ -398,29 +405,59 @@ void MAC_(do_detect_memory_leaks) (
+ lc_shadows[lc_n_shadows-1]->size;
- lc_reachedness = VG_(malloc)( lc_n_shadows * sizeof(Reachedness) );
- for (i = 0; i < lc_n_shadows; i++)
- lc_reachedness[i] = Unreached;
+ lc_markstack = VG_(malloc)( lc_n_shadows * sizeof(*lc_markstack) );
+ for (i = 0; i < lc_n_shadows; i++) {
+ lc_markstack[i].next = -1;
+ lc_markstack[i].state = Unreached;
+ lc_markstack[i].indirect = 0;
+ }
+ lc_markstack_top = -1;
- /* Do the scan of memory. */
- bytes_notified
- = sizeof(UWord)
- * vg_scan_all_valid_memory (
- is_valid_64k_chunk,
- is_valid_address,
- &vg_detect_memory_leaks_notify_addr
- );
+ lc_is_valid_chunk = is_valid_64k_chunk;
+ lc_is_valid_address = is_valid_address;
+
+ /* Do the scan of memory, pushing any pointers onto the mark stack */
+ VG_(find_root_memory)(lc_scan_memory);
+
+ /* Push registers onto mark stack */
+ VG_(mark_from_registers)(lc_markstack_push);
+
+ /* Keep walking the heap until everything is found */
+ bytes_notified = lc_do_leakcheck(False);
if (VG_(clo_verbosity) > 0)
VG_(message)(Vg_UserMsg, "checked %d bytes.", bytes_notified);
+ /* Go through and find the heads of lost data-structures; we don't
+ want to report every single lost block individually. */
+ for (i = 0; i < lc_n_shadows; i++) {
+ SizeT indirect = 0;
+
+ if (lc_markstack[i].state != Unreached)
+ continue;
+
+ sk_assert(lc_markstack_top == -1);
+
+ if (0)
+ VG_(printf)("%d: mopping up from %p\n", i, lc_shadows[i]->data);
+
+ _lc_markstack_push(lc_shadows[i]->data, True);
+
+ indirect = lc_do_leakcheck(True);
+
+ sk_assert(lc_markstack_top == -1);
+ sk_assert(lc_markstack[i].state == IndirectLeak);
+
+ lc_markstack[i].state = Unreached; /* return to unreached state */
+ lc_markstack[i].indirect = indirect;
+ }
+
/* Common up the lost blocks so we can print sensible error messages. */
n_lossrecords = 0;
errlist = NULL;
for (i = 0; i < lc_n_shadows; i++) {
-
ExeContext* where = lc_shadows[i]->where;
for (p = errlist; p != NULL; p = p->next) {
- if (p->loss_mode == lc_reachedness[i]
+ if (p->loss_mode == lc_markstack[i].state
&& VG_(eq_ExeContext) ( MAC_(clo_leak_resolution),
p->allocated_at,
@@ -432,10 +469,12 @@ void MAC_(do_detect_memory_leaks) (
p->num_blocks ++;
p->total_bytes += lc_shadows[i]->size;
+ p->indirect_bytes += lc_markstack[i].indirect;
} else {
n_lossrecords ++;
p = VG_(malloc)(sizeof(LossRecord));
- p->loss_mode = lc_reachedness[i];
+ p->loss_mode = lc_markstack[i].state;
p->allocated_at = where;
p->total_bytes = lc_shadows[i]->size;
+ p->indirect_bytes = lc_markstack[i].indirect;
p->num_blocks = 1;
p->next = errlist;
@@ -446,4 +485,5 @@ void MAC_(do_detect_memory_leaks) (
/* Print out the commoned-up blocks and collect summary stats. */
blocks_leaked = MAC_(bytes_leaked) = 0;
+ blocks_indirect = MAC_(bytes_indirect) = 0;
blocks_dubious = MAC_(bytes_dubious) = 0;
blocks_reachable = MAC_(bytes_reachable) = 0;
@@ -483,4 +523,8 @@ void MAC_(do_detect_memory_leaks) (
MAC_(bytes_leaked) += p_min->total_bytes;
+ } else if (IndirectLeak == p_min->loss_mode) {
+ blocks_indirect += p_min->num_blocks;
+ MAC_(bytes_indirect)+= p_min->total_bytes;
+
} else if (Interior == p_min->loss_mode) {
blocks_dubious += p_min->num_blocks;
@@ -502,4 +546,6 @@ void MAC_(do_detect_memory_leaks) (
VG_(message)(Vg_UserMsg, " definitely lost: %d bytes in %d blocks.",
MAC_(bytes_leaked), blocks_leaked );
+ VG_(message)(Vg_UserMsg, " indirectly lost: %d bytes in %d blocks.",
+ MAC_(bytes_indirect), blocks_indirect );
VG_(message)(Vg_UserMsg, " possibly lost: %d bytes in %d blocks.",
MAC_(bytes_dubious), blocks_dubious );
@@ -517,5 +563,5 @@ void MAC_(do_detect_memory_leaks) (
VG_(free) ( lc_shadows );
- VG_(free) ( lc_reachedness );
+ VG_(free) ( lc_markstack );
}
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-25 01:35:08
|
Hi,

I just tried compiling CVS HEAD with a GCC 4.0 pre-release snapshot. It's really sensitive about signedness in a way it hasn't been before. For example, we have this typedef in include/basic_types.h:

typedef signed char Char;

It gives gadzillions of warnings about functions that we declare to take a "Char*" parameter, and to which we pass a string literal. The problem seems to be the use of "signed" -- GCC is saying that 'signed char' is not the same as 'char', thank you very much. If I change it to this:

typedef char Char;

those warnings go away. But there are still a whole lot more; some arise from places where we mix up signed and unsigned types, which are a bit more reasonable.

Any thoughts about this? The "char != signed char" idea seems crazy, but maybe it is worth cleaning up our intermixing of signed and unsigned types a bit? Or perhaps we could use -no-pedantic-signed-warnings, or whatever the option is, if there is one?

Once GCC 4.0.0 is released, we'll have to deal with this, because I'm sure people will freak out at the huge numbers of warnings we get with it.

N