From: Ivo R. <iv...@iv...> - 2016-09-26 18:53:11

2016-09-26 15:19 GMT+02:00 Ruurd Beerstra <Ruu...@in...>:
> Hi,
>
> First: I'm glad to see my patch was accepted, thanks to all involved.
>
> 2nd: Yes, the test will fail on 32-bit systems. Those beasts have gone
> extinct here some years ago, sorry.
>

Hi Ruurd,

Philippe fixed this in SVN r15985. Now the question is (as described in bug #367995): would you like to provide an implementation of VG_(HT_remove_at_Iter)() as suggested by Philippe?

Once VG_(HT_remove_at_Iter)() is available, we can easily adopt it all over memcheck and also in the meta mempool functionality you provided.

If yes, please create a new bug for tracking it.

Thanks,
I.

From: Ruurd B. <Ruu...@in...> - 2016-09-26 13:34:10

Hi,

First: I'm glad to see my patch was accepted, thanks to all involved.

2nd: Yes, the test will fail on 32-bit systems. Those beasts have gone extinct here some years ago, sorry.

Since the tests already assert that the expected NUMBER of blocks are leaked, the SIZE of those blocks is irrelevant. So an easy fix here is to make the filter_overlaperror edit the leaked block-sizes out. I tried that (forced compile of valgrind in 32-bit mode), and had to adapt the filter, all .vgtest files to call that filter, and all stderr.exp files to omit the explicit sizes. That works, now on both 64- and 32-bit environments.

I've created this patch against the current SVN repository. Attached.

Regards,
	Ruurd Beerstra

-----Original Message-----
From: val...@li... [mailto:val...@li...]
Sent: Sunday, September 25, 2016 19:52
To: val...@li...
Subject: Valgrind-developers Digest, Vol 124, Issue 18

Send Valgrind-developers mailing list submissions to
	val...@li...

To subscribe or unsubscribe via the World Wide Web, visit
	https://lists.sourceforge.net/lists/listinfo/valgrind-developers
or, via email, send a message with subject or body 'help' to
	val...@li...

You can reach the person managing the list at
	val...@li...

When replying, please edit your Subject line so it is more specific than "Re: Contents of Valgrind-developers digest..."

Today's Topics:

   1. Re: Valgrind: r15984 - in /trunk: ./ include/ memcheck/
      memcheck/docs/ memcheck/tests/ (Mark Wielaard)

----------------------------------------------------------------------

Message: 1
Date: Sun, 25 Sep 2016 19:51:41 +0200
From: Mark Wielaard <mj...@re...>
Subject: Re: [Valgrind-developers] Valgrind: r15984 - in /trunk: ./
	include/ memcheck/ memcheck/docs/ memcheck/tests/
To: val...@li...
Message-ID: <20160925175141.GB3034@stream>
Content-Type: text/plain; charset=us-ascii

On Sat, Sep 24, 2016 at 09:15:44PM -0000, sv...@va... wrote:
> Added meta mempool support into memcheck for describing a custom
> allocator which:
> - Auto-frees all chunks assuming that destroying a pool destroys all
>   objects in the pool
> - Uses itself to allocate other memory blocks
> Unit tests included.
> Fixes BZ#367995
> Patch by: Ruurd Beerstra <ruu...@in...>

Note that some of the new testcases fail on 32-bit systems. Probably because of different pointer size.

::::::::::::::
memcheck/tests/leak-autofreepool-0.stderr.diff
::::::::::::::
--- leak-autofreepool-0.stderr.exp	2016-09-25 12:45:55.754805560 -0400
+++ leak-autofreepool-0.stderr.out	2016-09-25 13:33:04.015448020 -0400
@@ -4,10 +4,10 @@
     in use at exit: ... bytes in ... blocks
   total heap usage: ... allocs, ... frees, ... bytes allocated
 
-320 bytes in 20 blocks are definitely lost in loss record ... of ...
+160 bytes in 20 blocks are definitely lost in loss record ... of ...
 
 LEAK SUMMARY:
-   definitely lost: 320 bytes in 20 blocks
+   definitely lost: 160 bytes in 20 blocks
    indirectly lost: 0 bytes in 0 blocks
      possibly lost: 0 bytes in 0 blocks
    still reachable: 0 bytes in 0 blocks
::::::::::::::
memcheck/tests/leak-autofreepool-1.stderr.diff
::::::::::::::
--- leak-autofreepool-1.stderr.exp	2016-09-25 12:45:56.571831631 -0400
+++ leak-autofreepool-1.stderr.out	2016-09-25 13:33:04.526464330 -0400
@@ -4,10 +4,10 @@
     in use at exit: ... bytes in ... blocks
   total heap usage: ... allocs, ... frees, ... bytes allocated
 
-320 bytes in 20 blocks are definitely lost in loss record ... of ...
+160 bytes in 20 blocks are definitely lost in loss record ... of ...
 
 LEAK SUMMARY:
-   definitely lost: 320 bytes in 20 blocks
+   definitely lost: 160 bytes in 20 blocks
    indirectly lost: 0 bytes in 0 blocks
      possibly lost: 0 bytes in 0 blocks
    still reachable: 0 bytes in 0 blocks
::::::::::::::
memcheck/tests/leak-autofreepool-4.stderr.diff
::::::::::::::
--- leak-autofreepool-4.stderr.exp	2016-09-25 12:45:56.584832046 -0400
+++ leak-autofreepool-4.stderr.out	2016-09-25 13:33:06.109514856 -0400
@@ -4,10 +4,10 @@
     in use at exit: ... bytes in ... blocks
   total heap usage: ... allocs, ... frees, ... bytes allocated
 
-320 bytes in 20 blocks are definitely lost in loss record ... of ...
+160 bytes in 20 blocks are definitely lost in loss record ... of ...
 
 LEAK SUMMARY:
-   definitely lost: 320 bytes in 20 blocks
+   definitely lost: 160 bytes in 20 blocks
    indirectly lost: 0 bytes in 0 blocks
      possibly lost: 4,096 bytes in 1 blocks
    still reachable: 0 bytes in 0 blocks

> 
> Added:
>     trunk/memcheck/tests/filter_overlaperror (with props)
>     trunk/memcheck/tests/leak-autofreepool-0.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-0.vgtest
>     trunk/memcheck/tests/leak-autofreepool-1.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-1.vgtest
>     trunk/memcheck/tests/leak-autofreepool-2.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-2.vgtest
>     trunk/memcheck/tests/leak-autofreepool-3.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-3.vgtest
>     trunk/memcheck/tests/leak-autofreepool-4.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-4.vgtest
>     trunk/memcheck/tests/leak-autofreepool-5.stderr.exp
>     trunk/memcheck/tests/leak-autofreepool-5.vgtest
>     trunk/memcheck/tests/leak-autofreepool.c
> Modified:
>     trunk/NEWS
>     trunk/include/valgrind.h
>     trunk/memcheck/docs/mc-manual.xml
>     trunk/memcheck/mc_errors.c
>     trunk/memcheck/mc_include.h
>     trunk/memcheck/mc_leakcheck.c
>     trunk/memcheck/mc_main.c
>     trunk/memcheck/mc_malloc_wrappers.c
>     trunk/memcheck/tests/ (props changed)
>     trunk/memcheck/tests/Makefile.am
> 
> Modified: trunk/NEWS
>
====================================================================== > ======== > --- trunk/NEWS (original) > +++ trunk/NEWS Sat Sep 24 22:15:44 2016 > @@ -10,6 +10,11 @@ > > * Memcheck: > > + - Added meta mempool support for describing a custom allocator which: > + - Auto-frees all chunks assuming that destroying a pool destroys all > + objects in the pool > + - Uses itself to allocate other memory blocks > + > * Helgrind: > > * Callgrind: > @@ -165,6 +170,7 @@ > 366138 Fix configure errors out when using Xcode 8 (clang 8.0.0) > 366344 Multiple unhandled instruction for Aarch64 > (0x0EE0E020, 0x1AC15800, 0x4E284801, 0x5E040023, 0x5E056060) > +367995 Integration of memcheck with custom memory allocator > 368412 False positive result for altivec capability check > 368461 mmapunmap test fails on ppc64 > 368416 Add tc06_two_races_xml.exp output for ppc64 > > Modified: trunk/include/valgrind.h > ====================================================================== > ======== > --- trunk/include/valgrind.h (original) > +++ trunk/include/valgrind.h Sat Sep 24 22:15:44 2016 > @@ -7009,6 +7009,22 @@ > VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \ > pool, rzB, is_zeroed, 0, 0) > > +/* Create a memory pool with special flags. When the VALGRIND_MEMPOOL_AUTO_FREE > + is passed, a MEMPOOL_DELETE will auto-free all chunks (so not reported as > + leaks) for allocators that assume that destroying a pool destroys all > + objects in the pool. When VALGRIND_MEMPOOL_METAPOOL is passed, the custom > + allocator uses the pool blocks as superblocks to dole out MALLOC_LIKE blocks. > + The resulting behaviour would normally be classified as overlapping blocks, > + and cause assert-errors in valgrind. > + These 2 MEMPOOL flags can be OR-ed together into the "flags" argument. > + When flags is zero, the behaviour is identical to VALGRIND_CREATE_MEMPOOL. 
> +*/ > +#define VALGRIND_MEMPOOL_AUTO_FREE 1 > +#define VALGRIND_MEMPOOL_METAPOOL 2 > +#define VALGRIND_CREATE_META_MEMPOOL(pool, rzB, is_zeroed, flags) \ > + VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \ > + pool, rzB, is_zeroed, flags, > +0) > + > /* Destroy a memory pool. */ > #define VALGRIND_DESTROY_MEMPOOL(pool) \ > VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DESTROY_MEMPOOL, \ > > Modified: trunk/memcheck/docs/mc-manual.xml > ====================================================================== > ======== > --- trunk/memcheck/docs/mc-manual.xml (original) > +++ trunk/memcheck/docs/mc-manual.xml Sat Sep 24 22:15:44 2016 > @@ -2320,6 +2320,40 @@ > </listitem> > > <listitem> > + <para> > + <varname>VALGRIND_CREATE_META_MEMPOOL(pool, rzB, is_zeroed, flags)</varname>: > + This does the same as <varname>VALGRIND_CREATE_MEMPOOL</varname>, > + but allows you to specify two seldom-used options for custom > + allocators (or-ed together) in the <varname>flags<varname> argument:</para> > + <itemizedlist> > + <listitem> > + <para> > + <varname>VALGRIND_MEMPOOL_AUTO_FREE</varname>. > + This indicates that items allocated from this > + memory pool are automatically freed when > + <varname>VALGRIND_MEMPOOL_FREE</varname> > + is used on a block. This allows a custom allocator to delete > + (part of) a memory pool without explicitly deleting all allocated > + items. Without this option, such an action will report all items > + in the pool as memory leaks. > + </para> > + </listitem> > + <listitem> > + <para> > + <varname>VALGRIND_MEMPOOL_METAPOOL</varname>. > + This indicates that memory that has been > + marked as being allocated with > + <varname>VALGRIND_MALLOCLIKE_BLOCK</varname> is used > + by a custom allocator to pass out memory to an application (again > + marked with <varname>VALGRIND_MALLOCLIKE_BLOCK</varname>). > + Without this option, such overlapping memory blocks may trigger > + a fatal error message in memcheck. 
> + </para> > + </listitem> > + <itemizedlist> > + </listitem> > + > + <listitem> > <para><varname>VALGRIND_DESTROY_MEMPOOL(pool)</varname>: > This request tells Memcheck that a pool is being torn down. Memcheck > then removes all records of chunks associated with the pool, as > well > > Modified: trunk/memcheck/mc_errors.c > ====================================================================== > ======== > --- trunk/memcheck/mc_errors.c (original) > +++ trunk/memcheck/mc_errors.c Sat Sep 24 22:15:44 2016 > @@ -925,6 +925,30 @@ > VG_(maybe_record_error)( tid, Err_User, a, /*s*/NULL, &extra ); } > > +Bool MC_(is_mempool_block)(MC_Chunk* mc_search) { > + MC_Mempool* mp; > + > + if (!MC_(mempool_list)) > + return False; > + > + // A chunk can only come from a mempool if a custom allocator > + // is used. No search required for other kinds. > + if (mc_search->allockind == MC_AllocCustom) { > + VG_(HT_ResetIter)( MC_(mempool_list) ); > + while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) { > + MC_Chunk* mc; > + VG_(HT_ResetIter)(mp->chunks); > + while ( (mc = VG_(HT_Next)(mp->chunks)) ) { > + if (mc == mc_search) > + return True; > + } > + } > + } > + > + return False; > +} > + > /*------------------------------------------------------------*/ > /*--- Other error operations ---*/ > /*------------------------------------------------------------*/ > @@ -1016,7 +1040,8 @@ > > // Forward declarations > static Bool client_block_maybe_describe( Addr a, AddrInfo* ai ); > -static Bool mempool_block_maybe_describe( Addr a, AddrInfo* ai ); > +static Bool mempool_block_maybe_describe( Addr a, Bool is_metapool, > + AddrInfo* ai ); > > > /* Describe an address as best you can, for error messages, @@ > -1031,10 +1056,12 @@ > if (client_block_maybe_describe( a, ai )) { > return; > } > - /* -- Perhaps it's in mempool block? -- */ > - if (mempool_block_maybe_describe( a, ai )) { > + > + /* -- Perhaps it's in mempool block (non-meta)? 
-- */ > + if (mempool_block_maybe_describe( a, /*is_metapool*/ False, ai)) { > return; > } > + > /* Blocks allocated by memcheck malloc functions are either > on the recently freed list or on the malloc-ed list. > Custom blocks can be on both : a recently freed block might @@ > -1046,7 +1073,8 @@ > /* -- Search for a currently malloc'd block which might bracket it. -- */ > VG_(HT_ResetIter)(MC_(malloc_list)); > while ( (mc = VG_(HT_Next)(MC_(malloc_list))) ) { > - if (addr_is_in_MC_Chunk_default_REDZONE_SZB(mc, a)) { > + if (!MC_(is_mempool_block)(mc) && > + addr_is_in_MC_Chunk_default_REDZONE_SZB(mc, a)) { > ai->tag = Addr_Block; > ai->Addr.Block.block_kind = Block_Mallocd; > if (MC_(get_freed_block_bracketting)( a )) @@ -1063,7 > +1091,7 @@ > } > /* -- Search for a recently freed block which might bracket it. -- */ > mc = MC_(get_freed_block_bracketting)( a ); > - if (mc) { > + if (mc && !MC_(is_mempool_block)(mc)) { > ai->tag = Addr_Block; > ai->Addr.Block.block_kind = Block_Freed; > ai->Addr.Block.block_desc = "block"; @@ -1075,6 +1103,16 @@ > return; > } > > + /* -- Perhaps it's in a meta mempool block? -- */ > + /* This test is done last, because metapool blocks overlap with blocks > + handed out to the application. That makes every heap address part of > + a metapool block, so the interesting cases are handled first. > + This final search is a last-ditch attempt. When found, it is probably > + an error in the custom allocator itself. */ > + if (mempool_block_maybe_describe( a, /*is_metapool*/ True, ai )) { > + return; > + } > + > /* No block found. Search a non-heap block description. 
*/ > VG_(describe_addr) (a, ai); > } > @@ -1215,7 +1253,7 @@ > } > > > -static Bool mempool_block_maybe_describe( Addr a, > +static Bool mempool_block_maybe_describe( Addr a, Bool is_metapool, > /*OUT*/AddrInfo* ai ) { > MC_Mempool* mp; > @@ -1223,7 +1261,7 @@ > > VG_(HT_ResetIter)( MC_(mempool_list) ); > while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) { > - if (mp->chunks != NULL) { > + if (mp->chunks != NULL && mp->metapool == is_metapool) { > MC_Chunk* mc; > VG_(HT_ResetIter)(mp->chunks); > while ( (mc = VG_(HT_Next)(mp->chunks)) ) { > > Modified: trunk/memcheck/mc_include.h > ====================================================================== > ======== > --- trunk/memcheck/mc_include.h (original) > +++ trunk/memcheck/mc_include.h Sat Sep 24 22:15:44 2016 > @@ -93,6 +93,9 @@ > Addr pool; // pool identifier > SizeT rzB; // pool red-zone size > Bool is_zeroed; // allocations from this pool are zeroed > + Bool auto_free; // De-alloc block frees all chunks in block > + Bool metapool; // These chunks are VALGRIND_MALLOC_LIKE > + // memory, and used as pool. > VgHashTable *chunks; // chunks associated with this pool > } > MC_Mempool; > @@ -105,7 +108,8 @@ > void MC_(handle_free) ( ThreadId tid, > Addr p, UInt rzB, MC_AllocKind kind ); > > -void MC_(create_mempool) ( Addr pool, UInt rzB, Bool is_zeroed ); > +void MC_(create_mempool) ( Addr pool, UInt rzB, Bool is_zeroed, > + Bool auto_free, Bool metapool ); > void MC_(destroy_mempool) ( Addr pool ); > void MC_(mempool_alloc) ( ThreadId tid, Addr pool, > Addr addr, SizeT size ); @@ -114,6 +118,7 > @@ > void MC_(move_mempool) ( Addr poolA, Addr poolB ); > void MC_(mempool_change) ( Addr pool, Addr addrA, Addr addrB, SizeT > size ); Bool MC_(mempool_exists) ( Addr pool ); > +Bool MC_(is_mempool_block)( MC_Chunk* mc_search ); > > /* Searches for a recently freed block which might bracket Addr a. 
> Return the MC_Chunk* for this block or NULL if no bracketting > block > > Modified: trunk/memcheck/mc_leakcheck.c > ====================================================================== > ======== > --- trunk/memcheck/mc_leakcheck.c (original) > +++ trunk/memcheck/mc_leakcheck.c Sat Sep 24 22:15:44 2016 > @@ -1760,6 +1760,25 @@ > VG_(free)(seg_starts); > } > > +static MC_Mempool *find_mp_of_chunk (MC_Chunk* mc_search) { > + MC_Mempool* mp; > + > + tl_assert( MC_(mempool_list) ); > + > + VG_(HT_ResetIter)( MC_(mempool_list) ); > + while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) { > + MC_Chunk* mc; > + VG_(HT_ResetIter)(mp->chunks); > + while ( (mc = VG_(HT_Next)(mp->chunks)) ) { > + if (mc == mc_search) > + return mp; > + } > + } > + > + return NULL; > +} > + > /*------------------------------------------------------------*/ > /*--- Top-level entry point. ---*/ > /*------------------------------------------------------------*/ > @@ -1816,7 +1835,7 @@ > tl_assert( lc_chunks[i]->data <= lc_chunks[i+1]->data); > } > > - // Sanity check -- make sure they don't overlap. The one exception is that > + // Sanity check -- make sure they don't overlap. One exception is > + that > // we allow a MALLOCLIKE block to sit entirely within a malloc() block. > // This is for bug 100628. If this occurs, we ignore the malloc() block > // for leak-checking purposes. This is a hack and probably should > be done @@ -1825,6 +1844,9 @@ > // for mempool chunks, but if custom-allocated blocks are put in a separate > // table from normal heap blocks it makes free-mismatch checking more > // difficult. > + // Another exception: Metapool memory blocks overlap by definition. The meta- > + // block is allocated (by a custom allocator), and chunks of that block are > + // allocated again for use by the application: Not an error. 
> // > // If this check fails, it probably means that the application > // has done something stupid with VALGRIND_MALLOCLIKE_BLOCK client > @@ -1867,15 +1889,48 @@ > lc_n_chunks--; > > } else { > - VG_(umsg)("Block 0x%lx..0x%lx overlaps with block 0x%lx..0x%lx\n", > - start1, end1, start2, end2); > - VG_(umsg)("Blocks allocation contexts:\n"), > - VG_(pp_ExeContext)( MC_(allocated_at)(ch1)); > - VG_(umsg)("\n"), > - VG_(pp_ExeContext)( MC_(allocated_at)(ch2)); > - VG_(umsg)("This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK"); > - VG_(umsg)("in an inappropriate way.\n"); > - tl_assert (0); > + // Overlap is allowed ONLY when one of the two candicates is a block > + // from a memory pool that has the metapool attribute set. > + // All other mixtures trigger the error + assert. > + MC_Mempool* mp; > + Bool ch1_is_meta = False, ch2_is_meta = False; > + Bool Inappropriate = False; > + > + if (MC_(is_mempool_block)(ch1)) { > + mp = find_mp_of_chunk(ch1); > + if (mp && mp->metapool) { > + ch1_is_meta = True; > + } > + } > + > + if (MC_(is_mempool_block)(ch2)) { > + mp = find_mp_of_chunk(ch2); > + if (mp && mp->metapool) { > + ch2_is_meta = True; > + } > + } > + > + // If one of the blocks is a meta block, the other must be entirely > + // within that meta block, or something is really wrong with the custom > + // allocator. 
> + if (ch1_is_meta != ch2_is_meta) { > + if ( (ch1_is_meta && (start2 < start1 || end2 > end1)) || > + (ch2_is_meta && (start1 < start2 || end1 > end2)) ) { > + Inappropriate = True; > + } > + } > + > + if (ch1_is_meta == ch2_is_meta || Inappropriate) { > + VG_(umsg)("Block 0x%lx..0x%lx overlaps with block 0x%lx..0x%lx\n", > + start1, end1, start2, end2); > + VG_(umsg)("Blocks allocation contexts:\n"), > + VG_(pp_ExeContext)( MC_(allocated_at)(ch1)); > + VG_(umsg)("\n"), > + VG_(pp_ExeContext)( MC_(allocated_at)(ch2)); > + VG_(umsg)("This is usually caused by using "); > + VG_(umsg)("VALGRIND_MALLOCLIKE_BLOCK in an inappropriate way.\n"); > + tl_assert (0); > + } > } > } > > > Modified: trunk/memcheck/mc_main.c > ====================================================================== > ======== > --- trunk/memcheck/mc_main.c (original) > +++ trunk/memcheck/mc_main.c Sat Sep 24 22:15:44 2016 > @@ -7032,8 +7032,13 @@ > Addr pool = (Addr)arg[1]; > UInt rzB = arg[2]; > Bool is_zeroed = (Bool)arg[3]; > + UInt flags = arg[4]; > > - MC_(create_mempool) ( pool, rzB, is_zeroed ); > + // The create_mempool function does not know these mempool flags, > + // pass as booleans. 
> + MC_(create_mempool) ( pool, rzB, is_zeroed, > + (flags & VALGRIND_MEMPOOL_AUTO_FREE), > + (flags & VALGRIND_MEMPOOL_METAPOOL) ); > return True; > } > > > Modified: trunk/memcheck/mc_malloc_wrappers.c > ====================================================================== > ======== > --- trunk/memcheck/mc_malloc_wrappers.c (original) > +++ trunk/memcheck/mc_malloc_wrappers.c Sat Sep 24 22:15:44 2016 > @@ -338,7 +338,8 @@ > /* Allocate memory and note change in memory available */ > void* MC_(new_block) ( ThreadId tid, > Addr p, SizeT szB, SizeT alignB, > - Bool is_zeroed, MC_AllocKind kind, VgHashTable *table) > + Bool is_zeroed, MC_AllocKind kind, > + VgHashTable *table) > { > MC_Chunk* mc; > > @@ -674,14 +675,52 @@ > > static void check_mempool_sane(MC_Mempool* mp); /*forward*/ > > +static void free_mallocs_in_mempool_block (MC_Mempool* mp, > + Addr StartAddr, > + Addr EndAddr) { > + MC_Chunk *mc; > + ThreadId tid; > + Bool found; > > -void MC_(create_mempool)(Addr pool, UInt rzB, Bool is_zeroed) > + tl_assert(mp->auto_free); > + > + if (VG_(clo_verbosity) > 2) { > + VG_(message)(Vg_UserMsg, > + "free_mallocs_in_mempool_block: Start 0x%lx size %lu\n", > + StartAddr, (SizeT) (EndAddr - StartAddr)); > + } > + > + tid = VG_(get_running_tid)(); > + > + do { > + found = False; > + > + VG_(HT_ResetIter)(MC_(malloc_list)); > + while (!found && (mc = VG_(HT_Next)(MC_(malloc_list))) ) { > + if (mc->data >= StartAddr && mc->data + mc->szB < EndAddr) { > + if (VG_(clo_verbosity) > 2) { > + VG_(message)(Vg_UserMsg, "Auto-free of 0x%lx size=%lu\n", > + mc->data, (mc->szB + 0UL)); > + } > + > + mc = VG_(HT_remove) ( MC_(malloc_list), (UWord) mc->data); > + die_and_free_mem(tid, mc, mp->rzB); > + found = True; > + } > + } > + } while (found); > +} > + > +void MC_(create_mempool)(Addr pool, UInt rzB, Bool is_zeroed, > + Bool auto_free, Bool metapool) > { > MC_Mempool* mp; > > if (VG_(clo_verbosity) > 2) { > - VG_(message)(Vg_UserMsg, "create_mempool(0x%lx, %u, %d)\n", > - 
pool, rzB, is_zeroed); > + VG_(message)(Vg_UserMsg, > + "create_mempool(0x%lx, rzB=%u, zeroed=%d, autofree=%d, metapool=%d)\n", > + pool, rzB, is_zeroed, auto_free, metapool); > VG_(get_and_pp_StackTrace) > (VG_(get_running_tid)(), MEMPOOL_DEBUG_STACKTRACE_DEPTH); > } > @@ -695,6 +734,8 @@ > mp->pool = pool; > mp->rzB = rzB; > mp->is_zeroed = is_zeroed; > + mp->auto_free = auto_free; > + mp->metapool = metapool; > mp->chunks = VG_(HT_construct)( "MC_(create_mempool)" ); > check_mempool_sane(mp); > > @@ -882,10 +923,14 @@ > return; > } > > + if (mp->auto_free) { > + free_mallocs_in_mempool_block(mp, mc->data, mc->data + (mc->szB + 0UL)); > + } > + > if (VG_(clo_verbosity) > 2) { > VG_(message)(Vg_UserMsg, > - "mempool_free(0x%lx, 0x%lx) freed chunk of %lu bytes\n", > - pool, addr, mc->szB + 0UL); > + "mempool_free(0x%lx, 0x%lx) freed chunk of %lu bytes\n", > + pool, addr, mc->szB + 0UL); > } > > die_and_free_mem ( tid, mc, mp->rzB ); > > Modified: trunk/memcheck/tests/Makefile.am > ====================================================================== > ======== > --- trunk/memcheck/tests/Makefile.am (original) > +++ trunk/memcheck/tests/Makefile.am Sat Sep 24 22:15:44 2016 > @@ -61,7 +61,8 @@ > filter_stderr filter_xml \ > filter_strchr \ > filter_varinfo3 \ > - filter_memcheck > + filter_memcheck \ > + filter_overlaperror > > noinst_HEADERS = leak.h > > @@ -155,6 +156,12 @@ > leak-pool-3.vgtest leak-pool-3.stderr.exp \ > leak-pool-4.vgtest leak-pool-4.stderr.exp \ > leak-pool-5.vgtest leak-pool-5.stderr.exp \ > + leak-autofreepool-0.vgtest leak-autofreepool-0.stderr.exp \ > + leak-autofreepool-1.vgtest leak-autofreepool-1.stderr.exp \ > + leak-autofreepool-2.vgtest leak-autofreepool-2.stderr.exp \ > + leak-autofreepool-3.vgtest leak-autofreepool-3.stderr.exp \ > + leak-autofreepool-4.vgtest leak-autofreepool-4.stderr.exp \ > + leak-autofreepool-5.vgtest leak-autofreepool-5.stderr.exp \ > leak-tree.vgtest leak-tree.stderr.exp \ > leak-segv-jmp.vgtest 
leak-segv-jmp.stderr.exp \ > lks.vgtest lks.stdout.exp lks.supp lks.stderr.exp \ @@ -347,6 +354,7 > @@ > leak-cycle \ > leak-delta \ > leak-pool \ > + leak-autofreepool \ > leak-tree \ > leak-segv-jmp \ > long-supps \ > > Added: trunk/memcheck/tests/filter_overlaperror > ====================================================================== > ======== > --- trunk/memcheck/tests/filter_overlaperror (added) > +++ trunk/memcheck/tests/filter_overlaperror Sat Sep 24 22:15:44 2016 > @@ -0,0 +1,4 @@ > +#! /bin/sh > + > +./filter_allocs "$@" | > +sed 's/\(Memcheck: mc_leakcheck.c:\)[0-9]*\(.*impossible.*happened.*\)/\1...\2/' > > Added: trunk/memcheck/tests/leak-autofreepool-0.stderr.exp > ====================================================================== > ======== > --- trunk/memcheck/tests/leak-autofreepool-0.stderr.exp (added) > +++ trunk/memcheck/tests/leak-autofreepool-0.stderr.exp Sat Sep 24 > +++ 22:15:44 2016 > @@ -0,0 +1,17 @@ > + > + > +HEAP SUMMARY: > + in use at exit: ... bytes in ... blocks > + total heap usage: ... allocs, ... frees, ... bytes allocated > + > +320 bytes in 20 blocks are definitely lost in loss record ... of ... 
> + > +LEAK SUMMARY: > + definitely lost: 320 bytes in 20 blocks > + indirectly lost: 0 bytes in 0 blocks > + possibly lost: 0 bytes in 0 blocks > + still reachable: 0 bytes in 0 blocks > + suppressed: 0 bytes in 0 blocks > + > +For counts of detected and suppressed errors, rerun with: -v ERROR > +SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0) > > Added: trunk/memcheck/tests/leak-autofreepool-0.vgtest > ====================================================================== > ======== > --- trunk/memcheck/tests/leak-autofreepool-0.vgtest (added) > +++ trunk/memcheck/tests/leak-autofreepool-0.vgtest Sat Sep 24 > +++ 22:15:44 2016 > @@ -0,0 +1,4 @@ > +prog: leak-autofreepool > +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes > +args: 0 > +stderr_filter: filter_allocs > > Added: trunk/memcheck/tests/leak-autofreepool-1.stderr.exp > ====================================================================== > ======== > --- trunk/memcheck/tests/leak-autofreepool-1.stderr.exp (added) > +++ trunk/memcheck/tests/leak-autofreepool-1.stderr.exp Sat Sep 24 > +++ 22:15:44 2016 > @@ -0,0 +1,17 @@ > + > + > +HEAP SUMMARY: > + in use at exit: ... bytes in ... blocks > + total heap usage: ... allocs, ... frees, ... bytes allocated > + > +320 bytes in 20 blocks are definitely lost in loss record ... of ... 
> +
> +LEAK SUMMARY:
> +   definitely lost: 320 bytes in 20 blocks
> +   indirectly lost: 0 bytes in 0 blocks
> +     possibly lost: 0 bytes in 0 blocks
> +   still reachable: 0 bytes in 0 blocks
> +        suppressed: 0 bytes in 0 blocks
> +
> +For counts of detected and suppressed errors, rerun with: -v
> +ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
>
> Added: trunk/memcheck/tests/leak-autofreepool-1.vgtest
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-1.vgtest (added)
> +++ trunk/memcheck/tests/leak-autofreepool-1.vgtest Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,4 @@
> +prog: leak-autofreepool
> +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
> +args: 1
> +stderr_filter: filter_allocs
>
> Added: trunk/memcheck/tests/leak-autofreepool-2.stderr.exp
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-2.stderr.exp (added)
> +++ trunk/memcheck/tests/leak-autofreepool-2.stderr.exp Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,10 @@
> +
> +
> +HEAP SUMMARY:
> +    in use at exit: ... bytes in ... blocks
> +  total heap usage: ... allocs, ... frees, ... bytes allocated
> +
> +All heap blocks were freed -- no leaks are possible
> +
> +For counts of detected and suppressed errors, rerun with: -v
> +ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
>
> Added: trunk/memcheck/tests/leak-autofreepool-2.vgtest
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-2.vgtest (added)
> +++ trunk/memcheck/tests/leak-autofreepool-2.vgtest Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,4 @@
> +prog: leak-autofreepool
> +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
> +args: 2
> +stderr_filter: filter_allocs
>
> Added: trunk/memcheck/tests/leak-autofreepool-3.stderr.exp
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-3.stderr.exp (added)
> +++ trunk/memcheck/tests/leak-autofreepool-3.stderr.exp Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,10 @@
> +
> +
> +HEAP SUMMARY:
> +    in use at exit: ... bytes in ... blocks
> +  total heap usage: ... allocs, ... frees, ... bytes allocated
> +
> +All heap blocks were freed -- no leaks are possible
> +
> +For counts of detected and suppressed errors, rerun with: -v
> +ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
>
> Added: trunk/memcheck/tests/leak-autofreepool-3.vgtest
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-3.vgtest (added)
> +++ trunk/memcheck/tests/leak-autofreepool-3.vgtest Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,4 @@
> +prog: leak-autofreepool
> +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
> +args: 3
> +stderr_filter: filter_allocs
>
> Added: trunk/memcheck/tests/leak-autofreepool-4.stderr.exp
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-4.stderr.exp (added)
> +++ trunk/memcheck/tests/leak-autofreepool-4.stderr.exp Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,17 @@
> +
> +
> +HEAP SUMMARY:
> +    in use at exit: ... bytes in ... blocks
> +  total heap usage: ... allocs, ... frees, ... bytes allocated
> +
> +320 bytes in 20 blocks are definitely lost in loss record ... of ...
> +
> +LEAK SUMMARY:
> +   definitely lost: 320 bytes in 20 blocks
> +   indirectly lost: 0 bytes in 0 blocks
> +     possibly lost: 4,096 bytes in 1 blocks
> +   still reachable: 0 bytes in 0 blocks
> +        suppressed: 0 bytes in 0 blocks
> +
> +For counts of detected and suppressed errors, rerun with: -v
> +ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
>
> Added: trunk/memcheck/tests/leak-autofreepool-4.vgtest
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-4.vgtest (added)
> +++ trunk/memcheck/tests/leak-autofreepool-4.vgtest Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,4 @@
> +prog: leak-autofreepool
> +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
> +args: 4
> +stderr_filter: filter_allocs
>
> Added: trunk/memcheck/tests/leak-autofreepool-5.stderr.exp
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-5.stderr.exp (added)
> +++ trunk/memcheck/tests/leak-autofreepool-5.stderr.exp Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,34 @@
> +
> +
> +HEAP SUMMARY:
> +    in use at exit: ... bytes in ... blocks
> +  total heap usage: ... allocs, ... frees, ... bytes allocated
> +
> +Block 0x..........0x........ overlaps with block 0x..........0x........
> +Blocks allocation contexts:
> + ...
> +
> + ...
> +This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK in an inappropriate way.
> +
> +Memcheck: mc_leakcheck.c:... (vgMemCheck_detect_memory_leaks): the 'impossible' happened.
> +
> +host stacktrace:
> + ...
> +
> +sched status:
> +  running_tid=1
> +
> +
> +Note: see also the FAQ in the source distribution.
> +It contains workarounds to several common problems.
> +In particular, if Valgrind aborted or crashed after identifying
> +problems in your program, there's a good chance that fixing those
> +problems will prevent Valgrind aborting or crashing, especially if it
> +happened in m_mallocfree.c.
> +
> +If that doesn't help, please report this bug to: www.valgrind.org
> +
> +In the bug report, send all the above text, the valgrind version, and
> +what OS and version you are using. Thanks.
> +
>
> Added: trunk/memcheck/tests/leak-autofreepool-5.vgtest
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool-5.vgtest (added)
> +++ trunk/memcheck/tests/leak-autofreepool-5.vgtest Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,4 @@
> +prog: leak-autofreepool
> +vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
> +args: 5
> +stderr_filter: filter_overlaperror
>
> Added: trunk/memcheck/tests/leak-autofreepool.c
> ==============================================================================
> --- trunk/memcheck/tests/leak-autofreepool.c (added)
> +++ trunk/memcheck/tests/leak-autofreepool.c Sat Sep 24 22:15:44 2016
> @@ -0,0 +1,226 @@
> +
> +#include <stdlib.h>
> +#include <stdint.h>
> +#include <assert.h>
> +#include <string.h>
> +#include <stdio.h>
> +
> +#include "../memcheck.h"
> +
> +// Test VALGRIND_CREATE_META_MEMPOOL features, the VALGRIND_MEMPOOL_METAPOOL
> +// and VALGRIND_MEMPOOL_AUTO_FREE flags.
> +// Also show that without these, having a custom allocator that:
> +// - Allocates a MEMPOOL
> +// - Uses ITSELF to get large blocks to populate the pool (so these are marked
> +//   as MALLOCLIKE blocks)
> +// - Then passes out MALLOCLIKE blocks out of these pool blocks
> +// was not previously supported by the 'loose model' for mempools in memcheck,
> +// because it spotted these (correctly) as overlapping blocks (test case 3
> +// below).
> +// The VALGRIND_MEMPOOL_METAPOOL flag says not to treat these as overlaps.
> +//
> +// Also, when one of these metapool blocks is freed, memcheck will not
> +// auto-free the MALLOCLIKE blocks allocated from the meta-pool, and will
> +// report them as leaks.
> +// When VALGRIND_MEMPOOL_AUTO_FREE is passed, no such leaks are reported.
> +// This is for custom allocators that destroy a pool without freeing the
> +// objects allocated from it, because that is the defined behaviour of the
> +// allocator.
> +
> +struct pool
> +{
> +  size_t allocated;
> +  size_t used;
> +  uint8_t *buf;
> +};
> +
> +struct cell
> +{
> +  struct cell *next;
> +  int x;
> +};
> +
> +static struct pool _PlainPool, *PlainPool = &_PlainPool;
> +static struct pool _MetaPool,  *MetaPool  = &_MetaPool;
> +
> +#define N 10
> +#define POOL_BLOCK_SIZE 4096
> +// For easy testing, the plain mempool uses N allocations and the
> +// metapool 2 * N (so 10 reported leaks are from the plain pool and 20 must
> +// be from the metapool).
> +
> +static int MetaPoolFlags = 0;
> +static int CleanupBeforeExit = 0;
> +
> +static struct cell *cells_plain[2 * N];
> +static struct cell *cells_meta[2 * N];
> +
> +static char PlainBlock[POOL_BLOCK_SIZE];
> +static char MetaBlock[POOL_BLOCK_SIZE];
> +
> +void create_meta_pool (void)
> +{
> +   VALGRIND_CREATE_META_MEMPOOL(MetaPool, 0, 0, MetaPoolFlags);
> +   VALGRIND_MEMPOOL_ALLOC(MetaPool, MetaBlock, POOL_BLOCK_SIZE);
> +
> +   MetaPool->buf = (uint8_t *) MetaBlock;
> +   MetaPool->allocated = POOL_BLOCK_SIZE;
> +   MetaPool->used = 0;
> +
> +   /* A pool-block is expected to have metadata, and the core of
> +      valgrind sees a MALLOCLIKE_BLOCK that starts at the same address
> +      as a MEMPOOLBLOCK as a MEMPOOLBLOCK, hence never as a leak.
> +      Introduce some simulated metadata.
> +   */
> +
> +   MetaPool->buf  += sizeof(uint8_t);
> +   MetaPool->used += sizeof(uint8_t);
> +}
> +
> +static void create_plain_pool (void)
> +{
> +   VALGRIND_CREATE_MEMPOOL(PlainPool, 0, 0);
> +
> +   PlainPool->buf = (uint8_t *) PlainBlock;
> +   PlainPool->allocated = POOL_BLOCK_SIZE;
> +   PlainPool->used = 0;
> +
> +   /* Same overhead */
> +   PlainPool->buf  += sizeof(uint8_t);
> +   PlainPool->used += sizeof(uint8_t);
> +}
> +
> +static void *allocate_meta_style (struct pool *p, size_t n)
> +{
> +   void *a = p->buf + p->used;
> +   assert(p->used + n < p->allocated);
> +
> +   // Simulate a custom allocator that allocates memory either directly for
> +   // the application or for a custom memory pool: all are marked as
> +   // MALLOCLIKE.
> +   VALGRIND_MALLOCLIKE_BLOCK(a, n, 0, 0);
> +   p->used += n;
> +
> +   return a;
> +}
> +
> +static void *allocate_plain_style (struct pool *p, size_t n)
> +{
> +   void *a = p->buf + p->used;
> +   assert(p->used + n < p->allocated);
> +
> +   // And this is a custom allocator that knows it is allocating from a pool.
> +   VALGRIND_MEMPOOL_ALLOC(p, a, n);
> +   p->used += n;
> +
> +   return a;
> +}
> +
> +/* flags */
> +
> +static void set_flags ( int n )
> +{
> +   switch (n) {
> +      // Case 0: No special flags. VALGRIND_CREATE_META_MEMPOOL is the same as
> +      // VALGRIND_CREATE_MEMPOOL.
> +      // When the mempools are destroyed, the METAPOOL leaks because auto-free
> +      // is missing. Must show 2*N (20) leaks.
> +      // The VALGRIND_MEMPOOL_ALLOC items from the plain pool are automatically
> +      // destroyed. CleanupBeforeExit means the metapool is freed and destroyed
> +      // (simulating an app that cleans up before it exits); when false, the
> +      // test simply exits with the pool unaltered.
> +      case 0:
> +         MetaPoolFlags = 0;
> +         CleanupBeforeExit = 1;
> +         break;
> +
> +      // Case 1: VALGRIND_MEMPOOL_METAPOOL, no auto-free.
> +      // Without an explicit free, these MALLOCLIKE_BLOCK blocks are considered
> +      // leaks, so this case should show the same as case 0: 20 leaks.
> +      case 1:
> +         MetaPoolFlags = VALGRIND_MEMPOOL_METAPOOL;
> +         CleanupBeforeExit = 1;
> +         break;
> +
> +      // Case 2: same as before, but now the MALLOCLIKE blocks are auto-freed.
> +      // Must show 0 leaks.
> +      case 2:
> +         MetaPoolFlags = VALGRIND_MEMPOOL_AUTO_FREE | VALGRIND_MEMPOOL_METAPOOL;
> +         CleanupBeforeExit = 1;
> +         break;
> +
> +      case 3:
> +         // Just auto-free, with cleanup. The cleanup removes the overlapping
> +         // blocks, so this is the same as case 2: no leaks, no problems.
> +         MetaPoolFlags = VALGRIND_MEMPOOL_AUTO_FREE;
> +         CleanupBeforeExit = 1;
> +         break;
> +
> +      case 4:
> +         // No auto-free, no cleanup. Leaves overlapping blocks detected
> +         // by valgrind, but those are ignored because of the METAPOOL.
> +         // So: no crash, no problems, but 20 leaks.
> +         MetaPoolFlags = VALGRIND_MEMPOOL_METAPOOL;
> +         CleanupBeforeExit = 0;
> +         break;
> +
> +      case 5:
> +         // Main reason for the VALGRIND_MEMPOOL_METAPOOL flag: when it is not
> +         // specified and the application has a memory pool with overlapping
> +         // MALLOCLIKE allocations, the overlapping block(s) cause a fatal
> +         // error. The METAPOOL flag allows the overlap. This test must show
> +         // that without the flag, a fatal error occurs.
> +         MetaPoolFlags = 0;
> +         CleanupBeforeExit = 0;
> +         break;
> +
> +      default:
> +         assert(0);
> +   }
> +}
> +
> +int main( int argc, char** argv )
> +{
> +   int arg;
> +   size_t i;
> +
> +   assert(argc == 2);
> +   assert(argv[1]);
> +   assert(strlen(argv[1]) == 1);
> +   assert(argv[1][0] >= '0' && argv[1][0] <= '9');
> +   arg = atoi( argv[1] );
> +   set_flags( arg );
> +
> +   create_plain_pool();
> +   create_meta_pool();
> +
> +   // N plain allocs
> +   for (i = 0; i < N; ++i) {
> +      cells_plain[i] = allocate_plain_style(PlainPool, sizeof(struct cell));
> +   }
> +
> +   // 2*N meta allocs
> +   for (i = 0; i < 2 * N; ++i) {
> +      cells_meta[i] = allocate_meta_style(MetaPool, sizeof(struct cell));
> +   }
> +
> +   // Leak the memory from the pools by losing the pointers.
> +   for (i = 0; i < N; ++i) {
> +      cells_plain[i] = NULL;
> +   }
> +
> +   for (i = 0; i < 2 * N; ++i) {
> +      cells_meta[i] = NULL;
> +   }
> +
> +   // This must free the MALLOCLIKE allocations from the pool when
> +   // VALGRIND_MEMPOOL_AUTO_FREE is set for the pool, and report leaks
> +   // when it is not.
> +   if (CleanupBeforeExit) {
> +      VALGRIND_MEMPOOL_FREE(MetaPool, MetaBlock);
> +      VALGRIND_DESTROY_MEMPOOL(MetaPool);
> +   }
> +
> +   // Cleanup.
> +   VALGRIND_DESTROY_MEMPOOL(PlainPool);
> +
> +   return 0;
> +}
>
> ------------------------------------------------------------------------------
> _______________________________________________
> Valgrind-developers mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-developers

End of Valgrind-developers Digest, Vol 124, Issue 18
****************************************************