From: <sv...@va...> - 2016-09-24 21:15:57
Author: iraisr
Date: Sat Sep 24 22:15:44 2016
New Revision: 15984
Log:
Added meta mempool support to memcheck for describing a custom allocator which:
- Auto-frees all chunks assuming that destroying a pool destroys all
objects in the pool
- Uses itself to allocate other memory blocks
Unit tests included.
Fixes BZ#367995
Patch by: Ruurd Beerstra <ruu...@in...>
Added:
trunk/memcheck/tests/filter_overlaperror (with props)
trunk/memcheck/tests/leak-autofreepool-0.stderr.exp
trunk/memcheck/tests/leak-autofreepool-0.vgtest
trunk/memcheck/tests/leak-autofreepool-1.stderr.exp
trunk/memcheck/tests/leak-autofreepool-1.vgtest
trunk/memcheck/tests/leak-autofreepool-2.stderr.exp
trunk/memcheck/tests/leak-autofreepool-2.vgtest
trunk/memcheck/tests/leak-autofreepool-3.stderr.exp
trunk/memcheck/tests/leak-autofreepool-3.vgtest
trunk/memcheck/tests/leak-autofreepool-4.stderr.exp
trunk/memcheck/tests/leak-autofreepool-4.vgtest
trunk/memcheck/tests/leak-autofreepool-5.stderr.exp
trunk/memcheck/tests/leak-autofreepool-5.vgtest
trunk/memcheck/tests/leak-autofreepool.c
Modified:
trunk/NEWS
trunk/include/valgrind.h
trunk/memcheck/docs/mc-manual.xml
trunk/memcheck/mc_errors.c
trunk/memcheck/mc_include.h
trunk/memcheck/mc_leakcheck.c
trunk/memcheck/mc_main.c
trunk/memcheck/mc_malloc_wrappers.c
trunk/memcheck/tests/ (props changed)
trunk/memcheck/tests/Makefile.am
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Sat Sep 24 22:15:44 2016
@@ -10,6 +10,11 @@
* Memcheck:
+ - Added meta mempool support for describing a custom allocator which:
+ - Auto-frees all chunks assuming that destroying a pool destroys all
+ objects in the pool
+ - Uses itself to allocate other memory blocks
+
* Helgrind:
* Callgrind:
@@ -165,6 +170,7 @@
366138 Fix configure errors out when using Xcode 8 (clang 8.0.0)
366344 Multiple unhandled instruction for Aarch64
(0x0EE0E020, 0x1AC15800, 0x4E284801, 0x5E040023, 0x5E056060)
+367995 Integration of memcheck with custom memory allocator
368412 False positive result for altivec capability check
368461 mmapunmap test fails on ppc64
368416 Add tc06_two_races_xml.exp output for ppc64
Modified: trunk/include/valgrind.h
==============================================================================
--- trunk/include/valgrind.h (original)
+++ trunk/include/valgrind.h Sat Sep 24 22:15:44 2016
@@ -7009,6 +7009,22 @@
VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \
pool, rzB, is_zeroed, 0, 0)
+/* Create a memory pool with special flags. When VALGRIND_MEMPOOL_AUTO_FREE
+ is passed, a MEMPOOL_DELETE will auto-free all chunks (so not reported as
+ leaks) for allocators that assume that destroying a pool destroys all
+ objects in the pool. When VALGRIND_MEMPOOL_METAPOOL is passed, the custom
+ allocator uses the pool blocks as superblocks to dole out MALLOC_LIKE blocks.
+ The resulting behaviour would normally be classified as overlapping blocks,
+ and cause assert-errors in valgrind.
+ These 2 MEMPOOL flags can be OR-ed together into the "flags" argument.
+ When flags is zero, the behaviour is identical to VALGRIND_CREATE_MEMPOOL.
+*/
+#define VALGRIND_MEMPOOL_AUTO_FREE 1
+#define VALGRIND_MEMPOOL_METAPOOL 2
+#define VALGRIND_CREATE_META_MEMPOOL(pool, rzB, is_zeroed, flags) \
+ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \
+ pool, rzB, is_zeroed, flags, 0)
+
/* Destroy a memory pool. */
#define VALGRIND_DESTROY_MEMPOOL(pool) \
VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DESTROY_MEMPOOL, \
Modified: trunk/memcheck/docs/mc-manual.xml
==============================================================================
--- trunk/memcheck/docs/mc-manual.xml (original)
+++ trunk/memcheck/docs/mc-manual.xml Sat Sep 24 22:15:44 2016
@@ -2320,6 +2320,40 @@
</listitem>
<listitem>
+ <para>
+ <varname>VALGRIND_CREATE_META_MEMPOOL(pool, rzB, is_zeroed, flags)</varname>:
+ This does the same as <varname>VALGRIND_CREATE_MEMPOOL</varname>,
+ but allows you to specify two seldom-used options for custom
+ allocators (or-ed together) in the <varname>flags</varname> argument:</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <varname>VALGRIND_MEMPOOL_AUTO_FREE</varname>.
+ This indicates that items allocated from this
+ memory pool are automatically freed when
+ <varname>VALGRIND_MEMPOOL_FREE</varname>
+ is used on a block. This allows a custom allocator to delete
+ (part of) a memory pool without explicitly deleting all allocated
+ items. Without this option, such an action will report all items
+ in the pool as memory leaks.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <varname>VALGRIND_MEMPOOL_METAPOOL</varname>.
+ This indicates that memory that has been
+ marked as being allocated with
+ <varname>VALGRIND_MALLOCLIKE_BLOCK</varname> is used
+ by a custom allocator to pass out memory to an application (again
+ marked with <varname>VALGRIND_MALLOCLIKE_BLOCK</varname>).
+ Without this option, such overlapping memory blocks may trigger
+ a fatal error message in memcheck.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </listitem>
+
+ <listitem>
<para><varname>VALGRIND_DESTROY_MEMPOOL(pool)</varname>:
This request tells Memcheck that a pool is being torn down. Memcheck
then removes all records of chunks associated with the pool, as well
Modified: trunk/memcheck/mc_errors.c
==============================================================================
--- trunk/memcheck/mc_errors.c (original)
+++ trunk/memcheck/mc_errors.c Sat Sep 24 22:15:44 2016
@@ -925,6 +925,30 @@
VG_(maybe_record_error)( tid, Err_User, a, /*s*/NULL, &extra );
}
+Bool MC_(is_mempool_block)(MC_Chunk* mc_search)
+{
+ MC_Mempool* mp;
+
+ if (!MC_(mempool_list))
+ return False;
+
+ // A chunk can only come from a mempool if a custom allocator
+ // is used. No search required for other kinds.
+ if (mc_search->allockind == MC_AllocCustom) {
+ VG_(HT_ResetIter)( MC_(mempool_list) );
+ while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) {
+ MC_Chunk* mc;
+ VG_(HT_ResetIter)(mp->chunks);
+ while ( (mc = VG_(HT_Next)(mp->chunks)) ) {
+ if (mc == mc_search)
+ return True;
+ }
+ }
+ }
+
+ return False;
+}
+
/*------------------------------------------------------------*/
/*--- Other error operations ---*/
/*------------------------------------------------------------*/
@@ -1016,7 +1040,8 @@
// Forward declarations
static Bool client_block_maybe_describe( Addr a, AddrInfo* ai );
-static Bool mempool_block_maybe_describe( Addr a, AddrInfo* ai );
+static Bool mempool_block_maybe_describe( Addr a, Bool is_metapool,
+ AddrInfo* ai );
/* Describe an address as best you can, for error messages,
@@ -1031,10 +1056,12 @@
if (client_block_maybe_describe( a, ai )) {
return;
}
- /* -- Perhaps it's in mempool block? -- */
- if (mempool_block_maybe_describe( a, ai )) {
+
+ /* -- Perhaps it's in mempool block (non-meta)? -- */
+ if (mempool_block_maybe_describe( a, /*is_metapool*/ False, ai)) {
return;
}
+
/* Blocks allocated by memcheck malloc functions are either
on the recently freed list or on the malloc-ed list.
Custom blocks can be on both : a recently freed block might
@@ -1046,7 +1073,8 @@
/* -- Search for a currently malloc'd block which might bracket it. -- */
VG_(HT_ResetIter)(MC_(malloc_list));
while ( (mc = VG_(HT_Next)(MC_(malloc_list))) ) {
- if (addr_is_in_MC_Chunk_default_REDZONE_SZB(mc, a)) {
+ if (!MC_(is_mempool_block)(mc) &&
+ addr_is_in_MC_Chunk_default_REDZONE_SZB(mc, a)) {
ai->tag = Addr_Block;
ai->Addr.Block.block_kind = Block_Mallocd;
if (MC_(get_freed_block_bracketting)( a ))
@@ -1063,7 +1091,7 @@
}
/* -- Search for a recently freed block which might bracket it. -- */
mc = MC_(get_freed_block_bracketting)( a );
- if (mc) {
+ if (mc && !MC_(is_mempool_block)(mc)) {
ai->tag = Addr_Block;
ai->Addr.Block.block_kind = Block_Freed;
ai->Addr.Block.block_desc = "block";
@@ -1075,6 +1103,16 @@
return;
}
+ /* -- Perhaps it's in a meta mempool block? -- */
+ /* This test is done last, because metapool blocks overlap with blocks
+ handed out to the application. That makes every heap address part of
+ a metapool block, so the interesting cases are handled first.
+ This final search is a last-ditch attempt. When found, it is probably
+ an error in the custom allocator itself. */
+ if (mempool_block_maybe_describe( a, /*is_metapool*/ True, ai )) {
+ return;
+ }
+
/* No block found. Search a non-heap block description. */
VG_(describe_addr) (a, ai);
}
@@ -1215,7 +1253,7 @@
}
-static Bool mempool_block_maybe_describe( Addr a,
+static Bool mempool_block_maybe_describe( Addr a, Bool is_metapool,
/*OUT*/AddrInfo* ai )
{
MC_Mempool* mp;
@@ -1223,7 +1261,7 @@
VG_(HT_ResetIter)( MC_(mempool_list) );
while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) {
- if (mp->chunks != NULL) {
+ if (mp->chunks != NULL && mp->metapool == is_metapool) {
MC_Chunk* mc;
VG_(HT_ResetIter)(mp->chunks);
while ( (mc = VG_(HT_Next)(mp->chunks)) ) {
Modified: trunk/memcheck/mc_include.h
==============================================================================
--- trunk/memcheck/mc_include.h (original)
+++ trunk/memcheck/mc_include.h Sat Sep 24 22:15:44 2016
@@ -93,6 +93,9 @@
Addr pool; // pool identifier
SizeT rzB; // pool red-zone size
Bool is_zeroed; // allocations from this pool are zeroed
+ Bool auto_free; // De-alloc block frees all chunks in block
+ Bool metapool; // These chunks are VALGRIND_MALLOC_LIKE
+ // memory, and used as pool.
VgHashTable *chunks; // chunks associated with this pool
}
MC_Mempool;
@@ -105,7 +108,8 @@
void MC_(handle_free) ( ThreadId tid,
Addr p, UInt rzB, MC_AllocKind kind );
-void MC_(create_mempool) ( Addr pool, UInt rzB, Bool is_zeroed );
+void MC_(create_mempool) ( Addr pool, UInt rzB, Bool is_zeroed,
+ Bool auto_free, Bool metapool );
void MC_(destroy_mempool) ( Addr pool );
void MC_(mempool_alloc) ( ThreadId tid, Addr pool,
Addr addr, SizeT size );
@@ -114,6 +118,7 @@
void MC_(move_mempool) ( Addr poolA, Addr poolB );
void MC_(mempool_change) ( Addr pool, Addr addrA, Addr addrB, SizeT size );
Bool MC_(mempool_exists) ( Addr pool );
+Bool MC_(is_mempool_block)( MC_Chunk* mc_search );
/* Searches for a recently freed block which might bracket Addr a.
Return the MC_Chunk* for this block or NULL if no bracketting block
Modified: trunk/memcheck/mc_leakcheck.c
==============================================================================
--- trunk/memcheck/mc_leakcheck.c (original)
+++ trunk/memcheck/mc_leakcheck.c Sat Sep 24 22:15:44 2016
@@ -1760,6 +1760,25 @@
VG_(free)(seg_starts);
}
+static MC_Mempool *find_mp_of_chunk (MC_Chunk* mc_search)
+{
+ MC_Mempool* mp;
+
+ tl_assert( MC_(mempool_list) );
+
+ VG_(HT_ResetIter)( MC_(mempool_list) );
+ while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) {
+ MC_Chunk* mc;
+ VG_(HT_ResetIter)(mp->chunks);
+ while ( (mc = VG_(HT_Next)(mp->chunks)) ) {
+ if (mc == mc_search)
+ return mp;
+ }
+ }
+
+ return NULL;
+}
+
/*------------------------------------------------------------*/
/*--- Top-level entry point. ---*/
/*------------------------------------------------------------*/
@@ -1816,7 +1835,7 @@
tl_assert( lc_chunks[i]->data <= lc_chunks[i+1]->data);
}
- // Sanity check -- make sure they don't overlap. The one exception is that
+ // Sanity check -- make sure they don't overlap. One exception is that
// we allow a MALLOCLIKE block to sit entirely within a malloc() block.
// This is for bug 100628. If this occurs, we ignore the malloc() block
// for leak-checking purposes. This is a hack and probably should be done
@@ -1825,6 +1844,9 @@
// for mempool chunks, but if custom-allocated blocks are put in a separate
// table from normal heap blocks it makes free-mismatch checking more
// difficult.
+ // Another exception: Metapool memory blocks overlap by definition. The meta-
+ // block is allocated (by a custom allocator), and chunks of that block are
+ // allocated again for use by the application: Not an error.
//
// If this check fails, it probably means that the application
// has done something stupid with VALGRIND_MALLOCLIKE_BLOCK client
@@ -1867,15 +1889,48 @@
lc_n_chunks--;
} else {
- VG_(umsg)("Block 0x%lx..0x%lx overlaps with block 0x%lx..0x%lx\n",
- start1, end1, start2, end2);
- VG_(umsg)("Blocks allocation contexts:\n"),
- VG_(pp_ExeContext)( MC_(allocated_at)(ch1));
- VG_(umsg)("\n"),
- VG_(pp_ExeContext)( MC_(allocated_at)(ch2));
- VG_(umsg)("This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK");
- VG_(umsg)("in an inappropriate way.\n");
- tl_assert (0);
+ // Overlap is allowed ONLY when one of the two candidates is a block
+ // from a memory pool that has the metapool attribute set.
+ // All other mixtures trigger the error + assert.
+ MC_Mempool* mp;
+ Bool ch1_is_meta = False, ch2_is_meta = False;
+ Bool Inappropriate = False;
+
+ if (MC_(is_mempool_block)(ch1)) {
+ mp = find_mp_of_chunk(ch1);
+ if (mp && mp->metapool) {
+ ch1_is_meta = True;
+ }
+ }
+
+ if (MC_(is_mempool_block)(ch2)) {
+ mp = find_mp_of_chunk(ch2);
+ if (mp && mp->metapool) {
+ ch2_is_meta = True;
+ }
+ }
+
+ // If one of the blocks is a meta block, the other must be entirely
+ // within that meta block, or something is really wrong with the custom
+ // allocator.
+ if (ch1_is_meta != ch2_is_meta) {
+ if ( (ch1_is_meta && (start2 < start1 || end2 > end1)) ||
+ (ch2_is_meta && (start1 < start2 || end1 > end2)) ) {
+ Inappropriate = True;
+ }
+ }
+
+ if (ch1_is_meta == ch2_is_meta || Inappropriate) {
+ VG_(umsg)("Block 0x%lx..0x%lx overlaps with block 0x%lx..0x%lx\n",
+ start1, end1, start2, end2);
+ VG_(umsg)("Blocks allocation contexts:\n"),
+ VG_(pp_ExeContext)( MC_(allocated_at)(ch1));
+ VG_(umsg)("\n"),
+ VG_(pp_ExeContext)( MC_(allocated_at)(ch2));
+ VG_(umsg)("This is usually caused by using ");
+ VG_(umsg)("VALGRIND_MALLOCLIKE_BLOCK in an inappropriate way.\n");
+ tl_assert (0);
+ }
}
}
Modified: trunk/memcheck/mc_main.c
==============================================================================
--- trunk/memcheck/mc_main.c (original)
+++ trunk/memcheck/mc_main.c Sat Sep 24 22:15:44 2016
@@ -7032,8 +7032,13 @@
Addr pool = (Addr)arg[1];
UInt rzB = arg[2];
Bool is_zeroed = (Bool)arg[3];
+ UInt flags = arg[4];
- MC_(create_mempool) ( pool, rzB, is_zeroed );
+ // The create_mempool function does not know these mempool flags,
+ // pass as booleans.
+ MC_(create_mempool) ( pool, rzB, is_zeroed,
+ (flags & VALGRIND_MEMPOOL_AUTO_FREE),
+ (flags & VALGRIND_MEMPOOL_METAPOOL) );
return True;
}
Modified: trunk/memcheck/mc_malloc_wrappers.c
==============================================================================
--- trunk/memcheck/mc_malloc_wrappers.c (original)
+++ trunk/memcheck/mc_malloc_wrappers.c Sat Sep 24 22:15:44 2016
@@ -338,7 +338,8 @@
/* Allocate memory and note change in memory available */
void* MC_(new_block) ( ThreadId tid,
Addr p, SizeT szB, SizeT alignB,
- Bool is_zeroed, MC_AllocKind kind, VgHashTable *table)
+ Bool is_zeroed, MC_AllocKind kind,
+ VgHashTable *table)
{
MC_Chunk* mc;
@@ -674,14 +675,52 @@
static void check_mempool_sane(MC_Mempool* mp); /*forward*/
+static void free_mallocs_in_mempool_block (MC_Mempool* mp,
+ Addr StartAddr,
+ Addr EndAddr)
+{
+ MC_Chunk *mc;
+ ThreadId tid;
+ Bool found;
-void MC_(create_mempool)(Addr pool, UInt rzB, Bool is_zeroed)
+ tl_assert(mp->auto_free);
+
+ if (VG_(clo_verbosity) > 2) {
+ VG_(message)(Vg_UserMsg,
+ "free_mallocs_in_mempool_block: Start 0x%lx size %lu\n",
+ StartAddr, (SizeT) (EndAddr - StartAddr));
+ }
+
+ tid = VG_(get_running_tid)();
+
+ do {
+ found = False;
+
+ VG_(HT_ResetIter)(MC_(malloc_list));
+ while (!found && (mc = VG_(HT_Next)(MC_(malloc_list))) ) {
+ if (mc->data >= StartAddr && mc->data + mc->szB < EndAddr) {
+ if (VG_(clo_verbosity) > 2) {
+ VG_(message)(Vg_UserMsg, "Auto-free of 0x%lx size=%lu\n",
+ mc->data, (mc->szB + 0UL));
+ }
+
+ mc = VG_(HT_remove) ( MC_(malloc_list), (UWord) mc->data);
+ die_and_free_mem(tid, mc, mp->rzB);
+ found = True;
+ }
+ }
+ } while (found);
+}
+
+void MC_(create_mempool)(Addr pool, UInt rzB, Bool is_zeroed,
+ Bool auto_free, Bool metapool)
{
MC_Mempool* mp;
if (VG_(clo_verbosity) > 2) {
- VG_(message)(Vg_UserMsg, "create_mempool(0x%lx, %u, %d)\n",
- pool, rzB, is_zeroed);
+ VG_(message)(Vg_UserMsg,
+ "create_mempool(0x%lx, rzB=%u, zeroed=%d, autofree=%d, metapool=%d)\n",
+ pool, rzB, is_zeroed, auto_free, metapool);
VG_(get_and_pp_StackTrace)
(VG_(get_running_tid)(), MEMPOOL_DEBUG_STACKTRACE_DEPTH);
}
@@ -695,6 +734,8 @@
mp->pool = pool;
mp->rzB = rzB;
mp->is_zeroed = is_zeroed;
+ mp->auto_free = auto_free;
+ mp->metapool = metapool;
mp->chunks = VG_(HT_construct)( "MC_(create_mempool)" );
check_mempool_sane(mp);
@@ -882,10 +923,14 @@
return;
}
+ if (mp->auto_free) {
+ free_mallocs_in_mempool_block(mp, mc->data, mc->data + (mc->szB + 0UL));
+ }
+
if (VG_(clo_verbosity) > 2) {
VG_(message)(Vg_UserMsg,
- "mempool_free(0x%lx, 0x%lx) freed chunk of %lu bytes\n",
- pool, addr, mc->szB + 0UL);
+ "mempool_free(0x%lx, 0x%lx) freed chunk of %lu bytes\n",
+ pool, addr, mc->szB + 0UL);
}
die_and_free_mem ( tid, mc, mp->rzB );
Modified: trunk/memcheck/tests/Makefile.am
==============================================================================
--- trunk/memcheck/tests/Makefile.am (original)
+++ trunk/memcheck/tests/Makefile.am Sat Sep 24 22:15:44 2016
@@ -61,7 +61,8 @@
filter_stderr filter_xml \
filter_strchr \
filter_varinfo3 \
- filter_memcheck
+ filter_memcheck \
+ filter_overlaperror
noinst_HEADERS = leak.h
@@ -155,6 +156,12 @@
leak-pool-3.vgtest leak-pool-3.stderr.exp \
leak-pool-4.vgtest leak-pool-4.stderr.exp \
leak-pool-5.vgtest leak-pool-5.stderr.exp \
+ leak-autofreepool-0.vgtest leak-autofreepool-0.stderr.exp \
+ leak-autofreepool-1.vgtest leak-autofreepool-1.stderr.exp \
+ leak-autofreepool-2.vgtest leak-autofreepool-2.stderr.exp \
+ leak-autofreepool-3.vgtest leak-autofreepool-3.stderr.exp \
+ leak-autofreepool-4.vgtest leak-autofreepool-4.stderr.exp \
+ leak-autofreepool-5.vgtest leak-autofreepool-5.stderr.exp \
leak-tree.vgtest leak-tree.stderr.exp \
leak-segv-jmp.vgtest leak-segv-jmp.stderr.exp \
lks.vgtest lks.stdout.exp lks.supp lks.stderr.exp \
@@ -347,6 +354,7 @@
leak-cycle \
leak-delta \
leak-pool \
+ leak-autofreepool \
leak-tree \
leak-segv-jmp \
long-supps \
Added: trunk/memcheck/tests/filter_overlaperror
==============================================================================
--- trunk/memcheck/tests/filter_overlaperror (added)
+++ trunk/memcheck/tests/filter_overlaperror Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+#! /bin/sh
+
+./filter_allocs "$@" |
+sed 's/\(Memcheck: mc_leakcheck.c:\)[0-9]*\(.*impossible.*happened.*\)/\1...\2/'
Added: trunk/memcheck/tests/leak-autofreepool-0.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-0.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-0.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,17 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+320 bytes in 20 blocks are definitely lost in loss record ... of ...
+
+LEAK SUMMARY:
+ definitely lost: 320 bytes in 20 blocks
+ indirectly lost: 0 bytes in 0 blocks
+ possibly lost: 0 bytes in 0 blocks
+ still reachable: 0 bytes in 0 blocks
+ suppressed: 0 bytes in 0 blocks
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Added: trunk/memcheck/tests/leak-autofreepool-0.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-0.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-0.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 0
+stderr_filter: filter_allocs
Added: trunk/memcheck/tests/leak-autofreepool-1.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-1.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-1.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,17 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+320 bytes in 20 blocks are definitely lost in loss record ... of ...
+
+LEAK SUMMARY:
+ definitely lost: 320 bytes in 20 blocks
+ indirectly lost: 0 bytes in 0 blocks
+ possibly lost: 0 bytes in 0 blocks
+ still reachable: 0 bytes in 0 blocks
+ suppressed: 0 bytes in 0 blocks
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Added: trunk/memcheck/tests/leak-autofreepool-1.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-1.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-1.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 1
+stderr_filter: filter_allocs
Added: trunk/memcheck/tests/leak-autofreepool-2.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-2.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-2.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,10 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+All heap blocks were freed -- no leaks are possible
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Added: trunk/memcheck/tests/leak-autofreepool-2.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-2.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-2.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 2
+stderr_filter: filter_allocs
Added: trunk/memcheck/tests/leak-autofreepool-3.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-3.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-3.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,10 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+All heap blocks were freed -- no leaks are possible
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Added: trunk/memcheck/tests/leak-autofreepool-3.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-3.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-3.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 3
+stderr_filter: filter_allocs
Added: trunk/memcheck/tests/leak-autofreepool-4.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-4.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-4.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,17 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+320 bytes in 20 blocks are definitely lost in loss record ... of ...
+
+LEAK SUMMARY:
+ definitely lost: 320 bytes in 20 blocks
+ indirectly lost: 0 bytes in 0 blocks
+ possibly lost: 4,096 bytes in 1 blocks
+ still reachable: 0 bytes in 0 blocks
+ suppressed: 0 bytes in 0 blocks
+
+For counts of detected and suppressed errors, rerun with: -v
+ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
Added: trunk/memcheck/tests/leak-autofreepool-4.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-4.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-4.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 4
+stderr_filter: filter_allocs
Added: trunk/memcheck/tests/leak-autofreepool-5.stderr.exp
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-5.stderr.exp (added)
+++ trunk/memcheck/tests/leak-autofreepool-5.stderr.exp Sat Sep 24 22:15:44 2016
@@ -0,0 +1,34 @@
+
+
+HEAP SUMMARY:
+ in use at exit: ... bytes in ... blocks
+ total heap usage: ... allocs, ... frees, ... bytes allocated
+
+Block 0x..........0x........ overlaps with block 0x..........0x........
+Blocks allocation contexts:
+ ...
+
+ ...
+This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK in an inappropriate way.
+
+Memcheck: mc_leakcheck.c:... (vgMemCheck_detect_memory_leaks): the 'impossible' happened.
+
+host stacktrace:
+ ...
+
+sched status:
+ running_tid=1
+
+
+Note: see also the FAQ in the source distribution.
+It contains workarounds to several common problems.
+In particular, if Valgrind aborted or crashed after
+identifying problems in your program, there's a good chance
+that fixing those problems will prevent Valgrind aborting or
+crashing, especially if it happened in m_mallocfree.c.
+
+If that doesn't help, please report this bug to: www.valgrind.org
+
+In the bug report, send all the above text, the valgrind
+version, and what OS and version you are using. Thanks.
+
Added: trunk/memcheck/tests/leak-autofreepool-5.vgtest
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool-5.vgtest (added)
+++ trunk/memcheck/tests/leak-autofreepool-5.vgtest Sat Sep 24 22:15:44 2016
@@ -0,0 +1,4 @@
+prog: leak-autofreepool
+vgopts: --leak-check=full --show-possibly-lost=no --track-origins=yes
+args: 5
+stderr_filter: filter_overlaperror
Added: trunk/memcheck/tests/leak-autofreepool.c
==============================================================================
--- trunk/memcheck/tests/leak-autofreepool.c (added)
+++ trunk/memcheck/tests/leak-autofreepool.c Sat Sep 24 22:15:44 2016
@@ -0,0 +1,226 @@
+
+#include <stdlib.h>
+#include <stdint.h>
+#include <assert.h>
+#include <string.h>
+#include <stdio.h>
+
+#include "../memcheck.h"
+
+// Test VALGRIND_CREATE_META_MEMPOOL features, the VALGRIND_MEMPOOL_METAPOOL and
+// VALGRIND_MEMPOOL_AUTO_FREE flags.
+// Also show that without these, having a custom allocator that:
+// - Allocates a MEMPOOL
+// - Uses ITSELF to get large blocks to populate the pool (so these are marked
+// as MALLOCLIKE blocks)
+// - Then passes out MALLOCLIKE blocks out of these pool blocks
+// was not previously supported by the 'loose model' for mempools in memcheck
+// because it spotted these (correctly) as overlapping blocks (test case 3
+// below).
+// The VALGRIND_MEMPOOL_METAPOOL says not to treat these as overlaps.
+//
+// Also, when one of these metapool blocks is freed, memcheck will not auto-free
+// the MALLOCLIKE blocks allocated from the meta-pool, and report them as leaks.
+// When VALGRIND_MEMPOOL_AUTO_FREE is passed, no such leaks are reported.
+// This is for custom allocators that destroy a pool without freeing the objects
+// allocated from it, because that is the defined behaviour of the allocator.
+
+struct pool
+{
+ size_t allocated;
+ size_t used;
+ uint8_t *buf;
+};
+
+struct cell
+{
+ struct cell *next;
+ int x;
+};
+
+static struct pool _PlainPool, *PlainPool = &_PlainPool;
+static struct pool _MetaPool, *MetaPool = &_MetaPool;
+
+#define N 10
+#define POOL_BLOCK_SIZE 4096
+// For easy testing, the plain mempool uses N allocations, the
+// metapool 2 * N (so 10 reported leaks are from the plain pool, 20 must be
+// from the metapool.
+
+static int MetaPoolFlags = 0;
+static int CleanupBeforeExit = 0;
+
+static struct cell *cells_plain[2 * N];
+static struct cell *cells_meta[2 * N];
+
+static char PlainBlock[POOL_BLOCK_SIZE];
+static char MetaBlock[POOL_BLOCK_SIZE];
+
+void create_meta_pool (void)
+{
+ VALGRIND_CREATE_META_MEMPOOL(MetaPool, 0, 0, MetaPoolFlags);
+ VALGRIND_MEMPOOL_ALLOC(MetaPool, MetaBlock, POOL_BLOCK_SIZE);
+
+ MetaPool->buf = (uint8_t *) MetaBlock;
+ MetaPool->allocated = POOL_BLOCK_SIZE;
+ MetaPool->used = 0;
+
+ /* A pool-block is expected to have metadata, and the valgrind core
+ treats a MALLOCLIKE_BLOCK that starts at the same address as a
+ MEMPOOLBLOCK as being that MEMPOOLBLOCK, hence never as a leak.
+ Introduce some such simulated metadata.
+ */
+
+ MetaPool->buf += sizeof(uint8_t);
+ MetaPool->used += sizeof(uint8_t);
+}
+
+static void create_plain_pool (void)
+{
+ VALGRIND_CREATE_MEMPOOL(PlainPool, 0, 0);
+
+ PlainPool->buf = (uint8_t *) PlainBlock;
+ PlainPool->allocated = POOL_BLOCK_SIZE;
+ PlainPool->used = 0;
+
+ /* Same overhead */
+ PlainPool->buf += sizeof(uint8_t);
+ PlainPool->used += sizeof(uint8_t);
+}
+
+static void *allocate_meta_style (struct pool *p, size_t n)
+{
+ void *a = p->buf + p->used;
+ assert(p->used + n < p->allocated);
+
+ // Simulate a custom allocator that allocates memory either directly for
+ // the application or for a custom memory pool: All are marked as MALLOCLIKE.
+ VALGRIND_MALLOCLIKE_BLOCK(a, n, 0, 0);
+ p->used += n;
+
+ return a;
+}
+
+static void *allocate_plain_style (struct pool *p, size_t n)
+{
+ void *a = p->buf + p->used;
+ assert(p->used + n < p->allocated);
+
+ // And this is a custom allocator that knows it is allocating from a pool.
+ VALGRIND_MEMPOOL_ALLOC(p, a, n);
+ p->used += n;
+
+ return a;
+}
+
+/* flags */
+
+static void set_flags ( int n )
+{
+ switch (n) {
+ // Case 0: No special flags. VALGRIND_CREATE_META_MEMPOOL is the same as
+ // VALGRIND_CREATE_MEMPOOL.
+ // When the mempools are destroyed, the METAPOOL leaks because auto-free is
+ // missing. Must show 2*N (20) leaks.
+ // The VALGRIND_MEMPOOL_ALLOC items from the plain pool are automatically
+ // destroyed. CleanupBeforeExit means the metapool is freed and destroyed
+ // (simulating an app that cleans up before it exits); when false, the test
+ // simply exits with the pool unaltered.
+ case 0:
+ MetaPoolFlags = 0;
+ CleanupBeforeExit = 1;
+ break;
+
+ // Case 1: VALGRIND_MEMPOOL_METAPOOL, no auto-free.
+ // Without an explicit free, these MALLOCLIKE_BLOCK blocks are considered
+ // leaks, so this case should show the same as case 0: 20 leaks.
+ case 1:
+ MetaPoolFlags = VALGRIND_MEMPOOL_METAPOOL;
+ CleanupBeforeExit = 1;
+ break;
+
+ // Same as before, but now the MALLOCLIKE blocks are auto-freed.
+ // Must show 0 leaks.
+ case 2:
+ MetaPoolFlags = VALGRIND_MEMPOOL_AUTO_FREE | VALGRIND_MEMPOOL_METAPOOL;
+ CleanupBeforeExit = 1;
+ break;
+
+ case 3:
+ // Just auto-free, with cleanup. The cleanup removes the overlapping
+ // blocks, so this is the same as case 2: No leaks, no problems.
+ MetaPoolFlags = VALGRIND_MEMPOOL_AUTO_FREE;
+ CleanupBeforeExit = 1;
+ break;
+
+ case 4:
+ // No auto-free, no cleanup. Leaves overlapping blocks detected
+ // by valgrind, but those are ignored because of the METAPOOL.
+ // So, no crash, no problems, but 20 leaks.
+ MetaPoolFlags = VALGRIND_MEMPOOL_METAPOOL;
+ CleanupBeforeExit = 0;
+ break;
+
+ case 5:
+ // Main reason for the VALGRIND_MEMPOOL_METAPOOL flag: when it is not
+ // specified and the application has a memory pool with overlapping
+ // MALLOCLIKE allocations, block(s) that overlap are left behind,
+ // causing a fatal error.
+ // The METAPOOL flag allows the overlap. The test must show that without
+ // the flag, a fatal error occurs.
+ MetaPoolFlags = 0;
+ CleanupBeforeExit = 0;
+ break;
+
+ default:
+ assert(0);
+ }
+}
+
+int main( int argc, char** argv )
+{
+ int arg;
+ size_t i;
+
+ assert(argc == 2);
+ assert(argv[1]);
+ assert(strlen(argv[1]) == 1);
+ assert(argv[1][0] >= '0' && argv[1][0] <= '9');
+ arg = atoi( argv[1] );
+ set_flags( arg );
+
+ create_plain_pool();
+ create_meta_pool();
+
+ // N plain allocs
+ for (i = 0; i < N; ++i) {
+ cells_plain[i] = allocate_plain_style(PlainPool,sizeof(struct cell));
+ }
+
+ // 2*N meta allocs
+ for (i = 0; i < 2 * N; ++i) {
+ cells_meta[i] = allocate_meta_style(MetaPool,sizeof(struct cell));
+ }
+
+ // Leak the memory from the pools by losing the pointers.
+ for (i = 0; i < N; ++i) {
+ cells_plain[i] = NULL;
+ }
+
+ for (i = 0; i < 2 * N; ++i) {
+ cells_meta[i] = NULL;
+ }
+
+ // This must free the MALLOCLIKE allocations from the pool when
+ // VALGRIND_MEMPOOL_AUTO_FREE is set for the pool, and report leaks when not.
+ if (CleanupBeforeExit) {
+ VALGRIND_MEMPOOL_FREE(MetaPool, MetaBlock);
+ VALGRIND_DESTROY_MEMPOOL(MetaPool);
+ }
+
+ // Cleanup.
+ VALGRIND_DESTROY_MEMPOOL(PlainPool);
+
+ return 0;
+}
|
|
From: Mark W. <mj...@re...> - 2016-09-24 20:12:27
|
On Sat, Sep 24, 2016 at 02:04:34PM +0200, Philippe Waroquiers wrote:
> Mark/Ivo,
>
> I am (now?) seeing random failures of helgrind|drd/tests/bar_bad*
> (also now seeing failures in nightly builds).
>
> I have encountered such failures on amd64/debian 8, and on ppc64/gcc110.
>
> I think (not sure) this was working some days/weeks ago.
>
> Any idea?

No, not really. I believe this test never really was deterministically good, but the recent changes to it definitely made it worse.

The committed change stops the test case hanging against newer glibc pthread_barrier implementations by adding a timer that fires after a short delay and cancels the barrier if the test looks like it is hanging. Unfortunately this seems to have made the test case much more nondeterministic than before. It succeeds more often than it fails for me (both on old and new glibc), but it definitely does fail randomly.

See also the (still open) bug report: https://bugs.kde.org/show_bug.cgi?id=358213 |
|
From: <sv...@va...> - 2016-09-24 12:58:36
|
Author: philippe
Date: Sat Sep 24 13:58:29 2016
New Revision: 15983
Log:
Fix warning introduced by revision 15982
Modified:
trunk/coregrind/m_main.c
Modified: trunk/coregrind/m_main.c
==============================================================================
--- trunk/coregrind/m_main.c (original)
+++ trunk/coregrind/m_main.c Sat Sep 24 13:58:29 2016
@@ -2706,7 +2706,7 @@
signal, terminate the entire system with that same fatal signal. */
VG_(debugLog)(1, "core_os",
"VG_(terminate_NORETURN)(tid=%u) schedretcode %s"
- " os_state.exit_code %d fatalsig %d\n",
+ " os_state.exit_code %ld fatalsig %d\n",
tid, VG_(name_of_VgSchedReturnCode)(tids_schedretcode),
VG_(threads)[tid].os_state.exitcode,
VG_(threads)[tid].os_state.fatalsig);
|
|
From: <sv...@va...> - 2016-09-24 12:06:42
|
Author: philippe
Date: Sat Sep 24 13:06:34 2016
New Revision: 15982
Log:
Fix 361615 - Inconsistent termination for multithreaded process terminated by signal
Test program by earl_chew
Added:
trunk/none/tests/pth_term_signal.c
trunk/none/tests/pth_term_signal.stderr.exp
trunk/none/tests/pth_term_signal.vgtest
Modified:
trunk/NEWS
trunk/coregrind/m_main.c
trunk/coregrind/m_scheduler/scheduler.c
trunk/coregrind/m_signals.c
trunk/coregrind/pub_core_scheduler.h
trunk/none/tests/Makefile.am
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Sat Sep 24 13:06:34 2016
@@ -143,6 +143,7 @@
361226 s390x: risbgn (EC59) not implemented
361253 [s390x] ex_clone.c:42: undefined reference to `pthread_create'
361354 ppc64[le]: wire up separate socketcalls system calls
+361615 Inconsistent termination for multithreaded process terminated by signal
361926 Unhandled Solaris syscall: sysfs(84)
362009 Valgrind dumps core on unimplemented functionality before threads are created
362329 Valgrind does not support the IBM POWER ISA 3.0 instructions, part 3/5
Modified: trunk/coregrind/m_main.c
==============================================================================
--- trunk/coregrind/m_main.c (original)
+++ trunk/coregrind/m_main.c Sat Sep 24 13:06:34 2016
@@ -2705,7 +2705,11 @@
sys_exit, do likewise; if the (last) thread stopped due to a fatal
signal, terminate the entire system with that same fatal signal. */
VG_(debugLog)(1, "core_os",
- "VG_(terminate_NORETURN)(tid=%u)\n", tid);
+ "VG_(terminate_NORETURN)(tid=%u) schedretcode %s"
+ " os_state.exit_code %d fatalsig %d\n",
+ tid, VG_(name_of_VgSchedReturnCode)(tids_schedretcode),
+ VG_(threads)[tid].os_state.exitcode,
+ VG_(threads)[tid].os_state.fatalsig);
switch (tids_schedretcode) {
case VgSrc_ExitThread: /* the normal way out (Linux, Solaris) */
Modified: trunk/coregrind/m_scheduler/scheduler.c
==============================================================================
--- trunk/coregrind/m_scheduler/scheduler.c (original)
+++ trunk/coregrind/m_scheduler/scheduler.c Sat Sep 24 13:06:34 2016
@@ -1653,11 +1653,6 @@
}
-/*
- This causes all threads to forceably exit. They aren't actually
- dead by the time this returns; you need to call
- VG_(reap_threads)() to wait for them.
- */
void VG_(nuke_all_threads_except) ( ThreadId me, VgSchedReturnCode src )
{
ThreadId tid;
Modified: trunk/coregrind/m_signals.c
==============================================================================
--- trunk/coregrind/m_signals.c (original)
+++ trunk/coregrind/m_signals.c Sat Sep 24 13:06:34 2016
@@ -1654,8 +1654,8 @@
/*
Perform the default action of a signal. If the signal is fatal, it
- marks all threads as needing to exit, but it doesn't actually kill
- the process or thread.
+ terminates all other threads, but it doesn't actually kill
+ the process or the calling thread.
If we're not being quiet, then print out some more detail about
fatal signals (esp. core dumping signals).
@@ -1933,12 +1933,13 @@
VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
}
- /* stash fatal signal in main thread */
// what's this for?
//VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;
- /* everyone dies */
+ /* everyone but tid dies */
VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
+ VG_(reap_threads)(tid);
+ /* stash fatal signal in this thread */
VG_(threads)[tid].exitreason = VgSrc_FatalSig;
VG_(threads)[tid].os_state.fatalsig = sigNo;
}
Modified: trunk/coregrind/pub_core_scheduler.h
==============================================================================
--- trunk/coregrind/pub_core_scheduler.h (original)
+++ trunk/coregrind/pub_core_scheduler.h Sat Sep 24 13:06:34 2016
@@ -51,7 +51,9 @@
If it isn't blocked in a syscall, has no effect on the thread. */
extern void VG_(get_thread_out_of_syscall)(ThreadId tid);
-/* Nuke all threads except tid. */
+/* This causes all threads except tid to forcibly exit. They aren't actually
+ dead by the time this returns; you need to call
+ VG_(reap_threads)() to wait for them. */
extern void VG_(nuke_all_threads_except) ( ThreadId me,
VgSchedReturnCode reason );
Modified: trunk/none/tests/Makefile.am
==============================================================================
--- trunk/none/tests/Makefile.am (original)
+++ trunk/none/tests/Makefile.am Sat Sep 24 13:06:34 2016
@@ -167,6 +167,7 @@
pth_rwlock.stderr.exp pth_rwlock.vgtest \
pth_stackalign.stderr.exp \
pth_stackalign.stdout.exp pth_stackalign.vgtest \
+ pth_term_signal.stderr.exp pth_term_signal.vgtest \
rcrl.stderr.exp rcrl.stdout.exp rcrl.vgtest \
readline1.stderr.exp readline1.stdout.exp \
readline1.vgtest \
@@ -224,7 +225,7 @@
pselect_sigmask_null \
pth_atfork1 pth_blockedsig pth_cancel1 pth_cancel2 pth_cvsimple \
pth_empty pth_exit pth_exit2 pth_mutexspeed pth_once pth_rwlock \
- pth_stackalign \
+ pth_stackalign pth_term_signal \
rcrl readline1 \
require-text-symbol \
res_search resolv \
@@ -315,6 +316,7 @@
pth_rwlock_CFLAGS += --std=c99
endif
pth_stackalign_LDADD = -lpthread
+pth_term_signal_LDADD = -lpthread
res_search_LDADD = -lresolv -lpthread
resolv_CFLAGS = $(AM_CFLAGS)
resolv_LDADD = -lresolv -lpthread
Added: trunk/none/tests/pth_term_signal.c
==============================================================================
--- trunk/none/tests/pth_term_signal.c (added)
+++ trunk/none/tests/pth_term_signal.c Sat Sep 24 13:06:34 2016
@@ -0,0 +1,91 @@
+#include <stdio.h>
+#include <unistd.h>
+#include <signal.h>
+#include <pthread.h>
+#include <assert.h>
+
+#include <sys/wait.h>
+
+void *
+slavethread(void *arg)
+{
+ sigset_t sigmask;
+
+ if (sigfillset(&sigmask))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ _exit(255);
+ }
+
+ if (pthread_sigmask(SIG_UNBLOCK, &sigmask, 0))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ _exit(255);
+ }
+
+ while (1)
+ sleep(1);
+}
+
+void
+childprocess()
+{
+ pthread_t slave;
+
+ if (pthread_create(&slave, 0, &slavethread, 0))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ _exit(255);
+ }
+
+ while (1)
+ sleep(1);
+}
+
+int main(int argc, char **argv)
+{
+ sigset_t sigmask;
+
+ if (sigfillset(&sigmask))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ return 255;
+ }
+
+ if (pthread_sigmask(SIG_BLOCK, &sigmask, 0))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ return 255;
+ }
+
+ int childpid = fork();
+
+ if (-1 == childpid)
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ return 255;
+ }
+
+ if ( ! childpid)
+ childprocess();
+
+ if (kill(childpid, SIGTERM))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ return 255;
+ }
+
+ int status;
+ if (childpid != waitpid(childpid, &status, 0))
+ {
+ fprintf(stderr, "Error line %u\n", __LINE__);
+ return 255;
+ }
+
+ assert(WIFSIGNALED(status));
+
+ fprintf(stderr, "Signal %d\n", WTERMSIG(status));
+ assert(WTERMSIG(status) == SIGTERM);
+
+ return 0;
+}
Added: trunk/none/tests/pth_term_signal.stderr.exp
==============================================================================
--- trunk/none/tests/pth_term_signal.stderr.exp (added)
+++ trunk/none/tests/pth_term_signal.stderr.exp Sat Sep 24 13:06:34 2016
@@ -0,0 +1 @@
+Signal 15
Added: trunk/none/tests/pth_term_signal.vgtest
==============================================================================
--- trunk/none/tests/pth_term_signal.vgtest (added)
+++ trunk/none/tests/pth_term_signal.vgtest Sat Sep 24 13:06:34 2016
@@ -0,0 +1,2 @@
+prog: pth_term_signal
+vgopts: -q
|
|
From: Philippe W. <phi...@sk...> - 2016-09-24 12:04:45
|
Mark/Ivo, I am (now?) seeing random failures of helgrind|drd/tests/bar_bad* (also now seeing failures in nightly builds). I have encountered such failures on amd64/debian 8, and on ppc64/gcc110. I think (not sure) this was working some days/weeks ago. Any idea? Philippe |