From: Phil L. <plo...@sa...> - 2013-05-14 20:31:43
According to the Helgrind manual, "When a mutex is unlocked by thread T1 and later (or immediately) locked by thread T2, then the memory accesses in T1 prior to the unlock must happen-before those in T2 after it acquires the lock". There are two possible ways to interpret this.

Scenario #1:

    int shared_value;
    pthread_mutex_t lock;

    Thread 1:
        pthread_mutex_lock(&lock);
        shared_value = 10;
        pthread_mutex_unlock(&lock);

    Thread 2:
        pthread_mutex_lock(&lock);
        ... = shared_value;
        pthread_mutex_unlock(&lock);

For this, no data race on shared_value should be reported.

Scenario #2:

    pthread_mutex_t lock;
    int* shared_ptr;

    Thread 1:
        int* my_ptr = new int;
        *my_ptr = 10;
        pthread_mutex_lock(&lock);
        shared_ptr = my_ptr;
        pthread_mutex_unlock(&lock);

    Thread 2:
        pthread_mutex_lock(&lock);
        int* my_ptr = shared_ptr;
        pthread_mutex_unlock(&lock);
        ... = *my_ptr;

In this case there should be no data race on shared_ptr, just as in scenario #1. But will Helgrind also report a data race on the memory shared_ptr points to, or is memory only deemed safe while the mutex is held?

Phil

-----
Phil Longstaff
Senior Software Engineer
x2904
From: Roland M. <rol...@nr...> - 2013-05-14 02:28:56
On Thu, Apr 25, 2013 at 1:42 PM, Sebastian Feld <seb...@gm...> wrote:
> On Wed, Apr 24, 2013 at 11:10 PM, Roland Mainz <rol...@nr...> wrote:
>> On Wed, Apr 24, 2013 at 10:14 PM, Roland Mainz <rol...@nr...> wrote:
>>> On Wed, Apr 24, 2013 at 12:45 AM, John Reiser <jr...@bi...> wrote:
>>>>> Does valgrind provide any replacements for glibc's
>>>>> |__malloc_initialize_hook()| ? It seems this call and its |*hook*()|
>>>>> siblings are deprecated now (at least in SuSE >=12.3) ...
>>>>
>>>> There is no glibc replacement. [And the reasoning is correct.]
>>>> There is no valgrind replacement.
>>>> You must change your basic approach.
>>>>
>>>> We went through this just 6 months ago.
>>>> Check the archives of this mailing list:
>>>>
>>>> [Valgrind-users] __malloc_hook
>>>> Amir Szekely <ki...@gm...>
>>>> 10/19/2012
>>>>
>>>> That thread contains code that works.
>>>> [The modification to detect the first use is obvious.]
>>>
>>> Grumpf... I tried that... but the way the stuff I'd like to
>>> instrument+debug is built+used makes that solution more or less
>>> impossible (for example... the allocator system lives in a separate
>>> namespace, i.e. it has |malloc()| && |free()| etc., but all symbols are
>>> prefixed with |_ast|, e.g. |_ast_malloc()|, |_ast_free()| etc.).
>>>
>>> I tried to work around the issues with the API provided in
>>> <valgrind/valgrind.h> ... but it seems this doesn't detect any
>>> read-from-unallocated etc. or even the plain double-free situations
>>> (patch below) ... erm... is the API around
>>> |VALGRIND_MALLOCLIKE_BLOCK()| known to work in valgrind-3.8.1 ?
>>> -- snip --
>>> --- src/lib/libast/vmalloc/vmbest.c 2012-06-28 22:12:14.000000000 +0200
>>> +++ src/lib/libast/vmalloc/vmbest.c 2013-04-24 03:03:44.207373019 +0200
>>> @@ -10,40 +10,42 @@
>>> *  http://www.eclipse.org/org/documents/epl-v10.html  *
>>> *  (with md5 checksum b35adb5213ca9657e911e9befb180842)  *
>>> *
>>> *  Information and Software Systems Research  *
>>> *  AT&T Research  *
>>> *  Florham Park NJ  *
>>> *
>>> *  Glenn Fowler <gs...@re...>  *
>>> *  David Korn <dg...@re...>  *
>>> *  Phong Vo <kp...@re...>  *
>>> *
>>> ***********************************************************************/
>>> #if defined(_UWIN) && defined(_BLD_ast)
>>>
>>> void _STUB_vmbest(){}
>>>
>>> #else
>>>
>>> #include "vmhdr.h"
>>>
>>> +#include <valgrind/valgrind.h>
>>> +
>>> /* Best-fit allocation method. This is based on a best-fit strategy
>>> ** using a splay tree for storage of lists of free blocks of the same
>>> ** size. Recent free blocks may be cached for fast reuse.
>>> **
>>> ** Written by Kiem-Phong Vo, kp...@re..., 01/16/94.
>>> */
>>>
>>> #ifdef DEBUG
>>> static int N_free;    /* # of free calls */
>>> static int N_alloc;   /* # of alloc calls */
>>> static int N_resize;  /* # of resize calls */
>>> static int N_wild;    /* # allocated from the wild block */
>>> static int N_last;    /* # allocated from last free block */
>>> static int N_reclaim; /* # of bestreclaim calls */
>>> #endif /*DEBUG*/
>>>
>>> #define COMPACT 8 /* factor to decide when to compact */
>>>
>>> /* Check to see if a block is in the free tree */
>>> #if __STD_C
>>> @@ -692,41 +694,44 @@
>>>
>>> if(VMWILD(vd,np))
>>> { SIZE(np) &= ~BITS;
>>> SELF(np) = np;
>>> ap = NEXT(np); /**/ASSERT(ISBUSY(SIZE(ap)));
>>> SETPFREE(SIZE(ap));
>>> vd->wild = np;
>>> }
>>> else vd->free = np;
>>> }
>>>
>>> SETBUSY(SIZE(tp));
>>> }
>>>
>>> done:
>>> if(tp && !local && (vd->mode&VM_TRACE) && _Vmtrace &&
>>> VMETHOD(vd) == VM_MTBEST)
>>> (*_Vmtrace)(vm,NIL(Vmuchar_t*),(Vmuchar_t*)DATA(tp),orgsize,0);
>>>
>>> CLRLOCK(vm,local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> - return tp ? DATA(tp) : NIL(Void_t*);
>>> + void *res= tp ? DATA(tp) : NIL(Void_t*);
>>> + if (!local)
>>> +     VALGRIND_MALLOCLIKE_BLOCK(res, size, 0, 0);
>>> + return res;
>>> }
>>>
>>> #if __STD_C
>>> static long bestaddr(Vmalloc_t* vm, Void_t* addr, int local )
>>> #else
>>> static long bestaddr(vm, addr, local)
>>> Vmalloc_t* vm; /* region allocating from */
>>> Void_t* addr; /* address to check */
>>> int local;
>>> #endif
>>> {
>>> reg Seg_t* seg;
>>> reg Block_t *b, *endb;
>>> reg long offset;
>>> reg Vmdata_t* vd = vm->data;
>>>
>>> /**/ASSERT(local ? (vd->lock == 1) : 1 );
>>> SETLOCK(vm, local);
>>>
>>> offset = -1L; b = endb = NIL(Block_t*);
>>> @@ -816,40 +821,43 @@
>>> vd->free = bp;
>>> else
>>> { /**/ASSERT(!vmonlist(CACHE(vd)[S_CACHE], bp) );
>>> LINK(bp) = CACHE(vd)[S_CACHE];
>>> CACHE(vd)[S_CACHE] = bp;
>>> }
>>>
>>> /* coalesce on freeing large blocks to avoid fragmentation */
>>> if(SIZE(bp) >= 2*vd->incr)
>>> { bestreclaim(vd,NIL(Block_t*),0);
>>> if(vd->wild && SIZE(vd->wild) >= COMPACT*vd->incr)
>>> KPVCOMPACT(vm,bestcompact);
>>> }
>>> }
>>>
>>> if(!local && _Vmtrace && (vd->mode&VM_TRACE) && VMETHOD(vd) == VM_MTBEST )
>>> (*_Vmtrace)(vm,(Vmuchar_t*)data,NIL(Vmuchar_t*), (s&~BITS), 0);
>>>
>>> CLRLOCK(vm, local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> + if (!local)
>>> +     VALGRIND_FREELIKE_BLOCK(data, 0);
>>> +
>>> return 0;
>>> }
>>>
>>> #if __STD_C
>>> static Void_t* bestresize(Vmalloc_t* vm, Void_t* data, reg size_t size, int type, int local)
>>> #else
>>> static Void_t* bestresize(vm, data, size, type, local)
>>> Vmalloc_t* vm; /* region allocating from */
>>> Void_t* data; /* old block of data */
>>> reg size_t size; /* new size */
>>> int type; /* !=0 to move, <0 for not copy */
>>> int local;
>>> #endif
>>> {
>>> reg Block_t *rp, *np, *t;
>>> size_t s, bs;
>>> size_t oldz = 0, orgsize = size;
>>> Void_t *oldd = 0, *orgdata = data;
>>> Vmdata_t *vd = vm->data;
>>>
>>> @@ -936,40 +944,46 @@
>>> { if(type&VM_RSCOPY)
>>> memcpy(data, oldd, bs);
>>>
>>> do_free: /* reclaim these right away */
>>> SETJUNK(SIZE(rp));
>>> LINK(rp) = CACHE(vd)[S_CACHE];
>>> CACHE(vd)[S_CACHE] = rp;
>>> bestreclaim(vd, NIL(Block_t*), S_CACHE);
>>> }
>>> }
>>> }
>>>
>>> if(data && (type&VM_RSZERO) && (size = SIZE(BLOCK(data))&~BITS) > oldz )
>>> memset((Void_t*)((Vmuchar_t*)data + oldz), 0, size-oldz);
>>>
>>> if(!local && _Vmtrace && data && (vd->mode&VM_TRACE) && VMETHOD(vd) == VM_MTBEST)
>>> (*_Vmtrace)(vm, (Vmuchar_t*)orgdata, (Vmuchar_t*)data, orgsize, 0);
>>>
>>> CLRLOCK(vm, local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> + if (!local)
>>> + {
>>> +     VALGRIND_FREELIKE_BLOCK(orgdata, 0);
>>> +     VALGRIND_MALLOCLIKE_BLOCK(data, size, 0, 0);
>>> + }
>>> +
>>> return data;
>>> }
>>>
>>> #if __STD_C
>>> static long bestsize(Vmalloc_t* vm, Void_t* addr, int local )
>>> #else
>>> static long bestsize(vm, addr, local)
>>> Vmalloc_t* vm; /* region allocating from */
>>> Void_t* addr; /* address to check */
>>> int local;
>>> #endif
>>> {
>>> Seg_t *seg;
>>> Block_t *b, *endb;
>>> long size;
>>> Vmdata_t *vd = vm->data;
>>>
>>> SETLOCK(vm, local);
>>>
>>> size = -1L;
>>> -- snip --
>>
>> ... aaand more digging: I found
>> http://code.google.com/p/valgrind-variant/source/browse/trunk/valgrind/coregrind/m_replacemalloc/vg_replace_malloc.c#1175
>> which seems to be from one of the valgrind forks... what about
>> taking up that idea and providing a command-line option called
>> --allocator-sym-redirect which works by passing down a small list of
>> symbol mappings to instruct valgrind that it should monitor some extra
>> allocators.
>>
>> Example:
>> $ valgrind "--allocator-sym-redirect=sh_malloc=malloc,sh_free=free,sh_calloc=calloc" ... #
>> would instruct valgrind to take function |sh_malloc()| as an
>> alternative |malloc()|, |sh_free()| as an alternative |free()| version,
>> etc. etc.
>>
>> The only issue is that if multiple allocators are active within a
>> single process we may need some kind of "grouping" to explain to valgrind
>> that memory allocated by |sh_malloc()| cannot be freed by |tcfree()|
>> or |_ast_free()| ... maybe it could be done using '{'- and '}'-pairs,
>> e.g.
>> $ valgrind "--allocator-sym-redirect={sh_malloc=malloc,sh_free=free,sh_calloc=calloc},{_ast_malloc=malloc,_ast_free=free,_ast_calloc=calloc}" ... #
>
> The idea of (finally!) providing such an option sounds like a very
> good idea. Until now the only way to probe python and bash4 via
> valgrind is to poke in the valgrind sources (which should never
> happen).
>
> I also think the idea of letting valgrind detect the mixing of different
> allocators is a very valuable feature, since this has been a source of
> more and more bugs. It usually happens in complex projects which use many
> different shared libraries, each with its own memory allocator.

Uhm... was there any feedback yet on that idea?

----
Bye,
Roland

--
  __ .  . __
 (o.\ \/ /.o) rol...@nr...
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)