From: Julian S. <js...@ac...> - 2007-11-08 12:55:25
> I assume that valgrind can not track history of accesses to all memory
> addresses with corresponding stacks. Too expensive.

True.

> May be it can keep such history for those addresses where a race has
> been detected?

That could be possible at reasonable cost.

> But it will not help if the memory was accessed only once in the first
> thread.

True.  Oh well.

> Another way is to run valgrind twice -- the first run will act as usual
> and the second run will take the report of the first one and treat
> addresses in the report specially. Though, of course, running valgrind
> twice is not fun.
>
> What I would like to see is:
> ==30205== Possible data race during write of size 4 at 0x5C52028

You can sort-of do this already.  Rerun with --trace-addr=0x5C52028 and
--trace-level=1.  This will give you a 1-line summary for each access to
0x5C52028.  At --trace-level=2 you get a complete stack trace.

I would be interested to know if that is useful.

J
From: Konstantin S. <kon...@gm...> - 2007-11-08 12:30:16
Dear valgrind developers,

I have a question regarding thrcheck.  If thrcheck detects a race it
prints something like this:

==30205== Possible data race during write of size 4 at 0x5C52028
==30205==    at 0x80484D7: main (heap_race.c:18)
==30205==  Old state: owned exclusively by thread #2
==30205==  New state: shared-modified by threads #1, #2
==30205==  Reason: this thread, #1, holds no locks at all

So, the debug info gives us the line of code where the *second* access
to this memory happened, but it does not give any information about the
*first* access.  Of course, in a simple program the first access could
be found by just looking at the code.  If the accessed memory is a
global/static variable, we can run the program under gdb with a
watchpoint.  However, if the memory in question is dynamically allocated
and buried deep inside some structures, we are in trouble.

Do you have any suggestion how to find the first access in the general
case?

I assume that valgrind can not track history of accesses to all memory
addresses with corresponding stacks.  Too expensive.  Maybe it can keep
such history for those addresses where a race has been detected?  But
that will not help if the memory was accessed only once in the first
thread.

Another way is to run valgrind twice -- the first run will act as usual
and the second run will take the report of the first one and treat the
addresses in the report specially.  Though, of course, running valgrind
twice is not fun.

What I would like to see is:

==30205== Possible data race during write of size 4 at 0x5C52028
==30205==    at 0x80484D7: main (heap_race.c:18)
==30205==  Old state: owned exclusively by thread #2
==30205==  New state: shared-modified by threads #1, #2
==30205==  Reason: this thread, #1, holds no locks at all
==30205== One of the previous accesses to 0x5C52028 in thread #2 was
==30205==    at 0x1234567: somewhere (somewhere.c:123)
==30205==    by 0x80484D7: main (heap_race.c:18)

Actually, these two approaches are complementary.

Ideas?  Suggestions?

Thanks,
--kcc
From: Tom H. <th...@cy...> - 2007-11-08 03:31:57
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-11-08 03:15:02 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 287 tests, 33 stderr failures, 1 stdout failure, 27 post failures ==

memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post)
massif/tests/alloc-fns-B (post)
massif/tests/basic (post)
massif/tests/big-alloc (post)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/custom_alloc (post)
massif/tests/deep-A (post)
massif/tests/deep-B (stderr)
massif/tests/deep-B (post)
massif/tests/deep-C (stderr)
massif/tests/deep-C (post)
massif/tests/deep-D (post)
massif/tests/ignoring (post)
massif/tests/insig (post)
massif/tests/long-time (post)
massif/tests/new-cpp (post)
massif/tests/null (post)
massif/tests/one (post)
massif/tests/overloaded-new (post)
massif/tests/peak (post)
massif/tests/peak2 (stderr)
massif/tests/peak2 (post)
massif/tests/realloc (stderr)
massif/tests/realloc (post)
massif/tests/thresholds_0_0 (post)
massif/tests/thresholds_0_10 (post)
massif/tests/thresholds_10_0 (post)
massif/tests/thresholds_10_10 (post)
massif/tests/thresholds_5_0 (post)
massif/tests/thresholds_5_10 (post)
massif/tests/zero1 (post)
massif/tests/zero2 (post)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-11-08 03:24:26
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2007-11-08 03:05:05 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 321 tests, 4 stderr failures, 2 stdout failures, 0 post failures ==

memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-11-08 03:23:47
Nightly build on dellow ( x86_64, Fedora 7 ) started at 2007-11-08 03:10:04 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 321 tests, 4 stderr failures, 3 stdout failures, 0 post failures ==

memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_detached (stdout)
From: Tom H. <th...@cy...> - 2007-11-08 03:18:54
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-11-08 03:00:03 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 323 tests, 6 stderr failures, 1 stdout failure, 0 post failures ==

memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
From: <sv...@va...> - 2007-11-08 02:29:37
Author: sewardj
Date: 2007-11-08 02:29:36 +0000 (Thu, 08 Nov 2007)
New Revision: 7110
Log:
Some well-known open-source software that shall remain nameless
considers it important to do malloc(-1), which causes Thrcheck's
allocator to assert. Detect such attempts and return NULL. Logic is
identical to that in memcheck/mc_malloc_wrappers.c.
Modified:
branches/THRCHECK/thrcheck/tc_main.c
Modified: branches/THRCHECK/thrcheck/tc_main.c
===================================================================
--- branches/THRCHECK/thrcheck/tc_main.c 2007-11-07 11:05:23 UTC (rev 7109)
+++ branches/THRCHECK/thrcheck/tc_main.c 2007-11-08 02:29:36 UTC (rev 7110)
@@ -6999,6 +6999,7 @@
Addr p;
MallocMeta* md;
+ tl_assert( ((SSizeT)szB) >= 0 );
p = (Addr)VG_(cli_malloc)(alignB, szB);
if (!p) {
return NULL;
@@ -7023,23 +7024,33 @@
return (void*)p;
}
+/* Re the checks for less-than-zero (also in tc_cli__realloc below):
+ Cast to a signed type to catch any unexpectedly negative args.
+ We're assuming here that the size asked for is not greater than
+ 2^31 bytes (for 32-bit platforms) or 2^63 bytes (for 64-bit
+ platforms). */
static void* tc_cli__malloc ( ThreadId tid, SizeT n ) {
+ if (((SSizeT)n) < 0) return NULL;
return handle_alloc ( tid, n, VG_(clo_alignment),
/*is_zeroed*/False );
}
static void* tc_cli____builtin_new ( ThreadId tid, SizeT n ) {
+ if (((SSizeT)n) < 0) return NULL;
return handle_alloc ( tid, n, VG_(clo_alignment),
/*is_zeroed*/False );
}
static void* tc_cli____builtin_vec_new ( ThreadId tid, SizeT n ) {
+ if (((SSizeT)n) < 0) return NULL;
return handle_alloc ( tid, n, VG_(clo_alignment),
/*is_zeroed*/False );
}
static void* tc_cli__memalign ( ThreadId tid, SizeT align, SizeT n ) {
+ if (((SSizeT)n) < 0) return NULL;
return handle_alloc ( tid, n, align,
/*is_zeroed*/False );
}
static void* tc_cli__calloc ( ThreadId tid, SizeT nmemb, SizeT size1 ) {
+ if ( ((SSizeT)nmemb) < 0 || ((SSizeT)size1) < 0 ) return NULL;
return handle_alloc ( tid, nmemb*size1, VG_(clo_alignment),
/*is_zeroed*/True );
}
@@ -7093,6 +7104,8 @@
Addr payload = (Addr)payloadV;
+ if (((SSizeT)new_size) < 0) return NULL;
+
md = (MallocMeta*) VG_(HT_lookup)( tc_mallocmeta_table, (UWord)payload );
if (!md)
return NULL; /* apparently realloc-ing a bogus address. Oh well. */
From: <js...@ac...> - 2007-11-08 01:16:54
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-11-08 02:00:01 CET

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 255 tests, 11 stderr failures, 2 stdout failures, 0 post failures ==

memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/deep-C (stderr)
massif/tests/peak2 (stderr)
massif/tests/realloc (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)