From: Konstantin S. <kon...@gm...> - 2008-01-30 18:20:30
|
> My impression of MSMProp1 is that it is like a pure happens-before
> detector, except that it deals with locks using locksets rather than
> by generating happens-before edges in the graph.

I have the same impression :)

> Indeed, I wonder if it is possible to take the formal description of
> MSMProp1 and from it derive the formal description of a pure happens-
> before detector I posted earlier in this thread (Wed Jan 23 14:30:21 2008)
> by (1) removing all locksets from the MSMProp1 description, and
> (2) adding happens-before edges for lock/unlock events. It would
> be interesting to try.

I shall try this... But after all we have exp-drd.
The problem with pure HB is (iiuc) that it has too many false negatives
(not mentioning performance and scalability):

// First:              Second:
// 1. write
// 2. MU.Lock()
// 3. MU.Unlock()
//                     (sleep)
//                     a. MU.Lock()
//                     b. MU.Unlock();
//                     c. write

Pure HB will not see a race between the writes because you have an HB edge
between Unlock() and Lock(). Both MSMHelgrind and MSMProp1 detect this race
(test47), while exp-drd does not.

> I don't think you can have zero false positives and zero false
> negatives at the same time (holy grail ...), because all these state
> machines are different approximations to something which is NP-complete
> (and therefore which we cannot compute exactly).

Oh, sure. I think that it makes sense to have several different machines in
helgrind so that a user can choose the one that fits his application best.
For example, if an application never creates/joins threads (except for
startup/shutdown) and does not use semaphores, cond vars, message queues,
etc., Eraser will be enough. If, on the contrary, the application never uses
locks (but uses barriers, semaphores, message queues, etc.), locksets are
useless. The applications I try to analyze use both locks and HB stuff, so
I need both.

> > We, however, still have this scan-all-memory in
> > shadow_mem_make_NoAccess.
> > The comment says:
> >   7. Modify all shadow words, by removing ToDelete from the lockset
> >      of all ShM and ShR states. Note this involves a complete scan
> >      over map_shmem, which is very expensive...
> >
> > Do we really need this?
> > If we don't do that and if a new lock gets allocated at the same memory
> > location we may miss some very weird race.
> > Any other reason for doing this?
>
> I only did it to be clean / safe. Otherwise I think if you deallocate
> a lock and allocate a new one at the same address, very strange things
> will happen.
>
> Is it really needed? I don't know. But I don't know how to prove/argue
> that it is not needed. Alternative question is, is it possible to do
> this cheaply/incrementally?

I thought about using a unique lock id in locksets instead of Lock*. It
leads to a problem of mapping from the id to Lock* though... But we only
need this while reporting a race.

> There is also a different problem for the scaling of Helgrind to
> very large programs: the Lock structures are freed when locks are
> destroyed, but Segment and Thread are never freed. So a program
> which generates a very large number of Segments, or Threads, will
> eventually cause Helgrind to run out of memory. I can see that
> Helgrind's storage management strategy will need to be redesigned
> at some future point.

Yea... Perhaps we could garbage-collect old segments and, if some SVal
still refers to a deleted one, behave conservatively. Not sure about
threads... If a program creates more than 1M threads, we are in trouble
performance-wise anyway.

> Are you intending to make a revised MSMProp1 patch that can deal
> with larger programs?

Certainly. Right now it is ~60% slower than MSMHelgrind, but there are
still things to improve there.

I've been trying both machines on some of my applications. MSMProp1
reports ~30% more races. Those extra races look similar to test10. Most
(but not all!) of them look harmless, which makes me think about
classifying data races somehow. See e.g.
http://www.eecs.umich.edu/~nsatish/DataRace-PLDI07.html.
Also with MSMProp1 I see fewer false positives on my apps.

> I would like to compare it to MSMHelgrind on OpenOffice and Firefox.

Trying to run OpenOffice and Firefox on my machine leads to this:

--1052-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
--1052-- si_code=1;  Faulting address: 0x0;  sp: 0x478CDF4
valgrind: the 'impossible' happened:
   Killed by fatal signal
==1052==    at 0x3801979C: vgPlain_strlen (m_libcbase.c:232)
==1052==    by 0x38010941: evh__pre_mem_read_asciiz (hg_main.c:5763)
==1052==    by 0x38037D47: vgPlain_client_syscall (syswrap-main.c:850)
==1052==    by 0x38035899: vgPlain_scheduler (scheduler.c:790)
==1052==    by 0x38048C0D: run_a_thread_NORETURN (syswrap-linux.c:89)

I have better luck with konqueror, I'll try to compare the two machines on it.

Any set of *unit* tests won't be enough to evaluate new machines, but it
would be very useful to make the first steps. I beg everyone interested to
contribute to racecheck_unittest or similar.

--kcc
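The contrast described in this message can be made concrete with a toy sketch. The following Python code (an illustration only; the trace encoding, vector clocks, and the simplified lockset rule are inventions of this sketch, not Helgrind's implementation) runs both detection styles over the test47-style interleaving: the pure happens-before detector is silenced by the unlock/lock edge, while the lockset-style check still flags the two unprotected writes.

```python
from collections import defaultdict

def join(a, b):
    return {t: max(a.get(t, 0), b.get(t, 0)) for t in set(a) | set(b)}

def ordered_before(a, b):
    # vector clock a happens-before (or equals) vector clock b
    return all(v <= b.get(t, 0) for t, v in a.items())

def pure_hb_races(trace):
    """Pure happens-before: lock/unlock generate HB edges via per-lock clocks."""
    tvc = defaultdict(dict)   # thread -> vector clock
    lvc = defaultdict(dict)   # lock   -> vector clock
    last_write = {}           # location -> vector clock at the last write
    races = []
    for tid, op, obj in trace:
        tvc[tid][tid] = tvc[tid].get(tid, 0) + 1
        if op == "lock":
            tvc[tid] = join(tvc[tid], lvc[obj])
        elif op == "unlock":
            lvc[obj] = join(lvc[obj], tvc[tid])
        elif op == "write":
            prev = last_write.get(obj)
            if prev is not None and not ordered_before(prev, tvc[tid]):
                races.append(obj)
            last_write[obj] = dict(tvc[tid])
    return races

def lockset_races(trace):
    """Lockset flavour: locks contribute locksets, not HB edges (this trace
    has no other sync events, so all cross-thread accesses are concurrent)."""
    held = defaultdict(frozenset)
    history = defaultdict(list)   # location -> [(thread, lockset held)]
    races = []
    for tid, op, obj in trace:
        if op == "lock":
            held[tid] |= {obj}
        elif op == "unlock":
            held[tid] -= {obj}
        else:
            if any(t != tid and not (ls & held[tid]) for t, ls in history[obj]):
                races.append(obj)
            history[obj].append((tid, held[tid]))
    return races

# The test47-style interleaving: the sleep forces T2's Lock() after T1's Unlock()
trace = [("T1", "write", "x"),
         ("T1", "lock", "mu"), ("T1", "unlock", "mu"),
         ("T2", "lock", "mu"), ("T2", "unlock", "mu"),
         ("T2", "write", "x")]

print(pure_hb_races(trace))   # [] : the unlock->lock edge hides the race
print(lockset_races(trace))   # ['x'] : neither write held mu
```

Running both detectors on the same trace shows the false negative in the pure HB approach without any nondeterminism: the lock operations order the two writes in the HB graph even though no lock protects either write.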
|
From: Julian S. <js...@ac...> - 2008-01-30 12:14:34
|
On Wednesday 30 January 2008 03:05, Konstantin Serebryany wrote:
> Few observations regarding MSMProp1:
>
> When in exclusive state (i.e. state is Read or Write and #SS == 1) and we
> access the memory from the same thread, HB(SS,currS) is always true.
> This means that each access that leaves us in the same thread rewrites the
> shadow value with SS={currS}; LS=currLS.
> In other words, while in exclusive state we do not track the history, but
> only store information about the last access.
> This helps us to avoid false positives on tests like test{37,43,44,45},
> while still reporting a (true) race on test10.
My impression of MSMProp1 is that it is like a pure happens-before
detector, except that it deals with locks using locksets rather than
by generating happens-before edges in the graph.
Indeed, I wonder if it is possible to take the formal description of
MSMProp1 and from it derive the formal description of a pure happens-
before detector I posted earlier in this thread (Wed Jan 23 14:30:21 2008)
by (1) removing all locksets from the MSMProp1 description, and
(2) adding happens-before edges for lock/unlock events. It would
be interesting to try.
> Both MSMProp1 and MSMHelgrind have a false negative on test46:
> //
> // First:              Second:
> // 1. write
> // 2. MU.Lock()
> // 3. write
> // 4. MU.Unlock()
> //                     (sleep)
> //                     a. MU.Lock()
> //                     b. write
> //                     c. MU.Unlock();
I don't think you can have zero false positives and zero false
negatives at the same time (holy grail ...), because all these state
machines are different approximations to something which is NP-complete
(and therefore which we cannot compute exactly).
> With MSMProp1 we no longer need to scan all memory at thread_join.
Good.
> We, however, still have this scan-all-memory in shadow_mem_make_NoAccess.
> The comment says:
> 7. Modify all shadow words, by removing ToDelete from the lockset
> of all ShM and ShR states. Note this involves a complete scan
> over map_shmem, which is very expensive...
>
> Do we really need this?
> If we don't do that and if a new lock gets allocated at the same memory
> location we may miss some very weird race.
> Any other reason for doing this?
I only did it to be clean / safe. Otherwise I think if you deallocate
a lock and allocate a new one at the same address, very strange things
will happen.
Is it really needed? I don't know. But I don't know how to prove/argue
that it is not needed. Alternative question is, is it possible to do
this cheaply/incrementally?
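A cheap alternative along the lines asked about here might look like the following sketch (hypothetical: the `LockTable` class and its method names are invented for illustration and are not Helgrind code). Each lock gets a unique id that is never reused; destroying a lock leaves stale ids behind in shadow locksets, and they are filtered out lazily, only at the point where a race report needs to name the locks:

```python
class LockTable:
    """Hypothetical id-based lock registry; ids are never reused, so a stale
    id in a shadow lockset can never be confused with a new lock that happens
    to be allocated at the same address."""
    def __init__(self):
        self.next_id = 0
        self.addr_of = {}            # live lock id -> lock address

    def on_lock_init(self, addr):
        lid = self.next_id
        self.next_id += 1
        self.addr_of[lid] = addr
        return lid

    def on_lock_destroy(self, lid):
        del self.addr_of[lid]        # no scan over shadow memory needed

    def live_subset(self, lockset):
        # Called only when reporting a race, to resolve ids back to locks.
        return {lid: self.addr_of[lid] for lid in lockset if lid in self.addr_of}
```

The point of the indirection is exactly the trade-off discussed above: lock destruction becomes O(1) at the cost of an id-to-Lock* mapping that is consulted only on the (rare) race-report path.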
There is also a different problem for the scaling of Helgrind to
very large programs: the Lock structures are freed when locks are
destroyed, but Segment and Thread are never freed. So a program
which generates a very large number of Segments, or Threads, will
eventually cause Helgrind to run out of memory. I can see that
Helgrind's storage management strategy will need to be redesigned
at some future point.
Are you intending to make a revised MSMProp1 patch that can deal
with larger programs? I would like to compare it to MSMHelgrind
on OpenOffice and Firefox.
J
|
|
From: Tom H. <th...@cy...> - 2008-01-30 04:03:40
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-01-30 03:15:07 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 339 tests, 84 stderr failures, 1 stdout failure, 29 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/lsframe1 (stderr) memcheck/tests/lsframe2 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-names (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post)
none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc04_free_lock (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc10_rec_lock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc13_laog1 (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr)
exp-drd/tests/fp_race (stderr) exp-drd/tests/fp_race2 (stderr) exp-drd/tests/matinv (stderr) exp-drd/tests/pth_barrier (stderr) exp-drd/tests/pth_broadcast (stderr) exp-drd/tests/pth_cond_race (stderr) exp-drd/tests/pth_cond_race2 (stderr) exp-drd/tests/pth_create_chain (stderr) exp-drd/tests/pth_detached (stderr) exp-drd/tests/pth_detached2 (stderr) exp-drd/tests/sem_as_mutex (stderr) exp-drd/tests/sem_as_mutex2 (stderr) exp-drd/tests/sigalrm (stderr) exp-drd/tests/tc17_sembar (stderr) exp-drd/tests/tc18_semabuse (stderr)
|
From: Tom H. <th...@cy...> - 2008-01-30 03:46:50
|
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-01-30 03:20:04 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 15 stderr failures, 1 stdout failure, 0 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr) exp-drd/tests/sem_as_mutex2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 15 stderr failures, 1 stdout failure, 1 post failure ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr) exp-drd/tests/sem_as_mutex2 (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 30 03:33:28 2008
--- new.short Wed Jan 30 03:46:52 2008
***************
*** 8,10 ****
! == 375 tests, 15 stderr failures, 1 stdout failure, 1 post failure ==
memcheck/tests/addressable (stderr)
--- 8,10 ----
! == 375 tests, 15 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/addressable (stderr)
***************
*** 18,20 ****
memcheck/tests/xml1 (stderr)
- massif/tests/long-names (post)
none/tests/blockfault (stderr)
--- 18,19 ----
|
From: Tom H. <th...@cy...> - 2008-01-30 03:39:06
|
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-01-30 03:25:04 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 8 stderr failures, 5 stdout failures, 0 post failures ==

memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 8 stderr failures, 5 stdout failures, 1 post failure ==

memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr)
massif/tests/long-names (post)
none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 30 03:32:20 2008
--- new.short Wed Jan 30 03:39:09 2008
***************
*** 8,10 ****
! == 373 tests, 8 stderr failures, 5 stdout failures, 1 post failure ==
memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 373 tests, 8 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
***************
*** 14,16 ****
memcheck/tests/x86/scalar (stderr)
- massif/tests/long-names (post)
none/tests/cmdline1 (stdout)
--- 14,15 ----
|
From: Tom H. <th...@cy...> - 2008-01-30 03:38:30
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-01-30 03:05:07 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 8 stderr failures, 2 stdout failures, 1 post failure ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 30 03:20:06 2008
--- new.short Wed Jan 30 03:38:31 2008
***************
*** 8,10 ****
! == 373 tests, 8 stderr failures, 2 stdout failures, 1 post failure ==
memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 373 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
***************
*** 14,16 ****
memcheck/tests/xml1 (stderr)
- massif/tests/long-names (post)
none/tests/mremap (stderr)
--- 14,15 ----
|
From: Tom H. <th...@cy...> - 2008-01-30 03:36:57
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-01-30 03:10:04 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 9 stderr failures, 3 stdout failures, 0 post failures ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 9 stderr failures, 3 stdout failures, 1 post failure ==

memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_detached (stdout)
helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 30 03:18:53 2008
--- new.short Wed Jan 30 03:27:26 2008
***************
*** 8,10 ****
! == 373 tests, 9 stderr failures, 3 stdout failures, 1 post failure ==
memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 373 tests, 9 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
***************
*** 14,19 ****
memcheck/tests/xml1 (stderr)
- massif/tests/long-names (post)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
! none/tests/pth_detached (stdout)
helgrind/tests/tc18_semabuse (stderr)
--- 14,18 ----
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
! none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr)
|
From: Tom H. <th...@cy...> - 2008-01-30 03:15:34
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-01-30 03:00:03 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 30 stderr failures, 1 stdout failure, 0 post failures ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr)
none/tests/blockfault (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 30 stderr failures, 1 stdout failure, 1 post failure ==

memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr)
massif/tests/long-names (post)
none/tests/blockfault (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 30 03:07:27 2008
--- new.short Wed Jan 30 03:15:19 2008
***************
*** 8,10 ****
! == 375 tests, 30 stderr failures, 1 stdout failure, 1 post failure ==
memcheck/tests/addressable (stderr)
--- 8,10 ----
! == 375 tests, 30 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/addressable (stderr)
***************
*** 19,21 ****
memcheck/tests/x86/scalar_supp (stderr)
- massif/tests/long-names (post)
none/tests/blockfault (stderr)
--- 19,20 ----
|
From: Konstantin S. <kon...@gm...> - 2008-01-30 02:06:08
|
A few observations regarding MSMProp1:
When in exclusive state (i.e. state is Read or Write and #SS == 1) and we
access the memory from the same thread, HB(SS,currS) is always true.
This means that each access from the same thread rewrites the
shadow value with SS={currS}; LS=currLS.
In other words, while in exclusive state we do not track the history, but
only store information about the last access.
This helps us to avoid false positives on tests like test{37,43,44,45},
while still reporting a (true) race on test10.
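The overwrite rule described here can be written down as a small sketch (a guess at the shapes involved: shadow values are modelled as (segment-set, lockset) pairs, the function and parameter names are invented, and the non-exclusive branch is an assumption of this sketch, not MSMProp1's actual definition):

```python
def on_access(shadow, curr_seg, curr_ls, happens_before):
    """Sketch of the exclusive-state shortcut: an ordered access in the
    exclusive state (#SS == 1) simply overwrites the shadow value, so no
    history is tracked there."""
    SS, LS = shadow
    if len(SS) == 1 and all(happens_before(s, curr_seg) for s in SS):
        return ({curr_seg}, set(curr_ls))      # SS={currS}; LS=currLS
    # Assumed non-exclusive behaviour: accumulate segments, intersect locksets.
    return (SS | {curr_seg}, LS & curr_ls)

# Same-thread accesses are always ordered, so HB(SS, currS) holds:
always_hb = lambda a, b: True
state = on_access(({"s1"}, {"mu"}), "s2", {"nu"}, always_hb)
# state == ({"s2"}, {"nu"}): the previous segment and lockset are forgotten
```

This makes the trade-off visible: in the exclusive state only the last access survives, which is exactly why the false positives of test{37,43,44,45} disappear while the true race of test10 is still caught once a second thread enters the picture.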
Both MSMProp1 and MSMHelgrind have a false negative on test46:
//
// First:              Second:
// 1. write
// 2. MU.Lock()
// 3. write
// 4. MU.Unlock()
//                     (sleep)
//                     a. MU.Lock()
//                     b. write
//                     c. MU.Unlock();
With MSMProp1 we no longer need to scan all memory at thread_join.
We, however, still have this scan-all-memory in shadow_mem_make_NoAccess.
The comment says:
7. Modify all shadow words, by removing ToDelete from the lockset
of all ShM and ShR states. Note this involves a complete scan
over map_shmem, which is very expensive...
Do we really need this?
If we don't do that and if a new lock gets allocated at the same memory
location we may miss some very weird race.
Any other reason for doing this?
Thanks,
--kcc
On Jan 24, 2008 12:49 PM, Konstantin Serebryany <
kon...@gm...> wrote:
>
>
> On Jan 24, 2008 11:22 PM, Konstantin Serebryany <
> kon...@gm...> wrote:
>
> > > I see you changed the "Race if ..." condition in MSMProp1 from "LS={}"
> > > to "LS={} and #SS > 1", yes?
> > >
> >
> > Yep. That's sort of obvious. There is no race in exclusive state.
> >
> >
> > >
> > > I tried to summarise the MSMProp1 state machine, so as to get a
> > > clearer idea of what behaviour it allows and does not allow. But
> > > really I am guessing. Can you fix/refine this summary? You must
> > > have some intuition of what the state machine allows/disallows.
> > >
> > > J
> > >
> > >
> > > * define a "synchronization event" as any of the following:
> > > semaphore post-waits
> > > condition variable signal-wait when the waiter blocks
> > > thread creation
> > > thread joinage
> > > a barrier
> > >
> >
> > There are several primary "synchronization events":
> > semaphore post-waits
> > condition variable signal-wait when the waiter blocks or the
> > while-wait loop is annotated
> > thread creation/joinage
> >
> > and various possible secondary (defined via primary) events:
> > message queue (aka PCQ), defined through semaphore.
> > barrier, defined through condition variable
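The first of those secondary events, a message queue (PCQ) defined through a semaphore, might be sketched like this (the `PCQ` name and API are invented for illustration): each put() posts the semaphore, which creates the happens-before edge to the get() that consumes the item.

```python
import threading
from collections import deque

class PCQ:
    """Producer-consumer queue: the semaphore post in put() establishes a
    happens-before edge to the semaphore wait in get()."""
    def __init__(self):
        self.items = deque()
        self.mu = threading.Lock()
        self.avail = threading.Semaphore(0)

    def put(self, item):
        with self.mu:
            self.items.append(item)
        self.avail.release()        # semaphore post

    def get(self):
        self.avail.acquire()        # semaphore wait: blocks until a post
        with self.mu:
            return self.items.popleft()
```

Because the queue is defined entirely in terms of a semaphore (plus a mutex for the deque itself), a detector that understands semaphore post-wait edges automatically understands the queue, which is the point of calling it a secondary event.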
> >
> >
> >
> > > * synchronisation events partition the execution of a threaded program
> > > into a set of segments which have a partial ordering, called the
> > > the "happens-before" ordering.
> > >
> > > * a location may be read by a segment S, without a protecting lock,
> > > only if all writes to it happened-before S
> > >
> > > * a location may be written by a segment S, without a protecting
> > > lock, only if all reads and all writes to it happened-before S
> > >
> > > * a location may be read by a segment S, using a protecting lock, only
> > > if all writes to it which are concurrent (not-happens-before) use
> > > the same protecting lock
> > >
> > > * a location may be read by a segment S, using a protecting lock,
> > > only if all reads and all writes to it which are concurrent
> > > (not happens-before) use the same protecting lock
> > >
> >
> >
> > I could not express it better! I'll borrow this description for the wiki
> > :)
> >
> >
> > It also helps to see what this machine can not do.
> > Good examples are test38 and test40 which differ only by calls to
> > sleep().
> > test38 is not handled by this machine, while test40 works fine.
>
>
>
> Ehmm.
> The last bullet should probably be 'a location may be *written* by a
> segment S'
> And actually it is not correct. For test38 this bullet is true, but we
> still have a false positive.
> Same for reads and test28.
>
>
> --kcc
>
|