From: Konstantin S. <kon...@gm...> - 2008-01-23 18:25:51
|
Hi Bart,

> I had a look at the file thread_wrappers_pthread.h. I was surprised to
> find a class with the name Mutex that has both a mutex and a condition
> variable as members.

This is made to implement methods like LockWhen, which are equivalent to
the sequence MU.Lock(); while (cond == false) CV.Wait(); ANNOTATE_COND_WAIT.
Consider it syntactic sugar. If you don't use methods like LockWhen,
Mutex/CondVar is *exactly* the same as pthread_mutex_t/pthread_cond_t.

> Are you aware of the monitor concept that was defined by C.A.R. Hoare ?
> See e.g. http://en.wikipedia.org/wiki/Monitor_(synchronization)

Sort of :)

> Furthermore, using a condition variable by itself (class CondVar) is
> *very* error prone and can easily introduce race conditions. A pthread
> condition variable should always be used in combination with a
> pthread mutex.

Sure. CondVar::Signal()/Wait() should always be called under a Mutex. If
this is not the case, the test is incorrect.

--kcc
|
|
From: Bart V. A. <bar...@gm...> - 2008-01-23 18:02:54
|
Hello Konstantin,

I had a look at the file thread_wrappers_pthread.h. I was surprised to
find a class with the name Mutex that has both a mutex and a condition
variable as members. Are you aware of the monitor concept that was
defined by C.A.R. Hoare ? See e.g.
http://en.wikipedia.org/wiki/Monitor_(synchronization)

Furthermore, using a condition variable by itself (class CondVar) is
*very* error prone and can easily introduce race conditions. A pthread
condition variable should always be used in combination with a pthread
mutex.

Bart.
|
|
From: Konstantin S. <kon...@gm...> - 2008-01-23 14:41:45
|
> > > * In MSMProp1, isn't "Read" state similar to "ShR" and "Write"
> > > similar to "ShM" ?
> >
> > Similar, but not the same. Hence the different name to avoid phrases like
> > 'ShR from MSMHelgrind is different from ShR from MSMProp1 ... '
> > Also Sh in ShR/ShM means 'shared' while Read/Write are not necessarily
> > shared.
>
> Fair enough. If I ignore the HB(SS,currS), then is it approximately
> correct to understand that "Write" means "last access was a write" and
> "Read" means "last access was a read" ?

Emm, 'Read' means that if there were writes to this memory location they
all 'happened before' one of the reads. 'Write' means there were writes
and this is not 'Read'.

> > > * I like that MSMProp1 removes E9/10/11/12 from MSMHelgrind.
> > > Those scanned all of memory and were potentially very expensive.
> >
> > We move this cost to handle_read/write, at least partially. But yes, I like
> > it too.
>
> You mean into the SS_update and LS_update "procedures" ? At least it
> is lazy, though -- presumably the cost only happens for locations which
> are later accessed.

Yes.

> > Not sure how to express it graphically in a more clear way...
>
> I would find the table a little clearer if it was like this:
>
> Edge OldState AccessType Condition NewState NewSegSet NewLockSet RaceIf

Sounds nice.

> Then the inputs to the FSM are clearly separated into cols 2,3,4 and
> the outputs to cols 5,6,7,8.
>
> Also, I was unclear if the transition rules as given exactly cover the
> possible input space exactly once?
> Or do I have to read from top to bottom and use the first rule that
> matches?

Yes. Sorry for the ambiguity. I'll try to address it.

> At the moment I get the
> impression there is a hidden dependency, that E7 only applies if
> E6 does not apply -- I cannot just look at E7 by itself, and the input
> data I have, and see if it applies or not.

> > By 'pure happens-before detector' you understand the one which creates
> > happens-before relation after lock/unlock?
>
> Yes. Details posted already.
>
> J
|
|
From: Julian S. <js...@ac...> - 2008-01-23 14:18:45
|
> > * In MSMProp1, isn't "Read" state similar to "ShR" and "Write"
> > similar to "ShM" ?
>
> Similar, but not the same. Hence the different name to avoid phrases like
> 'ShR from MSMHelgrind is different from ShR from MSMProp1 ... '
> Also Sh in ShR/ShM means 'shared' while Read/Write are not necessary
> shared.

Fair enough. If I ignore the HB(SS,currS), then is it approximately
correct to understand that "Write" means "last access was a write" and
"Read" means "last access was a read" ?

> > * I like that MSMProp1 removes E9/10/11/12 from MSMHelgrind.
> > Those scanned all of memory and were potentially very expensive.
>
> We move this cost to handle_read/write, at least partially. But yes, I like
> it too.

You mean into the SS_update and LS_update "procedures" ? At least it
is lazy, though -- presumably the cost only happens for locations which
are later accessed.

> Not sure how to express it graphically in a more clear way...

I would find the table a little clearer if it was like this:

Edge OldState AccessType Condition NewState NewSegSet NewLockSet RaceIf

Then the inputs to the FSM are clearly separated into cols 2,3,4 and
the outputs to cols 5,6,7,8.

Also, I was unclear if the transition rules as given exactly cover the
possible input space exactly once? Or do I have to read from top to
bottom and use the first rule that matches? At the moment I get the
impression there is a hidden dependency, that E7 only applies if E6
does not apply -- I cannot just look at E7 by itself, and the input
data I have, and see if it applies or not.

> By 'pure happens-before detector' you understand the one which creates
> happens-before relation after lock/unlock?

Yes. Details posted already.

J
|
|
From: Julian S. <js...@ac...> - 2008-01-23 13:32:31
|
> I think it is possible to describe a pure happens-before detector in
> the same framework (almost).
Like this:
Generate segments as currently. Also, generate segments for each
lock/unlock operation.
Shadow information for each address A is now a pair:
  "w-owner"  (thread segment)
  "r-owners" (set of thread segments)
no locksets are tracked
w-owner(A) is the segment that most recently wrote A.
r-owners(A) are the segment(s) that most recently read A. As per
comment in previous message, r-owners is potentially redundant; we
only need to store the maximum-frontier subset of r-owners. (*)
For segments S1, S2,
define S1 >= S2 to mean "S1 == S2 or S1 happens-after S2"
Then the update rules are:
  for a write in seg S:
    valid if S >= w-owner and S >= all elements in r-owners
    if (!valid) report race
    w-owner := S
    r-owners := {}
    // intuition: a valid write must happen-after the previous
    // write, or happen in the same segment. Also, a valid write
    // must happen-after all reads, or be in the same segment

  for a read in seg S:
    valid if S >= w-owner
    r-owners := union(r-owners, {S})
    // intuition: a valid read must happen-after the previous write.
    // but that's all; it can happen in parallel with other reads.
    // we merely need to record that it happened, in order that we
    // can check that a later write happens-after all reads
(*) follows trivially from definition of max(a segment set). The
only comparison against r-owners is
"S >= all elements in r-owners"
which is equivalent to
"S >= all elements in max(r-owners)"
J
|
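Julian's write/read update rules above can be written down as a toy detector. This is an illustrative sketch only: it assumes segments carry explicit happens-before predecessor edges and compares them by graph search (a real implementation would use vector clocks), and all names are made up:

```cpp
#include <set>
#include <vector>

// Toy segment: a node in a happens-before DAG with explicit edges to the
// segments it happens-after. Illustrative model, not helgrind code.
struct Segment {
  std::vector<const Segment*> preds;  // segments this one happens-after
};

// S1 >= S2  <=>  S1 == S2 or S1 happens-after S2 (transitively).
bool HappensAfterOrEqual(const Segment* s1, const Segment* s2) {
  if (s1 == s2) return true;
  for (const Segment* p : s1->preds)
    if (HappensAfterOrEqual(p, s2)) return true;
  return false;
}

// Per-address shadow state: last write segment plus reads since then.
struct Shadow {
  const Segment* w_owner = nullptr;
  std::set<const Segment*> r_owners;
};

// Returns false if the access races (the validity check fails).
bool HandleWrite(Shadow& sh, const Segment* s) {
  // valid if S >= w-owner and S >= all elements in r-owners
  bool valid = (sh.w_owner == nullptr || HappensAfterOrEqual(s, sh.w_owner));
  for (const Segment* r : sh.r_owners)
    valid = valid && HappensAfterOrEqual(s, r);
  sh.w_owner = s;          // w-owner := S
  sh.r_owners.clear();     // r-owners := {}
  return valid;
}

bool HandleRead(Shadow& sh, const Segment* s) {
  // valid if S >= w-owner; reads may run in parallel with other reads
  bool valid = (sh.w_owner == nullptr || HappensAfterOrEqual(s, sh.w_owner));
  sh.r_owners.insert(s);   // r-owners := union(r-owners, {S})
  return valid;
}
```

For example, a write in segment A, then a read in a segment B that happens-after A, is valid; a subsequent write in a segment C that is unordered with B is flagged as a race, exactly as the rules require.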
|
From: Konstantin S. <kon...@gm...> - 2008-01-23 13:19:33
|
>
>
> * I did not understand what the red/green boxes ("R:", "W:")
> signify. Maybe it would be simpler to remove them? At least
> for MSMHelgrind the names "ShR" and "ShM", by themselves,
> carry enough meaning for me.
That's just visual sugar. :)
I wanted to create a machine with states where we store separate segment sets
for R and W. Such states would have two boxes: one red and one green.
But I've abandoned it for now as it seems too complex (and useless :))
Anyway, I agree they are redundant here.
>
>
> * In MSMProp1, isn't "Read" state similar to "ShR" and "Write"
> similar to "ShM" ?
Similar, but not the same. Hence the different name to avoid phrases like
'ShR from MSMHelgrind is different from ShR from MSMProp1 ... '
Also Sh in ShR/ShM means 'shared' while Read/Write are not necessarily shared.
> * I like that MSMProp1 removes E9/10/11/12 from MSMHelgrind.
> Those scanned all of memory and were potentially very expensive.
We move this cost to handle_read/write, at least partially. But yes, I like
it too.
>
>
> * So it is kind-of like MSMHelgrind, except that
> - The Excl is merged back into ShR / ShM
> - Segment-sets are tracked, not thread-sets
> - transition E6 allows transfer of ownership at any read event
> which happens-after all previous accesses - MSMHelgrind only
> allows such transfers of memory in Excl state
Yes, except that E4/E5/E7 also transfer ownership.
These edges change segment sets using happens-before (see SS_update).
And if HB(oldSS, currS) these edges take the current locks instead of
intersecting with old lock set (see LS_update).
For example, if we are looking at the first access after a barrier, we will
have HB(oldSS, currS)==True.
Not sure how to express it graphically in a more clear way...
>
>
> * A general comment when you are dealing with segment sets rather
> than thread sets. There is a possible redundancy which you may
> need to remove in the implementation:
>
> HB(seg,segSet) is equivalent to HB(seg,max(segSet))
>
> and max(segSet) is always same size or smaller than segSet.
> Intuitively, max(segSet) == segSet except that it only has the
> most recent seg for each thread:
>
> max(segSet) = { seg1 | seg1 <- segSet,
> not (exists seg2 <- segSet
> such that seg1 < seg2) }
>
> if that makes any sense. (Standard lattice-theory stuff).
>
Cool!
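The max(segSet) reduction quoted above can be sketched the same way; this is again a toy model with explicit happens-before predecessor edges and hypothetical names (real detectors would use vector clocks), not helgrind code:

```cpp
#include <set>
#include <vector>

// Toy segment with explicit edges to the segments it happens-after.
struct Seg { std::vector<const Seg*> preds; };

// true iff a happens-before b strictly, i.e. a < b.
bool Before(const Seg* a, const Seg* b) {
  for (const Seg* p : b->preds)
    if (p == a || Before(a, p)) return true;
  return false;
}

// max(segSet) = { seg1 | seg1 <- segSet,
//                 not (exists seg2 <- segSet such that seg1 < seg2) }
// i.e. keep only the frontier: segments not dominated by a later one.
std::set<const Seg*> Max(const std::set<const Seg*>& ss) {
  std::set<const Seg*> out;
  for (const Seg* s1 : ss) {
    bool dominated = false;
    for (const Seg* s2 : ss)
      if (Before(s1, s2)) { dominated = true; break; }
    if (!dominated) out.insert(s1);
  }
  return out;
}
```

With segments a < b and an unordered c, Max({a, b, c}) keeps {b, c}: a is dropped because b happens-after it, so any HB(seg, segSet) query gives the same answer on the smaller frontier set.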
>
>
> > I will try to implement it in helgrind and see if it really works.
>
> Good.
>
> > Regarding size of SVal: I will prefer 64-bit over 48 bit. These extra 16
> > bits will give us some room for experiments with 'heavy' state machines.
>
> 48- or 64-bits is more or less unavoidable. Not sure 48-bit is practical
> - it might give fewer cache misses than 64-bit, but is likely to involve
> more instructions to pack/unpack the values where needed.
>
> ------------
>
> I like the idea of describing these state machines in a relatively
> uniform notation.
>
> I think it is possible to describe a pure happens-before detector in
> the same framework (almost). This would give a lower bound on the false
> error rate and so would be a useful reference point.
By 'pure happens-before detector' you understand the one which creates
happens-before relation after lock/unlock?
--kcc
|
|
From: Julian S. <js...@ac...> - 2008-01-23 12:50:59
|
> I've described the proposed memory state machine at
> http://code.google.com/p/data-race-test/wiki/MSMProp1.

That's good. It will be useful to have some results from alternative
state machines -- essentially, to see if we can find a better compromise
between scheduling-sensitivity and false error rates.

Some comments (without really understanding all the consequences of your
proposal):

* I did not understand what the red/green boxes ("R:", "W:") signify.
  Maybe it would be simpler to remove them? At least for MSMHelgrind
  the names "ShR" and "ShM", by themselves, carry enough meaning for me.

* In MSMProp1, isn't "Read" state similar to "ShR" and "Write" similar
  to "ShM" ?

* I like that MSMProp1 removes E9/10/11/12 from MSMHelgrind. Those
  scanned all of memory and were potentially very expensive.

* So it is kind-of like MSMHelgrind, except that
  - The Excl is merged back into ShR / ShM
  - Segment-sets are tracked, not thread-sets
  - transition E6 allows transfer of ownership at any read event which
    happens-after all previous accesses - MSMHelgrind only allows such
    transfers of memory in Excl state

* A general comment when you are dealing with segment sets rather than
  thread sets. There is a possible redundancy which you may need to
  remove in the implementation:

      HB(seg,segSet) is equivalent to HB(seg,max(segSet))

  and max(segSet) is always the same size or smaller than segSet.
  Intuitively, max(segSet) == segSet except that it only has the most
  recent seg for each thread:

      max(segSet) = { seg1 | seg1 <- segSet,
                      not (exists seg2 <- segSet
                           such that seg1 < seg2) }

  if that makes any sense. (Standard lattice-theory stuff).

> I will try to implement it in helgrind and see if it really works.

Good.

> Regarding size of SVal: I will prefer 64-bit over 48 bit. These extra 16
> bits will give us some room for experiments with 'heavy' state machines.

48- or 64-bits is more or less unavoidable. Not sure 48-bit is practical
- it might give fewer cache misses than 64-bit, but is likely to involve
more instructions to pack/unpack the values where needed.

------------

I like the idea of describing these state machines in a relatively
uniform notation.

I think it is possible to describe a pure happens-before detector in
the same framework (almost). This would give a lower bound on the false
error rate and so would be a useful reference point.

J
|
|
From: Konstantin S. <kon...@gm...> - 2008-01-23 09:19:10
|
Hi,

I've described the proposed memory state machine at
http://code.google.com/p/data-race-test/wiki/MSMProp1 . It has some false
positives but IMHO it is better than the current one. I will try to
implement it in helgrind and see if it really works.

I also attempted to document the current helgrind state machine
(http://code.google.com/p/data-race-test/wiki/MSMHelgrind). Comments are
welcome :)

Regarding size of SVal: I will prefer 64-bit over 48-bit. These extra 16
bits will give us some room for experiments with 'heavy' state machines.

--kcc
|
|
From: Tom H. <th...@cy...> - 2008-01-23 04:04:08
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-01-23 03:15:08 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 339 tests, 84 stderr failures, 1 stdout failure, 29 post failures ==

memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/lsframe1 (stderr)
memcheck/tests/lsframe2 (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/noisy_child (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/x86/bug152022 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/xor-undef-x86 (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/alloc-fns-A (post)
massif/tests/alloc-fns-B (post)
massif/tests/basic (post)
massif/tests/basic2 (post)
massif/tests/big-alloc (post)
massif/tests/culling1 (stderr)
massif/tests/culling2 (stderr)
massif/tests/custom_alloc (post)
massif/tests/deep-A (post)
massif/tests/deep-B (stderr)
massif/tests/deep-B (post)
massif/tests/deep-C (stderr)
massif/tests/deep-C (post)
massif/tests/deep-D (post)
massif/tests/ignoring (post)
massif/tests/insig (post)
massif/tests/long-names (post)
massif/tests/long-time (post)
massif/tests/new-cpp (post)
massif/tests/null (post)
massif/tests/one (post)
massif/tests/overloaded-new (post)
massif/tests/peak (post)
massif/tests/peak2 (stderr)
massif/tests/peak2 (post)
massif/tests/realloc (stderr)
massif/tests/realloc (post)
massif/tests/thresholds_0_0 (post)
massif/tests/thresholds_0_10 (post)
massif/tests/thresholds_10_0 (post)
massif/tests/thresholds_10_10 (post)
massif/tests/thresholds_5_0 (post)
massif/tests/thresholds_5_10 (post)
massif/tests/zero1 (post)
massif/tests/zero2 (post)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/hg06_readshared (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc02_simple_tls (stderr)
helgrind/tests/tc03_re_excl (stderr)
helgrind/tests/tc04_free_lock (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc07_hbl1 (stderr)
helgrind/tests/tc08_hbl2 (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc10_rec_lock (stderr)
helgrind/tests/tc11_XCHG (stderr)
helgrind/tests/tc12_rwl_trivial (stderr)
helgrind/tests/tc13_laog1 (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
helgrind/tests/tc24_nonzero_sem (stderr)
exp-drd/tests/fp_race (stderr)
exp-drd/tests/fp_race2 (stderr)
exp-drd/tests/matinv (stderr)
exp-drd/tests/pth_barrier (stderr)
exp-drd/tests/pth_broadcast (stderr)
exp-drd/tests/pth_cond_race (stderr)
exp-drd/tests/pth_cond_race2 (stderr)
exp-drd/tests/pth_create_chain (stderr)
exp-drd/tests/pth_detached (stderr)
exp-drd/tests/pth_detached2 (stderr)
exp-drd/tests/sem_as_mutex (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)
exp-drd/tests/sigalrm (stderr)
exp-drd/tests/tc17_sembar (stderr)
exp-drd/tests/tc18_semabuse (stderr)
|
|
From: Tom H. <th...@cy...> - 2008-01-23 03:47:11
|
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-01-23 03:20:05 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 15 stderr failures, 1 stdout failure, 1 post failure ==

memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)
|
|
From: Tom H. <th...@cy...> - 2008-01-23 03:39:40
|
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-01-23 03:25:05 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 7 stderr failures, 5 stdout failures, 1 post failure ==

memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
massif/tests/long-names (post)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/sem_as_mutex2 (stderr)
|
|
From: Tom H. <th...@cy...> - 2008-01-23 03:39:13
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-01-23 03:05:11 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 8 stderr failures, 2 stdout failures, 1 post failure ==

memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)
|
|
From: Tom H. <th...@cy...> - 2008-01-23 03:29:26
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-01-23 03:10:04 GMT

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 9 stderr failures, 3 stdout failures, 1 post failure ==

memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 373 tests, 9 stderr failures, 4 stdout failures, 1 post failure ==

memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/long-names (post)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_cvsimple (stdout)
none/tests/pth_detached (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
exp-drd/tests/pth_cond_race (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Wed Jan 23 03:19:50 2008
--- new.short Wed Jan 23 03:29:29 2008
***************
*** 8,10 ****
! == 373 tests, 9 stderr failures, 4 stdout failures, 1 post failure ==
  memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 373 tests, 9 stderr failures, 3 stdout failures, 1 post failure ==
  memcheck/tests/malloc_free_fill (stderr)
***************
*** 18,20 ****
  none/tests/pth_cvsimple (stdout)
- none/tests/pth_detached (stdout)
  helgrind/tests/tc18_semabuse (stderr)
--- 18,19 ----
|
|
From: Tom H. <th...@cy...> - 2008-01-23 03:16:11
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-01-23 03:00:02 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 375 tests, 30 stderr failures, 1 stdout failure, 1 post failure ==

memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
massif/tests/long-names (post)
none/tests/blockfault (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
|