From: Philippe W. <phi...@sk...> - 2011-11-08 22:09:26
> So, where did we get to with the fair scheduling? I can't remember the
> final outcome w.r.t. performance loss (if any). Perhaps a conservative
> approach is to add fair scheduling, but have it disabled by default and
> enabled with a command line flag, which we can add to the relevant .vgtest
> files.
>
> J
The summary of state + work to be done as I understand:
* with CPU frequency scaling disabled (i.e. all CPUs always full speed), the last version
of the fair scheduler is slightly faster than the pipe lock.
(work needed: more measurements, among others on platforms other than x86/amd64).
* CPU freq scaling however has a much worse impact on the fair scheduler (50% degradation)
than on the pipe scheduler (10..20% degradation)
(work needed: either call sched_setaffinity(...) from time to time to ensure all threads
are on the same CPU, whichever CPU that may be,
or preferably a syscall saying: "I want all my threads to have affinity to a single cpu,
whatever the cpu, and feel free to change it from time to time but not too often" :)
Such a syscall might take years to land in distribution kernels, or may never land)
* fair scheduler does not link on some distributions (e.g. RHEL 5, when compiling in 32bits),
as not all gcc versions have the needed builtin
(work needed: have replacement assembly code when builtin not available)
(or keep the pipe scheduler in this case)
* (work needed) do an implementation for Darwin
(or keep the pipe scheduler on this platform)
* (work needed) : investigate linkability/behaviour/performance on other platforms
(ARM, ppc, android, ...)
From: Julian S. <js...@ac...> - 2011-11-08 21:27:05
On Tuesday, November 08, 2011, Philippe Waroquiers wrote:
> Florian tried a fence on s390x, and I tried a fence on amd64.
> None of these solved the problem reliably.

Sure. I am not saying that I think a fence will solve the problem
reliably. I do think that not having fences will make it impossible to
solve it reliably though.

> So, it looks like at least for this test, the safest (or mandatory?)
> changes would be:
> * have a clean solution for the delay500ms (because on small overloaded
>   computers, delay500ms might not be enough)
> * have fence instructions
> * have fair scheduling (otherwise, on big computers with enough idle
>   CPUs, the unfairness might cause a non termination as only one thread
>   ever runs).

I agree.

So, where did we get to with the fair scheduling? I can't remember the
final outcome w.r.t. performance loss (if any). Perhaps a conservative
approach is to add fair scheduling, but have it disabled by default and
enabled with a command line flag, which we can add to the relevant
.vgtest files.

J
From: Philippe W. <phi...@sk...> - 2011-11-08 21:07:50
> Without proper fencing, we don't have any assurance that such
> test programs will finish in finite time even when running natively.
> At least if they are properly fenced, then any failure to terminate
> must be caused only by Valgrind's games with thread scheduling.
Florian tried a fence on s390x, and I tried a fence on amd64.
None of these solved the problem reliably.
>
> Then, if that doesn't help, we might also need to use the fair-scheduling
> stuff that Bart has been working on.
I just tried Bart's fair scheduler on ppc64 with a recent svn version.
With the fair scheduler: annotate_hbefore consistently takes around 1.4 seconds
with helgrind, and 4.8 seconds with drd.
With the pipe lock, I have seen run times vary up to 15+ minutes
(helgrind or drd), and at least once more than one hour of cpu (killed).
So, it looks like at least for this test, the safest (or mandatory?) changes would be:
* have a clean solution for the delay500ms (because on small overloaded computers,
delay500ms might not be enough)
* have fence instructions
* have fair scheduling (otherwise, on big computers with enough idle CPUs, the
unfairness might cause a non termination as only one thread ever runs).
Philippe
From: Julian S. <js...@ac...> - 2011-11-08 20:37:02
On Tuesday, November 08, 2011, Josef Weidendorfer wrote:
> > When doing instrumentation, pay attention to the Ist.IMark.delta
> > fields. This makes the --ct-verbose=1 output make a lot more sense
> > for Thumb code. Should have no effect on any other platform.
>
> Ah, makes sense. I suppose this bug is also in cachegrind then?

Yes, I suspect you are correct. Hmm. That should be fixed too.

J
From: <sv...@va...> - 2011-11-08 20:20:52
Author: florian
Date: 2011-11-08 20:16:09 +0000 (Tue, 08 Nov 2011)
New Revision: 12262

Log:
Remove TEST_TOOLS and TEXT_EXP_TOOLS as they are no longer needed.

Modified:
   trunk/Makefile.am

Modified: trunk/Makefile.am
===================================================================
--- trunk/Makefile.am	2011-11-08 20:14:35 UTC (rev 12261)
+++ trunk/Makefile.am	2011-11-08 20:16:09 UTC (rev 12262)
@@ -16,16 +16,6 @@
 		exp-bbv \
 		exp-dhat
 
-# DDD: once all tools work on Darwin, TEST_TOOLS and TEST_EXP_TOOLS can be
-# replaced with TOOLS and EXP_TOOLS.
-TEST_TOOLS = $(TOOLS)
-if !VGCONF_OS_IS_DARWIN
-  TEST_EXP_TOOLS = $(EXP_TOOLS)
-else
-  TEST_EXP_TOOLS = exp-bbv
-endif
-
-
 # Put docs last because building the HTML is slow and we want to get
 # everything else working before we try it.
 SUBDIRS = \
@@ -76,13 +66,13 @@
 ## Preprend @PERL@ because tests/vg_regtest isn't executable
 regtest: check
-	-tests/check_makefile_consistency gdbserver_tests $(TEST_TOOLS) $(TEST_EXP_TOOLS)
+	-tests/check_makefile_consistency gdbserver_tests $(TOOLS) $(EXP_TOOLS)
 	gdbserver_tests/make_local_links $(GDB)
-	@PERL@ tests/vg_regtest gdbserver_tests $(TEST_TOOLS) $(TEST_EXP_TOOLS)
+	@PERL@ tests/vg_regtest gdbserver_tests $(TOOLS) $(EXP_TOOLS)
 nonexp-regtest: check
-	@PERL@ tests/vg_regtest $(TEST_TOOLS)
+	@PERL@ tests/vg_regtest $(TOOLS)
 exp-regtest: check
-	@PERL@ tests/vg_regtest gdbserver_tests $(TEST_EXP_TOOLS)
+	@PERL@ tests/vg_regtest gdbserver_tests $(EXP_TOOLS)
 # Nb: gdbserver_tests are put in exp-regtest rather than nonexp-regtest
 # because they are tested with various valgrind tools, so might be using
 # an experimental tool.
From: <sv...@va...> - 2011-11-08 20:19:19
Author: florian
Date: 2011-11-08 20:14:35 +0000 (Tue, 08 Nov 2011)
New Revision: 12261

Log:
Fix prerequisite to also require linux. So testcases get skipped and do
not fail on Darwin.

Modified:
   trunk/exp-sgcheck/tests/bad_percentify.vgtest
   trunk/exp-sgcheck/tests/globalerr.vgtest
   trunk/exp-sgcheck/tests/hackedbz2.vgtest
   trunk/exp-sgcheck/tests/hsg.vgtest
   trunk/exp-sgcheck/tests/preen_invars.vgtest
   trunk/exp-sgcheck/tests/stackerr.vgtest

Modified: trunk/exp-sgcheck/tests/bad_percentify.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/bad_percentify.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/bad_percentify.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,2 +1,2 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 prog: bad_percentify

Modified: trunk/exp-sgcheck/tests/globalerr.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/globalerr.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/globalerr.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,2 +1,2 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 prog: globalerr

Modified: trunk/exp-sgcheck/tests/hackedbz2.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/hackedbz2.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/hackedbz2.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,2 +1,2 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 prog: hackedbz2

Modified: trunk/exp-sgcheck/tests/hsg.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/hsg.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/hsg.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,4 +1,4 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 prog: hsg
 vgopts: --xml=yes --xml-fd=2 --log-file=/dev/null
 stderr_filter: ../../memcheck/tests/filter_xml

Modified: trunk/exp-sgcheck/tests/preen_invars.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/preen_invars.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/preen_invars.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,2 +1,2 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 prog: preen_invars

Modified: trunk/exp-sgcheck/tests/stackerr.vgtest
===================================================================
--- trunk/exp-sgcheck/tests/stackerr.vgtest	2011-11-08 19:32:57 UTC (rev 12260)
+++ trunk/exp-sgcheck/tests/stackerr.vgtest	2011-11-08 20:14:35 UTC (rev 12261)
@@ -1,3 +1,3 @@
-prereq: ./is_arch_supported
+prereq: ./is_arch_supported && ../../tests/os_test linux
 vgopts: --num-callers=3
 prog: stackerr
From: Josef W. <Jos...@gm...> - 2011-11-08 20:18:48
On 08.11.2011 20:32, sv...@va... wrote:
> Author: sewardj
> Date: 2011-11-08 19:32:57 +0000 (Tue, 08 Nov 2011)
> New Revision: 12260
>
> Log:
> When doing instrumentation, pay attention to the Ist.IMark.delta
> fields. This makes the --ct-verbose=1 output make a lot more sense
> for Thumb code. Should have no effect on any other platform.

Ah, makes sense. I suppose this bug is also in cachegrind then?

Josef
From: <sv...@va...> - 2011-11-08 19:37:40
Author: sewardj
Date: 2011-11-08 19:32:57 +0000 (Tue, 08 Nov 2011)
New Revision: 12260
Log:
When doing instrumentation, pay attention to the Ist.IMark.delta
fields. This makes the --ct-verbose=1 output make a lot more sense
for Thumb code. Should have no effect on any other platform.
Modified:
trunk/callgrind/main.c
Modified: trunk/callgrind/main.c
===================================================================
--- trunk/callgrind/main.c 2011-11-06 22:43:33 UTC (rev 12259)
+++ trunk/callgrind/main.c 2011-11-08 19:32:57 UTC (rev 12260)
@@ -905,10 +905,9 @@
VexGuestExtents* vge,
IRType gWordTy, IRType hWordTy )
{
- Int i, isize;
+ Int i;
IRStmt* st;
Addr origAddr;
- Addr64 cia; /* address of current insn */
InstrInfo* curr_inode = NULL;
ClgState clgs;
UInt cJumps = 0;
@@ -944,10 +943,9 @@
st = sbIn->stmts[i];
CLG_ASSERT(Ist_IMark == st->tag);
- origAddr = (Addr)st->Ist.IMark.addr;
- cia = st->Ist.IMark.addr;
- isize = st->Ist.IMark.len;
- CLG_ASSERT(origAddr == st->Ist.IMark.addr); // XXX: check no overflow
+ origAddr = (Addr)st->Ist.IMark.addr + (Addr)st->Ist.IMark.delta;
+ CLG_ASSERT(origAddr == st->Ist.IMark.addr
+ + st->Ist.IMark.delta); // XXX: check no overflow
/* Get BB struct (creating if necessary).
* JS: The hash table is keyed with orig_addr_noredir -- important!
@@ -977,8 +975,8 @@
break;
case Ist_IMark: {
- cia = st->Ist.IMark.addr;
- isize = st->Ist.IMark.len;
+ Addr64 cia = st->Ist.IMark.addr + st->Ist.IMark.delta;
+ Int isize = st->Ist.IMark.len;
CLG_ASSERT(clgs.instr_offset == (Addr)cia - origAddr);
// If Vex fails to decode an instruction, the size will be zero.
// Pretend otherwise.
From: Greg P. <gp...@ap...> - 2011-11-08 18:13:56
On Nov 8, 2011, at 4:50 AM, Julian Seward wrote:
> I'd prefer to try and fix this problem by first, adding
> memory fences in the regtests that do inter-thread memory access
> without synchronisation. This would be simple if we make a
> function to force a complete load and store fence on all platforms,

__sync_synchronize() ought to work.
http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html

--
Greg Parker   gp...@ap...   Runtime Wrangler
From: John R. <jr...@bi...> - 2011-11-08 16:50:04
> [snip] to my
> knowledge, there are no Intel processors that have hardware support
> for DFP.

The x87 FPU (about 30 years old) and all x86 CPUs beginning with the
Pentium (1993) have FBLD and FBSTP, which load or store signed 18-digit
packed BCD integers to or from internal binary floating point [which is
compatible with IEEE 754-1985: sign, 15-bit biased exponent, 64-bit
significand with implied leading '1'; various control and status bits.]
In addition, all x86 CPUs before x86_64 have AAA, AAD, AAM, AAS (ASCII
Adjust after Addition/Division/Multiplication/Subtraction), which
facilitate BCD arithmetic. These are supported by the CPU flag bit AF
(the auxiliary carry flag), which is the carry out from bit 3 [the bit
with positional value (1<<3).] Although this is not directly the scheme
specified by IEEE 754-2008, such hardware does support decimal floating
point arithmetic several times faster than is possible in a
software-only implementation.

--
From: Maynard J. <may...@us...> - 2011-11-08 14:21:56
On 11/07/2011 2:20 AM, Julian Seward wrote:
>
> Hi Maynard, all,
>
>>> different approaches we could take to implement this support:
>>> 1. Use existing PowerPC support
>>> 2. Define new Iops (hopefully could get by with something less than 50)
>>> 3. Use the Iex_CCall type of IRExpr to invoke a helper that executes
>>> the
>
> I suspected this day would come. To be clear, I'm not per se
> opposed to new Iops. It's just that we already have zillions of
> them, so I'm a little wary of adding en masse to them, especially
> considering it's difficult to get rid of them later if they should
> turn out to be the wrong thing / not well thought out / whatever.
>
> Clearly (1) is impractical and (3), well, that might be doable, but
> you lose the ability to do much useful analysis on the resulting IR:
> Memcheck's V-bit analysis will simply worst-case it, so there's no
> opportunity to do any more sophisticated analysis. Also, doing one
> function call per machine operation is going to be slow.
>
> So new Iops look unavoidable in this case.
>
> What I would ask is, can you + the s390 folks make an initial proposal
> of the new Iops you need, with names, types and a summary of behaviour.
> The aim would be to come up with a minimal but efficient set of Iops
> that will support DFP on both Power and s390. Then we can mash it
> around and see how it looks. Also, some indication of how this relates
> to the Intel DFP support that Christian mentioned, would be useful.

Julian,

Thanks for your response. Carl will develop a proposal for a set of
common DFP Iops and have it reviewed by the s390 folks before sending
to you and the list for comments.

As for how PowerPC and System z DFP relates to Intel . . . to my
knowledge, there are no Intel processors that have hardware support for
DFP. Perhaps there's an Intel person who watches this mailing list and
can comment if there's anything that's been made public that we've
missed.

-Maynard
From: Julian S. <js...@ac...> - 2011-11-08 12:55:48
On Tuesday, November 08, 2011, Christian Borntraeger wrote:
> Julian, do you know if the reason for the high retry value of 2000 is still
> valid?
2000 seemed to work well with various MPI libraries running on
late model Pentium 4 and early Opterons, IIRC. This was some
years ago. The MPI libraries poke bits of hardware in high
performance network cards that are mapped into user space. It
might have been for the Quadrics cards.
I'm not sure that we can decide on any number that works well in
all circumstances.
I'd prefer to try and fix this problem by first, adding
memory fences in the regtests that do inter-thread memory access
without synchronisation. This would be simple if we make a
function to force a complete load and store fence on all platforms,
eg
void complete_mem_fence ( void )
{
#if defined(VGA_x86)
__asm__ __volatile__("mfence");
#elif defined(VGA_s390x)
// equivalent on s390,
etc
etc
}
and use it consistently in the tests that require it.
Without proper fencing, we don't have any assurance that such
test programs will finish in finite time even when running natively.
At least if they are properly fenced, then any failure to terminate
must be caused only by Valgrind's games with thread scheduling.
Then, if that doesn't help, we might also need to use the fair-scheduling
stuff that Bart has been working on.
J
From: Christian B. <bor...@de...> - 2011-11-08 09:02:09
Another thing that helped on my system for interlocked atomic updates was this patch:
===================================================================
--- coregrind/m_scheduler/scheduler.c (revision 12257)
+++ coregrind/m_scheduler/scheduler.c (working copy)
@@ -1161,8 +1161,8 @@
before swapping to another. That means that short term
spins waiting for hardware to poke memory won't cause a
thread swap. */
- if (VG_(dispatch_ctr) > 2000)
- VG_(dispatch_ctr) = 2000;
+ if (VG_(dispatch_ctr) > 20)
+ VG_(dispatch_ctr) = 20;
break;
case VG_TRC_INNER_COUNTERZERO:
Julian, do you know if the reason for the high retry value of 2000 is still valid?
From: Christian B. <bor...@de...> - 2011-11-08 08:31:13
On 07/11/11 22:17, Philippe Waroquiers wrote:
> Note: I just have one problem : I did one build + regtest on this computer, and had drd running on
> annotate_hbefore that took about one hour of cpu (then I killed it).
> gcc compile farm highly recommends to avoid such looping tests (so, I will put a ulimit
> in cpu when launching the nightly test).
There is probably more than one problem with this testcase, but I am at least aware of one
problem that will hit you on virtualized/loaded systems if the overall load is too high:
this code
[...]
void* thread_fn1 ( void* arg )
{
UWord* w = (UWord*)arg;
delay500ms(); // ensure t2 gets to its wait first
[...]
does not ensure that t2 gets to its wait reliably. It's just very likely, but 500ms can still
be too short, if the guest scheduling puts t2 on a cpu which was not scheduled by a busy
hypervisor, or if the system itself is loaded. A thread being runnable does not mean that it
will really run.
A potential fix for the endless loop might be
--- valgrind-upstream.orig/helgrind/tests/annotate_hbefore.c
+++ valgrind-upstream/helgrind/tests/annotate_hbefore.c
@@ -8,6 +8,7 @@
#include <pthread.h>
#include <stdio.h>
#include <assert.h>
+#include <signal.h>
#include "../../helgrind/helgrind.h"
@@ -294,6 +295,7 @@ int main ( void )
r= pthread_create( &t2, NULL, &thread_fn2, (void*)&w ); assert(!r);
r= pthread_join( t1, NULL ); assert(!r);
- r= pthread_join( t2, NULL ); assert(!r);
+ /* there is a race on busy systems, ensure to exit */
+ r= pthread_kill( t2, SIGKILL);
return 0;
}
but this can result in a test case failure if the race triggers.
At least it does not break the test suite.

Christian