From: <sv...@va...> - 2008-06-15 12:22:31
Author: bart
Date: 2008-06-15 13:22:37 +0100 (Sun, 15 Jun 2008)
New Revision: 8235
Log:
Continued working on DRD's documentation.
Modified:
trunk/exp-drd/docs/drd-manual.xml
Modified: trunk/exp-drd/docs/drd-manual.xml
===================================================================
--- trunk/exp-drd/docs/drd-manual.xml 2008-06-15 12:21:55 UTC (rev 8234)
+++ trunk/exp-drd/docs/drd-manual.xml 2008-06-15 12:22:37 UTC (rev 8235)
@@ -12,42 +12,90 @@
 on the Valgrind command line.</para>

 <sect1 id="drd-manual.overview" xreflabel="Overview">
-<title>Overview</title>
+<title>Introduction</title>

 <para>
 DRD is a Valgrind tool for detecting errors in multithreaded C and C++
-programs that use the POSIX threading primitives, also known as
-pthreads. POSIX threads is the most widely available threading library
-on Unix systems.
+shared-memory programs. The tool works for any program that uses the
+POSIX threading primitives or a threading library built on top of the
+POSIX threading primitives. POSIX threads, also known as Pthreads, is
+the most widely available threading library on Unix systems.
 </para>

 <para>
-The next section provides
-<link linkend="drd-manual.multithreading">
-background information about multithreading</link>.
+Multithreaded programming is error prone. Depending on how multithreading is
+expressed in a program, one or more of the following problems can pop up in a
+multithreaded program:
+<itemizedlist>
+  <listitem>
+    <para>
+      A data race, i.e. one or more threads access the same memory
+      location without sufficient locking.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Lock contention: one thread blocks the progress of another thread
+      by holding a lock too long.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Deadlock: two or more threads wait for each other indefinitely.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      False sharing: threads on two different processors access different
+      variables in the same cache line frequently, causing frequent exchange
+      of cache lines and slowing down both threads.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Improper use of the POSIX threads API.
+    </para>
+  </listitem>
+</itemizedlist>
 </para>

 <para>
-DRD can detect two classes of errors, which are discussed in detail:
+Although the likelihood of some classes of multithreaded programming
+errors can be reduced by a disciplined programming style, a tool for
+automatic detection of runtime threading errors is always a great help
+when developing multithreaded software.
 </para>

+<para>
+The remainder of this manual is organized as follows. In the next
+section it is discussed which <link
+linkend="drd-manual.mt-progr-models"> multithreading programming
+paradigms</link> exist.
+</para>
+
+<para>Then there is a
+<link linkend="drd-manual.options">summary of command-line
+options</link>.
+</para>
+
+<para>
+DRD can detect three classes of errors, which are discussed in detail:
+</para>
+
 <orderedlist>
   <listitem>
-    <para><link linkend="drd-manual.api-checks">
-    Misuses of the POSIX threads API.</link></para>
+    <para><link linkend="drd-manual.data-races">Data races</link>.</para>
   </listitem>
   <listitem>
-    <para><link linkend="drd-manual.data-races">
-    Data races -- accessing memory without adequate locking.
-    </link></para>
+    <para><link linkend="drd-manual.lock-contention">Lock contention</link>.
+    </para>
   </listitem>
+  <listitem>
+    <para><link linkend="drd-manual.api-checks">
+    Misuse of the POSIX threads API</link>.</para>
+  </listitem>
 </orderedlist>

-<para>Then there is a
-<link linkend="drd-manual.options">summary of command-line
-options.</link>
-</para>
-
 <para>Finally, there is a section about the current
 <link linkend="drd-manual.limitations">limitations</link> of DRD.
@@ -56,23 +104,94 @@
 </sect1>

-<sect1 id="drd-manual.multithreading" xreflabel="Multithreading">
-<title>Multithreaded Programming</title>
-</sect1>
+<sect1 id="drd-manual.mt-progr-models" xreflabel="MT-progr-models">
+<title>Multithreaded Programming Paradigms</title>

+<para>
+For many applications multithreading is a necessity. There are two
+reasons why the use of threads may be required:
+<itemizedlist>
+  <listitem>
+    <para>
+      To model concurrent activities. Managing the state of one activity
+      per thread is a simpler programming model than multiplexing the states
+      of multiple activities in a single thread. This is why most server and
+      embedded software is multithreaded.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      To let computations run on multiple CPU cores simultaneously. This is
+      why many High Performance Computing (HPC) applications are multithreaded.
+    </para>
+  </listitem>
+</itemizedlist>
+</para>

-<sect1 id="drd-manual.api-checks" xreflabel="API Checks">
-<title>Detected errors: Misuses of the POSIX threads API</title>
-</sect1>
+<para>
+Multithreaded programs can be developed by using one or more of the
+following paradigms. Which paradigm is appropriate also depends on the
+application type -- modeling concurrent activities versus HPC.
+<itemizedlist>
+  <listitem>
+    <para>
+      Locking: data that is shared between threads may only be accessed
+      after a lock is obtained on the mutex(es) associated with the
+      shared data item. The POSIX threads library, the Qt library
+      and the Boost.Thread library support this paradigm directly.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Message passing: any data that has to be passed from one thread to
+      another is sent via a message to that other thread. No data is explicitly
+      shared. Well known implementations of the message passing paradigm are
+      MPI and CORBA.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Software Transactional Memory (STM). Just like the locking
+      paradigm, with STM data is shared between threads. While the
+      locking paradigm requires that all associated mutexes are locked
+      before the shared data is accessed, with the STM paradigm after
+      each transaction it is verified whether there were conflicting
+      transactions. If there were conflicts, the transaction is aborted,
+      otherwise it is committed. This is a so-called optimistic
+      approach. Not all C, C++ and Fortran compilers already support STM.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      Automatic parallelization: a compiler converts a sequential
+      program into a multithreaded program. The original program can
+      contain parallelization hints. As an example, gcc version 4.3.0
+      and later supports OpenMP, a set of standardized compiler
+      directives which tell a compiler how to parallelize a C, C++ or
+      Fortran program.
+    </para>
+  </listitem>
+</itemizedlist>
+</para>

+<para>
+Next to the above paradigms, most CPU instruction sets support atomic
+memory accesses. Such operations are the most efficient way to update
+a single value on a system with multiple CPU cores.
+</para>

-<sect1 id="drd-manual.data-races" xreflabel="Data Races">
-<title>Detected errors: Data Races</title>
+<para>
+DRD supports any combination of multithreaded programming paradigms
+and atomic memory accesses, as long as the libraries that implement
+the paradigms are based on POSIX threads. Direct use of e.g. Linux'
+futexes is not recognized by DRD and will result in false positives.
+</para>
+
 </sect1>

 <sect1 id="drd-manual.options" xreflabel="DRD Options">
-<title>DRD Options</title>
+<title>Command Line Options</title>

 <para>The following end-user options are available:</para>
@@ -90,8 +209,46 @@
 </sect1>

+
+<sect1 id="drd-manual.data-races" xreflabel="Data Races">
+<title>Data Races</title>
+</sect1>
+
+
+<sect1 id="drd-manual.lock-contention" xreflabel="Lock Contention">
+<title>Lock Contention</title>
+</sect1>
+
+
+<sect1 id="drd-manual.api-checks" xreflabel="API Checks">
+<title>Misuse of the POSIX threads API</title>
+</sect1>
+
+
+<sect1 id="drd-manual.clientreqs" xreflabel="Client requests">
+<title>Client Requests</title>
+
+<para>
+Just as for other Valgrind tools it is possible to pass information
+from a client program to the DRD tool.
+</para>
+
+</sect1>
+
+
+<sect1 id="drd-manual.openmp" xreflabel="OpenMP">
+<title>Debugging OpenMP Programs With DRD</title>
+
+<para>
+Just as for other Valgrind tools it is possible to pass information
+from a client program to the DRD tool.
+</para>
+
+</sect1>
+
+
 <sect1 id="drd-manual.limitations" xreflabel="Limitations">
-<title>DRD Limitations</title>
+<title>Limitations</title>

 <para>DRD currently has the following limitations:</para>
@@ -106,15 +263,15 @@
 </listitem>
 <listitem><para>When running DRD on a PowerPC CPU, DRD will report
 false positives on atomic operations. See also <ulink
-url="http://bugs.kde.org/show_bug.cgi?id=162354">bug 162354</ulink>.
+url="http://bugs.kde.org/show_bug.cgi?id=162354">KDE bug 162354</ulink>.
 </para></listitem>
 <listitem><para>DRD, just like memcheck, will refuse to start on Linux
 distributions where all symbol information has been removed from
 ld.so. This is e.g. the case for openSUSE 10.3 -- see
-also <ulink
-url="http://bugzilla.novell.com/show_bug.cgi?id=396197">bug 396197</ulink>.
+also <ulink url="http://bugzilla.novell.com/show_bug.cgi?id=396197">
+Novell bug 396197</ulink>.
 </para></listitem>
-<listitem><para>If you compile the DRD sourcecode yourself, you need
+<listitem><para>If you compile the DRD source code yourself, you need
 gcc 3.0 or later. gcc 2.95 is not supported.</para>
 </listitem>
From: <sv...@va...> - 2008-06-15 12:21:49
Author: bart
Date: 2008-06-15 13:21:55 +0100 (Sun, 15 Jun 2008)
New Revision: 8234
Log:
Updated Testing.txt.
Modified:
trunk/exp-drd/Testing.txt
Modified: trunk/exp-drd/Testing.txt
===================================================================
--- trunk/exp-drd/Testing.txt 2008-06-15 12:12:28 UTC (rev 8233)
+++ trunk/exp-drd/Testing.txt 2008-06-15 12:21:55 UTC (rev 8234)
@@ -10,7 +10,9 @@
3. Test whether DRD works with standard KDE applications and whether it does
not print any false positives:
./vg-in-place --tool=exp-drd kate
+ ./vg-in-place --tool=exp-drd --check-stack-var=yes kate
./vg-in-place --trace-children=yes --tool=exp-drd knode
+ ./vg-in-place --trace-children=yes --tool=exp-drd --check-stack-var=yes knode
./vg-in-place --trace-children=yes --tool=exp-drd amarokapp
4. Test whether DRD works with standard GNOME applications. Expect race reports
after having closed the GNOME terminal window:
From: Bart V. A. <bar...@gm...> - 2008-06-15 12:21:10
Hello,
As you probably noticed I'm converting DRD's documentation from the
README.txt format into XML format. Is it OK to add this documentation
at the place indicated by the patch below into Valgrind's manual?
Bart.
Index: docs/xml/manual.xml
===================================================================
--- docs/xml/manual.xml (revision 8229)
+++ docs/xml/manual.xml (working copy)
@@ -32,6 +32,8 @@
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../helgrind/docs/hg-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
+ <xi:include href="../../exp-drd/docs/drd-manual.xml" parse="xml"
+ xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../massif/docs/ms-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../none/docs/nl-manual.xml" parse="xml"
From: <sv...@va...> - 2008-06-15 12:12:25
Author: sewardj
Date: 2008-06-15 13:12:28 +0100 (Sun, 15 Jun 2008)
New Revision: 8233
Log:
* rename flag --exe-context-for-locks to --num-callers-for-locks so as
to be consistent with the existing --num-callers flag.
* update --tool=helgrind --help text
Modified:
branches/HGDEV/helgrind/hg_main.c
Modified: branches/HGDEV/helgrind/hg_main.c
===================================================================
--- branches/HGDEV/helgrind/hg_main.c 2008-06-15 11:38:35 UTC (rev 8232)
+++ branches/HGDEV/helgrind/hg_main.c 2008-06-15 12:12:28 UTC (rev 8233)
@@ -326,7 +326,7 @@
// Size of context for locks. Usually should be less than --num-callers
// since collecting context for locks is quite expensive.
-static UInt clo_exe_context_for_locks = 9;
+static UInt clo_num_callers_for_locks = 9;
/* This has to do with printing error messages. See comments on
@@ -1009,7 +1009,7 @@
tid = map_threads_maybe_reverse_lookup_SLOW(thr);
lk->acquired_at = VG_(record_depth_N_ExeContext)
(tid, 0/*first_ip_delta*/,
- clo_exe_context_for_locks);
+ clo_num_callers_for_locks);
} else {
tl_assert(!HG_(isEmptyBag)(&lk->heldBy));
}
@@ -1067,7 +1067,7 @@
tid = map_threads_maybe_reverse_lookup_SLOW(thr);
lk->acquired_at
= VG_(record_depth_N_ExeContext)
- (tid, 0/*first_ip_delta*/, clo_exe_context_for_locks);
+ (tid, 0/*first_ip_delta*/, clo_num_callers_for_locks);
} else {
tl_assert(!HG_(isEmptyBag)(&lk->heldBy));
}
@@ -9677,8 +9677,8 @@
else if (VG_CLO_STREQN(19, arg, "--trace-after-race=")) {
clo_trace_after_race = VG_(atoll)(&arg[19]);
}
- else if (VG_CLO_STREQN(24, arg, "--exe-context-for-locks=")) {
- clo_exe_context_for_locks = VG_(atoll)(&arg[24]);
+ else if (VG_CLO_STREQN(24, arg, "--num-callers-for-locks=")) {
+ clo_num_callers_for_locks = VG_(atoll)(&arg[24]);
}
else if (VG_CLO_STREQ(arg, "--ss-recycle=yes"))
@@ -9753,15 +9753,21 @@
VG_(printf)(
" --happens-before=none|threads|all [all] consider no events, thread\n"
" create/join, create/join/cvsignal/cvwait/semwait/post as sync points\n"
-" --trace-addr=0xXXYYZZ show all state changes for address 0xXXYYZZ\n"
-" --trace-level=0|1|2 verbosity level of --trace-addr [1]\n"
-" --max-segment-set-size=<N> limit mem use by limiting SegSet sizes [20]\n"
-" --ignore-n=<N> speedup hack; add documentation\n"
-" --ignore-i=<N> speedup hack; add documentation\n"
-" --ss-recycle=no|yes [yes] recycle segment sets\n"
-" --pure-happens-before=no|yes [no] be a pure-happens-before detector\n"
-" --more-context=no|yes [yes] record context at lock lossage\n"
-" and at segment creation\n"
+" --trace-addr=0xXXYYZZ show all state changes for address 0xXXYYZZ\n"
+" --trace-level=0|1|2 verbosity level of --trace-addr [1]\n"
+" --max-segment-set-size=<N>\n"
+" limit mem use by limiting SegSet sizes [20]\n"
+" --ignore-n=<N> speedup hack; add documentation\n"
+" --ignore-i=<N> speedup hack; add documentation\n"
+" --ss-recycle=no|yes recycle segment sets [yes]\n"
+" --pure-happens-before=no|yes\n"
+" be a pure-happens-before detector [no]\n"
+" --more-context=no|yes record context at lock lossage\n"
+" and at segment creation [yes]\n"
+" --ignore-in-dtor=no|yes suppress races involving C++ destructors [no]\n"
+" --trace-after-race=<N> limits tracing of racey addresses [50]\n"
+" --num-callers-for-locks=<N> show <N> callers in stack traces\n"
+" for lock acquisitions/releases [9]\n"
);
VG_(replacement_malloc_print_usage)();
}
From: <sv...@va...> - 2008-06-15 11:38:31
Author: sewardj
Date: 2008-06-15 12:38:35 +0100 (Sun, 15 Jun 2008)
New Revision: 8232
Log:
More Helgrind changes from Konstantin Serebryany:
* implemented support for frame-level wildcards in suppression
files (coregrind change, not Helgrind)
* new flag --ignore-in-dtor=no|yes (default: no):
a flag to suppress Race reports inside C++ destructors (such reports
are quite often false positives, especially when reference counting
is used actively).
* new flag --trace-after-race=N (default: N=50):
a flag to limit the number of traces for a racey address. Reduces the
output by 10-100x. (this is all for limiting traces so far).
* new flag --exe-context-for-locks=N (default: 9):
Size of context for locks. Usually should be less than --num-callers
since collecting context for locks is quite expensive.
(XXX: should rename this to --num-callers-for-locks)
Modified:
branches/HGDEV/coregrind/m_errormgr.c
branches/HGDEV/coregrind/m_execontext.c
branches/HGDEV/coregrind/pub_core_execontext.h
branches/HGDEV/helgrind/hg_main.c
branches/HGDEV/include/pub_tool_execontext.h
Modified: branches/HGDEV/coregrind/m_errormgr.c
===================================================================
--- branches/HGDEV/coregrind/m_errormgr.c 2008-06-15 09:13:28 UTC (rev 8231)
+++ branches/HGDEV/coregrind/m_errormgr.c 2008-06-15 11:38:35 UTC (rev 8232)
@@ -184,7 +184,8 @@
enum {
NoName, /* Error case */
ObjName, /* Name is of an shared object file. */
- FunName /* Name is of a function. */
+ FunName, /* Name is of a function. */
+ Wildcard /* Wildcard '*', i.e. any number of functions. */
}
SuppLocTy;
@@ -927,7 +928,12 @@
p->ty = ObjName;
return True;
}
- VG_(printf)("location should start with fun: or obj:\n");
+ if (VG_(strncmp)(p->name, "*", 1) == 0) {
+ // leave the name
+ p->ty = Wildcard;
+ return True;
+ }
+ VG_(printf)("location should start with 'fun:', 'obj:' or '*'\n");
return False;
}
@@ -1083,14 +1089,21 @@
eof = VG_(get_line) ( fd, buf, N_BUF );
} while (!eof && !VG_STREQ(buf, "}"));
}
+ tl_assert(i >= 1);
+ while (i > 0 && tmp_callers[i-1].ty == Wildcard) {
+ i--;
+ }
+ if (i == 0) {
+ BOMB("the stack trace contains only wildcards");
+ }
// Copy tmp_callers[] into supp->callers[]
supp->n_callers = i;
supp->callers = VG_(arena_malloc)(VG_AR_CORE, i*sizeof(SuppLoc));
+
for (i = 0; i < supp->n_callers; i++) {
supp->callers[i] = tmp_callers[i];
}
-
supp->next = suppressions;
suppressions = supp;
}
@@ -1148,23 +1161,26 @@
static
Bool supp_matches_callers(Error* err, Supp* su)
{
- Int i;
+ Int err_i, su_i;
Char caller_name[ERRTXT_LEN];
- StackTrace ips = VG_(extract_StackTrace)(err->where);
+ StackTrace ips = VG_(extract_StackTrace)(err->where);
+ UInt n_ips = VG_(extract_StackTraceSize)(err->where);
+ Bool has_asterisk = False;
- for (i = 0; i < su->n_callers; i++) {
- Addr a = ips[i];
- vg_assert(su->callers[i].name != NULL);
+ err_i = 0;
+ su_i = 0;
+ while (su_i < su->n_callers && err_i < n_ips) {
+ Addr a = ips[err_i];
+ vg_assert(su->callers[su_i].name != NULL);
// The string to be used in the unknown case ("???") can be anything
// that couldn't be a valid function or objname. --gen-suppressions
// prints 'obj:*' for such an entry, which will match any string we
// use.
- switch (su->callers[i].ty) {
+ switch (su->callers[su_i].ty) {
case ObjName:
if (!VG_(get_objname)(a, caller_name, ERRTXT_LEN))
VG_(strcpy)(caller_name, "???");
break;
-
case FunName:
// Nb: mangled names used in suppressions. Do, though,
// Z-demangle them, since otherwise it's possible to wind
@@ -1174,13 +1190,34 @@
if (!VG_(get_fnname_Z_demangle_only)(a, caller_name, ERRTXT_LEN))
VG_(strcpy)(caller_name, "???");
break;
- default: VG_(tool_panic)("supp_matches_callers");
+ case Wildcard:
+ has_asterisk = True;
+ su_i++;
+ continue;
+ break;
+ default:
+ VG_(tool_panic)("supp_matches_callers");
}
- if (0) VG_(printf)("cmp %s %s\n", su->callers[i].name, caller_name);
- if (!VG_(string_match)(su->callers[i].name, caller_name))
- return False;
+ tl_assert(su->callers[su_i].ty != Wildcard);
+ if (0) VG_(printf)("cmp %s %s\n", su->callers[su_i].name, caller_name);
+ if (!VG_(string_match)(su->callers[su_i].name, caller_name)) {
+ if (!has_asterisk)
+ return False;
+ // we are handling asterisk, just go to the next element of ips.
+ err_i++;
+ continue;
+ }
+ // we found a match, no more asterisk...
+ has_asterisk = False;
+ su_i++;
+ err_i++;
}
+ if (has_asterisk) {
+ // we were still trying to match asterisk. No match.
+ return False;
+ }
+
/* If we reach here, it's a match */
return True;
}
@@ -1193,6 +1230,7 @@
{
Supp* su;
Supp* su_prev;
+ Supp* result = NULL;
/* stats gathering */
em_supplist_searches++;
@@ -1202,6 +1240,8 @@
for (su = suppressions; su != NULL; su = su->next) {
em_supplist_cmps++;
if (supp_matches_error(su, err) && supp_matches_callers(err, su)) {
+ result = su;
+#if 1
/* got a match. Move this entry to the head of the list
in the hope of making future searches cheaper. */
if (su_prev) {
@@ -1210,11 +1250,15 @@
su->next = suppressions;
suppressions = su;
}
- return su;
+ return result;
+#else
+ // used for testing the suppressions mechanism.
+ VG_(printf)("Found match: %s\n", su->sname);
+#endif
}
su_prev = su;
}
- return NULL; /* no matches */
+ return result;
}
/* Show accumulated error-list and suppression-list search stats.
Modified: branches/HGDEV/coregrind/m_execontext.c
===================================================================
--- branches/HGDEV/coregrind/m_execontext.c 2008-06-15 09:13:28 UTC (rev 8231)
+++ branches/HGDEV/coregrind/m_execontext.c 2008-06-15 11:38:35 UTC (rev 8232)
@@ -164,6 +164,16 @@
}
+void VG_(apply_ExeContext)( void(*action)(UInt n, Addr ip),
+ ExeContext* ec, UInt n_ips )
+{
+ VG_(apply_StackTrace)(action, ec->ips,
+ n_ips < ec->n_ips ? n_ips : ec->n_ips);
+}
+
+
+
+
/* Compare two ExeContexts, comparing all callers. */
Bool VG_(eq_ExeContext) ( VgRes res, ExeContext* e1, ExeContext* e2 )
{
@@ -281,7 +291,8 @@
}
static ExeContext* record_ExeContext_wrk ( ThreadId tid, Word first_ip_delta,
- Bool first_ip_only )
+ Bool first_ip_only,
+ UInt n_ips_requested )
{
Int i;
Addr ips[VG_DEEPEST_BACKTRACE];
@@ -298,21 +309,21 @@
vg_assert(sizeof(void*) == sizeof(Addr));
init_ExeContext_storage();
- vg_assert(VG_(clo_backtrace_size) >= 1 &&
- VG_(clo_backtrace_size) <= VG_DEEPEST_BACKTRACE);
+ vg_assert(n_ips_requested >= 1 &&
+ n_ips_requested <= VG_DEEPEST_BACKTRACE);
if (first_ip_only) {
vg_assert(VG_(is_valid_tid)(tid));
n_ips = 1;
ips[0] = VG_(get_IP)(tid);
} else {
- n_ips = VG_(get_StackTrace)( tid, ips, VG_(clo_backtrace_size),
+ n_ips = VG_(get_StackTrace)( tid, ips, n_ips_requested,
NULL/*array to dump SP values in*/,
NULL/*array to dump FP values in*/,
first_ip_delta );
}
- tl_assert(n_ips >= 1 && n_ips <= VG_(clo_backtrace_size));
+ tl_assert(n_ips >= 1 && n_ips <= n_ips_requested);
/* Now figure out if we've seen this one before. First hash it so
as to determine the list number. */
@@ -393,12 +404,23 @@
ExeContext* VG_(record_ExeContext)( ThreadId tid, Word first_ip_delta ) {
return record_ExeContext_wrk( tid, first_ip_delta,
- False/*!first_ip_only*/ );
+ False/*!first_ip_only*/,
+ VG_(clo_backtrace_size) );
}
+ExeContext* VG_(record_depth_N_ExeContext)( ThreadId tid,
+ Word first_ip_delta,
+ UInt n_ips_requested ) {
+ return record_ExeContext_wrk( tid, first_ip_delta,
+ False/*!first_ip_only*/,
+ n_ips_requested );
+}
+
+
+
ExeContext* VG_(record_depth_1_ExeContext)( ThreadId tid ) {
return record_ExeContext_wrk( tid, 0/*first_ip_delta*/,
- True/*first_ip_only*/ );
+ True/*first_ip_only*/, 1);
}
@@ -407,6 +429,12 @@
return e->ips;
}
+UInt VG_(extract_StackTraceSize) ( ExeContext* e )
+{
+ return e->n_ips;
+}
+
+
/*--------------------------------------------------------------------*/
/*--- end m_execontext.c ---*/
/*--------------------------------------------------------------------*/
Modified: branches/HGDEV/coregrind/pub_core_execontext.h
===================================================================
--- branches/HGDEV/coregrind/pub_core_execontext.h 2008-06-15 09:13:28 UTC (rev 8231)
+++ branches/HGDEV/coregrind/pub_core_execontext.h 2008-06-15 11:38:35 UTC (rev 8232)
@@ -52,6 +52,9 @@
// pub_core_stacktrace.h also.)
extern /*StackTrace*/Addr* VG_(extract_StackTrace) ( ExeContext* e );
+// Return the number of elements in ExeContext.
+extern UInt VG_(extract_StackTraceSize) ( ExeContext* e );
+
#endif // __PUB_CORE_EXECONTEXT_H
/*--------------------------------------------------------------------*/
Modified: branches/HGDEV/helgrind/hg_main.c
===================================================================
--- branches/HGDEV/helgrind/hg_main.c 2008-06-15 09:13:28 UTC (rev 8231)
+++ branches/HGDEV/helgrind/hg_main.c 2008-06-15 11:38:35 UTC (rev 8232)
@@ -317,6 +317,18 @@
// Very time and memory consuming!
static Bool clo_pure_happens_before = False;
+// If true, all races inside a C++ destructor will be ignored.
+static Bool clo_ignore_in_dtor = False;
+
+// Print no more than this number of traces after a race has been detected.
+static UInt clo_trace_after_race = 50;
+
+
+// Size of context for locks. Usually should be less than --num-callers
+// since collecting context for locks is quite expensive.
+static UInt clo_exe_context_for_locks = 9;
+
+
/* This has to do with printing error messages. See comments on
announce_threadset() and summarise_threadset(). Perhaps it
should be a command line option. */
@@ -995,7 +1007,9 @@
ThreadId tid;
tl_assert(HG_(isEmptyBag)(&lk->heldBy));
tid = map_threads_maybe_reverse_lookup_SLOW(thr);
- lk->acquired_at = VG_(record_ExeContext(tid, 0/*first_ip_delta*/));
+ lk->acquired_at = VG_(record_depth_N_ExeContext)
+ (tid, 0/*first_ip_delta*/,
+ clo_exe_context_for_locks);
} else {
tl_assert(!HG_(isEmptyBag)(&lk->heldBy));
}
@@ -1051,8 +1065,9 @@
ThreadId tid;
tl_assert(HG_(isEmptyBag)(&lk->heldBy));
tid = map_threads_maybe_reverse_lookup_SLOW(thr);
- lk->acquired_at =
- VG_(record_ExeContext(tid, 0/*first_ip_delta*/));
+ lk->acquired_at
+ = VG_(record_depth_N_ExeContext)
+ (tid, 0/*first_ip_delta*/, clo_exe_context_for_locks);
} else {
tl_assert(!HG_(isEmptyBag)(&lk->heldBy));
}
@@ -2280,7 +2295,7 @@
static void mem_trace_on(UWord mem, ThreadId tid)
{
Thread *thr = map_threads_lookup( tid );
- if (clo_trace_level <= 0 && mem != 0xDEADBEAFUL) return;
+ if (clo_trace_level <= 0) return;
if (!mem_trace_map) {
mem_trace_map = HG_(newFM)( hg_zalloc, hg_free, NULL);
}
@@ -2288,7 +2303,7 @@
VG_(message)(Vg_UserMsg, "ENABLED TRACE {{{: %p; S%d/T%d", mem,
(Int)thr->csegid,
(Int)thr->errmsg_index);
- if (clo_trace_level >= 2 || mem == 0xDEADBEAF) {
+ if (clo_trace_level >= 2) {
VG_(get_and_pp_StackTrace)( tid, 15);
}
VG_(message)(Vg_UserMsg, "}}}");
@@ -2334,7 +2349,7 @@
static void set_mu_is_cv(Word mu, ThreadId tid)
{
ExeContext *context = NULL;
- // context = VG_(record_ExeContext(tid, -1/*first_ip_delta*/));
+ // context = VG_(record_ExeContext)(tid, -1/*first_ip_delta*/);
if (!mu_is_cv_map) {
mu_is_cv_map = HG_(newFM) (hg_zalloc, hg_free, NULL);
}
@@ -3547,7 +3562,7 @@
// different thread, but happens-before
*hb_all_p = True;
newSS = SS_mk_singleton(currS);
- if (UNLIKELY(do_trace)) {
+ if (UNLIKELY(0 && do_trace)) {
VG_(printf)("HB(S%d/T%d,cur)=1\n",
S, SEG_get(S)->thr->errmsg_index);
}
@@ -3556,7 +3571,7 @@
// Not happened-before. Leave this segment in SS.
tl_assert(currS != S);
newSS = HG_(doubletonWS)(univ_ssets, currS, S);
- if (UNLIKELY(do_trace)) {
+ if (UNLIKELY(0 && do_trace)) {
VG_(printf)("HB(S%d/T%d,cur)=0\n",
S, SEG_get(S)->thr->errmsg_index);
}
@@ -3622,7 +3637,7 @@
hb = True;
}
// trace
- if (do_trace) {
+ if (0 && do_trace) {
VG_(printf)("HB(S%d/T%d,cur)=%d\n",
S, SEG_get(S)->thr->errmsg_index, hb);
}
@@ -3726,8 +3741,32 @@
}
+// One such object is associated with each traced address.
+typedef struct {
+ UInt n_accesses;
+ // more fields are likely to be added in the future.
+} TraceInfo;
+static WordFM *trace_info_map; // addr->TraceInfo;
+
+static TraceInfo *get_trace_info(Addr a)
+{
+ UWord key, val;
+ TraceInfo *res;
+ if (!trace_info_map) {
+ trace_info_map = HG_(newFM)( hg_zalloc, hg_free, NULL);
+ }
+
+ if (HG_(lookupFM)(trace_info_map, &key, &val, a)) {
+ tl_assert(key == (UWord)a);
+ return (TraceInfo*)val;
+ }
+ res = (TraceInfo*)hg_zalloc(sizeof(TraceInfo)); // zero-initialized
+ HG_(addToFM)(trace_info_map, (UWord)a, (UWord)res);
+ return res;
+}
+
static void msm_do_trace(Thread *thr,
Addr a,
SVal sv_old,
@@ -3741,8 +3780,20 @@
// don't trace if the state is unchanged.
return;
}
+
+ TraceInfo *info = get_trace_info(a);
+ info->n_accesses++;
+
+ if (info->n_accesses > clo_trace_after_race) {
+ // we already printed too many traces
+ return;
+ }
+
+
show_sval(buf, sizeof(buf), sv_new);
- VG_(message)(Vg_UserMsg, "TRACE {{{: Access = {%p S%d/T%d %s} State = {%s}", a,
+ VG_(message)(Vg_UserMsg,
+ "TRACE[%d] {{{: Access = {%p S%d/T%d %s} State = {%s}",
+ info->n_accesses, a,
(int)thr->csegid, thr->errmsg_index,
is_w ? "write" : "read", buf);
if (trace_level >= 2) {
@@ -3761,6 +3812,9 @@
VG_(message)(Vg_UserMsg, "}}}");
VG_(message)(Vg_UserMsg, ""); // empty line
+
+
+
}
static INLINE
@@ -3899,7 +3953,8 @@
if (is_race && get_SHVAL_TRACE_BIT(sv_old)) {
// Race is found for the second time.
// Stop tracing and start ignoring this memory location.
- VG_(message)(Vg_UserMsg, "Race on %p is found again", a);
+ VG_(message)(Vg_UserMsg, "Race on %p is found again after %u accesses",
+ a, get_trace_info(a)->n_accesses);
sv_new = SHVAL_Ignore;
is_race = False;
}
@@ -5873,7 +5928,8 @@
ThreadId tid = map_threads_maybe_reverse_lookup_SLOW(thr);
if (clo_more_context && tid != VG_INVALID_THREADID)
- SEG_set_context(*new_segidP, VG_(record_ExeContext(tid,-1/*first_ip_delta*/)));
+ SEG_set_context(*new_segidP,
+ VG_(record_ExeContext)(tid,-1/*first_ip_delta*/));
}
@@ -8886,11 +8942,25 @@
return sizeof(XError);
}
+// Ugly. Need to return a value from apply_StackTrace()...
+static Bool destructor_detected = False;
+// A callback to be passed to apply_StackTrace().
+// A function is a DTOR iff it contains '::~'.
+static void detect_destructor(UInt n, Addr ip)
+{
+ static UChar buf[4096];
+ VG_(describe_IP)(ip, buf, sizeof(buf));
+ if (VG_(strstr)(buf, "::~")) {
+ destructor_detected = True;
+ }
+}
+
static Bool record_error_Race ( Thread* thr,
Addr data_addr, Bool isWrite, Int szB,
SVal old_sv, SVal new_sv,
ExeContext* mb_lastlock ) {
XError xe;
+ ThreadId tid = map_threads_maybe_reverse_lookup_SLOW(thr);
tl_assert( is_sane_Thread(thr) );
init_XError(&xe);
xe.tag = XE_Race;
@@ -8915,7 +8985,6 @@
any reported races. It appears that ld.so does intentionally
racey things in PLTs and it's simplest just to ignore it. */
if (1) {
- ThreadId tid = map_threads_maybe_reverse_lookup_SLOW(thr);
if (tid != VG_INVALID_THREADID) {
Addr ip_at_error = VG_(get_IP)( tid );
if (VG_(seginfo_sect_kind)(NULL, 0, ip_at_error) == Vg_SectPLT) {
@@ -8934,9 +9003,20 @@
}
}
+ destructor_detected = False;
+ if (1) {
+ // check if the stack trace contains a DTOR
+ ExeContext *context = VG_(record_ExeContext)(tid,-1/*first_ip_delta*/);
+ VG_(apply_ExeContext)(detect_destructor, context, 1000);
+ if (destructor_detected && clo_ignore_in_dtor) return False;
+ }
+
Bool res = VG_(maybe_record_error)( map_threads_reverse_lookup_SLOW(thr),
XE_Race, data_addr, NULL, &xe );
+ if (res && destructor_detected) {
+ VG_(message)(Vg_UserMsg, "NOTE: this race was detected inside a DTOR");
+ }
return res;
}
@@ -9594,6 +9674,12 @@
if (clo_max_segment_set_size < 4)
clo_max_segment_set_size = 4;
}
+ else if (VG_CLO_STREQN(19, arg, "--trace-after-race=")) {
+ clo_trace_after_race = VG_(atoll)(&arg[19]);
+ }
+ else if (VG_CLO_STREQN(24, arg, "--exe-context-for-locks=")) {
+ clo_exe_context_for_locks = VG_(atoll)(&arg[24]);
+ }
else if (VG_CLO_STREQ(arg, "--ss-recycle=yes"))
clo_ss_recycle = True;
@@ -9605,6 +9691,11 @@
else if (VG_CLO_STREQ(arg, "--more-context=no"))
clo_more_context = False;
+ else if (VG_CLO_STREQ(arg, "--ignore-in-dtor=yes"))
+ clo_ignore_in_dtor = True;
+ else if (VG_CLO_STREQ(arg, "--ignore-in-dtor=no"))
+ clo_ignore_in_dtor = False;
+
else if (VG_CLO_STREQ(arg, "--pure-happens-before=yes"))
clo_pure_happens_before = True;
else if (VG_CLO_STREQ(arg, "--pure-happens-before=no"))
Modified: branches/HGDEV/include/pub_tool_execontext.h
===================================================================
--- branches/HGDEV/include/pub_tool_execontext.h 2008-06-15 09:13:28 UTC (rev 8231)
+++ branches/HGDEV/include/pub_tool_execontext.h 2008-06-15 11:38:35 UTC (rev 8232)
@@ -57,6 +57,13 @@
extern
ExeContext* VG_(record_ExeContext) ( ThreadId tid, Word first_ip_delta );
+// Same as record_ExeContext, but request 'n_ips_requested' ips
+// instead of the value of --num-callers.
+extern
+ExeContext* VG_(record_depth_N_ExeContext) ( ThreadId tid,
+ Word first_ip_delta,
+ UInt n_ips_requested );
+
// Trivial version of VG_(record_ExeContext), which just records the
// thread's current program counter but does not do any stack
// unwinding. This is useful in some rare cases when we suspect the
@@ -72,7 +79,7 @@
extern void VG_(apply_ExeContext)( void(*action)(UInt n, Addr ip),
ExeContext* ec, UInt n_ips );
-// Compare two ExeContexts. Number of callers considered depends on `res':
+// Compare two ExeContexts. Number of callers considered depends on 'res':
// Vg_LowRes: 2
// Vg_MedRes: 4
// Vg_HighRes: all

From: <sv...@va...> - 2008-06-15 09:13:32
Author: bart
Date: 2008-06-15 10:13:28 +0100 (Sun, 15 Jun 2008)
New Revision: 8231
Log:
Changed script such that DRD times are compared to native -p4 time instead of native -p1 time.
Modified:
trunk/exp-drd/scripts/run-splash2
Modified: trunk/exp-drd/scripts/run-splash2
===================================================================
--- trunk/exp-drd/scripts/run-splash2 2008-06-13 19:44:51 UTC (rev 8230)
+++ trunk/exp-drd/scripts/run-splash2 2008-06-15 09:13:28 UTC (rev 8231)
@@ -7,27 +7,32 @@
source "$(dirname $0)/measurement-functions"
function run_test {
- local tmp avg1=1 stddev1=1 avg2=1 stddev2=1
+ local tmp avg1=1 stddev1=1 avg2=1 stddev2=1 p=4
tmp="/tmp/test-timing.$$"
rm -f "${tmp}"
- test_output="${1}.out" measure_runtime "$@" | avgstddev > "$tmp"
+ test_output="${1}.out" measure_runtime "$@" -p1 | avgstddev > "$tmp"
read avg1 stddev1 < "$tmp"
echo "Average time: ${avg1} +/- ${stddev1} seconds"
- for p in 4
- do
- test_output="${1}-drd-with-stack-var-${p}.out" \
- print_runtime_ratio $VG --tool=exp-drd --check-stack-var=yes "$@" -p$p
+ test_output="${1}.out" measure_runtime "$@" -p2 | avgstddev > "$tmp"
+ read avg1 stddev1 < "$tmp"
+ echo "Average time: ${avg1} +/- ${stddev1} seconds"
- test_output="${1}-drd-without-stack-var-${p}.out" \
- print_runtime_ratio $VG --tool=exp-drd --check-stack-var=no "$@" -p$p
+ test_output="${1}.out" measure_runtime "$@" -p4 | avgstddev > "$tmp"
+ read avg1 stddev1 < "$tmp"
+ echo "Average time: ${avg1} +/- ${stddev1} seconds"
- test_output="${1}-helgrind-${p}.out" \
- print_runtime_ratio $VG --tool=helgrind "$@" -p$p
- done
+ test_output="${1}-drd-with-stack-var-${p}.out" \
+ print_runtime_ratio $VG --tool=exp-drd --check-stack-var=yes "$@" -p$p
+ test_output="${1}-drd-without-stack-var-${p}.out" \
+ print_runtime_ratio $VG --tool=exp-drd --check-stack-var=no "$@" -p$p
+
+ test_output="${1}-helgrind-${p}.out" \
+ print_runtime_ratio $VG --tool=helgrind "$@" -p$p
+
echo ''
rm -f "$tmp"
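The diff above unrolls the former `for p in 4` loop into explicit -p1/-p2/-p4 measurement runs so native times exist at every concurrency level. The pattern of timing the same command at several -pN levels can be sketched generically (`measure_runtime` and the SPLASH-2 benchmarks are stand-ins; this stub just uses wall-clock seconds):

```shell
# Sketch of the unrolled measurement pattern: time the same command
# at -p1, -p2 and -p4 and print one line per level.  A real harness
# would average several runs, as measurement-functions does.
run_at_levels() {
    local cmd="$1" p start end
    for p in 1 2 4; do
        start=$(date +%s)
        $cmd -p$p >/dev/null 2>&1
        end=$(date +%s)
        echo "level -p$p: $((end - start)) seconds"
    done
}

# Usage with a harmless stub in place of a benchmark binary:
fake_bench() { true; }
run_at_levels fake_bench
```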
@@ -60,14 +65,14 @@
##############################################################################
# Results (-p4): native DRD DRD HG ITC ITC
-# time w/ filter w/ filter
+# (-p1) w/ filter w/ filter
# ............................................................................
# Cholesky 0.29 103 64 37 239 82
# FFT 0.19 65 32 556 90 41
# LU, contiguous blocks 0.76 44 36 97 428 128
# LU, non-contiguous blocks 0.80 45 39 59 428 128
# Ocean, contiguous partitions 19.40 39 32 54 90 28
-# Ocean, non-continguous partns 0.29 26 29 53 90 28
+# Ocean, non-contiguous partns 0.29 26 29 53 90 28
# Radiosity 3.11 223 61 58 485 163
# Radix 4.05 16 14 85 222 56
# Raytrace 2.21 272 53 89 172 53
@@ -78,14 +83,14 @@
# Software: Ubuntu 7.10 server, 64-bit, gcc 4.1.3, xload -update 1 running.
##############################################################################
# Results (-p4): native DRD DRD HG ITC ITC
-# time w/ filter w/ filter
+# (-p1) w/ filter w/ filter
# ............................................................................
# Cholesky 0.29 89 64 37 239 82
# FFT 0.19 50 32 556 90 41
# LU, contiguous blocks 0.76 41 38 97 428 128
# LU, non-contiguous blocks 0.80 49 47 59 428 128
# Ocean, contiguous partitions 19.40 39 33 54 90 28
-# Ocean, non-continguous partns 0.29 25 29 53 90 28
+# Ocean, non-contiguous partns 0.29 25 29 53 90 28
# Radiosity 3.11 164 60 58 485 163
# Radix 4.05 16 14 85 222 56
# Raytrace 2.21 169 56 89 172 53
@@ -96,14 +101,14 @@
# Software: Ubuntu 7.10 server, 64-bit, gcc 4.3.1, xload -update 1 running.
##############################################################################
# Results (-p4): native DRD DRD HG ITC ITC
-# time w/ filter w/ filter
+# (-p1) w/ filter w/ filter
# ............................................................................
# Cholesky 0.21 100 64 36 239 82
# FFT 0.12 100 38 224 90 41
# LU, contiguous blocks 0.57 58 50 96 428 128
# LU, non-contiguous blocks 0.61 62 56 60 428 128
# Ocean, contiguous partitions 14.33 43 34 58 90 28
-# Ocean, non-continguous partns 0.21 30 33 56 90 28
+# Ocean, non-contiguous partns 0.21 30 33 56 90 28
# Radiosity 2.33 244 63 60 485 163
# Radix 2.81 15 13 90 222 56
# Raytrace 1.65 340 52 88 172 53
@@ -114,14 +119,14 @@
# Software: openSUSE 10.3, 64-bit, gcc 4.2.1, runlevel 5, X screensaver: blank
##############################################################################
# Results (-p4): native DRD DRD HG ITC ITC
-# time w/ filter w/ filter
+# (-p1) w/ filter w/ filter
# ............................................................................
# Cholesky 0.21 85 62 36 239 82
# FFT 0.12 82 41 224 90 41
# LU, contiguous blocks 0.57 45 42 96 428 128
# LU, non-contiguous blocks 0.61 53 53 60 428 128
# Ocean, contiguous partitions 14.33 40 32 58 90 28
-# Ocean, non-continguous partns 0.21 28 32 56 90 28
+# Ocean, non-contiguous partns 0.21 28 32 56 90 28
# Radiosity 2.33 175 62 60 485 163
# Radix 2.81 17 15 90 222 56
# Raytrace 1.65 233 29 88 172 53
@@ -131,6 +136,24 @@
# Hardware: dual-core Intel Core2 Duo E6750, 2.66 GHz, 4 MB L2 cache, 2 GB RAM.
# Software: openSUSE 10.3, 64-bit, gcc 4.3.1, runlevel 5, X screensaver: blank
##############################################################################
+# Results (-p4): native DRD DRD HG ITC ITC
+# (-p1) (-p2) (-p4) w/ filter w/ filter
+# ............................................................................
+# Cholesky 0.21 0.14 4.62 4 3 2 239 82
+# FFT 0.12 0.08 0.07 138 72 380 90 41
+# LU, contiguous 0.57 0.34 0.34 92 88 96 428 128
+# LU, non-contiguous 0.59 0.32 0.35 116 118 60 428 128
+# Ocean, contiguous 14.49 9.73 9.70 65 53 .. 90 28
+# Ocean, non-contiguous 0.21 0.12 0.12 60 65 105 90 28
+# Radiosity 2.33 2.32 2.33 177 62 60 485 163
+# Radix 2.81 1.45 1.46 37 35 171 222 56
+# Raytrace 1.65 1.64 1.64 235 55 89 172 53
+# Water-n2 0.14 0.12 0.12 129 36 55 189 39
+# Water-sp 0.14 0.13 0.12 124 36 54 183 34
+# ............................................................................
+# Hardware: dual-core Intel Core2 Duo E6750, 2.66 GHz, 4 MB L2 cache, 2 GB RAM.
+# Software: openSUSE 10.3, 64-bit, gcc 4.3.1, runlevel 5, X screensaver: blank
+##############################################################################
cache_size=$(($(get_cache_size)/2))
log2_cache_size=$(log2 ${cache_size})
|
From: Tom H. <th...@cy...> - 2008-06-15 02:59:07
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-06-15 03:20:10 BST
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 437 tests, 7 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 437 tests, 6 stderr failures, 1 stdout failure, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Jun 15 03:39:58 2008
--- new.short Sun Jun 15 03:59:13 2008
***************
*** 8,10 ****
! == 437 tests, 6 stderr failures, 1 stdout failure, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
--- 8,10 ----
! == 437 tests, 7 stderr failures, 1 stdout failure, 0 post failures ==
  memcheck/tests/malloc_free_fill (stderr)
***************
*** 15,16 ****
--- 15,17 ----
  helgrind/tests/tc20_verifywrap (stderr)
+ helgrind/tests/tc21_pthonce (stderr)
  helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-06-15 02:42:02
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-06-15 03:25:03 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 435 tests, 7 stderr failures, 5 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/mremap2 (stdout)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-06-15 02:40:33
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-06-15 03:05:10 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 431 tests, 4 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
From: Tom H. <th...@cy...> - 2008-06-15 02:37:45
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-06-15 03:10:04 BST
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 431 tests, 7 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 431 tests, 7 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Jun 15 03:23:58 2008
--- new.short Sun Jun 15 03:37:50 2008
***************
*** 8,10 ****
! == 431 tests, 7 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 431 tests, 7 stderr failures, 3 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
***************
*** 14,15 ****
--- 14,16 ----
  none/tests/mremap2 (stdout)
+ none/tests/pth_cvsimple (stdout)
  helgrind/tests/tc18_semabuse (stderr)
From: Tom H. <th...@cy...> - 2008-06-15 02:28:20
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-06-15 03:00:06 BST
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 437 tests, 30 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/origin5-bz2 (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)