From: <sv...@va...> - 2009-02-15 23:38:30
Author: njn
Date: 2009-02-15 23:38:24 +0000 (Sun, 15 Feb 2009)
New Revision: 9175
Log:
Added 'os_test' and 'platform_test' scripts that ensure that inappropriate
OS- and platform-specific regtests are not run. This avoids 11 regtest
failures on Darwin.
Renamed 'cputest' to 'arch_test' to be consistent with 'os_test' and
'platform_test'.
Added:
branches/DARWIN/tests/arch_test.c
branches/DARWIN/tests/os_test.in
branches/DARWIN/tests/platform_test
Removed:
branches/DARWIN/tests/cputest.c
Modified:
branches/DARWIN/configure.in
branches/DARWIN/docs/internals/porting-HOWTO.txt
branches/DARWIN/docs/internals/porting-to-ARM.txt
branches/DARWIN/memcheck/tests/x86/fxsave.vgtest
branches/DARWIN/memcheck/tests/x86/sse1_memory.vgtest
branches/DARWIN/memcheck/tests/x86/sse2_memory.vgtest
branches/DARWIN/memcheck/tests/x86/xor-undef-x86.vgtest
branches/DARWIN/none/tests/amd64/insn_sse3.vgtest
branches/DARWIN/none/tests/amd64/insn_ssse3.vgtest
branches/DARWIN/none/tests/amd64/ssse3_misaligned.vgtest
branches/DARWIN/none/tests/x86/bug137714-x86.vgtest
branches/DARWIN/none/tests/x86/cse_fail.vgtest
branches/DARWIN/none/tests/x86/insn_cmov.vgtest
branches/DARWIN/none/tests/x86/insn_fpu.vgtest
branches/DARWIN/none/tests/x86/insn_mmx.vgtest
branches/DARWIN/none/tests/x86/insn_mmxext.vgtest
branches/DARWIN/none/tests/x86/insn_sse.vgtest
branches/DARWIN/none/tests/x86/insn_sse2.vgtest
branches/DARWIN/none/tests/x86/insn_sse3.vgtest
branches/DARWIN/none/tests/x86/insn_ssse3.vgtest
branches/DARWIN/none/tests/x86/ssse3_misaligned.vgtest
branches/DARWIN/tests/Makefile.am
branches/DARWIN/tests/vg_regtest.in
Modified: branches/DARWIN/configure.in
===================================================================
--- branches/DARWIN/configure.in 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/configure.in 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1783,6 +1783,7 @@
docs/xml/Makefile
tests/Makefile
tests/vg_regtest
+ tests/os_test
perf/Makefile
perf/vg_perf
include/Makefile
Modified: branches/DARWIN/docs/internals/porting-HOWTO.txt
===================================================================
--- branches/DARWIN/docs/internals/porting-HOWTO.txt 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/docs/internals/porting-HOWTO.txt 2009-02-15 23:38:24 UTC (rev 9175)
@@ -68,8 +68,9 @@
Once it runs ok:
-- Add the cpu to the tests/cputest.c file so the reg test script will work.
- (Don't forget to add it to all_archs[].)
+- Add the arch to the tests/arch_test.c file so the reg test script will work.
+ (Don't forget to add it to all_archs[].) Likewise for os_test.in and
+ platform_test.
- Ensure the regression tests work, and add some arch-specific tests to
none/tests directory.
Modified: branches/DARWIN/docs/internals/porting-to-ARM.txt
===================================================================
--- branches/DARWIN/docs/internals/porting-to-ARM.txt 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/docs/internals/porting-to-ARM.txt 2009-02-15 23:38:24 UTC (rev 9175)
@@ -156,7 +156,7 @@
# error VG_UCONTEXT_SYSCALL_RET undefined for ARM/Linux
=============================================================================
-From tests/cputest.c
+From tests/arch_test.c
=============================================================================
- You'll need to add "arm" to all_archs[].
Modified: branches/DARWIN/memcheck/tests/x86/fxsave.vgtest
===================================================================
--- branches/DARWIN/memcheck/tests/x86/fxsave.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/memcheck/tests/x86/fxsave.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,4 +1,4 @@
prog: fxsave
-prereq: ../../../tests/cputest x86-sse
+prereq: ../../../tests/arch_test x86-sse
vgopts: -q
args: x
Modified: branches/DARWIN/memcheck/tests/x86/sse1_memory.vgtest
===================================================================
--- branches/DARWIN/memcheck/tests/x86/sse1_memory.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/memcheck/tests/x86/sse1_memory.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,4 +1,4 @@
prog: sse_memory
vgopts: -q
args: sse1
-prereq: ../../../tests/cputest x86-sse
+prereq: ../../../tests/arch_test x86-sse
Modified: branches/DARWIN/memcheck/tests/x86/sse2_memory.vgtest
===================================================================
--- branches/DARWIN/memcheck/tests/x86/sse2_memory.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/memcheck/tests/x86/sse2_memory.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,4 +1,4 @@
prog: sse_memory
vgopts: -q
args: sse2
-prereq: ../../../tests/cputest x86-sse2
+prereq: ../../../tests/arch_test x86-sse2
Modified: branches/DARWIN/memcheck/tests/x86/xor-undef-x86.vgtest
===================================================================
--- branches/DARWIN/memcheck/tests/x86/xor-undef-x86.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/memcheck/tests/x86/xor-undef-x86.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,2 +1,2 @@
prog: xor-undef-x86
-prereq: ../../../tests/cputest x86-sse
+prereq: ../../../tests/arch_test x86-sse
Modified: branches/DARWIN/none/tests/amd64/insn_sse3.vgtest
===================================================================
--- branches/DARWIN/none/tests/amd64/insn_sse3.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/amd64/insn_sse3.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/amd64/insn_sse3
-prereq: ../../../tests/cputest amd64-sse3
+prereq: ../../../tests/arch_test amd64-sse3
vgopts: -q
Modified: branches/DARWIN/none/tests/amd64/insn_ssse3.vgtest
===================================================================
--- branches/DARWIN/none/tests/amd64/insn_ssse3.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/amd64/insn_ssse3.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/amd64/insn_ssse3
-prereq: ../../../tests/cputest amd64-sse3
+prereq: ../../../tests/arch_test amd64-sse3
vgopts: -q
Modified: branches/DARWIN/none/tests/amd64/ssse3_misaligned.vgtest
===================================================================
--- branches/DARWIN/none/tests/amd64/ssse3_misaligned.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/amd64/ssse3_misaligned.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ssse3_misaligned
-prereq: ../../../tests/cputest amd64-sse3
+prereq: ../../../tests/arch_test amd64-sse3
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/bug137714-x86.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/bug137714-x86.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/bug137714-x86.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: bug137714-x86
-prereq: ../../../tests/cputest x86-sse2
+prereq: ../../../tests/arch_test x86-sse2
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/cse_fail.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/cse_fail.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/cse_fail.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: cse_fail
-prereq: ../../../tests/cputest x86-sse
+prereq: ../../../tests/arch_test x86-sse
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_cmov.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_cmov.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_cmov.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_cmov
-prereq: ../../../tests/cputest x86-cmov
+prereq: ../../../tests/arch_test x86-cmov
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_fpu.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_fpu.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_fpu.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_fpu
-prereq: ../../../tests/cputest x86-fpu
+prereq: ../../../tests/arch_test x86-fpu
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_mmx.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_mmx.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_mmx.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_mmx
-prereq: ../../../tests/cputest x86-mmx
+prereq: ../../../tests/arch_test x86-mmx
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_mmxext.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_mmxext.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_mmxext.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_mmxext
-prereq: ../../../tests/cputest x86-mmxext
+prereq: ../../../tests/arch_test x86-mmxext
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_sse.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_sse.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_sse.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_sse
-prereq: ../../../tests/cputest x86-sse
+prereq: ../../../tests/arch_test x86-sse
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_sse2.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_sse2.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_sse2.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_sse2
-prereq: ../../../tests/cputest x86-sse2
+prereq: ../../../tests/arch_test x86-sse2
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_sse3.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_sse3.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_sse3.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_sse3
-prereq: ../../../tests/cputest x86-sse3
+prereq: ../../../tests/arch_test x86-sse3
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/insn_ssse3.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/insn_ssse3.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/insn_ssse3.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ../../../none/tests/x86/insn_ssse3
-prereq: ../../../tests/cputest x86-sse3
+prereq: ../../../tests/arch_test x86-sse3
vgopts: -q
Modified: branches/DARWIN/none/tests/x86/ssse3_misaligned.vgtest
===================================================================
--- branches/DARWIN/none/tests/x86/ssse3_misaligned.vgtest 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/none/tests/x86/ssse3_misaligned.vgtest 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,3 +1,3 @@
prog: ssse3_misaligned
-prereq: ../../../tests/cputest x86-sse3
+prereq: ../../../tests/arch_test x86-sse3
vgopts: -q
Modified: branches/DARWIN/tests/Makefile.am
===================================================================
--- branches/DARWIN/tests/Makefile.am 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/tests/Makefile.am 2009-02-15 23:38:24 UTC (rev 9175)
@@ -3,19 +3,21 @@
include $(top_srcdir)/Makefile.flags.am
noinst_SCRIPTS = \
- vg_regtest \
filter_addresses \
filter_discards \
filter_libc \
filter_numbers \
filter_stderr_basic \
filter_sink \
- filter_test_paths
+ filter_test_paths \
+ os_test \
+ platform_test \
+ vg_regtest
EXTRA_DIST = $(noinst_SCRIPTS)
check_PROGRAMS = \
- cputest \
+ arch_test \
toobig-allocs \
true
@@ -23,7 +25,7 @@
AM_CXXFLAGS = $(AM_CFLAGS)
# generic C ones
-cputest_CFLAGS = $(AM_CFLAGS) \
+arch_test_CFLAGS = $(AM_CFLAGS) \
-DVGA_$(VGCONF_ARCH_PRI)=1 \
-DVGO_$(VGCONF_OS)=1 \
-DVGP_$(VGCONF_ARCH_PRI)_$(VGCONF_OS)=1
Copied: branches/DARWIN/tests/arch_test.c (from rev 9173, branches/DARWIN/tests/cputest.c)
===================================================================
--- branches/DARWIN/tests/arch_test.c (rev 0)
+++ branches/DARWIN/tests/arch_test.c 2009-02-15 23:38:24 UTC (rev 9175)
@@ -0,0 +1,207 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+// This file determines which architectures that this Valgrind installation
+// supports, which depends on the machine's architecture. It also depends
+// on the configuration options; for example, if Valgrind is installed on
+// an AMD64 machine but has been configured with --enable-only32bit then
+// this program will not match "amd64".
+//
+// We return:
+// - 0 if the machine matches the asked-for cpu
+// - 1 if it didn't match, but did match the name of another arch
+// - 2 otherwise
+
+// Nb: When updating this file for a new architecture, add the name to
+// 'all_archs' as well as adding go().
+
+#define False 0
+#define True 1
+typedef int Bool;
+
+char* all_archs[] = {
+ "amd64",
+ "ppc32",
+ "ppc64",
+ "x86",
+ NULL
+};
+
+//-----------------------------------------------------------------------------
+// ppc32-linux
+//---------------------------------------------------------------------------
+#if defined(VGP_ppc32_linux)
+static Bool go(char* cpu)
+{
+ if ( strcmp( cpu, "ppc32" ) == 0 )
+ return True;
+ return False;
+}
+#endif // VGP_ppc32_linux
+
+//---------------------------------------------------------------------------
+// ppc64-linux
+//---------------------------------------------------------------------------
+#if defined(VGP_ppc64_linux)
+static Bool go(char* cpu)
+{
+ if ( strcmp( cpu, "ppc64" ) == 0 )
+ return True;
+ if ( strcmp( cpu, "ppc32" ) == 0 )
+ return True;
+ return False;
+}
+#endif // VGP_ppc64_linux
+
+//---------------------------------------------------------------------------
+// ppc{32,64}-aix
+//---------------------------------------------------------------------------
+#if defined(VGP_ppc32_aix5) || defined(VGP_ppc64_aix5)
+static Bool go(char* cpu)
+{
+ if (sizeof(void*) == 8) {
+ /* cpu is in 64-bit mode */
+ if ( strcmp( cpu, "ppc64" ) == 0 )
+ return True;
+ if ( strcmp( cpu, "ppc32" ) == 0 )
+ return True;
+ } else {
+ if ( strcmp( cpu, "ppc32" ) == 0 )
+ return True;
+ }
+ return False;
+}
+#endif // VGP_ppc32_aix5 || VGP_ppc64_aix5
+
+//---------------------------------------------------------------------------
+// {x86,amd64}-linux (part 1 of 2)
+//---------------------------------------------------------------------------
+#if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
+static void cpuid ( unsigned int n,
+ unsigned int* a, unsigned int* b,
+ unsigned int* c, unsigned int* d )
+{
+ __asm__ __volatile__ (
+ "cpuid"
+ : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d) /* output */
+ : "0" (n) /* input */
+ );
+}
+#endif // VGP_x86_linux || VGP_amd64_linux
+
+//---------------------------------------------------------------------------
+// {x86,amd64}-darwin (part 1 of 2)
+//---------------------------------------------------------------------------
+#if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
+static void cpuid ( unsigned int n,
+ unsigned int* a, unsigned int* b,
+ unsigned int* c, unsigned int* d )
+{
+ __asm__ __volatile__ (
+ "pushl %%eax\n\t"
+ "pushl %%ebx\n\t"
+ "pushl %%ecx\n\t"
+ "pushl %%edx\n\t"
+ "movl %4, %%eax\n\t"
+ "cpuid\n\t"
+ "movl %%eax,%0\n\t"
+ "movl %%ebx,%1\n\t"
+ "movl %%ecx,%2\n\t"
+ "movl %%edx,%3\n\t"
+ "popl %%edx\n\t"
+ "popl %%ecx\n\t"
+ "popl %%ebx\n\t"
+ "popl %%eax\n\t"
+ : "=m" (*a), "=m" (*b), "=m" (*c), "=m" (*d)
+ : "mr" (n)
+ );
+}
+#endif // VGP_x86_darwin || VGP_amd64_darwin
+
+//---------------------------------------------------------------------------
+// {x86,amd64}-{linux,darwin} (part 2 of 2)
+//---------------------------------------------------------------------------
+#if defined(VGP_x86_linux) || defined(VGP_amd64_linux) || \
+ defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
+static Bool go(char* cpu)
+{
+ unsigned int level = 0, cmask = 0, dmask = 0, a, b, c, d;
+
+ if ( strcmp( cpu, "x86" ) == 0 ) {
+ return True;
+ } else if ( strcmp( cpu, "x86-fpu" ) == 0 ) {
+ level = 1;
+ dmask = 1 << 0;
+ } else if ( strcmp( cpu, "x86-cmov" ) == 0 ) {
+ level = 1;
+ dmask = 1 << 15;
+ } else if ( strcmp( cpu, "x86-mmx" ) == 0 ) {
+ level = 1;
+ dmask = 1 << 23;
+ } else if ( strcmp( cpu, "x86-mmxext" ) == 0 ) {
+ level = 0x80000001;
+ dmask = 1 << 22;
+ } else if ( strcmp( cpu, "x86-sse" ) == 0 ) {
+ level = 1;
+ dmask = 1 << 25;
+ } else if ( strcmp( cpu, "x86-sse2" ) == 0 ) {
+ level = 1;
+ dmask = 1 << 26;
+ } else if ( strcmp( cpu, "x86-sse3" ) == 0 ) {
+ level = 1;
+ cmask = 1 << 0;
+ } else if ( strcmp( cpu, "x86-ssse3" ) == 0 ) {
+ level = 1;
+ cmask = 1 << 9;
+#if defined(VGA_amd64)
+ } else if ( strcmp( cpu, "amd64" ) == 0 ) {
+ return True;
+ } else if ( strcmp( cpu, "amd64-sse3" ) == 0 ) {
+ level = 1;
+ cmask = 1 << 0;
+ } else if ( strcmp( cpu, "amd64-ssse3" ) == 0 ) {
+ level = 1;
+ cmask = 1 << 9;
+#endif
+ } else {
+ return False;
+ }
+
+ assert( !(cmask != 0 && dmask != 0) );
+ assert( !(cmask == 0 && dmask == 0) );
+
+ cpuid( level & 0x80000000, &a, &b, &c, &d );
+
+ if ( a >= level ) {
+ cpuid( level, &a, &b, &c, &d );
+
+ if (dmask > 0 && (d & dmask) != 0) return True;
+ if (cmask > 0 && (c & cmask) != 0) return True;
+ }
+ return False;
+}
+#endif // VGP_x86_linux || VGP_amd64_linux ||
+ // VGP_x86_darwin || VGP_amd64_darwin
+
+
+//---------------------------------------------------------------------------
+// main
+//---------------------------------------------------------------------------
+int main(int argc, char **argv)
+{
+ int i;
+ if ( argc != 2 ) {
+ fprintf( stderr, "usage: arch_test <cpu-type>\n" );
+ exit( 2 );
+ }
+ if (go( argv[1] )) {
+ return 0; // matched
+ }
+ for (i = 0; NULL != all_archs[i]; i++) {
+ if ( strcmp( argv[1], all_archs[i] ) == 0 )
+ return 1;
+ }
+ return 2;
+}
Deleted: branches/DARWIN/tests/cputest.c
===================================================================
--- branches/DARWIN/tests/cputest.c 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/tests/cputest.c 2009-02-15 23:38:24 UTC (rev 9175)
@@ -1,207 +0,0 @@
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <assert.h>
-
-// This file determines which architectures that this Valgrind installation
-// supports, which depends on the machine's architecture. It also depends
-// on the configuration options; for example, if Valgrind is installed on
-// an AMD64 machine but has been configured with --enable-only32bit then
-// this program will not match "amd64".
-//
-// We return:
-// - 0 if the machine matches the asked-for cpu
-// - 1 if it didn't match, but did match the name of another arch
-// - 2 otherwise
-
-// Nb: When updating this file for a new architecture, add the name to
-// 'all_archs' as well as adding go().
-
-#define False 0
-#define True 1
-typedef int Bool;
-
-char* all_archs[] = {
- "amd64",
- "ppc32",
- "ppc64",
- "x86",
- NULL
-};
-
-//-----------------------------------------------------------------------------
-// ppc32-linux
-//---------------------------------------------------------------------------
-#if defined(VGP_ppc32_linux)
-static Bool go(char* cpu)
-{
- if ( strcmp( cpu, "ppc32" ) == 0 )
- return True;
- return False;
-}
-#endif // VGP_ppc32_linux
-
-//---------------------------------------------------------------------------
-// ppc64-linux
-//---------------------------------------------------------------------------
-#if defined(VGP_ppc64_linux)
-static Bool go(char* cpu)
-{
- if ( strcmp( cpu, "ppc64" ) == 0 )
- return True;
- if ( strcmp( cpu, "ppc32" ) == 0 )
- return True;
- return False;
-}
-#endif // VGP_ppc64_linux
-
-//---------------------------------------------------------------------------
-// ppc{32,64}-aix
-//---------------------------------------------------------------------------
-#if defined(VGP_ppc32_aix5) || defined(VGP_ppc64_aix5)
-static Bool go(char* cpu)
-{
- if (sizeof(void*) == 8) {
- /* cpu is in 64-bit mode */
- if ( strcmp( cpu, "ppc64" ) == 0 )
- return True;
- if ( strcmp( cpu, "ppc32" ) == 0 )
- return True;
- } else {
- if ( strcmp( cpu, "ppc32" ) == 0 )
- return True;
- }
- return False;
-}
-#endif // VGP_ppc32_aix5 || VGP_ppc64_aix5
-
-//---------------------------------------------------------------------------
-// {x86,amd64}-linux (part 1 of 2)
-//---------------------------------------------------------------------------
-#if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
-static void cpuid ( unsigned int n,
- unsigned int* a, unsigned int* b,
- unsigned int* c, unsigned int* d )
-{
- __asm__ __volatile__ (
- "cpuid"
- : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d) /* output */
- : "0" (n) /* input */
- );
-}
-#endif // VGP_x86_linux || VGP_amd64_linux
-
-//---------------------------------------------------------------------------
-// {x86,amd64}-darwin (part 1 of 2)
-//---------------------------------------------------------------------------
-#if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
-static void cpuid ( unsigned int n,
- unsigned int* a, unsigned int* b,
- unsigned int* c, unsigned int* d )
-{
- __asm__ __volatile__ (
- "pushl %%eax\n\t"
- "pushl %%ebx\n\t"
- "pushl %%ecx\n\t"
- "pushl %%edx\n\t"
- "movl %4, %%eax\n\t"
- "cpuid\n\t"
- "movl %%eax,%0\n\t"
- "movl %%ebx,%1\n\t"
- "movl %%ecx,%2\n\t"
- "movl %%edx,%3\n\t"
- "popl %%edx\n\t"
- "popl %%ecx\n\t"
- "popl %%ebx\n\t"
- "popl %%eax\n\t"
- : "=m" (*a), "=m" (*b), "=m" (*c), "=m" (*d)
- : "mr" (n)
- );
-}
-#endif // VGP_x86_darwin || VGP_amd64_darwin
-
-//---------------------------------------------------------------------------
-// {x86,amd64}-{linux,darwin} (part 2 of 2)
-//---------------------------------------------------------------------------
-#if defined(VGP_x86_linux) || defined(VGP_amd64_linux) || \
- defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
-static Bool go(char* cpu)
-{
- unsigned int level = 0, cmask = 0, dmask = 0, a, b, c, d;
-
- if ( strcmp( cpu, "x86" ) == 0 ) {
- return True;
- } else if ( strcmp( cpu, "x86-fpu" ) == 0 ) {
- level = 1;
- dmask = 1 << 0;
- } else if ( strcmp( cpu, "x86-cmov" ) == 0 ) {
- level = 1;
- dmask = 1 << 15;
- } else if ( strcmp( cpu, "x86-mmx" ) == 0 ) {
- level = 1;
- dmask = 1 << 23;
- } else if ( strcmp( cpu, "x86-mmxext" ) == 0 ) {
- level = 0x80000001;
- dmask = 1 << 22;
- } else if ( strcmp( cpu, "x86-sse" ) == 0 ) {
- level = 1;
- dmask = 1 << 25;
- } else if ( strcmp( cpu, "x86-sse2" ) == 0 ) {
- level = 1;
- dmask = 1 << 26;
- } else if ( strcmp( cpu, "x86-sse3" ) == 0 ) {
- level = 1;
- cmask = 1 << 0;
- } else if ( strcmp( cpu, "x86-ssse3" ) == 0 ) {
- level = 1;
- cmask = 1 << 9;
-#if defined(VGA_amd64)
- } else if ( strcmp( cpu, "amd64" ) == 0 ) {
- return True;
- } else if ( strcmp( cpu, "amd64-sse3" ) == 0 ) {
- level = 1;
- cmask = 1 << 0;
- } else if ( strcmp( cpu, "amd64-ssse3" ) == 0 ) {
- level = 1;
- cmask = 1 << 9;
-#endif
- } else {
- return False;
- }
-
- assert( !(cmask != 0 && dmask != 0) );
- assert( !(cmask == 0 && dmask == 0) );
-
- cpuid( level & 0x80000000, &a, &b, &c, &d );
-
- if ( a >= level ) {
- cpuid( level, &a, &b, &c, &d );
-
- if (dmask > 0 && (d & dmask) != 0) return True;
- if (cmask > 0 && (c & cmask) != 0) return True;
- }
- return False;
-}
-#endif // VGP_x86_linux || VGP_amd64_linux ||
- // VGP_x86_darwin || VGP_amd64_darwin
-
-
-//---------------------------------------------------------------------------
-// main
-//---------------------------------------------------------------------------
-int main(int argc, char **argv)
-{
- int i;
- if ( argc != 2 ) {
- fprintf( stderr, "usage: cputest <cpu-type>\n" );
- exit( 2 );
- }
- if (go( argv[1] )) {
- return 0; // matched
- }
- for (i = 0; NULL != all_archs[i]; i++) {
- if ( strcmp( argv[1], all_archs[i] ) == 0 )
- return 1;
- }
- return 2;
-}
Added: branches/DARWIN/tests/os_test.in
===================================================================
--- branches/DARWIN/tests/os_test.in (rev 0)
+++ branches/DARWIN/tests/os_test.in 2009-02-15 23:38:24 UTC (rev 9175)
@@ -0,0 +1,29 @@
+#! /bin/sh
+
+# This script determines which OSes that this Valgrind installation
+# supports, which depends on the machine's OS.
+# We return:
+# - 0 if the machine matches the asked-for OS
+# - 1 if it didn't match, but did match the name of another OS
+# - 2 otherwise
+
+# Nb: When updating this file for a new OS, add the name to 'all_OSes'.
+
+all_OSes="linux aix5 darwin"
+
+if [ $# -ne 1 ] ; then
+ echo "usage: os_test <os-type>"
+ exit 2;
+fi
+
+if [ $1 = @VGCONF_OS@ ] ; then
+ exit 0; # Matches this OS.
+fi
+
+for os in $all_OSes ; do
+ if [ $1 = $os ] ; then
+ exit 1; # Matches another Valgrind-supported OS.
+ fi
+done
+
+exit 2; # Doesn't match any Valgrind-supported OS.
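The exit-code convention used by the new `os_test` script (0 = matches this OS, 1 = another supported OS, 2 = unknown) can be exercised with a self-contained sketch. This is a hypothetical stand-alone rewrite for illustration only: the real script gets `@VGCONF_OS@` substituted by configure, whereas here `linux` is hard-coded as an assumed build OS.

```shell
#!/bin/sh
# Sketch of os_test's logic with @VGCONF_OS@ pre-substituted.
# "linux" is an assumption for illustration, not the real configure output.
VGCONF_OS=linux
all_OSes="linux aix5 darwin"

os_test() {
    # 0: matches the OS this installation was configured for.
    [ "$1" = "$VGCONF_OS" ] && return 0
    # 1: a different, but still Valgrind-supported, OS.
    for os in $all_OSes; do
        [ "$1" = "$os" ] && return 1
    done
    # 2: not a Valgrind-supported OS at all.
    return 2
}

os_test linux;  echo "linux  -> $?"
os_test darwin; echo "darwin -> $?"
os_test beos;   echo "beos   -> $?"
```

`vg_regtest` treats only exit code 1 as "skip this directory"; exit code 2 means the name is not an OS name at all, so the directory is still recursed into.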
Added: branches/DARWIN/tests/platform_test
===================================================================
--- branches/DARWIN/tests/platform_test (rev 0)
+++ branches/DARWIN/tests/platform_test 2009-02-15 23:38:24 UTC (rev 9175)
@@ -0,0 +1,37 @@
+#! /bin/sh
+
+# This script determines which platforms that this Valgrind installation
+# supports.
+# We return:
+# - 0 if the machine matches the asked-for platform
+# - 1 if it didn't match, but did match the name of another platform
+# - 2 otherwise
+
+# Nb: When updating this file for a new platform, add the name to
+# 'all_platforms'.
+
+all_platforms=
+all_platforms="$all_platforms x86-linux amd64-linux ppc32-linux ppc64-linux"
+all_platforms="$all_platforms ppc32-aix5 ppc64-aix5"
+all_platforms="$all_platforms x86-darwin amd64-darwin"
+
+if [ $# -ne 2 ] ; then
+ echo "usage: platform_test <arch-type> <OS-type>"
+ exit 2;
+fi
+
+# Get the directory holding the arch_test and os_test, which will be the same
+# as the one holding this script.
+dir=`dirname $0`
+
+if $dir/arch_test $1 && sh $dir/os_test $2 ; then
+ exit 0; # Matches this platform.
+fi
+
+for p in $all_platforms ; do
+ if [ $1-$2 = $p ] ; then
+ exit 1; # Matches another Valgrind-supported platform.
+ fi
+done
+
+exit 2; # Doesn't match any Valgrind-supported platform.
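`platform_test` composes its two arguments as `arch-os` and checks that string against the supported-platform list. The sketch below isolates just that fallback matching step; the preceding `arch_test`/`os_test` invocation is deliberately omitted so the example is self-contained, and `platform_match` is a hypothetical helper name.

```shell
#!/bin/sh
# Sketch of platform_test's fallback matching: join the arch and OS
# arguments as "arch-os" and compare against the supported platforms.
all_platforms="x86-linux amd64-linux ppc32-linux ppc64-linux"
all_platforms="$all_platforms ppc32-aix5 ppc64-aix5 x86-darwin amd64-darwin"

platform_match() {
    for p in $all_platforms; do
        # 1: a Valgrind-supported platform (though not this machine's).
        [ "$1-$2" = "$p" ] && return 1
    done
    # 2: not a Valgrind-supported platform.
    return 2
}

platform_match ppc64 aix5; echo "ppc64-aix5 -> $?"
platform_match mips  linux; echo "mips-linux -> $?"
```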
Modified: branches/DARWIN/tests/vg_regtest.in
===================================================================
--- branches/DARWIN/tests/vg_regtest.in 2009-02-15 22:04:20 UTC (rev 9174)
+++ branches/DARWIN/tests/vg_regtest.in 2009-02-15 23:38:24 UTC (rev 9175)
@@ -379,14 +379,18 @@
# Ignore dirs into which we should not recurse.
if ($dir =~ /^(BitKeeper|CVS|SCCS|docs|doc)$/) { return; }
- (-x "$tests_dir/tests/cputest") or die
- "vg_regtest: 'cputest' is missing. Did you forget to 'make check'?\n";
+ (-x "$tests_dir/tests/arch_test") or die
+ "vg_regtest: 'arch_test' is missing. Did you forget to 'make check'?\n";
# Ignore any dir whose name matches that of an architecture which is not
- # the architecture we are running on (eg. when running on x86, ignore ppc/
- # directories).
+ # the architecture we are running on. Eg. when running on x86, ignore
+ # ppc/ directories ('arch_test' returns 1 for this case). Likewise for
+ # the OS and platform.
# Nb: weird Perl-ism -- exit code of '1' is seen by Perl as 256...
- if (256 == system("$tests_dir/tests/cputest $dir")) { return; }
+ if (256 == system( "$tests_dir/tests/arch_test $dir")) { return; }
+ if (256 == system("sh $tests_dir/tests/os_test $dir")) { return; }
+ if ($dir =~ /(\w+)-(\w+)/ &&
+ 256 == system("sh $tests_dir/tests/platform_test $1 $2")) { return; }
chdir($dir) or die "Could not change into $dir\n";
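The "weird Perl-ism" noted in the diff — Perl's `system` returns the raw wait status, so an exit code of 1 shows up as 256 — can be demonstrated from the shell side. The `check` helper below is hypothetical, and `sh -c "exit N"` stands in for `arch_test`/`os_test`.

```shell
#!/bin/sh
# Demonstrates the exit codes vg_regtest inspects. In the shell, $? is the
# plain exit code; Perl's system() instead returns exit_code << 8, which is
# why vg_regtest compares against 256 rather than 1.
check() {
    sh -c "exit $1"          # stand-in for arch_test / os_test
    status=$?
    echo "exit=$status raw_wait_status=$((status * 256))"
}
check 0
check 1
check 2
```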
From: <sv...@va...> - 2009-02-15 22:21:06
Author: njn
Date: 2009-02-15 22:04:20 +0000 (Sun, 15 Feb 2009)
New Revision: 9174
Log:
Clarify a comment.
Modified:
branches/DARWIN/none/tests/Makefile.am
Modified: branches/DARWIN/none/tests/Makefile.am
===================================================================
--- branches/DARWIN/none/tests/Makefile.am 2009-02-15 16:18:03 UTC (rev 9173)
+++ branches/DARWIN/none/tests/Makefile.am 2009-02-15 22:04:20 UTC (rev 9174)
@@ -162,8 +162,12 @@
tls tls.so tls2.so vgprintf \
coolo_sigaction gxx304
-# DDD: not sure if these ones should work on Darwin or not... if not, should
-# be moved into linux/. ('pending' probably should, it uses signals.)
+# DDD:
+# - async-sigs and fdleak_ipv4 both build and run under Valgrind (although
+# they die)... I think they are disabled because they currently screw up
+# other tests.
+# - 'pending' doesn't build because Darwin lacks sigwaitinfo [should
+# probably do a configure-time check for it]
if ! VGCONF_OS_IS_DARWIN
check_PROGRAMS += \
async-sigs \
From: <sv...@va...> - 2009-02-15 16:18:07
Author: bart
Date: 2009-02-15 16:18:03 +0000 (Sun, 15 Feb 2009)
New Revision: 9173
Log:
drd_bitmap_test compiles again.
Modified:
trunk/drd/drd_bitmap.c
trunk/drd/tests/drd_bitmap_test.c
Modified: trunk/drd/drd_bitmap.c
===================================================================
--- trunk/drd/drd_bitmap.c 2009-02-15 15:59:20 UTC (rev 9172)
+++ trunk/drd/drd_bitmap.c 2009-02-15 16:18:03 UTC (rev 9173)
@@ -23,17 +23,18 @@
*/
-#include "pub_tool_basics.h" // Addr, SizeT
-#include "pub_tool_debuginfo.h" // VG_(get_objname)()
-#include "pub_tool_libcassert.h" // tl_assert()
-#include "pub_tool_libcbase.h" // VG_(memset)
-#include "pub_tool_libcprint.h" // VG_(printf)
-#include "pub_tool_machine.h" // VG_(get_IP)()
-#include "pub_tool_mallocfree.h" // VG_(malloc), VG_(free)
-#include "pub_drd_bitmap.h"
+#include "drd_basics.h" /* DRD_() */
#include "drd_bitmap.h"
#include "drd_error.h"
#include "drd_suppression.h"
+#include "pub_drd_bitmap.h"
+#include "pub_tool_basics.h" /* Addr, SizeT */
+#include "pub_tool_debuginfo.h" /* VG_(get_objname)() */
+#include "pub_tool_libcassert.h" /* tl_assert() */
+#include "pub_tool_libcbase.h" /* VG_(memset) */
+#include "pub_tool_libcprint.h" /* VG_(printf) */
+#include "pub_tool_machine.h" /* VG_(get_IP)() */
+#include "pub_tool_mallocfree.h" /* VG_(malloc), VG_(free) */
/* Forward declarations. */
Modified: trunk/drd/tests/drd_bitmap_test.c
===================================================================
--- trunk/drd/tests/drd_bitmap_test.c 2009-02-15 15:59:20 UTC (rev 9172)
+++ trunk/drd/tests/drd_bitmap_test.c 2009-02-15 16:18:03 UTC (rev 9173)
@@ -72,48 +72,48 @@
struct bitmap* bm2;
unsigned i, j;
- bm = bm_new();
+ bm = DRD_(bm_new)();
for (i = 0; i < sizeof(s_test1_args)/sizeof(s_test1_args[0]); i++)
{
- bm_access_range(bm,
- s_test1_args[i].address,
- s_test1_args[i].address + s_test1_args[i].size,
- s_test1_args[i].access_type);
+ DRD_(bm_access_range)(bm,
+ s_test1_args[i].address,
+ s_test1_args[i].address + s_test1_args[i].size,
+ s_test1_args[i].access_type);
}
if (s_verbose)
{
VG_(printf)("Bitmap contents:\n");
- bm_print(bm);
+ DRD_(bm_print)(bm);
}
for (i = 0; i < sizeof(s_test1_args)/sizeof(s_test1_args[0]); i++)
{
for (j = 0; j < s_test1_args[i].size; j++)
{
- tl_assert(bm_has_1(bm,
- s_test1_args[i].address + j,
- s_test1_args[i].access_type));
+ tl_assert(DRD_(bm_has_1)(bm,
+ s_test1_args[i].address + j,
+ s_test1_args[i].access_type));
}
}
if (s_verbose)
VG_(printf)("Merge result:\n");
- bm2 = bm_new();
- bm_merge2(bm2, bm);
- bm_merge2(bm2, bm);
+ bm2 = DRD_(bm_new)();
+ DRD_(bm_merge2)(bm2, bm);
+ DRD_(bm_merge2)(bm2, bm);
if (s_verbose)
- bm_print(bm2);
+ DRD_(bm_print)(bm2);
//assert(bm_equal(bm, bm2));
- assert(bm_equal(bm2, bm));
+ assert(DRD_(bm_equal)(bm2, bm));
if (s_verbose)
VG_(printf)("Deleting bitmap bm\n");
- bm_delete(bm);
+ DRD_(bm_delete)(bm);
if (s_verbose)
VG_(printf)("Deleting bitmap bm2\n");
- bm_delete(bm2);
+ DRD_(bm_delete)(bm2);
}
/** Test whether bm_equal() works correctly. */
@@ -122,20 +122,20 @@
struct bitmap* bm1;
struct bitmap* bm2;
- bm1 = bm_new();
- bm2 = bm_new();
- bm_access_load_1(bm1, 7);
- bm_access_load_1(bm2, ADDR0_COUNT + 7);
- assert(! bm_equal(bm1, bm2));
- assert(! bm_equal(bm2, bm1));
- bm_access_load_1(bm2, 7);
- assert(! bm_equal(bm1, bm2));
- assert(! bm_equal(bm2, bm1));
- bm_access_store_1(bm1, ADDR0_COUNT + 7);
- assert(! bm_equal(bm1, bm2));
- assert(! bm_equal(bm2, bm1));
- bm_delete(bm2);
- bm_delete(bm1);
+ bm1 = DRD_(bm_new)();
+ bm2 = DRD_(bm_new)();
+ DRD_(bm_access_load_1)(bm1, 7);
+ DRD_(bm_access_load_1)(bm2, ADDR0_COUNT + 7);
+ assert(! DRD_(bm_equal)(bm1, bm2));
+ assert(! DRD_(bm_equal)(bm2, bm1));
+ DRD_(bm_access_load_1)(bm2, 7);
+ assert(! DRD_(bm_equal)(bm1, bm2));
+ assert(! DRD_(bm_equal)(bm2, bm1));
+ DRD_(bm_access_store_1)(bm1, ADDR0_COUNT + 7);
+ assert(! DRD_(bm_equal)(bm1, bm2));
+ assert(! DRD_(bm_equal)(bm2, bm1));
+ DRD_(bm_delete)(bm2);
+ DRD_(bm_delete)(bm1);
}
/** Torture test of the functions that set or clear a range of bits. */
@@ -148,90 +148,90 @@
assert(outer_loop_step >= 1);
assert(inner_loop_step >= 1);
- bm1 = bm_new();
- bm2 = bm_new();
+ bm1 = DRD_(bm_new)();
+ bm2 = DRD_(bm_new)();
for (i = ADDR0_COUNT - 2 * BITS_PER_UWORD;
i < ADDR0_COUNT + 2 * BITS_PER_UWORD;
i += outer_loop_step)
{
for (j = i + 1; j < ADDR0_COUNT + 2 * BITS_PER_UWORD; j += inner_loop_step)
{
- bm_access_range_load(bm1, i, j);
- bm_clear_load(bm1, i, j);
- assert(bm_equal(bm1, bm2));
- bm_access_load_1(bm1, i);
- bm_clear_load(bm1, i, i+1);
- assert(bm_equal(bm1, bm2));
- bm_access_load_2(bm1, i);
- bm_clear_load(bm1, i, i+2);
- assert(bm_equal(bm1, bm2));
- bm_access_load_4(bm1, i);
- bm_clear_load(bm1, i, i+4);
- assert(bm_equal(bm1, bm2));
- bm_access_load_8(bm1, i);
- bm_clear_load(bm1, i, i+8);
- assert(bm_equal(bm1, bm2));
- bm_access_range_store(bm1, i, j);
- bm_clear_store(bm1, i, j);
- assert(bm_equal(bm1, bm2));
- bm_access_store_1(bm1, i);
- bm_clear_store(bm1, i, i + 1);
- assert(bm_equal(bm1, bm2));
- bm_access_store_2(bm1, i);
- bm_clear_store(bm1, i, i + 2);
- assert(bm_equal(bm1, bm2));
- bm_access_store_4(bm1, i);
- bm_clear_store(bm1, i, i + 4);
- assert(bm_equal(bm1, bm2));
- bm_access_store_8(bm1, i);
- bm_clear_store(bm1, i, i + 8);
- assert(bm_equal(bm1, bm2));
+ DRD_(bm_access_range_load)(bm1, i, j);
+ DRD_(bm_clear_load)(bm1, i, j);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_load_1)(bm1, i);
+ DRD_(bm_clear_load)(bm1, i, i+1);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_load_2)(bm1, i);
+ DRD_(bm_clear_load)(bm1, i, i+2);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_load_4)(bm1, i);
+ DRD_(bm_clear_load)(bm1, i, i+4);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_load_8)(bm1, i);
+ DRD_(bm_clear_load)(bm1, i, i+8);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_range_store)(bm1, i, j);
+ DRD_(bm_clear_store)(bm1, i, j);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_store_1)(bm1, i);
+ DRD_(bm_clear_store)(bm1, i, i + 1);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_store_2)(bm1, i);
+ DRD_(bm_clear_store)(bm1, i, i + 2);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_store_4)(bm1, i);
+ DRD_(bm_clear_store)(bm1, i, i + 4);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_access_store_8)(bm1, i);
+ DRD_(bm_clear_store)(bm1, i, i + 8);
+ assert(DRD_(bm_equal)(bm1, bm2));
}
}
- bm_access_range_load(bm1, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
- bm_access_range_store(bm1, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
- bm_access_range_load(bm2, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
- bm_access_range_store(bm2, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
+ DRD_(bm_access_range_load)(bm1, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
+ DRD_(bm_access_range_store)(bm1, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
+ DRD_(bm_access_range_load)(bm2, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
+ DRD_(bm_access_range_store)(bm2, 0, 2 * ADDR0_COUNT + 2 * BITS_PER_UWORD);
for (i = ADDR0_COUNT - 2 * BITS_PER_UWORD;
i < ADDR0_COUNT + 2 * BITS_PER_UWORD;
i += outer_loop_step)
{
for (j = i + 1; j < ADDR0_COUNT + 2 * BITS_PER_UWORD; j += inner_loop_step)
{
- bm_clear_load(bm1, i, j);
- bm_access_range_load(bm1, i, j);
- assert(bm_equal(bm1, bm2));
- bm_clear_load(bm1, i, i+1);
- bm_access_load_1(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_load(bm1, i, i+2);
- bm_access_load_2(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_load(bm1, i, i+4);
- bm_access_load_4(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_load(bm1, i, i+8);
- bm_access_load_8(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_store(bm1, i, j);
- bm_access_range_store(bm1, i, j);
- assert(bm_equal(bm1, bm2));
- bm_clear_store(bm1, i, i+1);
- bm_access_store_1(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_store(bm1, i, i+2);
- bm_access_store_2(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_store(bm1, i, i+4);
- bm_access_store_4(bm1, i);
- assert(bm_equal(bm1, bm2));
- bm_clear_store(bm1, i, i+8);
- bm_access_store_8(bm1, i);
- assert(bm_equal(bm1, bm2));
+ DRD_(bm_clear_load)(bm1, i, j);
+ DRD_(bm_access_range_load)(bm1, i, j);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_load)(bm1, i, i+1);
+ DRD_(bm_access_load_1)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_load)(bm1, i, i+2);
+ DRD_(bm_access_load_2)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_load)(bm1, i, i+4);
+ DRD_(bm_access_load_4)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_load)(bm1, i, i+8);
+ DRD_(bm_access_load_8)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_store)(bm1, i, j);
+ DRD_(bm_access_range_store)(bm1, i, j);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_store)(bm1, i, i+1);
+ DRD_(bm_access_store_1)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_store)(bm1, i, i+2);
+ DRD_(bm_access_store_2)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_store)(bm1, i, i+4);
+ DRD_(bm_access_store_4)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
+ DRD_(bm_clear_store)(bm1, i, i+8);
+ DRD_(bm_access_store_8)(bm1, i);
+ assert(DRD_(bm_equal)(bm1, bm2));
}
}
- bm_delete(bm2);
- bm_delete(bm1);
+ DRD_(bm_delete)(bm2);
+ DRD_(bm_delete)(bm1);
}
int main(int argc, char** argv)
From: <sv...@va...> - 2009-02-15 15:59:28
Author: bart
Date: 2009-02-15 15:59:20 +0000 (Sun, 15 Feb 2009)
New Revision: 9172
Log:
Wrapped the DRD_() macro around yet more function names.
Modified:
trunk/drd/drd_bitmap.c
trunk/drd/drd_load_store.c
trunk/drd/drd_load_store.h
trunk/drd/drd_main.c
trunk/drd/drd_segment.c
trunk/drd/drd_suppression.c
trunk/drd/drd_thread.c
trunk/drd/drd_thread_bitmap.h
trunk/drd/pub_drd_bitmap.h
Modified: trunk/drd/drd_bitmap.c
===================================================================
--- trunk/drd/drd_bitmap.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_bitmap.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -47,14 +47,14 @@
const struct bitmap2* const bm2r);
-/* Local constants. */
+/* Local variables. */
static ULong s_bitmap_creation_count;
/* Function definitions. */
-struct bitmap* bm_new()
+struct bitmap* DRD_(bm_new)()
{
unsigned i;
struct bitmap* bm;
@@ -81,7 +81,7 @@
return bm;
}
-void bm_delete(struct bitmap* const bm)
+void DRD_(bm_delete)(struct bitmap* const bm)
{
struct bitmap2* bm2;
struct bitmap2ref* bm2ref;
@@ -107,9 +107,9 @@
* Record an access of type access_type at addresses a .. a + size - 1 in
* bitmap bm.
*/
-void bm_access_range(struct bitmap* const bm,
- const Addr a1, const Addr a2,
- const BmAccessTypeT access_type)
+void DRD_(bm_access_range)(struct bitmap* const bm,
+ const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type)
{
Addr b, b_next;
@@ -171,34 +171,34 @@
}
}
-void bm_access_range_load(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+void DRD_(bm_access_range_load)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
- bm_access_range(bm, a1, a2, eLoad);
+ DRD_(bm_access_range)(bm, a1, a2, eLoad);
}
-void bm_access_load_1(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_load_1)(struct bitmap* const bm, const Addr a1)
{
bm_access_aligned_load(bm, a1, 1);
}
-void bm_access_load_2(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_load_2)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 1) == 0)
bm_access_aligned_load(bm, a1, 2);
else
- bm_access_range(bm, a1, a1 + 2, eLoad);
+ DRD_(bm_access_range)(bm, a1, a1 + 2, eLoad);
}
-void bm_access_load_4(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_load_4)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 3) == 0)
bm_access_aligned_load(bm, a1, 4);
else
- bm_access_range(bm, a1, a1 + 4, eLoad);
+ DRD_(bm_access_range)(bm, a1, a1 + 4, eLoad);
}
-void bm_access_load_8(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_load_8)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 7) == 0)
bm_access_aligned_load(bm, a1, 8);
@@ -208,37 +208,37 @@
bm_access_aligned_load(bm, a1 + 4, 4);
}
else
- bm_access_range(bm, a1, a1 + 8, eLoad);
+ DRD_(bm_access_range)(bm, a1, a1 + 8, eLoad);
}
-void bm_access_range_store(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+void DRD_(bm_access_range_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
- bm_access_range(bm, a1, a2, eStore);
+ DRD_(bm_access_range)(bm, a1, a2, eStore);
}
-void bm_access_store_1(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_store_1)(struct bitmap* const bm, const Addr a1)
{
bm_access_aligned_store(bm, a1, 1);
}
-void bm_access_store_2(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_store_2)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 1) == 0)
bm_access_aligned_store(bm, a1, 2);
else
- bm_access_range(bm, a1, a1 + 2, eStore);
+ DRD_(bm_access_range)(bm, a1, a1 + 2, eStore);
}
-void bm_access_store_4(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_store_4)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 3) == 0)
bm_access_aligned_store(bm, a1, 4);
else
- bm_access_range(bm, a1, a1 + 4, eStore);
+ DRD_(bm_access_range)(bm, a1, a1 + 4, eStore);
}
-void bm_access_store_8(struct bitmap* const bm, const Addr a1)
+void DRD_(bm_access_store_8)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 7) == 0)
bm_access_aligned_store(bm, a1, 8);
@@ -248,16 +248,16 @@
bm_access_aligned_store(bm, a1 + 4, 4);
}
else
- bm_access_range(bm, a1, a1 + 8, eStore);
+ DRD_(bm_access_range)(bm, a1, a1 + 8, eStore);
}
-Bool bm_has(struct bitmap* const bm, const Addr a1, const Addr a2,
- const BmAccessTypeT access_type)
+Bool DRD_(bm_has)(struct bitmap* const bm, const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type)
{
Addr b;
for (b = a1; b < a2; b++)
{
- if (! bm_has_1(bm, b, access_type))
+ if (! DRD_(bm_has_1)(bm, b, access_type))
{
return False;
}
@@ -265,7 +265,8 @@
return True;
}
-Bool bm_has_any_load(struct bitmap* const bm, const Addr a1, const Addr a2)
+Bool
+DRD_(bm_has_any_load)(struct bitmap* const bm, const Addr a1, const Addr a2)
{
Addr b, b_next;
@@ -317,8 +318,8 @@
return 0;
}
-Bool bm_has_any_store(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+Bool DRD_(bm_has_any_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
Addr b, b_next;
@@ -372,8 +373,8 @@
/* Return True if there is a read access, write access or both */
/* to any of the addresses in the range [ a1, a2 [ in bitmap bm. */
-Bool bm_has_any_access(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+Bool DRD_(bm_has_any_access)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
Addr b, b_next;
@@ -425,11 +426,12 @@
return False;
}
-/** Report whether an access of type access_type at address a is recorded in
- * bitmap bm.
+/**
+ * Report whether an access of type access_type at address a is recorded in
+ * bitmap bm.
*/
-Bool bm_has_1(struct bitmap* const bm,
- const Addr a, const BmAccessTypeT access_type)
+Bool DRD_(bm_has_1)(struct bitmap* const bm,
+ const Addr a, const BmAccessTypeT access_type)
{
const struct bitmap2* p2;
const struct bitmap1* p1;
@@ -448,9 +450,7 @@
return False;
}
-void bm_clear(struct bitmap* const bm,
- const Addr a1,
- const Addr a2)
+void DRD_(bm_clear)(struct bitmap* const bm, const Addr a1, const Addr a2)
{
Addr b, b_next;
@@ -507,11 +507,11 @@
}
}
-/** Clear all references to loads in bitmap bm starting at address a1 and
- * up to but not including address a2.
+/**
+ * Clear all references to loads in bitmap bm starting at address a1 and
+ * up to but not including address a2.
*/
-void bm_clear_load(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+void DRD_(bm_clear_load)(struct bitmap* const bm, const Addr a1, const Addr a2)
{
Addr a;
@@ -525,11 +525,12 @@
}
}
-/** Clear all references to stores in bitmap bm starting at address a1 and
- * up to but not including address a2.
+/**
+ * Clear all references to stores in bitmap bm starting at address a1 and
+ * up to but not including address a2.
*/
-void bm_clear_store(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+void DRD_(bm_clear_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
Addr a;
@@ -543,23 +544,24 @@
}
}
-/** Clear bitmap bm starting at address a1 and up to but not including address
- * a2. Return True if and only if any of the addresses was set before
- * clearing.
+/**
+ * Clear bitmap bm starting at address a1 and up to but not including address
+ * a2. Return True if and only if any of the addresses was set before
+ * clearing.
*/
-Bool bm_test_and_clear(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+Bool DRD_(bm_test_and_clear)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
Bool result;
- result = bm_has_any_access(bm, a1, a2) != 0;
- bm_clear(bm, a1, a2);
+ result = DRD_(bm_has_any_access)(bm, a1, a2) != 0;
+ DRD_(bm_clear)(bm, a1, a2);
return result;
}
-Bool bm_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2,
- const BmAccessTypeT access_type)
+Bool DRD_(bm_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type)
{
Addr b, b_next;
@@ -623,80 +625,81 @@
return False;
}
-Bool bm_load_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+Bool DRD_(bm_load_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
- return bm_has_conflict_with(bm, a1, a2, eLoad);
+ return DRD_(bm_has_conflict_with)(bm, a1, a2, eLoad);
}
-Bool bm_load_1_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_load_1_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
return bm_aligned_load_has_conflict_with(bm, a1, 1);
}
-Bool bm_load_2_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_load_2_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 1) == 0)
return bm_aligned_load_has_conflict_with(bm, a1, 2);
else
- return bm_has_conflict_with(bm, a1, a1 + 2, eLoad);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 2, eLoad);
}
-Bool bm_load_4_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_load_4_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 3) == 0)
return bm_aligned_load_has_conflict_with(bm, a1, 4);
else
- return bm_has_conflict_with(bm, a1, a1 + 4, eLoad);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 4, eLoad);
}
-Bool bm_load_8_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_load_8_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 7) == 0)
return bm_aligned_load_has_conflict_with(bm, a1, 8);
else
- return bm_has_conflict_with(bm, a1, a1 + 8, eLoad);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 8, eLoad);
}
-Bool bm_store_1_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_store_1_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
return bm_aligned_store_has_conflict_with(bm, a1, 1);
}
-Bool bm_store_2_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_store_2_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 1) == 0)
return bm_aligned_store_has_conflict_with(bm, a1, 2);
else
- return bm_has_conflict_with(bm, a1, a1 + 2, eStore);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 2, eStore);
}
-Bool bm_store_4_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_store_4_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 3) == 0)
return bm_aligned_store_has_conflict_with(bm, a1, 4);
else
- return bm_has_conflict_with(bm, a1, a1 + 4, eStore);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 4, eStore);
}
-Bool bm_store_8_has_conflict_with(struct bitmap* const bm, const Addr a1)
+Bool DRD_(bm_store_8_has_conflict_with)(struct bitmap* const bm, const Addr a1)
{
if ((a1 & 7) == 0)
return bm_aligned_store_has_conflict_with(bm, a1, 8);
else
- return bm_has_conflict_with(bm, a1, a1 + 8, eStore);
+ return DRD_(bm_has_conflict_with)(bm, a1, a1 + 8, eStore);
}
-Bool bm_store_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2)
+Bool DRD_(bm_store_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2)
{
- return bm_has_conflict_with(bm, a1, a2, eStore);
+ return DRD_(bm_has_conflict_with)(bm, a1, a2, eStore);
}
-/** Return True if the two bitmaps *lhs and *rhs are identical, and false
- * if not.
+/**
+ * Return True if the two bitmaps *lhs and *rhs are identical, and false
+ * if not.
*/
-Bool bm_equal(struct bitmap* const lhs, struct bitmap* const rhs)
+Bool DRD_(bm_equal)(struct bitmap* const lhs, struct bitmap* const rhs)
{
struct bitmap2* bm2l;
struct bitmap2ref* bm2l_ref;
@@ -715,7 +718,7 @@
while (bm2l_ref
&& (bm2l = bm2l_ref->bm2)
&& bm2l
- && ! bm_has_any_access(lhs,
+ && ! DRD_(bm_has_any_access)(lhs,
bm2l->addr << ADDR0_BITS,
(bm2l->addr + 1) << ADDR0_BITS))
{
@@ -738,7 +741,7 @@
}
bm2r = bm2r_ref->bm2;
tl_assert(bm2r);
- tl_assert(bm_has_any_access(rhs,
+ tl_assert(DRD_(bm_has_any_access)(rhs,
bm2r->addr << ADDR0_BITS,
(bm2r->addr + 1) << ADDR0_BITS));
@@ -756,7 +759,7 @@
bm2r = VG_(OSetGen_Next)(rhs->oset);
if (bm2r)
{
- tl_assert(bm_has_any_access(rhs,
+ tl_assert(DRD_(bm_has_any_access)(rhs,
bm2r->addr << ADDR0_BITS,
(bm2r->addr + 1) << ADDR0_BITS));
#if 0
@@ -769,7 +772,7 @@
return True;
}
-void bm_swap(struct bitmap* const bm1, struct bitmap* const bm2)
+void DRD_(bm_swap)(struct bitmap* const bm1, struct bitmap* const bm2)
{
OSet* const tmp = bm1->oset;
bm1->oset = bm2->oset;
@@ -777,8 +780,8 @@
}
/** Merge bitmaps *lhs and *rhs into *lhs. */
-void bm_merge2(struct bitmap* const lhs,
- struct bitmap* const rhs)
+void DRD_(bm_merge2)(struct bitmap* const lhs,
+ struct bitmap* const rhs)
{
struct bitmap2* bm2l;
struct bitmap2ref* bm2l_ref;
@@ -814,8 +817,7 @@
* @param rhs Bitmap to be compared with lhs.
* @return !=0 if there are data races, == 0 if there are none.
*/
-int bm_has_races(struct bitmap* const lhs,
- struct bitmap* const rhs)
+int DRD_(bm_has_races)(struct bitmap* const lhs, struct bitmap* const rhs)
{
VG_(OSetGen_ResetIter)(lhs->oset);
VG_(OSetGen_ResetIter)(rhs->oset);
@@ -868,7 +870,7 @@
return 0;
}
-void bm_print(struct bitmap* const bm)
+void DRD_(bm_print)(struct bitmap* const bm)
{
struct bitmap2* bm2;
struct bitmap2ref* bm2ref;
@@ -898,17 +900,17 @@
}
}
-ULong bm_get_bitmap_creation_count(void)
+ULong DRD_(bm_get_bitmap_creation_count)(void)
{
return s_bitmap_creation_count;
}
-ULong bm_get_bitmap2_node_creation_count(void)
+ULong DRD_(bm_get_bitmap2_node_creation_count)(void)
{
return s_bitmap2_node_creation_count;
}
-ULong bm_get_bitmap2_creation_count(void)
+ULong DRD_(bm_get_bitmap2_creation_count)(void)
{
return s_bitmap2_creation_count;
}
@@ -933,9 +935,8 @@
* @param a1 client address shifted right by ADDR0_BITS.
* @param bm bitmap pointer.
*/
-static
-struct bitmap2* bm2_make_exclusive(struct bitmap* const bm,
- struct bitmap2ref* const bm2ref)
+static struct bitmap2* bm2_make_exclusive(struct bitmap* const bm,
+ struct bitmap2ref* const bm2ref)
{
UWord a1;
struct bitmap2* bm2;
Modified: trunk/drd/drd_load_store.c
===================================================================
--- trunk/drd/drd_load_store.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_load_store.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -125,7 +125,7 @@
&drei);
}
-VG_REGPARM(2) void drd_trace_load(Addr addr, SizeT size)
+VG_REGPARM(2) void DRD_(trace_load)(Addr addr, SizeT size)
{
#ifdef ENABLE_DRD_CONSISTENCY_CHECKS
/* The assert below has been commented out because of performance reasons.*/
@@ -191,7 +191,7 @@
}
}
-VG_REGPARM(2) void drd_trace_store(Addr addr, SizeT size)
+VG_REGPARM(2) void DRD_(trace_store)(Addr addr, SizeT size)
{
#ifdef ENABLE_DRD_CONSISTENCY_CHECKS
/* The assert below has been commented out because of performance reasons.*/
@@ -347,7 +347,7 @@
argv = mkIRExprVec_2(addr_expr, size_expr);
di = unsafeIRDirty_0_N(/*regparms*/2,
"drd_trace_load",
- VG_(fnptr_to_fnentry)(drd_trace_load),
+ VG_(fnptr_to_fnentry)(DRD_(trace_load)),
argv);
break;
}
@@ -412,7 +412,7 @@
argv = mkIRExprVec_2(addr_expr, size_expr);
di = unsafeIRDirty_0_N(/*regparms*/2,
"drd_trace_store",
- VG_(fnptr_to_fnentry)(drd_trace_store),
+ VG_(fnptr_to_fnentry)(DRD_(trace_store)),
argv);
break;
}
@@ -524,7 +524,7 @@
di = unsafeIRDirty_0_N(
/*regparms*/2,
"drd_trace_load",
- VG_(fnptr_to_fnentry)(drd_trace_load),
+ VG_(fnptr_to_fnentry)(DRD_(trace_load)),
argv);
addStmtToIRSB(bb, IRStmt_Dirty(di));
}
@@ -534,7 +534,7 @@
di = unsafeIRDirty_0_N(
/*regparms*/2,
"drd_trace_store",
- VG_(fnptr_to_fnentry)(drd_trace_store),
+ VG_(fnptr_to_fnentry)(DRD_(trace_store)),
argv);
addStmtToIRSB(bb, IRStmt_Dirty(di));
}
Modified: trunk/drd/drd_load_store.h
===================================================================
--- trunk/drd/drd_load_store.h 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_load_store.h 2009-02-15 15:59:20 UTC (rev 9172)
@@ -45,8 +45,8 @@
IRType const hWordTy);
void DRD_(trace_mem_access)(const Addr addr, const SizeT size,
const BmAccessTypeT access_type);
-VG_REGPARM(2) void drd_trace_load(Addr addr, SizeT size);
-VG_REGPARM(2) void drd_trace_store(Addr addr, SizeT size);
+VG_REGPARM(2) void DRD_(trace_load)(Addr addr, SizeT size);
+VG_REGPARM(2) void DRD_(trace_store)(Addr addr, SizeT size);
#endif // __DRD_LOAD_STORE_H
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_main.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -222,7 +222,7 @@
{
if (size > 0)
{
- drd_trace_load(a, size);
+ DRD_(trace_load)(a, size);
}
}
@@ -245,7 +245,7 @@
tl_assert(size < 4096);
if (size > 0)
{
- drd_trace_load(a, size);
+ DRD_(trace_load)(a, size);
}
}
@@ -257,7 +257,7 @@
DRD_(thread_set_vg_running_tid)(VG_(get_running_tid)());
if (size > 0)
{
- drd_trace_store(a, size);
+ DRD_(trace_store)(a, size);
}
}
@@ -560,11 +560,11 @@
DRD_(get_barrier_segment_creation_count)());
VG_(message)(Vg_UserMsg,
" bitmaps: %lld level 1 / %lld level 2 bitmap refs",
- bm_get_bitmap_creation_count(),
- bm_get_bitmap2_node_creation_count());
+ DRD_(bm_get_bitmap_creation_count)(),
+ DRD_(bm_get_bitmap2_node_creation_count)());
VG_(message)(Vg_UserMsg,
" and %lld level 2 bitmaps were allocated.",
- bm_get_bitmap2_creation_count());
+ DRD_(bm_get_bitmap2_creation_count)());
VG_(message)(Vg_UserMsg,
" mutex: %lld non-recursive lock/unlock events.",
DRD_(get_mutex_lock_count)());
Modified: trunk/drd/drd_segment.c
===================================================================
--- trunk/drd/drd_segment.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_segment.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -79,7 +79,7 @@
else
DRD_(vc_init)(&sg->vc, 0, 0);
DRD_(vc_increment)(&sg->vc, created);
- sg->bm = bm_new();
+ sg->bm = DRD_(bm_new)();
if (DRD_(s_trace_segment))
{
@@ -103,7 +103,7 @@
tl_assert(sg->refcnt == 0);
DRD_(vc_cleanup)(&sg->vc);
- bm_delete(sg->bm);
+ DRD_(bm_delete)(sg->bm);
sg->bm = 0;
}
@@ -214,7 +214,7 @@
// Keep sg1->stacktrace.
// Keep sg1->vc.
// Merge sg2->bm into sg1->bm.
- bm_merge2(sg1->bm, sg2->bm);
+ DRD_(bm_merge2)(sg1->bm, sg2->bm);
}
/** Print the vector clock and the bitmap of the specified segment. */
@@ -224,7 +224,7 @@
VG_(printf)("vc: ");
DRD_(vc_print)(&sg->vc);
VG_(printf)("\n");
- bm_print(sg->bm);
+ DRD_(bm_print)(sg->bm);
}
/** Query whether segment tracing has been enabled. */
Modified: trunk/drd/drd_suppression.c
===================================================================
--- trunk/drd/drd_suppression.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_suppression.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -52,7 +52,7 @@
void DRD_(suppression_init)(void)
{
tl_assert(DRD_(s_suppressed) == 0);
- DRD_(s_suppressed) = bm_new();
+ DRD_(s_suppressed) = DRD_(bm_new)();
tl_assert(DRD_(s_suppressed));
}
@@ -67,7 +67,7 @@
tl_assert(a1 < a2);
// tl_assert(! drd_is_any_suppressed(a1, a2));
- bm_access_range_store(DRD_(s_suppressed), a1, a2);
+ DRD_(bm_access_range_store)(DRD_(s_suppressed), a1, a2);
}
void DRD_(finish_suppression)(const Addr a1, const Addr a2)
@@ -86,7 +86,7 @@
VG_(get_and_pp_StackTrace)(VG_(get_running_tid)(), 12);
tl_assert(False);
}
- bm_clear_store(DRD_(s_suppressed), a1, a2);
+ DRD_(bm_clear_store)(DRD_(s_suppressed), a1, a2);
}
/**
@@ -96,7 +96,7 @@
*/
Bool DRD_(is_suppressed)(const Addr a1, const Addr a2)
{
- return bm_has(DRD_(s_suppressed), a1, a2, eStore);
+ return DRD_(bm_has)(DRD_(s_suppressed), a1, a2, eStore);
}
/**
@@ -106,14 +106,14 @@
*/
Bool DRD_(is_any_suppressed)(const Addr a1, const Addr a2)
{
- return bm_has_any_store(DRD_(s_suppressed), a1, a2);
+ return DRD_(bm_has_any_store)(DRD_(s_suppressed), a1, a2);
}
void DRD_(start_tracing_address_range)(const Addr a1, const Addr a2)
{
tl_assert(a1 < a2);
- bm_access_range_load(DRD_(s_suppressed), a1, a2);
+ DRD_(bm_access_range_load)(DRD_(s_suppressed), a1, a2);
if (! DRD_(g_any_address_traced))
{
DRD_(g_any_address_traced) = True;
@@ -124,17 +124,17 @@
{
tl_assert(a1 < a2);
- bm_clear_load(DRD_(s_suppressed), a1, a2);
+ DRD_(bm_clear_load)(DRD_(s_suppressed), a1, a2);
if (DRD_(g_any_address_traced))
{
DRD_(g_any_address_traced)
- = bm_has_any_load(DRD_(s_suppressed), 0, ~(Addr)0);
+ = DRD_(bm_has_any_load)(DRD_(s_suppressed), 0, ~(Addr)0);
}
}
Bool DRD_(is_any_traced)(const Addr a1, const Addr a2)
{
- return bm_has_any_load(DRD_(s_suppressed), a1, a2);
+ return DRD_(bm_has_any_load)(DRD_(s_suppressed), a1, a2);
}
void DRD_(suppression_stop_using_mem)(const Addr a1, const Addr a2)
@@ -144,7 +144,7 @@
Addr b;
for (b = a1; b < a2; b++)
{
- if (bm_has_1(DRD_(s_suppressed), b, eStore))
+ if (DRD_(bm_has_1)(DRD_(s_suppressed), b, eStore))
{
VG_(message)(Vg_DebugMsg,
"stop_using_mem(0x%lx, %ld) finish suppression of 0x%lx",
@@ -154,5 +154,5 @@
}
tl_assert(a1);
tl_assert(a1 < a2);
- bm_clear(DRD_(s_suppressed), a1, a2);
+ DRD_(bm_clear)(DRD_(s_suppressed), a1, a2);
}
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_thread.c 2009-02-15 15:59:20 UTC (rev 9172)
@@ -867,13 +867,13 @@
if (other_user == DRD_INVALID_THREADID
&& i != DRD_(g_drd_running_tid))
{
- if (UNLIKELY(bm_test_and_clear(p->bm, a1, a2)))
+ if (UNLIKELY(DRD_(bm_test_and_clear)(p->bm, a1, a2)))
{
other_user = i;
}
continue;
}
- bm_clear(p->bm, a1, a2);
+ DRD_(bm_clear)(p->bm, a1, a2);
}
}
@@ -882,7 +882,7 @@
* conflict set.
*/
if (other_user != DRD_INVALID_THREADID
- && bm_has_any_access(DRD_(g_conflict_set), a1, a2))
+ && DRD_(bm_has_any_access)(DRD_(g_conflict_set), a1, a2))
{
DRD_(thread_compute_conflict_set)(&DRD_(g_conflict_set),
DRD_(thread_get_running_tid)());
@@ -989,7 +989,8 @@
break;
if (! DRD_(vc_lte)(&p->vc, &q->vc))
{
- if (bm_has_conflict_with(q->bm, addr, addr + size, access_type))
+ if (DRD_(bm_has_conflict_with)(q->bm, addr, addr + size,
+ access_type))
{
tl_assert(q->stacktrace);
show_call_stack(i, "Other segment start",
@@ -1015,7 +1016,7 @@
for (p = DRD_(g_threadinfo)[tid].first; p; p = p->next)
{
- if (bm_has(p->bm, addr, addr + size, access_type))
+ if (DRD_(bm_has)(p->bm, addr, addr + size, access_type))
{
thread_report_conflicting_segments_segment(tid, addr, size,
access_type, p);
@@ -1042,8 +1043,8 @@
return True;
DRD_(thread_compute_conflict_set)(&computed_conflict_set, tid);
- result = bm_equal(DRD_(g_conflict_set), computed_conflict_set);
- bm_delete(computed_conflict_set);
+ result = DRD_(bm_equal)(DRD_(g_conflict_set), computed_conflict_set);
+ DRD_(bm_delete)(computed_conflict_set);
return result;
}
@@ -1061,14 +1062,14 @@
tl_assert(tid == DRD_(g_drd_running_tid));
DRD_(s_update_conflict_set_count)++;
- DRD_(s_conflict_set_bitmap_creation_count) -= bm_get_bitmap_creation_count();
- DRD_(s_conflict_set_bitmap2_creation_count) -= bm_get_bitmap2_creation_count();
+ DRD_(s_conflict_set_bitmap_creation_count) -= DRD_(bm_get_bitmap_creation_count)();
+ DRD_(s_conflict_set_bitmap2_creation_count) -= DRD_(bm_get_bitmap2_creation_count)();
if (*conflict_set)
{
- bm_delete(*conflict_set);
+ DRD_(bm_delete)(*conflict_set);
}
- *conflict_set = bm_new();
+ *conflict_set = DRD_(bm_new)();
if (DRD_(s_trace_conflict_set))
{
@@ -1119,7 +1120,7 @@
&q->vc);
VG_(message)(Vg_UserMsg, "%s", msg);
}
- bm_merge2(*conflict_set, q->bm);
+ DRD_(bm_merge2)(*conflict_set, q->bm);
}
else
{
@@ -1139,13 +1140,13 @@
}
}
- DRD_(s_conflict_set_bitmap_creation_count) += bm_get_bitmap_creation_count();
- DRD_(s_conflict_set_bitmap2_creation_count) += bm_get_bitmap2_creation_count();
+ DRD_(s_conflict_set_bitmap_creation_count) += DRD_(bm_get_bitmap_creation_count)();
+ DRD_(s_conflict_set_bitmap2_creation_count) += DRD_(bm_get_bitmap2_creation_count)();
if (0 && DRD_(s_trace_conflict_set))
{
VG_(message)(Vg_UserMsg, "[%d] new conflict set:", tid);
- bm_print(*conflict_set);
+ DRD_(bm_print)(*conflict_set);
VG_(message)(Vg_UserMsg, "[%d] end of new conflict set.", tid);
}
}
Modified: trunk/drd/drd_thread_bitmap.h
===================================================================
--- trunk/drd/drd_thread_bitmap.h 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/drd_thread_bitmap.h 2009-02-15 15:59:20 UTC (rev 9172)
@@ -35,8 +35,8 @@
static __inline__
Bool bm_access_load_1_triggers_conflict(const Addr a1)
{
- bm_access_load_1(DRD_(running_thread_get_segment)()->bm, a1);
- return bm_load_1_has_conflict_with(DRD_(thread_get_conflict_set)(), a1);
+ DRD_(bm_access_load_1)(DRD_(running_thread_get_segment)()->bm, a1);
+ return DRD_(bm_load_1_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1);
}
static __inline__
@@ -49,8 +49,8 @@
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 2, eLoad);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 2, eLoad);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 2, eLoad);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 2, eLoad);
}
}
@@ -64,8 +64,8 @@
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 4, eLoad);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 4, eLoad);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 4, eLoad);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 4, eLoad);
}
}
@@ -81,27 +81,27 @@
{
bm_access_aligned_load(DRD_(running_thread_get_segment)()->bm, a1 + 0, 4);
bm_access_aligned_load(DRD_(running_thread_get_segment)()->bm, a1 + 4, 4);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eLoad);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eLoad);
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 8, eLoad);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eLoad);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 8, eLoad);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eLoad);
}
}
static __inline__
Bool bm_access_load_triggers_conflict(const Addr a1, const Addr a2)
{
- bm_access_range_load(DRD_(running_thread_get_segment)()->bm, a1, a2);
- return bm_load_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a2);
+ DRD_(bm_access_range_load)(DRD_(running_thread_get_segment)()->bm, a1, a2);
+ return DRD_(bm_load_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a2);
}
static __inline__
Bool bm_access_store_1_triggers_conflict(const Addr a1)
{
- bm_access_store_1(DRD_(running_thread_get_segment)()->bm, a1);
- return bm_store_1_has_conflict_with(DRD_(thread_get_conflict_set)(), a1);
+ DRD_(bm_access_store_1)(DRD_(running_thread_get_segment)()->bm, a1);
+ return DRD_(bm_store_1_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1);
}
static __inline__
@@ -114,8 +114,8 @@
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 2, eStore);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 2, eStore);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 2, eStore);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 2, eStore);
}
}
@@ -129,8 +129,8 @@
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 4, eStore);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 4, eStore);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 4, eStore);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 4, eStore);
}
}
@@ -146,20 +146,20 @@
{
bm_access_aligned_store(DRD_(running_thread_get_segment)()->bm, a1 + 0, 4);
bm_access_aligned_store(DRD_(running_thread_get_segment)()->bm, a1 + 4, 4);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eStore);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eStore);
}
else
{
- bm_access_range(DRD_(running_thread_get_segment)()->bm, a1, a1 + 8, eStore);
- return bm_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eStore);
+ DRD_(bm_access_range)(DRD_(running_thread_get_segment)()->bm, a1, a1 + 8, eStore);
+ return DRD_(bm_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a1 + 8, eStore);
}
}
static __inline__
Bool bm_access_store_triggers_conflict(const Addr a1, const Addr a2)
{
- bm_access_range_store(DRD_(running_thread_get_segment)()->bm, a1, a2);
- return bm_store_has_conflict_with(DRD_(thread_get_conflict_set)(), a1, a2);
+ DRD_(bm_access_range_store)(DRD_(running_thread_get_segment)()->bm, a1, a2);
+ return DRD_(bm_store_has_conflict_with)(DRD_(thread_get_conflict_set)(), a1, a2);
}
#endif // __DRD_THREAD_BITMAP_H
Modified: trunk/drd/pub_drd_bitmap.h
===================================================================
--- trunk/drd/pub_drd_bitmap.h 2009-02-15 14:46:17 UTC (rev 9171)
+++ trunk/drd/pub_drd_bitmap.h 2009-02-15 15:59:20 UTC (rev 9172)
@@ -23,9 +23,11 @@
*/
-// A bitmap is a data structure that contains information about which
-// addresses have been accessed for reading or writing within a given
-// segment.
+/*
+ * A bitmap is a data structure that contains information about which
+ * addresses have been accessed for reading or writing within a given
+ * segment.
+ */
#ifndef __PUB_DRD_BITMAP_H
@@ -35,7 +37,7 @@
#include "pub_tool_basics.h" /* Addr, SizeT */
-// Constant definitions.
+/* Defines. */
#define LHS_R (1<<0)
#define LHS_W (1<<1)
@@ -45,81 +47,84 @@
|| (((a) & LHS_W) && ((a) & (RHS_R | RHS_W))))
-// Forward declarations.
+/* Forward declarations. */
+
struct bitmap;
-// Datatype definitions.
+/* Datatype definitions. */
+
typedef enum { eLoad, eStore, eStart, eEnd } BmAccessTypeT;
-// Function declarations.
-struct bitmap* bm_new(void);
-void bm_delete(struct bitmap* const bm);
-void bm_access_range(struct bitmap* const bm,
- const Addr a1, const Addr a2,
- const BmAccessTypeT access_type);
-void bm_access_range_load(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-void bm_access_load_1(struct bitmap* const bm, const Addr a1);
-void bm_access_load_2(struct bitmap* const bm, const Addr a1);
-void bm_access_load_4(struct bitmap* const bm, const Addr a1);
-void bm_access_load_8(struct bitmap* const bm, const Addr a1);
-void bm_access_range_store(struct bitmap* const bm,
+/* Function declarations. */
+
+struct bitmap* DRD_(bm_new)(void);
+void DRD_(bm_delete)(struct bitmap* const bm);
+void DRD_(bm_access_range)(struct bitmap* const bm,
+ const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type);
+void DRD_(bm_access_range_load)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+void DRD_(bm_access_load_1)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_load_2)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_load_4)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_load_8)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_range_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+void DRD_(bm_access_store_1)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_store_2)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_store_4)(struct bitmap* const bm, const Addr a1);
+void DRD_(bm_access_store_8)(struct bitmap* const bm, const Addr a1);
+Bool DRD_(bm_has)(struct bitmap* const bm,
+ const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type);
+Bool DRD_(bm_has_any_load)(struct bitmap* const bm,
const Addr a1, const Addr a2);
-void bm_access_store_1(struct bitmap* const bm, const Addr a1);
-void bm_access_store_2(struct bitmap* const bm, const Addr a1);
-void bm_access_store_4(struct bitmap* const bm, const Addr a1);
-void bm_access_store_8(struct bitmap* const bm, const Addr a1);
-Bool bm_has(struct bitmap* const bm,
- const Addr a1, const Addr a2,
- const BmAccessTypeT access_type);
-Bool bm_has_any_load(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_has_any_store(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_has_any_access(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_has_1(struct bitmap* const bm,
- const Addr address, const BmAccessTypeT access_type);
-void bm_clear(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-void bm_clear_load(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-void bm_clear_store(struct bitmap* const bm,
+Bool DRD_(bm_has_any_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_has_any_access)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_has_1)(struct bitmap* const bm,
+ const Addr address, const BmAccessTypeT access_type);
+void DRD_(bm_clear)(struct bitmap* const bm,
const Addr a1, const Addr a2);
-Bool bm_test_and_clear(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2,
- const BmAccessTypeT access_type);
-Bool bm_load_1_has_conflict_with(struct bitmap* const bm, const Addr a1);
-Bool bm_load_2_has_conflict_with(struct bitmap* const bm, const Addr a1);
-Bool bm_load_4_has_conflict_with(struct bitmap* const bm, const Addr a1);
-Bool bm_load_8_has_conflict_with(struct bitmap* const bm, const Addr a1);
-Bool bm_load_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_store_1_has_conflict_with(struct bitmap* const bm,const Addr a1);
-Bool bm_store_2_has_conflict_with(struct bitmap* const bm,const Addr a1);
-Bool bm_store_4_has_conflict_with(struct bitmap* const bm,const Addr a1);
-Bool bm_store_8_has_conflict_with(struct bitmap* const bm,const Addr a1);
-Bool bm_store_has_conflict_with(struct bitmap* const bm,
- const Addr a1, const Addr a2);
-Bool bm_equal(struct bitmap* const lhs, struct bitmap* const rhs);
-void bm_swap(struct bitmap* const bm1, struct bitmap* const bm2);
-void bm_merge2(struct bitmap* const lhs,
- struct bitmap* const rhs);
-int bm_has_races(struct bitmap* const bm1,
- struct bitmap* const bm2);
-void bm_report_races(ThreadId const tid1, ThreadId const tid2,
- struct bitmap* const bm1,
- struct bitmap* const bm2);
-void bm_print(struct bitmap* bm);
-ULong bm_get_bitmap_creation_count(void);
-ULong bm_get_bitmap2_node_creation_count(void);
-ULong bm_get_bitmap2_creation_count(void);
+void DRD_(bm_clear_load)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+void DRD_(bm_clear_store)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_test_and_clear)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2,
+ const BmAccessTypeT access_type);
+Bool DRD_(bm_load_1_has_conflict_with)(struct bitmap* const bm, const Addr a1);
+Bool DRD_(bm_load_2_has_conflict_with)(struct bitmap* const bm, const Addr a1);
+Bool DRD_(bm_load_4_has_conflict_with)(struct bitmap* const bm, const Addr a1);
+Bool DRD_(bm_load_8_has_conflict_with)(struct bitmap* const bm, const Addr a1);
+Bool DRD_(bm_load_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_store_1_has_conflict_with)(struct bitmap* const bm,const Addr a1);
+Bool DRD_(bm_store_2_has_conflict_with)(struct bitmap* const bm,const Addr a1);
+Bool DRD_(bm_store_4_has_conflict_with)(struct bitmap* const bm,const Addr a1);
+Bool DRD_(bm_store_8_has_conflict_with)(struct bitmap* const bm,const Addr a1);
+Bool DRD_(bm_store_has_conflict_with)(struct bitmap* const bm,
+ const Addr a1, const Addr a2);
+Bool DRD_(bm_equal)(struct bitmap* const lhs, struct bitmap* const rhs);
+void DRD_(bm_swap)(struct bitmap* const bm1, struct bitmap* const bm2);
+void DRD_(bm_merge2)(struct bitmap* const lhs,
+ struct bitmap* const rhs);
+int DRD_(bm_has_races)(struct bitmap* const bm1,
+ struct bitmap* const bm2);
+void DRD_(bm_report_races)(ThreadId const tid1, ThreadId const tid2,
+ struct bitmap* const bm1,
+ struct bitmap* const bm2);
+void DRD_(bm_print)(struct bitmap* bm);
+ULong DRD_(bm_get_bitmap_creation_count)(void);
+ULong DRD_(bm_get_bitmap2_node_creation_count)(void);
+ULong DRD_(bm_get_bitmap2_creation_count)(void);
-void bm_test(void);
+void DRD_(bm_test)(void);
#endif /* __PUB_DRD_BITMAP_H */
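The `pub_drd_bitmap.h` declarations above describe per-segment access bitmaps that record which bytes were loaded or stored, and conflict queries against them. The following is a minimal illustrative sketch of those semantics, reduced to a fixed address window; the real `DRD_(bm_*)()` functions use a two-level table over the whole address space, so names and layout here are assumptions for illustration only.

```c
#include <assert.h>
#include <string.h>

typedef unsigned long Addr;
typedef enum { eLoad, eStore } BmAccessTypeT;

/* Hypothetical fixed window standing in for the full address space. */
#define BASE   0x1000UL
#define WINDOW 256

struct bitmap {
    unsigned char load[WINDOW];   /* 1 if the byte was read    */
    unsigned char store[WINDOW];  /* 1 if the byte was written */
};

/* Record an access to every byte in [a1, a2[. */
static void bm_access_range(struct bitmap *bm, Addr a1, Addr a2,
                            BmAccessTypeT t)
{
    for (Addr a = a1; a < a2; a++)
        (t == eLoad ? bm->load : bm->store)[a - BASE] = 1;
}

/* Conflict rule: a load conflicts with a recorded store, and a store
 * conflicts with any recorded access (load or store). */
static int bm_has_conflict_with(const struct bitmap *bm, Addr a1, Addr a2,
                                BmAccessTypeT t)
{
    for (Addr a = a1; a < a2; a++) {
        unsigned i = (unsigned)(a - BASE);
        if (bm->store[i] || (t == eStore && bm->load[i]))
            return 1;
    }
    return 0;
}
```

Under this rule two concurrent loads of the same range never conflict, which is why the header distinguishes `bm_load_*` from `bm_store_*` query variants.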
From: <sv...@va...> - 2009-02-15 14:46:24
Author: bart
Date: 2009-02-15 14:46:17 +0000 (Sun, 15 Feb 2009)
New Revision: 9171
Log:
Wrapped DRD_() macro around even more function and variable names.
Modified:
trunk/drd/drd_error.c
trunk/drd/drd_error.h
trunk/drd/drd_main.c
trunk/drd/drd_malloc_wrappers.c
trunk/drd/drd_malloc_wrappers.h
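The log message describes wrapping identifiers in the `DRD_()` macro. The idea, sketched below, is token pasting: the macro prefixes every tool-internal name so it cannot collide with symbols from other Valgrind components or the client program. The `vgDrd_` prefix matches what DRD's `drd_basics.h` uses; the stub function itself is purely illustrative.

```c
#include <assert.h>

/* Token-pasting helper and the namespace-prefix macro built on it. */
#define VGAPPEND(str1, str2) str1##str2
#define DRD_(str) VGAPPEND(vgDrd_, str)

/* This definition actually declares a function named vgDrd_bm_new_stub,
 * yet every call site can keep writing DRD_(bm_new_stub)(). */
static int DRD_(bm_new_stub)(void) { return 42; }
```

Because the prefixed and unprefixed spellings denote the same symbol, a commit like this one can rename hundreds of functions mechanically without touching any call semantics.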
Modified: trunk/drd/drd_error.c
===================================================================
--- trunk/drd/drd_error.c 2009-02-15 14:18:02 UTC (rev 9170)
+++ trunk/drd/drd_error.c 2009-02-15 14:46:17 UTC (rev 9171)
@@ -43,23 +43,25 @@
/* Local variables. */
-static Bool s_drd_show_conflicting_segments = True;
+static Bool DRD_(s_show_conflicting_segments) = True;
-void set_show_conflicting_segments(const Bool scs)
+void DRD_(set_show_conflicting_segments)(const Bool scs)
{
- s_drd_show_conflicting_segments = scs;
+ DRD_(s_show_conflicting_segments) = scs;
}
-/** Describe a data address range [a,a+len[ as good as possible, for error
- * messages, putting the result in ai.
+/**
+ * Describe a data address range [a,a+len[ as good as possible, for error
+ * messages, putting the result in ai.
*/
static
-void describe_malloced_addr(Addr const a, SizeT const len, AddrInfo* const ai)
+void DRD_(describe_malloced_addr)(Addr const a, SizeT const len,
+ AddrInfo* const ai)
{
Addr data;
- if (drd_heap_addrinfo(a, &data, &ai->size, &ai->lastchange))
+ if (DRD_(heap_addrinfo)(a, &data, &ai->size, &ai->lastchange))
{
ai->akind = eMallocd;
ai->rwoffset = a - data;
@@ -70,11 +72,12 @@
}
}
-/** Report where an object has been observed for the first time. The printed
- * call stack will either refer to a pthread_*_init() or a pthread_*lock()
- * call.
+/**
+ * Report where an object has been observed for the first time. The printed
+ * call stack will either refer to a pthread_*_init() or a pthread_*lock()
+ * call.
*/
-static void first_observed(const Addr obj)
+static void DRD_(first_observed)(const Addr obj)
{
DrdClientobj* cl;
@@ -91,7 +94,8 @@
}
static
-void drd_report_data_race2(Error* const err, const DataRaceErrInfo* const dri)
+void DRD_(drd_report_data_race)(Error* const err,
+ const DataRaceErrInfo* const dri)
{
AddrInfo ai;
const unsigned descr_size = 256;
@@ -109,7 +113,7 @@
VG_(get_data_description)(descr1, descr2, descr_size, dri->addr);
if (descr1[0] == 0)
{
- describe_malloced_addr(dri->addr, dri->size, &ai);
+ DRD_(describe_malloced_addr)(dri->addr, dri->size, &ai);
}
VG_(message)(Vg_UserMsg,
"Conflicting %s by thread %d/%d at 0x%08lx size %ld",
@@ -150,7 +154,7 @@
VG_(message)(Vg_UserMsg, "Allocation context: unknown.");
}
}
- if (s_drd_show_conflicting_segments)
+ if (DRD_(s_show_conflicting_segments))
{
DRD_(thread_report_conflicting_segments)(dri->tid,
dri->addr, dri->size,
@@ -161,17 +165,17 @@
VG_(free)(descr1);
}
-static Bool drd_tool_error_eq(VgRes res, Error* e1, Error* e2)
+static Bool DRD_(drd_tool_error_eq)(VgRes res, Error* e1, Error* e2)
{
return False;
}
-static void drd_tool_error_pp(Error* const e)
+static void DRD_(drd_tool_error_pp)(Error* const e)
{
switch (VG_(get_error_kind)(e))
{
case DataRaceErr: {
- drd_report_data_race2(e, VG_(get_error_extra)(e));
+ DRD_(drd_report_data_race)(e, VG_(get_error_extra)(e));
break;
}
case MutexErr: {
@@ -193,7 +197,7 @@
p->mutex);
}
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(p->mutex);
+ DRD_(first_observed)(p->mutex);
break;
}
case CondErr: {
@@ -203,7 +207,7 @@
VG_(get_error_string)(e),
cdei->cond);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(cdei->cond);
+ DRD_(first_observed)(cdei->cond);
break;
}
case CondDestrErr: {
@@ -214,7 +218,7 @@
cdi->cond, cdi->mutex,
DRD_(DrdThreadIdToVgThreadId)(cdi->tid), cdi->tid);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(cdi->mutex);
+ DRD_(first_observed)(cdi->mutex);
break;
}
case CondRaceErr: {
@@ -225,8 +229,8 @@
" by the signalling thread.",
cei->cond, cei->mutex);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(cei->cond);
- first_observed(cei->mutex);
+ DRD_(first_observed)(cei->cond);
+ DRD_(first_observed)(cei->mutex);
break;
}
case CondWaitErr: {
@@ -238,9 +242,9 @@
cwei->mutex1,
cwei->mutex2);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(cwei->cond);
- first_observed(cwei->mutex1);
- first_observed(cwei->mutex2);
+ DRD_(first_observed)(cwei->cond);
+ DRD_(first_observed)(cwei->mutex1);
+ DRD_(first_observed)(cwei->mutex2);
break;
}
case SemaphoreErr: {
@@ -251,7 +255,7 @@
VG_(get_error_string)(e),
sei->semaphore);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(sei->semaphore);
+ DRD_(first_observed)(sei->semaphore);
break;
}
case BarrierErr: {
@@ -262,7 +266,7 @@
VG_(get_error_string)(e),
bei->barrier);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(bei->barrier);
+ DRD_(first_observed)(bei->barrier);
break;
}
case RwlockErr: {
@@ -273,7 +277,7 @@
VG_(get_error_string)(e),
p->rwlock);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(p->rwlock);
+ DRD_(first_observed)(p->rwlock);
break;
}
case HoldtimeErr: {
@@ -289,7 +293,7 @@
p->hold_time_ms,
p->threshold_ms);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
- first_observed(p->synchronization_object);
+ DRD_(first_observed)(p->synchronization_object);
break;
}
case GenericErr: {
@@ -307,7 +311,7 @@
}
}
-static UInt drd_tool_error_update_extra(Error* e)
+static UInt DRD_(drd_tool_error_update_extra)(Error* e)
{
switch (VG_(get_error_kind)(e))
{
@@ -339,7 +343,7 @@
}
}
-static Bool drd_tool_error_recog(Char* const name, Supp* const supp)
+static Bool DRD_(drd_tool_error_recog)(Char* const name, Supp* const supp)
{
SuppKind skind = 0;
@@ -372,12 +376,13 @@
return True;
}
-static Bool drd_tool_error_read_extra(Int fd, Char* buf, Int nBuf, Supp* supp)
+static
+Bool DRD_(drd_tool_error_read_extra)(Int fd, Char* buf, Int nBuf, Supp* supp)
{
return True;
}
-static Bool drd_tool_error_matches(Error* const e, Supp* const supp)
+static Bool DRD_(drd_tool_error_matches)(Error* const e, Supp* const supp)
{
switch (VG_(get_supp_kind)(supp))
{
@@ -385,7 +390,7 @@
return True;
}
-static Char* drd_tool_error_name(Error* e)
+static Char* DRD_(drd_tool_error_name)(Error* e)
{
switch (VG_(get_error_kind)(e))
{
@@ -406,24 +411,19 @@
return 0;
}
-static void drd_tool_error_print_extra(Error* e)
-{
- switch (VG_(get_error_kind)(e))
- {
- // VG_(printf)(" %s\n", VG_(get_error_string)(err));
- }
-}
+static void DRD_(drd_tool_error_print_extra)(Error* e)
+{ }
void DRD_(register_error_handlers)(void)
{
// Tool error reporting.
- VG_(needs_tool_errors)(drd_tool_error_eq,
- drd_tool_error_pp,
+ VG_(needs_tool_errors)(DRD_(drd_tool_error_eq),
+ DRD_(drd_tool_error_pp),
True,
- drd_tool_error_update_extra,
- drd_tool_error_recog,
- drd_tool_error_read_extra,
- drd_tool_error_matches,
- drd_tool_error_name,
- drd_tool_error_print_extra);
+ DRD_(drd_tool_error_update_extra),
+ DRD_(drd_tool_error_recog),
+ DRD_(drd_tool_error_read_extra),
+ DRD_(drd_tool_error_matches),
+ DRD_(drd_tool_error_name),
+ DRD_(drd_tool_error_print_extra));
}
Modified: trunk/drd/drd_error.h
===================================================================
--- trunk/drd/drd_error.h 2009-02-15 14:18:02 UTC (rev 9170)
+++ trunk/drd/drd_error.h 2009-02-15 14:46:17 UTC (rev 9171)
@@ -147,7 +147,7 @@
} GenericErrInfo;
-void set_show_conflicting_segments(const Bool scs);
+void DRD_(set_show_conflicting_segments)(const Bool scs);
void DRD_(register_error_handlers)(void);
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 14:18:02 UTC (rev 9170)
+++ trunk/drd/drd_main.c 2009-02-15 14:46:17 UTC (rev 9171)
@@ -125,7 +125,7 @@
if (segment_merging != -1)
DRD_(thread_set_segment_merging)(segment_merging);
if (show_confl_seg != -1)
- set_show_conflicting_segments(show_confl_seg);
+ DRD_(set_show_conflicting_segments)(show_confl_seg);
if (trace_address)
{
const Addr addr = VG_(strtoll16)(trace_address, 0);
@@ -568,7 +568,7 @@
VG_(message)(Vg_UserMsg,
" mutex: %lld non-recursive lock/unlock events.",
DRD_(get_mutex_lock_count)());
- drd_print_malloc_stats();
+ DRD_(print_malloc_stats)();
}
}
@@ -615,8 +615,8 @@
VG_(track_pre_thread_ll_exit) (drd_thread_finished);
// Other stuff.
- drd_register_malloc_wrappers(drd_start_using_mem_w_ecu,
- drd_stop_using_nonstack_mem);
+ DRD_(register_malloc_wrappers)(drd_start_using_mem_w_ecu,
+ drd_stop_using_nonstack_mem);
DRD_(clientreq_init)();
Modified: trunk/drd/drd_malloc_wrappers.c
===================================================================
--- trunk/drd/drd_malloc_wrappers.c 2009-02-15 14:18:02 UTC (rev 9170)
+++ trunk/drd/drd_malloc_wrappers.c 2009-02-15 14:46:17 UTC (rev 9171)
@@ -38,11 +38,8 @@
#include "pub_tool_tooliface.h"
-/*------------------------------------------------------------*/
-/*--- Definitions ---*/
-/*------------------------------------------------------------*/
+/* Local type definitions. */
-
typedef struct _DRD_Chunk {
struct _DRD_Chunk* next;
Addr data; // ptr to actual block
@@ -50,25 +47,25 @@
ExeContext* where; // where it was allocated
} DRD_Chunk;
-static StartUsingMem s_start_using_mem_callback;
-static StopUsingMem s_stop_using_mem_callback;
+
+/* Local variables. */
+
+static StartUsingMem DRD_(s_start_using_mem_callback);
+static StopUsingMem DRD_(s_stop_using_mem_callback);
/* Stats ... */
-static SizeT cmalloc_n_mallocs = 0;
-static SizeT cmalloc_n_frees = 0;
-static SizeT cmalloc_bs_mallocd = 0;
+static SizeT DRD_(s_cmalloc_n_mallocs) = 0;
+static SizeT DRD_(s_cmalloc_n_frees) = 0;
+static SizeT DRD_(s_cmalloc_bs_mallocd) = 0;
+/* Record malloc'd blocks. */
+static VgHashTable DRD_(s_malloc_list) = NULL;
/*------------------------------------------------------------*/
/*--- Tracking malloc'd and free'd blocks ---*/
/*------------------------------------------------------------*/
-/* Record malloc'd blocks. */
-static VgHashTable drd_malloc_list = NULL;
-
-
-/* Allocate its shadow chunk, put it on the appropriate list. */
-static
-DRD_Chunk* create_DRD_Chunk(ThreadId tid, Addr p, SizeT size)
+/** Allocate its shadow chunk, put it on the appropriate list. */
+static DRD_Chunk* DRD_(create_chunk)(ThreadId tid, Addr p, SizeT size)
{
DRD_Chunk* mc = VG_(malloc)("drd.malloc_wrappers.cDC.1",
sizeof(DRD_Chunk));
@@ -86,13 +83,13 @@
/* Allocate memory and note change in memory available */
static
__inline__
-void* drd_new_block(ThreadId tid,
- SizeT size, SizeT align,
- Bool is_zeroed)
+void* DRD_(new_block)(ThreadId tid,
+ SizeT size, SizeT align,
+ Bool is_zeroed)
{
Addr p;
- cmalloc_n_mallocs ++;
+ DRD_(s_cmalloc_n_mallocs) ++;
// Allocate and zero
p = (Addr)VG_(cli_malloc)(align, size);
@@ -100,44 +97,39 @@
return NULL;
}
if (is_zeroed) VG_(memset)((void*)p, 0, size);
- s_start_using_mem_callback(p, p + size, 0/*ec_uniq*/);
+ DRD_(s_start_using_mem_callback)(p, p + size, 0/*ec_uniq*/);
// Only update this stat if allocation succeeded.
- cmalloc_bs_mallocd += size;
+ DRD_(s_cmalloc_bs_mallocd) += size;
- VG_(HT_add_node)(drd_malloc_list, create_DRD_Chunk(tid, p, size));
+ VG_(HT_add_node)(DRD_(s_malloc_list), DRD_(create_chunk)(tid, p, size));
return (void*)p;
}
-static
-void* drd_malloc(ThreadId tid, SizeT n)
+static void* DRD_(malloc)(ThreadId tid, SizeT n)
{
- return drd_new_block(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
+ return DRD_(new_block)(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
}
-static
-void* drd_memalign(ThreadId tid, SizeT align, SizeT n)
+static void* DRD_(memalign)(ThreadId tid, SizeT align, SizeT n)
{
- return drd_new_block(tid, n, align, /*is_zeroed*/False);
+ return DRD_(new_block)(tid, n, align, /*is_zeroed*/False);
}
-static
-void* drd_calloc(ThreadId tid, SizeT nmemb, SizeT size1)
+static void* DRD_(calloc)(ThreadId tid, SizeT nmemb, SizeT size1)
{
- return drd_new_block(tid, nmemb*size1, VG_(clo_alignment),
- /*is_zeroed*/True);
+ return DRD_(new_block)(tid, nmemb*size1, VG_(clo_alignment),
+ /*is_zeroed*/True);
}
-static
-__inline__
-void drd_handle_free(ThreadId tid, Addr p)
+static __inline__ void DRD_(handle_free)(ThreadId tid, Addr p)
{
DRD_Chunk* mc;
- cmalloc_n_frees++;
+ DRD_(s_cmalloc_n_frees)++;
- mc = VG_(HT_remove)(drd_malloc_list, (UWord)p);
+ mc = VG_(HT_remove)(DRD_(s_malloc_list), (UWord)p);
if (mc == NULL)
{
tl_assert(0);
@@ -146,31 +138,29 @@
{
tl_assert(p == mc->data);
if (mc->size > 0)
- s_stop_using_mem_callback(mc->data, mc->size);
+ DRD_(s_stop_using_mem_callback)(mc->data, mc->size);
VG_(cli_free)((void*)p);
VG_(free)(mc);
}
}
-static
-void drd_free(ThreadId tid, void* p)
+static void DRD_(free)(ThreadId tid, void* p)
{
- drd_handle_free(tid, (Addr)p);
+ DRD_(handle_free)(tid, (Addr)p);
}
-static
-void* drd_realloc(ThreadId tid, void* p_old, SizeT new_size)
+static void* DRD_(realloc)(ThreadId tid, void* p_old, SizeT new_size)
{
DRD_Chunk* mc;
void* p_new;
SizeT old_size;
- cmalloc_n_frees ++;
- cmalloc_n_mallocs ++;
- cmalloc_bs_mallocd += new_size;
+ DRD_(s_cmalloc_n_frees) ++;
+ DRD_(s_cmalloc_n_mallocs) ++;
+ DRD_(s_cmalloc_bs_mallocd) += new_size;
/* Remove the old block */
- mc = VG_(HT_remove)(drd_malloc_list, (UWord)p_old);
+ mc = VG_(HT_remove)(DRD_(s_malloc_list), (UWord)p_old);
if (mc == NULL) {
tl_assert(0);
return NULL;
@@ -188,7 +178,7 @@
else if (old_size > new_size)
{
/* new size is smaller */
- s_stop_using_mem_callback(mc->data + new_size, old_size);
+ DRD_(s_stop_using_mem_callback)(mc->data + new_size, old_size);
mc->size = new_size;
mc->where = VG_(record_ExeContext)(tid, 0);
p_new = p_old;
@@ -206,12 +196,12 @@
VG_(memcpy)((void*)a_new, p_old, mc->size);
/* Free old memory */
- s_stop_using_mem_callback(mc->data, mc->size);
+ DRD_(s_stop_using_mem_callback)(mc->data, mc->size);
VG_(free)(mc);
// Allocate a new chunk.
- mc = create_DRD_Chunk(tid, a_new, new_size);
- s_start_using_mem_callback(a_new, a_new + new_size, 0/*ec_uniq*/);
+ mc = DRD_(create_chunk)(tid, a_new, new_size);
+ DRD_(s_start_using_mem_callback)(a_new, a_new + new_size, 0/*ec_uniq*/);
}
else
{
@@ -226,65 +216,62 @@
// will have removed and then re-added mc unnecessarily. But that's ok
// because shrinking a block with realloc() is (presumably) much rarer
// than growing it, and this way simplifies the growing case.
- VG_(HT_add_node)(drd_malloc_list, mc);
+ VG_(HT_add_node)(DRD_(s_malloc_list), mc);
return p_new;
}
-static
-void* drd___builtin_new(ThreadId tid, SizeT n)
+static void* DRD_(__builtin_new)(ThreadId tid, SizeT n)
{
- void* const result = drd_new_block(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
+ void* const result = DRD_(new_block)(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
//VG_(message)(Vg_DebugMsg, "__builtin_new(%d, %d) = %p", tid, n, result);
return result;
}
-static
-void drd___builtin_delete(ThreadId tid, void* p)
+static void DRD_(__builtin_delete)(ThreadId tid, void* p)
{
//VG_(message)(Vg_DebugMsg, "__builtin_delete(%d, %p)", tid, p);
- drd_handle_free(tid, (Addr)p);
+ DRD_(handle_free)(tid, (Addr)p);
}
-static
-void* drd___builtin_vec_new(ThreadId tid, SizeT n)
+static void* DRD_(__builtin_vec_new)(ThreadId tid, SizeT n)
{
- return drd_new_block(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
+ return DRD_(new_block)(tid, n, VG_(clo_alignment), /*is_zeroed*/False);
}
-static
-void drd___builtin_vec_delete(ThreadId tid, void* p)
+static void DRD_(__builtin_vec_delete)(ThreadId tid, void* p)
{
- drd_handle_free(tid, (Addr)p);
+ DRD_(handle_free)(tid, (Addr)p);
}
-void drd_register_malloc_wrappers(const StartUsingMem start_using_mem_callback,
- const StopUsingMem stop_using_mem_callback)
+void DRD_(register_malloc_wrappers)(const StartUsingMem start_callback,
+ const StopUsingMem stop_callback)
{
- tl_assert(drd_malloc_list == 0);
- drd_malloc_list = VG_(HT_construct)("drd_malloc_list"); // a big prime
- tl_assert(drd_malloc_list != 0);
- tl_assert(stop_using_mem_callback);
+ tl_assert(DRD_(s_malloc_list) == 0);
+ DRD_(s_malloc_list) = VG_(HT_construct)("drd_malloc_list"); // a big prime
+ tl_assert(DRD_(s_malloc_list) != 0);
+ tl_assert(start_callback);
+ tl_assert(stop_callback);
- s_start_using_mem_callback = start_using_mem_callback;
- s_stop_using_mem_callback = stop_using_mem_callback;
+ DRD_(s_start_using_mem_callback) = start_callback;
+ DRD_(s_stop_using_mem_callback) = stop_callback;
- VG_(needs_malloc_replacement)(drd_malloc,
- drd___builtin_new,
- drd___builtin_vec_new,
- drd_memalign,
- drd_calloc,
- drd_free,
- drd___builtin_delete,
- drd___builtin_vec_delete,
- drd_realloc,
+ VG_(needs_malloc_replacement)(DRD_(malloc),
+ DRD_(__builtin_new),
+ DRD_(__builtin_vec_new),
+ DRD_(memalign),
+ DRD_(calloc),
+ DRD_(free),
+ DRD_(__builtin_delete),
+ DRD_(__builtin_vec_delete),
+ DRD_(realloc),
0);
}
-Bool drd_heap_addrinfo(Addr const a,
- Addr* const data,
- SizeT* const size,
- ExeContext** const where)
+Bool DRD_(heap_addrinfo)(Addr const a,
+ Addr* const data,
+ SizeT* const size,
+ ExeContext** const where)
{
DRD_Chunk* mc;
@@ -292,8 +279,8 @@
tl_assert(size);
tl_assert(where);
- VG_(HT_ResetIter)(drd_malloc_list);
- while ((mc = VG_(HT_Next)(drd_malloc_list)))
+ VG_(HT_ResetIter)(DRD_(s_malloc_list));
+ while ((mc = VG_(HT_Next)(DRD_(s_malloc_list))))
{
if (mc->data <= a && a < mc->data + mc->size)
{
@@ -310,7 +297,7 @@
/*--- Statistics printing ---*/
/*------------------------------------------------------------*/
-void drd_print_malloc_stats(void)
+void DRD_(print_malloc_stats)(void)
{
DRD_Chunk* mc;
SizeT nblocks = 0;
@@ -322,8 +309,8 @@
return;
/* Count memory still in use. */
- VG_(HT_ResetIter)(drd_malloc_list);
- while ((mc = VG_(HT_Next)(drd_malloc_list)))
+ VG_(HT_ResetIter)(DRD_(s_malloc_list));
+ while ((mc = VG_(HT_Next)(DRD_(s_malloc_list))))
{
nblocks++;
nbytes += mc->size;
@@ -334,8 +321,8 @@
nbytes, nblocks);
VG_(message)(Vg_DebugMsg,
"malloc/free: %lu allocs, %lu frees, %lu bytes allocated.",
- cmalloc_n_mallocs,
- cmalloc_n_frees, cmalloc_bs_mallocd);
+ DRD_(s_cmalloc_n_mallocs),
+ DRD_(s_cmalloc_n_frees), DRD_(s_cmalloc_bs_mallocd));
if (VG_(clo_verbosity) > 1)
VG_(message)(Vg_DebugMsg, " ");
}
Modified: trunk/drd/drd_malloc_wrappers.h
===================================================================
--- trunk/drd/drd_malloc_wrappers.h 2009-02-15 14:18:02 UTC (rev 9170)
+++ trunk/drd/drd_malloc_wrappers.h 2009-02-15 14:46:17 UTC (rev 9171)
@@ -26,21 +26,22 @@
#define __MALLOC_WRAPPERS_H
-#include "pub_tool_basics.h" // Bool
-#include "pub_tool_execontext.h" // ExeContext
+#include "drd_basics.h" /* DRD_() */
+#include "pub_tool_basics.h" /* Bool */
+#include "pub_tool_execontext.h" /* ExeContext */
typedef void (*StartUsingMem)(const Addr a1, const Addr a2, UInt ec_uniq);
typedef void (*StopUsingMem)(const Addr a1, const Addr a2);
-void drd_register_malloc_wrappers(const StartUsingMem start_using_mem_callback,
- const StopUsingMem stop_using_mem_callback);
-Bool drd_heap_addrinfo(Addr const a,
- Addr* const data,
- SizeT* const size,
- ExeContext** const where);
-void drd_print_malloc_stats(void);
+void DRD_(register_malloc_wrappers)(const StartUsingMem start_callback,
+ const StopUsingMem stop_callback);
+Bool DRD_(heap_addrinfo)(Addr const a,
+ Addr* const data,
+ SizeT* const size,
+ ExeContext** const where);
+void DRD_(print_malloc_stats)(void);
#endif // __MALLOC_WRAPPERS_H
From: <sv...@va...> - 2009-02-15 14:18:09
Author: bart
Date: 2009-02-15 14:18:02 +0000 (Sun, 15 Feb 2009)
New Revision: 9170
Log:
Wrapped DRD_() macro around even more function names.
Modified:
trunk/drd/drd_clientobj.c
trunk/drd/drd_clientreq.c
trunk/drd/drd_cond.c
trunk/drd/drd_cond.h
trunk/drd/drd_main.c
trunk/drd/drd_mutex.c
trunk/drd/drd_mutex.h
trunk/drd/drd_rwlock.c
trunk/drd/drd_rwlock.h
trunk/drd/drd_semaphore.c
trunk/drd/drd_semaphore.h
trunk/drd/drd_thread.c
Modified: trunk/drd/drd_clientobj.c
===================================================================
--- trunk/drd/drd_clientobj.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_clientobj.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -38,24 +38,26 @@
/* Local variables. */
-static OSet* s_clientobj;
-static Bool s_trace_clientobj;
+static OSet* DRD_(s_clientobj_set);
+static Bool DRD_(s_trace_clientobj);
/* Function definitions. */
void DRD_(clientobj_set_trace)(const Bool trace)
{
- s_trace_clientobj = trace;
+ DRD_(s_trace_clientobj) = trace;
}
/** Initialize the client object set. */
void DRD_(clientobj_init)(void)
{
- tl_assert(s_clientobj == 0);
- s_clientobj = VG_(OSetGen_Create)(0, 0, VG_(malloc), "drd.clientobj.ci.1",
- VG_(free));
- tl_assert(s_clientobj);
+ tl_assert(DRD_(s_clientobj_set) == 0);
+ DRD_(s_clientobj_set) = VG_(OSetGen_Create)(0, 0,
+ VG_(malloc),
+ "drd.clientobj.ci.1",
+ VG_(free));
+ tl_assert(DRD_(s_clientobj_set));
}
/**
@@ -65,10 +67,10 @@
*/
void DRD_(clientobj_cleanup)(void)
{
- tl_assert(s_clientobj);
- tl_assert(VG_(OSetGen_Size)(s_clientobj) == 0);
- VG_(OSetGen_Destroy)(s_clientobj);
- s_clientobj = 0;
+ tl_assert(DRD_(s_clientobj_set));
+ tl_assert(VG_(OSetGen_Size)(DRD_(s_clientobj_set)) == 0);
+ VG_(OSetGen_Destroy)(DRD_(s_clientobj_set));
+ DRD_(s_clientobj_set) = 0;
}
/** Return the data associated with the client object at client address addr.
@@ -77,7 +79,7 @@
*/
DrdClientobj* DRD_(clientobj_get_any)(const Addr addr)
{
- return VG_(OSetGen_Lookup)(s_clientobj, &addr);
+ return VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &addr);
}
/** Return the data associated with the client object at client address addr
@@ -87,7 +89,7 @@
DrdClientobj* DRD_(clientobj_get)(const Addr addr, const ObjType t)
{
DrdClientobj* p;
- p = VG_(OSetGen_Lookup)(s_clientobj, &addr);
+ p = VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &addr);
if (p && p->any.type == t)
return p;
return 0;
@@ -101,8 +103,8 @@
DrdClientobj *p;
tl_assert(a1 < a2);
- VG_(OSetGen_ResetIter)(s_clientobj);
- for ( ; (p = VG_(OSetGen_Next)(s_clientobj)) != 0; )
+ VG_(OSetGen_ResetIter)(DRD_(s_clientobj_set));
+ for ( ; (p = VG_(OSetGen_Next)(DRD_(s_clientobj_set))) != 0; )
{
if (a1 <= p->any.a1 && p->any.a1 < a2)
{
@@ -121,20 +123,20 @@
DrdClientobj* p;
tl_assert(! DRD_(clientobj_present)(a1, a1 + 1));
- tl_assert(VG_(OSetGen_Lookup)(s_clientobj, &a1) == 0);
+ tl_assert(VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &a1) == 0);
- if (s_trace_clientobj)
+ if (DRD_(s_trace_clientobj))
{
VG_(message)(Vg_UserMsg, "Adding client object 0x%lx of type %d", a1, t);
}
- p = VG_(OSetGen_AllocNode)(s_clientobj, sizeof(*p));
+ p = VG_(OSetGen_AllocNode)(DRD_(s_clientobj_set), sizeof(*p));
VG_(memset)(p, 0, sizeof(*p));
p->any.a1 = a1;
p->any.type = t;
p->any.first_observed_at = VG_(record_ExeContext)(VG_(get_running_tid)(), 0);
- VG_(OSetGen_Insert)(s_clientobj, p);
- tl_assert(VG_(OSetGen_Lookup)(s_clientobj, &a1) == p);
+ VG_(OSetGen_Insert)(DRD_(s_clientobj_set), p);
+ tl_assert(VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &a1) == p);
DRD_(start_suppression)(a1, a1 + 1, "clientobj");
return p;
}
@@ -143,7 +145,7 @@
{
DrdClientobj* p;
- if (s_trace_clientobj)
+ if (DRD_(s_trace_clientobj))
{
VG_(message)(Vg_UserMsg, "Removing client object 0x%lx of type %d",
addr, t);
@@ -153,15 +155,15 @@
#endif
}
- p = VG_(OSetGen_Lookup)(s_clientobj, &addr);
+ p = VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &addr);
tl_assert(p->any.type == t);
- p = VG_(OSetGen_Remove)(s_clientobj, &addr);
+ p = VG_(OSetGen_Remove)(DRD_(s_clientobj_set), &addr);
if (p)
{
- tl_assert(VG_(OSetGen_Lookup)(s_clientobj, &addr) == 0);
+ tl_assert(VG_(OSetGen_Lookup)(DRD_(s_clientobj_set), &addr) == 0);
tl_assert(p->any.cleanup);
(*p->any.cleanup)(p);
- VG_(OSetGen_FreeNode)(s_clientobj, p);
+ VG_(OSetGen_FreeNode)(DRD_(s_clientobj_set), p);
return True;
}
return False;
@@ -172,13 +174,13 @@
Addr removed_at;
DrdClientobj* p;
- tl_assert(s_clientobj);
+ tl_assert(DRD_(s_clientobj_set));
if (! DRD_(is_any_suppressed)(a1, a2))
return;
- VG_(OSetGen_ResetIter)(s_clientobj);
- p = VG_(OSetGen_Next)(s_clientobj);
+ VG_(OSetGen_ResetIter)(DRD_(s_clientobj_set));
+ p = VG_(OSetGen_Next)(DRD_(s_clientobj_set));
for ( ; p != 0; )
{
if (a1 <= p->any.a1 && p->any.a1 < a2)
@@ -187,27 +189,28 @@
DRD_(clientobj_remove)(p->any.a1, p->any.type);
/* The above call removes an element from the oset and hence */
/* invalidates the iterator. Set the iterator back. */
- VG_(OSetGen_ResetIter)(s_clientobj);
- while ((p = VG_(OSetGen_Next)(s_clientobj)) != 0
+ VG_(OSetGen_ResetIter)(DRD_(s_clientobj_set));
+ while ((p = VG_(OSetGen_Next)(DRD_(s_clientobj_set))) != 0
&& p->any.a1 <= removed_at)
{ }
}
else
{
- p = VG_(OSetGen_Next)(s_clientobj);
+ p = VG_(OSetGen_Next)(DRD_(s_clientobj_set));
}
}
}
void DRD_(clientobj_resetiter)(void)
{
- VG_(OSetGen_ResetIter)(s_clientobj);
+ VG_(OSetGen_ResetIter)(DRD_(s_clientobj_set));
}
DrdClientobj* DRD_(clientobj_next)(const ObjType t)
{
DrdClientobj* p;
- while ((p = VG_(OSetGen_Next)(s_clientobj)) != 0 && p->any.type != t)
+ while ((p = VG_(OSetGen_Next)(DRD_(s_clientobj_set))) != 0
+ && p->any.type != t)
;
return p;
}
Modified: trunk/drd/drd_clientreq.c
===================================================================
--- trunk/drd/drd_clientreq.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_clientreq.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -162,7 +162,7 @@
case VG_USERREQ__PRE_MUTEX_INIT:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- mutex_init(arg[1], arg[2]);
+ DRD_(mutex_init)(arg[1], arg[2]);
break;
case VG_USERREQ__POST_MUTEX_INIT:
@@ -175,22 +175,22 @@
case VG_USERREQ__POST_MUTEX_DESTROY:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- mutex_post_destroy(arg[1]);
+ DRD_(mutex_post_destroy)(arg[1]);
break;
case VG_USERREQ__PRE_MUTEX_LOCK:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- mutex_pre_lock(arg[1], arg[2], arg[3]);
+ DRD_(mutex_pre_lock)(arg[1], arg[2], arg[3]);
break;
case VG_USERREQ__POST_MUTEX_LOCK:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- mutex_post_lock(arg[1], arg[2], False/*post_cond_wait*/);
+ DRD_(mutex_post_lock)(arg[1], arg[2], False/*post_cond_wait*/);
break;
case VG_USERREQ__PRE_MUTEX_UNLOCK:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- mutex_unlock(arg[1], arg[2]);
+ DRD_(mutex_unlock)(arg[1], arg[2]);
break;
case VG_USERREQ__POST_MUTEX_UNLOCK:
@@ -208,7 +208,7 @@
case VG_USERREQ__PRE_COND_INIT:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- cond_pre_init(arg[1]);
+ DRD_(cond_pre_init)(arg[1]);
break;
case VG_USERREQ__POST_COND_INIT:
@@ -221,7 +221,7 @@
case VG_USERREQ__POST_COND_DESTROY:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- cond_post_destroy(arg[1]);
+ DRD_(cond_post_destroy)(arg[1]);
break;
case VG_USERREQ__PRE_COND_WAIT:
@@ -230,8 +230,8 @@
const Addr cond = arg[1];
const Addr mutex = arg[2];
const MutexT mutex_type = arg[3];
- mutex_unlock(mutex, mutex_type);
- cond_pre_wait(cond, mutex);
+ DRD_(mutex_unlock)(mutex, mutex_type);
+ DRD_(cond_pre_wait)(cond, mutex);
}
break;
@@ -241,14 +241,14 @@
const Addr cond = arg[1];
const Addr mutex = arg[2];
const Bool took_lock = arg[3];
- cond_post_wait(cond);
- mutex_post_lock(mutex, took_lock, True);
+ DRD_(cond_post_wait)(cond);
+ DRD_(mutex_post_lock)(mutex, took_lock, True);
}
break;
case VG_USERREQ__PRE_COND_SIGNAL:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- cond_pre_signal(arg[1]);
+ DRD_(cond_pre_signal)(arg[1]);
break;
case VG_USERREQ__POST_COND_SIGNAL:
@@ -257,7 +257,7 @@
case VG_USERREQ__PRE_COND_BROADCAST:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- cond_pre_broadcast(arg[1]);
+ DRD_(cond_pre_broadcast)(arg[1]);
break;
case VG_USERREQ__POST_COND_BROADCAST:
@@ -266,7 +266,7 @@
case VG_USERREQ__PRE_SEM_INIT:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- semaphore_init(arg[1], arg[2], arg[3]);
+ DRD_(semaphore_init)(arg[1], arg[2], arg[3]);
break;
case VG_USERREQ__POST_SEM_INIT:
@@ -279,27 +279,27 @@
case VG_USERREQ__POST_SEM_DESTROY:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- semaphore_destroy(arg[1]);
+ DRD_(semaphore_destroy)(arg[1]);
break;
case VG_USERREQ__PRE_SEM_WAIT:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- semaphore_pre_wait(arg[1]);
+ DRD_(semaphore_pre_wait)(arg[1]);
break;
case VG_USERREQ__POST_SEM_WAIT:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- semaphore_post_wait(drd_tid, arg[1], arg[2]);
+ DRD_(semaphore_post_wait)(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__PRE_SEM_POST:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- semaphore_pre_post(drd_tid, arg[1]);
+ DRD_(semaphore_pre_post)(drd_tid, arg[1]);
break;
case VG_USERREQ__POST_SEM_POST:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- semaphore_post_post(drd_tid, arg[1], arg[2]);
+ DRD_(semaphore_post_post)(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__PRE_BARRIER_INIT:
@@ -331,36 +331,36 @@
break;
case VG_USERREQ__PRE_RWLOCK_INIT:
- rwlock_pre_init(arg[1]);
+ DRD_(rwlock_pre_init)(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_DESTROY:
- rwlock_post_destroy(arg[1]);
+ DRD_(rwlock_post_destroy)(arg[1]);
break;
case VG_USERREQ__PRE_RWLOCK_RDLOCK:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- rwlock_pre_rdlock(arg[1]);
+ DRD_(rwlock_pre_rdlock)(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_RDLOCK:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- rwlock_post_rdlock(arg[1], arg[2]);
+ DRD_(rwlock_post_rdlock)(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_RWLOCK_WRLOCK:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- rwlock_pre_wrlock(arg[1]);
+ DRD_(rwlock_pre_wrlock)(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_WRLOCK:
if (DRD_(thread_leave_synchr)(drd_tid) == 0)
- rwlock_post_wrlock(arg[1], arg[2]);
+ DRD_(rwlock_post_wrlock)(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_RWLOCK_UNLOCK:
if (DRD_(thread_enter_synchr)(drd_tid) == 0)
- rwlock_pre_unlock(arg[1]);
+ DRD_(rwlock_pre_unlock)(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_UNLOCK:
Modified: trunk/drd/drd_cond.c
===================================================================
--- trunk/drd/drd_cond.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_cond.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -38,35 +38,35 @@
/* Local functions. */
-static void cond_cleanup(struct cond_info* p);
+static void DRD_(cond_cleanup)(struct cond_info* p);
/* Local variables. */
-static Bool s_drd_report_signal_unlocked = True;
-static Bool s_trace_cond;
+static Bool DRD_(s_report_signal_unlocked) = True;
+static Bool DRD_(s_trace_cond);
/* Function definitions. */
-void cond_set_report_signal_unlocked(const Bool r)
+void DRD_(cond_set_report_signal_unlocked)(const Bool r)
{
- s_drd_report_signal_unlocked = r;
+ DRD_(s_report_signal_unlocked) = r;
}
-void cond_set_trace(const Bool trace_cond)
+void DRD_(cond_set_trace)(const Bool trace_cond)
{
- s_trace_cond = trace_cond;
+ DRD_(s_trace_cond) = trace_cond;
}
static
-void cond_initialize(struct cond_info* const p, const Addr cond)
+void DRD_(cond_initialize)(struct cond_info* const p, const Addr cond)
{
tl_assert(cond != 0);
tl_assert(p->a1 == cond);
tl_assert(p->type == ClientCondvar);
- p->cleanup = (void(*)(DrdClientobj*))cond_cleanup;
+ p->cleanup = (void(*)(DrdClientobj*))(DRD_(cond_cleanup));
p->waiter_count = 0;
p->mutex = 0;
}
@@ -75,7 +75,7 @@
* Free the memory that was allocated by cond_initialize(). Called by
* DRD_(clientobj_remove)().
*/
-static void cond_cleanup(struct cond_info* p)
+static void DRD_(cond_cleanup)(struct cond_info* p)
{
tl_assert(p);
if (p->mutex)
@@ -95,7 +95,7 @@
}
}
-static struct cond_info* cond_get_or_allocate(const Addr cond)
+static struct cond_info* DRD_(cond_get_or_allocate)(const Addr cond)
{
struct cond_info *p;
@@ -104,23 +104,23 @@
if (p == 0)
{
p = &(DRD_(clientobj_add)(cond, ClientCondvar)->cond);
- cond_initialize(p, cond);
+ DRD_(cond_initialize)(p, cond);
}
return p;
}
-static struct cond_info* cond_get(const Addr cond)
+static struct cond_info* DRD_(cond_get)(const Addr cond)
{
tl_assert(offsetof(DrdClientobj, cond) == 0);
return &(DRD_(clientobj_get)(cond, ClientCondvar)->cond);
}
/** Called before pthread_cond_init(). */
-void cond_pre_init(const Addr cond)
+void DRD_(cond_pre_init)(const Addr cond)
{
struct cond_info* p;
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_init cond 0x%lx",
@@ -129,7 +129,7 @@
cond);
}
- p = cond_get(cond);
+ p = DRD_(cond_get)(cond);
if (p)
{
@@ -141,15 +141,15 @@
&cei);
}
- p = cond_get_or_allocate(cond);
+ p = DRD_(cond_get_or_allocate)(cond);
}
/** Called after pthread_cond_destroy(). */
-void cond_post_destroy(const Addr cond)
+void DRD_(cond_post_destroy)(const Addr cond)
{
struct cond_info* p;
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_destroy cond 0x%lx",
@@ -158,7 +158,7 @@
cond);
}
- p = cond_get(cond);
+ p = DRD_(cond_get)(cond);
if (p == 0)
{
CondErrInfo cei = { .cond = cond };
@@ -187,12 +187,12 @@
/** Called before pthread_cond_wait(). Note: before this function is called,
* mutex_unlock() has already been called from drd_clientreq.c.
*/
-int cond_pre_wait(const Addr cond, const Addr mutex)
+int DRD_(cond_pre_wait)(const Addr cond, const Addr mutex)
{
struct cond_info* p;
struct mutex_info* q;
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_pre_wait cond 0x%lx",
@@ -201,7 +201,7 @@
cond);
}
- p = cond_get_or_allocate(cond);
+ p = DRD_(cond_get_or_allocate)(cond);
tl_assert(p);
if (p->waiter_count == 0)
@@ -220,8 +220,9 @@
&cwei);
}
tl_assert(p->mutex);
- q = mutex_get(p->mutex);
- if (q && q->owner == DRD_(thread_get_running_tid)() && q->recursion_count > 0)
+ q = DRD_(mutex_get)(p->mutex);
+ if (q
+ && q->owner == DRD_(thread_get_running_tid)() && q->recursion_count > 0)
{
const ThreadId vg_tid = VG_(get_running_tid)();
MutexErrInfo MEI = { q->a1, q->recursion_count, q->owner };
@@ -233,18 +234,18 @@
}
else if (q == 0)
{
- not_a_mutex(p->mutex);
+ DRD_(not_a_mutex)(p->mutex);
}
return ++p->waiter_count;
}
/** Called after pthread_cond_wait(). */
-int cond_post_wait(const Addr cond)
+int DRD_(cond_post_wait)(const Addr cond)
{
struct cond_info* p;
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_post_wait cond 0x%lx",
@@ -253,7 +254,7 @@
cond);
}
- p = cond_get(cond);
+ p = DRD_(cond_get)(cond);
if (p)
{
if (p->waiter_count > 0)
@@ -269,16 +270,16 @@
return 0;
}
-static void cond_signal(Addr const cond)
+static void DRD_(cond_signal)(Addr const cond)
{
const ThreadId vg_tid = VG_(get_running_tid)();
const DrdThreadId drd_tid = DRD_(VgThreadIdToDrdThreadId)(vg_tid);
- struct cond_info* const cond_p = cond_get(cond);
+ struct cond_info* const cond_p = DRD_(cond_get)(cond);
if (cond_p && cond_p->waiter_count > 0)
{
- if (s_drd_report_signal_unlocked
- && ! mutex_is_locked_by(cond_p->mutex, drd_tid))
+ if (DRD_(s_report_signal_unlocked)
+ && ! DRD_(mutex_is_locked_by)(cond_p->mutex, drd_tid))
{
/* A signal is sent while the associated mutex has not been locked. */
/* This can indicate but is not necessarily a race condition. */
@@ -300,9 +301,9 @@
}
/** Called before pthread_cond_signal(). */
-void cond_pre_signal(Addr const cond)
+void DRD_(cond_pre_signal)(Addr const cond)
{
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_signal cond 0x%lx",
@@ -311,13 +312,13 @@
cond);
}
- cond_signal(cond);
+ DRD_(cond_signal)(cond);
}
/** Called before pthread_cond_broadcast(). */
-void cond_pre_broadcast(Addr const cond)
+void DRD_(cond_pre_broadcast)(Addr const cond)
{
- if (s_trace_cond)
+ if (DRD_(s_trace_cond))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_broadcast cond 0x%lx",
@@ -326,9 +327,9 @@
cond);
}
- cond_signal(cond);
+ DRD_(cond_signal)(cond);
}
/** Called after pthread_cond_destroy(). */
-void cond_thread_delete(const DrdThreadId tid)
+void DRD_(cond_thread_delete)(const DrdThreadId tid)
{ }
Modified: trunk/drd/drd_cond.h
===================================================================
--- trunk/drd/drd_cond.h 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_cond.h 2009-02-15 14:18:02 UTC (rev 9170)
@@ -38,15 +38,15 @@
/* Function declarations. */
-void cond_set_report_signal_unlocked(const Bool r);
-void cond_set_trace(const Bool trace_cond);
-void cond_pre_init(const Addr cond);
-void cond_post_destroy(const Addr cond);
-int cond_pre_wait(const Addr cond, const Addr mutex);
-int cond_post_wait(const Addr cond);
-void cond_pre_signal(const Addr cond);
-void cond_pre_broadcast(const Addr cond);
-void cond_thread_delete(const DrdThreadId tid);
+void DRD_(cond_set_report_signal_unlocked)(const Bool r);
+void DRD_(cond_set_trace)(const Bool trace_cond);
+void DRD_(cond_pre_init)(const Addr cond);
+void DRD_(cond_post_destroy)(const Addr cond);
+int DRD_(cond_pre_wait)(const Addr cond, const Addr mutex);
+int DRD_(cond_post_wait)(const Addr cond);
+void DRD_(cond_pre_signal)(const Addr cond);
+void DRD_(cond_pre_broadcast)(const Addr cond);
+void DRD_(cond_thread_delete)(const DrdThreadId tid);
#endif /* __DRD_COND_H */
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_main.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -111,16 +111,16 @@
DRD_(set_check_stack_accesses)(check_stack_accesses);
if (exclusive_threshold_ms != -1)
{
- mutex_set_lock_threshold(exclusive_threshold_ms);
- rwlock_set_exclusive_threshold(exclusive_threshold_ms);
+ DRD_(mutex_set_lock_threshold)(exclusive_threshold_ms);
+ DRD_(rwlock_set_exclusive_threshold)(exclusive_threshold_ms);
}
if (report_signal_unlocked != -1)
{
- cond_set_report_signal_unlocked(report_signal_unlocked);
+ DRD_(cond_set_report_signal_unlocked)(report_signal_unlocked);
}
if (shared_threshold_ms != -1)
{
- rwlock_set_shared_threshold(shared_threshold_ms);
+ DRD_(rwlock_set_shared_threshold)(shared_threshold_ms);
}
if (segment_merging != -1)
DRD_(thread_set_segment_merging)(segment_merging);
@@ -136,7 +136,7 @@
if (trace_clientobj != -1)
DRD_(clientobj_set_trace)(trace_clientobj);
if (trace_cond != -1)
- cond_set_trace(trace_cond);
+ DRD_(cond_set_trace)(trace_cond);
if (trace_csw != -1)
DRD_(thread_trace_context_switches)(trace_csw);
if (trace_fork_join != -1)
@@ -144,13 +144,13 @@
if (trace_conflict_set != -1)
DRD_(thread_trace_conflict_set)(trace_conflict_set);
if (trace_mutex != -1)
- mutex_set_trace(trace_mutex);
+ DRD_(mutex_set_trace)(trace_mutex);
if (trace_rwlock != -1)
- rwlock_set_trace(trace_rwlock);
+ DRD_(rwlock_set_trace)(trace_rwlock);
if (trace_segment != -1)
DRD_(sg_set_trace)(trace_segment);
if (trace_semaphore != -1)
- semaphore_set_trace(trace_semaphore);
+ DRD_(semaphore_set_trace)(trace_semaphore);
if (trace_suppression != -1)
DRD_(suppression_set_trace)(trace_suppression);
@@ -554,9 +554,9 @@
DRD_(thread_get_discard_ordered_segments_count)());
VG_(message)(Vg_UserMsg,
" (%lld m, %lld rw, %lld s, %lld b)",
- get_mutex_segment_creation_count(),
- get_rwlock_segment_creation_count(),
- get_semaphore_segment_creation_count(),
+ DRD_(get_mutex_segment_creation_count)(),
+ DRD_(get_rwlock_segment_creation_count)(),
+ DRD_(get_semaphore_segment_creation_count)(),
DRD_(get_barrier_segment_creation_count)());
VG_(message)(Vg_UserMsg,
" bitmaps: %lld level 1 / %lld level 2 bitmap refs",
@@ -567,7 +567,7 @@
bm_get_bitmap2_creation_count());
VG_(message)(Vg_UserMsg,
" mutex: %lld non-recursive lock/unlock events.",
- get_mutex_lock_count());
+ DRD_(get_mutex_lock_count)());
drd_print_malloc_stats();
}
}
Modified: trunk/drd/drd_mutex.c
===================================================================
--- trunk/drd/drd_mutex.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_mutex.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -36,42 +36,42 @@
#include "pub_tool_threadstate.h" // VG_(get_running_tid)()
-// Local functions.
+/* Local functions. */
-static void mutex_cleanup(struct mutex_info* p);
-static Bool mutex_is_locked(struct mutex_info* const p);
+static void DRD_(mutex_cleanup)(struct mutex_info* p);
+static Bool DRD_(mutex_is_locked)(struct mutex_info* const p);
-// Local variables.
+/* Local variables. */
-static Bool s_trace_mutex;
-static ULong s_mutex_lock_count;
-static ULong s_mutex_segment_creation_count;
-static UInt s_mutex_lock_threshold_ms = 1000 * 1000;
+static Bool DRD_(s_trace_mutex);
+static ULong DRD_(s_mutex_lock_count);
+static ULong DRD_(s_mutex_segment_creation_count);
+static UInt DRD_(s_mutex_lock_threshold_ms) = 1000 * 1000;
-// Function definitions.
+/* Function definitions. */
-void mutex_set_trace(const Bool trace_mutex)
+void DRD_(mutex_set_trace)(const Bool trace_mutex)
{
tl_assert(!! trace_mutex == trace_mutex);
- s_trace_mutex = trace_mutex;
+ DRD_(s_trace_mutex) = trace_mutex;
}
-void mutex_set_lock_threshold(const UInt lock_threshold_ms)
+void DRD_(mutex_set_lock_threshold)(const UInt lock_threshold_ms)
{
- s_mutex_lock_threshold_ms = lock_threshold_ms;
+ DRD_(s_mutex_lock_threshold_ms) = lock_threshold_ms;
}
static
-void mutex_initialize(struct mutex_info* const p,
- const Addr mutex, const MutexT mutex_type)
+void DRD_(mutex_initialize)(struct mutex_info* const p,
+ const Addr mutex, const MutexT mutex_type)
{
tl_assert(mutex);
tl_assert(mutex_type != mutex_type_unknown);
tl_assert(p->a1 == mutex);
- p->cleanup = (void(*)(DrdClientobj*))&mutex_cleanup;
+ p->cleanup = (void(*)(DrdClientobj*))&(DRD_(mutex_cleanup));
p->mutex_type = mutex_type;
p->recursion_count = 0;
p->owner = DRD_INVALID_THREADID;
@@ -81,23 +81,23 @@
}
/** Deallocate the memory that was allocated by mutex_initialize(). */
-static void mutex_cleanup(struct mutex_info* p)
+static void DRD_(mutex_cleanup)(struct mutex_info* p)
{
tl_assert(p);
- if (s_trace_mutex)
+ if (DRD_(s_trace_mutex))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] mutex_destroy %s 0x%lx rc %d owner %d",
VG_(get_running_tid)(),
DRD_(thread_get_running_tid)(),
- mutex_get_typename(p),
+ DRD_(mutex_get_typename)(p),
p->a1,
p ? p->recursion_count : -1,
p ? p->owner : DRD_INVALID_THREADID);
}
- if (mutex_is_locked(p))
+ if (DRD_(mutex_is_locked)(p))
{
MutexErrInfo MEI = { p->a1, p->recursion_count, p->owner };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -112,7 +112,7 @@
}
/** Let Valgrind report that there is no mutex object at address 'mutex'. */
-void not_a_mutex(const Addr mutex)
+void DRD_(not_a_mutex)(const Addr mutex)
{
MutexErrInfo MEI = { mutex, -1, DRD_INVALID_THREADID };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -124,7 +124,7 @@
static
struct mutex_info*
-mutex_get_or_allocate(const Addr mutex, const MutexT mutex_type)
+DRD_(mutex_get_or_allocate)(const Addr mutex, const MutexT mutex_type)
{
struct mutex_info* p;
@@ -137,18 +137,18 @@
if (DRD_(clientobj_present)(mutex, mutex + 1))
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return 0;
}
tl_assert(mutex_type != mutex_type_unknown);
p = &(DRD_(clientobj_add)(mutex, ClientMutex)->mutex);
- mutex_initialize(p, mutex, mutex_type);
+ DRD_(mutex_initialize)(p, mutex, mutex_type);
return p;
}
-struct mutex_info* mutex_get(const Addr mutex)
+struct mutex_info* DRD_(mutex_get)(const Addr mutex)
{
tl_assert(offsetof(DrdClientobj, mutex) == 0);
return &(DRD_(clientobj_get)(mutex, ClientMutex)->mutex);
@@ -156,29 +156,29 @@
/** Called before pthread_mutex_init(). */
struct mutex_info*
-mutex_init(const Addr mutex, const MutexT mutex_type)
+DRD_(mutex_init)(const Addr mutex, const MutexT mutex_type)
{
struct mutex_info* p;
tl_assert(mutex_type != mutex_type_unknown);
- if (s_trace_mutex)
+ if (DRD_(s_trace_mutex))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] mutex_init %s 0x%lx",
VG_(get_running_tid)(),
DRD_(thread_get_running_tid)(),
- mutex_type_name(mutex_type),
+ DRD_(mutex_type_name)(mutex_type),
mutex);
}
if (mutex_type == mutex_type_invalid_mutex)
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return 0;
}
- p = mutex_get(mutex);
+ p = DRD_(mutex_get)(mutex);
if (p)
{
const ThreadId vg_tid = VG_(get_running_tid)();
@@ -191,20 +191,20 @@
&MEI);
return p;
}
- p = mutex_get_or_allocate(mutex, mutex_type);
+ p = DRD_(mutex_get_or_allocate)(mutex, mutex_type);
return p;
}
/** Called after pthread_mutex_destroy(). */
-void mutex_post_destroy(const Addr mutex)
+void DRD_(mutex_post_destroy)(const Addr mutex)
{
struct mutex_info* p;
- p = mutex_get(mutex);
+ p = DRD_(mutex_get)(mutex);
if (p == 0)
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return;
}
@@ -216,23 +216,23 @@
* an attempt is made to lock recursively a synchronization object that must
* not be locked recursively.
*/
-void mutex_pre_lock(const Addr mutex, MutexT mutex_type,
- const Bool trylock)
+void DRD_(mutex_pre_lock)(const Addr mutex, MutexT mutex_type,
+ const Bool trylock)
{
struct mutex_info* p;
- p = mutex_get_or_allocate(mutex, mutex_type);
+ p = DRD_(mutex_get_or_allocate)(mutex, mutex_type);
if (mutex_type == mutex_type_unknown)
mutex_type = p->mutex_type;
- if (s_trace_mutex)
+ if (DRD_(s_trace_mutex))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] %s %s 0x%lx rc %d owner %d",
VG_(get_running_tid)(),
DRD_(thread_get_running_tid)(),
trylock ? "pre_mutex_lock " : "mutex_trylock ",
- p ? mutex_get_typename(p) : "(?)",
+ p ? DRD_(mutex_get_typename)(p) : "(?)",
mutex,
p ? p->recursion_count : -1,
p ? p->owner : DRD_INVALID_THREADID);
@@ -240,7 +240,7 @@
if (p == 0)
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return;
}
@@ -248,7 +248,7 @@
if (mutex_type == mutex_type_invalid_mutex)
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return;
}
@@ -271,22 +271,22 @@
* Note: this function must be called after pthread_mutex_lock() has been
* called, or a race condition is triggered !
*/
-void mutex_post_lock(const Addr mutex, const Bool took_lock,
- const Bool post_cond_wait)
+void DRD_(mutex_post_lock)(const Addr mutex, const Bool took_lock,
+ const Bool post_cond_wait)
{
const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct mutex_info* p;
- p = mutex_get(mutex);
+ p = DRD_(mutex_get)(mutex);
- if (s_trace_mutex)
+ if (DRD_(s_trace_mutex))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] %s %s 0x%lx rc %d owner %d%s",
VG_(get_running_tid)(),
drd_tid,
post_cond_wait ? "cond_post_wait " : "post_mutex_lock",
- p ? mutex_get_typename(p) : "(?)",
+ p ? DRD_(mutex_get_typename)(p) : "(?)",
mutex,
p ? p->recursion_count : 0,
p ? p->owner : VG_INVALID_THREADID,
@@ -306,12 +306,12 @@
DRD_(thread_combine_vc2)(drd_tid, &p->last_locked_segment->vc);
}
DRD_(thread_new_segment)(drd_tid);
- s_mutex_segment_creation_count++;
+ DRD_(s_mutex_segment_creation_count)++;
p->owner = drd_tid;
p->acquiry_time_ms = VG_(read_millisecond_timer)();
p->acquired_at = VG_(record_ExeContext)(VG_(get_running_tid)(), 0);
- s_mutex_lock_count++;
+ DRD_(s_mutex_lock_count)++;
}
else if (p->owner != drd_tid)
{
@@ -325,41 +325,42 @@
p->recursion_count++;
}
-/** Update mutex_info state when unlocking the pthread_mutex_t mutex.
+/**
+ * Update mutex_info state when unlocking the pthread_mutex_t mutex.
*
- * @param mutex Pointer to pthread_mutex_t data structure in the client space.
- * @param tid ThreadId of the thread calling pthread_mutex_unlock().
- * @param vc Pointer to the current vector clock of thread tid.
+ * @param mutex Pointer to pthread_mutex_t data structure in the client space.
+ * @param tid ThreadId of the thread calling pthread_mutex_unlock().
+ * @param vc Pointer to the current vector clock of thread tid.
*
- * @return New value of the mutex recursion count.
+ * @return New value of the mutex recursion count.
*
- * @note This function must be called before pthread_mutex_unlock() is called,
- * or a race condition is triggered !
+ * @note This function must be called before pthread_mutex_unlock() is called,
+ * or a race condition is triggered !
*/
-void mutex_unlock(const Addr mutex, MutexT mutex_type)
+void DRD_(mutex_unlock)(const Addr mutex, MutexT mutex_type)
{
const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
const ThreadId vg_tid = VG_(get_running_tid)();
struct mutex_info* p;
- p = mutex_get(mutex);
+ p = DRD_(mutex_get)(mutex);
if (mutex_type == mutex_type_unknown)
mutex_type = p->mutex_type;
- if (s_trace_mutex)
+ if (DRD_(s_trace_mutex))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] mutex_unlock %s 0x%lx rc %d",
vg_tid,
drd_tid,
- p ? mutex_get_typename(p) : "(?)",
+ p ? DRD_(mutex_get_typename)(p) : "(?)",
mutex,
p ? p->recursion_count : 0);
}
if (p == 0 || mutex_type == mutex_type_invalid_mutex)
{
- not_a_mutex(mutex);
+ DRD_(not_a_mutex)(mutex);
return;
}
@@ -399,13 +400,13 @@
if (p->recursion_count == 0)
{
- if (s_mutex_lock_threshold_ms > 0)
+ if (DRD_(s_mutex_lock_threshold_ms) > 0)
{
ULong held = VG_(read_millisecond_timer)() - p->acquiry_time_ms;
- if (held > s_mutex_lock_threshold_ms)
+ if (held > DRD_(s_mutex_lock_threshold_ms))
{
HoldtimeErrInfo HEI
- = { mutex, p->acquired_at, held, s_mutex_lock_threshold_ms };
+ = { mutex, p->acquired_at, held, DRD_(s_mutex_lock_threshold_ms) };
VG_(maybe_record_error)(vg_tid,
HoldtimeErr,
VG_(get_IP)(vg_tid),
@@ -421,31 +422,31 @@
DRD_(thread_get_latest_segment)(&p->last_locked_segment, drd_tid);
DRD_(thread_new_segment)(drd_tid);
p->acquired_at = 0;
- s_mutex_segment_creation_count++;
+ DRD_(s_mutex_segment_creation_count)++;
}
}
void DRD_(spinlock_init_or_unlock)(const Addr spinlock)
{
- struct mutex_info* mutex_p = mutex_get(spinlock);
+ struct mutex_info* mutex_p = DRD_(mutex_get)(spinlock);
if (mutex_p)
{
- mutex_unlock(spinlock, mutex_type_spinlock);
+ DRD_(mutex_unlock)(spinlock, mutex_type_spinlock);
}
else
{
- mutex_init(spinlock, mutex_type_spinlock);
+ DRD_(mutex_init)(spinlock, mutex_type_spinlock);
}
}
-const char* mutex_get_typename(struct mutex_info* const p)
+const char* DRD_(mutex_get_typename)(struct mutex_info* const p)
{
tl_assert(p);
- return mutex_type_name(p->mutex_type);
+ return DRD_(mutex_type_name)(p->mutex_type);
}
-const char* mutex_type_name(const MutexT mt)
+const char* DRD_(mutex_type_name)(const MutexT mt)
{
switch (mt)
{
@@ -466,15 +467,15 @@
}
/** Return true if the specified mutex is locked by any thread. */
-static Bool mutex_is_locked(struct mutex_info* const p)
+static Bool DRD_(mutex_is_locked)(struct mutex_info* const p)
{
tl_assert(p);
return (p->recursion_count > 0);
}
-Bool mutex_is_locked_by(const Addr mutex, const DrdThreadId tid)
+Bool DRD_(mutex_is_locked_by)(const Addr mutex, const DrdThreadId tid)
{
- struct mutex_info* const p = mutex_get(mutex);
+ struct mutex_info* const p = DRD_(mutex_get)(mutex);
if (p)
{
return (p->recursion_count > 0 && p->owner == tid);
@@ -482,9 +483,9 @@
return False;
}
-int mutex_get_recursion_count(const Addr mutex)
+int DRD_(mutex_get_recursion_count)(const Addr mutex)
{
- struct mutex_info* const p = mutex_get(mutex);
+ struct mutex_info* const p = DRD_(mutex_get)(mutex);
tl_assert(p);
return p->recursion_count;
}
@@ -493,7 +494,7 @@
* Call this function when thread tid stops to exist, such that the
* "last owner" field can be cleared if it still refers to that thread.
*/
-void mutex_thread_delete(const DrdThreadId tid)
+void DRD_(mutex_thread_delete)(const DrdThreadId tid)
{
struct mutex_info* p;
@@ -514,12 +515,12 @@
}
}
-ULong get_mutex_lock_count(void)
+ULong DRD_(get_mutex_lock_count)(void)
{
- return s_mutex_lock_count;
+ return DRD_(s_mutex_lock_count);
}
-ULong get_mutex_segment_creation_count(void)
+ULong DRD_(get_mutex_segment_creation_count)(void)
{
- return s_mutex_segment_creation_count;
+ return DRD_(s_mutex_segment_creation_count);
}
Modified: trunk/drd/drd_mutex.h
===================================================================
--- trunk/drd/drd_mutex.h 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_mutex.h 2009-02-15 14:18:02 UTC (rev 9170)
@@ -35,26 +35,25 @@
struct mutex_info;
-void mutex_set_trace(const Bool trace_mutex);
-void mutex_set_lock_threshold(const UInt lock_threshold_ms);
-struct mutex_info* mutex_init(const Addr mutex,
- const MutexT mutex_type);
-void mutex_post_destroy(const Addr mutex);
-void not_a_mutex(const Addr mutex);
-struct mutex_info* mutex_get(const Addr mutex);
-void mutex_pre_lock(const Addr mutex, const MutexT mutex_type,
- const Bool trylock);
-void mutex_post_lock(const Addr mutex, const Bool took_lock,
- const Bool post_cond_wait);
-void mutex_unlock(const Addr mutex, const MutexT mutex_type);
+void DRD_(mutex_set_trace)(const Bool trace_mutex);
+void DRD_(mutex_set_lock_threshold)(const UInt lock_threshold_ms);
+struct mutex_info* DRD_(mutex_init)(const Addr mutex, const MutexT mutex_type);
+void DRD_(mutex_post_destroy)(const Addr mutex);
+void DRD_(not_a_mutex)(const Addr mutex);
+struct mutex_info* DRD_(mutex_get)(const Addr mutex);
+void DRD_(mutex_pre_lock)(const Addr mutex, const MutexT mutex_type,
+ const Bool trylock);
+void DRD_(mutex_post_lock)(const Addr mutex, const Bool took_lock,
+ const Bool post_cond_wait);
+void DRD_(mutex_unlock)(const Addr mutex, const MutexT mutex_type);
void DRD_(spinlock_init_or_unlock)(const Addr spinlock);
-const char* mutex_get_typename(struct mutex_info* const p);
-const char* mutex_type_name(const MutexT mt);
-Bool mutex_is_locked_by(const Addr mutex, const DrdThreadId tid);
-int mutex_get_recursion_count(const Addr mutex);
-void mutex_thread_delete(const DrdThreadId tid);
-ULong get_mutex_lock_count(void);
-ULong get_mutex_segment_creation_count(void);
+const char* DRD_(mutex_get_typename)(struct mutex_info* const p);
+const char* DRD_(mutex_type_name)(const MutexT mt);
+Bool DRD_(mutex_is_locked_by)(const Addr mutex, const DrdThreadId tid);
+int DRD_(mutex_get_recursion_count)(const Addr mutex);
+void DRD_(mutex_thread_delete)(const DrdThreadId tid);
+ULong DRD_(get_mutex_lock_count)(void);
+ULong DRD_(get_mutex_segment_creation_count)(void);
#endif /* __DRD_MUTEX_H */
Modified: trunk/drd/drd_rwlock.c
===================================================================
--- trunk/drd/drd_rwlock.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_rwlock.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -36,7 +36,7 @@
#include "pub_tool_threadstate.h" // VG_(get_running_tid)()
-// Type definitions.
+/* Local type definitions. */
struct rwlock_thread_info
{
@@ -48,38 +48,38 @@
};
-// Local functions.
+/* Local functions. */
-static void rwlock_cleanup(struct rwlock_info* p);
-static ULong s_rwlock_segment_creation_count;
+static void DRD_(rwlock_cleanup)(struct rwlock_info* p);
-// Local variables.
+/* Local variables. */
-static Bool s_trace_rwlock;
-static UInt s_exclusive_threshold_ms;
-static UInt s_shared_threshold_ms;
+static Bool DRD_(s_trace_rwlock);
+static UInt DRD_(s_exclusive_threshold_ms);
+static UInt DRD_(s_shared_threshold_ms);
+static ULong DRD_(s_rwlock_segment_creation_count);
-// Function definitions.
+/* Function definitions. */
-void rwlock_set_trace(const Bool trace_rwlock)
+void DRD_(rwlock_set_trace)(const Bool trace_rwlock)
{
- tl_assert(!! trace_rwlock == trace_rwlock);
- s_trace_rwlock = trace_rwlock;
+ tl_assert(trace_rwlock == False || trace_rwlock == True);
+ DRD_(s_trace_rwlock) = trace_rwlock;
}
-void rwlock_set_exclusive_threshold(const UInt exclusive_threshold_ms)
+void DRD_(rwlock_set_exclusive_threshold)(const UInt exclusive_threshold_ms)
{
- s_exclusive_threshold_ms = exclusive_threshold_ms;
+ DRD_(s_exclusive_threshold_ms) = exclusive_threshold_ms;
}
-void rwlock_set_shared_threshold(const UInt shared_threshold_ms)
+void DRD_(rwlock_set_shared_threshold)(const UInt shared_threshold_ms)
{
- s_shared_threshold_ms = shared_threshold_ms;
+ DRD_(s_shared_threshold_ms) = shared_threshold_ms;
}
-static Bool rwlock_is_rdlocked(struct rwlock_info* p)
+static Bool DRD_(rwlock_is_rdlocked)(struct rwlock_info* p)
{
struct rwlock_thread_info* q;
@@ -91,7 +91,7 @@
return False;
}
-static Bool rwlock_is_wrlocked(struct rwlock_info* p)
+static Bool DRD_(rwlock_is_wrlocked)(struct rwlock_info* p)
{
struct rwlock_thread_info* q;
@@ -103,12 +103,13 @@
return False;
}
-static Bool rwlock_is_locked(struct rwlock_info* p)
+static Bool DRD_(rwlock_is_locked)(struct rwlock_info* p)
{
- return rwlock_is_rdlocked(p) || rwlock_is_wrlocked(p);
+ return DRD_(rwlock_is_rdlocked)(p) || DRD_(rwlock_is_wrlocked)(p);
}
-static Bool rwlock_is_rdlocked_by(struct rwlock_info* p, const DrdThreadId tid)
+static Bool DRD_(rwlock_is_rdlocked_by)(struct rwlock_info* p,
+ const DrdThreadId tid)
{
const UWord uword_tid = tid;
struct rwlock_thread_info* q;
@@ -117,7 +118,8 @@
return q && q->reader_nesting_count > 0;
}
-static Bool rwlock_is_wrlocked_by(struct rwlock_info* p, const DrdThreadId tid)
+static Bool DRD_(rwlock_is_wrlocked_by)(struct rwlock_info* p,
+ const DrdThreadId tid)
{
const UWord uword_tid = tid;
struct rwlock_thread_info* q;
@@ -126,14 +128,17 @@
return q && q->writer_nesting_count > 0;
}
-static Bool rwlock_is_locked_by(struct rwlock_info* p, const DrdThreadId tid)
+static Bool DRD_(rwlock_is_locked_by)(struct rwlock_info* p,
+ const DrdThreadId tid)
{
- return rwlock_is_rdlocked_by(p, tid) || rwlock_is_wrlocked_by(p, tid);
+ return (DRD_(rwlock_is_rdlocked_by)(p, tid)
+ || DRD_(rwlock_is_wrlocked_by)(p, tid));
}
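[Editorial note: the predicates above decide lock state from per-thread reader/writer nesting counts stored in rwlock_thread_info. A reduced sketch of the same logic, with only the two counters kept and the OSet lookup omitted:]

```c
#include <assert.h>
#include <stddef.h>

/* Trimmed-down rwlock_thread_info: one record per (rwlock, thread) pair. */
struct rwlock_thread_info {
    unsigned reader_nesting_count;   /* > 0 while read-locked by this thread  */
    unsigned writer_nesting_count;   /* > 0 while write-locked by this thread */
};

static int is_rdlocked_by(const struct rwlock_thread_info *q)
{
    return q && q->reader_nesting_count > 0;
}

static int is_wrlocked_by(const struct rwlock_thread_info *q)
{
    return q && q->writer_nesting_count > 0;
}

/* Locked in either mode by this thread, as in DRD_(rwlock_is_locked_by). */
static int is_locked_by(const struct rwlock_thread_info *q)
{
    return is_rdlocked_by(q) || is_wrlocked_by(q);
}
```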
/** Either look up or insert a node corresponding to DRD thread id 'tid'. */
static
-struct rwlock_thread_info* lookup_or_insert_node(OSet* oset, const UWord tid)
+struct rwlock_thread_info*
+DRD_(lookup_or_insert_node)(OSet* oset, const UWord tid)
{
struct rwlock_thread_info* q;
@@ -152,12 +157,13 @@
return q;
}
-/** Combine the vector clock corresponding to the last unlock operation of
- * reader-writer lock p into the vector clock of thread 'tid'.
+/**
+ * Combine the vector clock corresponding to the last unlock operation of
+ * reader-writer lock p into the vector clock of thread 'tid'.
*/
-static void rwlock_combine_other_vc(struct rwlock_info* const p,
- const DrdThreadId tid,
- const Bool readers_too)
+static void DRD_(rwlock_combine_other_vc)(struct rwlock_info* const p,
+ const DrdThreadId tid,
+ const Bool readers_too)
{
struct rwlock_thread_info* q;
@@ -173,13 +179,13 @@
/** Initialize the rwlock_info data structure *p. */
static
-void rwlock_initialize(struct rwlock_info* const p, const Addr rwlock)
+void DRD_(rwlock_initialize)(struct rwlock_info* const p, const Addr rwlock)
{
tl_assert(rwlock != 0);
tl_assert(p->a1 == rwlock);
tl_assert(p->type == ClientRwlock);
- p->cleanup = (void(*)(DrdClientobj*))&rwlock_cleanup;
+ p->cleanup = (void(*)(DrdClientobj*))&(DRD_(rwlock_cleanup));
p->thread_info = VG_(OSetGen_Create)(
0, 0, VG_(malloc), "drd.rwlock.ri.1", VG_(free));
p->acquiry_time_ms = 0;
@@ -187,13 +193,13 @@
}
/** Deallocate the memory that was allocated by rwlock_initialize(). */
-static void rwlock_cleanup(struct rwlock_info* p)
+static void DRD_(rwlock_cleanup)(struct rwlock_info* p)
{
struct rwlock_thread_info* q;
tl_assert(p);
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] rwlock_destroy 0x%lx",
@@ -202,7 +208,7 @@
p->a1);
}
- if (rwlock_is_locked(p))
+ if (DRD_(rwlock_is_locked)(p))
{
RwlockErrInfo REI = { p->a1 };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -222,7 +228,7 @@
static
struct rwlock_info*
-rwlock_get_or_allocate(const Addr rwlock)
+DRD_(rwlock_get_or_allocate)(const Addr rwlock)
{
struct rwlock_info* p;
@@ -245,22 +251,22 @@
}
p = &(DRD_(clientobj_add)(rwlock, ClientRwlock)->rwlock);
- rwlock_initialize(p, rwlock);
+ DRD_(rwlock_initialize)(p, rwlock);
return p;
}
-static struct rwlock_info* rwlock_get(const Addr rwlock)
+static struct rwlock_info* DRD_(rwlock_get)(const Addr rwlock)
{
tl_assert(offsetof(DrdClientobj, rwlock) == 0);
return &(DRD_(clientobj_get)(rwlock, ClientRwlock)->rwlock);
}
/** Called before pthread_rwlock_init(). */
-struct rwlock_info* rwlock_pre_init(const Addr rwlock)
+struct rwlock_info* DRD_(rwlock_pre_init)(const Addr rwlock)
{
struct rwlock_info* p;
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] rwlock_init 0x%lx",
@@ -269,7 +275,7 @@
rwlock);
}
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
if (p)
{
@@ -284,17 +290,17 @@
return p;
}
- p = rwlock_get_or_allocate(rwlock);
+ p = DRD_(rwlock_get_or_allocate)(rwlock);
return p;
}
/** Called after pthread_rwlock_destroy(). */
-void rwlock_post_destroy(const Addr rwlock)
+void DRD_(rwlock_post_destroy)(const Addr rwlock)
{
struct rwlock_info* p;
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
if (p == 0)
{
GenericErrInfo GEI;
@@ -309,16 +315,17 @@
DRD_(clientobj_remove)(rwlock, ClientRwlock);
}
-/** Called before pthread_rwlock_rdlock() is invoked. If a data structure for
- * the client-side object was not yet created, do this now. Also check whether
- * an attempt is made to lock recursively a synchronization object that must
- * not be locked recursively.
+/**
+ * Called before pthread_rwlock_rdlock() is invoked. If a data structure for
+ * the client-side object was not yet created, do this now. Also check whether
+ * an attempt is made to lock recursively a synchronization object that must
+ * not be locked recursively.
*/
-void rwlock_pre_rdlock(const Addr rwlock)
+void DRD_(rwlock_pre_rdlock)(const Addr rwlock)
{
struct rwlock_info* p;
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] pre_rwlock_rdlock 0x%lx",
@@ -327,10 +334,10 @@
rwlock);
}
- p = rwlock_get_or_allocate(rwlock);
+ p = DRD_(rwlock_get_or_allocate)(rwlock);
tl_assert(p);
- if (rwlock_is_wrlocked_by(p, DRD_(thread_get_running_tid)()))
+ if (DRD_(rwlock_is_wrlocked_by)(p, DRD_(thread_get_running_tid)()))
{
VG_(message)(Vg_UserMsg,
"reader-writer lock 0x%lx is already locked for"
@@ -339,17 +346,18 @@
}
}
-/** Update rwlock_info state when locking the pthread_rwlock_t mutex.
- * Note: this function must be called after pthread_rwlock_rdlock() has been
- * called, or a race condition is triggered !
+/**
+ * Update rwlock_info state when locking the pthread_rwlock_t mutex.
+ * Note: this function must be called after pthread_rwlock_rdlock() has been
+ * called, or a race condition is triggered !
*/
-void rwlock_post_rdlock(const Addr rwlock, const Bool took_lock)
+void DRD_(rwlock_post_rdlock)(const Addr rwlock, const Bool took_lock)
{
const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] post_rwlock_rdlock 0x%lx",
@@ -358,38 +366,39 @@
rwlock);
}
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
if (! p || ! took_lock)
return;
- tl_assert(! rwlock_is_wrlocked(p));
+ tl_assert(! DRD_(rwlock_is_wrlocked)(p));
- q = lookup_or_insert_node(p->thread_info, drd_tid);
+ q = DRD_(lookup_or_insert_node)(p->thread_info, drd_tid);
if (++q->reader_nesting_count == 1)
{
- rwlock_combine_other_vc(p, drd_tid, False);
+ DRD_(rwlock_combine_other_vc)(p, drd_tid, False);
q->last_lock_was_writer_lock = False;
DRD_(thread_new_segment)(drd_tid);
- s_rwlock_segment_creation_count++;
+ DRD_(s_rwlock_segment_creation_count)++;
p->acquiry_time_ms = VG_(read_millisecond_timer)();
p->acquired_at = VG_(record_ExeContext)(VG_(get_running_tid)(), 0);
}
}
-/** Called before pthread_rwlock_wrlock() is invoked. If a data structure for
- * the client-side object was not yet created, do this now. Also check whether
- * an attempt is made to lock recursively a synchronization object that must
- * not be locked recursively.
+/**
+ * Called before pthread_rwlock_wrlock() is invoked. If a data structure for
+ * the client-side object was not yet created, do this now. Also check whether
+ * an attempt is made to lock recursively a synchronization object that must
+ * not be locked recursively.
*/
-void rwlock_pre_wrlock(const Addr rwlock)
+void DRD_(rwlock_pre_wrlock)(const Addr rwlock)
{
struct rwlock_info* p;
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] pre_rwlock_wrlock 0x%lx",
@@ -400,12 +409,12 @@
if (p == 0)
{
- p = rwlock_get_or_allocate(rwlock);
+ p = DRD_(rwlock_get_or_allocate)(rwlock);
}
tl_assert(p);
- if (rwlock_is_wrlocked_by(p, DRD_(thread_get_running_tid)()))
+ if (DRD_(rwlock_is_wrlocked_by)(p, DRD_(thread_get_running_tid)()))
{
RwlockErrInfo REI = { p->a1 };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -421,15 +430,15 @@
* Note: this function must be called after pthread_rwlock_wrlock() has
* finished, or a race condition is triggered !
*/
-void rwlock_post_wrlock(const Addr rwlock, const Bool took_lock)
+void DRD_(rwlock_post_wrlock)(const Addr rwlock, const Bool took_lock)
{
const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] post_rwlock_wrlock 0x%lx",
@@ -441,14 +450,15 @@
if (! p || ! took_lock)
return;
- q = lookup_or_insert_node(p->thread_info, DRD_(thread_get_running_tid)());
+ q = DRD_(lookup_or_insert_node)(p->thread_info,
+ DRD_(thread_get_running_tid)());
tl_assert(q->writer_nesting_count == 0);
q->writer_nesting_count++;
q->last_lock_was_writer_lock = True;
tl_assert(q->writer_nesting_count == 1);
- rwlock_combine_other_vc(p, drd_tid, True);
+ DRD_(rwlock_combine_other_vc)(p, drd_tid, True);
DRD_(thread_new_segment)(drd_tid);
- s_rwlock_segment_creation_count++;
+ DRD_(s_rwlock_segment_creation_count)++;
p->acquiry_time_ms = VG_(read_millisecond_timer)();
p->acquired_at = VG_(record_ExeContext)(VG_(get_running_tid)(), 0);
}
@@ -462,14 +472,14 @@
* @param tid ThreadId of the thread calling pthread_rwlock_unlock().
* @param vc Pointer to the current vector clock of thread tid.
*/
-void rwlock_pre_unlock(const Addr rwlock)
+void DRD_(rwlock_pre_unlock)(const Addr rwlock)
{
const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
const ThreadId vg_tid = VG_(get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
- if (s_trace_rwlock)
+ if (DRD_(s_trace_rwlock))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] rwlock_unlock 0x%lx",
@@ -478,7 +488,7 @@
rwlock);
}
- p = rwlock_get(rwlock);
+ p = DRD_(rwlock_get)(rwlock);
if (p == 0)
{
GenericErrInfo GEI;
@@ -489,7 +499,7 @@
&GEI);
return;
}
- if (! rwlock_is_locked_by(p, drd_tid))
+ if (! DRD_(rwlock_is_locked_by)(p, drd_tid))
{
RwlockErrInfo REI = { p->a1 };
VG_(maybe_record_error)(vg_tid,
@@ -499,18 +509,18 @@
&REI);
return;
}
- q = lookup_or_insert_node(p->thread_info, drd_tid);
+ q = DRD_(lookup_or_insert_node)(p->thread_info, drd_tid);
tl_assert(q);
if (q->reader_nesting_count > 0)
{
q->reader_nesting_count--;
- if (q->reader_nesting_count == 0 && s_shared_threshold_ms > 0)
+ if (q->reader_nesting_count == 0 && DRD_(s_shared_threshold_ms) > 0)
{
ULong held = VG_(read_millisecond_timer)() - p->acquiry_time_ms;
- if (held > s_shared_threshold_ms)
+ if (held > DRD_(s_shared_threshold_ms))
{
HoldtimeErrInfo HEI
- = { rwlock, p->acquired_at, held, s_shared_threshold_ms };
+ = { rwlock, p->acquired_at, held, DRD_(s_shared_threshold_ms) };
VG_(maybe_record_error)(vg_tid,
HoldtimeErr,
VG_(get_IP)(vg_tid),
@@ -522,13 +532,13 @@
else if (q->writer_nesting_count > 0)
{
q->writer_nesting_count--;
- if (q->writer_nesting_count == 0 && s_exclusive_threshold_ms > 0)
+ if (q->writer_nesting_count == 0 && DRD_(s_exclusive_threshold_ms) > 0)
{
ULong held = VG_(read_millisecond_timer)() - p->acquiry_time_ms;
- if (held > s_exclusive_threshold_ms)
+ if (held > DRD_(s_exclusive_threshold_ms))
{
HoldtimeErrInfo HEI
- = { rwlock, p->acquired_at, held, s_exclusive_threshold_ms };
+ = { rwlock, p->acquired_at, held, DRD_(s_exclusive_threshold_ms) };
VG_(maybe_record_error)(vg_tid,
HoldtimeErr,
VG_(get_IP)(vg_tid),
@@ -550,7 +560,7 @@
DRD_(thread_get_latest_segment)(&q->last_unlock_segment, drd_tid);
DRD_(thread_new_segment)(drd_tid);
- s_rwlock_segment_creation_count++;
+ DRD_(s_rwlock_segment_creation_count)++;
}
}
@@ -558,7 +568,7 @@
* Call this function when thread tid stops to exist, such that the
* "last owner" field can be cleared if it still refers to that thread.
*/
-void rwlock_thread_delete(const DrdThreadId tid)
+void DRD_(rwlock_thread_delete)(const DrdThreadId tid)
{
struct rwlock_info* p;
@@ -566,7 +576,7 @@
for ( ; (p = &(DRD_(clientobj_next)(ClientRwlock)->rwlock)) != 0; )
{
struct rwlock_thread_info* q;
- if (rwlock_is_locked_by(p, tid))
+ if (DRD_(rwlock_is_locked_by)(p, tid))
{
RwlockErrInfo REI = { p->a1 };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -574,14 +584,14 @@
VG_(get_IP)(VG_(get_running_tid)()),
"Reader-writer lock still locked at thread exit",
&REI);
- q = lookup_or_insert_node(p->thread_info, tid);
+ q = DRD_(lookup_or_insert_node)(p->thread_info, tid);
q->reader_nesting_count = 0;
q->writer_nesting_count = 0;
}
}
}
-ULong get_rwlock_segment_creation_count(void)
+ULong DRD_(get_rwlock_segment_creation_count)(void)
{
- return s_rwlock_segment_creation_count;
+ return DRD_(s_rwlock_segment_creation_count);
}
Modified: trunk/drd/drd_rwlock.h
===================================================================
--- trunk/drd/drd_rwlock.h 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_rwlock.h 2009-02-15 14:18:02 UTC (rev 9170)
@@ -38,18 +38,18 @@
struct rwlock_info;
-void rwlock_set_trace(const Bool trace_rwlock);
-void rwlock_set_exclusive_threshold(const UInt exclusive_threshold_ms);
-void rwlock_set_shared_threshold(const UInt shared_threshold_ms);
-struct rwlock_info* rwlock_pre_init(const Addr rwlock);
-void rwlock_post_destroy(const Addr rwlock);
-void rwlock_pre_rdlock(const Addr rwlock);
-void rwlock_post_rdlock(const Addr rwlock, const Bool took_lock);
-void rwlock_pre_wrlock(const Addr rwlock);
-void rwlock_post_wrlock(const Addr rwlock, const Bool took_lock);
-void rwlock_pre_unlock(const Addr rwlock);
-void rwlock_thread_delete(const DrdThreadId tid);
-ULong get_rwlock_segment_creation_count(void);
+void DRD_(rwlock_set_trace)(const Bool trace_rwlock);
+void DRD_(rwlock_set_exclusive_threshold)(const UInt exclusive_threshold_ms);
+void DRD_(rwlock_set_shared_threshold)(const UInt shared_threshold_ms);
+struct rwlock_info* DRD_(rwlock_pre_init)(const Addr rwlock);
+void DRD_(rwlock_post_destroy)(const Addr rwlock);
+void DRD_(rwlock_pre_rdlock)(const Addr rwlock);
+void DRD_(rwlock_post_rdlock)(const Addr rwlock, const Bool took_lock);
+void DRD_(rwlock_pre_wrlock)(const Addr rwlock);
+void DRD_(rwlock_post_wrlock)(const Addr rwlock, const Bool took_lock);
+void DRD_(rwlock_pre_unlock)(const Addr rwlock);
+void DRD_(rwlock_thread_delete)(const DrdThreadId tid);
+ULong DRD_(get_rwlock_segment_creation_count)(void);
#endif /* __DRD_RWLOCK_H */
Modified: trunk/drd/drd_semaphore.c
===================================================================
--- trunk/drd/drd_semaphore.c 2009-02-15 13:16:52 UTC (rev 9169)
+++ trunk/drd/drd_semaphore.c 2009-02-15 14:18:02 UTC (rev 9170)
@@ -35,20 +35,21 @@
#include "pub_tool_threadstate.h" // VG_(get_running_tid)()
-// Local functions.
+/* Local functions. */
-static void semaphore_cleanup(struct semaphore_info* p);
+static void DRD_(semaphore_cleanup)(struct semaphore_info* p);
-// Local variables.
+/* Local variables. */
-static Bool s_trace_semaphore;
-static ULong s_semaphore_segment_creation_count;
+static Bool DRD_(s_trace_semaphore);
+static ULong DRD_(s_semaphore_segment_creation_count);
-// Function definitions.
+/* Function definitions. */
-static void segment_push(struct semaphore_info* p, Segment* sg)
+/** Push a segment at the end of the queue 'p->last_sem_post_seg'. */
+static void DRD_(segment_push)(struct semaphore_info* p, Segment* sg)
{
Word n;
@@ -61,7 +62,8 @@
tl_assert(*(Segment**)VG_(indexXA)(p->last_sem_post_seg, n) == sg);
}
-static Segment* segment_pop(struct semaphore_info* p)
+/** Pop a segment from the beginning of the queue 'p->last_sem_post_seg'. */
+static Segment* DRD_(segment_pop)(struct semaphore_info* p)
{
Word sz;
Segment* sg;
@@ -81,19 +83,25 @@
return sg;
}
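[Editorial note: segment_push()/segment_pop() above keep a FIFO of segments per semaphore, one entry per sem_post() not yet matched by a sem_wait(). A fixed-size array stands in here for the XArray (VG_(addToXA)/VG_(removeIndexXA)); the capacity and names are illustrative only.]

```c
#include <assert.h>
#include <string.h>

#define MAXQ 16

/* Minimal FIFO in the spirit of segment_push()/segment_pop():
 * push at the tail, pop from the head (index 0). */
struct seg_queue {
    void *elems[MAXQ];
    int   n;
};

static void seg_push(struct seg_queue *q, void *sg)
{
    assert(q->n < MAXQ);
    q->elems[q->n++] = sg;               /* append, like VG_(addToXA)() */
}

static void *seg_pop(struct seg_queue *q)
{
    void *sg;
    if (q->n == 0)
        return NULL;                     /* no unmatched sem_post() */
    sg = q->elems[0];                    /* oldest post first */
    memmove(&q->elems[0], &q->elems[1],
            (size_t)(q->n - 1) * sizeof(q->elems[0]));
    q->n--;
    return sg;
}
```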
-void semaphore_set_trace(const Bool trace_semaphore)
+/** Enable or disable tracing of semaphore actions. */
+void DRD_(semaphore_set_trace)(const Bool trace_semaphore)
{
- s_trace_semaphore = trace_semaphore;
+ DRD_(s_trace_semaphore) = trace_semaphore;
}
+/**
+ * Initialize the memory 'p' points at as a semaphore_info structure for the
+ * client semaphore at client address 'semaphore'.
+ */
static
-void semaphore_initialize(struct semaphore_info* const p, const Addr semaphore)
+void DRD_(semaphore_initialize)(struct semaphore_info* const p,
+ const Addr semaphore)
{
tl_assert(semaphore != 0);
tl_assert(p->a1 == semaphore);
tl_assert(p->type == ClientSemaphore);
- p->cleanup = (void(*)(DrdClientobj*))semaphore_cleanup;
+ p->cleanup = (void(*)(DrdClientobj*))(DRD_(semaphore_cleanup));
p->waits_to_skip = 0;
p->value = 0;
p->waiters = 0;
@@ -106,7 +114,7 @@
* Free the memory that was allocated by semaphore_initialize(). Called by
* DRD_(clientobj_remove)().
*/
-static void semaphore_cleanup(struct semaphore_info* p)
+static void DRD_(semaphore_cleanup)(struct semaphore_info* p)
{
Segment* sg;
@@ -120,14 +128,19 @@
" upon",
&sei);
}
- while ((sg = segment_pop(p)))
+ while ((sg = DRD_(segment_pop)(p)))
DRD_(sg_put)(sg);
VG_(deleteXA)(p->last_sem_post_seg);
}
+/**
+ * Return a pointer to the structure with information about the specified
+ * client semaphore. Allocate a new structure if such a structure did not
+ * yet exist.
+ */
static
struct semaphore_info*
-semaphore_get_or_allocate(const Addr semaphore)
+DRD_(semaphore_get_or_allocate)(const Addr semaphore)
{
struct semaphore_info *p;
@@ -137,25 +150,30 @@
{
tl_assert(offsetof(DrdClientobj, semaphore) == 0);
p = &(DRD_(clientobj_add)(semaphore, ClientSemaphore)->semaphore);
- semaphore_initialize(p, semaphore);
+ DRD_(semaphore_initialize)(p, semaphore);
}
return p;
}
-static struct semaphore_info* semaphore_get(const Addr semaphore)
+/**
+ * Return a pointe...
[truncated message content]
From: <sv...@va...> - 2009-02-15 13:16:58
Author: bart
Date: 2009-02-15 13:16:52 +0000 (Sun, 15 Feb 2009)
New Revision: 9169
Log:
Changed a global variable into a local variable.
Modified:
trunk/drd/drd_cond.c
trunk/drd/drd_cond.h
trunk/drd/drd_main.c
Modified: trunk/drd/drd_cond.c
===================================================================
--- trunk/drd/drd_cond.c 2009-02-15 13:11:14 UTC (rev 9168)
+++ trunk/drd/drd_cond.c 2009-02-15 13:16:52 UTC (rev 9169)
@@ -41,18 +41,19 @@
static void cond_cleanup(struct cond_info* p);
-/* Global variables. */
-
-Bool s_drd_report_signal_unlocked = True;
-
-
/* Local variables. */
+static Bool s_drd_report_signal_unlocked = True;
static Bool s_trace_cond;
/* Function definitions. */
+void cond_set_report_signal_unlocked(const Bool r)
+{
+ s_drd_report_signal_unlocked = r;
+}
+
void cond_set_trace(const Bool trace_cond)
{
s_trace_cond = trace_cond;
Modified: trunk/drd/drd_cond.h
===================================================================
--- trunk/drd/drd_cond.h 2009-02-15 13:11:14 UTC (rev 9168)
+++ trunk/drd/drd_cond.h 2009-02-15 13:16:52 UTC (rev 9169)
@@ -36,13 +36,9 @@
struct cond_info;
-/* Global variables. */
-
-extern Bool s_drd_report_signal_unlocked;
-
-
/* Function declarations. */
+void cond_set_report_signal_unlocked(const Bool r);
void cond_set_trace(const Bool trace_cond);
void cond_pre_init(const Addr cond);
void cond_post_destroy(const Addr cond);
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 13:11:14 UTC (rev 9168)
+++ trunk/drd/drd_main.c 2009-02-15 13:16:52 UTC (rev 9169)
@@ -66,6 +66,7 @@
{
int check_stack_accesses = -1;
int exclusive_threshold_ms = -1;
+ int report_signal_unlocked = -1;
int segment_merging = -1;
int shared_threshold_ms = -1;
int show_confl_seg = -1;
@@ -84,7 +85,7 @@
VG_BOOL_CLO (arg, "--check-stack-var", check_stack_accesses)
else VG_BOOL_CLO(arg, "--drd-stats", DRD_(s_print_stats))
- else VG_BOOL_CLO(arg,"--report-signal-unlocked",s_drd_report_signal_unlocked)
+ else VG_BOOL_CLO(arg,"--report-signal-unlocked",report_signal_unlocked)
else VG_BOOL_CLO(arg, "--segment-merging", segment_merging)
else VG_BOOL_CLO(arg, "--show-confl-seg", show_confl_seg)
else VG_BOOL_CLO(arg, "--show-stack-usage", DRD_(s_show_stack_usage))
@@ -113,6 +114,10 @@
mutex_set_lock_threshold(exclusive_threshold_ms);
rwlock_set_exclusive_threshold(exclusive_threshold_ms);
}
+ if (report_signal_unlocked != -1)
+ {
+ cond_set_report_signal_unlocked(report_signal_unlocked);
+ }
if (shared_threshold_ms != -1)
{
rwlock_set_shared_threshold(shared_threshold_ms);
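[Editorial note: r9169 above demotes the global s_drd_report_signal_unlocked to a file-static in drd_cond.c and routes writes through cond_set_report_signal_unlocked(). A sketch of that encapsulation pattern; the getter shown here is hypothetical and does not exist in the commit.]

```c
#include <assert.h>

/* File-scope state, no longer visible to other translation units
 * (the pattern introduced by r9169; Bool approximated by int). */
static int s_report_signal_unlocked = 1;   /* defaults to True */

/* The only way other files can change the setting. */
void cond_set_report_signal_unlocked(const int r)
{
    s_report_signal_unlocked = r;
}

/* Hypothetical accessor, added here only so the sketch is testable. */
int cond_get_report_signal_unlocked(void)
{
    return s_report_signal_unlocked;
}
```

drd_main.c mirrors this by parsing --report-signal-unlocked into a local and calling the setter only when the option was given.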

From: <sv...@va...> - 2009-02-15 13:11:21
Author: bart
Date: 2009-02-15 13:11:14 +0000 (Sun, 15 Feb 2009)
New Revision: 9168
Log:
Wrapped DRD_() macro around thread-related function names.
Modified:
trunk/drd/drd_barrier.c
trunk/drd/drd_clientreq.c
trunk/drd/drd_cond.c
trunk/drd/drd_error.c
trunk/drd/drd_load_store.c
trunk/drd/drd_main.c
trunk/drd/drd_mutex.c
trunk/drd/drd_rwlock.c
trunk/drd/drd_segment.c
trunk/drd/drd_semaphore.c
trunk/drd/drd_thread.c
trunk/drd/drd_thread.h
trunk/drd/drd_thread_bitmap.h
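[Editorial note: the DRD_() wrapper applied throughout r9168 is a token-pasting macro that prefixes identifiers so every DRD symbol lands in one namespace. A sketch assuming a `vgDrd_` prefix; the real definition lives in the drd headers and the exact prefix may differ.]

```c
#include <assert.h>

/* Hypothetical namespacing macro in the style of DRD_(). */
#define DRD_(name) vgDrd_##name

/* DRD_(thread_get_running_tid) expands to vgDrd_thread_get_running_tid,
 * so declarations and call sites can both use the wrapped spelling. */
static int DRD_(thread_get_running_tid)(void)
{
    return 1;
}
```

After preprocessing, `DRD_(thread_get_running_tid)()` and `vgDrd_thread_get_running_tid()` name the same function, which is why the commit can rename call sites file by file.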
Modified: trunk/drd/drd_barrier.c
===================================================================
--- trunk/drd/drd_barrier.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_barrier.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -218,7 +218,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_reinit %s 0x%lx count %ld -> %ld",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
DRD_(barrier_get_typename)(p),
barrier,
p->count,
@@ -229,7 +229,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_init %s 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
DRD_(barrier_get_typename)(p),
barrier);
}
@@ -263,7 +263,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_destroy %s 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
DRD_(barrier_get_typename)(p),
barrier);
}
@@ -318,7 +318,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_pre_wait %s 0x%lx iteration %ld",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
DRD_(barrier_get_typename)(p),
barrier,
p->pre_iteration);
@@ -332,7 +332,7 @@
VG_(OSetGen_Insert)(p->oset, q);
tl_assert(VG_(OSetGen_Lookup)(p->oset, &word_tid) == q);
}
- thread_get_latest_segment(&q->sg[p->pre_iteration], tid);
+ DRD_(thread_get_latest_segment)(&q->sg[p->pre_iteration], tid);
if (--p->pre_waiters_left <= 0)
{
@@ -396,11 +396,11 @@
if (r != q)
{
tl_assert(r->sg[p->post_iteration]);
- thread_combine_vc2(tid, &r->sg[p->post_iteration]->vc);
+ DRD_(thread_combine_vc2)(tid, &r->sg[p->post_iteration]->vc);
}
}
- thread_new_segment(tid);
+ DRD_(thread_new_segment)(tid);
DRD_(s_barrier_segment_creation_count)++;
if (--p->post_waiters_left <= 0)
Modified: trunk/drd/drd_clientreq.c
===================================================================
--- trunk/drd/drd_clientreq.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_clientreq.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -66,10 +66,10 @@
Bool DRD_(handle_client_request)(ThreadId vg_tid, UWord* arg, UWord* ret)
{
UWord result = 0;
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
tl_assert(vg_tid == VG_(get_running_tid()));
- tl_assert(VgThreadIdToDrdThreadId(vg_tid) == drd_tid);
+ tl_assert(DRD_(VgThreadIdToDrdThreadId)(vg_tid) == drd_tid);
switch (arg[0])
{
@@ -109,14 +109,14 @@
i, sps[i], fps[i], desc);
}
#endif
- thread_set_stack_startup(drd_tid, VG_(get_SP)(vg_tid));
+ DRD_(thread_set_stack_startup)(drd_tid, VG_(get_SP)(vg_tid));
DRD_(start_suppression)(topmost_sp, VG_(thread_get_stack_max)(vg_tid),
"stack top");
break;
}
case VG_USERREQ__DRD_START_NEW_SEGMENT:
- thread_new_segment(PtThreadIdToDrdThreadId(arg[1]));
+ DRD_(thread_new_segment)(DRD_(PtThreadIdToDrdThreadId)(arg[1]));
break;
case VG_USERREQ__DRD_START_TRACE_ADDR:
@@ -128,31 +128,32 @@
break;
case VG_USERREQ__DRD_STOP_RECORDING:
- thread_stop_recording(drd_tid);
+ DRD_(thread_stop_recording)(drd_tid);
break;
case VG_USERREQ__DRD_START_RECORDING:
- thread_start_recording(drd_tid);
+ DRD_(thread_start_recording)(drd_tid);
break;
case VG_USERREQ__SET_PTHREADID:
// pthread_self() returns 0 for programs not linked with libpthread.so.
if (arg[1] != INVALID_POSIX_THREADID)
- thread_set_pthreadid(drd_tid, arg[1]);
+ DRD_(thread_set_pthreadid)(drd_tid, arg[1]);
break;
case VG_USERREQ__SET_JOINABLE:
- thread_set_joinable(PtThreadIdToDrdThreadId(arg[1]), (Bool)arg[2]);
+ DRD_(thread_set_joinable)(DRD_(PtThreadIdToDrdThreadId)(arg[1]),
+ (Bool)arg[2]);
break;
case VG_USERREQ__POST_THREAD_JOIN:
tl_assert(arg[1]);
- DRD_(thread_post_join)(drd_tid, PtThreadIdToDrdThreadId(arg[1]));
+ DRD_(thread_post_join)(drd_tid, DRD_(PtThreadIdToDrdThreadId)(arg[1]));
break;
case VG_USERREQ__PRE_THREAD_CANCEL:
tl_assert(arg[1]);
- thread_pre_cancel(drd_tid);
+ DRD_(thread_pre_cancel)(drd_tid);
break;
case VG_USERREQ__POST_THREAD_CANCEL:
@@ -160,71 +161,71 @@
break;
case VG_USERREQ__PRE_MUTEX_INIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
mutex_init(arg[1], arg[2]);
break;
case VG_USERREQ__POST_MUTEX_INIT:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_MUTEX_DESTROY:
- thread_enter_synchr(drd_tid);
+ DRD_(thread_enter_synchr)(drd_tid);
break;
case VG_USERREQ__POST_MUTEX_DESTROY:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
mutex_post_destroy(arg[1]);
break;
case VG_USERREQ__PRE_MUTEX_LOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
mutex_pre_lock(arg[1], arg[2], arg[3]);
break;
case VG_USERREQ__POST_MUTEX_LOCK:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
mutex_post_lock(arg[1], arg[2], False/*post_cond_wait*/);
break;
case VG_USERREQ__PRE_MUTEX_UNLOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
mutex_unlock(arg[1], arg[2]);
break;
case VG_USERREQ__POST_MUTEX_UNLOCK:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_SPIN_INIT_OR_UNLOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
DRD_(spinlock_init_or_unlock)(arg[1]);
break;
case VG_USERREQ__POST_SPIN_INIT_OR_UNLOCK:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_COND_INIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
cond_pre_init(arg[1]);
break;
case VG_USERREQ__POST_COND_INIT:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_COND_DESTROY:
- thread_enter_synchr(drd_tid);
+ DRD_(thread_enter_synchr)(drd_tid);
break;
case VG_USERREQ__POST_COND_DESTROY:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
cond_post_destroy(arg[1]);
break;
case VG_USERREQ__PRE_COND_WAIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
{
const Addr cond = arg[1];
const Addr mutex = arg[2];
@@ -235,7 +236,7 @@
break;
case VG_USERREQ__POST_COND_WAIT:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
{
const Addr cond = arg[1];
const Addr mutex = arg[2];
@@ -246,86 +247,86 @@
break;
case VG_USERREQ__PRE_COND_SIGNAL:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
cond_pre_signal(arg[1]);
break;
case VG_USERREQ__POST_COND_SIGNAL:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_COND_BROADCAST:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
cond_pre_broadcast(arg[1]);
break;
case VG_USERREQ__POST_COND_BROADCAST:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_SEM_INIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
semaphore_init(arg[1], arg[2], arg[3]);
break;
case VG_USERREQ__POST_SEM_INIT:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_SEM_DESTROY:
- thread_enter_synchr(drd_tid);
+ DRD_(thread_enter_synchr)(drd_tid);
break;
case VG_USERREQ__POST_SEM_DESTROY:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
semaphore_destroy(arg[1]);
break;
case VG_USERREQ__PRE_SEM_WAIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
semaphore_pre_wait(arg[1]);
break;
case VG_USERREQ__POST_SEM_WAIT:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
semaphore_post_wait(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__PRE_SEM_POST:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
semaphore_pre_post(drd_tid, arg[1]);
break;
case VG_USERREQ__POST_SEM_POST:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
semaphore_post_post(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__PRE_BARRIER_INIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
DRD_(barrier_init)(arg[1], arg[2], arg[3], arg[4]);
break;
case VG_USERREQ__POST_BARRIER_INIT:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
case VG_USERREQ__PRE_BARRIER_DESTROY:
- thread_enter_synchr(drd_tid);
+ DRD_(thread_enter_synchr)(drd_tid);
break;
case VG_USERREQ__POST_BARRIER_DESTROY:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
DRD_(barrier_destroy)(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_BARRIER_WAIT:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
DRD_(barrier_pre_wait)(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__POST_BARRIER_WAIT:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
DRD_(barrier_post_wait)(drd_tid, arg[1], arg[2], arg[3]);
break;
@@ -338,32 +339,32 @@
break;
case VG_USERREQ__PRE_RWLOCK_RDLOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
rwlock_pre_rdlock(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_RDLOCK:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
rwlock_post_rdlock(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_RWLOCK_WRLOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
rwlock_pre_wrlock(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_WRLOCK:
- if (thread_leave_synchr(drd_tid) == 0)
+ if (DRD_(thread_leave_synchr)(drd_tid) == 0)
rwlock_post_wrlock(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_RWLOCK_UNLOCK:
- if (thread_enter_synchr(drd_tid) == 0)
+ if (DRD_(thread_enter_synchr)(drd_tid) == 0)
rwlock_pre_unlock(arg[1]);
break;
case VG_USERREQ__POST_RWLOCK_UNLOCK:
- thread_leave_synchr(drd_tid);
+ DRD_(thread_leave_synchr)(drd_tid);
break;
default:
Modified: trunk/drd/drd_cond.c
===================================================================
--- trunk/drd/drd_cond.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_cond.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -124,7 +124,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_init cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
@@ -153,7 +153,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_destroy cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
@@ -196,7 +196,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_pre_wait cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
@@ -220,7 +220,7 @@
}
tl_assert(p->mutex);
q = mutex_get(p->mutex);
- if (q && q->owner == thread_get_running_tid() && q->recursion_count > 0)
+ if (q && q->owner == DRD_(thread_get_running_tid)() && q->recursion_count > 0)
{
const ThreadId vg_tid = VG_(get_running_tid)();
MutexErrInfo MEI = { q->a1, q->recursion_count, q->owner };
@@ -248,7 +248,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_post_wait cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
@@ -271,7 +271,7 @@
static void cond_signal(Addr const cond)
{
const ThreadId vg_tid = VG_(get_running_tid)();
- const DrdThreadId drd_tid = VgThreadIdToDrdThreadId(vg_tid);
+ const DrdThreadId drd_tid = DRD_(VgThreadIdToDrdThreadId)(vg_tid);
struct cond_info* const cond_p = cond_get(cond);
if (cond_p && cond_p->waiter_count > 0)
@@ -306,7 +306,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_signal cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
@@ -321,7 +321,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] cond_broadcast cond 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
cond);
}
Modified: trunk/drd/drd_error.c
===================================================================
--- trunk/drd/drd_error.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_error.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -114,7 +114,7 @@
VG_(message)(Vg_UserMsg,
"Conflicting %s by thread %d/%d at 0x%08lx size %ld",
dri->access_type == eStore ? "store" : "load",
- DrdThreadIdToVgThreadId(dri->tid),
+ DRD_(DrdThreadIdToVgThreadId)(dri->tid),
dri->tid,
dri->addr,
dri->size);
@@ -152,8 +152,9 @@
}
if (s_drd_show_conflicting_segments)
{
- thread_report_conflicting_segments(dri->tid,
- dri->addr, dri->size, dri->access_type);
+ DRD_(thread_report_conflicting_segments)(dri->tid,
+ dri->addr, dri->size,
+ dri->access_type);
}
VG_(free)(descr2);
@@ -211,7 +212,7 @@
"%s: cond 0x%lx, mutex 0x%lx locked by thread %d/%d",
VG_(get_error_string)(e),
cdi->cond, cdi->mutex,
- DrdThreadIdToVgThreadId(cdi->tid), cdi->tid);
+ DRD_(DrdThreadIdToVgThreadId)(cdi->tid), cdi->tid);
VG_(pp_ExeContext)(VG_(get_error_where)(e));
first_observed(cdi->mutex);
break;
Modified: trunk/drd/drd_load_store.c
===================================================================
--- trunk/drd/drd_load_store.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_load_store.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -52,20 +52,20 @@
/* Local variables. */
-static Bool s_drd_check_stack_accesses = False;
+static Bool DRD_(s_check_stack_accesses) = False;
/* Function definitions. */
Bool DRD_(get_check_stack_accesses)()
{
- return s_drd_check_stack_accesses;
+ return DRD_(s_check_stack_accesses);
}
void DRD_(set_check_stack_accesses)(const Bool c)
{
tl_assert(c == False || c == True);
- s_drd_check_stack_accesses = c;
+ DRD_(s_check_stack_accesses) = c;
}
void DRD_(trace_mem_access)(const Addr addr, const SizeT size,
@@ -74,7 +74,8 @@
if (DRD_(is_any_traced)(addr, addr + size))
{
char vc[80];
- DRD_(vc_snprint)(vc, sizeof(vc), thread_get_vc(thread_get_running_tid()));
+ DRD_(vc_snprint)(vc, sizeof(vc),
+ DRD_(thread_get_vc)(DRD_(thread_get_running_tid)()));
VG_(message)(Vg_UserMsg,
"%s 0x%lx size %ld (vg %d / drd %d / vc %s)",
access_type == eLoad
@@ -89,11 +90,11 @@
addr,
size,
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
vc);
VG_(get_and_pp_StackTrace)(VG_(get_running_tid)(),
VG_(clo_backtrace_size));
- tl_assert(DrdThreadIdToVgThreadId(thread_get_running_tid())
+ tl_assert(DRD_(DrdThreadIdToVgThreadId)(DRD_(thread_get_running_tid)())
== VG_(get_running_tid)());
}
}
@@ -113,7 +114,7 @@
{
DataRaceErrInfo drei;
- drei.tid = thread_get_running_tid();
+ drei.tid = DRD_(thread_get_running_tid)();
drei.addr = addr;
drei.size = size;
drei.access_type = access_type;
@@ -132,8 +133,9 @@
== VgThreadIdToDrdThreadId(VG_(get_running_tid())));
#endif
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_load_triggers_conflict(addr, addr + size)
&& ! DRD_(is_suppressed)(addr, addr + size))
{
@@ -143,8 +145,9 @@
static VG_REGPARM(1) void drd_trace_load_1(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_load_1_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 1))
{
@@ -154,8 +157,9 @@
static VG_REGPARM(1) void drd_trace_load_2(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_load_2_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 2))
{
@@ -165,8 +169,9 @@
static VG_REGPARM(1) void drd_trace_load_4(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_load_4_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 4))
{
@@ -176,8 +181,9 @@
static VG_REGPARM(1) void drd_trace_load_8(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_load_8_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 8))
{
@@ -193,8 +199,9 @@
== VgThreadIdToDrdThreadId(VG_(get_running_tid())));
#endif
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_store_triggers_conflict(addr, addr + size)
&& ! DRD_(is_suppressed)(addr, addr + size))
{
@@ -204,8 +211,9 @@
static VG_REGPARM(1) void drd_trace_store_1(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_store_1_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 1))
{
@@ -215,8 +223,9 @@
static VG_REGPARM(1) void drd_trace_store_2(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_store_2_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 2))
{
@@ -226,8 +235,9 @@
static VG_REGPARM(1) void drd_trace_store_4(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_store_4_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 4))
{
@@ -237,8 +247,9 @@
static VG_REGPARM(1) void drd_trace_store_8(Addr addr)
{
- if (running_thread_is_recording()
- && (s_drd_check_stack_accesses || ! thread_address_on_stack(addr))
+ if (DRD_(running_thread_is_recording)()
+ && (DRD_(s_check_stack_accesses)
+ || ! DRD_(thread_address_on_stack)(addr))
&& bm_access_store_8_triggers_conflict(addr)
&& ! DRD_(is_suppressed)(addr, addr + 8))
{
@@ -298,7 +309,7 @@
mkIRExpr_HWord(size)))));
}
- if (! s_drd_check_stack_accesses && is_stack_access(bb, addr_expr))
+ if (! DRD_(s_check_stack_accesses) && is_stack_access(bb, addr_expr))
return;
switch (size)
@@ -363,7 +374,7 @@
mkIRExpr_HWord(size)))));
}
- if (! s_drd_check_stack_accesses && is_stack_access(bb, addr_expr))
+ if (! DRD_(s_check_stack_accesses) && is_stack_access(bb, addr_expr))
return;
switch (size)
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_main.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -118,7 +118,7 @@
rwlock_set_shared_threshold(shared_threshold_ms);
}
if (segment_merging != -1)
- thread_set_segment_merging(segment_merging);
+ DRD_(thread_set_segment_merging)(segment_merging);
if (show_confl_seg != -1)
set_show_conflicting_segments(show_confl_seg);
if (trace_address)
@@ -133,11 +133,11 @@
if (trace_cond != -1)
cond_set_trace(trace_cond);
if (trace_csw != -1)
- thread_trace_context_switches(trace_csw);
+ DRD_(thread_trace_context_switches)(trace_csw);
if (trace_fork_join != -1)
DRD_(thread_set_trace_fork_join)(trace_fork_join);
if (trace_conflict_set != -1)
- thread_trace_conflict_set(trace_conflict_set);
+ DRD_(thread_trace_conflict_set)(trace_conflict_set);
if (trace_mutex != -1)
mutex_set_trace(trace_mutex);
if (trace_rwlock != -1)
@@ -249,7 +249,7 @@
const Addr a,
const SizeT size)
{
- thread_set_vg_running_tid(VG_(get_running_tid)());
+ DRD_(thread_set_vg_running_tid)(VG_(get_running_tid)());
if (size > 0)
{
drd_trace_store(a, size);
@@ -295,7 +295,7 @@
}
if (! is_stack_mem || DRD_(get_check_stack_accesses)())
{
- thread_stop_using_mem(a1, a2);
+ DRD_(thread_stop_using_mem)(a1, a2);
DRD_(clientobj_stop_using_mem)(a1, a2);
DRD_(suppression_stop_using_mem)(a1, a2);
}
@@ -361,7 +361,7 @@
const Bool rr, const Bool ww, const Bool xx,
ULong di_handle)
{
- thread_set_vg_running_tid(VG_(get_running_tid)());
+ DRD_(thread_set_vg_running_tid)(VG_(get_running_tid)());
drd_start_using_mem(a, len);
@@ -374,7 +374,8 @@
static __inline__
void drd_start_using_mem_stack(const Addr a, const SizeT len)
{
- thread_set_stack_min(thread_get_running_tid(), a - VG_STACK_REDZONE_SZB);
+ DRD_(thread_set_stack_min)(DRD_(thread_get_running_tid)(),
+ a - VG_STACK_REDZONE_SZB);
drd_start_using_mem(a - VG_STACK_REDZONE_SZB,
len + VG_STACK_REDZONE_SZB);
}
@@ -385,8 +386,8 @@
static __inline__
void drd_stop_using_mem_stack(const Addr a, const SizeT len)
{
- thread_set_stack_min(thread_get_running_tid(),
- a + len - VG_STACK_REDZONE_SZB);
+ DRD_(thread_set_stack_min)(DRD_(thread_get_running_tid)(),
+ a + len - VG_STACK_REDZONE_SZB);
drd_stop_using_mem(a - VG_STACK_REDZONE_SZB, len + VG_STACK_REDZONE_SZB,
True);
}
@@ -395,7 +396,7 @@
const Addr a, const SizeT len,
ThreadId tid_for_whom_the_signal_frame_is_being_constructed)
{
- thread_set_vg_running_tid(VG_(get_running_tid)());
+ DRD_(thread_set_vg_running_tid)(VG_(get_running_tid)());
drd_start_using_mem(a, len);
}
@@ -407,12 +408,12 @@
static
void drd_pre_thread_create(const ThreadId creator, const ThreadId created)
{
- const DrdThreadId drd_creator = VgThreadIdToDrdThreadId(creator);
+ const DrdThreadId drd_creator = DRD_(VgThreadIdToDrdThreadId)(creator);
tl_assert(created != VG_INVALID_THREADID);
- thread_pre_create(drd_creator, created);
- if (IsValidDrdThreadId(drd_creator))
+ DRD_(thread_pre_create)(drd_creator, created);
+ if (DRD_(IsValidDrdThreadId)(drd_creator))
{
- thread_new_segment(drd_creator);
+ DRD_(thread_new_segment)(drd_creator);
}
if (DRD_(thread_get_trace_fork_join)())
{
@@ -432,7 +433,7 @@
tl_assert(vg_created != VG_INVALID_THREADID);
- drd_created = thread_post_create(vg_created);
+ drd_created = DRD_(thread_post_create)(vg_created);
if (DRD_(thread_get_trace_fork_join)())
{
VG_(message)(Vg_DebugMsg,
@@ -441,9 +442,9 @@
}
if (! DRD_(get_check_stack_accesses)())
{
- DRD_(start_suppression)(thread_get_stack_max(drd_created)
- - thread_get_stack_size(drd_created),
- thread_get_stack_max(drd_created),
+ DRD_(start_suppression)(DRD_(thread_get_stack_max)(drd_created)
+ - DRD_(thread_get_stack_size)(drd_created),
+ DRD_(thread_get_stack_max)(drd_created),
"stack");
}
}
@@ -455,28 +456,29 @@
tl_assert(VG_(get_running_tid)() == vg_tid);
- drd_tid = VgThreadIdToDrdThreadId(vg_tid);
+ drd_tid = DRD_(VgThreadIdToDrdThreadId)(vg_tid);
if (DRD_(thread_get_trace_fork_join)())
{
VG_(message)(Vg_DebugMsg,
"drd_thread_finished tid = %d/%d%s",
vg_tid,
drd_tid,
- thread_get_joinable(drd_tid)
+ DRD_(thread_get_joinable)(drd_tid)
? ""
: " (which is a detached thread)");
}
if (DRD_(s_show_stack_usage))
{
- const SizeT stack_size = thread_get_stack_size(drd_tid);
+ const SizeT stack_size = DRD_(thread_get_stack_size)(drd_tid);
const SizeT used_stack
- = thread_get_stack_max(drd_tid) - thread_get_stack_min_min(drd_tid);
+ = (DRD_(thread_get_stack_max)(drd_tid)
+ - DRD_(thread_get_stack_min_min)(drd_tid));
VG_(message)(Vg_UserMsg,
"thread %d/%d%s finished and used %ld bytes out of %ld"
" on its stack. Margin: %ld bytes.",
vg_tid,
drd_tid,
- thread_get_joinable(drd_tid)
+ DRD_(thread_get_joinable)(drd_tid)
? ""
: " (which is a detached thread)",
used_stack,
@@ -484,12 +486,12 @@
stack_size - used_stack);
}
- drd_stop_using_mem(thread_get_stack_min(drd_tid),
- thread_get_stack_max(drd_tid)
- - thread_get_stack_min(drd_tid),
+ drd_stop_using_mem(DRD_(thread_get_stack_min)(drd_tid),
+ DRD_(thread_get_stack_max)(drd_tid)
+ - DRD_(thread_get_stack_min)(drd_tid),
True);
- thread_stop_recording(drd_tid);
- thread_finished(drd_tid);
+ DRD_(thread_stop_recording)(drd_tid);
+ DRD_(thread_finished)(drd_tid);
}
//
@@ -514,7 +516,7 @@
static void drd_start_client_code(const ThreadId tid, const ULong bbs_done)
{
tl_assert(tid == VG_(get_running_tid)());
- thread_set_vg_running_tid(tid);
+ DRD_(thread_set_vg_running_tid)(tid);
}
static void DRD_(fini)(Int exitcode)
@@ -527,12 +529,12 @@
ULong dscvc;
update_conflict_set_count
- = thread_get_update_conflict_set_count(&dsnsc, &dscvc);
+ = DRD_(thread_get_update_conflict_set_count)(&dsnsc, &dscvc);
VG_(message)(Vg_UserMsg,
" thread: %lld context switches"
" / %lld updates of the conflict set",
- thread_get_context_switch_count(),
+ DRD_(thread_get_context_switch_count)(),
update_conflict_set_count);
VG_(message)(Vg_UserMsg,
" (%lld new sg + %lld combine vc + %lld csw).",
@@ -544,7 +546,7 @@
" %lld discard points.",
DRD_(sg_get_segments_created_count)(),
DRD_(sg_get_max_segments_alive_count)(),
- thread_get_discard_ordered_segments_count());
+ DRD_(thread_get_discard_ordered_segments_count)());
VG_(message)(Vg_UserMsg,
" (%lld m, %lld rw, %lld s, %lld b)",
get_mutex_segment_creation_count(),
Modified: trunk/drd/drd_mutex.c
===================================================================
--- trunk/drd/drd_mutex.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_mutex.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -90,7 +90,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] mutex_destroy %s 0x%lx rc %d owner %d",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
mutex_get_typename(p),
p->a1,
p ? p->recursion_count : -1,
@@ -167,7 +167,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] mutex_init %s 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
mutex_type_name(mutex_type),
mutex);
}
@@ -230,7 +230,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] %s %s 0x%lx rc %d owner %d",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
trylock ? "pre_mutex_lock " : "mutex_trylock ",
p ? mutex_get_typename(p) : "(?)",
mutex,
@@ -253,7 +253,7 @@
}
if (! trylock
- && p->owner == thread_get_running_tid()
+ && p->owner == DRD_(thread_get_running_tid)()
&& p->recursion_count >= 1
&& mutex_type != mutex_type_recursive_mutex)
{
@@ -274,7 +274,7 @@
void mutex_post_lock(const Addr mutex, const Bool took_lock,
const Bool post_cond_wait)
{
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct mutex_info* p;
p = mutex_get(mutex);
@@ -303,9 +303,9 @@
if (last_owner != drd_tid && last_owner != DRD_INVALID_THREADID)
{
tl_assert(p->last_locked_segment);
- thread_combine_vc2(drd_tid, &p->last_locked_segment->vc);
+ DRD_(thread_combine_vc2)(drd_tid, &p->last_locked_segment->vc);
}
- thread_new_segment(drd_tid);
+ DRD_(thread_new_segment)(drd_tid);
s_mutex_segment_creation_count++;
p->owner = drd_tid;
@@ -338,7 +338,7 @@
*/
void mutex_unlock(const Addr mutex, MutexT mutex_type)
{
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
const ThreadId vg_tid = VG_(get_running_tid)();
struct mutex_info* p;
@@ -418,8 +418,8 @@
/* current vector clock of the thread such that it is available when */
/* this mutex is locked again. */
- thread_get_latest_segment(&p->last_locked_segment, drd_tid);
- thread_new_segment(drd_tid);
+ DRD_(thread_get_latest_segment)(&p->last_locked_segment, drd_tid);
+ DRD_(thread_new_segment)(drd_tid);
p->acquired_at = 0;
s_mutex_segment_creation_count++;
}
Modified: trunk/drd/drd_rwlock.c
===================================================================
--- trunk/drd/drd_rwlock.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_rwlock.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -166,7 +166,7 @@
{
if (q->tid != tid && (readers_too || q->last_lock_was_writer_lock))
{
- thread_combine_vc2(tid, &q->last_unlock_segment->vc);
+ DRD_(thread_combine_vc2)(tid, &q->last_unlock_segment->vc);
}
}
}
@@ -198,7 +198,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] rwlock_destroy 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
p->a1);
}
@@ -265,7 +265,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] rwlock_init 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
rwlock);
}
@@ -323,14 +323,14 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] pre_rwlock_rdlock 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
rwlock);
}
p = rwlock_get_or_allocate(rwlock);
tl_assert(p);
- if (rwlock_is_wrlocked_by(p, thread_get_running_tid()))
+ if (rwlock_is_wrlocked_by(p, DRD_(thread_get_running_tid)()))
{
VG_(message)(Vg_UserMsg,
"reader-writer lock 0x%lx is already locked for"
@@ -345,7 +345,7 @@
*/
void rwlock_post_rdlock(const Addr rwlock, const Bool took_lock)
{
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
@@ -370,7 +370,7 @@
{
rwlock_combine_other_vc(p, drd_tid, False);
q->last_lock_was_writer_lock = False;
- thread_new_segment(drd_tid);
+ DRD_(thread_new_segment)(drd_tid);
s_rwlock_segment_creation_count++;
p->acquiry_time_ms = VG_(read_millisecond_timer)();
@@ -394,7 +394,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] pre_rwlock_wrlock 0x%lx",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
rwlock);
}
@@ -405,7 +405,7 @@
tl_assert(p);
- if (rwlock_is_wrlocked_by(p, thread_get_running_tid()))
+ if (rwlock_is_wrlocked_by(p, DRD_(thread_get_running_tid)()))
{
RwlockErrInfo REI = { p->a1 };
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -423,7 +423,7 @@
*/
void rwlock_post_wrlock(const Addr rwlock, const Bool took_lock)
{
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
@@ -441,13 +441,13 @@
if (! p || ! took_lock)
return;
- q = lookup_or_insert_node(p->thread_info, thread_get_running_tid());
+ q = lookup_or_insert_node(p->thread_info, DRD_(thread_get_running_tid)());
tl_assert(q->writer_nesting_count == 0);
q->writer_nesting_count++;
q->last_lock_was_writer_lock = True;
tl_assert(q->writer_nesting_count == 1);
rwlock_combine_other_vc(p, drd_tid, True);
- thread_new_segment(drd_tid);
+ DRD_(thread_new_segment)(drd_tid);
s_rwlock_segment_creation_count++;
p->acquiry_time_ms = VG_(read_millisecond_timer)();
p->acquired_at = VG_(record_ExeContext)(VG_(get_running_tid)(), 0);
@@ -464,7 +464,7 @@
*/
void rwlock_pre_unlock(const Addr rwlock)
{
- const DrdThreadId drd_tid = thread_get_running_tid();
+ const DrdThreadId drd_tid = DRD_(thread_get_running_tid)();
const ThreadId vg_tid = VG_(get_running_tid)();
struct rwlock_info* p;
struct rwlock_thread_info* q;
@@ -548,8 +548,8 @@
/* current vector clock of the thread such that it is available when */
/* this rwlock is locked again. */
- thread_get_latest_segment(&q->last_unlock_segment, drd_tid);
- thread_new_segment(drd_tid);
+ DRD_(thread_get_latest_segment)(&q->last_unlock_segment, drd_tid);
+ DRD_(thread_new_segment)(drd_tid);
s_rwlock_segment_creation_count++;
}
}
Modified: trunk/drd/drd_segment.c
===================================================================
--- trunk/drd/drd_segment.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_segment.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -56,13 +56,14 @@
DrdThreadId const created)
{
Segment* creator_sg;
- ThreadId vg_created = DrdThreadIdToVgThreadId(created);
+ ThreadId vg_created = DRD_(DrdThreadIdToVgThreadId)(created);
tl_assert(sg);
- tl_assert(creator == DRD_INVALID_THREADID || IsValidDrdThreadId(creator));
+ tl_assert(creator == DRD_INVALID_THREADID
+ || DRD_(IsValidDrdThreadId)(creator));
creator_sg = (creator != DRD_INVALID_THREADID
- ? thread_get_segment(creator) : 0);
+ ? DRD_(thread_get_segment)(creator) : 0);
sg->next = 0;
sg->prev = 0;
@@ -86,7 +87,7 @@
VG_(snprintf)(msg, sizeof(msg),
"New segment for thread %d/%d with vc ",
created != VG_INVALID_THREADID
- ? DrdThreadIdToVgThreadId(created)
+ ? DRD_(DrdThreadIdToVgThreadId)(created)
: DRD_INVALID_THREADID,
created);
DRD_(vc_snprint)(msg + VG_(strlen)(msg), sizeof(msg) - VG_(strlen)(msg),
Modified: trunk/drd/drd_semaphore.c
===================================================================
--- trunk/drd/drd_semaphore.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_semaphore.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -160,7 +160,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] semaphore_init 0x%lx value %u",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
semaphore,
value);
}
@@ -202,7 +202,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] semaphore_destroy 0x%lx value %u",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
semaphore,
p ? p->value : 0);
}
@@ -249,7 +249,7 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] semaphore_wait 0x%lx value %u -> %u",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
semaphore,
p ? p->value : 0,
p ? p->value - 1 : 0);
@@ -282,10 +282,10 @@
if (p->last_sem_post_tid != tid
&& p->last_sem_post_tid != DRD_INVALID_THREADID)
{
- thread_combine_vc2(tid, &sg->vc);
+ DRD_(thread_combine_vc2)(tid, &sg->vc);
}
DRD_(sg_put)(sg);
- thread_new_segment(tid);
+ DRD_(thread_new_segment)(tid);
s_semaphore_segment_creation_count++;
}
}
@@ -305,15 +305,15 @@
VG_(message)(Vg_UserMsg,
"[%d/%d] semaphore_post 0x%lx value %u -> %u",
VG_(get_running_tid)(),
- thread_get_running_tid(),
+ DRD_(thread_get_running_tid)(),
semaphore,
p->value - 1, p->value);
}
p->last_sem_post_tid = tid;
- thread_new_segment(tid);
+ DRD_(thread_new_segment)(tid);
sg = 0;
- thread_get_latest_segment(&sg, tid);
+ DRD_(thread_get_latest_segment)(&sg, tid);
tl_assert(sg);
segment_push(p, sg);
s_semaphore_segment_creation_count++;
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2009-02-15 12:14:52 UTC (rev 9167)
+++ trunk/drd/drd_thread.c 2009-02-15 13:11:14 UTC (rev 9168)
@@ -47,12 +47,13 @@
/* Local functions. */
-static void thread_append_segment(const DrdThreadId tid,
- Segment* const sg);
-static void thread_discard_segment(const DrdThreadId tid, Segment* const sg);
-static Bool thread_conflict_set_up_to_date(const DrdThreadId tid);
-static void thread_compute_conflict_set(struct bitmap** conflict_set,
- const DrdThreadId tid);
+static void DRD_(thread_append_segment)(const DrdThreadId tid,
+ Segment* const sg);
+static void DRD_(thread_discard_segment)(const DrdThreadId tid,
+ Segment* const sg);
+static Bool DRD_(thread_conflict_set_up_to_date)(const DrdThreadId tid);
+static void DRD_(thread_compute_conflict_set)(struct bitmap** conflict_set,
+ const DrdThreadId tid);
/* Local variables. */
@@ -76,13 +77,13 @@
/* Function definitions. */
-void thread_trace_context_switches(const Bool t)
+void DRD_(thread_trace_context_switches)(const Bool t)
{
tl_assert(t == False || t == True);
DRD_(s_trace_context_switches) = t;
}
-void thread_trace_conflict_set(const Bool t)
+void DRD_(thread_trace_conflict_set)(const Bool t)
{
tl_assert(t == False || t == True);
DRD_(s_trace_conflict_set) = t;
@@ -99,7 +100,7 @@
DRD_(s_trace_fork_join) = t;
}
-void thread_set_segment_merging(const Bool m)
+void DRD_(thread_set_segment_merging)(const Bool m)
{
tl_assert(m == False || m == True);
DRD_(s_segment_merging) = m;
@@ -109,7 +110,7 @@
* Convert Valgrind's ThreadId into a DrdThreadId. Report failure if
* Valgrind's ThreadId does not yet exist.
*/
-DrdThreadId VgThreadIdToDrdThreadId(const ThreadId tid)
+DrdThreadId DRD_(VgThreadIdToDrdThreadId)(const ThreadId tid)
{
int i;
@@ -128,12 +129,11 @@
return DRD_INVALID_THREADID;
}
-static
-DrdThreadId VgThreadIdToNewDrdThreadId(const ThreadId tid)
+static DrdThreadId DRD_(VgThreadIdToNewDrdThreadId)(const ThreadId tid)
{
int i;
- tl_assert(VgThreadIdToDrdThreadId(tid) == DRD_INVALID_THREADID);
+ tl_assert(DRD_(VgThreadIdToDrdThreadId)(tid) == DRD_INVALID_THREADID);
for (i = 1; i < DRD_N_THREADS; i++)
{
@@ -163,7 +163,7 @@
return DRD_INVALID_THREADID;
}
-DrdThreadId PtThreadIdToDrdThreadId(const PThreadId tid)
+DrdThreadId DRD_(PtThreadIdToDrdThreadId)(const PThreadId tid)
{
int i;
@@ -180,7 +180,7 @@
return DRD_INVALID_THREADID;
}
-ThreadId DrdThreadIdToVgThreadId(const DrdThreadId tid)
+ThreadId DRD_(DrdThreadIdToVgThreadId)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -195,7 +195,7 @@
* ThreadInfo struct.
* @return True if sane, False if not.
*/
-static Bool sane_ThreadInfo(const ThreadInfo* const ti)
+static Bool DRD_(sane_ThreadInfo)(const ThreadInfo* const ti)
{
Segment* p;
for (p = ti->first; p; p = p->next) {
@@ -222,19 +222,19 @@
* from the context of the creator thread, before the new thread has been
* created.
*/
-DrdThreadId thread_pre_create(const DrdThreadId creator,
- const ThreadId vg_created)
+DrdThreadId DRD_(thread_pre_create)(const DrdThreadId creator,
+ const ThreadId vg_created)
{
DrdThreadId created;
- tl_assert(VgThreadIdToDrdThreadId(vg_created) == DRD_INVALID_THREADID);
- created = VgThreadIdToNewDrdThreadId(vg_created);
+ tl_assert(DRD_(VgThreadIdToDrdThreadId)(vg_created) == DRD_INVALID_THREADID);
+ created = DRD_(VgThreadIdToNewDrdThreadId)(vg_created);
tl_assert(0 <= (int)created && created < DRD_N_THREADS
&& created != DRD_INVALID_THREADID);
tl_assert(DRD_(g_threadinfo)[created].first == 0);
tl_assert(DRD_(g_threadinfo)[created].last == 0);
- thread_append_segment(created, DRD_(sg_new)(creator, created));
+ DRD_(thread_append_segment)(created, DRD_(sg_new)(creator, created));
return created;
}
@@ -245,9 +245,9 @@
* on the newly created thread, e.g. from the handler installed via
* VG_(track_pre_thread_first_insn)().
*/
-DrdThreadId thread_post_create(const ThreadId vg_created)
+DrdThreadId DRD_(thread_post_create)(const ThreadId vg_created)
{
- const DrdThreadId created = VgThreadIdToDrdThreadId(vg_created);
+ const DrdThreadId created = DRD_(VgThreadIdToDrdThreadId)(vg_created);
tl_assert(0 <= (int)created && created < DRD_N_THREADS
&& created != DRD_INVALID_THREADID);
@@ -268,16 +268,16 @@
*/
void DRD_(thread_post_join)(DrdThreadId drd_joiner, DrdThreadId drd_joinee)
{
- tl_assert(IsValidDrdThreadId(drd_joiner));
- tl_assert(IsValidDrdThreadId(drd_joinee));
- thread_new_segment(drd_joinee);
- thread_combine_vc(drd_joiner, drd_joinee);
- thread_new_segment(drd_joiner);
+ tl_assert(DRD_(IsValidDrdThreadId)(drd_joiner));
+ tl_assert(DRD_(IsValidDrdThreadId)(drd_joinee));
+ DRD_(thread_new_segment)(drd_joinee);
+ DRD_(thread_combine_vc)(drd_joiner, drd_joinee);
+ DRD_(thread_new_segment)(drd_joiner);
if (DRD_(s_trace_fork_join))
{
- const ThreadId joiner = DrdThreadIdToVgThreadId(drd_joiner);
- const ThreadId joinee = DrdThreadIdToVgThreadId(drd_joinee);
+ const ThreadId joiner = DRD_(DrdThreadIdToVgThreadId)(drd_joiner);
+ const ThreadId joinee = DRD_(DrdThreadIdToVgThreadId)(drd_joinee);
const unsigned msg_size = 256;
char* msg;
@@ -291,7 +291,7 @@
VG_(snprintf)(msg + VG_(strlen)(msg), msg_size - VG_(strlen)(msg),
", new vc: ");
DRD_(vc_snprint)(msg + VG_(strlen)(msg), msg_size - VG_(strlen)(msg),
- thread_get_vc(drd_joiner));
+ DRD_(thread_get_vc)(drd_joiner));
}
VG_(message)(Vg_DebugMsg, "%s", msg);
VG_(free)(msg);
@@ -299,11 +299,11 @@
if (! DRD_(get_check_stack_accesses)())
{
- DRD_(finish_suppression)(thread_get_stack_max(drd_joinee)
- - thread_get_stack_size(drd_joinee),
- thread_get_stack_max(drd_joinee));
+ DRD_(finish_suppression)(DRD_(thread_get_stack_max)(drd_joinee)
+ - DRD_(thread_get_stack_size)(drd_joinee),
+ DRD_(thread_get_stack_max)(drd_joinee));
}
- thread_delete(drd_joinee);
+ DRD_(thread_delete)(drd_joinee);
mutex_thread_delete(drd_joinee);
cond_thread_delete(drd_joinee);
semaphore_thread_delete(drd_joinee);
@@ -316,7 +316,8 @@
* Any conflicting accesses in the range stack_startup..stack_max will be
* ignored.
*/
-void thread_set_stack_startup(const DrdThreadId tid, const Addr stack_startup)
+void DRD_(thread_set_stack_startup)(const DrdThreadId tid,
+ const Addr stack_startup)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -325,28 +326,28 @@
DRD_(g_threadinfo)[tid].stack_startup = stack_startup;
}
-Addr thread_get_stack_min(const DrdThreadId tid)
+Addr DRD_(thread_get_stack_min)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
return DRD_(g_threadinfo)[tid].stack_min;
}
-Addr thread_get_stack_min_min(const DrdThreadId tid)
+Addr DRD_(thread_get_stack_min_min)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
return DRD_(g_threadinfo)[tid].stack_min_min;
}
-Addr thread_get_stack_max(const DrdThreadId tid)
+Addr DRD_(thread_get_stack_max)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
return DRD_(g_threadinfo)[tid].stack_max;
}
-SizeT thread_get_stack_size(const DrdThreadId tid)
+SizeT DRD_(thread_get_stack_size)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -357,7 +358,7 @@
* Clean up thread-specific data structures. Call this just after
* pthread_join().
*/
-void thread_delete(const DrdThreadId tid)
+void DRD_(thread_delete)(const DrdThreadId tid)
{
Segment* sg;
Segment* sg_prev;
@@ -384,7 +385,7 @@
* thread_delete() is called. Note: thread_delete() is only called for
* joinable threads, not for detached threads.
*/
-void thread_finished(const DrdThreadId tid)
+void DRD_(thread_finished)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -406,7 +407,7 @@
}
/** Called just before pthread_cancel(). */
-void thread_pre_cancel(const DrdThreadId tid)
+void DRD_(thread_pre_cancel)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -415,7 +416,7 @@
DRD_(g_threadinfo)[tid].synchr_nesting = 0;
}
-void thread_set_pthreadid(const DrdThreadId tid, const PThreadId ptid)
+void DRD_(thread_set_pthreadid)(const DrdThreadId tid, const PThreadId ptid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -425,14 +426,14 @@
DRD_(g_threadinfo)[tid].pt_threadid = ptid;
}
-Bool thread_get_joinable(const DrdThreadId tid)
+Bool DRD_(thread_get_joinable)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
return ! DRD_(g_threadinfo)[tid].detached_posix_thread;
}
-void thread_set_joinable(const DrdThreadId tid, const Bool joinable)
+void DRD_(thread_set_joinable)(const DrdThreadId tid, const Bool joinable)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -448,20 +449,22 @@
DRD_(g_threadinfo)[tid].detached_posix_thread = ! joinable;
}
-void thread_set_vg_running_tid(const ThreadId vg_tid)
+void DRD_(thread_set_vg_running_tid)(const ThreadId vg_tid)
{
tl_assert(vg_tid != VG_INVALID_THREADID);
if (vg_tid != DRD_(s_vg_running_tid))
{
- thread_set_running_tid(vg_tid, VgThreadIdToDrdThreadId(vg_tid));
+ DRD_(thread_set_running_tid)(vg_tid,
+ DRD_(VgThreadIdToDrdThreadId)(vg_tid));
}
tl_assert(DRD_(s_vg_running_tid) != VG_INVALID_THREADID);
tl_assert(DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
}
-void thread_set_running_tid(const ThreadId vg_tid, const DrdThreadId drd_tid)
+void DRD_(thread_set_running_tid)(const ThreadId vg_tid,
+ const DrdThreadId drd_tid)
{
tl_assert(vg_tid != VG_INVALID_THREADID);
tl_assert(drd_tid != DRD_INVALID_THREADID);
@@ -475,12 +478,12 @@
"Context switch from thread %d/%d to thread %d/%d;"
" segments: %llu",
DRD_(s_vg_running_tid), DRD_(g_drd_running_tid),
- DrdThreadIdToVgThreadId(drd_tid), drd_tid,
+ DRD_(DrdThreadIdToVgThreadId)(drd_tid), drd_tid,
DRD_(sg_get_segments_alive_count)());
}
DRD_(s_vg_running_tid) = vg_tid;
DRD_(g_drd_running_tid) = drd_tid;
- thread_compute_conflict_set(&DRD_(g_conflict_set), drd_tid);
+ DRD_(thread_compute_conflict_set)(&DRD_(g_conflict_set), drd_tid);
DRD_(s_context_switch_count)++;
}
@@ -488,31 +491,32 @@
tl_assert(DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
}
-int thread_enter_synchr(const DrdThreadId tid)
+int DRD_(thread_enter_synchr)(const DrdThreadId tid)
{
- tl_assert(IsValidDrdThreadId(tid));
+ tl_assert(DRD_(IsValidDrdThreadId)(tid));
return DRD_(g_threadinfo)[tid].synchr_nesting++;
}
-int thread_leave_synchr(const DrdThreadId tid)
+int DRD_(thread_leave_synchr)(const DrdThreadId tid)
{
- tl_assert(IsValidDrdThreadId(tid));
+ tl_assert(DRD_(IsValidDrdThreadId)(tid));
tl_assert(DRD_(g_threadinfo)[tid].synchr_nesting >= 1);
return --DRD_(g_threadinfo)[tid].synchr_nesting;
}
-int thread_get_synchr_nesting_count(const DrdThreadId tid)
+int DRD_(thread_get_synchr_nesting_count)(const DrdThreadId tid)
{
- tl_assert(IsValidDrdThreadId(tid));
+ tl_assert(DRD_(IsValidDrdThreadId)(tid));
return DRD_(g_threadinfo)[tid].synchr_nesting;
}
/** Append a new segment at the end of the segment list. */
-static void thread_append_segment(const DrdThreadId tid, Segment* const sg)
+static
+void DRD_(thread_append_segment)(const DrdThreadId tid, Segment* const sg)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
+ // tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[tid]));
sg->prev = DRD_(g_threadinfo)[tid].last;
sg->next = 0;
if (DRD_(g_threadinfo)[tid].last)
@@ -520,18 +524,19 @@
DRD_(g_threadinfo)[tid].last = sg;
if (DRD_(g_threadinfo)[tid].first == 0)
DRD_(g_threadinfo)[tid].first = sg;
- // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
+ // tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[tid]));
}
/**
* Remove a segment from the segment list of thread threadid, and free the
* associated memory.
*/
-static void thread_discard_segment(const DrdThreadId tid, Segment* const sg)
+static
+void DRD_(thread_discard_segment)(const DrdThreadId tid, Segment* const sg)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- //tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
+ //tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[tid]));
if (sg->prev)
sg->prev->next = sg->next;
@@ -543,10 +548,10 @@
DRD_(g_threadinfo)[tid].last = sg->prev;
DRD_(sg_put)(sg);
- //tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
+ //tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[tid]));
}
-VectorClock* thread_get_vc(const DrdThreadId tid)
+VectorClock* DRD_(thread_get_vc)(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
@@ -557,7 +562,7 @@
/**
* Return the latest segment of thread 'tid' and increment its reference count.
*/
-void thread_get_latest_segment(Segment** sg, const DrdThreadId tid)
+void DRD_(thread_get_latest_segment)(Segment** sg, const DrdThreadId tid)
{
tl_assert(sg);
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
@@ -573,14 +578,15 @@
* (Michiel Ronsse calls this "clock snooping" in his papers about DIOTA).
* @param vc pointer to a vectorclock, holds result upon return.
*/
-static void thread_compute_minimum_vc(VectorClock* vc)
+static void DRD_(thread_compute_minimum_vc)(VectorClock* vc)
{
unsigned i;
Bool first;
Segment* latest_sg;
first = True;
- for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]);
+ i++)
{
latest_sg = DRD_(g_threadinfo)[i].last;
if (latest_sg)
@@ -594,14 +600,15 @@
}
}
-static void thread_compute_maximum_vc(VectorClock* vc)
+static void DRD_(thread_compute_maximum_vc)(VectorClock* vc)
{
unsigned i;
Bool first;
Segment* latest_sg;
first = True;
- for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]);
+ i++)
{
latest_sg = DRD_(g_threadinfo)[i].last;
if (latest_sg)
@@ -620,7 +627,7 @@
* clock of every thread -- these segments can no longer be involved in a
* data race.
*/
-static void thread_discard_ordered_segments(void)
+static void DRD_(thread_discard_ordered_segments)(void)
{
unsigned i;
VectorClock thread_vc_min;
@@ -628,14 +635,14 @@
DRD_(s_discard_ordered_segments_count)++;
DRD_(vc_init)(&thread_vc_min, 0, 0);
- thread_compute_minimum_vc(&thread_vc_min);
+ DRD_(thread_compute_minimum_vc)(&thread_vc_min);
if (DRD_(sg_get_trace)())
{
char msg[256];
VectorClock thread_vc_max;
DRD_(vc_init)(&thread_vc_max, 0, 0);
- thread_compute_maximum_vc(&thread_vc_max);
+ DRD_(thread_compute_maximum_vc)(&thread_vc_max);
VG_(snprintf)(msg, sizeof(msg),
"Discarding ordered segments -- min vc is ");
DRD_(vc_snprint)(msg + VG_(strlen)(msg), sizeof(msg) - VG_(strlen)(msg),
@@ -648,7 +655,8 @@
DRD_(vc_cleanup)(&thread_vc_max);
}
- for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]);
+ i++)
{
Segment* sg;
Segment* sg_next;
@@ -656,7 +664,7 @@
sg && (sg_next = sg->next) && DRD_(vc_lte)(&sg->vc, &thread_vc_min);
sg = sg_next)
{
- thread_discard_segment(i, sg);
+ DRD_(thread_discard_segment)(i, sg);
}
}
DRD_(vc_cleanup)(&thread_vc_min);
@@ -675,11 +683,12 @@
{
unsigned i;
- for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]);
+ i++)
{
Segment* sg;
- // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[i]));
+ // tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[i]));
for (sg = DRD_(g_threadinfo)[i].first; sg; sg = sg->next)
{
@@ -690,11 +699,11 @@
{
/* Merge sg and sg->next into sg. */
DRD_(sg_merge)(sg, sg->next);
- thread_discard_segment(i, sg->next);
+ DRD_(thread_discard_segment)(i, sg->next);
}
}
- // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[i]));
+ // tl_assert(DRD_(sane_ThreadInfo)(&DRD_(g_threadinfo)[i]));
}
}
@@ -729,7 +738,8 @@
if (old_sg == 0)
return True;
- for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]);
+ i++)
{
Segment* q;
@@ -770,7 +780,7 @@
* Create a new segment for the specified thread, and discard any segments
* that cannot cause races anymore.
*/
-void thread_new_segment(const DrdThreadId tid)
+void DRD_(thread_new_segment)(const DrdThreadId tid)
{
Segment* new_sg;
@@ -778,20 +788,20 @@
&& tid != DRD_INVALID_THREADID);
new_sg = DRD_(sg_new)(tid, tid);
- thread_append_segment(tid, new_sg);
+ DRD_(thread_append_segment)(tid, new_sg);
if (conflict_set_update_needed(tid, new_sg))
{
- thread_compute_conflict_set(&DRD_(g_conflict_set),
- DRD_(g_drd_running_tid));
+ DRD_(thread_compute_conflict_set)(&DRD_(g_conflict_set),
+ DRD_(g_drd_running_tid));
DRD_(s_conflict_set_new_segment_count)++;
}
else if (tid == DRD_(g_drd_running_tid))
{
- ...
[truncated message content]
From: <sv...@va...> - 2009-02-15 12:15:00
Author: bart
Date: 2009-02-15 12:14:52 +0000 (Sun, 15 Feb 2009)
New Revision: 9167
Log:
Wrapped DRD_() macro around global and static variables in drd_thread.[ch].
Modified:
trunk/drd/drd_thread.c
trunk/drd/drd_thread.h
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2009-02-15 11:34:57 UTC (rev 9166)
+++ trunk/drd/drd_thread.c 2009-02-15 12:14:52 UTC (rev 9167)
@@ -45,7 +45,7 @@
-// Local functions.
+/* Local functions. */
static void thread_append_segment(const DrdThreadId tid,
Segment* const sg);
@@ -55,60 +55,60 @@
const DrdThreadId tid);
-// Local variables.
+/* Local variables. */
-static ULong s_context_switch_count;
-static ULong s_discard_ordered_segments_count;
-static ULong s_update_conflict_set_count;
-static ULong s_conflict_set_new_segment_count;
-static ULong s_conflict_set_combine_vc_count;
-static ULong s_conflict_set_bitmap_creation_count;
-static ULong s_conflict_set_bitmap2_creation_count;
-static ThreadId s_vg_running_tid = VG_INVALID_THREADID;
-DrdThreadId s_drd_running_tid = DRD_INVALID_THREADID;
-ThreadInfo s_threadinfo[DRD_N_THREADS];
-struct bitmap* s_conflict_set;
-static Bool s_trace_context_switches = False;
-static Bool s_trace_conflict_set = False;
-static Bool s_trace_fork_join = False;
-static Bool s_segment_merging = True;
+static ULong DRD_(s_context_switch_count);
+static ULong DRD_(s_discard_ordered_segments_count);
+static ULong DRD_(s_update_conflict_set_count);
+static ULong DRD_(s_conflict_set_new_segment_count);
+static ULong DRD_(s_conflict_set_combine_vc_count);
+static ULong DRD_(s_conflict_set_bitmap_creation_count);
+static ULong DRD_(s_conflict_set_bitmap2_creation_count);
+static ThreadId DRD_(s_vg_running_tid) = VG_INVALID_THREADID;
+DrdThreadId DRD_(g_drd_running_tid) = DRD_INVALID_THREADID;
+ThreadInfo DRD_(g_threadinfo)[DRD_N_THREADS];
+struct bitmap* DRD_(g_conflict_set);
+static Bool DRD_(s_trace_context_switches) = False;
+static Bool DRD_(s_trace_conflict_set) = False;
+static Bool DRD_(s_trace_fork_join) = False;
+static Bool DRD_(s_segment_merging) = True;
-// Function definitions.
+/* Function definitions. */
void thread_trace_context_switches(const Bool t)
{
tl_assert(t == False || t == True);
- s_trace_context_switches = t;
+ DRD_(s_trace_context_switches) = t;
}
void thread_trace_conflict_set(const Bool t)
{
tl_assert(t == False || t == True);
- s_trace_conflict_set = t;
+ DRD_(s_trace_conflict_set) = t;
}
Bool DRD_(thread_get_trace_fork_join)(void)
{
- return s_trace_fork_join;
+ return DRD_(s_trace_fork_join);
}
void DRD_(thread_set_trace_fork_join)(const Bool t)
{
tl_assert(t == False || t == True);
- s_trace_fork_join = t;
+ DRD_(s_trace_fork_join) = t;
}
void thread_set_segment_merging(const Bool m)
{
tl_assert(m == False || m == True);
- s_segment_merging = m;
+ DRD_(s_segment_merging) = m;
}
/**
* Convert Valgrind's ThreadId into a DrdThreadId. Report failure if
* Valgrind's ThreadId does not yet exist.
- **/
+ */
DrdThreadId VgThreadIdToDrdThreadId(const ThreadId tid)
{
int i;
@@ -118,8 +118,8 @@
for (i = 1; i < DRD_N_THREADS; i++)
{
- if (s_threadinfo[i].vg_thread_exists == True
- && s_threadinfo[i].vg_threadid == tid)
+ if (DRD_(g_threadinfo)[i].vg_thread_exists == True
+ && DRD_(g_threadinfo)[i].vg_threadid == tid)
{
return i;
}
@@ -137,23 +137,23 @@
for (i = 1; i < DRD_N_THREADS; i++)
{
- if (s_threadinfo[i].vg_thread_exists == False
- && s_threadinfo[i].posix_thread_exists == False
- && s_threadinfo[i].detached_posix_thread == False)
+ if (DRD_(g_threadinfo)[i].vg_thread_exists == False
+ && DRD_(g_threadinfo)[i].posix_thread_exists == False
+ && DRD_(g_threadinfo)[i].detached_posix_thread == False)
{
- s_threadinfo[i].vg_thread_exists = True;
- s_threadinfo[i].vg_threadid = tid;
- s_threadinfo[i].pt_threadid = INVALID_POSIX_THREADID;
- s_threadinfo[i].stack_min = 0;
- s_threadinfo[i].stack_min_min = 0;
- s_threadinfo[i].stack_startup = 0;
- s_threadinfo[i].stack_max = 0;
- s_threadinfo[i].is_recording = True;
- s_threadinfo[i].synchr_nesting = 0;
- if (s_threadinfo[i].first != 0)
+ DRD_(g_threadinfo)[i].vg_thread_exists = True;
+ DRD_(g_threadinfo)[i].vg_threadid = tid;
+ DRD_(g_threadinfo)[i].pt_threadid = INVALID_POSIX_THREADID;
+ DRD_(g_threadinfo)[i].stack_min = 0;
+ DRD_(g_threadinfo)[i].stack_min_min = 0;
+ DRD_(g_threadinfo)[i].stack_startup = 0;
+ DRD_(g_threadinfo)[i].stack_max = 0;
+ DRD_(g_threadinfo)[i].is_recording = True;
+ DRD_(g_threadinfo)[i].synchr_nesting = 0;
+ if (DRD_(g_threadinfo)[i].first != 0)
VG_(printf)("drd thread id = %d\n", i);
- tl_assert(s_threadinfo[i].first == 0);
- tl_assert(s_threadinfo[i].last == 0);
+ tl_assert(DRD_(g_threadinfo)[i].first == 0);
+ tl_assert(DRD_(g_threadinfo)[i].last == 0);
return i;
}
}
@@ -171,8 +171,8 @@
for (i = 1; i < DRD_N_THREADS; i++)
{
- if (s_threadinfo[i].posix_thread_exists
- && s_threadinfo[i].pt_threadid == tid)
+ if (DRD_(g_threadinfo)[i].posix_thread_exists
+ && DRD_(g_threadinfo)[i].pt_threadid == tid)
{
return i;
}
@@ -184,15 +184,16 @@
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return (s_threadinfo[tid].vg_thread_exists
- ? s_threadinfo[tid].vg_threadid
+ return (DRD_(g_threadinfo)[tid].vg_thread_exists
+ ? DRD_(g_threadinfo)[tid].vg_threadid
: VG_INVALID_THREADID);
}
#if 0
-/** Sanity check of the doubly linked list of segments referenced by a
- * ThreadInfo struct.
- * @return True if sane, False if not.
+/**
+ * Sanity check of the doubly linked list of segments referenced by a
+ * ThreadInfo struct.
+ * @return True if sane, False if not.
*/
static Bool sane_ThreadInfo(const ThreadInfo* const ti)
{
@@ -231,15 +232,15 @@
tl_assert(0 <= (int)created && created < DRD_N_THREADS
&& created != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[created].first == 0);
- tl_assert(s_threadinfo[created].last == 0);
+ tl_assert(DRD_(g_threadinfo)[created].first == 0);
+ tl_assert(DRD_(g_threadinfo)[created].last == 0);
thread_append_segment(created, DRD_(sg_new)(creator, created));
return created;
}
/**
- * Initialize s_threadinfo[] for a newly created thread. Must be called after
+ * Initialize DRD_(g_threadinfo)[] for a newly created thread. Must be called after
 * the thread has been created and before any client instructions are run
* on the newly created thread, e.g. from the handler installed via
* VG_(track_pre_thread_first_insn)().
@@ -251,18 +252,20 @@
tl_assert(0 <= (int)created && created < DRD_N_THREADS
&& created != DRD_INVALID_THREADID);
- s_threadinfo[created].stack_max = VG_(thread_get_stack_max)(vg_created);
- s_threadinfo[created].stack_startup = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_min = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_min_min = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_size = VG_(thread_get_stack_size)(vg_created);
- tl_assert(s_threadinfo[created].stack_max != 0);
+ DRD_(g_threadinfo)[created].stack_max = VG_(thread_get_stack_max)(vg_created);
+ DRD_(g_threadinfo)[created].stack_startup = DRD_(g_threadinfo)[created].stack_max;
+ DRD_(g_threadinfo)[created].stack_min = DRD_(g_threadinfo)[created].stack_max;
+ DRD_(g_threadinfo)[created].stack_min_min = DRD_(g_threadinfo)[created].stack_max;
+ DRD_(g_threadinfo)[created].stack_size = VG_(thread_get_stack_size)(vg_created);
+ tl_assert(DRD_(g_threadinfo)[created].stack_max != 0);
return created;
}
-/* Process VG_USERREQ__POST_THREAD_JOIN. This client request is invoked just */
-/* after thread drd_joiner joined thread drd_joinee. */
+/**
+ * Process VG_USERREQ__POST_THREAD_JOIN. This client request is invoked just
+ * after thread drd_joiner joined thread drd_joinee.
+ */
void DRD_(thread_post_join)(DrdThreadId drd_joiner, DrdThreadId drd_joinee)
{
tl_assert(IsValidDrdThreadId(drd_joiner));
@@ -271,7 +274,7 @@
thread_combine_vc(drd_joiner, drd_joinee);
thread_new_segment(drd_joiner);
- if (s_trace_fork_join)
+ if (DRD_(s_trace_fork_join))
{
const ThreadId joiner = DrdThreadIdToVgThreadId(drd_joiner);
const ThreadId joinee = DrdThreadIdToVgThreadId(drd_joinee);
@@ -307,45 +310,47 @@
DRD_(barrier_thread_delete)(drd_joinee);
}
-/* NPTL hack: NPTL allocates the 'struct pthread' on top of the stack, */
-/* and accesses this data structure from multiple threads without locking. */
-/* Any conflicting accesses in the range stack_startup..stack_max will be */
-/* ignored. */
+/**
+ * NPTL hack: NPTL allocates the 'struct pthread' on top of the stack,
+ * and accesses this data structure from multiple threads without locking.
+ * Any conflicting accesses in the range stack_startup..stack_max will be
+ * ignored.
+ */
void thread_set_stack_startup(const DrdThreadId tid, const Addr stack_startup)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].stack_min <= stack_startup);
- tl_assert(stack_startup <= s_threadinfo[tid].stack_max);
- s_threadinfo[tid].stack_startup = stack_startup;
+ tl_assert(DRD_(g_threadinfo)[tid].stack_min <= stack_startup);
+ tl_assert(stack_startup <= DRD_(g_threadinfo)[tid].stack_max);
+ DRD_(g_threadinfo)[tid].stack_startup = stack_startup;
}
Addr thread_get_stack_min(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return s_threadinfo[tid].stack_min;
+ return DRD_(g_threadinfo)[tid].stack_min;
}
Addr thread_get_stack_min_min(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return s_threadinfo[tid].stack_min_min;
+ return DRD_(g_threadinfo)[tid].stack_min_min;
}
Addr thread_get_stack_max(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return s_threadinfo[tid].stack_max;
+ return DRD_(g_threadinfo)[tid].stack_max;
}
SizeT thread_get_stack_size(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return s_threadinfo[tid].stack_size;
+ return DRD_(g_threadinfo)[tid].stack_size;
}
/**
@@ -359,42 +364,44 @@
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].synchr_nesting >= 0);
- for (sg = s_threadinfo[tid].last; sg; sg = sg_prev)
+ tl_assert(DRD_(g_threadinfo)[tid].synchr_nesting >= 0);
+ for (sg = DRD_(g_threadinfo)[tid].last; sg; sg = sg_prev)
{
sg_prev = sg->prev;
sg->prev = 0;
sg->next = 0;
DRD_(sg_put)(sg);
}
- s_threadinfo[tid].vg_thread_exists = False;
- s_threadinfo[tid].posix_thread_exists = False;
- tl_assert(s_threadinfo[tid].detached_posix_thread == False);
- s_threadinfo[tid].first = 0;
- s_threadinfo[tid].last = 0;
+ DRD_(g_threadinfo)[tid].vg_thread_exists = False;
+ DRD_(g_threadinfo)[tid].posix_thread_exists = False;
+ tl_assert(DRD_(g_threadinfo)[tid].detached_posix_thread == False);
+ DRD_(g_threadinfo)[tid].first = 0;
+ DRD_(g_threadinfo)[tid].last = 0;
}
-/* Called after a thread performed its last memory access and before */
-/* thread_delete() is called. Note: thread_delete() is only called for */
-/* joinable threads, not for detached threads. */
+/**
+ * Called after a thread performed its last memory access and before
+ * thread_delete() is called. Note: thread_delete() is only called for
+ * joinable threads, not for detached threads.
+ */
void thread_finished(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- s_threadinfo[tid].vg_thread_exists = False;
+ DRD_(g_threadinfo)[tid].vg_thread_exists = False;
- if (s_threadinfo[tid].detached_posix_thread)
+ if (DRD_(g_threadinfo)[tid].detached_posix_thread)
{
/* Once a detached thread has finished, its stack is deallocated and */
/* should no longer be taken into account when computing the conflict set*/
- s_threadinfo[tid].stack_min = s_threadinfo[tid].stack_max;
+ DRD_(g_threadinfo)[tid].stack_min = DRD_(g_threadinfo)[tid].stack_max;
/* For a detached thread, calling pthread_exit() invalidates the */
/* POSIX thread ID associated with the detached thread. For joinable */
/* POSIX threads however, the POSIX thread ID remains live after the */
/* pthread_exit() call until pthread_join() is called. */
- s_threadinfo[tid].posix_thread_exists = False;
+ DRD_(g_threadinfo)[tid].posix_thread_exists = False;
}
}
@@ -403,26 +410,26 @@
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].pt_threadid != INVALID_POSIX_THREADID);
+ tl_assert(DRD_(g_threadinfo)[tid].pt_threadid != INVALID_POSIX_THREADID);
- s_threadinfo[tid].synchr_nesting = 0;
+ DRD_(g_threadinfo)[tid].synchr_nesting = 0;
}
void thread_set_pthreadid(const DrdThreadId tid, const PThreadId ptid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].pt_threadid == INVALID_POSIX_THREADID);
+ tl_assert(DRD_(g_threadinfo)[tid].pt_threadid == INVALID_POSIX_THREADID);
tl_assert(ptid != INVALID_POSIX_THREADID);
- s_threadinfo[tid].posix_thread_exists = True;
- s_threadinfo[tid].pt_threadid = ptid;
+ DRD_(g_threadinfo)[tid].posix_thread_exists = True;
+ DRD_(g_threadinfo)[tid].pt_threadid = ptid;
}
Bool thread_get_joinable(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- return ! s_threadinfo[tid].detached_posix_thread;
+ return ! DRD_(g_threadinfo)[tid].detached_posix_thread;
}
void thread_set_joinable(const DrdThreadId tid, const Bool joinable)
@@ -430,28 +437,28 @@
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
tl_assert(!! joinable == joinable);
- tl_assert(s_threadinfo[tid].pt_threadid != INVALID_POSIX_THREADID);
+ tl_assert(DRD_(g_threadinfo)[tid].pt_threadid != INVALID_POSIX_THREADID);
#if 0
VG_(message)(Vg_DebugMsg,
"thread_set_joinable(%d/%d, %s)",
tid,
- s_threadinfo[tid].vg_threadid,
+ DRD_(g_threadinfo)[tid].vg_threadid,
joinable ? "joinable" : "detached");
#endif
- s_threadinfo[tid].detached_posix_thread = ! joinable;
+ DRD_(g_threadinfo)[tid].detached_posix_thread = ! joinable;
}
void thread_set_vg_running_tid(const ThreadId vg_tid)
{
tl_assert(vg_tid != VG_INVALID_THREADID);
- if (vg_tid != s_vg_running_tid)
+ if (vg_tid != DRD_(s_vg_running_tid))
{
thread_set_running_tid(vg_tid, VgThreadIdToDrdThreadId(vg_tid));
}
- tl_assert(s_vg_running_tid != VG_INVALID_THREADID);
- tl_assert(s_drd_running_tid != DRD_INVALID_THREADID);
+ tl_assert(DRD_(s_vg_running_tid) != VG_INVALID_THREADID);
+ tl_assert(DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
}
void thread_set_running_tid(const ThreadId vg_tid, const DrdThreadId drd_tid)
@@ -459,45 +466,45 @@
tl_assert(vg_tid != VG_INVALID_THREADID);
tl_assert(drd_tid != DRD_INVALID_THREADID);
- if (vg_tid != s_vg_running_tid)
+ if (vg_tid != DRD_(s_vg_running_tid))
{
- if (s_trace_context_switches
- && s_drd_running_tid != DRD_INVALID_THREADID)
+ if (DRD_(s_trace_context_switches)
+ && DRD_(g_drd_running_tid) != DRD_INVALID_THREADID)
{
VG_(message)(Vg_DebugMsg,
"Context switch from thread %d/%d to thread %d/%d;"
" segments: %llu",
- s_vg_running_tid, s_drd_running_tid,
+ DRD_(s_vg_running_tid), DRD_(g_drd_running_tid),
DrdThreadIdToVgThreadId(drd_tid), drd_tid,
DRD_(sg_get_segments_alive_count)());
}
- s_vg_running_tid = vg_tid;
- s_drd_running_tid = drd_tid;
- thread_compute_conflict_set(&s_conflict_set, drd_tid);
- s_context_switch_count++;
+ DRD_(s_vg_running_tid) = vg_tid;
+ DRD_(g_drd_running_tid) = drd_tid;
+ thread_compute_conflict_set(&DRD_(g_conflict_set), drd_tid);
+ DRD_(s_context_switch_count)++;
}
- tl_assert(s_vg_running_tid != VG_INVALID_THREADID);
- tl_assert(s_drd_running_tid != DRD_INVALID_THREADID);
+ tl_assert(DRD_(s_vg_running_tid) != VG_INVALID_THREADID);
+ tl_assert(DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
}
int thread_enter_synchr(const DrdThreadId tid)
{
tl_assert(IsValidDrdThreadId(tid));
- return s_threadinfo[tid].synchr_nesting++;
+ return DRD_(g_threadinfo)[tid].synchr_nesting++;
}
int thread_leave_synchr(const DrdThreadId tid)
{
tl_assert(IsValidDrdThreadId(tid));
- tl_assert(s_threadinfo[tid].synchr_nesting >= 1);
- return --s_threadinfo[tid].synchr_nesting;
+ tl_assert(DRD_(g_threadinfo)[tid].synchr_nesting >= 1);
+ return --DRD_(g_threadinfo)[tid].synchr_nesting;
}
int thread_get_synchr_nesting_count(const DrdThreadId tid)
{
tl_assert(IsValidDrdThreadId(tid));
- return s_threadinfo[tid].synchr_nesting;
+ return DRD_(g_threadinfo)[tid].synchr_nesting;
}
/** Append a new segment at the end of the segment list. */
@@ -505,59 +512,60 @@
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- // tl_assert(sane_ThreadInfo(&s_threadinfo[tid]));
- sg->prev = s_threadinfo[tid].last;
+ // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
+ sg->prev = DRD_(g_threadinfo)[tid].last;
sg->next = 0;
- if (s_threadinfo[tid].last)
- s_threadinfo[tid].last->next = sg;
- s_threadinfo[tid].last = sg;
- if (s_threadinfo[tid].first == 0)
- s_threadinfo[tid].first = sg;
- // tl_assert(sane_ThreadInfo(&s_threadinfo[tid]));
+ if (DRD_(g_threadinfo)[tid].last)
+ DRD_(g_threadinfo)[tid].last->next = sg;
+ DRD_(g_threadinfo)[tid].last = sg;
+ if (DRD_(g_threadinfo)[tid].first == 0)
+ DRD_(g_threadinfo)[tid].first = sg;
+ // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
}
-/** Remove a segment from the segment list of thread threadid, and free the
- * associated memory.
+/**
+ * Remove a segment from the segment list of thread threadid, and free the
+ * associated memory.
*/
static void thread_discard_segment(const DrdThreadId tid, Segment* const sg)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- //tl_assert(sane_ThreadInfo(&s_threadinfo[tid]));
+ //tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
if (sg->prev)
sg->prev->next = sg->next;
if (sg->next)
sg->next->prev = sg->prev;
- if (sg == s_threadinfo[tid].first)
- s_threadinfo[tid].first = sg->next;
- if (sg == s_threadinfo[tid].last)
- s_threadinfo[tid].last = sg->prev;
+ if (sg == DRD_(g_threadinfo)[tid].first)
+ DRD_(g_threadinfo)[tid].first = sg->next;
+ if (sg == DRD_(g_threadinfo)[tid].last)
+ DRD_(g_threadinfo)[tid].last = sg->prev;
DRD_(sg_put)(sg);
- //tl_assert(sane_ThreadInfo(&s_threadinfo[tid]));
+ //tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[tid]));
}
VectorClock* thread_get_vc(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].last);
- return &s_threadinfo[tid].last->vc;
+ tl_assert(DRD_(g_threadinfo)[tid].last);
+ return &DRD_(g_threadinfo)[tid].last->vc;
}
-/** Return the latest segment of thread 'tid' and increment its reference
- * count.
+/**
+ * Return the latest segment of thread 'tid' and increment its reference count.
*/
void thread_get_latest_segment(Segment** sg, const DrdThreadId tid)
{
tl_assert(sg);
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].last);
+ tl_assert(DRD_(g_threadinfo)[tid].last);
DRD_(sg_put)(*sg);
- *sg = DRD_(sg_get)(s_threadinfo[tid].last);
+ *sg = DRD_(sg_get)(DRD_(g_threadinfo)[tid].last);
}
/**
@@ -572,9 +580,9 @@
Segment* latest_sg;
first = True;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
- latest_sg = s_threadinfo[i].last;
+ latest_sg = DRD_(g_threadinfo)[i].last;
if (latest_sg)
{
if (first)
@@ -593,9 +601,9 @@
Segment* latest_sg;
first = True;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
- latest_sg = s_threadinfo[i].last;
+ latest_sg = DRD_(g_threadinfo)[i].last;
if (latest_sg)
{
if (first)
@@ -617,7 +625,7 @@
unsigned i;
VectorClock thread_vc_min;
- s_discard_ordered_segments_count++;
+ DRD_(s_discard_ordered_segments_count)++;
DRD_(vc_init)(&thread_vc_min, 0, 0);
thread_compute_minimum_vc(&thread_vc_min);
@@ -640,11 +648,11 @@
DRD_(vc_cleanup)(&thread_vc_max);
}
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
Segment* sg;
Segment* sg_next;
- for (sg = s_threadinfo[i].first;
+ for (sg = DRD_(g_threadinfo)[i].first;
sg && (sg_next = sg->next) && DRD_(vc_lte)(&sg->vc, &thread_vc_min);
sg = sg_next)
{
@@ -654,25 +662,26 @@
DRD_(vc_cleanup)(&thread_vc_min);
}
-/** Merge all segments that may be merged without triggering false positives
- * or discarding real data races. For the theoretical background of segment
- * merging, see also the following paper:
- * Mark Christiaens, Michiel Ronsse and Koen De Bosschere.
- * Bounding the number of segment histories during data race detection.
- * Parallel Computing archive, Volume 28, Issue 9, pp 1221-1238,
- * September 2002.
+/**
+ * Merge all segments that may be merged without triggering false positives
+ * or discarding real data races. For the theoretical background of segment
+ * merging, see also the following paper:
+ * Mark Christiaens, Michiel Ronsse and Koen De Bosschere.
+ * Bounding the number of segment histories during data race detection.
+ * Parallel Computing archive, Volume 28, Issue 9, pp 1221-1238,
+ * September 2002.
*/
static void thread_merge_segments(void)
{
unsigned i;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
Segment* sg;
- // tl_assert(sane_ThreadInfo(&s_threadinfo[i]));
+ // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[i]));
- for (sg = s_threadinfo[i].first; sg; sg = sg->next)
+ for (sg = DRD_(g_threadinfo)[i].first; sg; sg = sg->next)
{
if (DRD_(sg_get_refcnt)(sg) == 1
&& sg->next
@@ -685,19 +694,20 @@
}
}
- // tl_assert(sane_ThreadInfo(&s_threadinfo[i]));
+ // tl_assert(sane_ThreadInfo(&DRD_(g_threadinfo)[i]));
}
}
-/** Every change in the vector clock of a thread may cause segments that
- * were previously ordered to this thread to become unordered. Hence,
- * it may be necessary to recalculate the conflict set if the vector clock
- * of the current thread is updated. This function check whether such a
- * recalculation is necessary.
+/**
+ * Every change in the vector clock of a thread may cause segments that
+ * were previously ordered to this thread to become unordered. Hence,
+ * it may be necessary to recalculate the conflict set if the vector clock
+ * of the current thread is updated. This function checks whether such a
+ * recalculation is necessary.
*
- * @param tid Thread ID of the thread to which a new segment has been
- * appended.
- * @param new_sg Pointer to the most recent segment of thread tid.
+ * @param tid Thread ID of the thread to which a new segment has been
+ * appended.
+ * @param new_sg Pointer to the most recent segment of thread tid.
*/
static Bool conflict_set_update_needed(const DrdThreadId tid,
const Segment* const new_sg)
@@ -710,7 +720,7 @@
/* If a new segment was added to another thread than the running thread, */
/* just tell the caller to update the conflict set. */
- if (tid != s_drd_running_tid)
+ if (tid != DRD_(g_drd_running_tid))
return True;
/* Always let the caller update the conflict set after creation of the */
@@ -719,14 +729,14 @@
if (old_sg == 0)
return True;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
Segment* q;
- if (i == s_drd_running_tid)
+ if (i == DRD_(g_drd_running_tid))
continue;
- for (q = s_threadinfo[i].last; q; q = q->prev)
+ for (q = DRD_(g_threadinfo)[i].last; q; q = q->prev)
{
/* If the expression below evaluates to false, this expression will */
/* also evaluate to false for all subsequent iterations. So stop */
@@ -756,8 +766,9 @@
#endif
}
-/** Create a new segment for the specified thread, and discard any segments
- * that cannot cause races anymore.
+/**
+ * Create a new segment for the specified thread, and discard any segments
+ * that cannot cause races anymore.
*/
void thread_new_segment(const DrdThreadId tid)
{
@@ -771,18 +782,21 @@
if (conflict_set_update_needed(tid, new_sg))
{
- thread_compute_conflict_set(&s_conflict_set, s_drd_running_tid);
- s_conflict_set_new_segment_count++;
+ thread_compute_conflict_set(&DRD_(g_conflict_set),
+ DRD_(g_drd_running_tid));
+ DRD_(s_conflict_set_new_segment_count)++;
}
- else if (tid == s_drd_running_tid)
+ else if (tid == DRD_(g_drd_running_tid))
{
- tl_assert(thread_conflict_set_up_to_date(s_drd_running_tid));
+ tl_assert(thread_conflict_set_up_to_date(DRD_(g_drd_running_tid)));
}
thread_discard_ordered_segments();
- if (s_segment_merging)
+ if (DRD_(s_segment_merging))
+ {
thread_merge_segments();
+ }
}
/** Call this function after thread 'joiner' joined thread 'joinee'. */
@@ -793,36 +807,39 @@
&& joiner != DRD_INVALID_THREADID);
tl_assert(0 <= (int)joinee && joinee < DRD_N_THREADS
&& joinee != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[joiner].last);
- tl_assert(s_threadinfo[joinee].last);
- DRD_(vc_combine)(&s_threadinfo[joiner].last->vc, &s_threadinfo[joinee].last->vc);
+ tl_assert(DRD_(g_threadinfo)[joiner].last);
+ tl_assert(DRD_(g_threadinfo)[joinee].last);
+ DRD_(vc_combine)(&DRD_(g_threadinfo)[joiner].last->vc,
+ &DRD_(g_threadinfo)[joinee].last->vc);
thread_discard_ordered_segments();
- if (joiner == s_drd_running_tid)
+ if (joiner == DRD_(g_drd_running_tid))
{
- thread_compute_conflict_set(&s_conflict_set, joiner);
+ thread_compute_conflict_set(&DRD_(g_conflict_set), joiner);
}
}
-/** Call this function after thread 'tid' had to wait because of thread
- * synchronization until the memory accesses in the segment with vector clock
- * 'vc' finished.
+/**
+ * Call this function after thread 'tid' had to wait because of thread
+ * synchronization until the memory accesses in the segment with vector clock
+ * 'vc' finished.
*/
void thread_combine_vc2(DrdThreadId tid, const VectorClock* const vc)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].last);
+ tl_assert(DRD_(g_threadinfo)[tid].last);
tl_assert(vc);
- DRD_(vc_combine)(&s_threadinfo[tid].last->vc, vc);
- thread_compute_conflict_set(&s_conflict_set, tid);
+ DRD_(vc_combine)(&DRD_(g_threadinfo)[tid].last->vc, vc);
+ thread_compute_conflict_set(&DRD_(g_conflict_set), tid);
thread_discard_ordered_segments();
- s_conflict_set_combine_vc_count++;
+ DRD_(s_conflict_set_combine_vc_count)++;
}
-/** Call this function whenever a thread is no longer using the memory
- * [ a1, a2 [, e.g. because of a call to free() or a stack pointer
- * increase.
+/**
+ * Call this function whenever a thread is no longer using the memory
+ * [ a1, a2 [, e.g. because of a call to free() or a stack pointer
+ * increase.
*/
void thread_stop_using_mem(const Addr a1, const Addr a2)
{
@@ -831,13 +848,13 @@
/* For all threads, mark the range [ a1, a2 [ as no longer in use. */
other_user = DRD_INVALID_THREADID;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
Segment* p;
- for (p = s_threadinfo[i].first; p; p = p->next)
+ for (p = DRD_(g_threadinfo)[i].first; p; p = p->next)
{
if (other_user == DRD_INVALID_THREADID
- && i != s_drd_running_tid)
+ && i != DRD_(g_drd_running_tid))
{
if (UNLIKELY(bm_test_and_clear(p->bm, a1, a2)))
{
@@ -849,12 +866,14 @@
}
}
- /* If any other thread had accessed memory in [ a1, a2 [, update the */
- /* conflict set. */
+ /*
+ * If any other thread had accessed memory in [ a1, a2 [, update the
+ * conflict set.
+ */
if (other_user != DRD_INVALID_THREADID
- && bm_has_any_access(s_conflict_set, a1, a2))
+ && bm_has_any_access(DRD_(g_conflict_set), a1, a2))
{
- thread_compute_conflict_set(&s_conflict_set, thread_get_running_tid());
+ thread_compute_conflict_set(&DRD_(g_conflict_set), thread_get_running_tid());
}
}
@@ -862,16 +881,16 @@
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(! s_threadinfo[tid].is_recording);
- s_threadinfo[tid].is_recording = True;
+ tl_assert(! DRD_(g_threadinfo)[tid].is_recording);
+ DRD_(g_threadinfo)[tid].is_recording = True;
}
void thread_stop_recording(const DrdThreadId tid)
{
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].is_recording);
- s_threadinfo[tid].is_recording = False;
+ tl_assert(DRD_(g_threadinfo)[tid].is_recording);
+ DRD_(g_threadinfo)[tid].is_recording = False;
}
void thread_print_all(void)
@@ -879,20 +898,20 @@
unsigned i;
Segment* p;
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
- if (s_threadinfo[i].first)
+ if (DRD_(g_threadinfo)[i].first)
{
VG_(printf)("**************\n"
"* thread %3d (%d/%d/%d/0x%lx/%d) *\n"
"**************\n",
i,
- s_threadinfo[i].vg_thread_exists,
- s_threadinfo[i].vg_threadid,
- s_threadinfo[i].posix_thread_exists,
- s_threadinfo[i].pt_threadid,
- s_threadinfo[i].detached_posix_thread);
- for (p = s_threadinfo[i].first; p; p = p->next)
+ DRD_(g_threadinfo)[i].vg_thread_exists,
+ DRD_(g_threadinfo)[i].vg_threadid,
+ DRD_(g_threadinfo)[i].posix_thread_exists,
+ DRD_(g_threadinfo)[i].pt_threadid,
+ DRD_(g_threadinfo)[i].detached_posix_thread);
+ for (p = DRD_(g_threadinfo)[i].first; p; p = p->next)
{
DRD_(sg_print)(p);
}
@@ -939,17 +958,19 @@
&& tid != DRD_INVALID_THREADID);
tl_assert(p);
- for (i = 0; i < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); i++)
+ for (i = 0; i < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); i++)
{
if (i != tid)
{
Segment* q;
- for (q = s_threadinfo[i].last; q; q = q->prev)
+ for (q = DRD_(g_threadinfo)[i].last; q; q = q->prev)
{
- // Since q iterates over the segments of thread i in order of
- // decreasing vector clocks, if q->vc <= p->vc, then
- // q->next->vc <= p->vc will also hold. Hence, break out of the
- // loop once this condition is met.
+ /*
+ * Since q iterates over the segments of thread i in order of
+ * decreasing vector clocks, if q->vc <= p->vc, then
+ * q->prev->vc <= p->vc will also hold. Hence, break out of the
+ * loop once this condition is met.
+ */
if (DRD_(vc_lte)(&q->vc, &p->vc))
break;
if (! DRD_(vc_lte)(&p->vc, &q->vc))
@@ -978,7 +999,7 @@
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- for (p = s_threadinfo[tid].first; p; p = p->next)
+ for (p = DRD_(g_threadinfo)[tid].first; p; p = p->next)
{
if (bm_has(p->bm, addr, addr + size, access_type))
{
@@ -988,8 +1009,9 @@
}
}
-/** Verify whether the conflict set for thread tid is up to date. Only perform
- * the check if the environment variable DRD_VERIFY_CONFLICT_SET has been set.
+/**
+ * Verify whether the conflict set for thread tid is up to date. Only perform
+ * the check if the environment variable DRD_VERIFY_CONFLICT_SET has been set.
*/
static Bool thread_conflict_set_up_to_date(const DrdThreadId tid)
{
@@ -1006,13 +1028,14 @@
return True;
thread_compute_conflict_set(&computed_conflict_set, tid);
- result = bm_equal(s_conflict_set, computed_conflict_set);
+ result = bm_equal(DRD_(g_conflict_set), computed_conflict_set);
bm_delete(computed_conflict_set);
return result;
}
-/** Compute a bitmap that represents the union of all memory accesses of all
- * segments that are unordered to the current segment of the thread tid.
+/**
+ * Compute a bitmap that represents the union of all memory accesses of all
+ * segments that are unordered to the current segment of the thread tid.
*/
static void thread_compute_conflict_set(struct bitmap** conflict_set,
const DrdThreadId tid)
@@ -1021,11 +1044,11 @@
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(tid == s_drd_running_tid);
+ tl_assert(tid == DRD_(g_drd_running_tid));
- s_update_conflict_set_count++;
- s_conflict_set_bitmap_creation_count -= bm_get_bitmap_creation_count();
- s_conflict_set_bitmap2_creation_count -= bm_get_bitmap2_creation_count();
+ DRD_(s_update_conflict_set_count)++;
+ DRD_(s_conflict_set_bitmap_creation_count) -= bm_get_bitmap_creation_count();
+ DRD_(s_conflict_set_bitmap2_creation_count) -= bm_get_bitmap2_creation_count();
if (*conflict_set)
{
@@ -1033,7 +1056,7 @@
}
*conflict_set = bm_new();
- if (s_trace_conflict_set)
+ if (DRD_(s_trace_conflict_set))
{
char msg[256];
@@ -1042,15 +1065,15 @@
DrdThreadIdToVgThreadId(tid), tid);
DRD_(vc_snprint)(msg + VG_(strlen)(msg),
sizeof(msg) - VG_(strlen)(msg),
- &s_threadinfo[tid].last->vc);
+ &DRD_(g_threadinfo)[tid].last->vc);
VG_(message)(Vg_UserMsg, "%s", msg);
}
- p = s_threadinfo[tid].last;
+ p = DRD_(g_threadinfo)[tid].last;
{
unsigned j;
- if (s_trace_conflict_set)
+ if (DRD_(s_trace_conflict_set))
{
char msg[256];
@@ -1063,16 +1086,16 @@
VG_(message)(Vg_UserMsg, "%s", msg);
}
- for (j = 0; j < sizeof(s_threadinfo) / sizeof(s_threadinfo[0]); j++)
+ for (j = 0; j < sizeof(DRD_(g_threadinfo)) / sizeof(DRD_(g_threadinfo)[0]); j++)
{
if (j != tid && IsValidDrdThreadId(j))
{
const Segment* q;
- for (q = s_threadinfo[j].last; q; q = q->prev)
+ for (q = DRD_(g_threadinfo)[j].last; q; q = q->prev)
{
if (! DRD_(vc_lte)(&q->vc, &p->vc) && ! DRD_(vc_lte)(&p->vc, &q->vc))
{
- if (s_trace_conflict_set)
+ if (DRD_(s_trace_conflict_set))
{
char msg[256];
VG_(snprintf)(msg, sizeof(msg),
@@ -1086,7 +1109,7 @@
}
else
{
- if (s_trace_conflict_set)
+ if (DRD_(s_trace_conflict_set))
{
char msg[256];
VG_(snprintf)(msg, sizeof(msg),
@@ -1102,10 +1125,10 @@
}
}
- s_conflict_set_bitmap_creation_count += bm_get_bitmap_creation_count();
- s_conflict_set_bitmap2_creation_count += bm_get_bitmap2_creation_count();
+ DRD_(s_conflict_set_bitmap_creation_count) += bm_get_bitmap_creation_count();
+ DRD_(s_conflict_set_bitmap2_creation_count) += bm_get_bitmap2_creation_count();
- if (0 && s_trace_conflict_set)
+ if (0 && DRD_(s_trace_conflict_set))
{
VG_(message)(Vg_UserMsg, "[%d] new conflict set:", tid);
bm_print(*conflict_set);
@@ -1115,29 +1138,29 @@
ULong thread_get_context_switch_count(void)
{
- return s_context_switch_count;
+ return DRD_(s_context_switch_count);
}
ULong thread_get_discard_ordered_segments_count(void)
{
- return s_discard_ordered_segments_count;
+ return DRD_(s_discard_ordered_segments_count);
}
ULong thread_get_update_conflict_set_count(ULong* dsnsc, ULong* dscvc)
{
tl_assert(dsnsc);
tl_assert(dscvc);
- *dsnsc = s_conflict_set_new_segment_count;
- *dscvc = s_conflict_set_combine_vc_count;
- return s_update_conflict_set_count;
+ *dsnsc = DRD_(s_conflict_set_new_segment_count);
+ *dscvc = DRD_(s_conflict_set_combine_vc_count);
+ return DRD_(s_update_conflict_set_count);
}
ULong thread_get_conflict_set_bitmap_creation_count(void)
{
- return s_conflict_set_bitmap_creation_count;
+ return DRD_(s_conflict_set_bitmap_creation_count);
}
ULong thread_get_conflict_set_bitmap2_creation_count(void)
{
- return s_conflict_set_bitmap2_creation_count;
+ return DRD_(s_conflict_set_bitmap2_creation_count);
}
Modified: trunk/drd/drd_thread.h
===================================================================
--- trunk/drd/drd_thread.h 2009-02-15 11:34:57 UTC (rev 9166)
+++ trunk/drd/drd_thread.h 2009-02-15 12:14:52 UTC (rev 9167)
@@ -27,7 +27,7 @@
#define __THREAD_H
-// Includes.
+/* Include directives. */
#include "drd_basics.h"
#include "drd_segment.h"
@@ -37,7 +37,7 @@
#include "pub_tool_threadstate.h" // VG_N_THREADS
-// Defines.
+/* Defines. */
#define DRD_N_THREADS VG_N_THREADS
@@ -51,7 +51,7 @@
#define INVALID_POSIX_THREADID ((PThreadId)0)
-// Type definitions.
+/* Type definitions. */
typedef UWord PThreadId;
@@ -61,34 +61,38 @@
Segment* last;
ThreadId vg_threadid;
PThreadId pt_threadid;
- Addr stack_min_min; /** Lowest value stack pointer ever had. */
- Addr stack_min; /** Current stack pointer. */
- Addr stack_startup; /** Stack pointer after pthread_create() finished.*/
- Addr stack_max; /** Top of stack. */
- SizeT stack_size; /** Maximum size of stack. */
- /// Indicates whether the Valgrind core knows about this thread.
+ Addr stack_min_min; /**< Lowest value stack pointer ever had. */
+ Addr stack_min; /**< Current stack pointer. */
+ Addr stack_startup; /**< Stack pointer after pthread_create() finished. */
+ Addr stack_max; /**< Top of stack. */
+ SizeT stack_size; /**< Maximum size of stack. */
+ /** Indicates whether the Valgrind core knows about this thread. */
Bool vg_thread_exists;
- /// Indicates whether there is an associated POSIX thread ID.
+ /** Indicates whether there is an associated POSIX thread ID. */
Bool posix_thread_exists;
- /// If true, indicates that there is a corresponding POSIX thread ID and
- /// a corresponding OS thread that is detached.
+ /**
+ * If true, indicates that there is a corresponding POSIX thread ID and
+ * a corresponding OS thread that is detached.
+ */
Bool detached_posix_thread;
- /// Wether recording of memory accesses is active.
+ /** Whether recording of memory accesses is active. */
Bool is_recording;
- /// Nesting level of synchronization functions called by the client.
+ /** Nesting level of synchronization functions called by the client. */
Int synchr_nesting;
} ThreadInfo;
-// Local variables of drd_thread.c that are declared here such that these
-// can be accessed by inline functions.
+/*
+ * Local variables of drd_thread.c that are declared here so that they
+ * can be accessed by inline functions.
+ */
-extern DrdThreadId s_drd_running_tid;
-extern ThreadInfo s_threadinfo[DRD_N_THREADS];
-extern struct bitmap* s_conflict_set;
+extern DrdThreadId DRD_(g_drd_running_tid);
+extern ThreadInfo DRD_(g_threadinfo)[DRD_N_THREADS];
+extern struct bitmap* DRD_(g_conflict_set);
-// Function declarations.
+/* Function declarations. */
void thread_trace_context_switches(const Bool t);
void thread_trace_conflict_set(const Bool t);
@@ -147,37 +151,39 @@
ULong thread_get_conflict_set_bitmap2_creation_count(void);
+/* Inline function definitions. */
+
static __inline__
Bool IsValidDrdThreadId(const DrdThreadId tid)
{
return (0 <= (int)tid && tid < DRD_N_THREADS && tid != DRD_INVALID_THREADID
- && ! (s_threadinfo[tid].vg_thread_exists == False
- && s_threadinfo[tid].posix_thread_exists == False
- && s_threadinfo[tid].detached_posix_thread == False));
+ && ! (DRD_(g_threadinfo)[tid].vg_thread_exists == False
+ && DRD_(g_threadinfo)[tid].posix_thread_exists == False
+ && DRD_(g_threadinfo)[tid].detached_posix_thread == False));
}
static __inline__
DrdThreadId thread_get_running_tid(void)
{
- tl_assert(s_drd_running_tid != DRD_INVALID_THREADID);
- return s_drd_running_tid;
+ tl_assert(DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
+ return DRD_(g_drd_running_tid);
}
static __inline__
struct bitmap* thread_get_conflict_set(void)
{
- return s_conflict_set;
+ return DRD_(g_conflict_set);
}
static __inline__
Bool running_thread_is_recording(void)
{
#ifdef ENABLE_DRD_CONSISTENCY_CHECKS
- tl_assert(0 <= (int)s_drd_running_tid && s_drd_running_tid < DRD_N_THREADS
- && s_drd_running_tid != DRD_INVALID_THREADID);
+ tl_assert(0 <= (int)DRD_(g_drd_running_tid) && DRD_(g_drd_running_tid) < DRD_N_THREADS
+ && DRD_(g_drd_running_tid) != DRD_INVALID_THREADID);
#endif
- return (s_threadinfo[s_drd_running_tid].synchr_nesting == 0
- && s_threadinfo[s_drd_running_tid].is_recording);
+ return (DRD_(g_threadinfo)[DRD_(g_drd_running_tid)].synchr_nesting == 0
+ && DRD_(g_threadinfo)[DRD_(g_drd_running_tid)].is_recording);
}
static __inline__
@@ -188,27 +194,28 @@
&& tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
#endif
- s_threadinfo[tid].stack_min = stack_min;
+ DRD_(g_threadinfo)[tid].stack_min = stack_min;
#ifdef ENABLE_DRD_CONSISTENCY_CHECKS
/* This function can be called after the thread has been created but */
/* before drd_post_thread_create() has filled in stack_max. */
- tl_assert(s_threadinfo[tid].stack_min < s_threadinfo[tid].stack_max
- || s_threadinfo[tid].stack_max == 0);
+ tl_assert(DRD_(g_threadinfo)[tid].stack_min < DRD_(g_threadinfo)[tid].stack_max
+ || DRD_(g_threadinfo)[tid].stack_max == 0);
#endif
- if (UNLIKELY(stack_min < s_threadinfo[tid].stack_min_min))
+ if (UNLIKELY(stack_min < DRD_(g_threadinfo)[tid].stack_min_min))
{
- s_threadinfo[tid].stack_min_min = stack_min;
+ DRD_(g_threadinfo)[tid].stack_min_min = stack_min;
}
}
-/** Return true if and only if the specified address is on the stack of the
- * currently scheduled thread.
+/**
+ * Return true if and only if the specified address is on the stack of the
+ * currently scheduled thread.
*/
static __inline__
Bool thread_address_on_stack(const Addr a)
{
- return (s_threadinfo[s_drd_running_tid].stack_min <= a
- && a < s_threadinfo[s_drd_running_tid].stack_max);
+ return (DRD_(g_threadinfo)[DRD_(g_drd_running_tid)].stack_min <= a
+ && a < DRD_(g_threadinfo)[DRD_(g_drd_running_tid)].stack_max);
}
/** Return a pointer to the latest segment for the specified thread. */
@@ -218,16 +225,16 @@
#ifdef ENABLE_DRD_CONSISTENCY_CHECKS
tl_assert(0 <= (int)tid && tid < DRD_N_THREADS
&& tid != DRD_INVALID_THREADID);
- tl_assert(s_threadinfo[tid].last);
+ tl_assert(DRD_(g_threadinfo)[tid].last);
#endif
- return s_threadinfo[tid].last;
+ return DRD_(g_threadinfo)[tid].last;
}
/** Return a pointer to the latest segment for the running thread. */
static __inline__
Segment* running_thread_get_segment(void)
{
- return thread_get_segment(s_drd_running_tid);
+ return thread_get_segment(DRD_(g_drd_running_tid));
}
#endif // __THREAD_H
From: <sv...@va...> - 2009-02-15 12:14:32
Author: bart
Date: 2009-02-15 11:34:57 +0000 (Sun, 15 Feb 2009)
New Revision: 9166
Log:
Wrapped DRD_() macro around all client object function names.
Modified:
trunk/drd/drd_barrier.c
trunk/drd/drd_clientobj.c
trunk/drd/drd_clientobj.h
trunk/drd/drd_cond.c
trunk/drd/drd_error.c
trunk/drd/drd_main.c
trunk/drd/drd_mutex.c
trunk/drd/drd_rwlock.c
trunk/drd/drd_semaphore.c
Modified: trunk/drd/drd_barrier.c
===================================================================
--- trunk/drd/drd_barrier.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_barrier.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -116,8 +116,9 @@
VG_(free));
}
-/** Deallocate the memory allocated by barrier_initialize() and in p->oset.
- * Called by clientobj_destroy().
+/**
+ * Deallocate the memory allocated by barrier_initialize() and in p->oset.
+ * Called by clientobj_destroy().
*/
void DRD_(barrier_cleanup)(struct barrier_info* p)
{
@@ -156,10 +157,10 @@
tl_assert(barrier_type == pthread_barrier || barrier_type == gomp_barrier);
tl_assert(offsetof(DrdClientobj, barrier) == 0);
- p = &(clientobj_get(barrier, ClientBarrier)->barrier);
+ p = &(DRD_(clientobj_get)(barrier, ClientBarrier)->barrier);
if (p == 0)
{
- p = &(clientobj_add(barrier, ClientBarrier)->barrier);
+ p = &(DRD_(clientobj_add)(barrier, ClientBarrier)->barrier);
DRD_(barrier_initialize)(p, barrier, barrier_type, count);
}
return p;
@@ -170,7 +171,7 @@
static struct barrier_info* DRD_(barrier_get)(const Addr barrier)
{
tl_assert(offsetof(DrdClientobj, barrier) == 0);
- return &(clientobj_get(barrier, ClientBarrier)->barrier);
+ return &(DRD_(clientobj_get)(barrier, ClientBarrier)->barrier);
}
/** Initialize a barrier with client address barrier, client size size, and
@@ -288,7 +289,7 @@
&bei);
}
- clientobj_remove(p->a1, ClientBarrier);
+ DRD_(clientobj_remove)(p->a1, ClientBarrier);
}
/** Called before pthread_barrier_wait(). */
@@ -415,8 +416,8 @@
{
struct barrier_info* p;
- clientobj_resetiter();
- for ( ; (p = &clientobj_next(ClientBarrier)->barrier) != 0; )
+ DRD_(clientobj_resetiter)();
+ for ( ; (p = &(DRD_(clientobj_next)(ClientBarrier)->barrier)) != 0; )
{
struct barrier_thread_info* q;
const UWord word_tid = tid;
Modified: trunk/drd/drd_clientobj.c
===================================================================
--- trunk/drd/drd_clientobj.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_clientobj.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -36,21 +36,21 @@
#include "pub_tool_threadstate.h" // VG_(get_running_tid)()
-// Local variables.
+/* Local variables. */
static OSet* s_clientobj;
static Bool s_trace_clientobj;
-// Function definitions.
+/* Function definitions. */
-void clientobj_set_trace(const Bool trace)
+void DRD_(clientobj_set_trace)(const Bool trace)
{
s_trace_clientobj = trace;
}
/** Initialize the client object set. */
-void clientobj_init(void)
+void DRD_(clientobj_init)(void)
{
tl_assert(s_clientobj == 0);
s_clientobj = VG_(OSetGen_Create)(0, 0, VG_(malloc), "drd.clientobj.ci.1",
@@ -58,10 +58,12 @@
tl_assert(s_clientobj);
}
-/** Free the memory allocated for the client object set.
- * @pre Client object set is empty.
+/**
+ * Free the memory allocated for the client object set.
+ *
+ * @pre Client object set is empty.
*/
-void clientobj_cleanup(void)
+void DRD_(clientobj_cleanup)(void)
{
tl_assert(s_clientobj);
tl_assert(VG_(OSetGen_Size)(s_clientobj) == 0);
@@ -73,7 +75,7 @@
* Return 0 if there is no client object in the set with the specified start
* address.
*/
-DrdClientobj* clientobj_get_any(const Addr addr)
+DrdClientobj* DRD_(clientobj_get_any)(const Addr addr)
{
return VG_(OSetGen_Lookup)(s_clientobj, &addr);
}
@@ -82,7 +84,7 @@
* and that has object type t. Return 0 if there is no client object in the
* set with the specified start address.
*/
-DrdClientobj* clientobj_get(const Addr addr, const ObjType t)
+DrdClientobj* DRD_(clientobj_get)(const Addr addr, const ObjType t)
{
DrdClientobj* p;
p = VG_(OSetGen_Lookup)(s_clientobj, &addr);
@@ -94,7 +96,7 @@
/** Return true if and only if the address range of any client object overlaps
* with the specified address range.
*/
-Bool clientobj_present(const Addr a1, const Addr a2)
+Bool DRD_(clientobj_present)(const Addr a1, const Addr a2)
{
DrdClientobj *p;
@@ -114,12 +116,11 @@
* of type t. Suppress data race reports on the address range [addr,addr+size[.
* @pre No other client object is present in the address range [addr,addr+size[.
*/
-DrdClientobj*
-clientobj_add(const Addr a1, const ObjType t)
+DrdClientobj* DRD_(clientobj_add)(const Addr a1, const ObjType t)
{
DrdClientobj* p;
- tl_assert(! clientobj_present(a1, a1 + 1));
+ tl_assert(! DRD_(clientobj_present)(a1, a1 + 1));
tl_assert(VG_(OSetGen_Lookup)(s_clientobj, &a1) == 0);
if (s_trace_clientobj)
@@ -138,7 +139,7 @@
return p;
}
-Bool clientobj_remove(const Addr addr, const ObjType t)
+Bool DRD_(clientobj_remove)(const Addr addr, const ObjType t)
{
DrdClientobj* p;
@@ -166,7 +167,7 @@
return False;
}
-void clientobj_stop_using_mem(const Addr a1, const Addr a2)
+void DRD_(clientobj_stop_using_mem)(const Addr a1, const Addr a2)
{
Addr removed_at;
DrdClientobj* p;
@@ -183,7 +184,7 @@
if (a1 <= p->any.a1 && p->any.a1 < a2)
{
removed_at = p->any.a1;
- clientobj_remove(p->any.a1, p->any.type);
+ DRD_(clientobj_remove)(p->any.a1, p->any.type);
/* The above call removes an element from the oset and hence */
/* invalidates the iterator. Set the iterator back. */
VG_(OSetGen_ResetIter)(s_clientobj);
@@ -198,12 +199,12 @@
}
}
-void clientobj_resetiter(void)
+void DRD_(clientobj_resetiter)(void)
{
VG_(OSetGen_ResetIter)(s_clientobj);
}
-DrdClientobj* clientobj_next(const ObjType t)
+DrdClientobj* DRD_(clientobj_next)(const ObjType t)
{
DrdClientobj* p;
while ((p = VG_(OSetGen_Next)(s_clientobj)) != 0 && p->any.type != t)
@@ -211,7 +212,7 @@
return p;
}
-const char* clientobj_type_name(const ObjType t)
+const char* DRD_(clientobj_type_name)(const ObjType t)
{
switch (t)
{
Modified: trunk/drd/drd_clientobj.h
===================================================================
--- trunk/drd/drd_clientobj.h 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_clientobj.h 2009-02-15 11:34:57 UTC (rev 9166)
@@ -27,20 +27,20 @@
#define __DRD_CLIENTOBJ_H
+#include "drd_basics.h" /* DrdThreadId */
#include "drd_clientreq.h" /* MutexT */
-#include "drd_thread.h" /* DrdThreadId */
#include "pub_tool_basics.h"
#include "pub_tool_execontext.h" /* ExeContext */
#include "pub_tool_oset.h"
#include "pub_tool_xarray.h"
-// Forward declarations.
+/* Forward declarations. */
union drd_clientobj;
-// Type definitions.
+/* Type definitions. */
typedef enum {
ClientMutex = 1,
@@ -60,16 +60,16 @@
struct mutex_info
{
- Addr a1;
- ObjType type;
- void (*cleanup)(union drd_clientobj*);
- ExeContext* first_observed_at;
- MutexT mutex_type; // pthread_mutex_t or pthread_spinlock_t.
- int recursion_count; // 0 if free, >= 1 if locked.
- DrdThreadId owner; // owner if locked, last owner if free.
- Segment* last_locked_segment;
- ULong acquiry_time_ms;
- ExeContext* acquired_at;
+ Addr a1;
+ ObjType type;
+ void (*cleanup)(union drd_clientobj*);
+ ExeContext* first_observed_at;
+ MutexT mutex_type; // pthread_mutex_t or pthread_spinlock_t.
+ int recursion_count; // 0 if free, >= 1 if locked.
+ DrdThreadId owner; // owner if locked, last owner if free.
+ struct segment* last_locked_segment;
+ ULong acquiry_time_ms;
+ ExeContext* acquired_at;
};
struct cond_info
@@ -134,20 +134,20 @@
} DrdClientobj;
-// Function declarations.
+/* Function declarations. */
-void clientobj_set_trace(const Bool trace);
-void clientobj_init(void);
-void clientobj_cleanup(void);
-DrdClientobj* clientobj_get_any(const Addr addr);
-DrdClientobj* clientobj_get(const Addr addr, const ObjType t);
-Bool clientobj_present(const Addr a1, const Addr a2);
-DrdClientobj* clientobj_add(const Addr a1, const ObjType t);
-Bool clientobj_remove(const Addr addr, const ObjType t);
-void clientobj_stop_using_mem(const Addr a1, const Addr a2);
-void clientobj_resetiter(void);
-DrdClientobj* clientobj_next(const ObjType t);
-const char* clientobj_type_name(const ObjType t);
+void DRD_(clientobj_set_trace)(const Bool trace);
+void DRD_(clientobj_init)(void);
+void DRD_(clientobj_cleanup)(void);
+DrdClientobj* DRD_(clientobj_get_any)(const Addr addr);
+DrdClientobj* DRD_(clientobj_get)(const Addr addr, const ObjType t);
+Bool DRD_(clientobj_present)(const Addr a1, const Addr a2);
+DrdClientobj* DRD_(clientobj_add)(const Addr a1, const ObjType t);
+Bool DRD_(clientobj_remove)(const Addr addr, const ObjType t);
+void DRD_(clientobj_stop_using_mem)(const Addr a1, const Addr a2);
+void DRD_(clientobj_resetiter)(void);
+DrdClientobj* DRD_(clientobj_next)(const ObjType t);
+const char* DRD_(clientobj_type_name)(const ObjType t);
#endif /* __DRD_CLIENTOBJ_H */
Modified: trunk/drd/drd_cond.c
===================================================================
--- trunk/drd/drd_cond.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_cond.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -70,8 +70,9 @@
p->mutex = 0;
}
-/** Free the memory that was allocated by cond_initialize(). Called by
- * clientobj_remove().
+/**
+ * Free the memory that was allocated by cond_initialize(). Called by
+ * DRD_(clientobj_remove)().
*/
static void cond_cleanup(struct cond_info* p)
{
@@ -79,7 +80,7 @@
if (p->mutex)
{
struct mutex_info* q;
- q = &clientobj_get(p->mutex, ClientMutex)->mutex;
+ q = &(DRD_(clientobj_get)(p->mutex, ClientMutex)->mutex);
tl_assert(q);
{
CondDestrErrInfo cde = { p->a1, q->a1, q->owner };
@@ -98,10 +99,10 @@
struct cond_info *p;
tl_assert(offsetof(DrdClientobj, cond) == 0);
- p = &clientobj_get(cond, ClientCondvar)->cond;
+ p = &(DRD_(clientobj_get)(cond, ClientCondvar)->cond);
if (p == 0)
{
- p = &clientobj_add(cond, ClientCondvar)->cond;
+ p = &(DRD_(clientobj_add)(cond, ClientCondvar)->cond);
cond_initialize(p, cond);
}
return p;
@@ -110,7 +111,7 @@
static struct cond_info* cond_get(const Addr cond)
{
tl_assert(offsetof(DrdClientobj, cond) == 0);
- return &clientobj_get(cond, ClientCondvar)->cond;
+ return &(DRD_(clientobj_get)(cond, ClientCondvar)->cond);
}
/** Called before pthread_cond_init(). */
@@ -179,7 +180,7 @@
&cei);
}
- clientobj_remove(p->a1, ClientCondvar);
+ DRD_(clientobj_remove)(p->a1, ClientCondvar);
}
/** Called before pthread_cond_wait(). Note: before this function is called,
Modified: trunk/drd/drd_error.c
===================================================================
--- trunk/drd/drd_error.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_error.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -78,13 +78,13 @@
{
DrdClientobj* cl;
- cl = clientobj_get_any(obj);
+ cl = DRD_(clientobj_get_any)(obj);
if (cl)
{
tl_assert(cl->any.first_observed_at);
VG_(message)(Vg_UserMsg,
"%s 0x%lx was first observed at:",
- clientobj_type_name(cl->any.type),
+ DRD_(clientobj_type_name)(cl->any.type),
obj);
VG_(pp_ExeContext)(cl->any.first_observed_at);
}
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_main.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -129,7 +129,7 @@
if (trace_barrier != -1)
DRD_(barrier_set_trace)(trace_barrier);
if (trace_clientobj != -1)
- clientobj_set_trace(trace_clientobj);
+ DRD_(clientobj_set_trace)(trace_clientobj);
if (trace_cond != -1)
cond_set_trace(trace_cond);
if (trace_csw != -1)
@@ -296,7 +296,7 @@
if (! is_stack_mem || DRD_(get_check_stack_accesses)())
{
thread_stop_using_mem(a1, a2);
- clientobj_stop_using_mem(a1, a2);
+ DRD_(clientobj_stop_using_mem)(a1, a2);
DRD_(suppression_stop_using_mem)(a1, a2);
}
}
@@ -615,7 +615,7 @@
DRD_(suppression_init)();
- clientobj_init();
+ DRD_(clientobj_init)();
}
Modified: trunk/drd/drd_mutex.c
===================================================================
--- trunk/drd/drd_mutex.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_mutex.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -129,13 +129,13 @@
struct mutex_info* p;
tl_assert(offsetof(DrdClientobj, mutex) == 0);
- p = &clientobj_get(mutex, ClientMutex)->mutex;
+ p = &(DRD_(clientobj_get)(mutex, ClientMutex)->mutex);
if (p)
{
return p;
}
- if (clientobj_present(mutex, mutex + 1))
+ if (DRD_(clientobj_present)(mutex, mutex + 1))
{
not_a_mutex(mutex);
return 0;
@@ -143,7 +143,7 @@
tl_assert(mutex_type != mutex_type_unknown);
- p = &clientobj_add(mutex, ClientMutex)->mutex;
+ p = &(DRD_(clientobj_add)(mutex, ClientMutex)->mutex);
mutex_initialize(p, mutex, mutex_type);
return p;
}
@@ -151,7 +151,7 @@
struct mutex_info* mutex_get(const Addr mutex)
{
tl_assert(offsetof(DrdClientobj, mutex) == 0);
- return &clientobj_get(mutex, ClientMutex)->mutex;
+ return &(DRD_(clientobj_get)(mutex, ClientMutex)->mutex);
}
/** Called before pthread_mutex_init(). */
@@ -208,7 +208,7 @@
return;
}
- clientobj_remove(mutex, ClientMutex);
+ DRD_(clientobj_remove)(mutex, ClientMutex);
}
/** Called before pthread_mutex_lock() is invoked. If a data structure for
@@ -497,8 +497,8 @@
{
struct mutex_info* p;
- clientobj_resetiter();
- for ( ; (p = &clientobj_next(ClientMutex)->mutex) != 0; )
+ DRD_(clientobj_resetiter)();
+ for ( ; (p = &(DRD_(clientobj_next)(ClientMutex)->mutex)) != 0; )
{
if (p->owner == tid && p->recursion_count > 0)
{
Modified: trunk/drd/drd_rwlock.c
===================================================================
--- trunk/drd/drd_rwlock.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_rwlock.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -227,13 +227,13 @@
struct rwlock_info* p;
tl_assert(offsetof(DrdClientobj, rwlock) == 0);
- p = &clientobj_get(rwlock, ClientRwlock)->rwlock;
+ p = &(DRD_(clientobj_get)(rwlock, ClientRwlock)->rwlock);
if (p)
{
return p;
}
- if (clientobj_present(rwlock, rwlock + 1))
+ if (DRD_(clientobj_present)(rwlock, rwlock + 1))
{
GenericErrInfo GEI;
VG_(maybe_record_error)(VG_(get_running_tid)(),
@@ -244,7 +244,7 @@
return 0;
}
- p = &clientobj_add(rwlock, ClientRwlock)->rwlock;
+ p = &(DRD_(clientobj_add)(rwlock, ClientRwlock)->rwlock);
rwlock_initialize(p, rwlock);
return p;
}
@@ -252,7 +252,7 @@
static struct rwlock_info* rwlock_get(const Addr rwlock)
{
tl_assert(offsetof(DrdClientobj, rwlock) == 0);
- return &clientobj_get(rwlock, ClientRwlock)->rwlock;
+ return &(DRD_(clientobj_get)(rwlock, ClientRwlock)->rwlock);
}
/** Called before pthread_rwlock_init(). */
@@ -306,7 +306,7 @@
return;
}
- clientobj_remove(rwlock, ClientRwlock);
+ DRD_(clientobj_remove)(rwlock, ClientRwlock);
}
/** Called before pthread_rwlock_rdlock() is invoked. If a data structure for
@@ -562,8 +562,8 @@
{
struct rwlock_info* p;
- clientobj_resetiter();
- for ( ; (p = &clientobj_next(ClientRwlock)->rwlock) != 0; )
+ DRD_(clientobj_resetiter)();
+ for ( ; (p = &(DRD_(clientobj_next)(ClientRwlock)->rwlock)) != 0; )
{
struct rwlock_thread_info* q;
if (rwlock_is_locked_by(p, tid))
Modified: trunk/drd/drd_semaphore.c
===================================================================
--- trunk/drd/drd_semaphore.c 2009-02-15 11:00:29 UTC (rev 9165)
+++ trunk/drd/drd_semaphore.c 2009-02-15 11:34:57 UTC (rev 9166)
@@ -102,8 +102,9 @@
VG_(free), sizeof(Segment*));
}
-/** Free the memory that was allocated by semaphore_initialize(). Called by
- * clientobj_remove().
+/**
+ * Free the memory that was allocated by semaphore_initialize(). Called by
+ * DRD_(clientobj_remove)().
*/
static void semaphore_cleanup(struct semaphore_info* p)
{
@@ -131,11 +132,11 @@
struct semaphore_info *p;
tl_assert(offsetof(DrdClientobj, semaphore) == 0);
- p = &clientobj_get(semaphore, ClientSemaphore)->semaphore;
+ p = &(DRD_(clientobj_get)(semaphore, ClientSemaphore)->semaphore);
if (p == 0)
{
tl_assert(offsetof(DrdClientobj, semaphore) == 0);
- p = &clientobj_add(semaphore, ClientSemaphore)->semaphore;
+ p = &(DRD_(clientobj_add)(semaphore, ClientSemaphore)->semaphore);
semaphore_initialize(p, semaphore);
}
return p;
@@ -144,7 +145,7 @@
static struct semaphore_info* semaphore_get(const Addr semaphore)
{
tl_assert(offsetof(DrdClientobj, semaphore) == 0);
- return &clientobj_get(semaphore, ClientSemaphore)->semaphore;
+ return &(DRD_(clientobj_get)(semaphore, ClientSemaphore)->semaphore);
}
/** Called before sem_init(). */
@@ -217,7 +218,7 @@
return;
}
- clientobj_remove(semaphore, ClientSemaphore);
+ DRD_(clientobj_remove)(semaphore, ClientSemaphore);
}
/** Called before sem_wait(). */
From: <sv...@va...> - 2009-02-15 11:00:35
Author: bart
Date: 2009-02-15 11:00:29 +0000 (Sun, 15 Feb 2009)
New Revision: 9165
Log:
Wrapped DRD_() macro around all barrier-related function names.
Modified:
trunk/drd/drd_barrier.c
trunk/drd/drd_barrier.h
trunk/drd/drd_clientreq.c
trunk/drd/drd_main.c
trunk/drd/drd_thread.c
Modified: trunk/drd/drd_barrier.c
===================================================================
--- trunk/drd/drd_barrier.c 2009-02-15 10:40:44 UTC (rev 9164)
+++ trunk/drd/drd_barrier.c 2009-02-15 11:00:29 UTC (rev 9165)
@@ -36,7 +36,7 @@
#include "pub_tool_threadstate.h" // VG_(get_running_tid)()
-// Type definitions.
+/* Type definitions. */
/** Information associated with one thread participating in a barrier. */
struct barrier_thread_info
@@ -49,31 +49,32 @@
};
-// Local functions.
+/* Local functions. */
-static void barrier_cleanup(struct barrier_info* p);
-static const char* barrier_get_typename(struct barrier_info* const p);
-static const char* barrier_type_name(const BarrierT bt);
+static void DRD_(barrier_cleanup)(struct barrier_info* p);
+static const char* DRD_(barrier_get_typename)(struct barrier_info* const p);
+static const char* DRD_(barrier_type_name)(const BarrierT bt);
-// Local variables.
+/* Local variables. */
-static Bool s_trace_barrier = False;
-static ULong s_barrier_segment_creation_count;
+static Bool DRD_(s_trace_barrier) = False;
+static ULong DRD_(s_barrier_segment_creation_count);
-// Function definitions.
+/* Function definitions. */
-void barrier_set_trace(const Bool trace_barrier)
+void DRD_(barrier_set_trace)(const Bool trace_barrier)
{
- s_trace_barrier = trace_barrier;
+ DRD_(s_trace_barrier) = trace_barrier;
}
/** Initialize the structure *p with the specified thread ID and iteration
* information. */
-static void barrier_thread_initialize(struct barrier_thread_info* const p,
- const DrdThreadId tid,
- const Word iteration)
+static
+void DRD_(barrier_thread_initialize)(struct barrier_thread_info* const p,
+ const DrdThreadId tid,
+ const Word iteration)
{
p->tid = tid;
p->iteration = iteration;
@@ -82,7 +83,7 @@
}
/** Deallocate the memory that was allocated in barrier_thread_initialize(). */
-static void barrier_thread_destroy(struct barrier_thread_info* const p)
+static void DRD_(barrier_thread_destroy)(struct barrier_thread_info* const p)
{
tl_assert(p);
DRD_(sg_put)(p->sg[0]);
@@ -92,16 +93,16 @@
/** Initialize the structure *p with the specified client-side barrier address,
* barrier object size and number of participants in each barrier. */
static
-void barrier_initialize(struct barrier_info* const p,
- const Addr barrier,
- const BarrierT barrier_type,
- const Word count)
+void DRD_(barrier_initialize)(struct barrier_info* const p,
+ const Addr barrier,
+ const BarrierT barrier_type,
+ const Word count)
{
tl_assert(barrier != 0);
tl_assert(barrier_type == pthread_barrier || barrier_type == gomp_barrier);
tl_assert(p->a1 == barrier);
- p->cleanup = (void(*)(DrdClientobj*))barrier_cleanup;
+ p->cleanup = (void(*)(DrdClientobj*))DRD_(barrier_cleanup);
p->barrier_type = barrier_type;
p->count = count;
p->pre_iteration = 0;
@@ -118,7 +119,7 @@
/** Deallocate the memory allocated by barrier_initialize() and in p->oset.
* Called by clientobj_destroy().
*/
-void barrier_cleanup(struct barrier_info* p)
+void DRD_(barrier_cleanup)(struct barrier_info* p)
{
struct barrier_thread_info* q;
@@ -138,7 +139,7 @@
VG_(OSetGen_ResetIter)(p->oset);
for ( ; (q = VG_(OSetGen_Next)(p->oset)) != 0; )
{
- barrier_thread_destroy(q);
+ DRD_(barrier_thread_destroy)(q);
}
VG_(OSetGen_Destroy)(p->oset);
}
@@ -147,38 +148,38 @@
* found, add it. */
static
struct barrier_info*
-barrier_get_or_allocate(const Addr barrier,
- const BarrierT barrier_type, const Word count)
+DRD_(barrier_get_or_allocate)(const Addr barrier,
+ const BarrierT barrier_type, const Word count)
{
struct barrier_info *p;
tl_assert(barrier_type == pthread_barrier || barrier_type == gomp_barrier);
tl_assert(offsetof(DrdClientobj, barrier) == 0);
- p = &clientobj_get(barrier, ClientBarrier)->barrier;
+ p = &(clientobj_get(barrier, ClientBarrier)->barrier);
if (p == 0)
{
- p = &clientobj_add(barrier, ClientBarrier)->barrier;
- barrier_initialize(p, barrier, barrier_type, count);
+ p = &(clientobj_add(barrier, ClientBarrier)->barrier);
+ DRD_(barrier_initialize)(p, barrier, barrier_type, count);
}
return p;
}
/** Look up the address of the information associated with the client-side
* barrier object. */
-static struct barrier_info* barrier_get(const Addr barrier)
+static struct barrier_info* DRD_(barrier_get)(const Addr barrier)
{
tl_assert(offsetof(DrdClientobj, barrier) == 0);
- return &clientobj_get(barrier, ClientBarrier)->barrier;
+ return &(clientobj_get(barrier, ClientBarrier)->barrier);
}
/** Initialize a barrier with client address barrier, client size size, and
* where count threads participate in each barrier.
* Called before pthread_barrier_init().
*/
-void barrier_init(const Addr barrier,
- const BarrierT barrier_type, const Word count,
- const Bool reinitialization)
+void DRD_(barrier_init)(const Addr barrier,
+ const BarrierT barrier_type, const Word count,
+ const Bool reinitialization)
{
struct barrier_info* p;
@@ -196,7 +197,7 @@
if (! reinitialization && barrier_type == pthread_barrier)
{
- p = barrier_get(barrier);
+ p = DRD_(barrier_get)(barrier);
if (p)
{
BarrierErrInfo bei = { barrier };
@@ -207,9 +208,9 @@
&bei);
}
}
- p = barrier_get_or_allocate(barrier, barrier_type, count);
+ p = DRD_(barrier_get_or_allocate)(barrier, barrier_type, count);
- if (s_trace_barrier)
+ if (DRD_(s_trace_barrier))
{
if (reinitialization)
{
@@ -217,7 +218,7 @@
"[%d/%d] barrier_reinit %s 0x%lx count %ld -> %ld",
VG_(get_running_tid)(),
thread_get_running_tid(),
- barrier_get_typename(p),
+ DRD_(barrier_get_typename)(p),
barrier,
p->count,
count);
@@ -228,7 +229,7 @@
"[%d/%d] barrier_init %s 0x%lx",
VG_(get_running_tid)(),
thread_get_running_tid(),
- barrier_get_typename(p),
+ DRD_(barrier_get_typename)(p),
barrier);
}
}
@@ -250,19 +251,19 @@
}
/** Called after pthread_barrier_destroy(). */
-void barrier_destroy(const Addr barrier, const BarrierT barrier_type)
+void DRD_(barrier_destroy)(const Addr barrier, const BarrierT barrier_type)
{
struct barrier_info* p;
- p = barrier_get(barrier);
+ p = DRD_(barrier_get)(barrier);
- if (s_trace_barrier)
+ if (DRD_(s_trace_barrier))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_destroy %s 0x%lx",
VG_(get_running_tid)(),
thread_get_running_tid(),
- barrier_get_typename(p),
+ DRD_(barrier_get_typename)(p),
barrier);
}
@@ -291,14 +292,14 @@
}
/** Called before pthread_barrier_wait(). */
-void barrier_pre_wait(const DrdThreadId tid, const Addr barrier,
- const BarrierT barrier_type)
+void DRD_(barrier_pre_wait)(const DrdThreadId tid, const Addr barrier,
+ const BarrierT barrier_type)
{
struct barrier_info* p;
struct barrier_thread_info* q;
const UWord word_tid = tid;
- p = barrier_get(barrier);
+ p = DRD_(barrier_get)(barrier);
if (p == 0 && barrier_type == gomp_barrier)
{
VG_(message)(Vg_UserMsg, "");
@@ -311,13 +312,13 @@
}
tl_assert(p);
- if (s_trace_barrier)
+ if (DRD_(s_trace_barrier))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_pre_wait %s 0x%lx iteration %ld",
VG_(get_running_tid)(),
thread_get_running_tid(),
- barrier_get_typename(p),
+ DRD_(barrier_get_typename)(p),
barrier,
p->pre_iteration);
}
@@ -326,7 +327,7 @@
if (q == 0)
{
q = VG_(OSetGen_AllocNode)(p->oset, sizeof(*q));
- barrier_thread_initialize(q, tid, p->pre_iteration);
+ DRD_(barrier_thread_initialize)(q, tid, p->pre_iteration);
VG_(OSetGen_Insert)(p->oset, q);
tl_assert(VG_(OSetGen_Lookup)(p->oset, &word_tid) == q);
}
@@ -340,20 +341,20 @@
}
/** Called after pthread_barrier_wait(). */
-void barrier_post_wait(const DrdThreadId tid, const Addr barrier,
- const BarrierT barrier_type, const Bool waited)
+void DRD_(barrier_post_wait)(const DrdThreadId tid, const Addr barrier,
+ const BarrierT barrier_type, const Bool waited)
{
struct barrier_info* p;
- p = barrier_get(barrier);
+ p = DRD_(barrier_get)(barrier);
- if (s_trace_barrier)
+ if (DRD_(s_trace_barrier))
{
VG_(message)(Vg_UserMsg,
"[%d/%d] barrier_post_wait %s 0x%lx iteration %ld",
VG_(get_running_tid)(),
tid,
- p ? barrier_get_typename(p) : "(?)",
+ p ? DRD_(barrier_get_typename)(p) : "(?)",
barrier,
p ? p->post_iteration : -1);
}
@@ -384,7 +385,7 @@
&bei);
q = VG_(OSetGen_AllocNode)(p->oset, sizeof(*q));
- barrier_thread_initialize(q, tid, p->pre_iteration);
+ DRD_(barrier_thread_initialize)(q, tid, p->pre_iteration);
VG_(OSetGen_Insert)(p->oset, q);
tl_assert(VG_(OSetGen_Lookup)(p->oset, &word_tid) == q);
}
@@ -399,7 +400,7 @@
}
thread_new_segment(tid);
- s_barrier_segment_creation_count++;
+ DRD_(s_barrier_segment_creation_count)++;
if (--p->post_waiters_left <= 0)
{
@@ -410,7 +411,7 @@
}
/** Call this function when thread tid stops to exist. */
-void barrier_thread_delete(const DrdThreadId tid)
+void DRD_(barrier_thread_delete)(const DrdThreadId tid)
{
struct barrier_info* p;
@@ -425,20 +426,20 @@
*/
if (q)
{
- barrier_thread_destroy(q);
+ DRD_(barrier_thread_destroy)(q);
VG_(OSetGen_FreeNode)(p->oset, q);
}
}
}
-static const char* barrier_get_typename(struct barrier_info* const p)
+static const char* DRD_(barrier_get_typename)(struct barrier_info* const p)
{
tl_assert(p);
- return barrier_type_name(p->barrier_type);
+ return DRD_(barrier_type_name)(p->barrier_type);
}
-static const char* barrier_type_name(const BarrierT bt)
+static const char* DRD_(barrier_type_name)(const BarrierT bt)
{
switch (bt)
{
@@ -450,7 +451,7 @@
return "?";
}
-ULong get_barrier_segment_creation_count(void)
+ULong DRD_(get_barrier_segment_creation_count)(void)
{
- return s_barrier_segment_creation_count;
+ return DRD_(s_barrier_segment_creation_count);
}
Modified: trunk/drd/drd_barrier.h
===================================================================
--- trunk/drd/drd_barrier.h 2009-02-15 10:40:44 UTC (rev 9164)
+++ trunk/drd/drd_barrier.h 2009-02-15 11:00:29 UTC (rev 9165)
@@ -30,26 +30,26 @@
#define __DRD_BARRIER_H
+#include "drd_basics.h" // DrdThreadId
#include "drd_clientreq.h" // BarrierT
-#include "drd_thread.h" // DrdThreadId
#include "pub_tool_basics.h" // Addr
struct barrier_info;
-void barrier_set_trace(const Bool trace_barrier);
-void barrier_init(const Addr barrier,
- const BarrierT barrier_type, const Word count,
- const Bool reinitialization);
-void barrier_destroy(const Addr barrier, const BarrierT barrier_type);
-void barrier_pre_wait(const DrdThreadId tid, const Addr barrier,
- const BarrierT barrier_type);
-void barrier_post_wait(const DrdThreadId tid, const Addr barrier,
- const BarrierT barrier_type, const Bool waited);
-void barrier_thread_delete(const DrdThreadId threadid);
-void barrier_stop_using_mem(const Addr a1, const Addr a2);
-ULong get_barrier_segment_creation_count(void);
+void DRD_(barrier_set_trace)(const Bool trace_barrier);
+void DRD_(barrier_init)(const Addr barrier,
+ const BarrierT barrier_type, const Word count,
+ const Bool reinitialization);
+void DRD_(barrier_destroy)(const Addr barrier, const BarrierT barrier_type);
+void DRD_(barrier_pre_wait)(const DrdThreadId tid, const Addr barrier,
+ const BarrierT barrier_type);
+void DRD_(barrier_post_wait)(const DrdThreadId tid, const Addr barrier,
+ const BarrierT barrier_type, const Bool waited);
+void DRD_(barrier_thread_delete)(const DrdThreadId threadid);
+void DRD_(barrier_stop_using_mem)(const Addr a1, const Addr a2);
+ULong DRD_(get_barrier_segment_creation_count)(void);
#endif /* __DRD_BARRIER_H */
Modified: trunk/drd/drd_clientreq.c
===================================================================
--- trunk/drd/drd_clientreq.c 2009-02-15 10:40:44 UTC (rev 9164)
+++ trunk/drd/drd_clientreq.c 2009-02-15 11:00:29 UTC (rev 9165)
@@ -303,7 +303,7 @@
case VG_USERREQ__PRE_BARRIER_INIT:
if (thread_enter_synchr(drd_tid) == 0)
- barrier_init(arg[1], arg[2], arg[3], arg[4]);
+ DRD_(barrier_init)(arg[1], arg[2], arg[3], arg[4]);
break;
case VG_USERREQ__POST_BARRIER_INIT:
@@ -316,17 +316,17 @@
case VG_USERREQ__POST_BARRIER_DESTROY:
if (thread_leave_synchr(drd_tid) == 0)
- barrier_destroy(arg[1], arg[2]);
+ DRD_(barrier_destroy)(arg[1], arg[2]);
break;
case VG_USERREQ__PRE_BARRIER_WAIT:
if (thread_enter_synchr(drd_tid) == 0)
- barrier_pre_wait(drd_tid, arg[1], arg[2]);
+ DRD_(barrier_pre_wait)(drd_tid, arg[1], arg[2]);
break;
case VG_USERREQ__POST_BARRIER_WAIT:
if (thread_leave_synchr(drd_tid) == 0)
- barrier_post_wait(drd_tid, arg[1], arg[2], arg[3]);
+ DRD_(barrier_post_wait)(drd_tid, arg[1], arg[2], arg[3]);
break;
case VG_USERREQ__PRE_RWLOCK_INIT:
Modified: trunk/drd/drd_main.c
===================================================================
--- trunk/drd/drd_main.c 2009-02-15 10:40:44 UTC (rev 9164)
+++ trunk/drd/drd_main.c 2009-02-15 11:00:29 UTC (rev 9165)
@@ -127,7 +127,7 @@
DRD_(start_tracing_address_range)(addr, addr + 1);
}
if (trace_barrier != -1)
- barrier_set_trace(trace_barrier);
+ DRD_(barrier_set_trace)(trace_barrier);
if (trace_clientobj != -1)
clientobj_set_trace(trace_clientobj);
if (trace_cond != -1)
@@ -550,7 +550,7 @@
get_mutex_segment_creation_count(),
get_rwlock_segment_creation_count(),
get_semaphore_segment_creation_count(),
- get_barrier_segment_creation_count());
+ DRD_(get_barrier_segment_creation_count)());
VG_(message)(Vg_UserMsg,
" bitmaps: %lld level 1 / %lld level 2 bitmap refs",
bm_get_bitmap_creation_count(),
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2009-02-15 10:40:44 UTC (rev 9164)
+++ trunk/drd/drd_thread.c 2009-02-15 11:00:29 UTC (rev 9165)
@@ -304,7 +304,7 @@
mutex_thread_delete(drd_joinee);
cond_thread_delete(drd_joinee);
semaphore_thread_delete(drd_joinee);
- barrier_thread_delete(drd_joinee);
+ DRD_(barrier_thread_delete)(drd_joinee);
}
/* NPTL hack: NPTL allocates the 'struct pthread' on top of the stack, */
From: <sv...@va...> - 2009-02-15 10:40:52
Author: bart
Date: 2009-02-15 10:40:44 +0000 (Sun, 15 Feb 2009)
New Revision: 9164
Log:
Restored the previous method for passing arguments from the creator thread to the created thread, since the new approach made some regression tests fail. It is not yet clear to me why.
Modified:
trunk/drd/drd_pthread_intercepts.c
Modified: trunk/drd/drd_pthread_intercepts.c
===================================================================
--- trunk/drd/drd_pthread_intercepts.c 2009-02-15 10:38:37 UTC (rev 9163)
+++ trunk/drd/drd_pthread_intercepts.c 2009-02-15 10:40:44 UTC (rev 9164)
@@ -40,9 +40,9 @@
------------------------------------------------------------------ */
/*
- Define _GNU_SOURCE to make sure that pthread_spinlock_t is available when
- compiling with older glibc versions (2.3 or before).
-*/
+ * Define _GNU_SOURCE to make sure that pthread_spinlock_t is available when
+ * compiling with older glibc versions (2.3 or before).
+ */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
@@ -73,7 +73,8 @@
void* (*start)(void*);
void* arg;
int detachstate;
-} VgPosixThreadArgs;
+ int wrapper_started;
+} DrdPosixThreadArgs;
/* Local function declarations. */
@@ -173,21 +174,16 @@
static void* DRD_(thread_wrapper)(void* arg)
{
int res;
- VgPosixThreadArgs* arg_ptr;
- VgPosixThreadArgs arg_copy;
+ DrdPosixThreadArgs* arg_ptr;
+ DrdPosixThreadArgs arg_copy;
VALGRIND_DO_CLIENT_REQUEST(res, 0, VG_USERREQ__DRD_SUPPRESS_CURRENT_STACK,
0, 0, 0, 0, 0);
- arg_ptr = (VgPosixThreadArgs*)arg;
+ arg_ptr = (DrdPosixThreadArgs*)arg;
arg_copy = *arg_ptr;
+ arg_ptr->wrapper_started = 1;
- /*
- * Free the memory 'arg_ptr' points at now such that it does not get
- * leaked when the function called below throws a C++ exception.
- */
- free(arg_ptr);
-
VALGRIND_DO_CLIENT_REQUEST(res, -1, VG_USERREQ__SET_PTHREADID,
pthread_self(), 0, 0, 0, 0);
@@ -276,42 +272,52 @@
int res;
int ret;
OrigFn fn;
- VgPosixThreadArgs* vgargs;
+ DrdPosixThreadArgs thread_args;
VALGRIND_GET_ORIG_FN(fn);
- vgargs = malloc(sizeof *vgargs);
- assert(vgargs);
- vgargs->start = start;
- vgargs->arg = arg;
+ DRD_IGNORE_VAR(thread_args.wrapper_started);
+ thread_args.start = start;
+ thread_args.arg = arg;
+ thread_args.wrapper_started = 0;
/*
* Find out whether the thread will be started as a joinable thread
* or as a detached thread. If no thread attributes have been specified,
* the new thread will be started as a joinable thread.
*/
- vgargs->detachstate = PTHREAD_CREATE_JOINABLE;
+ thread_args.detachstate = PTHREAD_CREATE_JOINABLE;
if (attr)
{
- if (pthread_attr_getdetachstate(attr, &vgargs->detachstate) != 0)
+ if (pthread_attr_getdetachstate(attr, &thread_args.detachstate) != 0)
{
assert(0);
}
}
- assert(vgargs->detachstate == PTHREAD_CREATE_JOINABLE
- || vgargs->detachstate == PTHREAD_CREATE_DETACHED);
+ assert(thread_args.detachstate == PTHREAD_CREATE_JOINABLE
+ || thread_args.detachstate == PTHREAD_CREATE_DETACHED);
/* Suppress NPTL-specific conflicts between creator and created thread. */
VALGRIND_DO_CLIENT_REQUEST(res, -1, VG_USERREQ__DRD_STOP_RECORDING,
0, 0, 0, 0, 0);
- CALL_FN_W_WWWW(ret, fn, thread, attr, DRD_(thread_wrapper), vgargs);
+ CALL_FN_W_WWWW(ret, fn, thread, attr, DRD_(thread_wrapper), &thread_args);
VALGRIND_DO_CLIENT_REQUEST(res, -1, VG_USERREQ__DRD_START_RECORDING,
0, 0, 0, 0, 0);
- /* Free the memory 'vgargs' points at if pthread_create() failed. */
- if (ret != 0)
- free(vgargs);
+ if (ret == 0)
+ {
+ /*
+ * Wait until the thread wrapper started.
+ * @todo Find out why some regression tests fail if thread arguments are
+ * passed via dynamically allocated memory and if the loop below is
+ * removed.
+ */
+ while (! thread_args.wrapper_started)
+ {
+ sched_yield();
+ }
+ }
return ret;
}
From: <sv...@va...> - 2009-02-15 10:38:44
Author: bart
Date: 2009-02-15 10:38:37 +0000 (Sun, 15 Feb 2009)
New Revision: 9163
Log:
Added more comments / rearranged function order.
Modified:
trunk/drd/drd_thread.c
Modified: trunk/drd/drd_thread.c
===================================================================
--- trunk/drd/drd_thread.c 2009-02-15 10:36:32 UTC (rev 9162)
+++ trunk/drd/drd_thread.c 2009-02-15 10:38:37 UTC (rev 9163)
@@ -213,6 +213,14 @@
}
#endif
+/**
+ * Create the first segment for a newly started thread.
+ *
+ * This function is called from the handler installed via
+ * VG_(track_pre_thread_ll_create)(). The Valgrind core invokes this handler
+ * from the context of the creator thread, before the new thread has been
+ * created.
+ */
DrdThreadId thread_pre_create(const DrdThreadId creator,
const ThreadId vg_created)
{
@@ -230,7 +238,29 @@
return created;
}
+/**
+ * Initialize s_threadinfo[] for a newly created thread. Must be called after
+ * the thread has been created and before any client instructions are run
+ * on the newly created thread, e.g. from the handler installed via
+ * VG_(track_pre_thread_first_insn)().
+ */
+DrdThreadId thread_post_create(const ThreadId vg_created)
+{
+ const DrdThreadId created = VgThreadIdToDrdThreadId(vg_created);
+ tl_assert(0 <= (int)created && created < DRD_N_THREADS
+ && created != DRD_INVALID_THREADID);
+
+ s_threadinfo[created].stack_max = VG_(thread_get_stack_max)(vg_created);
+ s_threadinfo[created].stack_startup = s_threadinfo[created].stack_max;
+ s_threadinfo[created].stack_min = s_threadinfo[created].stack_max;
+ s_threadinfo[created].stack_min_min = s_threadinfo[created].stack_max;
+ s_threadinfo[created].stack_size = VG_(thread_get_stack_size)(vg_created);
+ tl_assert(s_threadinfo[created].stack_max != 0);
+
+ return created;
+}
+
/* Process VG_USERREQ__POST_THREAD_JOIN. This client request is invoked just */
/* after thread drd_joiner joined thread drd_joinee. */
void DRD_(thread_post_join)(DrdThreadId drd_joiner, DrdThreadId drd_joinee)
@@ -277,26 +307,6 @@
barrier_thread_delete(drd_joinee);
}
-/** Allocate the first segment for a thread. Call this just after
- * pthread_create().
- */
-DrdThreadId thread_post_create(const ThreadId vg_created)
-{
- const DrdThreadId created = VgThreadIdToDrdThreadId(vg_created);
-
- tl_assert(0 <= (int)created && created < DRD_N_THREADS
- && created != DRD_INVALID_THREADID);
-
- s_threadinfo[created].stack_max = VG_(thread_get_stack_max)(vg_created);
- s_threadinfo[created].stack_startup = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_min = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_min_min = s_threadinfo[created].stack_max;
- s_threadinfo[created].stack_size = VG_(thread_get_stack_size)(vg_created);
- tl_assert(s_threadinfo[created].stack_max != 0);
-
- return created;
-}
-
/* NPTL hack: NPTL allocates the 'struct pthread' on top of the stack, */
/* and accesses this data structure from multiple threads without locking. */
/* Any conflicting accesses in the range stack_startup..stack_max will be */
From: <sv...@va...> - 2009-02-15 10:36:38
Author: bart
Date: 2009-02-15 10:36:32 +0000 (Sun, 15 Feb 2009)
New Revision: 9162
Log:
Changed the order of the function definitions.
Modified:
trunk/drd/drd_clientreq.c
Modified: trunk/drd/drd_clientreq.c
===================================================================
--- trunk/drd/drd_clientreq.c 2009-02-15 10:19:35 UTC (rev 9161)
+++ trunk/drd/drd_clientreq.c 2009-02-15 10:36:32 UTC (rev 9162)
@@ -47,48 +47,15 @@
static Addr DRD_(highest_used_stack_address)(const ThreadId vg_tid);
+/* Function definitions. */
+
/**
- * Walk the stack up to the highest stack frame, and return the stack pointer
- * of the highest stack frame. It is assumed that there are no more than
- * ten stack frames above the current frame. This should be no problem
- * since this function is either called indirectly from the _init() function
- * in vgpreload_exp-drd-*.so or from the thread wrapper for a newly created
- * thread. See also drd_pthread_intercepts.c.
+ * Tell the Valgrind core the address of the DRD function that processes
+ * client requests. Must be called before any client code is run.
*/
-static Addr DRD_(highest_used_stack_address)(const ThreadId vg_tid)
+void DRD_(clientreq_init)(void)
{
- UInt nframes;
- const UInt n_ips = 10;
- UInt i;
- Addr ips[n_ips], sps[n_ips];
- Addr husa;
-
- nframes = VG_(get_StackTrace)(vg_tid, ips, n_ips, sps, 0, 0);
- tl_assert(1 <= nframes && nframes <= n_ips);
-
- /* A hack to work around VG_(get_StackTrace)()'s behavior that sometimes */
- /* the topmost stackframes it returns are bogus (this occurs sometimes */
- /* at least on amd64, ppc32 and ppc64). */
-
- husa = sps[0];
-
- tl_assert(VG_(thread_get_stack_max)(vg_tid)
- - VG_(thread_get_stack_size)(vg_tid) <= husa
- && husa < VG_(thread_get_stack_max)(vg_tid));
-
- for (i = 1; i < nframes; i++)
- {
- if (sps[i] == 0)
- break;
- if (husa < sps[i] && sps[i] < VG_(thread_get_stack_max)(vg_tid))
- husa = sps[i];
- }
-
- tl_assert(VG_(thread_get_stack_max)(vg_tid)
- - VG_(thread_get_stack_size)(vg_tid) <= husa
- && husa < VG_(thread_get_stack_max)(vg_tid));
-
- return husa;
+ VG_(needs_client_requests)(DRD_(handle_client_request));
}
/**
@@ -411,10 +378,45 @@
}
/**
- * Tell the Valgrind core the address of the DRD function that processes
- * client requests. Must be called before any client code is run.
+ * Walk the stack up to the highest stack frame, and return the stack pointer
+ * of the highest stack frame. It is assumed that there are no more than
+ * ten stack frames above the current frame. This should be no problem
+ * since this function is either called indirectly from the _init() function
+ * in vgpreload_exp-drd-*.so or from the thread wrapper for a newly created
+ * thread. See also drd_pthread_intercepts.c.
*/
-void DRD_(clientreq_init)(void)
+static Addr DRD_(highest_used_stack_address)(const ThreadId vg_tid)
{
- VG_(needs_client_requests)(DRD_(handle_client_request));
+ UInt nframes;
+ const UInt n_ips = 10;
+ UInt i;
+ Addr ips[n_ips], sps[n_ips];
+ Addr husa;
+
+ nframes = VG_(get_StackTrace)(vg_tid, ips, n_ips, sps, 0, 0);
+ tl_assert(1 <= nframes && nframes <= n_ips);
+
+ /* A hack to work around VG_(get_StackTrace)()'s behavior that sometimes */
+ /* the topmost stackframes it returns are bogus (this occurs sometimes */
+ /* at least on amd64, ppc32 and ppc64). */
+
+ husa = sps[0];
+
+ tl_assert(VG_(thread_get_stack_max)(vg_tid)
+ - VG_(thread_get_stack_size)(vg_tid) <= husa
+ && husa < VG_(thread_get_stack_max)(vg_tid));
+
+ for (i = 1; i < nframes; i++)
+ {
+ if (sps[i] == 0)
+ break;
+ if (husa < sps[i] && sps[i] < VG_(thread_get_stack_max)(vg_tid))
+ husa = sps[i];
+ }
+
+ tl_assert(VG_(thread_get_stack_max)(vg_tid)
+ - VG_(thread_get_stack_size)(vg_tid) <= husa
+ && husa < VG_(thread_get_stack_max)(vg_tid));
+
+ return husa;
}
From: <sv...@va...> - 2009-02-15 10:19:43
Author: bart
Date: 2009-02-15 10:19:35 +0000 (Sun, 15 Feb 2009)
New Revision: 9161
Log:
Cleaned up the source code of the atomic_var regression test, without changing the actual test.
Modified:
trunk/drd/tests/atomic_var.c
trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-1
trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-2
Modified: trunk/drd/tests/atomic_var.c
===================================================================
--- trunk/drd/tests/atomic_var.c 2009-02-14 17:19:58 UTC (rev 9160)
+++ trunk/drd/tests/atomic_var.c 2009-02-15 10:19:35 UTC (rev 9161)
@@ -1,8 +1,13 @@
-/** Race condition around use of atomic variable.
- * Note: for the i386 and x86_64 memory models, thread 2 must print y = 1.
- * On PPC however, both y = 0 and y = 1 are legal results. This is because
- * the PPC memory model allows different CPU's to observe stores to variables
- * in different cache lines in a different order.
+/**
+ * This test program triggers a single race condition on variable s_y.
+ * Although another variable (s_x) is also modified by both threads, no race
+ * condition must be reported on this variable since it is only accessed via
+ * atomic instructions.
+ *
+ * Note: for the i386 and x86_64 memory models, thread 2 must print y = 1.
+ * On PPC however, both y = 0 and y = 1 are legal results. This is because
+ * the PPC memory model allows different CPU's to observe stores to variables
+ * in different cache lines in a different order.
*/
@@ -14,28 +19,19 @@
#include "../../config.h"
-/** Only gcc 4.1.0 and later have atomic builtins. */
+/* Atomic builtins are only supported by gcc 4.1.0 and later. */
+
#if defined(HAVE_BUILTIN_ATOMIC)
+
static __inline__
int sync_add_and_fetch(int* p, int i)
{
return __sync_add_and_fetch(p, i);
}
-#else
-static __inline__
-int sync_add_and_fetch(int* p, int i)
-{
- if (i == 0)
- return *p;
- return (*p += i);
-}
-#endif
-
-#ifdef HAVE_BUILTIN_ATOMIC
static int s_x = 0;
-/* s_dummy[] ensures that s_x and s_y are not in the same cache line. */
-static char s_dummy[512];
+/* g_dummy[] ensures that s_x and s_y are not in the same cache line. */
+char g_dummy[512];
static int s_y = 0;
static void* thread_func_1(void* arg)
@@ -52,11 +48,9 @@
fprintf(stderr, "y = %d\n", s_y);
return 0;
}
-#endif
int main(int argc, char** argv)
{
-#ifdef HAVE_BUILTIN_ATOMIC
int i;
const int n_threads = 2;
pthread_t tid[n_threads];
@@ -68,13 +62,17 @@
pthread_join(tid[i], 0);
fprintf(stderr, "Test finished.\n");
- /* Suppress the compiler warning about s_dummy not being used. */
- s_dummy[0]++;
+ return 0;
+}
+
#else
+
+int main(int argc, char** argv)
+{
fprintf(stderr,
"Sorry, but your compiler does not have built-in support for atomic"
" operations.\n");
-#endif
-
return 0;
}
+
+#endif
Modified: trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-1
===================================================================
--- trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-1 2009-02-14 17:19:58 UTC (rev 9160)
+++ trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-1 2009-02-15 10:19:35 UTC (rev 9161)
@@ -7,7 +7,7 @@
by 0x........: (within libpthread-?.?.so)
by 0x........: clone (in /...libc...)
Location 0x........ is 0 bytes inside local var "s_y"
-declared at atomic_var.c:39, in frame #? of thread 2
+declared at atomic_var.c:35, in frame #? of thread 2
y = 1
Test finished.
Modified: trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-2
===================================================================
--- trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-2 2009-02-14 17:19:58 UTC (rev 9160)
+++ trunk/drd/tests/atomic_var.stderr.exp-with-atomic-builtins-2 2009-02-15 10:19:35 UTC (rev 9161)
@@ -7,7 +7,7 @@
by 0x........: (within libpthread-?.?.so)
by 0x........: clone (in /...libc...)
Location 0x........ is 0 bytes inside local var "s_y"
-declared at atomic_var.c:39, in frame #? of thread 3
+declared at atomic_var.c:35, in frame #? of thread 3
y = 1
Test finished.
From: Tom H. <th...@cy...> - 2009-02-15 03:47:54
Nightly build on vauxhall ( x86_64, Fedora 10 ) started at 2009-02-15 03:20:05 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 486 tests, 10 stderr failures, 0 stdout failures, 0 post failures ==
drd/tests/atomic_var (stderr)
drd/tests/fp_race (stderr)
drd/tests/hg04_race (stderr)
drd/tests/hg05_race2 (stderr)
drd/tests/rwlock_race (stderr)
drd/tests/sem_as_mutex (stderr)
drd/tests/tc06_two_races (stderr)
drd/tests/tc16_byterace (stderr)
drd/tests/tc18_semabuse (stderr)
memcheck/tests/x86-linux/scalar (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 486 tests, 1 stderr failure, 0 stdout failures, 0 post failures ==
memcheck/tests/x86-linux/scalar (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short	Sun Feb 15 03:33:56 2009
--- new.short	Sun Feb 15 03:47:48 2009
***************
*** 8,10 ****
! == 486 tests, 1 stderr failure, 0 stdout failures, 0 post failures ==
  memcheck/tests/x86-linux/scalar (stderr)
--- 8,19 ----
! == 486 tests, 10 stderr failures, 0 stdout failures, 0 post failures ==
! drd/tests/atomic_var (stderr)
! drd/tests/fp_race (stderr)
! drd/tests/hg04_race (stderr)
! drd/tests/hg05_race2 (stderr)
! drd/tests/rwlock_race (stderr)
! drd/tests/sem_as_mutex (stderr)
! drd/tests/tc06_two_races (stderr)
! drd/tests/tc16_byterace (stderr)
! drd/tests/tc18_semabuse (stderr)
  memcheck/tests/x86-linux/scalar (stderr)
From: Tom H. <th...@cy...> - 2009-02-15 03:45:06
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2009-02-15 03:05:06 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 477 tests, 8 stderr failures, 0 stdout failures, 0 post failures ==
drd/tests/hg04_race (stderr)
drd/tests/hg05_race2 (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
helgrind/tests/tc20_verifywrap (stderr)
memcheck/tests/x86-linux/scalar (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 477 tests, 6 stderr failures, 0 stdout failures, 0 post failures ==
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
helgrind/tests/tc20_verifywrap (stderr)
memcheck/tests/x86-linux/scalar (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short	Sun Feb 15 03:24:41 2009
--- new.short	Sun Feb 15 03:44:57 2009
***************
*** 8,10 ****
! == 477 tests, 6 stderr failures, 0 stdout failures, 0 post failures ==
  exp-ptrcheck/tests/ccc (stderr)
--- 8,12 ----
! == 477 tests, 8 stderr failures, 0 stdout failures, 0 post failures ==
! drd/tests/hg04_race (stderr)
! drd/tests/hg05_race2 (stderr)
  exp-ptrcheck/tests/ccc (stderr)
From: Tom H. <th...@cy...> - 2009-02-15 03:32:15
Nightly build on mg ( x86_64, Fedora 9 ) started at 2009-02-15 03:10:04 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 483 tests, 9 stderr failures, 2 stdout failures, 0 post failures ==
drd/tests/fp_race (stderr)
drd/tests/hg05_race2 (stderr)
drd/tests/tc06_two_races (stderr)
drd/tests/tc18_semabuse (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
memcheck/tests/linux/timerfd-syscall (stdout)
memcheck/tests/x86-linux/scalar (stderr)
none/tests/linux/mremap2 (stdout)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 483 tests, 5 stderr failures, 2 stdout failures, 0 post failures ==
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
memcheck/tests/linux/timerfd-syscall (stdout)
memcheck/tests/x86-linux/scalar (stderr)
none/tests/linux/mremap2 (stdout)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short	Sun Feb 15 03:20:59 2009
--- new.short	Sun Feb 15 03:32:06 2009
***************
*** 8,10 ****
! == 483 tests, 5 stderr failures, 2 stdout failures, 0 post failures ==
  exp-ptrcheck/tests/ccc (stderr)
--- 8,14 ----
! == 483 tests, 9 stderr failures, 2 stdout failures, 0 post failures ==
! drd/tests/fp_race (stderr)
! drd/tests/hg05_race2 (stderr)
! drd/tests/tc06_two_races (stderr)
! drd/tests/tc18_semabuse (stderr)
  exp-ptrcheck/tests/ccc (stderr)