From: <sv...@va...> - 2009-06-02 23:21:33
Author: njn
Date: 2009-06-03 00:21:28 +0100 (Wed, 03 Jun 2009)
New Revision: 10216
Log:
Merge r10215 (manual intro additions) from the trunk.
Modified:
branches/VALGRIND_3_4_BRANCH/docs/xml/manual-intro.xml
Modified: branches/VALGRIND_3_4_BRANCH/docs/xml/manual-intro.xml
===================================================================
--- branches/VALGRIND_3_4_BRANCH/docs/xml/manual-intro.xml 2009-06-02 23:20:40 UTC (rev 10215)
+++ branches/VALGRIND_3_4_BRANCH/docs/xml/manual-intro.xml 2009-06-02 23:21:28 UTC (rev 10216)
@@ -94,19 +94,6 @@
</listitem>
<listitem>
- <para><command>Massif</command> is a heap profiler.
- It measures how much heap memory programs use. In particular,
- it can give you information about heap blocks, heap
- administration overheads, and stack sizes.</para>
-
- <para>Heap profiling can help you reduce the amount of
- memory your program uses. On modern machines with virtual
- memory, this reduces the chances that your program will run out
- of memory, and may make it faster by reducing the amount of
- paging needed.</para>
- </listitem>
-
- <listitem>
<para><command>Helgrind</command> detects synchronisation errors
in programs that use the POSIX pthreads threading primitives. It
detects the following three classes of errors:</para>
@@ -127,9 +114,34 @@
<para>Problems like these often result in unreproducible,
timing-dependent crashes, deadlocks and other misbehaviour, and
can be difficult to find by other means.</para>
+ </listitem>
+ <listitem>
+ <para><command>DRD</command> is similar to Helgrind, but uses a
+ different analysis technique and so may find different problems.
+ </para>
</listitem>
+ <listitem>
+ <para><command>Massif</command> is a heap profiler.
+ It measures how much heap memory programs use. In particular,
+ it can give you information about heap blocks, heap
+ administration overheads, and stack sizes.</para>
+
+ <para>Heap profiling can help you reduce the amount of
+ memory your program uses. On modern machines with virtual
+ memory, this reduces the chances that your program will run out
+ of memory, and may make it faster by reducing the amount of
+ paging needed.</para>
+ </listitem>
+
+ <listitem>
+ <para><command>Ptrcheck</command> is an experimental pointer checking
+ tool. Its functionality overlaps somewhat with Memcheck's, but it can
+ find some problems that Memcheck would miss.</para>
+ </listitem>
+
+
</orderedlist>
@@ -158,7 +170,7 @@
version 2. The <computeroutput>valgrind/*.h</computeroutput> headers
that you may wish to include in your code (eg.
<filename>valgrind.h</filename>, <filename>memcheck.h</filename>,
-<filename>helgrind.h</filename>) are
+<filename>helgrind.h</filename>, etc.) are
distributed under a BSD-style license, so you may include them in your
code without worrying about license conflicts. Some of the PThreads
test cases, <filename>pth_*.c</filename>, are taken from "Pthreads
From: <sv...@va...> - 2009-06-02 23:20:48
Author: njn
Date: 2009-06-03 00:20:40 +0100 (Wed, 03 Jun 2009)
New Revision: 10215
Log:
Add descriptions of DRD and Ptrcheck in the manual intro. Bart, Julian,
please change these if you don't like what I've written, and merge the
changes to the 3.4.X branch.
Modified:
trunk/docs/xml/manual-intro.xml
Modified: trunk/docs/xml/manual-intro.xml
===================================================================
--- trunk/docs/xml/manual-intro.xml 2009-06-02 15:11:42 UTC (rev 10214)
+++ trunk/docs/xml/manual-intro.xml 2009-06-02 23:20:40 UTC (rev 10215)
@@ -94,19 +94,6 @@
</listitem>
<listitem>
- <para><command>Massif</command> is a heap profiler.
- It measures how much heap memory programs use. In particular,
- it can give you information about heap blocks, heap
- administration overheads, and stack sizes.</para>
-
- <para>Heap profiling can help you reduce the amount of
- memory your program uses. On modern machines with virtual
- memory, this reduces the chances that your program will run out
- of memory, and may make it faster by reducing the amount of
- paging needed.</para>
- </listitem>
-
- <listitem>
<para><command>Helgrind</command> detects synchronisation errors
in programs that use the POSIX pthreads threading primitives. It
detects the following three classes of errors:</para>
@@ -127,9 +114,34 @@
<para>Problems like these often result in unreproducible,
timing-dependent crashes, deadlocks and other misbehaviour, and
can be difficult to find by other means.</para>
+ </listitem>
+ <listitem>
+ <para><command>DRD</command> is similar to Helgrind, but uses a
+ different analysis technique and so may find different problems.
+ </para>
</listitem>
+ <listitem>
+ <para><command>Massif</command> is a heap profiler.
+ It measures how much heap memory programs use. In particular,
+ it can give you information about heap blocks, heap
+ administration overheads, and stack sizes.</para>
+
+ <para>Heap profiling can help you reduce the amount of
+ memory your program uses. On modern machines with virtual
+ memory, this reduces the chances that your program will run out
+ of memory, and may make it faster by reducing the amount of
+ paging needed.</para>
+ </listitem>
+
+ <listitem>
+ <para><command>Ptrcheck</command> is an experimental pointer checking
+ tool. Its functionality overlaps somewhat with Memcheck's, but it can
+ find some problems that Memcheck would miss.</para>
+ </listitem>
+
+
</orderedlist>
@@ -158,7 +170,7 @@
version 2. The <computeroutput>valgrind/*.h</computeroutput> headers
that you may wish to include in your code (eg.
<filename>valgrind.h</filename>, <filename>memcheck.h</filename>,
-<filename>helgrind.h</filename>) are
+<filename>helgrind.h</filename>, etc.) are
distributed under a BSD-style license, so you may include them in your
code without worrying about license conflicts. Some of the PThreads
test cases, <filename>pth_*.c</filename>, are taken from "Pthreads
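The tool descriptions added by this commit correspond to command-line invocations like the following (a sketch, not from the patch itself; `./myprog` is a placeholder, and in this 3.4-era tree the pointer checker was selected under its experimental name):

```shell
valgrind --tool=memcheck ./myprog      # memory errors (the default tool)
valgrind --tool=helgrind ./myprog      # pthread synchronisation errors
valgrind --tool=drd ./myprog           # alternative data-race detector
valgrind --tool=massif ./myprog        # heap profiler
valgrind --tool=exp-ptrcheck ./myprog  # experimental pointer checker
```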
From: Konstantin S. <kon...@gm...> - 2009-06-02 19:23:55
On Tue, Jun 2, 2009 at 10:27 PM, Roger Martin <ro...@qu...> wrote:
> Hi developers,
>
> I've been having a productive time with massif; it is a very useful tool
> to have in my toolbox. Thank you.
>
> Thought it may help someone else if the documentation at
> http://valgrind.org/docs/manual/ms-manual.html, where it talks about the
> -g flag, included a note about not applying the gcc build flag -static.
> This flag apparently changes the resulting executable so that massif
> cannot find new, malloc, etc.
> .......
> First off, as for the other Valgrind tools, you should compile with
> debugging info (the -g flag). You can compile and link with static
> libraries, but do not include the -static flag during linking. ...
> .......

Such information could be found somewhere in the docs (though maybe not on
the front page). What we really need is a BIG WARNING when valgrind tools
encounter a static executable or a lack of debug info.

--kcc

> Something like that.
>
> If the -static flag is set, valgrind massif will not find any memory
> allocation routines and the output will be blank. Only the beginning nil
> snapshot and that's it. Remove the -static and massif works great.
From: Roger M. <ro...@qu...> - 2009-06-02 18:27:59
Hi developers,

I've been having a productive time with massif; it is a very useful tool to
have in my toolbox. Thank you.

Thought it may help someone else if the documentation at
http://valgrind.org/docs/manual/ms-manual.html, where it talks about the -g
flag, included a note about not applying the gcc build flag -static. This
flag apparently changes the resulting executable so that massif cannot find
new, malloc, etc.

.......
First off, as for the other Valgrind tools, you should compile with
debugging info (the -g flag). You can compile and link with static
libraries, but do not include the -static flag during linking. ...
.......

Something like that.

If the -static flag is set, valgrind massif will not find any memory
allocation routines and the output will be blank. Only the beginning nil
snapshot and that's it. Remove the -static and massif works great.
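The build advice in this thread can be sketched as commands (illustrative only; `myapp.cpp` and the output file name are placeholders):

```shell
# Compile with debug info (-g) but link dynamically, so Massif can
# intercept malloc/new in the shared C library:
g++ -g myapp.cpp -o myapp
valgrind --tool=massif ./myapp
ms_print massif.out.12345       # inspect the snapshots (pid-suffixed file)

# With -static the allocator is linked into the binary and Massif sees
# only the initial nil snapshot:
# g++ -g -static myapp.cpp -o myapp   # avoid this when profiling
```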
From: <sv...@va...> - 2009-06-02 15:11:51
Author: bart
Date: 2009-06-02 16:11:42 +0100 (Tue, 02 Jun 2009)
New Revision: 10214
Log:
Some source code modifications that should help getting tsan_unittest.cpp compiled on Darwin.
Modified:
trunk/drd/tests/tsan_thread_wrappers_pthread.h
trunk/drd/tests/tsan_unittest.cpp
Modified: trunk/drd/tests/tsan_thread_wrappers_pthread.h
===================================================================
--- trunk/drd/tests/tsan_thread_wrappers_pthread.h 2009-06-02 15:03:44 UTC (rev 10213)
+++ trunk/drd/tests/tsan_thread_wrappers_pthread.h 2009-06-02 15:11:42 UTC (rev 10214)
@@ -51,8 +51,10 @@
#include <stdio.h>
#include <limits.h> // INT_MAX
-#ifdef OS_MACOSX
+#ifdef __APPLE__
#include <libkern/OSAtomic.h>
+#define NO_BARRIER
+#define NO_TLS
#endif
#include <string>
@@ -105,7 +107,7 @@
#ifndef NO_SPINLOCK
/// helgrind does not (yet) support spin locks, so we annotate them.
-#ifndef OS_MACOSX
+#ifndef __APPLE__
class SpinLock {
public:
SpinLock() {
@@ -150,7 +152,7 @@
private:
OSSpinLock mu_;
};
-#endif // OS_MACOSX
+#endif // __APPLE__
#endif // NO_SPINLOCK
@@ -587,7 +589,7 @@
int AtomicIncrement(volatile int *value, int increment);
-#ifndef OS_MACOSX
+#ifndef __APPLE__
inline int AtomicIncrement(volatile int *value, int increment) {
return __sync_add_and_fetch(value, increment);
}
@@ -606,7 +608,7 @@
*out = memalign(al, size);
return (*out == 0);
}
-#endif // OS_MACOSX
+#endif // __APPLE__
#endif // THREAD_WRAPPERS_PTHREAD_H
// vim:shiftwidth=2:softtabstop=2:expandtab:foldmethod=marker
Modified: trunk/drd/tests/tsan_unittest.cpp
===================================================================
--- trunk/drd/tests/tsan_unittest.cpp 2009-06-02 15:03:44 UTC (rev 10213)
+++ trunk/drd/tests/tsan_unittest.cpp 2009-06-02 15:11:42 UTC (rev 10214)
@@ -85,7 +85,7 @@
#include <stdlib.h>
#include <dirent.h>
-#ifndef OS_MACOSX
+#ifndef __APPLE__
#include <malloc.h>
#endif
From: <sv...@va...> - 2009-06-02 15:03:50
Author: bart
Date: 2009-06-02 16:03:44 +0100 (Tue, 02 Jun 2009)
New Revision: 10213
Log:
- Portability improvement: switched from __gnu_cxx::hash_map<> (a gcc
extension) to std::map<> (standard C++).
- Replaced tempnam() by mkdtemp() / mkstemp() because gcc emits a warning
about the former.
Modified:
trunk/drd/tests/Makefile.am
trunk/drd/tests/tsan_unittest.cpp
Modified: trunk/drd/tests/Makefile.am
===================================================================
--- trunk/drd/tests/Makefile.am 2009-06-02 11:20:06 UTC (rev 10212)
+++ trunk/drd/tests/Makefile.am 2009-06-02 15:03:44 UTC (rev 10213)
@@ -287,7 +287,7 @@
new_delete_SOURCES = new_delete.cpp
tsan_unittest_SOURCES = tsan_unittest.cpp
-tsan_unittest_CXXFLAGS = $(AM_CXXFLAGS) -Wno-deprecated -Wno-sign-compare\
+tsan_unittest_CXXFLAGS = $(AM_CXXFLAGS) -Wno-sign-compare\
-Wno-shadow @FLAG_W_NO_EMPTY_BODY@
if HAVE_BOOST_1_35
Modified: trunk/drd/tests/tsan_unittest.cpp
===================================================================
--- trunk/drd/tests/tsan_unittest.cpp 2009-06-02 11:20:06 UTC (rev 10212)
+++ trunk/drd/tests/tsan_unittest.cpp 2009-06-02 15:03:44 UTC (rev 10213)
@@ -73,7 +73,6 @@
#include <vector>
#include <string>
#include <map>
-#include <ext/hash_map>
#include <algorithm>
#include <cstring> // strlen(), index(), rindex()
#include <ctime>
@@ -3817,15 +3816,20 @@
// test79 TN. Swap. {{{1
namespace test79 {
-__gnu_cxx::hash_map<int, int> MAP;
+#if 0
+typedef __gnu_cxx::hash_map<int, int> map_t;
+#else
+typedef std::map<int, int> map_t;
+#endif
+map_t MAP;
Mutex MU;
-// Here we use swap to pass hash_map between threads.
+// Here we use swap to pass MAP between threads.
// The synchronization is correct, but w/o ANNOTATE_MUTEX_IS_USED_AS_CONDVAR
// Helgrind will complain.
void Worker1() {
- __gnu_cxx::hash_map<int, int> tmp;
+ map_t tmp;
MU.Lock();
// We swap the new empty map 'tmp' with 'MAP'.
MAP.swap(tmp);
@@ -6229,9 +6233,14 @@
// test134 TN. Swap. Variant of test79. {{{1
namespace test134 {
-__gnu_cxx::hash_map<int, int> map;
+#if 0
+typedef __gnu_cxx::hash_map<int, int> map_t;
+#else
+typedef std::map<int, int> map_t;
+#endif
+map_t map;
Mutex mu;
-// Here we use swap to pass hash_map between threads.
+// Here we use swap to pass map between threads.
// The synchronization is correct, but w/o the annotation
// any hybrid detector will complain.
@@ -6243,7 +6252,7 @@
// These arcs can be created by HAPPENS_{BEFORE,AFTER} annotations, but it is
// much simpler to apply pure-happens-before mode to the mutex mu.
void Swapper() {
- __gnu_cxx::hash_map<int, int> tmp;
+ map_t tmp;
MutexLock lock(&mu);
ANNOTATE_HAPPENS_AFTER(&map);
// We swap the new empty map 'tmp' with 'map'.
@@ -6411,8 +6420,11 @@
// test140 TN. Swap. Variant of test79 and test134. {{{1
namespace test140 {
-//typedef std::map<int,int> Container;
- typedef __gnu_cxx::hash_map<int, int> Container;
+#if 0
+typedef __gnu_cxx::hash_map<int, int> Container;
+#else
+typedef std::map<int,int> Container;
+#endif
Mutex mu;
static Container container;
@@ -6519,13 +6531,13 @@
FAST_MODE_INIT(&GLOB2);
printf("test141: FP. unlink/fopen, rmdir/opendir.\n");
- dir_name = tempnam("/tmp", NULL);
- mkdir(dir_name, 0700);
+ dir_name = strdup("/tmp/tsan-XXXXXX");
+ mkdtemp(dir_name);
- filename = tempnam(dir_name, NULL);
- FILE *fp = fopen(filename, "w");
- CHECK(fp);
- fclose(fp);
+ filename = strdup((std::string() + dir_name + "/XXXXXX").c_str());
+ const int fd = mkstemp(filename);
+ CHECK(fd >= 0);
+ close(fd);
MyThreadArray mta1(Waker1, Waiter1);
mta1.Start();
From: Tom H. <to...@co...> - 2009-06-02 14:08:37
Bart Van Assche wrote:
> Did you delete the aforementioned revisions from the repository? Did
> you know that you can also back out changes using a command like
> svn merge -r10203:10202 (highest revision number first)? I prefer the
> latter because it preserves the entire modification history. A common
> way to move changes between revisions r1 and r2 from the trunk to a
> branch when using Subversion is to remove these from the trunk first
> (by merging r2:r1, which generates revision r3), then creating a
> branch, and then merging r3:r2 on the branch. The last command reverts
> the removal, and hence restores the changes.

That's exactly what Nick did - he committed r10204 which reverted those
earlier revisions. You can't actually delete revisions - well, not without
lots of fiddling on the server anyway.

Tom

--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Bart V. A. <bar...@gm...> - 2009-06-02 13:02:21
On Tue, Jun 2, 2009 at 8:54 AM, <sv...@va...> wrote:
> Author: njn
> Date: 2009-06-02 07:54:57 +0100 (Tue, 02 Jun 2009)
> New Revision: 10204
>
> Log:
> Back out r10197--r10200 and r10202--r10203. I'm going to put them, and
> further, related changes, on a branch instead.
>
> Modified:
> trunk/Makefile.flags.am
> trunk/coregrind/Makefile.am
> trunk/drd/tests/Makefile.am
>
> Modified: trunk/Makefile.flags.am
[ ... ]

Hello Nicholas,

Did you delete the aforementioned revisions from the repository? Did you
know that you can also back out changes using a command like
svn merge -r10203:10202 (highest revision number first)? I prefer the
latter because it preserves the entire modification history.

A common way to move changes between revisions r1 and r2 from the trunk to
a branch when using Subversion is to remove these from the trunk first (by
merging r2:r1, which generates revision r3), then creating a branch, and
then merging r3:r2 on the branch. The last command reverts the removal,
and hence restores the changes on the branch.

Bart.
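The move-to-branch workflow Bart describes can be sketched with hypothetical revision numbers (r1001..r1005 stand in for the r1..r2 range; repository paths are placeholders):

```shell
# 1. Undo revisions r1001..r1005 on trunk with a reverse merge
#    (highest revision first); committing this creates, say, r1006:
svn merge -r1005:1000 .
svn commit -m "Back out r1001..r1005; moving them to a branch"

# 2. Create the branch from the now-clean trunk:
svn copy ^/trunk ^/branches/FEATURE -m "Branch for the backed-out work"

# 3. In a checkout of the branch, re-apply the same range with a
#    forward merge, restoring the backed-out changes there:
svn merge -r1000:1005 ^/trunk
svn commit -m "Restore the backed-out changes on the branch"
```

Because the revert is itself an ordinary commit, the full modification history of r1001..r1005 stays in the repository.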
From: <sv...@va...> - 2009-06-02 12:49:00
Author: sewardj
Date: 2009-06-02 13:39:33 +0100 (Tue, 02 Jun 2009)
New Revision: 1899
Log:
guestAccessWhichMightOverlapPutI: handle IRStmt_CAS.
Modified:
branches/DCAS/priv/ir/iropt.c
Modified: branches/DCAS/priv/ir/iropt.c
===================================================================
--- branches/DCAS/priv/ir/iropt.c 2009-06-02 08:18:56 UTC (rev 1898)
+++ branches/DCAS/priv/ir/iropt.c 2009-06-02 12:39:33 UTC (rev 1899)
@@ -3033,6 +3033,13 @@
/* just be paranoid ... these should be rare. */
return True;
+ case Ist_CAS:
+ /* This is unbelievably lame, but it's probably not
+ significant from a performance point of view. Really, a
+ CAS is a load-store op, so it should be safe to say False.
+ However .. */
+ return True;
+
case Ist_Dirty:
/* If the dirty call has any guest effects at all, give up.
Probably could do better. */
From: <sv...@va...> - 2009-06-02 11:20:10
Author: sewardj
Date: 2009-06-02 12:20:06 +0100 (Tue, 02 Jun 2009)
New Revision: 10212
Log:
Try, and fail, to correctly handle sys_ipc(SHMAT, ...) on
ppc{32,64}-linux. Also factor out a bit of duplicated code.
Modified:
branches/DCAS/exp-ptrcheck/h_main.c
Modified: branches/DCAS/exp-ptrcheck/h_main.c
===================================================================
--- branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 11:12:29 UTC (rev 10211)
+++ branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 11:20:06 UTC (rev 10212)
@@ -2412,6 +2412,9 @@
# if defined(__NR_shmget)
ADD(1, __NR_shmget);
# endif
+# if defined(__NR_ipc) && defined(VKI_SHMAT)
+ ADD(1, __NR_ipc); /* ppc{32,64}-linux horrors */
+# endif
/* --------------- AIX5 --------------- */
@@ -2495,14 +2498,9 @@
/* Deal with the common case */
pair = VG_(indexXA)( post_syscall_table, i );
- if (pair->uw2 == 0) {
- /* the common case */
- VG_(set_syscall_return_shadows)(
- tid, /* retval */ (UWord)NONPTR, 0,
- /* error */ (UWord)NONPTR, 0
- );
- return;
- }
+ if (pair->uw2 == 0)
+ /* the common case */
+ goto res_NONPTR_err_NONPTR;
/* Special handling for all remaining cases */
tl_assert(pair->uw2 == 1);
@@ -2515,24 +2513,15 @@
syscall completes. */
post_reg_write_nonptr_or_unknown( tid, PC_OFF_FS_ZERO,
PC_SZB_FS_ZERO );
- VG_(set_syscall_return_shadows)(
- tid, /* retval */ (UWord)NONPTR, 0,
- /* error */ (UWord)NONPTR, 0
- );
- return;
+ goto res_NONPTR_err_NONPTR;
}
# endif
# if defined(__NR_brk)
// With brk(), result (of kernel syscall, not glibc wrapper) is a heap
// pointer. Make the shadow UNKNOWN.
- if (sysno == __NR_brk) {
- VG_(set_syscall_return_shadows)(
- tid, /* retval */ (UWord)UNKNOWN, 0,
- /* error */ (UWord)NONPTR, 0
- );
- return;
- }
+ if (sysno == __NR_brk)
+ goto res_UNKNOWN_err_NONPTR;
# endif
// With mmap, new_mem_mmap() has already been called and added the
@@ -2551,13 +2540,9 @@
) {
if (sr_isError(res)) {
// mmap() had an error, return value is a small negative integer
- VG_(set_syscall_return_shadows)( tid, /*val*/ (UWord)NONPTR, 0,
- /*err*/ (UWord)NONPTR, 0 );
- if (0) VG_(printf)("ZZZZZZZ mmap res -> NONPTR\n");
+ goto res_NONPTR_err_NONPTR;
} else {
- VG_(set_syscall_return_shadows)( tid, /*val*/ (UWord)UNKNOWN, 0,
- /*err*/ (UWord)NONPTR, 0 );
- if (0) VG_(printf)("ZZZZZZZ mmap res -> UNKNOWN\n");
+ goto res_UNKNOWN_err_NONPTR;
}
return;
}
@@ -2567,24 +2552,40 @@
# if defined(__NR_shmat)
if (sysno == __NR_shmat) {
if (sr_isError(res)) {
- VG_(set_syscall_return_shadows)( tid, /*val*/ (UWord)NONPTR, 0,
- /*err*/ (UWord)NONPTR, 0 );
- if (0) VG_(printf)("ZZZZZZZ shmat res -> NONPTR\n");
+ goto res_NONPTR_err_NONPTR;
} else {
- VG_(set_syscall_return_shadows)( tid, /*val*/ (UWord)UNKNOWN, 0,
- /*err*/ (UWord)NONPTR, 0 );
- if (0) VG_(printf)("ZZZZZZZ shmat res -> UNKNOWN\n");
+ goto res_UNKNOWN_err_NONPTR;
}
- return;
}
# endif
# if defined(__NR_shmget)
- if (sysno == __NR_shmget) {
+ if (sysno == __NR_shmget)
// FIXME: is this correct?
- VG_(set_syscall_return_shadows)( tid, /*val*/ (UWord)UNKNOWN, 0,
- /*err*/ (UWord)NONPTR, 0 );
- return;
+ goto res_UNKNOWN_err_NONPTR;
+# endif
+
+# if defined(__NR_ipc) && defined(VKI_SHMAT)
+ /* perhaps this should be further conditionalised with
+ && (defined(VGP_ppc32_linux) || defined(VGP_ppc64_linux)
+ Note, this just copies the behaviour of __NR_shmget above.
+
+ JRS 2009 June 02: it seems that the return value from
+ sys_ipc(VKI_SHMAT, ...) doesn't have much relationship to the
+ result returned by the originating user-level shmat call. It's
+ different (and much lower) by a large but integral number of
+ pages. I don't have time to chase this right now. Observed on
+ ppc{32,64}-linux. Result appears to be false errors from apps
+ using shmat. Confusion though -- shouldn't be related to the
+ actual numeric values returned by the syscall, though, should
+ it? Confused. Maybe some bad interaction with a
+ nonpointer-or-unknown heuristic? */
+ if (sysno == __NR_ipc) {
+ if (args[0] == VKI_SHMAT) {
+ goto res_UNKNOWN_err_NONPTR;
+ } else {
+ goto res_NONPTR_err_NONPTR;
+ }
}
# endif
@@ -2592,6 +2593,16 @@
post_syscall_table has .w2 == 1, which in turn implies there
should be special-case code for it above. */
tl_assert(0);
+
+ res_NONPTR_err_NONPTR:
+ VG_(set_syscall_return_shadows)( tid, /* retval */ (UWord)NONPTR, 0,
+ /* error */ (UWord)NONPTR, 0 );
+ return;
+
+ res_UNKNOWN_err_NONPTR:
+ VG_(set_syscall_return_shadows)( tid, /* retval */ (UWord)UNKNOWN, 0,
+ /* error */ (UWord)NONPTR, 0 );
+ return;
}
From: <sv...@va...> - 2009-06-02 11:12:33
Author: bart
Date: 2009-06-02 12:12:29 +0100 (Tue, 02 Jun 2009)
New Revision: 10211
Log:
Fixes for systems without built-in functions for atomic memory access.
Modified:
trunk/drd/tests/Makefile.am
trunk/drd/tests/atomic_var.vgtest
trunk/drd/tests/circular_buffer.vgtest
Modified: trunk/drd/tests/Makefile.am
===================================================================
--- trunk/drd/tests/Makefile.am 2009-06-02 10:13:49 UTC (rev 10210)
+++ trunk/drd/tests/Makefile.am 2009-06-02 11:12:29 UTC (rev 10211)
@@ -241,15 +241,14 @@
	sem_as_mutex \
	sigalrm \
	thread_name \
-	trylock \
-	tsan_unittest
+	trylock
 if HAVE_BOOST_1_35
 check_PROGRAMS += boost_thread
 endif
 if HAVE_BUILTIN_ATOMIC
-check_PROGRAMS += annotate_rwlock atomic_var circular_buffer
+check_PROGRAMS += annotate_rwlock atomic_var circular_buffer tsan_unittest
 endif
 if HAVE_OPENMP
Modified: trunk/drd/tests/atomic_var.vgtest
===================================================================
--- trunk/drd/tests/atomic_var.vgtest 2009-06-02 10:13:49 UTC (rev 10210)
+++ trunk/drd/tests/atomic_var.vgtest 2009-06-02 11:12:29 UTC (rev 10211)
@@ -1,4 +1,4 @@
-prereq: ./supported_libpthread
+prereq: test -e atomic_var && ./supported_libpthread
 vgopts: --var-info=yes --check-stack-var=yes --show-confl-seg=no
 prog: atomic_var
 stderr_filter: filter_stderr_and_thread_no
Modified: trunk/drd/tests/circular_buffer.vgtest
===================================================================
--- trunk/drd/tests/circular_buffer.vgtest 2009-06-02 10:13:49 UTC (rev 10210)
+++ trunk/drd/tests/circular_buffer.vgtest 2009-06-02 11:12:29 UTC (rev 10211)
@@ -1,3 +1,3 @@
-prereq: ./supported_libpthread
+prereq: test -e circular_buffer && ./supported_libpthread
 prog: circular_buffer
 args: -q
From: <sv...@va...> - 2009-06-02 10:13:59
Author: sewardj
Date: 2009-06-02 11:13:49 +0100 (Tue, 02 Jun 2009)
New Revision: 10210
Log:
Support 32- and 64-bit IR store-conditionals in exp-ptrcheck.
Ugly and tiresome.
--This line, and those below, will be ignored--
M h_main.c
Modified:
branches/DCAS/exp-ptrcheck/h_main.c
Modified: branches/DCAS/exp-ptrcheck/h_main.c
===================================================================
--- branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 08:25:59 UTC (rev 10209)
+++ branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 10:13:49 UTC (rev 10210)
@@ -2894,25 +2894,73 @@
// ------------------ Store handlers ------------------ //
/* On 32 bit targets, we will use:
- check_store1 check_store2 check_store4_P
+ check_store1 check_store2 check_store4_P check_store4C_P
check_store4 (for 32-bit nonpointer stores)
check_store8_ms4B_ls4B (for 64-bit stores)
check_store16_ms4B_4B_4B_ls4B (for xmm/altivec stores)
On 64 bit targets, we will use:
- check_store1 check_store2 check_store4 check_store8_P
+ check_store1 check_store2 check_store4 check_store4C
+ check_store8_P check_store_8C_P
check_store8_all8B (for 64-bit nonpointer stores)
check_store16_ms8B_ls8B (for xmm/altivec stores)
A "_P" handler writes a pointer to memory, and so has an extra
argument -- the pointer's shadow value. That implies that
- check_store4_P is only to be called on a 32 bit host and
- check_store8_P is only to be called on a 64 bit host. For all
+ check_store4{,C}_P is only to be called on a 32 bit host and
+ check_store8{,C}_P is only to be called on a 64 bit host. For all
other cases, and for the misaligned _P cases, the strategy is to
let the store go through, and then snoop around with
nonptr_or_unknown to fix up the shadow values of any affected
words. */
+/* Helpers for store-conditionals. Ugly kludge :-(
+ They all return 1 if the SC was successful and 0 if it failed. */
+static inline UWord do_store_conditional_32( Addr m/*dst*/, UInt t/*val*/ )
+{
+# if defined(VGA_ppc32) || defined(VGA_ppc64)
+ UWord success;
+ /* If this assertion fails, the underlying IR is (semantically) ill-formed
+ as per the IR spec for IRStmt_Store. */
+ tl_assert(VG_IS_4_ALIGNED(m));
+ __asm__ __volatile__(
+ "stwcx. %2,0,%1" "\n\t" /* data,0,addr */
+ "mfcr %0" "\n\t"
+ "srwi %0,%0,29" "\n\t" /* move relevant CR bit to LSB */
+ : /*out*/"=b"(success)
+ : /*in*/ "b"(m), "b"( (UWord)t )
+ : /*trash*/ "memory", "cc"
+ /* Note: srwi is OK even on 64-bit host because the we're
+ after bit 29 (normal numbering) and we mask off all the
+ other junk just below. */
+ );
+ return success & (UWord)1;
+# else
+ tl_assert(0); /* not implemented on other platforms */
+# endif
+}
+
+static inline UWord do_store_conditional_64( Addr m/*dst*/, ULong t/*val*/ )
+{
+# if defined(VGA_ppc64)
+ UWord success;
+ /* If this assertion fails, the underlying IR is (semantically) ill-formed
+ as per the IR spec for IRStmt_Store. */
+ tl_assert(VG_IS_8_ALIGNED(m));
+ __asm__ __volatile__(
+ "stdcx. %2,0,%1" "\n\t" /* data,0,addr */
+ "mfcr %0" "\n\t"
+ "srdi %0,%0,29" "\n\t" /* move relevant CR bit to LSB */
+ : /*out*/"=b"(success)
+ : /*in*/ "b"(m), "b"( (UWord)t )
+ : /*trash*/ "memory", "cc"
+ );
+ return success & (UWord)1;
+# else
+ tl_assert(0); /* not implemented on other platforms */
+# endif
+}
+
/* Apply nonptr_or_unknown to all the words intersecting
[a, a+len). */
static VG_REGPARM(2)
@@ -3044,6 +3092,29 @@
}
}
+// This handles 64 bit store-conditionals on 64 bit targets. It must
+// not be called on 32 bit targets.
+static VG_REGPARM(3)
+UWord check_store8C_P(Addr m, Seg* mptr_vseg, UWord t, Seg* t_vseg)
+{
+ UWord success;
+ tl_assert(sizeof(UWord) == 8); /* DO NOT REMOVE */
+# if SC_SEGS
+ checkSeg(t_vseg);
+ checkSeg(mptr_vseg);
+# endif
+ check_load_or_store(/*is_write*/True, m, 8, mptr_vseg);
+ // Actually *do* the STORE here
+ success = do_store_conditional_64( m, t );
+ if (VG_IS_8_ALIGNED(m)) {
+ set_mem_vseg( m, t_vseg );
+ } else {
+ // straddling two words
+ nonptr_or_unknown_range(m, 8);
+ }
+ return success;
+}
+
// This handles 32 bit stores on 32 bit targets. It must
// not be called on 64 bit targets.
static VG_REGPARM(3)
@@ -3065,6 +3136,29 @@
}
}
+// This handles 32 bit store-conditionals on 32 bit targets. It must
+// not be called on 64 bit targets.
+static VG_REGPARM(3)
+UWord check_store4C_P(Addr m, Seg* mptr_vseg, UWord t, Seg* t_vseg)
+{
+ UWord success;
+ tl_assert(sizeof(UWord) == 4); /* DO NOT REMOVE */
+# if SC_SEGS
+ checkSeg(t_vseg);
+ checkSeg(mptr_vseg);
+# endif
+ check_load_or_store(/*is_write*/True, m, 4, mptr_vseg);
+ // Actually *do* the STORE here
+ success = do_store_conditional_32( m, t );
+ if (VG_IS_4_ALIGNED(m)) {
+ set_mem_vseg( m, t_vseg );
+ } else {
+ // straddling two words
+ nonptr_or_unknown_range(m, 4);
+ }
+ return success;
+}
+
// Used for both 32 bit and 64 bit targets.
static VG_REGPARM(3)
void check_store4(Addr m, Seg* mptr_vseg, UWord t)
@@ -3078,6 +3172,23 @@
nonptr_or_unknown_range(m, 4);
}
+// Used for 32-bit store-conditionals on 64 bit targets only. It must
+// not be called on 32 bit targets.
+static VG_REGPARM(3)
+UWord check_store4C(Addr m, Seg* mptr_vseg, UWord t)
+{
+ UWord success;
+ tl_assert(sizeof(UWord) == 8); /* DO NOT REMOVE */
+# if SC_SEGS
+ checkSeg(mptr_vseg);
+# endif
+ check_load_or_store(/*is_write*/True, m, 4, mptr_vseg);
+ // Actually *do* the STORE here
+ success = do_store_conditional_32( m, t );
+ nonptr_or_unknown_range(m, 4);
+ return success;
+}
+
// Used for both 32 bit and 64 bit targets.
static VG_REGPARM(3)
void check_store2(Addr m, Seg* mptr_vseg, UWord t)
@@ -4062,8 +4173,8 @@
}
}
-/* Generate into 'ane', instrumentation for 'st'. Also copy 'st'
- itself into 'ane' (the caller does not do so). This is somewhat
+/* Generate into 'pce', instrumentation for 'st'. Also copy 'st'
+ itself into 'pce' (the caller does not do so). This is somewhat
complex and relies heavily on the assumption that the incoming IR
is in flat form.
@@ -4225,20 +4336,25 @@
Only word-sized values are shadowed. If this is a
store-conditional, .resSC will denote a non-word-typed
temp, and so we don't need to shadow it. Assert about the
- type, tho.
-
- JRS 1 June 09: urr, this totally breaks with
- store-conditional, since there's no platform-independent
- way for the helper to do that and extract the success bit.
- Ick.
+ type, tho. However, since we're not re-emitting the
+ original IRStmt_Store, but rather doing it as part of the
+ helper function, we need to actually do a SC in the
+ helper, and assign the result bit to .resSC. Ugly.
*/
IRExpr* data = st->Ist.Store.data;
IRExpr* addr = st->Ist.Store.addr;
IRType d_ty = typeOfIRExpr(pce->bb->tyenv, data);
IRExpr* addrv = schemeEw_Atom( pce, addr );
- if (st->Ist.Store.resSC != IRTemp_INVALID) {
- tl_assert(typeOfIRTemp(pce->bb->tyenv, st->Ist.Store.resSC)
- == Ity_I1); /* viz, not something we want to shadow */
+ IRTemp resSC = st->Ist.Store.resSC;
+ if (resSC != IRTemp_INVALID) {
+ tl_assert(typeOfIRTemp(pce->bb->tyenv, resSC) == Ity_I1);
+ /* viz, not something we want to shadow */
+ /* also, throw out all store-conditional cases that
+ we can't handle */
+ if (pce->gWordTy == Ity_I32 && d_ty != Ity_I32)
+ goto unhandled;
+ if (pce->gWordTy == Ity_I64 && d_ty != Ity_I32 && d_ty != Ity_I64)
+ goto unhandled;
}
if (pce->gWordTy == Ity_I32) {
/* ------ 32 bit host/guest (cough, cough) ------ */
@@ -4246,9 +4362,24 @@
/* Integer word case */
case Ity_I32: {
IRExpr* datav = schemeEw_Atom( pce, data );
- gen_dirty_v_WWWW( pce,
- &check_store4_P, "check_store4_P",
- addr, addrv, data, datav );
+ if (resSC == IRTemp_INVALID) {
+ /* "normal" store */
+ gen_dirty_v_WWWW( pce,
+ &check_store4_P, "check_store4_P",
+ addr, addrv, data, datav );
+ } else {
+ /* store-conditional; need to snarf the success bit */
+ IRTemp resSC32
+ = gen_dirty_W_WWWW( pce,
+ &check_store4C_P,
+ "check_store4C_P",
+ addr, addrv, data, datav );
+ /* presumably resSC32 will really be Ity_I32. In
+ any case we'll get jumped by the IR sanity
+ checker if it's not, when it sees the
+ following statement. */
+ assign( 'I', pce, resSC, unop(Iop_32to1, mkexpr(resSC32)) );
+ }
break;
}
/* Integer subword cases */
@@ -4337,17 +4468,39 @@
/* Integer word case */
case Ity_I64: {
IRExpr* datav = schemeEw_Atom( pce, data );
- gen_dirty_v_WWWW( pce,
- &check_store8_P, "check_store8_P",
- addr, addrv, data, datav );
+ if (resSC == IRTemp_INVALID) {
+ /* "normal" store */
+ gen_dirty_v_WWWW( pce,
+ &check_store8_P, "check_store8_P",
+ addr, addrv, data, datav );
+ } else {
+ IRTemp resSC64
+ = gen_dirty_W_WWWW( pce,
+ &check_store8C_P,
+ "check_store8C_P",
+ addr, addrv, data, datav );
+ assign( 'I', pce, resSC, unop(Iop_64to1, mkexpr(resSC64)) );
+ }
break;
}
/* Integer subword cases */
case Ity_I32:
- gen_dirty_v_WWW( pce,
- &check_store4, "check_store4",
- addr, addrv,
- uwiden_to_host_word( pce, data ));
+ if (resSC == IRTemp_INVALID) {
+ /* "normal" store */
+ gen_dirty_v_WWW( pce,
+ &check_store4, "check_store4",
+ addr, addrv,
+ uwiden_to_host_word( pce, data ));
+ } else {
+ /* store-conditional; need to snarf the success bit */
+ IRTemp resSC64
+ = gen_dirty_W_WWW( pce,
+ &check_store4C,
+ "check_store4C",
+ addr, addrv,
+ uwiden_to_host_word( pce, data ));
+ assign( 'I', pce, resSC, unop(Iop_64to1, mkexpr(resSC64)) );
+ }
break;
case Ity_I16:
gen_dirty_v_WWW( pce,
From: <sv...@va...> - 2009-06-02 08:26:08
Author: sewardj
Date: 2009-06-02 09:25:59 +0100 (Tue, 02 Jun 2009)
New Revision: 10209
Log:
Change tool instrumentation routines to track VEX branches/DCAS
changes in r1898 (add support for LL/SC and get rid of various prior
kludges).
It also adds to coregrind the ability to synthesise a SIGBUS, since
ppc requires (or, at least, results in) SIGBUS to be thrown in the
case where the address to l{w,d}arx and st{w,d}cx. is not naturally
aligned.
Modified:
branches/DCAS/callgrind/main.c
branches/DCAS/coregrind/m_scheduler/scheduler.c
branches/DCAS/coregrind/m_signals.c
branches/DCAS/coregrind/pub_core_signals.h
branches/DCAS/drd/drd_load_store.c
branches/DCAS/exp-ptrcheck/h_main.c
branches/DCAS/helgrind/hg_main.c
branches/DCAS/massif/ms_main.c
branches/DCAS/memcheck/mc_machine.c
branches/DCAS/memcheck/mc_translate.c
Modified: branches/DCAS/callgrind/main.c
===================================================================
--- branches/DCAS/callgrind/main.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/callgrind/main.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -483,8 +483,14 @@
static
void addConstMemStoreStmt( IRSB* bbOut, UWord addr, UInt val, IRType hWordTy)
{
+ /* JRS 2009june01: re IRTemp_INVALID, am assuming that this
+ function is used only to create instrumentation, and not to
+ copy/reconstruct IRStmt_Stores that were in the incoming IR
+ superblock. If that is not a correct assumption, then things
+ will break badly on PowerPC, esp w/ threaded apps. */
addStmtToIRSB( bbOut,
IRStmt_Store(CLGEndness,
+ IRTemp_INVALID,
IRExpr_Const(hWordTy == Ity_I32 ?
IRConst_U32( addr ) :
IRConst_U64( addr )),
Modified: branches/DCAS/coregrind/m_scheduler/scheduler.c
===================================================================
--- branches/DCAS/coregrind/m_scheduler/scheduler.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/coregrind/m_scheduler/scheduler.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -614,22 +614,6 @@
trc = 0;
dispatch_ctr_SAVED = VG_(dispatch_ctr);
-# if defined(VGA_ppc32) || defined(VGA_ppc64)
- /* This is necessary due to the hacky way vex models reservations
- on ppc. It's really quite incorrect for each thread to have its
- own reservation flag/address, since it's really something that
- all threads share (that's the whole point). But having shared
- guest state is something we can't model with Vex. However, as
- per PaulM's 2.4.0ppc, the reservation is modelled using a
- reservation flag which is cleared at each context switch. So it
- is indeed possible to get away with a per thread-reservation if
- the thread's reservation is cleared before running it.
- */
- /* Clear any existing reservation that this thread might have made
- last time it was running. */
- VG_(threads)[tid].arch.vex.guest_RESVN = 0;
-# endif
-
# if defined(VGP_ppc32_aix5) || defined(VGP_ppc64_aix5)
/* On AIX, we need to get a plausible value for SPRG3 for this
thread, since it's used I think as a thread-state pointer. It
@@ -1102,6 +1086,10 @@
VG_(synth_fault)(tid);
break;
+ case VEX_TRC_JMP_SIGBUS:
+ VG_(synth_sigbus)(tid);
+ break;
+
case VEX_TRC_JMP_NODECODE:
VG_(message)(Vg_UserMsg,
"valgrind: Unrecognised instruction at address %#lx.",
Modified: branches/DCAS/coregrind/m_signals.c
===================================================================
--- branches/DCAS/coregrind/m_signals.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/coregrind/m_signals.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -1635,6 +1635,27 @@
deliver_signal(tid, &info, NULL);
}
+// Synthesise a SIGBUS.
+void VG_(synth_sigbus)(ThreadId tid)
+{
+ vki_siginfo_t info;
+
+ vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
+
+ VG_(memset)(&info, 0, sizeof(info));
+ info.si_signo = VKI_SIGBUS;
+ /* There are several meanings to SIGBUS (as per POSIX, presumably),
+ but the most widely understood is "invalid address alignment",
+ so let's use that. */
+ info.si_code = VKI_BUS_ADRALN;
+ /* If we knew the invalid address in question, we could put it
+ in .si_addr. Oh well. */
+ /* info.VKI_SIGINFO_si_addr = (void*)addr; */
+
+ resume_scheduler(tid);
+ deliver_signal(tid, &info, NULL);
+}
+
// Synthesise a SIGTRAP.
void VG_(synth_sigtrap)(ThreadId tid)
{
Modified: branches/DCAS/coregrind/pub_core_signals.h
===================================================================
--- branches/DCAS/coregrind/pub_core_signals.h 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/coregrind/pub_core_signals.h 2009-06-02 08:25:59 UTC (rev 10209)
@@ -73,6 +73,7 @@
extern void VG_(synth_fault_perms) (ThreadId tid, Addr addr);
extern void VG_(synth_sigill) (ThreadId tid, Addr addr);
extern void VG_(synth_sigtrap) (ThreadId tid);
+extern void VG_(synth_sigbus) (ThreadId tid);
/* Extend the stack to cover addr, if possible */
extern Bool VG_(extend_stack)(Addr addr, UInt maxsize);
Modified: branches/DCAS/drd/drd_load_store.c
===================================================================
--- branches/DCAS/drd/drd_load_store.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/drd/drd_load_store.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -449,7 +449,6 @@
IRSB* bb;
IRExpr** argv;
Bool instrument = True;
- Bool bus_locked = False;
/* Set up BB */
bb = emptyIRSB();
@@ -483,16 +482,6 @@
{
case Imbe_Fence:
break; /* not interesting */
- case Imbe_BusLock:
- case Imbe_SnoopedStoreBegin:
- tl_assert(! bus_locked);
- bus_locked = True;
- break;
- case Imbe_BusUnlock:
- case Imbe_SnoopedStoreEnd:
- tl_assert(bus_locked);
- bus_locked = False;
- break;
default:
tl_assert(0);
}
@@ -500,7 +489,8 @@
break;
case Ist_Store:
- if (instrument && ! bus_locked)
+ if (instrument && /* ignore stores resulting from st{d,w}cx. */
+ st->Ist.Store.resSC == IRTemp_INVALID)
{
instrument_store(bb,
st->Ist.Store.addr,
@@ -546,8 +536,7 @@
argv);
addStmtToIRSB(bb, IRStmt_Dirty(di));
}
- if ((mFx == Ifx_Write || mFx == Ifx_Modify)
- && ! bus_locked)
+ if (mFx == Ifx_Write || mFx == Ifx_Modify)
{
di = unsafeIRDirty_0_N(
/*regparms*/2,
@@ -570,8 +559,6 @@
}
}
- tl_assert(! bus_locked);
-
return bb;
}
Modified: branches/DCAS/exp-ptrcheck/h_main.c
===================================================================
--- branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/exp-ptrcheck/h_main.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -1536,7 +1536,6 @@
if (o == GOF(CTR) && is4) goto exactly1;
if (o == GOF(CIA) && is4) goto none;
if (o == GOF(IP_AT_SYSCALL) && is4) goto none;
- if (o == GOF(RESVN) && is4) goto none;
if (o == GOF(TISTART) && is4) goto none;
if (o == GOF(TILEN) && is4) goto none;
if (o == GOF(REDIR_SP) && is4) goto none;
@@ -1700,7 +1699,6 @@
if (o == GOF(CTR) && is8) goto exactly1;
if (o == GOF(CIA) && is8) goto none;
if (o == GOF(IP_AT_SYSCALL) && is8) goto none;
- if (o == GOF(RESVN) && is8) goto none;
if (o == GOF(TISTART) && is8) goto none;
if (o == GOF(TILEN) && is8) goto none;
if (o == GOF(REDIR_SP) && is8) goto none;
@@ -4223,11 +4221,25 @@
the post-hoc ugly hack of inspecting and "improving" the
shadow data after the store, in the case where it isn't an
aligned word store.
+
+ Only word-sized values are shadowed. If this is a
+ store-conditional, .resSC will denote a non-word-typed
+ temp, and so we don't need to shadow it. Assert about the
+ type, tho.
+
+ JRS 1 June 09: urr, this totally breaks with
+ store-conditional, since there's no platform-independent
+ way for the helper to do that and extract the success bit.
+ Ick.
*/
IRExpr* data = st->Ist.Store.data;
IRExpr* addr = st->Ist.Store.addr;
IRType d_ty = typeOfIRExpr(pce->bb->tyenv, data);
IRExpr* addrv = schemeEw_Atom( pce, addr );
+ if (st->Ist.Store.resSC != IRTemp_INVALID) {
+ tl_assert(typeOfIRTemp(pce->bb->tyenv, st->Ist.Store.resSC)
+ == Ity_I1); /* viz, not something we want to shadow */
+ }
if (pce->gWordTy == Ity_I32) {
/* ------ 32 bit host/guest (cough, cough) ------ */
switch (d_ty) {
Modified: branches/DCAS/helgrind/hg_main.c
===================================================================
--- branches/DCAS/helgrind/hg_main.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/helgrind/hg_main.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -3601,40 +3601,6 @@
}
-//static void instrument_memory_bus_event ( IRSB* bbOut, IRMBusEvent event )
-//{
-// switch (event) {
-// case Imbe_SnoopedStoreBegin:
-// case Imbe_SnoopedStoreEnd:
-// /* These arise from ppc stwcx. insns. They should perhaps be
-// handled better. */
-// break;
-// case Imbe_Fence:
-// break; /* not interesting */
-// case Imbe_BusLock:
-// case Imbe_BusUnlock:
-// addStmtToIRSB(
-// bbOut,
-// IRStmt_Dirty(
-// unsafeIRDirty_0_N(
-// 0/*regparms*/,
-// event == Imbe_BusLock ? "evh__bus_lock"
-// : "evh__bus_unlock",
-// VG_(fnptr_to_fnentry)(
-// event == Imbe_BusLock ? &evh__bus_lock
-// : &evh__bus_unlock
-// ),
-// mkIRExprVec_0()
-// )
-// )
-// );
-// break;
-// default:
-// tl_assert(0);
-// }
-//}
-
-
static
IRSB* hg_instrument ( VgCallbackClosure* closure,
IRSB* bbIn,
@@ -3644,7 +3610,6 @@
{
Int i;
IRSB* bbOut;
- Bool isSnoopedStore = False;
Addr64 cia; /* address of current insn */
IRStmt* st;
@@ -3697,22 +3662,6 @@
switch (st->Ist.MBE.event) {
case Imbe_Fence:
break; /* not interesting */
- /* Imbe_Bus{Lock,Unlock} arise from x86/amd64 LOCK
- prefixed instructions. */
- case Imbe_BusLock:
- break;
- case Imbe_BusUnlock:
- break;
- /* Imbe_SnoopedStore{Begin,End} arise from ppc
- stwcx. instructions. */
- case Imbe_SnoopedStoreBegin:
- tl_assert(isSnoopedStore == False);
- isSnoopedStore = True;
- break;
- case Imbe_SnoopedStoreEnd:
- tl_assert(isSnoopedStore == True);
- isSnoopedStore = False;
- break;
default:
goto unhandled;
}
@@ -3721,8 +3670,7 @@
case Ist_CAS: {
/* Atomic read-modify-write cycle. Just pretend it's a
read. */
- IRCAS* cas = st->Ist.CAS.details;
- tl_assert(!isSnoopedStore);
+ IRCAS* cas = st->Ist.CAS.details;
/* FIXME: handle DCAS ! */
if (cas->oldHi != IRTemp_INVALID || cas->expdHi || cas->dataHi)
goto unhandled;
@@ -3737,7 +3685,9 @@
}
case Ist_Store:
- if (!isSnoopedStore)
+ /* It seems we pretend that store-conditionals don't
+ exist, viz, just ignore them ... */
+ if (st->Ist.Store.resSC == IRTemp_INVALID)
instrument_mem_access(
bbOut,
st->Ist.Store.addr,
@@ -3745,9 +3695,11 @@
True/*isStore*/,
sizeofIRType(hWordTy)
);
- break;
+ break;
case Ist_WrTmp: {
+ /* ... whereas here we don't care whether a load is a
+ vanilla one or a load-linked. */
IRExpr* data = st->Ist.WrTmp.data;
if (data->tag == Iex_Load) {
instrument_mem_access(
@@ -3776,11 +3728,6 @@
sizeofIRType(hWordTy)
);
}
- /* This isn't really correct. Really the
- instrumentation should be only added when
- !isSnoopedStore, just like with
- Ist_Store. Still, I don't think this is
- particularly important. */
if (d->mFx == Ifx_Write || d->mFx == Ifx_Modify) {
instrument_mem_access(
bbOut, d->mAddr, dataSize, True/*isStore*/,
Modified: branches/DCAS/massif/ms_main.c
===================================================================
--- branches/DCAS/massif/ms_main.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/massif/ms_main.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -1871,12 +1871,14 @@
IRTemp t2 = newIRTemp(sbOut->tyenv, Ity_I64);
IRExpr* counter_addr = mkIRExpr_HWord( (HWord)&guest_instrs_executed );
- IRStmt* st1 = IRStmt_WrTmp(t1, IRExpr_Load(END, Ity_I64, counter_addr));
+ IRStmt* st1 = IRStmt_WrTmp(t1, IRExpr_Load(False/*!isLL*/,
+ END, Ity_I64, counter_addr));
IRStmt* st2 =
IRStmt_WrTmp(t2,
IRExpr_Binop(Iop_Add64, IRExpr_RdTmp(t1),
IRExpr_Const(IRConst_U64(n))));
- IRStmt* st3 = IRStmt_Store(END, counter_addr, IRExpr_RdTmp(t2));
+ IRStmt* st3 = IRStmt_Store(END, IRTemp_INVALID/*"not store-conditional"*/,
+ counter_addr, IRExpr_RdTmp(t2));
addStmtToIRSB( sbOut, st1 );
addStmtToIRSB( sbOut, st2 );
Modified: branches/DCAS/memcheck/mc_machine.c
===================================================================
--- branches/DCAS/memcheck/mc_machine.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/memcheck/mc_machine.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -182,7 +182,6 @@
if (o == GOF(CIA) && sz == 8) return -1;
if (o == GOF(IP_AT_SYSCALL) && sz == 8) return -1; /* slot unused */
- if (o == GOF(RESVN) && sz == 8) return -1;
if (o == GOF(FPROUND) && sz == 4) return -1;
if (o == GOF(EMWARN) && sz == 4) return -1;
if (o == GOF(TISTART) && sz == 8) return -1;
@@ -341,7 +340,6 @@
if (o == GOF(CIA) && sz == 4) return -1;
if (o == GOF(IP_AT_SYSCALL) && sz == 4) return -1; /* slot unused */
- if (o == GOF(RESVN) && sz == 4) return -1;
if (o == GOF(FPROUND) && sz == 4) return -1;
if (o == GOF(VRSAVE) && sz == 4) return -1;
if (o == GOF(EMWARN) && sz == 4) return -1;
Modified: branches/DCAS/memcheck/mc_translate.c
===================================================================
--- branches/DCAS/memcheck/mc_translate.c 2009-06-02 08:24:20 UTC (rev 10208)
+++ branches/DCAS/memcheck/mc_translate.c 2009-06-02 08:25:59 UTC (rev 10209)
@@ -4179,6 +4179,32 @@
st->Ist.Store.data,
NULL /* shadow data */,
NULL/*guard*/ );
+ /* If this is a store conditional, it writes to .resSC a
+ value indicating whether or not the store succeeded.
+ Just claim this value is always defined. In the
+ PowerPC interpretation of store-conditional,
+ definedness of the success indication depends on
+ whether the address of the store matches the
+ reservation address. But we can't tell that here (and
+ anyway, we're not being PowerPC-specific). At least we
+ are guaranteed that the definedness of the store
+ address, and its addressability, will be checked as per
+ normal. So it seems pretty safe to just say that the
+ success indication is always defined.
+
+ In schemeS, for origin tracking, we must
+ correspondingly set a no-origin value for the origin
+ shadow of resSC.
+ */
+ if (st->Ist.Store.resSC != IRTemp_INVALID) {
+ assign( 'V', &mce,
+ findShadowTmpV(&mce, st->Ist.Store.resSC),
+ definedOfType(
+ shadowTypeV(
+ typeOfIRTemp(mce.sb->tyenv,
+ st->Ist.Store.resSC)
+ )));
+ }
break;
case Ist_Exit:
@@ -4899,6 +4925,14 @@
dataB = schemeE( mce, st->Ist.Store.data );
gen_store_b( mce, dszB, st->Ist.Store.addr, 0/*offset*/, dataB,
NULL/*guard*/ );
+ /* For the rationale behind this, see comments at the place
+ where the V-shadow for .resSC is constructed, in the main
+ loop in MC_(instrument). In short, we regard .resSC as
+ always-defined. */
+ if (st->Ist.Store.resSC != IRTemp_INVALID) {
+ assign( 'B', mce, findShadowTmpB(mce, st->Ist.Store.resSC),
+ mkU32(0) );
+ }
break;
}
case Ist_Put: {
From: <sv...@va...> - 2009-06-02 08:24:28
Author: sewardj
Date: 2009-06-02 09:24:20 +0100 (Tue, 02 Jun 2009)
New Revision: 10208
Log:
Remove blank line and add a missing include needed on ppc{32,64}-linux.
Modified:
branches/DCAS/coregrind/m_machine.c
Modified: branches/DCAS/coregrind/m_machine.c
===================================================================
--- branches/DCAS/coregrind/m_machine.c 2009-06-02 07:09:27 UTC (rev 10207)
+++ branches/DCAS/coregrind/m_machine.c 2009-06-02 08:24:20 UTC (rev 10208)
@@ -36,6 +36,7 @@
#include "pub_core_machine.h"
#include "pub_core_cpuid.h"
#include "pub_core_libcsignal.h" // for ppc32 messing with SIGILL and SIGFPE
+#include "pub_core_debuglog.h"
#define INSTR_PTR(regs) ((regs).vex.VG_INSTR_PTR)
@@ -583,7 +584,6 @@
VG_(sigaction)(VKI_SIGFPE, NULL, &saved_sigfpe_act);
tmp_sigfpe_act = saved_sigfpe_act;
-
/* NODEFER: signal handler does not return (from the kernel's point of
view), hence if it is to successfully catch a signal more than once,
we need the NODEFER flag. */
Author: sewardj
Date: 2009-06-02 09:18:56 +0100 (Tue, 02 Jun 2009)
New Revision: 1898
Log:
As part of changes to support atomic instructions directly in
Valgrind, add support at the IR level for linked loads and store
conditionals, a la the lwarx/stwcx etc insns in ppc, but abstractified
suitably.
IRExpr_Load gets a new Bool field, indicating whether it's a normal
load or a load-linked (reservation-setting).
IRStmt_Store gets a new IRTemp field. If this is IRTemp_INVALID, this
is a normal store. If it's not IRTemp_INVALID, this is a
store-conditional, and the success/failure bit resulting from the
store is written to the new IRTemp.
There are restrictions on alignment of addresses in LL and SC IR
loads. See libvex_ir.h for details.
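The canonical use of the LL/SC pair being abstracted here is a retry loop, e.g. ppc's lwarx/stwcx. atomic update idiom. A sketch of what such a loop computes, using C11 atomics as a stand-in (illustrative only; in the IR the LL is an `IRExpr_Load` with the new Bool set, and the SC an `IRStmt_Store` writing its success bit to the new IRTemp):

```c
#include <stdint.h>
#include <stdatomic.h>

/* What an lwarx/stwcx. retry loop computes: atomically add n to *p,
   retrying whenever the conditional store reports failure. */
static uint32_t atomic_add32(_Atomic uint32_t *p, uint32_t n)
{
    uint32_t old, newv;
    do {
        old  = atomic_load(p);   /* load-linked       (lwarx)   */
        newv = old + n;
    } while (!atomic_compare_exchange_weak(p, &old, newv));
                                 /* store-conditional (stwcx.)  */
    return newv;
}
```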
Other small IR defn changes:
* IRMBusEvent loses Imbe_BusLock, Imbe_BusUnlock,
Imbe_SnoopedStoreBegin, Imbe_SnoopedStoreEnd. These were all
semantic kludges and can now be removed.
* ppc32 and ppc64 guest state loses the guest_RESVN field. This was
part of a kludge to fake up the behaviour of st{w,d}cx. enough to
make threaded code work. It is no longer necessary.
The rest of these changes just pushes these through the compilation
pipeline in the normal way. One minor notable point is that iropt
considers a linked load as inhibiting tree-building, so as to
guarantee that it will not reorder linked-loads w.r.t. any other
loads.
Modified:
branches/DCAS/priv/guest-amd64/toIR.c
branches/DCAS/priv/guest-arm/toIR.c
branches/DCAS/priv/guest-ppc/ghelpers.c
branches/DCAS/priv/guest-ppc/toIR.c
branches/DCAS/priv/guest-x86/toIR.c
branches/DCAS/priv/host-amd64/isel.c
branches/DCAS/priv/host-arm/isel.c
branches/DCAS/priv/host-ppc/hdefs.c
branches/DCAS/priv/host-ppc/hdefs.h
branches/DCAS/priv/host-ppc/isel.c
branches/DCAS/priv/host-x86/isel.c
branches/DCAS/priv/ir/irdefs.c
branches/DCAS/priv/ir/irmatch.c
branches/DCAS/priv/ir/iropt.c
branches/DCAS/pub/libvex_guest_ppc32.h
branches/DCAS/pub/libvex_guest_ppc64.h
branches/DCAS/pub/libvex_ir.h
branches/DCAS/pub/libvex_trc_values.h
Modified: branches/DCAS/priv/guest-amd64/toIR.c
===================================================================
--- branches/DCAS/priv/guest-amd64/toIR.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/guest-amd64/toIR.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -305,12 +305,12 @@
static void storeLE ( IRExpr* addr, IRExpr* data )
{
- stmt( IRStmt_Store(Iend_LE,addr,data) );
+ stmt( IRStmt_Store(Iend_LE, IRTemp_INVALID, addr, data) );
}
static IRExpr* loadLE ( IRType ty, IRExpr* data )
{
- return IRExpr_Load(Iend_LE,ty,data);
+ return IRExpr_Load(False, Iend_LE, ty, data);
}
static IROp mkSizedOp ( IRType ty, IROp op8 )
@@ -8825,9 +8825,6 @@
/* pfx holds the summary of prefixes. */
Prefix pfx = PFX_EMPTY;
- /* do we need follow the insn with MBusEvent(BusUnlock) ? */
- Bool unlock_bus_after_insn = False;
-
/* Set result defaults. */
dres.whatNext = Dis_Continue;
dres.len = 0;
@@ -8975,8 +8972,6 @@
if (pfx & PFX_LOCK) {
if (can_be_used_with_LOCK_prefix( (UChar*)&guest_code[delta] )) {
- stmt( IRStmt_MBE(Imbe_BusLock) );
- unlock_bus_after_insn = True;
DIP("lock ");
} else {
*expect_CAS = False;
@@ -15012,18 +15007,6 @@
nameIRegE(sz, pfx, modrm));
} else {
*expect_CAS = True;
- /* Need to add IRStmt_MBE(Imbe_BusLock). */
- if (pfx & PFX_LOCK) {
- /* check it's already been taken care of */
- vassert(unlock_bus_after_insn);
- } else {
- vassert(!unlock_bus_after_insn);
- stmt( IRStmt_MBE(Imbe_BusLock) );
- unlock_bus_after_insn = True;
- }
- /* Because unlock_bus_after_insn is now True, generic logic
- at the bottom of disInstr will add the
- IRStmt_MBE(Imbe_BusUnlock). */
addr = disAMode ( &alen, vbi, pfx, delta, dis_buf, 0 );
assign( t1, loadLE(ty, mkexpr(addr)) );
assign( t2, getIRegG(sz, pfx, modrm) );
@@ -16061,8 +16044,6 @@
insn, but nevertheless be paranoid and update it again right
now. */
stmt( IRStmt_Put( OFFB_RIP, mkU64(guest_RIP_curr_instr) ) );
- if (unlock_bus_after_insn)
- stmt( IRStmt_MBE(Imbe_BusUnlock) );
jmp_lit(Ijk_NoDecode, guest_RIP_curr_instr);
dres.whatNext = Dis_StopHere;
dres.len = 0;
@@ -16079,8 +16060,6 @@
decode_success:
/* All decode successes end up here. */
DIP("\n");
- if (unlock_bus_after_insn)
- stmt( IRStmt_MBE(Imbe_BusUnlock) );
dres.len = (Int)toUInt(delta - delta_start);
return dres;
}
Modified: branches/DCAS/priv/guest-arm/toIR.c
===================================================================
--- branches/DCAS/priv/guest-arm/toIR.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/guest-arm/toIR.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -495,7 +495,7 @@
static void storeLE ( IRExpr* addr, IRExpr* data )
{
- stmt( IRStmt_Store(Iend_LE,addr,data) );
+ stmt( IRStmt_Store(Iend_LE, IRTemp_INVALID, addr, data) );
}
static IRExpr* unop ( IROp op, IRExpr* a )
@@ -545,7 +545,7 @@
static IRExpr* loadLE ( IRType ty, IRExpr* data )
{
- return IRExpr_Load(Iend_LE,ty,data);
+ return IRExpr_Load(False, Iend_LE, ty, data);
}
#if 0
Modified: branches/DCAS/priv/guest-ppc/ghelpers.c
===================================================================
--- branches/DCAS/priv/guest-ppc/ghelpers.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/guest-ppc/ghelpers.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -477,8 +477,6 @@
vex_state->guest_EMWARN = EmWarn_NONE;
- vex_state->guest_RESVN = 0;
-
vex_state->guest_TISTART = 0;
vex_state->guest_TILEN = 0;
@@ -636,7 +634,7 @@
vex_state->guest_EMWARN = EmWarn_NONE;
- vex_state->guest_RESVN = 0;
+ vex_state->padding = 0;
vex_state->guest_TISTART = 0;
vex_state->guest_TILEN = 0;
@@ -650,6 +648,8 @@
vex_state->guest_IP_AT_SYSCALL = 0;
vex_state->guest_SPRG3_RO = 0;
+
+ vex_state->padding2 = 0;
}
@@ -767,7 +767,7 @@
/* Describe any sections to be regarded by Memcheck as
'always-defined'. */
- .n_alwaysDefd = 12,
+ .n_alwaysDefd = 11,
.alwaysDefd
= { /* 0 */ ALWAYSDEFD32(guest_CIA),
@@ -776,12 +776,11 @@
/* 3 */ ALWAYSDEFD32(guest_TILEN),
/* 4 */ ALWAYSDEFD32(guest_VSCR),
/* 5 */ ALWAYSDEFD32(guest_FPROUND),
- /* 6 */ ALWAYSDEFD32(guest_RESVN),
- /* 7 */ ALWAYSDEFD32(guest_NRADDR),
- /* 8 */ ALWAYSDEFD32(guest_NRADDR_GPR2),
- /* 9 */ ALWAYSDEFD32(guest_REDIR_SP),
- /* 10 */ ALWAYSDEFD32(guest_REDIR_STACK),
- /* 11 */ ALWAYSDEFD32(guest_IP_AT_SYSCALL)
+ /* 6 */ ALWAYSDEFD32(guest_NRADDR),
+ /* 7 */ ALWAYSDEFD32(guest_NRADDR_GPR2),
+ /* 8 */ ALWAYSDEFD32(guest_REDIR_SP),
+ /* 9 */ ALWAYSDEFD32(guest_REDIR_STACK),
+ /* 10 */ ALWAYSDEFD32(guest_IP_AT_SYSCALL)
}
};
@@ -818,12 +817,11 @@
/* 3 */ ALWAYSDEFD64(guest_TILEN),
/* 4 */ ALWAYSDEFD64(guest_VSCR),
/* 5 */ ALWAYSDEFD64(guest_FPROUND),
- /* 6 */ ALWAYSDEFD64(guest_RESVN),
- /* 7 */ ALWAYSDEFD64(guest_NRADDR),
- /* 8 */ ALWAYSDEFD64(guest_NRADDR_GPR2),
- /* 9 */ ALWAYSDEFD64(guest_REDIR_SP),
- /* 10 */ ALWAYSDEFD64(guest_REDIR_STACK),
- /* 11 */ ALWAYSDEFD64(guest_IP_AT_SYSCALL)
+ /* 6 */ ALWAYSDEFD64(guest_NRADDR),
+ /* 7 */ ALWAYSDEFD64(guest_NRADDR_GPR2),
+ /* 8 */ ALWAYSDEFD64(guest_REDIR_SP),
+ /* 9 */ ALWAYSDEFD64(guest_REDIR_STACK),
+ /* 10 */ ALWAYSDEFD64(guest_IP_AT_SYSCALL)
}
};
Modified: branches/DCAS/priv/guest-ppc/toIR.c
===================================================================
--- branches/DCAS/priv/guest-ppc/toIR.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/guest-ppc/toIR.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -232,7 +232,6 @@
#define OFFB_EMWARN offsetofPPCGuestState(guest_EMWARN)
#define OFFB_TISTART offsetofPPCGuestState(guest_TISTART)
#define OFFB_TILEN offsetofPPCGuestState(guest_TILEN)
-#define OFFB_RESVN offsetofPPCGuestState(guest_RESVN)
#define OFFB_NRADDR offsetofPPCGuestState(guest_NRADDR)
#define OFFB_NRADDR_GPR2 offsetofPPCGuestState(guest_NRADDR_GPR2)
@@ -326,7 +325,6 @@
PPC_GST_EMWARN, // Emulation warnings
PPC_GST_TISTART,// For icbi: start of area to invalidate
PPC_GST_TILEN, // For icbi: length of area to invalidate
- PPC_GST_RESVN, // For lwarx/stwcx.
PPC_GST_IP_AT_SYSCALL, // the CIA of the most recently executed SC insn
PPC_GST_SPRG3_RO, // SPRG3
PPC_GST_MAX
@@ -464,11 +462,12 @@
stmt( IRStmt_WrTmp(dst, e) );
}
+/* This generates a normal (non store-conditional) store. */
static void storeBE ( IRExpr* addr, IRExpr* data )
{
- vassert(typeOfIRExpr(irsb->tyenv, addr) == Ity_I32 ||
- typeOfIRExpr(irsb->tyenv, addr) == Ity_I64);
- stmt( IRStmt_Store(Iend_BE,addr,data) );
+ IRType tyA = typeOfIRExpr(irsb->tyenv, addr);
+ vassert(tyA == Ity_I32 || tyA == Ity_I64);
+ stmt( IRStmt_Store(Iend_BE, IRTemp_INVALID, addr, data) );
}
static IRExpr* unop ( IROp op, IRExpr* a )
@@ -517,11 +516,23 @@
return IRExpr_Const(IRConst_U64(i));
}
+/* This generates a normal (non load-linked) load. */
static IRExpr* loadBE ( IRType ty, IRExpr* data )
{
- return IRExpr_Load(Iend_BE,ty,data);
+ return IRExpr_Load(False, Iend_BE, ty, data);
}
+/* And this, a linked load. */
+static IRExpr* loadlinkedBE ( IRType ty, IRExpr* data )
+{
+ if (mode64) {
+ vassert(ty == Ity_I32 || ty == Ity_I64);
+ } else {
+ vassert(ty == Ity_I32);
+ }
+ return IRExpr_Load(True, Iend_BE, ty, data);
+}
+
static IRExpr* mkOR1 ( IRExpr* arg1, IRExpr* arg2 )
{
vassert(typeOfIRExpr(irsb->tyenv, arg1) == Ity_I1);
@@ -832,26 +843,26 @@
}
/* IR narrows I32/I64 -> I8/I16/I32 */
-static IRExpr* mkSzNarrow8 ( IRType ty, IRExpr* src )
+static IRExpr* mkNarrowTo8 ( IRType ty, IRExpr* src )
{
vassert(ty == Ity_I32 || ty == Ity_I64);
return ty == Ity_I64 ? unop(Iop_64to8, src) : unop(Iop_32to8, src);
}
-static IRExpr* mkSzNarrow16 ( IRType ty, IRExpr* src )
+static IRExpr* mkNarrowTo16 ( IRType ty, IRExpr* src )
{
vassert(ty == Ity_I32 || ty == Ity_I64);
return ty == Ity_I64 ? unop(Iop_64to16, src) : unop(Iop_32to16, src);
}
-static IRExpr* mkSzNarrow32 ( IRType ty, IRExpr* src )
+static IRExpr* mkNarrowTo32 ( IRType ty, IRExpr* src )
{
vassert(ty == Ity_I32 || ty == Ity_I64);
return ty == Ity_I64 ? unop(Iop_64to32, src) : src;
}
/* Signed/Unsigned IR widens I8/I16/I32 -> I32/I64 */
-static IRExpr* mkSzWiden8 ( IRType ty, IRExpr* src, Bool sined )
+static IRExpr* mkWidenFrom8 ( IRType ty, IRExpr* src, Bool sined )
{
IROp op;
vassert(ty == Ity_I32 || ty == Ity_I64);
@@ -860,7 +871,7 @@
return unop(op, src);
}
-static IRExpr* mkSzWiden16 ( IRType ty, IRExpr* src, Bool sined )
+static IRExpr* mkWidenFrom16 ( IRType ty, IRExpr* src, Bool sined )
{
IROp op;
vassert(ty == Ity_I32 || ty == Ity_I64);
@@ -869,7 +880,7 @@
return unop(op, src);
}
-static IRExpr* mkSzWiden32 ( IRType ty, IRExpr* src, Bool sined )
+static IRExpr* mkWidenFrom32 ( IRType ty, IRExpr* src, Bool sined )
{
vassert(ty == Ity_I32 || ty == Ity_I64);
if (ty == Ity_I32)
@@ -1113,30 +1124,6 @@
/* non-zero rotate */ rot );
}
-#if 0
-/* ROTL32_64(src64, rot_amt5)
- Weirdo 32bit rotl on ppc64:
- rot32 = ROTL(src_lo32,y);
- return (rot32|rot32);
-*/
-static IRExpr* /* :: Ity_I64 */ ROTL32_64 ( IRExpr* src64,
- IRExpr* rot_amt )
-{
- IRExpr *mask, *rot32;
- vassert(mode64); // used only in 64bit mode
- vassert(typeOfIRExpr(irsb->tyenv,src64) == Ity_I64);
- vassert(typeOfIRExpr(irsb->tyenv,rot_amt) == Ity_I8);
-
- mask = binop(Iop_And8, rot_amt, mkU8(31));
- rot32 = ROTL( unop(Iop_64to32, src64), rot_amt );
-
- return binop(Iop_Or64,
- binop(Iop_Shl64, unop(Iop_32Uto64, rot32), mkU8(32)),
- unop(Iop_32Uto64, rot32));
-}
-#endif
-
-
/* Standard effective address calc: (rA + rB) */
static IRExpr* ea_rA_idxd ( UInt rA, UInt rB )
{
@@ -1208,6 +1195,38 @@
}
+/* Exit the trace if ADDR (intended to be a guest memory address) is
+ not ALIGN-aligned, generating a request for a SIGBUS followed by a
+ restart of the current insn. */
+static void gen_SIGBUS_if_misaligned ( IRTemp addr, UChar align )
+{
+ vassert(align == 4 || align == 8);
+ if (mode64) {
+ vassert(typeOfIRTemp(irsb->tyenv, addr) == Ity_I64);
+ stmt(
+ IRStmt_Exit(
+ binop(Iop_CmpNE64,
+ binop(Iop_And64, mkexpr(addr), mkU64(align-1)),
+ mkU64(0)),
+ Ijk_SigBUS,
+ IRConst_U64( guest_CIA_curr_instr )
+ )
+ );
+ } else {
+ vassert(typeOfIRTemp(irsb->tyenv, addr) == Ity_I32);
+ stmt(
+ IRStmt_Exit(
+ binop(Iop_CmpNE32,
+ binop(Iop_And32, mkexpr(addr), mkU32(align-1)),
+ mkU32(0)),
+ Ijk_SigBUS,
+ IRConst_U32( guest_CIA_curr_instr )
+ )
+ );
+ }
+}
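[The alignment test this new helper emits is the standard power-of-two mask check: an address is ALIGN-aligned iff `(addr & (align-1)) == 0`. A minimal plain-C sketch of the same predicate — illustrative only, not VEX IR:]

```c
#include <assert.h>
#include <stdint.h>

/* Non-zero iff addr is not aligned to 'align'.  'align' must be a
   power of two; this mirrors the CmpNE(And(addr, align-1), 0) test
   that gen_SIGBUS_if_misaligned builds in IR. */
static int is_misaligned(uint64_t addr, uint64_t align)
{
    return (addr & (align - 1)) != 0;
}
```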
+
+
/* Generate AbiHints which mark points at which the ELF or PowerOpen
ABIs say that the stack red zone (viz, -N(r1) .. -1(r1), for some
N) becomes undefined. That is at function calls and returns. ELF
@@ -2125,9 +2144,6 @@
binop( Iop_Shl32, getXER_CA32(), mkU8(29)),
getXER_BC32()));
- case PPC_GST_RESVN:
- return IRExpr_Get( OFFB_RESVN, ty);
-
default:
vex_printf("getGST(ppc): reg = %u", reg);
vpanic("getGST(ppc)");
@@ -2257,11 +2273,6 @@
stmt( IRStmt_Put( OFFB_TILEN, src) );
break;
- case PPC_GST_RESVN:
- vassert( ty_src == ty );
- stmt( IRStmt_Put( OFFB_RESVN, src) );
- break;
-
default:
vex_printf("putGST(ppc): reg = %u", reg);
vpanic("putGST(ppc)");
@@ -2495,7 +2506,7 @@
flag_OE ? "o" : "", flag_rC ? ".":"",
rD_addr, rA_addr, rB_addr);
// rD = rA + rB + XER[CA]
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
assign( rD, binop( mkSzOp(ty, Iop_Add8), mkexpr(rA),
binop( mkSzOp(ty, Iop_Add8),
mkexpr(rB), mkexpr(old_xer_ca))) );
@@ -2521,7 +2532,7 @@
rD_addr, rA_addr, rB_addr);
// rD = rA + (-1) + XER[CA]
// => Just another form of adde
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
min_one = mkSzImm(ty, (Long)-1);
assign( rD, binop( mkSzOp(ty, Iop_Add8), mkexpr(rA),
binop( mkSzOp(ty, Iop_Add8),
@@ -2547,7 +2558,7 @@
rD_addr, rA_addr, rB_addr);
// rD = rA + (0) + XER[CA]
// => Just another form of adde
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
assign( rD, binop( mkSzOp(ty, Iop_Add8),
mkexpr(rA), mkexpr(old_xer_ca)) );
set_XER_CA( ty, PPCG_FLAG_OP_ADDE,
@@ -2744,7 +2755,7 @@
flag_OE ? "o" : "", flag_rC ? ".":"",
rD_addr, rA_addr, rB_addr);
// rD = (log not)rA + rB + XER[CA]
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
assign( rD, binop( mkSzOp(ty, Iop_Add8),
unop( mkSzOp(ty, Iop_Not8), mkexpr(rA)),
binop( mkSzOp(ty, Iop_Add8),
@@ -2771,7 +2782,7 @@
rD_addr, rA_addr);
// rD = (log not)rA + (-1) + XER[CA]
// => Just another form of subfe
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
min_one = mkSzImm(ty, (Long)-1);
assign( rD, binop( mkSzOp(ty, Iop_Add8),
unop( mkSzOp(ty, Iop_Not8), mkexpr(rA)),
@@ -2798,7 +2809,7 @@
rD_addr, rA_addr);
// rD = (log not)rA + (0) + XER[CA]
// => Just another form of subfe
- assign( old_xer_ca, mkSzWiden32(ty, getXER_CA32(), False) );
+ assign( old_xer_ca, mkWidenFrom32(ty, getXER_CA32(), False) );
assign( rD, binop( mkSzOp(ty, Iop_Add8),
unop( mkSzOp(ty, Iop_Not8),
mkexpr(rA)), mkexpr(old_xer_ca)) );
@@ -2936,8 +2947,8 @@
if (flag_L == 1) {
putCR321(crfD, unop(Iop_64to8, binop(Iop_CmpORD64S, a, b)));
} else {
- a = mkSzNarrow32( ty, a );
- b = mkSzNarrow32( ty, b );
+ a = mkNarrowTo32( ty, a );
+ b = mkNarrowTo32( ty, b );
putCR321(crfD, unop(Iop_32to8, binop(Iop_CmpORD32S, a, b)));
}
putCR0( crfD, getXER_SO() );
@@ -2949,8 +2960,8 @@
if (flag_L == 1) {
putCR321(crfD, unop(Iop_64to8, binop(Iop_CmpORD64U, a, b)));
} else {
- a = mkSzNarrow32( ty, a );
- b = mkSzNarrow32( ty, b );
+ a = mkNarrowTo32( ty, a );
+ b = mkNarrowTo32( ty, b );
putCR321(crfD, unop(Iop_32to8, binop(Iop_CmpORD32U, a, b)));
}
putCR0( crfD, getXER_SO() );
@@ -2977,8 +2988,8 @@
if (flag_L == 1) {
putCR321(crfD, unop(Iop_64to8, binop(Iop_CmpORD64S, a, b)));
} else {
- a = mkSzNarrow32( ty, a );
- b = mkSzNarrow32( ty, b );
+ a = mkNarrowTo32( ty, a );
+ b = mkNarrowTo32( ty, b );
putCR321(crfD, unop(Iop_32to8,binop(Iop_CmpORD32S, a, b)));
}
putCR0( crfD, getXER_SO() );
@@ -2996,8 +3007,8 @@
if (flag_L == 1) {
putCR321(crfD, unop(Iop_64to8, binop(Iop_CmpORD64U, a, b)));
} else {
- a = mkSzNarrow32( ty, a );
- b = mkSzNarrow32( ty, b );
+ a = mkNarrowTo32( ty, a );
+ b = mkNarrowTo32( ty, b );
putCR321(crfD, unop(Iop_32to8, binop(Iop_CmpORD32U, a, b)));
}
putCR0( crfD, getXER_SO() );
@@ -3117,7 +3128,7 @@
// Iop_Clz32 undefined for arg==0, so deal with that case:
irx = binop(Iop_CmpNE32, lo32, mkU32(0));
- assign(rA, mkSzWiden32(ty,
+ assign(rA, mkWidenFrom32(ty,
IRExpr_Mux0X( unop(Iop_1Uto8, irx),
mkU32(32),
unop(Iop_Clz32, lo32)),
@@ -3538,7 +3549,7 @@
case 0x22: // lbz (Load B & Zero, PPC32 p433)
DIP("lbz r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I8, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden8(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom8(ty, val, False) );
break;
case 0x23: // lbzu (Load B & Zero, Update, PPC32 p434)
@@ -3548,14 +3559,14 @@
}
DIP("lbzu r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I8, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden8(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom8(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x2A: // lha (Load HW Alg, PPC32 p445)
DIP("lha r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, True) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, True) );
break;
case 0x2B: // lhau (Load HW Alg, Update, PPC32 p446)
@@ -3565,14 +3576,14 @@
}
DIP("lhau r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, True) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, True) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x28: // lhz (Load HW & Zero, PPC32 p450)
DIP("lhz r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, False) );
break;
case 0x29: // lhzu (Load HW & Zero, Update, PPC32 p451)
@@ -3582,14 +3593,14 @@
}
DIP("lhzu r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x20: // lwz (Load W & Zero, PPC32 p460)
DIP("lwz r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I32, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden32(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, val, False) );
break;
case 0x21: // lwzu (Load W & Zero, Update, PPC32 p461)
@@ -3599,7 +3610,7 @@
}
DIP("lwzu r%u,%d(r%u)\n", rD_addr, (Int)simm16, rA_addr);
val = loadBE(Ity_I32, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden32(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
@@ -3618,14 +3629,14 @@
return False;
}
val = loadBE(Ity_I8, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden8(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom8(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x057: // lbzx (Load B & Zero, Indexed, PPC32 p436)
DIP("lbzx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I8, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden8(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom8(ty, val, False) );
break;
case 0x177: // lhaux (Load HW Alg, Update Indexed, PPC32 p447)
@@ -3635,14 +3646,14 @@
}
DIP("lhaux r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, True) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, True) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x157: // lhax (Load HW Alg, Indexed, PPC32 p448)
DIP("lhax r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, True) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, True) );
break;
case 0x137: // lhzux (Load HW & Zero, Update Indexed, PPC32 p452)
@@ -3652,14 +3663,14 @@
}
DIP("lhzux r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x117: // lhzx (Load HW & Zero, Indexed, PPC32 p453)
DIP("lhzx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I16, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden16(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom16(ty, val, False) );
break;
case 0x037: // lwzux (Load W & Zero, Update Indexed, PPC32 p462)
@@ -3669,14 +3680,14 @@
}
DIP("lwzux r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I32, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden32(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, val, False) );
putIReg( rA_addr, mkexpr(EA) );
break;
case 0x017: // lwzx (Load W & Zero, Indexed, PPC32 p463)
DIP("lwzx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
val = loadBE(Ity_I32, mkexpr(EA));
- putIReg( rD_addr, mkSzWiden32(ty, val, False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, val, False) );
break;
@@ -3798,7 +3809,7 @@
switch (opc1) {
case 0x26: // stb (Store B, PPC32 p509)
DIP("stb r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
- storeBE( mkexpr(EA), mkSzNarrow8(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo8(ty, mkexpr(rS)) );
break;
case 0x27: // stbu (Store B, Update, PPC32 p510)
@@ -3808,12 +3819,12 @@
}
DIP("stbu r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow8(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo8(ty, mkexpr(rS)) );
break;
case 0x2C: // sth (Store HW, PPC32 p522)
DIP("sth r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
- storeBE( mkexpr(EA), mkSzNarrow16(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo16(ty, mkexpr(rS)) );
break;
case 0x2D: // sthu (Store HW, Update, PPC32 p524)
@@ -3823,12 +3834,12 @@
}
DIP("sthu r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow16(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo16(ty, mkexpr(rS)) );
break;
case 0x24: // stw (Store W, PPC32 p530)
DIP("stw r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
- storeBE( mkexpr(EA), mkSzNarrow32(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo32(ty, mkexpr(rS)) );
break;
case 0x25: // stwu (Store W, Update, PPC32 p534)
@@ -3838,7 +3849,7 @@
}
DIP("stwu r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow32(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo32(ty, mkexpr(rS)) );
break;
/* X Form : all these use EA_indexed */
@@ -3856,12 +3867,12 @@
}
DIP("stbux r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow8(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo8(ty, mkexpr(rS)) );
break;
case 0x0D7: // stbx (Store B Indexed, PPC32 p512)
DIP("stbx r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- storeBE( mkexpr(EA), mkSzNarrow8(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo8(ty, mkexpr(rS)) );
break;
case 0x1B7: // sthux (Store HW, Update Indexed, PPC32 p525)
@@ -3871,12 +3882,12 @@
}
DIP("sthux r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow16(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo16(ty, mkexpr(rS)) );
break;
case 0x197: // sthx (Store HW Indexed, PPC32 p526)
DIP("sthx r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- storeBE( mkexpr(EA), mkSzNarrow16(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo16(ty, mkexpr(rS)) );
break;
case 0x0B7: // stwux (Store W, Update Indexed, PPC32 p535)
@@ -3886,12 +3897,12 @@
}
DIP("stwux r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
putIReg( rA_addr, mkexpr(EA) );
- storeBE( mkexpr(EA), mkSzNarrow32(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo32(ty, mkexpr(rS)) );
break;
case 0x097: // stwx (Store W Indexed, PPC32 p536)
DIP("stwx r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- storeBE( mkexpr(EA), mkSzNarrow32(ty, mkexpr(rS)) );
+ storeBE( mkexpr(EA), mkNarrowTo32(ty, mkexpr(rS)) );
break;
@@ -3977,8 +3988,8 @@
DIP("lmw r%u,%d(r%u)\n", rD_addr, simm16, rA_addr);
for (r = rD_addr; r <= 31; r++) {
irx_addr = binop(Iop_Add32, mkexpr(EA), mkU32(ea_off));
- putIReg( r, mkSzWiden32(ty, loadBE(Ity_I32, irx_addr ),
- False) );
+ putIReg( r, mkWidenFrom32(ty, loadBE(Ity_I32, irx_addr ),
+ False) );
ea_off += 4;
}
break;
@@ -3987,7 +3998,7 @@
DIP("stmw r%u,%d(r%u)\n", rS_addr, simm16, rA_addr);
for (r = rS_addr; r <= 31; r++) {
irx_addr = binop(Iop_Add32, mkexpr(EA), mkU32(ea_off));
- storeBE( irx_addr, mkSzNarrow32(ty, getIReg(r)) );
+ storeBE( irx_addr, mkNarrowTo32(ty, getIReg(r)) );
ea_off += 4;
}
break;
@@ -4033,11 +4044,11 @@
vassert(shift == 0 || shift == 8 || shift == 16 || shift == 24);
putIReg(
rD,
- mkSzWiden32(
+ mkWidenFrom32(
ty,
binop(
Iop_Or32,
- mkSzNarrow32(ty, getIReg(rD)),
+ mkNarrowTo32(ty, getIReg(rD)),
binop(
Iop_Shl32,
unop(
@@ -4085,7 +4096,7 @@
binop(mkSzOp(ty,Iop_Add8), e_EA, mkSzImm(ty,i)),
unop(Iop_32to8,
binop(Iop_Shr32,
- mkSzNarrow32(ty, getIReg(rS)),
+ mkNarrowTo32(ty, getIReg(rS)),
mkU8(toUChar(shift))))
);
shift -= 8;
@@ -4819,7 +4830,6 @@
IRType ty = mode64 ? Ity_I64 : Ity_I32;
IRTemp EA = newTemp(ty);
- IRTemp rS = newTemp(ty);
assign( EA, ea_rAor0_idxd( rA_addr, rB_addr ) );
@@ -4857,53 +4867,46 @@
hardware, I think as to whether or not contention is
likely. So we can just ignore it. */
DIP("lwarx r%u,r%u,r%u,EH=%u\n", rD_addr, rA_addr, rB_addr, (UInt)b0);
- putIReg( rD_addr, mkSzWiden32(ty, loadBE(Ity_I32, mkexpr(EA)),
- False) );
- /* Take a reservation */
- putGST( PPC_GST_RESVN, mkexpr(EA) );
+
+ // trap if misaligned
+ gen_SIGBUS_if_misaligned( EA, 4 );
+
+ // and actually do the load
+ putIReg( rD_addr, mkWidenFrom32(ty, loadlinkedBE(Ity_I32, mkexpr(EA)),
+ False) );
break;
case 0x096: {
// stwcx. (Store Word Conditional Indexed, PPC32 p532)
- IRTemp resaddr = newTemp(ty);
+ // Note this has to handle stwcx. in both 32- and 64-bit modes,
+ // so isn't quite as straightforward as it might otherwise be.
+ IRTemp rS = newTemp(Ity_I32);
+ IRTemp resSC;
if (b0 != 1) {
vex_printf("dis_memsync(ppc)(stwcx.,b0)\n");
return False;
}
DIP("stwcx. r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- assign( rS, getIReg(rS_addr) );
- /* First set up as if the reservation failed */
- // Set CR0[LT GT EQ S0] = 0b000 || XER[SO]
- putCR321(0, mkU8(0<<1));
- putCR0(0, getXER_SO());
+ // trap if misaligned
+ gen_SIGBUS_if_misaligned( EA, 4 );
- /* Get the reservation address into a temporary, then
- clear it. */
- assign( resaddr, getGST(PPC_GST_RESVN) );
- putGST( PPC_GST_RESVN, mkSzImm(ty, 0) );
+ // Get the data to be stored, and narrow to 32 bits if necessary
+ assign( rS, mkNarrowTo32(ty, getIReg(rS_addr)) );
- /* Skip the rest if the reservation really did fail. */
- stmt( IRStmt_Exit(
- ( mode64 ?
- binop(Iop_CmpNE64, mkexpr(resaddr), mkexpr(EA)) :
- binop(Iop_CmpNE32, mkexpr(resaddr), mkexpr(EA)) ),
- Ijk_Boring,
- mkSzConst( ty, nextInsnAddr()) ));
+ // Do the store, and get success/failure bit into resSC
+ resSC = newTemp(Ity_I1);
+ stmt( IRStmt_Store(Iend_BE, resSC, mkexpr(EA), mkexpr(rS)) );
- /* Note for mode64:
+ // Set CR0[LT GT EQ SO] = 0b000 || XER[SO] on failure
+ // Set CR0[LT GT EQ SO] = 0b001 || XER[SO] on success
+ putCR321(0, binop(Iop_Shl8, unop(Iop_1Uto8, mkexpr(resSC)), mkU8(1)));
+ putCR0(0, getXER_SO());
+
+ /* Note:
If resaddr != lwarx_resaddr, CR0[EQ] is undefined, and
whether rS is stored is dependent on that value. */
-
- /* Success? Do the (32bit) store. Mark the store as
- snooped, so that threading tools can handle it differently
- if necessary. */
- stmt( IRStmt_MBE(Imbe_SnoopedStoreBegin) );
- storeBE( mkexpr(EA), mkSzNarrow32(ty, mkexpr(rS)) );
- stmt( IRStmt_MBE(Imbe_SnoopedStoreEnd) );
-
- // Set CR0[LT GT EQ S0] = 0b001 || XER[SO]
- putCR321(0, mkU8(1<<1));
+ /* So I guess we can just ignore this case? */
break;
}
@@ -4950,41 +4953,48 @@
in the documentation) is merely a hint bit to the
hardware, I think as to whether or not contention is
likely. So we can just ignore it. */
+ if (!mode64)
+ return False;
DIP("ldarx r%u,r%u,r%u,EH=%u\n", rD_addr, rA_addr, rB_addr, (UInt)b0);
- putIReg( rD_addr, loadBE(Ity_I64, mkexpr(EA)) );
- // Take a reservation
- putGST( PPC_GST_RESVN, mkexpr(EA) );
+
+ // trap if misaligned
+ gen_SIGBUS_if_misaligned( EA, 8 );
+
+ // and actually do the load
+ putIReg( rD_addr, loadlinkedBE(Ity_I64, mkexpr(EA)) );
break;
case 0x0D6: { // stdcx. (Store DWord Conditional Indexed, PPC64 p581)
- IRTemp resaddr = newTemp(ty);
+ // A marginally simplified version of the stwcx. case
+ IRTemp rS = newTemp(Ity_I64);
+ IRTemp resSC;
if (b0 != 1) {
vex_printf("dis_memsync(ppc)(stdcx.,b0)\n");
return False;
}
+ if (!mode64)
+ return False;
DIP("stdcx. r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
+
+ // trap if misaligned
+ gen_SIGBUS_if_misaligned( EA, 8 );
+
+ // Get the data to be stored
assign( rS, getIReg(rS_addr) );
- // First set up as if the reservation failed
- // Set CR0[LT GT EQ S0] = 0b000 || XER[SO]
- putCR321(0, mkU8(0<<1));
+ // Do the store, and get success/failure bit into resSC
+ resSC = newTemp(Ity_I1);
+ stmt( IRStmt_Store(Iend_BE, resSC, mkexpr(EA), mkexpr(rS)) );
+
+ // Set CR0[LT GT EQ SO] = 0b000 || XER[SO] on failure
+ // Set CR0[LT GT EQ SO] = 0b001 || XER[SO] on success
+ putCR321(0, binop(Iop_Shl8, unop(Iop_1Uto8, mkexpr(resSC)), mkU8(1)));
putCR0(0, getXER_SO());
-
- // Get the reservation address into a temporary, then clear it.
- assign( resaddr, getGST(PPC_GST_RESVN) );
- putGST( PPC_GST_RESVN, mkSzImm(ty, 0) );
- // Skip the rest if the reservation really did fail.
- stmt( IRStmt_Exit( binop(Iop_CmpNE64, mkexpr(resaddr),
- mkexpr(EA)),
- Ijk_Boring,
- IRConst_U64(nextInsnAddr())) );
-
- // Success? Do the store
- storeBE( mkexpr(EA), mkexpr(rS) );
-
- // Set CR0[LT GT EQ S0] = 0b001 || XER[SO]
- putCR321(0, mkU8(1<<1));
+ /* Note:
+ If resaddr != lwarx_resaddr, CR0[EQ] is undefined, and
+ whether rS is stored is dependent on that value. */
+ /* So I guess we can just ignore this case? */
break;
}
@@ -5029,8 +5039,8 @@
assign( rS, getIReg(rS_addr) );
assign( rB, getIReg(rB_addr) );
- assign( rS_lo32, mkSzNarrow32(ty, mkexpr(rS)) );
- assign( rB_lo32, mkSzNarrow32(ty, mkexpr(rB)) );
+ assign( rS_lo32, mkNarrowTo32(ty, mkexpr(rS)) );
+ assign( rB_lo32, mkNarrowTo32(ty, mkexpr(rB)) );
if (opc1 == 0x1F) {
switch (opc2) {
@@ -5054,7 +5064,7 @@
binop( Iop_Sar32,
binop(Iop_Shl32, mkexpr(rB_lo32), mkU8(26)),
mkU8(31))) );
- assign( rA, mkSzWiden32(ty, e_tmp, /* Signed */False) );
+ assign( rA, mkWidenFrom32(ty, e_tmp, /* Signed */False) );
break;
}
@@ -5079,13 +5089,13 @@
IRExpr_Mux0X( mkexpr(outofrange),
mkexpr(sh_amt),
mkU32(31)) ) );
- assign( rA, mkSzWiden32(ty, e_tmp, /* Signed */True) );
+ assign( rA, mkWidenFrom32(ty, e_tmp, /* Signed */True) );
set_XER_CA( ty, PPCG_FLAG_OP_SRAW,
mkexpr(rA),
- mkSzWiden32(ty, mkexpr(rS_lo32), True),
- mkSzWiden32(ty, mkexpr(sh_amt), True ),
- mkSzWiden32(ty, getXER_CA32(), True) );
+ mkWidenFrom32(ty, mkexpr(rS_lo32), True),
+ mkWidenFrom32(ty, mkexpr(sh_amt), True ),
+ mkWidenFrom32(ty, getXER_CA32(), True) );
break;
}
@@ -5105,9 +5115,9 @@
set_XER_CA( ty, PPCG_FLAG_OP_SRAWI,
mkexpr(rA),
- mkSzWiden32(ty, mkexpr(rS_lo32), /* Syned */True),
+ mkWidenFrom32(ty, mkexpr(rS_lo32), /* Syned */True),
mkSzImm(ty, sh_imm),
- mkSzWiden32(ty, getXER_CA32(), /* Syned */False) );
+ mkWidenFrom32(ty, getXER_CA32(), /* Syned */False) );
break;
case 0x218: // srw (Shift Right Word, PPC32 p508)
@@ -5132,7 +5142,7 @@
binop(Iop_Shl32, mkexpr(rB_lo32),
mkU8(26)),
mkU8(31))));
- assign( rA, mkSzWiden32(ty, e_tmp, /* Signed */False) );
+ assign( rA, mkWidenFrom32(ty, e_tmp, /* Signed */False) );
break;
@@ -5182,7 +5192,7 @@
);
set_XER_CA( ty, PPCG_FLAG_OP_SRAD,
mkexpr(rA), mkexpr(rS), mkexpr(sh_amt),
- mkSzWiden32(ty, getXER_CA32(), /* Syned */False) );
+ mkWidenFrom32(ty, getXER_CA32(), /* Syned */False) );
break;
}
@@ -5197,7 +5207,7 @@
mkexpr(rA),
getIReg(rS_addr),
mkU64(sh_imm),
- mkSzWiden32(ty, getXER_CA32(), /* Syned */False) );
+ mkWidenFrom32(ty, getXER_CA32(), /* Syned */False) );
break;
case 0x21B: // srd (Shift Right DWord, PPC64 p574)
@@ -5305,27 +5315,27 @@
DIP("lhbrx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
assign( w1, unop(Iop_16Uto32, loadBE(Ity_I16, mkexpr(EA))) );
assign( w2, gen_byterev16(w1) );
- putIReg( rD_addr, mkSzWiden32(ty, mkexpr(w2),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, mkexpr(w2),
+ /* Signed */False) );
break;
case 0x216: // lwbrx (Load Word Byte-Reverse Indexed, PPC32 p459)
DIP("lwbrx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
assign( w1, loadBE(Ity_I32, mkexpr(EA)) );
assign( w2, gen_byterev32(w1) );
- putIReg( rD_addr, mkSzWiden32(ty, mkexpr(w2),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, mkexpr(w2),
+ /* Signed */False) );
break;
case 0x396: // sthbrx (Store Half Word Byte-Reverse Indexed, PPC32 p523)
DIP("sthbrx r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- assign( w1, mkSzNarrow32(ty, getIReg(rS_addr)) );
+ assign( w1, mkNarrowTo32(ty, getIReg(rS_addr)) );
storeBE( mkexpr(EA), unop(Iop_32to16, gen_byterev16(w1)) );
break;
case 0x296: // stwbrx (Store Word Byte-Reverse Indexed, PPC32 p531)
DIP("stwbrx r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
- assign( w1, mkSzNarrow32(ty, getIReg(rS_addr)) );
+ assign( w1, mkNarrowTo32(ty, getIReg(rS_addr)) );
storeBE( mkexpr(EA), gen_byterev32(w1) );
break;
@@ -5403,14 +5413,14 @@
// implementation of mfocr (from the 2.02 arch spec)
if (b11to20 == 0) {
DIP("mfcr r%u\n", rD_addr);
- putIReg( rD_addr, mkSzWiden32(ty, getGST( PPC_GST_CR ),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, getGST( PPC_GST_CR ),
+ /* Signed */False) );
break;
}
if (b20 == 1 && b11 == 0) {
DIP("mfocrf r%u,%u\n", rD_addr, CRM);
- putIReg( rD_addr, mkSzWiden32(ty, getGST( PPC_GST_CR ),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, getGST( PPC_GST_CR ),
+ /* Signed */False) );
break;
}
/* not decodable */
@@ -5422,8 +5432,8 @@
switch (SPR) { // Choose a register...
case 0x1:
DIP("mfxer r%u\n", rD_addr);
- putIReg( rD_addr, mkSzWiden32(ty, getGST( PPC_GST_XER ),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, getGST( PPC_GST_XER ),
+ /* Signed */False) );
break;
case 0x8:
DIP("mflr r%u\n", rD_addr);
@@ -5435,8 +5445,8 @@
break;
case 0x100:
DIP("mfvrsave r%u\n", rD_addr);
- putIReg( rD_addr, mkSzWiden32(ty, getGST( PPC_GST_VRSAVE ),
- /* Signed */False) );
+ putIReg( rD_addr, mkWidenFrom32(ty, getGST( PPC_GST_VRSAVE ),
+ /* Signed */False) );
break;
case 0x103:
@@ -5488,8 +5498,8 @@
case 269:
DIP("mftbu r%u", rD_addr);
putIReg( rD_addr,
- mkSzWiden32(ty, unop(Iop_64HIto32, mkexpr(val)),
- /* Signed */False) );
+ mkWidenFrom32(ty, unop(Iop_64HIto32, mkexpr(val)),
+ /* Signed */False) );
break;
case 268:
DIP("mftb r%u", rD_addr);
@@ -5530,7 +5540,7 @@
shft = 4*(7-cr);
putGST_field( PPC_GST_CR,
binop(Iop_Shr32,
- mkSzNarrow32(ty, mkexpr(rS)),
+ mkNarrowTo32(ty, mkexpr(rS)),
mkU8(shft)), cr );
}
break;
@@ -5541,7 +5551,7 @@
switch (SPR) { // Choose a register...
case 0x1:
DIP("mtxer r%u\n", rS_addr);
- putGST( PPC_GST_XER, mkSzNarrow32(ty, mkexpr(rS)) );
+ putGST( PPC_GST_XER, mkNarrowTo32(ty, mkexpr(rS)) );
break;
case 0x8:
DIP("mtlr r%u\n", rS_addr);
@@ -5553,7 +5563,7 @@
break;
case 0x100:
DIP("mtvrsave r%u\n", rS_addr);
- putGST( PPC_GST_VRSAVE, mkSzNarrow32(ty, mkexpr(rS)) );
+ putGST( PPC_GST_VRSAVE, mkNarrowTo32(ty, mkexpr(rS)) );
break;
default:
@@ -6908,7 +6918,7 @@
UInt vD_off = vectorGuestRegOffset(vD_addr);
IRExpr** args = mkIRExprVec_3(
mkU32(vD_off),
- binop(Iop_And32, mkSzNarrow32(ty, mkexpr(EA)),
+ binop(Iop_And32, mkNarrowTo32(ty, mkexpr(EA)),
mkU32(0xF)),
mkU32(0)/*left*/ );
if (!mode64) {
@@ -6941,7 +6951,7 @@
UInt vD_off = vectorGuestRegOffset(vD_addr);
IRExpr** args = mkIRExprVec_3(
mkU32(vD_off),
- binop(Iop_And32, mkSzNarrow32(ty, mkexpr(EA)),
+ binop(Iop_And32, mkNarrowTo32(ty, mkexpr(EA)),
mkU32(0xF)),
mkU32(1)/*right*/ );
if (!mode64) {
@@ -7040,7 +7050,7 @@
DIP("stvebx v%d,r%u,r%u\n", vS_addr, rA_addr, rB_addr);
assign( eb, binop(Iop_And8, mkU8(0xF),
unop(Iop_32to8,
- mkSzNarrow32(ty, mkexpr(EA)) )) );
+ mkNarrowTo32(ty, mkexpr(EA)) )) );
assign( idx, binop(Iop_Shl8,
binop(Iop_Sub8, mkU8(15), mkexpr(eb)),
mkU8(3)) );
@@ -7053,7 +7063,7 @@
DIP("stvehx v%d,r%u,r%u\n", vS_addr, rA_addr, rB_addr);
assign( addr_aligned, addr_align(mkexpr(EA), 2) );
assign( eb, binop(Iop_And8, mkU8(0xF),
- mkSzNarrow8(ty, mkexpr(addr_aligned) )) );
+ mkNarrowTo8(ty, mkexpr(addr_aligned) )) );
assign( idx, binop(Iop_Shl8,
binop(Iop_Sub8, mkU8(14), mkexpr(eb)),
mkU8(3)) );
@@ -7066,7 +7076,7 @@
DIP("stvewx v%d,r%u,r%u\n", vS_addr, rA_addr, rB_addr);
assign( addr_aligned, addr_align(mkexpr(EA), 4) );
assign( eb, binop(Iop_And8, mkU8(0xF),
- mkSzNarrow8(ty, mkexpr(addr_aligned) )) );
+ mkNarrowTo8(ty, mkexpr(addr_aligned) )) );
assign( idx, binop(Iop_Shl8,
binop(Iop_Sub8, mkU8(12), mkexpr(eb)),
mkU8(3)) );
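[The reworked stwcx./stdcx. translations above fold the store-conditional success bit directly into CR0 instead of modelling a reservation address in guest state: CR0 becomes LT=0, GT=0, EQ=success, SO=XER[SO]. A toy C encoding of that 4-bit field — hypothetical helper, not VEX code:]

```c
#include <assert.h>
#include <stdint.h>

/* CR0 after stwcx./stdcx.: bits are [LT GT EQ SO], LT=GT=0,
   EQ = store-conditional success, SO copied from XER[SO].
   Mirrors putCR321(0, resSC << 1) followed by putCR0(0, XER.SO). */
static uint8_t cr0_after_store_conditional(int success, int xer_so)
{
    uint8_t cr321 = (uint8_t)((success & 1) << 1);  /* EQ in bit 1 */
    uint8_t cr0   = (uint8_t)(xer_so & 1);          /* SO in bit 0 */
    return (uint8_t)(cr321 | cr0);
}
```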
Modified: branches/DCAS/priv/guest-x86/toIR.c
===================================================================
--- branches/DCAS/priv/guest-x86/toIR.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/guest-x86/toIR.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -641,7 +641,7 @@
static void storeLE ( IRExpr* addr, IRExpr* data )
{
- stmt( IRStmt_Store(Iend_LE,addr,data) );
+ stmt( IRStmt_Store(Iend_LE, IRTemp_INVALID, addr, data) );
}
static IRExpr* unop ( IROp op, IRExpr* a )
@@ -703,7 +703,7 @@
static IRExpr* loadLE ( IRType ty, IRExpr* data )
{
- return IRExpr_Load(Iend_LE,ty,data);
+ return IRExpr_Load(False, Iend_LE, ty, data);
}
static IROp mkSizedOp ( IRType ty, IROp op8 )
@@ -7827,9 +7827,6 @@
/* Gets set to True if a LOCK prefix is seen. */
Bool pfx_lock = False;
- /* do we need follow the insn with MBusEvent(BusUnlock) ? */
- Bool unlock_bus_after_insn = False;
-
/* Set result defaults. */
dres.whatNext = Dis_Continue;
dres.len = 0;
@@ -7983,8 +7980,6 @@
if (pfx_lock) {
if (can_be_used_with_LOCK_prefix( (UChar*)&guest_code[delta] )) {
- stmt( IRStmt_MBE(Imbe_BusLock) );
- unlock_bus_after_insn = True;
DIP("lock ");
} else {
*expect_CAS = False;
@@ -13791,18 +13786,6 @@
nameIReg(sz,eregOfRM(modrm)));
} else {
*expect_CAS = True;
- /* Need to add IRStmt_MBE(Imbe_BusLock). */
- if (pfx_lock) {
- /* check it's already been taken care of */
- vassert(unlock_bus_after_insn);
- } else {
- vassert(!unlock_bus_after_insn);
- stmt( IRStmt_MBE(Imbe_BusLock) );
- unlock_bus_after_insn = True;
- }
- /* Because unlock_bus_after_insn is now True, generic logic
- at the bottom of disInstr will add the
- IRStmt_MBE(Imbe_BusUnlock). */
addr = disAMode ( &alen, sorb, delta, dis_buf );
assign( t1, loadLE(ty,mkexpr(addr)) );
assign( t2, getIReg(sz,gregOfRM(modrm)) );
@@ -14726,8 +14709,6 @@
insn, but nevertheless be paranoid and update it again right
now. */
stmt( IRStmt_Put( OFFB_EIP, mkU32(guest_EIP_curr_instr) ) );
- if (unlock_bus_after_insn)
- stmt( IRStmt_MBE(Imbe_BusUnlock) );
jmp_lit(Ijk_NoDecode, guest_EIP_curr_instr);
dres.whatNext = Dis_StopHere;
dres.len = 0;
@@ -14744,8 +14725,6 @@
decode_success:
/* All decode successes end up here. */
DIP("\n");
- if (unlock_bus_after_insn)
- stmt( IRStmt_MBE(Imbe_BusUnlock) );
dres.len = delta - delta_start;
return dres;
}
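[The hunks above show the IR interface change this branch introduces: `IRExpr_Load` gains an `isLL` (load-linked) flag and `IRStmt_Store` gains a `resSC` result temp for store-conditional. A toy single-threaded C model of the reservation semantics those fields express — names are illustrative, and a real multiprocessor can lose the reservation for many more reasons:]

```c
#include <assert.h>
#include <stdint.h>

static uint64_t resvn_addr = 0;   /* current reservation; 0 = none */

/* Load-linked: read the word and take a reservation on its address. */
static uint32_t load_linked32(uint32_t *p)
{
    resvn_addr = (uint64_t)(uintptr_t)p;
    return *p;
}

/* Store-conditional: store and return 1 only if the reservation still
   covers p; either way the reservation is consumed. */
static int store_conditional32(uint32_t *p, uint32_t v)
{
    if (resvn_addr != (uint64_t)(uintptr_t)p)
        return 0;
    resvn_addr = 0;
    *p = v;
    return 1;
}
```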
Modified: branches/DCAS/priv/host-amd64/isel.c
===================================================================
--- branches/DCAS/priv/host-amd64/isel.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/host-amd64/isel.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -857,8 +857,11 @@
HReg dst = newVRegI(env);
AMD64AMode* amode = iselIntExpr_AMode ( env, e->Iex.Load.addr );
+ /* We can't handle big-endian loads, nor load-linked. */
if (e->Iex.Load.end != Iend_LE)
goto irreducible;
+ if (e->Iex.Load.isLL)
+ goto irreducible;
if (ty == Ity_I64) {
addInstr(env, AMD64Instr_Alu64R(Aalu_MOV,
@@ -1959,7 +1962,8 @@
}
/* special case: 64-bit load from memory */
- if (e->tag == Iex_Load && ty == Ity_I64 && e->Iex.Load.end == Iend_LE) {
+ if (e->tag == Iex_Load && ty == Ity_I64
+ && e->Iex.Load.end == Iend_LE && !e->Iex.Load.isLL) {
AMD64AMode* am = iselIntExpr_AMode(env, e->Iex.Load.addr);
return AMD64RMI_Mem(am);
}
@@ -2738,7 +2742,7 @@
return lookupIRTemp(env, e->Iex.RdTmp.tmp);
}
- if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE) {
+ if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE && !e->Iex.Load.isLL) {
AMD64AMode* am;
HReg res = newVRegV(env);
vassert(e->Iex.Load.ty == Ity_F32);
@@ -2862,7 +2866,7 @@
return res;
}
- if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE) {
+ if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE && !e->Iex.Load.isLL) {
AMD64AMode* am;
HReg res = newVRegV(env);
vassert(e->Iex.Load.ty == Ity_F64);
@@ -3167,7 +3171,7 @@
return dst;
}
- if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE) {
+ if (e->tag == Iex_Load && e->Iex.Load.end == Iend_LE && !e->Iex.Load.isLL) {
HReg dst = newVRegV(env);
AMD64AMode* am = iselIntExpr_AMode(env, e->Iex.Load.addr);
addInstr(env, AMD64Instr_SseLdSt( True/*load*/, 16, dst, am ));
@@ -3589,11 +3593,12 @@
/* --------- STORE --------- */
case Ist_Store: {
- IRType tya = typeOfIRExpr(env->type_env, stmt->Ist.Store.addr);
- IRType tyd = typeOfIRExpr(env->type_env, stmt->Ist.Store.data);
- IREndness end = stmt->Ist.Store.end;
+ IRType tya = typeOfIRExpr(env->type_env, stmt->Ist.Store.addr);
+ IRType tyd = typeOfIRExpr(env->type_env, stmt->Ist.Store.data);
+ IREndness end = stmt->Ist.Store.end;
+ IRTemp resSC = stmt->Ist.Store.resSC;
- if (tya != Ity_I64 || end != Iend_LE)
+ if (tya != Ity_I64 || end != Iend_LE || resSC != IRTemp_INVALID)
goto stmt_fail;
if (tyd == Ity_I64) {
@@ -3813,9 +3818,6 @@
case Imbe_Fence:
addInstr(env, AMD64Instr_MFence());
return;
- case Imbe_BusLock:
- case Imbe_BusUnlock:
- return;
default:
break;
}
Modified: branches/DCAS/priv/host-arm/isel.c
===================================================================
--- branches/DCAS/priv/host-arm/isel.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/host-arm/isel.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -757,8 +757,9 @@
IRType tya = typeOfIRExpr(env->type_env, stmt->Ist.Store.addr);
IRType tyd = typeOfIRExpr(env->type_env, stmt->Ist.Store.data);
IREndness end = stmt->Ist.Store.end;
+ IRTemp resSC = stmt->Ist.Store.resSC;
- if (tya != Ity_I32 || end != Iend_LE)
+ if (tya != Ity_I32 || end != Iend_LE || resSC != IRTemp_INVALID)
goto stmt_fail;
reg = iselIntExpr_R(env, stmt->Ist.Store.data);
Modified: branches/DCAS/priv/host-ppc/hdefs.c
===================================================================
--- branches/DCAS/priv/host-ppc/hdefs.c 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/host-ppc/hdefs.c 2009-06-02 08:18:56 UTC (rev 1898)
@@ -844,7 +844,7 @@
}
PPCInstr* PPCInstr_CMov ( PPCCondCode cond,
HReg dst, PPCRI* src ) {
- PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
+ PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
i->tag = Pin_CMov;
i->Pin.CMov.cond = cond;
i->Pin.CMov.src = src;
@@ -863,6 +863,18 @@
if (sz == 8) vassert(mode64);
return i;
}
+PPCInstr* PPCInstr_LoadL ( UChar sz,
+ HReg dst, HReg src, Bool mode64 )
+{
+ PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
+ i->tag = Pin_LoadL;
+ i->Pin.LoadL.sz = sz;
+ i->Pin.LoadL.src = src;
+ i->Pin.LoadL.dst = dst;
+ vassert(sz == 4 || sz == 8);
+ if (sz == 8) vassert(mode64);
+ return i;
+}
PPCInstr* PPCInstr_Store ( UChar sz, PPCAMode* dst, HReg src,
Bool mode64 ) {
PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
@@ -874,6 +886,16 @@
if (sz == 8) vassert(mode64);
return i;
}
+PPCInstr* PPCInstr_StoreC ( UChar sz, HReg dst, HReg src, Bool mode64 ) {
+ PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
+ i->tag = Pin_StoreC;
+ i->Pin.StoreC.sz = sz;
+ i->Pin.StoreC.src = src;
+ i->Pin.StoreC.dst = dst;
+ vassert(sz == 4 || sz == 8);
+ if (sz == 8) vassert(mode64);
+ return i;
+}
PPCInstr* PPCInstr_Set ( PPCCondCode cond, HReg dst ) {
PPCInstr* i = LibVEX_Alloc(sizeof(PPCInstr));
i->tag = Pin_Set;
@@ -1311,6 +1333,12 @@
ppPPCAMode(i->Pin.Load.src);
return;
}
+ case Pin_LoadL:
+ vex_printf("l%carx ", i->Pin.LoadL.sz==4 ? 'w' : 'd');
+ ppHRegPPC(i->Pin.LoadL.dst);
+ vex_printf(",%%r0,");
+ ppHRegPPC(i->Pin.LoadL.src);
+ return;
case Pin_Store: {
UChar sz = i->Pin.Store.sz;
Bool idxd = toBool(i->Pin.Store.dst->tag == Pam_RR);
@@ -1321,6 +1349,12 @@
ppPPCAMode(i->Pin.Store.dst);
return;
}
+ case Pin_StoreC:
+ vex_printf("st%ccx. ", i->Pin.StoreC.sz==4 ? 'w' : 'd');
+ ppHRegPPC(i->Pin.StoreC.src);
+ vex_printf(",%%r0,");
+ ppHRegPPC(i->Pin.StoreC.dst);
+ return;
case Pin_Set: {
PPCCondCode cc = i->Pin.Set.cond;
vex_printf("set (%s),", showPPCCondCode(cc));
@@ -1702,7 +1736,7 @@
/* Finally, there is the issue that the insn trashes a
register because the literal target address has to be
loaded into a register. %r10 seems a suitable victim.
- (Can't use %r0, as use ops that interpret it as value zero). */
+ (Can't use %r0, as some insns interpret it as value zero). */
addHRegUse(u, HRmWrite, hregPPC_GPR10(mode64));
/* Upshot of this is that the assembler really must use %r10,
and no other, as a destination temporary. */
@@ -1728,10 +1762,18 @@
addRegUsage_PPCAMode(u, i->Pin.Load.src);
addHRegUse(u, HRmWrite, i->Pin.Load.dst);
return;
+ case Pin_LoadL:
+ addHRegUse(u, HRmRead, i->Pin.LoadL.src);
+ addHRegUse(u, HRmWrite, i->Pin.LoadL.dst);
+ return;
case Pin_Store:
addHRegUse(u, HRmRead, i->Pin.Store.src);
addRegUsage_PPCAMode(u, i->Pin.Store.dst);
return;
+ case Pin_StoreC:
+ addHRegUse(u, HRmRead, i->Pin.StoreC.src);
+ addHRegUse(u, HRmRead, i->Pin.StoreC.dst);
+ return;
case Pin_Set:
addHRegUse(u, HRmWrite, i->Pin.Set.dst);
return;
@@ -1934,10 +1976,18 @@
mapRegs_PPCAMode(m, i->Pin.Load.src);
mapReg(m, &i->Pin.Load.dst);
return;
+ case Pin_LoadL:
+ mapReg(m, &i->Pin.LoadL.src);
+ mapReg(m, &i->Pin.LoadL.dst);
+ return;
case Pin_Store:
mapReg(m, &i->Pin.Store.src);
mapRegs_PPCAMode(m, i->Pin.Store.dst);
return;
+ case Pin_StoreC:
+ mapReg(m, &i->Pin.StoreC.src);
+ mapReg(m, &i->Pin.StoreC.dst);
+ return;
case Pin_Set:
mapReg(m, &i->Pin.Set.dst);
return;
@@ -2954,6 +3004,7 @@
case Ijk_TInval: trc = VEX_TRC_JMP_TINVAL; break;
case Ijk_NoRedir: trc = VEX_TRC_JMP_NOREDIR; break;
case Ijk_SigTRAP: trc = VEX_TRC_JMP_SIGTRAP; break;
+ case Ijk_SigBUS: trc = VEX_TRC_JMP_SIGBUS; break;
case Ijk_Ret:
case Ijk_Call:
case Ijk_Boring:
@@ -3067,6 +3118,20 @@
}
}
+ case Pin_LoadL: {
+ if (i->Pin.LoadL.sz == 4) {
+ p = mkFormX(p, 31, iregNo(i->Pin.LoadL.dst, mode64),
+ 0, iregNo(i->Pin.LoadL.src, mode64), 20, 0);
+ goto done;
+ }
+ if (i->Pin.LoadL.sz == 8 && mode64) {
+ p = mkFormX(p, 31, iregNo(i->Pin.LoadL.dst, mode64),
+ 0, iregNo(i->Pin.LoadL.src, mode64), 84, 0);
+ goto done;
+ }
+ goto bad;
+ }
+
case Pin_Set: {
/* Make the destination register be 1 or 0, depending on whether
the relevant condition holds. */
@@ -3103,8 +3168,8 @@
case Pin_MFence: {
p = mkFormX(p, 31, 0, 0, 0, 598, 0); // sync, PPC32 p616
-// CAB: Should this be isync?
-// p = mkFormXL(p, 19, 0, 0, 0, 150, 0); // isync, PPC32 p467
+ // CAB: Should this be isync?
+ // p = mkFormXL(p, 19, 0, 0, 0, 150, 0); // isync, PPC32 p467
goto done;
}
@@ -3147,6 +3212,20 @@
goto done;
}
+ case Pin_StoreC: {
+ if (i->Pin.StoreC.sz == 4) {
+ p = mkFormX(p, 31, iregNo(i->Pin.StoreC.src, mode64),
+ 0, iregNo(i->Pin.StoreC.dst, mode64), 150, 1);
+ goto done;
+ }
+ if (i->Pin.StoreC.sz == 8 && mode64) {
+ p = mkFormX(p, 31, iregNo(i->Pin.StoreC.src, mode64),
+ 0, iregNo(i->Pin.StoreC.dst, mode64), 214, 1);
+ goto done;
+ }
+ goto bad;
+ }
+
case Pin_FpUnary: {
UInt fr_dst = fregNo(i->Pin.FpUnary.dst);
UInt fr_src = fregNo(i->Pin.FpUnary.src);
Modified: branches/DCAS/priv/host-ppc/hdefs.h
===================================================================
--- branches/DCAS/priv/host-ppc/hdefs.h 2009-05-21 21:55:50 UTC (rev 1897)
+++ branches/DCAS/priv/host-ppc/hdefs.h 2009-06-02 08:18:56 UTC (rev 1898)
@@ -459,7 +459,9 @@
Pin_Goto, /* conditional/unconditional jmp to dst */
Pin_CMov, /* conditional move */
Pin_Load, /* zero-extending load a 8|16|32|64 bit value from mem */
+ Pin_LoadL, /* load-linked (lwarx/ldarx) 32|64 bit value from mem */
Pin_Store, /* store a 8|16|32|64 bit value to mem */
+ Pin_StoreC, /* store-conditional (stwcx./stdcx.) 32|64 bit val */
Pin_Set, /* convert condition code to value 0 or 1 */
Pin_MfCR, /* move from condition register to GPR */
Pin_MFence, /* mem fence */
@@ -604,12 +606,24 @@
HReg dst;
PPCAMode* src;
} Load;
+ /* Load-and-reserve (lwarx, ldarx) */
+ struct {
+ UChar sz; /* 4|8 */
+ HReg dst;
+ HReg src;
+ } LoadL;
/* 64/32/16/8 bit stores */
struct {
UChar sz; /* 1|2|4|8 */
PPCAMode* dst;
HReg src;
} Store;
+ /* Store-conditional (stwcx., stdcx.) */
+ struct {
+ UChar sz; /* 4|8 */
+ HReg dst;
+ HReg src;
+ } StoreC;
/* Convert a ppc condition code to value 0 or 1. */
struct {
PPCCondCode cond;
@@ -791,8 +805,12 @@
extern PPCInstr* PPCInstr_CMov ( PPCCondCode, HReg dst, PPCRI* src );
extern PPCInstr* PPCInstr_Load ( UChar sz,
HReg dst, PPCAMode* src, Bool mode64 );
+extern PPCInstr* PPCInstr_LoadL ( UChar sz,
+ HReg dst, HReg src, Bool mode64 );
extern PPCInstr* PPCInstr_Store ( UChar sz, PPCAMode* dst...
[truncated message content]
From: Konstantin S. <kon...@gm...> - 2009-06-02 08:05:30
On Tue, Jun 2, 2009 at 11:54 AM, Bart Van Assche <bar...@gm...> wrote:
> On Tue, Jun 2, 2009 at 9:36 AM, Konstantin Serebryany
> <kon...@gm...> wrote:
>> On Mon, Jun 1, 2009 at 8:19 PM, Bart Van Assche
>>> One additional remark regarding the ANNOTATE_HAPPENS_* macro's: a data
>>> race detection tool has to allocate some memory to keep track of the
>>> inter-thread ordering imposed by these annotations.
>>
>> True.
>>
>>> Since ANNOTATE_HAPPENS_AFTER may be invoked multiple times with the
>>> same argument, a tool cannot know when it can free the memory
>>> allocated to implement such an annotation. I have added the
>>> ANNOTATE_HAPPENS_AFTER_DONE() annotation in drd.h for this purpose.
>>
>> I would avoid doing this.
>> Once the program frees the memory which was used as an argument to
>> ANNOTATE_HAPPENS_*, the detector may release the resources.
>> (ThreadSanitizer doesn't do this now. I haven't seen this as a
>> problem, but maybe it is...)
>
> I agree that the tool should free the allocated resources at the time
> the client object is freed. But what should a tool do when the
> argument passed to ANNOTATE_HAPPENS_* is not a valid client address,
> such as in unit tests 30 and 31?

Hm. Maybe just leak the resources? I hate to have annotations that do
not represent any synchronization and are needed just for bookkeeping.

In ThreadSanitizer I have to clean up the whole state from time to time
(usually, once per 10-20 minutes). This is done for other reasons, but
the leaked objects are freed as well.

--kcc

> Bart.
From: Bart V. A. <bar...@gm...> - 2009-06-02 07:54:21
On Tue, Jun 2, 2009 at 9:36 AM, Konstantin Serebryany
<kon...@gm...> wrote:
> On Mon, Jun 1, 2009 at 8:19 PM, Bart Van Assche
>> One additional remark regarding the ANNOTATE_HAPPENS_* macro's: a data
>> race detection tool has to allocate some memory to keep track of the
>> inter-thread ordering imposed by these annotations.
>
> True.
>
>> Since ANNOTATE_HAPPENS_AFTER may be invoked multiple times with the
>> same argument, a tool cannot know when it can free the memory
>> allocated to implement such an annotation. I have added the
>> ANNOTATE_HAPPENS_AFTER_DONE() annotation in drd.h for this purpose.
>
> I would avoid doing this.
> Once the program frees the memory which was used as an argument to
> ANNOTATE_HAPPENS_*, the detector may release the resources.
> (ThreadSanitizer doesn't do this now. I haven't seen this as a
> problem, but maybe it is...)

I agree that the tool should free the allocated resources at the time
the client object is freed. But what should a tool do when the argument
passed to ANNOTATE_HAPPENS_* is not a valid client address, such as in
unit tests 30 and 31?

Bart.
From: Konstantin S. <kon...@gm...> - 2009-06-02 07:50:06
On Mon, Jun 1, 2009 at 9:54 PM, Bart Van Assche
<bar...@gm...> wrote:
> On Mon, Jun 1, 2009 at 10:24 AM, Konstantin Serebryany
> <kon...@gm...> wrote:
>> On Sat, May 30, 2009 at 3:35 PM, Bart Van Assche
>> <bar...@gm...> wrote:
>>> A few remarks about the semantics of the ANNOTATE_* macro's:
>>> * I do not really like ANNOTATE_PUBLISH_MEMORY_RANGE. The comment
>>> above this macro says more or less that any other thread may access
>>> the published memory range safely after it has been published.
>>> However, no matter which synchronization instructions have been issued
>>> by the publishing thread, a consumer thread may only access the
>>> published memory safely after proper synchronization with the
>>> publishing thread. So my proposal is to remove this annotation and to
>>> use ANNOTATE_MUTEX_IS_USED_AS_CONDVAR instead.
>>
>> ANNOTATE_MUTEX_IS_USED_AS_CONDVAR is a big hammer as it essentially
>> makes the detection to be pure h-b.
>> PUBLISH_MEMORY_RANGE() is needed in hybrid mode.
>>
>> I am not a great expert in lock-less synchronization but I believe
>> that an object could be published safely in a way that does not
>> require any action by a consumer.
>> You can publish an object with just one CAS (at least on x86?). No?
>> So, you can use this annotation in a situation where you don't have
>> locks at all.
>
> The annotations should be general enough such that these are useful
> for any modern memory architecture. It's not entirely clear to me what
> the intended semantics of PUBLISH_MEMORY_RANGE() is. How does it e.g.
> map on the memory barrier instructions as defined by the Alpha
> architecture or the acquire/release labels as defined by the Itanium
> architecture ? On these two architectures making sure that all the
> store operations performed on one CPU are visible on another CPU
> requires the following:
> * First CPU modifies an object as necessary.
> * First CPU issues a memory barrier and sets a flag (Alpha) or updates
> a flag via a store operation that has release semantics.
> * Second CPU observes that the flag has been set and issues a memory
> barrier (Alpha) or observes that the flag has been modified through a
> load with acquire semantics (Itanium).
> * Second CPU loads object data.
> The update of the flag is necessary to make sure that all store
> operations performed by the first CPU will be observed by the second
> CPU: many memory consistency models allow stores to be reordered if
> not explicitly prevented. My point is that on a multiprocessor with
> sufficiently weak ordering guarantees, you can't just publish memory
> modifications. Cooperation of the consumer is needed to make sure that
> the intended semantics are realized.
Let's not argue about lock-less synchronization for now.
Just think about hybrid detectors:
Object *o = NULL;

void Thread1() {
  Object *t = new Object;
  ScopedMutexLock lock(&mu);
  o = t;
  ANNOTATE_PUBLISH_MEMORY_RANGE(o, sizeof(*o));
}

void Thread2() {
  ScopedMutexLock lock(&mu);
  if (o) o->UseMe();
}

...
// we have 999 different places where 'o' is used.
void Thread999() {
  ScopedMutexLock lock(&mu);
  if (o) o->UseMeInSomeOtherWay();
}
Here, w/o the annotations, a hybrid detector will report a false
positive because the object was constructed outside the mutex.
The same thing could be done with ANNOTATE_HAPPENS_*, but it will
require 1000 annotations instead of just one.
>
> Looking at unit test 92 I get the impression that the semantics of
> PUBLISH_MEMORY_RANGE() is similar to that of the happens before /
> happens after annotations but only for an address range instead of all
> memory locations ?
>
PUBLISH_MEMORY_RANGE creates the same h-b edge as any other h-b
annotation or as e.g. sem_post/sem_wait.
The difference is that there is just one annotation.
The h-b edge is created between the call to PUBLISH_MEMORY_RANGE(mem,
size) and subsequent accesses to memory within the range [mem,
mem+size).
Once the memory [mem, mem+size) is freed, this stops.
--kcc
From: Konstantin S. <kon...@gm...> - 2009-06-02 07:37:24
On Mon, Jun 1, 2009 at 8:19 PM, Bart Van Assche <bar...@gm...> wrote:
> On Mon, Jun 1, 2009 at 10:28 AM, Konstantin Serebryany
> <kon...@gm...> wrote:
>> On Sun, May 31, 2009 at 11:14 PM, Bart Van Assche
>> <bar...@gm...> wrote:
>>> On Sat, May 30, 2009 at 9:12 PM, Bart Van Assche
>>> <bar...@gm...> wrote:
>>>> On Fri, May 29, 2009 at 12:58 PM, Konstantin Serebryany
>>>> <kon...@gm...> wrote:
>>>>> Do you plan to support annotations (aka client requests) in Helgrind
>>>>> and DRD in a compatible way (and possibly, in a way compatible with
>>>>> ThreadSanitizer)? Something like
>>>>> http://code.google.com/p/google-perftools/source/browse/trunk/src/base/dynamic_annotations.h,
>>>>> or completely different.
>>>>> Our experience shows that even a pure-happens-before race detector is
>>>>> completely useless w/o annotations if your code has lock-less
>>>>> synchronization and hundreds of benign races.
>>>>
>>>> Another remark: I suggest to remove the macro's
>>>> ANNOTATE_CONDVAR_WAIT() and ANNOTATE_CONDVAR_SIGNAL() but to keep
>>>> their aliases ANNOTATE_HAPPENS_BEFORE() and ANNOTATE_HAPPENS_AFTER().
>>>> The names of the first two macro's are really confusing: these two
>>>> macro's are a.o. used to annotate ordering constraints between
>>>> mutexes(!) in racecheck_unittest.cc.
>>>
>>> Please ignore the above comment -- I was misled by the statement
>>> ANNOTATE_CONDVAR_SIGNAL(&mu) in racecheck_unittest.cc. By this time I
>>> figured out that &mu is not the address of a mutex but the address of
>>> a condition variable.
>>
>> The parameter of ANNOTATE_CONDVAR_ is any pointer.
>> Since I introduced the HAPPENS_BEFORE/AFTER aliases, CONDVAR
>> annotations are supposed to be used only with cond vars (which is only
>> required in hybrid mode).
>> The aliases are there just to avoid confusion.
>
> IMHO it would be an improvement if it would be specified explicitly in
> the header file dynamic_annotations.h that the ANNOTATE_CONDVAR_*
> macro's should be used only with condition variables
> (pthread_condvar_t*) and that the ANNOTATE_HAPPENS_* macro's should be
> used only with other objects than POSIX synchronization objects. That
> would allow pure happens-before data race detectors to ignore the
> ANNOTATE_CONDVAR_* annotations and only handle the ANNOTATE_HAPPENS_*
> annotations.

Hm. Dunno. Maybe.
BTW, please don't forget that there are other synchronization
primitives in the world, not just POSIX objects.

> Can you please also update regression tests 30 and 31? These tests
> currently use the ANNOTATE_CONDVAR_* macro's while according to what
> you wrote these should use the ANNOTATE_HAPPENS_* macro's.

Done, thanks!

> One additional remark regarding the ANNOTATE_HAPPENS_* macro's: a data
> race detection tool has to allocate some memory to keep track of the
> inter-thread ordering imposed by these annotations.

True.

> Since ANNOTATE_HAPPENS_AFTER may be invoked multiple times with the
> same argument, a tool cannot know when it can free the memory
> allocated to implement such an annotation. I have added the
> ANNOTATE_HAPPENS_AFTER_DONE() annotation in drd.h for this purpose.

Grr. I would avoid doing this.
Once the program frees the memory which was used as an argument to
ANNOTATE_HAPPENS_*, the detector may release the resources.
(ThreadSanitizer doesn't do this now. I haven't seen this as a
problem, but maybe it is...)

> Bart.
From: <sv...@va...> - 2009-06-02 07:09:34
Author: njn Date: 2009-06-02 08:09:27 +0100 (Tue, 02 Jun 2009) New Revision: 10207 Log: Avoid more repetitive cut+pastery. Modified: branches/BUILD_TWEAKS/cachegrind/Makefile.am branches/BUILD_TWEAKS/massif/Makefile.am Modified: branches/BUILD_TWEAKS/cachegrind/Makefile.am =================================================================== --- branches/BUILD_TWEAKS/cachegrind/Makefile.am 2009-06-02 07:03:05 UTC (rev 10206) +++ branches/BUILD_TWEAKS/cachegrind/Makefile.am 2009-06-02 07:09:27 UTC (rev 10207) @@ -4,31 +4,9 @@ noinst_HEADERS = cg_arch.h cg_sim.c cg_branchpred.c -noinst_PROGRAMS = -if VGCONF_PLATFORMS_INCLUDE_X86_LINUX -noinst_PROGRAMS += cachegrind-x86-linux -endif -if VGCONF_PLATFORMS_INCLUDE_AMD64_LINUX -noinst_PROGRAMS += cachegrind-amd64-linux -endif -if VGCONF_PLATFORMS_INCLUDE_PPC32_LINUX -noinst_PROGRAMS += cachegrind-ppc32-linux -endif -if VGCONF_PLATFORMS_INCLUDE_PPC64_LINUX -noinst_PROGRAMS += cachegrind-ppc64-linux -endif -if VGCONF_PLATFORMS_INCLUDE_PPC32_AIX5 -noinst_PROGRAMS += cachegrind-ppc32-aix5 -endif -if VGCONF_PLATFORMS_INCLUDE_PPC64_AIX5 -noinst_PROGRAMS += cachegrind-ppc64-aix5 -endif -if VGCONF_PLATFORMS_INCLUDE_X86_DARWIN -noinst_PROGRAMS += cachegrind-x86-darwin -endif -if VGCONF_PLATFORMS_INCLUDE_AMD64_DARWIN -noinst_PROGRAMS += cachegrind-amd64-darwin -endif +#---------------------------------------------------------------------------- +# cg_merge +#---------------------------------------------------------------------------- # Build cg_merge for the primary target only. 
bin_PROGRAMS = cg_merge @@ -38,65 +16,53 @@ cg_merge_CCASFLAGS = $(AM_CCASFLAGS_PRI) cg_merge_LDFLAGS = $(AM_CFLAGS_PRI) +#---------------------------------------------------------------------------- +# cachegrind-<platform> +#---------------------------------------------------------------------------- -CACHEGRIND_SOURCES_COMMON = cg_main.c -CACHEGRIND_SOURCES_X86 = cg-x86.c -CACHEGRIND_SOURCES_AMD64 = cg-amd64.c -CACHEGRIND_SOURCES_PPC32 = cg-ppc32.c -CACHEGRIND_SOURCES_PPC64 = cg-ppc64.c +noinst_PROGRAMS = cachegrind-@VGCONF_ARCH_PRI@-@VGCONF_OS@ +if VGCONF_HAVE_PLATFORM_SEC +noinst_PROGRAMS += cachegrind-@VGCONF_ARCH_SEC@-@VGCONF_OS@ +endif -cachegrind_x86_linux_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_X86) -cachegrind_x86_linux_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX) -cachegrind_x86_linux_CFLAGS = $(AM_CFLAGS_X86_LINUX) -cachegrind_x86_linux_DEPENDENCIES = $(COREGRIND_LIBS_X86_LINUX) -cachegrind_x86_linux_LDADD = $(TOOL_LDADD_X86_LINUX) -cachegrind_x86_linux_LDFLAGS = $(TOOL_LDFLAGS_X86_LINUX) +if VGCONF_ARCHS_INCLUDE_X86 +CACHEGRIND_SOURCES_ARCH = cg-x86.c +endif +if VGCONF_ARCHS_INCLUDE_AMD64 +CACHEGRIND_SOURCES_ARCH = cg-amd64.c +endif +if VGCONF_ARCHS_INCLUDE_PPC32 +CACHEGRIND_SOURCES_ARCH = cg-ppc32.c +endif +if VGCONF_ARCHS_INCLUDE_PPC64 +CACHEGRIND_SOURCES_ARCH = cg-ppc64.c +endif -cachegrind_amd64_linux_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_AMD64) -cachegrind_amd64_linux_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX) -cachegrind_amd64_linux_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) -cachegrind_amd64_linux_DEPENDENCIES = $(COREGRIND_LIBS_AMD64_LINUX) -cachegrind_amd64_linux_LDADD = $(TOOL_LDADD_AMD64_LINUX) -cachegrind_amd64_linux_LDFLAGS = $(TOOL_LDFLAGS_AMD64_LINUX) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_SOURCES = \ + cg_main.c $(CACHEGRIND_SOURCES_ARCH) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CFLAGS = \ + 
$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_DEPENDENCIES = \ + $(COREGRIND_LIBS_@VGCONF_PLATFORM_PRI_CAPS@) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_PRI_CAPS@) +cachegrind_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +if VGCONF_HAVE_PLATFORM_SEC +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_SOURCES = \ + cg_main.c $(CACHEGRIND_SOURCES_ARCH) +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CFLAGS = \ + $(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_DEPENDENCIES = \ + $(COREGRIND_LIBS_@VGCONF_PLATFORM_SEC_CAPS@) +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_SEC_CAPS@) +cachegrind_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +endif -cachegrind_ppc32_linux_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_PPC32) -cachegrind_ppc32_linux_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX) -cachegrind_ppc32_linux_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) -cachegrind_ppc32_linux_DEPENDENCIES = $(COREGRIND_LIBS_PPC32_LINUX) -cachegrind_ppc32_linux_LDADD = $(TOOL_LDADD_PPC32_LINUX) -cachegrind_ppc32_linux_LDFLAGS = $(TOOL_LDFLAGS_PPC32_LINUX) -cachegrind_ppc64_linux_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_PPC64) -cachegrind_ppc64_linux_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX) -cachegrind_ppc64_linux_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) -cachegrind_ppc64_linux_DEPENDENCIES = $(COREGRIND_LIBS_PPC64_LINUX) -cachegrind_ppc64_linux_LDADD = $(TOOL_LDADD_PPC64_LINUX) -cachegrind_ppc64_linux_LDFLAGS = $(TOOL_LDFLAGS_PPC64_LINUX) - -cachegrind_ppc32_aix5_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_PPC32) -cachegrind_ppc32_aix5_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5) -cachegrind_ppc32_aix5_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) 
-cachegrind_ppc32_aix5_DEPENDENCIES = $(COREGRIND_LIBS_PPC32_AIX5) -cachegrind_ppc32_aix5_LDADD = $(TOOL_LDADD_PPC32_AIX5) -cachegrind_ppc32_aix5_LDFLAGS = $(TOOL_LDFLAGS_PPC32_AIX5) - -cachegrind_ppc64_aix5_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_PPC64) -cachegrind_ppc64_aix5_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5) -cachegrind_ppc64_aix5_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) -cachegrind_ppc64_aix5_DEPENDENCIES = $(COREGRIND_LIBS_PPC64_AIX5) -cachegrind_ppc64_aix5_LDADD = $(TOOL_LDADD_PPC64_AIX5) -cachegrind_ppc64_aix5_LDFLAGS = $(TOOL_LDFLAGS_PPC64_AIX5) - -cachegrind_x86_darwin_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_X86) -cachegrind_x86_darwin_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN) -cachegrind_x86_darwin_CFLAGS = $(AM_CFLAGS_X86_DARWIN) -cachegrind_x86_darwin_DEPENDENCIES = $(COREGRIND_LIBS_X86_DARWIN) -cachegrind_x86_darwin_LDADD = $(TOOL_LDADD_X86_DARWIN) -cachegrind_x86_darwin_LDFLAGS = $(TOOL_LDFLAGS_X86_DARWIN) - -cachegrind_amd64_darwin_SOURCES = $(CACHEGRIND_SOURCES_COMMON) $(CACHEGRIND_SOURCES_AMD64) -cachegrind_amd64_darwin_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN) -cachegrind_amd64_darwin_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) -cachegrind_amd64_darwin_DEPENDENCIES = $(COREGRIND_LIBS_AMD64_DARWIN) -cachegrind_amd64_darwin_LDADD = $(TOOL_LDADD_AMD64_DARWIN) -cachegrind_amd64_darwin_LDFLAGS = $(TOOL_LDFLAGS_AMD64_DARWIN) Modified: branches/BUILD_TWEAKS/massif/Makefile.am =================================================================== --- branches/BUILD_TWEAKS/massif/Makefile.am 2009-06-02 07:03:05 UTC (rev 10206) +++ branches/BUILD_TWEAKS/massif/Makefile.am 2009-06-02 07:09:27 UTC (rev 10207) @@ -4,153 +4,67 @@ bin_SCRIPTS = ms_print -noinst_PROGRAMS = -noinst_DSYMS = -if VGCONF_PLATFORMS_INCLUDE_X86_LINUX -noinst_PROGRAMS += massif-x86-linux vgpreload_massif-x86-linux.so +#---------------------------------------------------------------------------- +# massif-<platform> 
+#---------------------------------------------------------------------------- + +noinst_PROGRAMS = massif-@VGCONF_ARCH_PRI@-@VGCONF_OS@ +if VGCONF_HAVE_PLATFORM_SEC +noinst_PROGRAMS += massif-@VGCONF_ARCH_SEC@-@VGCONF_OS@ endif -if VGCONF_PLATFORMS_INCLUDE_AMD64_LINUX -noinst_PROGRAMS += massif-amd64-linux vgpreload_massif-amd64-linux.so + +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_SOURCES = ms_main.c +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CFLAGS = \ + $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_DEPENDENCIES = \ + $(COREGRIND_LIBS_@VGCONF_PLATFORM_PRI_CAPS@) +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_PRI_CAPS@) +massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +if VGCONF_HAVE_PLATFORM_SEC +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_SOURCES = ms_main.c +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CFLAGS = \ + $(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_DEPENDENCIES = \ + $(COREGRIND_LIBS_@VGCONF_PLATFORM_SEC_CAPS@) +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_SEC_CAPS@) +massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) endif -if VGCONF_PLATFORMS_INCLUDE_PPC32_LINUX -noinst_PROGRAMS += massif-ppc32-linux vgpreload_massif-ppc32-linux.so -endif -if VGCONF_PLATFORMS_INCLUDE_PPC64_LINUX -noinst_PROGRAMS += massif-ppc64-linux vgpreload_massif-ppc64-linux.so -endif -if VGCONF_PLATFORMS_INCLUDE_PPC32_AIX5 -noinst_PROGRAMS += massif-ppc32-aix5 vgpreload_massif-ppc32-aix5.so -endif -if VGCONF_PLATFORMS_INCLUDE_PPC64_AIX5 -noinst_PROGRAMS += massif-ppc64-aix5 vgpreload_massif-ppc64-aix5.so -endif -if VGCONF_PLATFORMS_INCLUDE_X86_DARWIN -noinst_PROGRAMS += massif-x86-darwin 
vgpreload_massif-x86-darwin.so -noinst_DSYMS += vgpreload_massif-x86-darwin.so -endif -if VGCONF_PLATFORMS_INCLUDE_AMD64_DARWIN -noinst_PROGRAMS += massif-amd64-darwin vgpreload_massif-amd64-darwin.so -noinst_DSYMS += vgpreload_massif-amd64-darwin.so -endif -vgpreload_massif_x86_linux_so_SOURCES = -vgpreload_massif_x86_linux_so_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX) -vgpreload_massif_x86_linux_so_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC) -vgpreload_massif_x86_linux_so_DEPENDENCIES = $(LIBREPLACEMALLOC_X86_LINUX) -vgpreload_massif_x86_linux_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_X86_LINUX) \ - $(LIBREPLACEMALLOC_LDFLAGS_X86_LINUX) +#---------------------------------------------------------------------------- +# vgpreload_massif_<platform>.so +#---------------------------------------------------------------------------- -vgpreload_massif_amd64_linux_so_SOURCES = -vgpreload_massif_amd64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX) -vgpreload_massif_amd64_linux_so_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC) -vgpreload_massif_amd64_linux_so_DEPENDENCIES = $(LIBREPLACEMALLOC_AMD64_LINUX) -vgpreload_massif_amd64_linux_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_AMD64_LINUX) \ - $(LIBREPLACEMALLOC_LDFLAGS_AMD64_LINUX) +noinst_PROGRAMS += vgpreload_massif-@VGCONF_ARCH_PRI@-@VGCONF_OS@.so +if VGCONF_HAVE_PLATFORM_SEC +noinst_PROGRAMS += vgpreload_massif-@VGCONF_ARCH_SEC@-@VGCONF_OS@.so +endif -vgpreload_massif_ppc32_linux_so_SOURCES = -vgpreload_massif_ppc32_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX) -vgpreload_massif_ppc32_linux_so_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC) -vgpreload_massif_ppc32_linux_so_DEPENDENCIES = $(LIBREPLACEMALLOC_PPC32_LINUX) -vgpreload_massif_ppc32_linux_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_PPC32_LINUX) \ - $(LIBREPLACEMALLOC_LDFLAGS_PPC32_LINUX) +if VGCONF_OS_IS_DARWIN +noinst_DSYMS = $(noinst_PROGRAMS) +endif -vgpreload_massif_ppc64_linux_so_SOURCES = -vgpreload_massif_ppc64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX) 
-vgpreload_massif_ppc64_linux_so_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC) -vgpreload_massif_ppc64_linux_so_DEPENDENCIES = $(LIBREPLACEMALLOC_PPC64_LINUX) -vgpreload_massif_ppc64_linux_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_PPC64_LINUX) \ - $(LIBREPLACEMALLOC_LDFLAGS_PPC64_LINUX) +vgpreload_massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_SOURCES = +vgpreload_massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CPPFLAGS = $(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +vgpreload_massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CFLAGS = $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC) +vgpreload_massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_DEPENDENCIES = $(LIBREPLACEMALLOC_@VGCONF_PLATFORM_PRI_CAPS@) +vgpreload_massif_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_LDFLAGS = \ + $(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) \ + $(LIBREPLACEMALLOC_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +if VGCONF_HAVE_PLATFORM_SEC +vgpreload_massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_SOURCES = +vgpreload_massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CPPFLAGS = $(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +vgpreload_massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CFLAGS = $(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC) +vgpreload_massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_DEPENDENCIES = $(LIBREPLACEMALLOC_@VGCONF_PLATFORM_SEC_CAPS@) +vgpreload_massif_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_LDFLAGS = \ + $(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) \ + $(LIBREPLACEMALLOC_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +endif -vgpreload_massif_ppc32_aix5_so_SOURCES = -vgpreload_massif_ppc32_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5) -vgpreload_massif_ppc32_aix5_so_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC) -vgpreload_massif_ppc32_aix5_so_DEPENDENCIES = $(LIBREPLACEMALLOC_PPC32_AIX5) -vgpreload_massif_ppc32_aix5_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_PPC32_AIX5) \ - $(LIBREPLACEMALLOC_LDFLAGS_PPC32_AIX5) - -vgpreload_massif_ppc64_aix5_so_SOURCES = -vgpreload_massif_ppc64_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5) -vgpreload_massif_ppc64_aix5_so_CFLAGS = 
$(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC) -vgpreload_massif_ppc64_aix5_so_DEPENDENCIES = $(LIBREPLACEMALLOC_PPC64_AIX5) -vgpreload_massif_ppc64_aix5_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_PPC64_AIX5) \ - $(LIBREPLACEMALLOC_LDFLAGS_PPC64_AIX5) - -vgpreload_massif_x86_darwin_so_SOURCES = -vgpreload_massif_x86_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN) -vgpreload_massif_x86_darwin_so_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC) -vgpreload_massif_x86_darwin_so_DEPENDENCIES = $(LIBREPLACEMALLOC_X86_DARWIN) -vgpreload_massif_x86_darwin_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_X86_DARWIN) \ - $(LIBREPLACEMALLOC_LDFLAGS_X86_DARWIN) - -vgpreload_massif_amd64_darwin_so_SOURCES = -vgpreload_massif_amd64_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN) -vgpreload_massif_amd64_darwin_so_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC) -vgpreload_massif_amd64_darwin_so_DEPENDENCIES = $(LIBREPLACEMALLOC_AMD64_DARWIN) -vgpreload_massif_amd64_darwin_so_LDFLAGS = \ - $(PRELOAD_LDFLAGS_AMD64_DARWIN) \ - $(LIBREPLACEMALLOC_LDFLAGS_AMD64_DARWIN) - -MASSIF_SOURCES_COMMON = ms_main.c - -massif_x86_linux_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_x86_linux_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX) -massif_x86_linux_CFLAGS = $(AM_CFLAGS_X86_LINUX) -massif_x86_linux_DEPENDENCIES = $(COREGRIND_LIBS_X86_LINUX) -massif_x86_linux_LDADD = $(TOOL_LDADD_X86_LINUX) -massif_x86_linux_LDFLAGS = $(TOOL_LDFLAGS_X86_LINUX) - -massif_amd64_linux_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_amd64_linux_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX) -massif_amd64_linux_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) -massif_amd64_linux_DEPENDENCIES = $(COREGRIND_LIBS_AMD64_LINUX) -massif_amd64_linux_LDADD = $(TOOL_LDADD_AMD64_LINUX) -massif_amd64_linux_LDFLAGS = $(TOOL_LDFLAGS_AMD64_LINUX) - -massif_ppc32_linux_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_ppc32_linux_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX) -massif_ppc32_linux_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) -massif_ppc32_linux_DEPENDENCIES = $(COREGRIND_LIBS_PPC32_LINUX) 
-massif_ppc32_linux_LDADD = $(TOOL_LDADD_PPC32_LINUX) -massif_ppc32_linux_LDFLAGS = $(TOOL_LDFLAGS_PPC32_LINUX) - -massif_ppc64_linux_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_ppc64_linux_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX) -massif_ppc64_linux_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) -massif_ppc64_linux_DEPENDENCIES = $(COREGRIND_LIBS_PPC64_LINUX) -massif_ppc64_linux_LDADD = $(TOOL_LDADD_PPC64_LINUX) -massif_ppc64_linux_LDFLAGS = $(TOOL_LDFLAGS_PPC64_LINUX) - -massif_ppc32_aix5_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_ppc32_aix5_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5) -massif_ppc32_aix5_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) -massif_ppc32_aix5_DEPENDENCIES = $(COREGRIND_LIBS_PPC32_AIX5) -massif_ppc32_aix5_LDADD = $(TOOL_LDADD_PPC32_AIX5) -massif_ppc32_aix5_LDFLAGS = $(TOOL_LDFLAGS_PPC32_AIX5) - -massif_ppc64_aix5_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_ppc64_aix5_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5) -massif_ppc64_aix5_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) -massif_ppc64_aix5_DEPENDENCIES = $(COREGRIND_LIBS_PPC64_AIX5) -massif_ppc64_aix5_LDADD = $(TOOL_LDADD_PPC64_AIX5) -massif_ppc64_aix5_LDFLAGS = $(TOOL_LDFLAGS_PPC64_AIX5) - -massif_x86_darwin_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_x86_darwin_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN) -massif_x86_darwin_CFLAGS = $(AM_CFLAGS_X86_DARWIN) -massif_x86_darwin_DEPENDENCIES = $(COREGRIND_LIBS_X86_DARWIN) -massif_x86_darwin_LDADD = $(TOOL_LDADD_X86_DARWIN) -massif_x86_darwin_LDFLAGS = $(TOOL_LDFLAGS_X86_DARWIN) - -massif_amd64_darwin_SOURCES = $(MASSIF_SOURCES_COMMON) -massif_amd64_darwin_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN) -massif_amd64_darwin_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) -massif_amd64_darwin_DEPENDENCIES = $(COREGRIND_LIBS_AMD64_DARWIN) -massif_amd64_darwin_LDADD = $(TOOL_LDADD_AMD64_DARWIN) -massif_amd64_darwin_LDFLAGS = $(TOOL_LDFLAGS_AMD64_DARWIN) |
|
From: <sv...@va...> - 2009-06-02 07:03:11
|
Author: njn
Date: 2009-06-02 08:03:05 +0100 (Tue, 02 Jun 2009)
New Revision: 10206

Log:
Commit r10197--r10200 and r10202--r10203, which were backed out from the
trunk, onto this branch.

Modified:
   branches/BUILD_TWEAKS/Makefile.flags.am
   branches/BUILD_TWEAKS/coregrind/Makefile.am
   branches/BUILD_TWEAKS/drd/tests/Makefile.am

Modified: branches/BUILD_TWEAKS/Makefile.flags.am
===================================================================
--- branches/BUILD_TWEAKS/Makefile.flags.am	2009-06-02 06:57:26 UTC (rev 10205)
+++ branches/BUILD_TWEAKS/Makefile.flags.am	2009-06-02 07:03:05 UTC (rev 10206)
@@ -30,78 +30,56 @@
 # means some of the flags are duplicated on systems with newer versions of
 # automake, but this does not really matter and seems hard to avoid.

-AM_CPPFLAGS_COMMON = \
-	-I$(top_srcdir) \
-	-I$(top_srcdir)/include \
-	-I@VEX_DIR@/pub
+AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ = \
+	-I$(top_srcdir)/include \
+	-I@VEX_DIR@/pub \
+	-DVGA_@VGCONF_ARCH_PRI@=1 \
+	-DVGO_@VGCONF_OS@=1 \
+	-DVGP_@VGCONF_ARCH_PRI@_@VGCONF_OS@=1
+if VGCONF_HAVE_PLATFORM_SEC
+AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@ = \
+	-I$(top_srcdir)/include \
+	-I@VEX_DIR@/pub \
+	-DVGA_@VGCONF_ARCH_SEC@=1 \
+	-DVGO_@VGCONF_OS@=1 \
+	-DVGP_@VGCONF_ARCH_SEC@_@VGCONF_OS@=1
+endif

 AM_FLAG_M3264_X86_LINUX = @FLAG_M32@
-AM_CPPFLAGS_X86_LINUX = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_x86=1 \
-	-DVGO_linux=1 \
-	-DVGP_x86_linux=1
 AM_CFLAGS_X86_LINUX = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY@ \
 	$(AM_CFLAGS_BASE)
 AM_CCASFLAGS_X86_LINUX = $(AM_CPPFLAGS_X86_LINUX) @FLAG_M32@ -g

 AM_FLAG_M3264_AMD64_LINUX = @FLAG_M64@
-AM_CPPFLAGS_AMD64_LINUX = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_amd64=1 \
-	-DVGO_linux=1 \
-	-DVGP_amd64_linux=1
 AM_CFLAGS_AMD64_LINUX = @FLAG_M64@ -fomit-frame-pointer \
 	@PREFERRED_STACK_BOUNDARY@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_AMD64_LINUX = $(AM_CPPFLAGS_AMD64_LINUX) @FLAG_M64@ -g

 AM_FLAG_M3264_PPC32_LINUX = @FLAG_M32@
-AM_CPPFLAGS_PPC32_LINUX = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_ppc32=1 \
-	-DVGO_linux=1 \
-	-DVGP_ppc32_linux=1
 AM_CFLAGS_PPC32_LINUX = @FLAG_M32@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC32_LINUX = $(AM_CPPFLAGS_PPC32_LINUX) @FLAG_M32@ -g

 AM_FLAG_M3264_PPC64_LINUX = @FLAG_M64@
-AM_CPPFLAGS_PPC64_LINUX = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_ppc64=1 \
-	-DVGO_linux=1 \
-	-DVGP_ppc64_linux=1
 AM_CFLAGS_PPC64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC64_LINUX = $(AM_CPPFLAGS_PPC64_LINUX) @FLAG_M64@ -g

 AM_FLAG_M3264_PPC32_AIX5 = @FLAG_MAIX32@
-AM_CPPFLAGS_PPC32_AIX5 = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_ppc32=1 \
-	-DVGO_aix5=1 \
-	-DVGP_ppc32_aix5=1
 AM_CFLAGS_PPC32_AIX5 = @FLAG_MAIX32@ -mcpu=powerpc $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC32_AIX5 = $(AM_CPPFLAGS_PPC32_AIX5) \
 	@FLAG_MAIX32@ -mcpu=powerpc -g

 AM_FLAG_M3264_PPC64_AIX5 = @FLAG_MAIX64@
-AM_CPPFLAGS_PPC64_AIX5 = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_ppc64=1 \
-	-DVGO_aix5=1 \
-	-DVGP_ppc64_aix5=1
 AM_CFLAGS_PPC64_AIX5 = @FLAG_MAIX64@ -mcpu=powerpc64 $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC64_AIX5 = $(AM_CPPFLAGS_PPC64_AIX5) \
 	@FLAG_MAIX64@ -mcpu=powerpc64 -g

 AM_FLAG_M3264_X86_DARWIN = -arch i386
-AM_CPPFLAGS_X86_DARWIN = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_x86=1 \
-	-DVGO_darwin=1 \
-	-DVGP_x86_darwin=1
 AM_CFLAGS_X86_DARWIN = $(WERROR) -arch i386 $(AM_CFLAGS_BASE) \
 	-mmacosx-version-min=10.5 -fno-stack-protector \
 	-mdynamic-no-pic
 AM_CCASFLAGS_X86_DARWIN = $(AM_CPPFLAGS_X86_DARWIN) -arch i386 -g

 AM_FLAG_M3264_AMD64_DARWIN = -arch x86_64
-AM_CPPFLAGS_AMD64_DARWIN = $(AM_CPPFLAGS_COMMON) \
-	-DVGA_amd64=1 \
-	-DVGO_darwin=1 \
-	-DVGP_amd64_darwin=1
 AM_CFLAGS_AMD64_DARWIN = $(WERROR) -arch x86_64 $(AM_CFLAGS_BASE) \
 	-mmacosx-version-min=10.5 -fno-stack-protector
 AM_CCASFLAGS_AMD64_DARWIN = $(AM_CPPFLAGS_AMD64_DARWIN) -arch x86_64 -g

Modified: branches/BUILD_TWEAKS/coregrind/Makefile.am
===================================================================
--- branches/BUILD_TWEAKS/coregrind/Makefile.am	2009-06-02 06:57:26 UTC (rev 10205)
+++ branches/BUILD_TWEAKS/coregrind/Makefile.am	2009-06-02 07:03:05 UTC (rev 10206)
@@ -7,95 +7,62 @@
 include $(top_srcdir)/Makefile.flags.am
 include $(top_srcdir)/Makefile.core-tool.am

-AM_CPPFLAGS_CORE_COMMON = \
-	-I$(top_srcdir)/coregrind \
-	-DVG_LIBDIR="\"$(valdir)"\"
+#----------------------------------------------------------------------------
+# Basics, flags
+#----------------------------------------------------------------------------

-AM_CPPFLAGS_X86_LINUX += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"x86-linux\""
+AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ += \
+	-I$(top_srcdir)/coregrind \
+	-DVG_LIBDIR="\"$(valdir)"\" \
+	-DVG_PLATFORM="\"@VGCONF_ARCH_PRI@-@VGCONF_OS@\""
+if VGCONF_HAVE_PLATFORM_SEC
+AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@ += \
+	-I$(top_srcdir)/coregrind \
+	-DVG_LIBDIR="\"$(valdir)"\" \
+	-DVG_PLATFORM="\"@VGCONF_ARCH_SEC@-@VGCONF_OS@\""
+endif

-AM_CPPFLAGS_AMD64_LINUX += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"amd64-linux\""
-
-AM_CPPFLAGS_PPC32_LINUX += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc32-linux\""
-
-AM_CPPFLAGS_PPC64_LINUX += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc64-linux\""
-
-AM_CPPFLAGS_PPC32_AIX5 += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc32-aix5\""
-
-AM_CPPFLAGS_PPC64_AIX5 += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc64-aix5\""
-
-AM_CPPFLAGS_X86_DARWIN += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"x86-darwin\""
-
-AM_CPPFLAGS_AMD64_DARWIN += \
-	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"amd64-darwin\""
-
-
 default.supp: $(SUPP_FILES)

-noinst_PROGRAMS =
-noinst_DSYMS =
 pkglib_LIBRARIES =
-LIBVEX =

 if VGCONF_PLATFORMS_INCLUDE_X86_LINUX
-noinst_PROGRAMS += vgpreload_core-x86-linux.so
-pkglib_LIBRARIES += libcoregrind-x86-linux.a libreplacemalloc_toolpreload-x86-linux.a
-LIBVEX += libvex-x86-linux.a
+pkglib_LIBRARIES += libcoregrind-x86-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_AMD64_LINUX
-noinst_PROGRAMS += vgpreload_core-amd64-linux.so
-pkglib_LIBRARIES += libcoregrind-amd64-linux.a libreplacemalloc_toolpreload-amd64-linux.a
-LIBVEX += libvex-amd64-linux.a
+pkglib_LIBRARIES += libcoregrind-amd64-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC32_LINUX
-noinst_PROGRAMS += vgpreload_core-ppc32-linux.so
-pkglib_LIBRARIES += libcoregrind-ppc32-linux.a libreplacemalloc_toolpreload-ppc32-linux.a
-LIBVEX += libvex-ppc32-linux.a
+pkglib_LIBRARIES += libcoregrind-ppc32-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC64_LINUX
-noinst_PROGRAMS += vgpreload_core-ppc64-linux.so
-pkglib_LIBRARIES += libcoregrind-ppc64-linux.a libreplacemalloc_toolpreload-ppc64-linux.a
-LIBVEX += libvex-ppc64-linux.a
+pkglib_LIBRARIES += libcoregrind-ppc64-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC32_AIX5
-noinst_PROGRAMS += vgpreload_core-ppc32-aix5.so
-pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a libreplacemalloc_toolpreload-ppc32-aix5.a
-LIBVEX += libvex-ppc32-aix5.a
+pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC64_AIX5
-noinst_PROGRAMS += vgpreload_core-ppc64-aix5.so
-pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a libreplacemalloc_toolpreload-ppc64-aix5.a
-LIBVEX += libvex-ppc64-aix5.a
+pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_X86_DARWIN
-noinst_PROGRAMS += vgpreload_core-x86-darwin.so
-noinst_DSYMS += vgpreload_core-x86-darwin.so
-pkglib_LIBRARIES += libcoregrind-x86-darwin.a libreplacemalloc_toolpreload-x86-darwin.a
-LIBVEX += libvex-x86-darwin.a
+pkglib_LIBRARIES += libcoregrind-x86-darwin.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_AMD64_DARWIN
-noinst_PROGRAMS += vgpreload_core-amd64-darwin.so
-noinst_DSYMS += vgpreload_core-amd64-darwin.so
-pkglib_LIBRARIES += libcoregrind-amd64-darwin.a libreplacemalloc_toolpreload-amd64-darwin.a
-LIBVEX += libvex-amd64-darwin.a
+pkglib_LIBRARIES += libcoregrind-amd64-darwin.a
 endif
-
-#------------------------- launcher -----------------------
+#----------------------------------------------------------------------------
+# The launcher
+#----------------------------------------------------------------------------

 # Build the launcher (valgrind) for the primary target only.
 #
 bin_PROGRAMS = \
@@ -118,6 +85,21 @@
 	m_debuglog.c
 endif

+valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+
+no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
+no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+
+#----------------------------------------------------------------------------
+# Darwin Mach stuff
+#----------------------------------------------------------------------------
+
 # Mach RPC interface definitions
 # Here are some more .defs files that are not used, but could be in the
 # future:
@@ -153,20 +135,10 @@
 $(mach_srcs) $(mach_hdrs): $(mach_files)
	(cd m_mach && mig $(mach_files))

-valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+#----------------------------------------------------------------------------
+# Headers
+#----------------------------------------------------------------------------

-no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
-no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
-#
-#----------------------------------------------------------
-
-
 noinst_HEADERS = \
 	$(mach_hdrs) \
 	launcher-aix5-bootblock.h \
@@ -253,6 +225,10 @@
 	m_syswrap/priv_syswrap-main.h \
 	m_ume/priv_ume.h

+#----------------------------------------------------------------------------
+# libcoregrind_<platform>.so
+#----------------------------------------------------------------------------
+
 BUILT_SOURCES =
 CLEANFILES =
 if VGCONF_OS_IS_DARWIN
@@ -260,7 +236,6 @@
 CLEANFILES += $(COREGRIND_DARWIN_BUILT_SOURCES)
 endif

-
 COREGRIND_SOURCES_COMMON = \
 	m_commandline.c \
 	m_clientstate.c \
@@ -466,40 +441,6 @@
 libcoregrind_amd64_darwin_a_CCASFLAGS = $(AM_CCASFLAGS_AMD64_DARWIN)

-libreplacemalloc_toolpreload_x86_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_x86_linux_a_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX)
-libreplacemalloc_toolpreload_x86_linux_a_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_amd64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_amd64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX)
-libreplacemalloc_toolpreload_amd64_linux_a_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_ppc32_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc32_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX)
-libreplacemalloc_toolpreload_ppc32_linux_a_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_ppc64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX)
-libreplacemalloc_toolpreload_ppc64_linux_a_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_ppc32_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc32_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5)
-libreplacemalloc_toolpreload_ppc32_aix5_a_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC)
-libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
-
-libreplacemalloc_toolpreload_ppc64_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc64_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5)
-libreplacemalloc_toolpreload_ppc64_aix5_a_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC)
-libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru
-
-libreplacemalloc_toolpreload_x86_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_x86_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN)
-libreplacemalloc_toolpreload_x86_darwin_a_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_amd64_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_amd64_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN)
-libreplacemalloc_toolpreload_amd64_darwin_a_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC)
-
 m_dispatch/dispatch-x86-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-amd64-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-ppc32-linux.S: libvex_guest_offsets.h
@@ -521,47 +462,67 @@
 libvex_guest_offsets.h:
	$(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" pub/libvex_guest_offsets.h

-VGPRELOAD_CORE_SOURCES_COMMON = vg_preloaded.c
+#----------------------------------------------------------------------------
+# libreplacemalloc_toolpreload_<platform>.so
+#----------------------------------------------------------------------------

-vgpreload_core_x86_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_x86_linux_so_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX)
-vgpreload_core_x86_linux_so_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC)
-vgpreload_core_x86_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_X86_LINUX)
+pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
+if VGCONF_HAVE_PLATFORM_SEC
+pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
+endif

-vgpreload_core_amd64_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_amd64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX)
-vgpreload_core_amd64_linux_so_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC)
-vgpreload_core_amd64_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_AMD64_LINUX)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_SOURCES = \
+	m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
+if VGCONF_HAVE_PLATFORM_SEC
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES = \
+	m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
+endif

-vgpreload_core_ppc32_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_ppc32_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX)
-vgpreload_core_ppc32_linux_so_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC)
-vgpreload_core_ppc32_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC32_LINUX)
+# Special AR for AIX.
+libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
+libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru

-vgpreload_core_ppc64_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_ppc64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX)
-vgpreload_core_ppc64_linux_so_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC)
-vgpreload_core_ppc64_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC64_LINUX)
+#----------------------------------------------------------------------------
+# vgpreload_core_<platform>.so
+#----------------------------------------------------------------------------

-vgpreload_core_ppc32_aix5_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_ppc32_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5)
-vgpreload_core_ppc32_aix5_so_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC)
-vgpreload_core_ppc32_aix5_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC32_AIX5)
+noinst_PROGRAMS = vgpreload_core-@VGCONF_ARCH_PRI@-@VGCONF_OS@.so
+if VGCONF_HAVE_PLATFORM_SEC
+noinst_PROGRAMS += vgpreload_core-@VGCONF_ARCH_SEC@-@VGCONF_OS@.so
+endif

-vgpreload_core_ppc64_aix5_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_ppc64_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5)
-vgpreload_core_ppc64_aix5_so_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC)
-vgpreload_core_ppc64_aix5_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC64_AIX5)
+if VGCONF_OS_IS_DARWIN
+noinst_DSYMS = $(noinst_PROGRAMS)
+endif

-vgpreload_core_x86_darwin_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_x86_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN)
-vgpreload_core_x86_darwin_so_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC)
-vgpreload_core_x86_darwin_so_LDFLAGS = $(PRELOAD_LDFLAGS_X86_DARWIN)
+vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_SOURCES = vg_preloaded.c
+vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
+vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
+vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_LDFLAGS = \
+	$(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
+if VGCONF_HAVE_PLATFORM_SEC
+vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_SOURCES = vg_preloaded.c
+vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
+vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
+vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_LDFLAGS = \
+	$(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
+endif

-vgpreload_core_amd64_darwin_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
-vgpreload_core_amd64_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN)
-vgpreload_core_amd64_darwin_so_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC)
-vgpreload_core_amd64_darwin_so_LDFLAGS = $(PRELOAD_LDFLAGS_AMD64_DARWIN)
+#----------------------------------------------------------------------------
+# General stuff
+#----------------------------------------------------------------------------

 all-local: inplace-noinst_PROGRAMS inplace-noinst_DSYMS

@@ -569,6 +530,11 @@
	$(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" clean
	rm -f $(mach_srcs) $(mach_server_srcs) $(mach_hdrs)

+LIBVEX = libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
+if VGCONF_HAVE_PLATFORM_SEC
+LIBVEX += libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
+endif
+
 # Nb: The loop installs the libvex library for possible use by standalone
 # tools.
 install-exec-local: install-noinst_PROGRAMS install-noinst_DSYMS

Modified: branches/BUILD_TWEAKS/drd/tests/Makefile.am
===================================================================
--- branches/BUILD_TWEAKS/drd/tests/Makefile.am	2009-06-02 06:57:26 UTC (rev 10205)
+++ branches/BUILD_TWEAKS/drd/tests/Makefile.am	2009-06-02 07:03:05 UTC (rev 10206)
@@ -241,8 +241,7 @@
 	sem_as_mutex \
 	sigalrm \
 	thread_name \
-	trylock \
-	tsan_unittest
+	trylock

 if HAVE_BOOST_1_35
 check_PROGRAMS += boost_thread
@@ -272,6 +271,9 @@
 check_PROGRAMS += qt4_mutex qt4_rwlock qt4_semaphore
 endif

+if ! VGCONF_OS_IS_DARWIN
+check_PROGRAMS += tsan_unittest
+endif
 AM_CFLAGS += $(AM_FLAG_M3264_PRI) @FLAG_W_EXTRA@ -Wno-inline -Wno-unused-parameter
 AM_CXXFLAGS += $(AM_FLAG_M3264_PRI) @FLAG_W_EXTRA@ -Wno-inline -Wno-unused-parameter
|
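To make the mechanics of the diff above concrete: automake never sees the @VGCONF_...@ tokens; configure substitutes them before the per-directory Makefile is generated, so one parameterized block replaces eight hand-written ones. As a sketch, on a build whose primary platform is amd64-linux (assuming configure sets VGCONF_ARCH_PRI=amd64, VGCONF_OS=linux and VGCONF_PLATFORM_PRI_CAPS=AMD64_LINUX; the exact variable values for other platforms follow the same naming pattern), the parameterized CPPFLAGS block would expand to roughly:

```makefile
# Expansion of AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ for amd64-linux.
# (@VEX_DIR@ is likewise replaced by configure with the real VEX path.)
AM_CPPFLAGS_AMD64_LINUX = \
	-I$(top_srcdir)/include \
	-I@VEX_DIR@/pub \
	-DVGA_amd64=1 \
	-DVGO_linux=1 \
	-DVGP_amd64_linux=1
```

The VGCONF_HAVE_PLATFORM_SEC conditional then produces a second, analogous block only on bi-arch builds, which is why the AIX-specific `_AR` overrides can be left as two literal lines rather than parameterized.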
|
From: <sv...@va...> - 2009-06-02 06:57:28
|
Author: njn
Date: 2009-06-02 07:57:26 +0100 (Tue, 02 Jun 2009)
New Revision: 10205

Log:
Make a branch for experimenting with the build system, in particular,
avoiding lots of cut+paste code that is repeated for each platform.

Added:
   branches/BUILD_TWEAKS/

Copied: branches/BUILD_TWEAKS (from rev 10204, trunk)
|
|
From: <sv...@va...> - 2009-06-02 06:55:03
|
Author: njn
Date: 2009-06-02 07:54:57 +0100 (Tue, 02 Jun 2009)
New Revision: 10204

Log:
Back out r10197--r10200 and r10202--r10203.  I'm going to put them, and
further, related changes, on a branch instead.

Modified:
   trunk/Makefile.flags.am
   trunk/coregrind/Makefile.am
   trunk/drd/tests/Makefile.am

Modified: trunk/Makefile.flags.am
===================================================================
--- trunk/Makefile.flags.am	2009-06-02 05:27:07 UTC (rev 10203)
+++ trunk/Makefile.flags.am	2009-06-02 06:54:57 UTC (rev 10204)
@@ -30,56 +30,78 @@
 # means some of the flags are duplicated on systems with newer versions of
 # automake, but this does not really matter and seems hard to avoid.

-AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ = \
-	-I$(top_srcdir)/include \
-	-I@VEX_DIR@/pub \
-	-DVGA_@VGCONF_ARCH_PRI@=1 \
-	-DVGO_@VGCONF_OS@=1 \
-	-DVGP_@VGCONF_ARCH_PRI@_@VGCONF_OS@=1
-if VGCONF_HAVE_PLATFORM_SEC
-AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@ = \
-	-I$(top_srcdir)/include \
-	-I@VEX_DIR@/pub \
-	-DVGA_@VGCONF_ARCH_SEC@=1 \
-	-DVGO_@VGCONF_OS@=1 \
-	-DVGP_@VGCONF_ARCH_SEC@_@VGCONF_OS@=1
-endif
+AM_CPPFLAGS_COMMON = \
+	-I$(top_srcdir) \
+	-I$(top_srcdir)/include \
+	-I@VEX_DIR@/pub

 AM_FLAG_M3264_X86_LINUX = @FLAG_M32@
+AM_CPPFLAGS_X86_LINUX = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_x86=1 \
+	-DVGO_linux=1 \
+	-DVGP_x86_linux=1
 AM_CFLAGS_X86_LINUX = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY@ \
 	$(AM_CFLAGS_BASE)
 AM_CCASFLAGS_X86_LINUX = $(AM_CPPFLAGS_X86_LINUX) @FLAG_M32@ -g

 AM_FLAG_M3264_AMD64_LINUX = @FLAG_M64@
+AM_CPPFLAGS_AMD64_LINUX = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_amd64=1 \
+	-DVGO_linux=1 \
+	-DVGP_amd64_linux=1
 AM_CFLAGS_AMD64_LINUX = @FLAG_M64@ -fomit-frame-pointer \
 	@PREFERRED_STACK_BOUNDARY@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_AMD64_LINUX = $(AM_CPPFLAGS_AMD64_LINUX) @FLAG_M64@ -g

 AM_FLAG_M3264_PPC32_LINUX = @FLAG_M32@
+AM_CPPFLAGS_PPC32_LINUX = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_ppc32=1 \
+	-DVGO_linux=1 \
+	-DVGP_ppc32_linux=1
 AM_CFLAGS_PPC32_LINUX = @FLAG_M32@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC32_LINUX = $(AM_CPPFLAGS_PPC32_LINUX) @FLAG_M32@ -g

 AM_FLAG_M3264_PPC64_LINUX = @FLAG_M64@
+AM_CPPFLAGS_PPC64_LINUX = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_ppc64=1 \
+	-DVGO_linux=1 \
+	-DVGP_ppc64_linux=1
 AM_CFLAGS_PPC64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC64_LINUX = $(AM_CPPFLAGS_PPC64_LINUX) @FLAG_M64@ -g

 AM_FLAG_M3264_PPC32_AIX5 = @FLAG_MAIX32@
+AM_CPPFLAGS_PPC32_AIX5 = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_ppc32=1 \
+	-DVGO_aix5=1 \
+	-DVGP_ppc32_aix5=1
 AM_CFLAGS_PPC32_AIX5 = @FLAG_MAIX32@ -mcpu=powerpc $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC32_AIX5 = $(AM_CPPFLAGS_PPC32_AIX5) \
 	@FLAG_MAIX32@ -mcpu=powerpc -g

 AM_FLAG_M3264_PPC64_AIX5 = @FLAG_MAIX64@
+AM_CPPFLAGS_PPC64_AIX5 = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_ppc64=1 \
+	-DVGO_aix5=1 \
+	-DVGP_ppc64_aix5=1
 AM_CFLAGS_PPC64_AIX5 = @FLAG_MAIX64@ -mcpu=powerpc64 $(AM_CFLAGS_BASE)
 AM_CCASFLAGS_PPC64_AIX5 = $(AM_CPPFLAGS_PPC64_AIX5) \
 	@FLAG_MAIX64@ -mcpu=powerpc64 -g

 AM_FLAG_M3264_X86_DARWIN = -arch i386
+AM_CPPFLAGS_X86_DARWIN = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_x86=1 \
+	-DVGO_darwin=1 \
+	-DVGP_x86_darwin=1
 AM_CFLAGS_X86_DARWIN = $(WERROR) -arch i386 $(AM_CFLAGS_BASE) \
 	-mmacosx-version-min=10.5 -fno-stack-protector \
 	-mdynamic-no-pic
 AM_CCASFLAGS_X86_DARWIN = $(AM_CPPFLAGS_X86_DARWIN) -arch i386 -g

 AM_FLAG_M3264_AMD64_DARWIN = -arch x86_64
+AM_CPPFLAGS_AMD64_DARWIN = $(AM_CPPFLAGS_COMMON) \
+	-DVGA_amd64=1 \
+	-DVGO_darwin=1 \
+	-DVGP_amd64_darwin=1
 AM_CFLAGS_AMD64_DARWIN = $(WERROR) -arch x86_64 $(AM_CFLAGS_BASE) \
 	-mmacosx-version-min=10.5 -fno-stack-protector
 AM_CCASFLAGS_AMD64_DARWIN = $(AM_CPPFLAGS_AMD64_DARWIN) -arch x86_64 -g

Modified: trunk/coregrind/Makefile.am
===================================================================
--- trunk/coregrind/Makefile.am	2009-06-02 05:27:07 UTC (rev 10203)
+++ trunk/coregrind/Makefile.am	2009-06-02 06:54:57 UTC (rev 10204)
@@ -7,62 +7,95 @@
 include $(top_srcdir)/Makefile.flags.am
 include $(top_srcdir)/Makefile.core-tool.am

-#----------------------------------------------------------------------------
-# Basics, flags
-#----------------------------------------------------------------------------
+AM_CPPFLAGS_CORE_COMMON = \
+	-I$(top_srcdir)/coregrind \
+	-DVG_LIBDIR="\"$(valdir)"\"

-AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ += \
-	-I$(top_srcdir)/coregrind \
-	-DVG_LIBDIR="\"$(valdir)"\" \
-	-DVG_PLATFORM="\"@VGCONF_ARCH_PRI@-@VGCONF_OS@\""
-if VGCONF_HAVE_PLATFORM_SEC
-AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@ += \
-	-I$(top_srcdir)/coregrind \
-	-DVG_LIBDIR="\"$(valdir)"\" \
-	-DVG_PLATFORM="\"@VGCONF_ARCH_SEC@-@VGCONF_OS@\""
-endif
+AM_CPPFLAGS_X86_LINUX += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"x86-linux\""

+AM_CPPFLAGS_AMD64_LINUX += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"amd64-linux\""
+
+AM_CPPFLAGS_PPC32_LINUX += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc32-linux\""
+
+AM_CPPFLAGS_PPC64_LINUX += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc64-linux\""
+
+AM_CPPFLAGS_PPC32_AIX5 += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc32-aix5\""
+
+AM_CPPFLAGS_PPC64_AIX5 += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"ppc64-aix5\""
+
+AM_CPPFLAGS_X86_DARWIN += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"x86-darwin\""
+
+AM_CPPFLAGS_AMD64_DARWIN += \
+	$(AM_CPPFLAGS_CORE_COMMON) -DVG_PLATFORM="\"amd64-darwin\""
+
+
 default.supp: $(SUPP_FILES)

+noinst_PROGRAMS =
+noinst_DSYMS =
 pkglib_LIBRARIES =
+LIBVEX =

 if VGCONF_PLATFORMS_INCLUDE_X86_LINUX
-pkglib_LIBRARIES += libcoregrind-x86-linux.a
+noinst_PROGRAMS += vgpreload_core-x86-linux.so
+pkglib_LIBRARIES += libcoregrind-x86-linux.a libreplacemalloc_toolpreload-x86-linux.a
+LIBVEX += libvex-x86-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_AMD64_LINUX
-pkglib_LIBRARIES += libcoregrind-amd64-linux.a
+noinst_PROGRAMS += vgpreload_core-amd64-linux.so
+pkglib_LIBRARIES += libcoregrind-amd64-linux.a libreplacemalloc_toolpreload-amd64-linux.a
+LIBVEX += libvex-amd64-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC32_LINUX
-pkglib_LIBRARIES += libcoregrind-ppc32-linux.a
+noinst_PROGRAMS += vgpreload_core-ppc32-linux.so
+pkglib_LIBRARIES += libcoregrind-ppc32-linux.a libreplacemalloc_toolpreload-ppc32-linux.a
+LIBVEX += libvex-ppc32-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC64_LINUX
-pkglib_LIBRARIES += libcoregrind-ppc64-linux.a
+noinst_PROGRAMS += vgpreload_core-ppc64-linux.so
+pkglib_LIBRARIES += libcoregrind-ppc64-linux.a libreplacemalloc_toolpreload-ppc64-linux.a
+LIBVEX += libvex-ppc64-linux.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC32_AIX5
-pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a
+noinst_PROGRAMS += vgpreload_core-ppc32-aix5.so
+pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a libreplacemalloc_toolpreload-ppc32-aix5.a
+LIBVEX += libvex-ppc32-aix5.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_PPC64_AIX5
-pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a
+noinst_PROGRAMS += vgpreload_core-ppc64-aix5.so
+pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a libreplacemalloc_toolpreload-ppc64-aix5.a
+LIBVEX += libvex-ppc64-aix5.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_X86_DARWIN
-pkglib_LIBRARIES += libcoregrind-x86-darwin.a
+noinst_PROGRAMS += vgpreload_core-x86-darwin.so
+noinst_DSYMS += vgpreload_core-x86-darwin.so
+pkglib_LIBRARIES += libcoregrind-x86-darwin.a libreplacemalloc_toolpreload-x86-darwin.a
+LIBVEX += libvex-x86-darwin.a
 endif
 if VGCONF_PLATFORMS_INCLUDE_AMD64_DARWIN
-pkglib_LIBRARIES += libcoregrind-amd64-darwin.a
+noinst_PROGRAMS += vgpreload_core-amd64-darwin.so
+noinst_DSYMS += vgpreload_core-amd64-darwin.so
+pkglib_LIBRARIES += libcoregrind-amd64-darwin.a libreplacemalloc_toolpreload-amd64-darwin.a
+LIBVEX += libvex-amd64-darwin.a
 endif

-#----------------------------------------------------------------------------
-# The launcher
-#----------------------------------------------------------------------------
+
+#------------------------- launcher -----------------------

 # Build the launcher (valgrind) for the primary target only.
 #
 bin_PROGRAMS = \
@@ -85,21 +118,6 @@
 	m_debuglog.c
 endif

-valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
-
-no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
-no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
-
-#----------------------------------------------------------------------------
-# Darwin Mach stuff
-#----------------------------------------------------------------------------
-
 # Mach RPC interface definitions
 # Here are some more .defs files that are not used, but could be in the
 # future:
@@ -135,10 +153,20 @@
 $(mach_srcs) $(mach_hdrs): $(mach_files)
	(cd m_mach && mig $(mach_files))

-#----------------------------------------------------------------------------
-# Headers
-#----------------------------------------------------------------------------
+valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+valgrind_LDFLAGS = $(AM_CFLAGS_PRI)

+no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
+no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+#
+#----------------------------------------------------------
+
+
 noinst_HEADERS = \
 	$(mach_hdrs) \
 	launcher-aix5-bootblock.h \
@@ -225,10 +253,6 @@
 	m_syswrap/priv_syswrap-main.h \
 	m_ume/priv_ume.h

-#----------------------------------------------------------------------------
-# libcoregrind_<platform>.so
-#----------------------------------------------------------------------------
-
 BUILT_SOURCES =
 CLEANFILES =
 if VGCONF_OS_IS_DARWIN
@@ -236,6 +260,7 @@
 CLEANFILES += $(COREGRIND_DARWIN_BUILT_SOURCES)
 endif

+
 COREGRIND_SOURCES_COMMON = \
 	m_commandline.c \
 	m_clientstate.c \
@@ -441,6 +466,40 @@
 libcoregrind_amd64_darwin_a_CCASFLAGS = $(AM_CCASFLAGS_AMD64_DARWIN)

+libreplacemalloc_toolpreload_x86_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_x86_linux_a_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX)
+libreplacemalloc_toolpreload_x86_linux_a_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC)
+
+libreplacemalloc_toolpreload_amd64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_amd64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX)
+libreplacemalloc_toolpreload_amd64_linux_a_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC)
+
+libreplacemalloc_toolpreload_ppc32_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_ppc32_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX)
+libreplacemalloc_toolpreload_ppc32_linux_a_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC)
+
+libreplacemalloc_toolpreload_ppc64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_ppc64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX)
+libreplacemalloc_toolpreload_ppc64_linux_a_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC)
+
+libreplacemalloc_toolpreload_ppc32_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_ppc32_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5)
+libreplacemalloc_toolpreload_ppc32_aix5_a_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC)
+libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
+
+libreplacemalloc_toolpreload_ppc64_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_ppc64_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5)
+libreplacemalloc_toolpreload_ppc64_aix5_a_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC)
+libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru
+
+libreplacemalloc_toolpreload_x86_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_x86_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN)
+libreplacemalloc_toolpreload_x86_darwin_a_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC)
+
+libreplacemalloc_toolpreload_amd64_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_amd64_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN)
+libreplacemalloc_toolpreload_amd64_darwin_a_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC)
+
 m_dispatch/dispatch-x86-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-amd64-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-ppc32-linux.S: libvex_guest_offsets.h
@@ -462,67 +521,47 @@
 libvex_guest_offsets.h:
	$(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" pub/libvex_guest_offsets.h

-#----------------------------------------------------------------------------
-# libreplacemalloc_toolpreload_<platform>.so
-#----------------------------------------------------------------------------
+VGPRELOAD_CORE_SOURCES_COMMON = vg_preloaded.c

-pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
-if VGCONF_HAVE_PLATFORM_SEC
-pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
-endif
+vgpreload_core_x86_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_x86_linux_so_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX)
+vgpreload_core_x86_linux_so_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC)
+vgpreload_core_x86_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_X86_LINUX)

-libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_SOURCES = \
-	m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS = \
-	$(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
-libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS = \
-	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
-if VGCONF_HAVE_PLATFORM_SEC
-libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES = \
-	m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS = \
-	$(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
-libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS = \
-	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
-endif
+vgpreload_core_amd64_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_amd64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX)
+vgpreload_core_amd64_linux_so_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC)
+vgpreload_core_amd64_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_AMD64_LINUX)

-# Special AR for AIX.
-libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
-libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru
+vgpreload_core_ppc32_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_ppc32_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX)
+vgpreload_core_ppc32_linux_so_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC)
+vgpreload_core_ppc32_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC32_LINUX)

-#----------------------------------------------------------------------------
-# vgpreload_core_<platform>.so
-#----------------------------------------------------------------------------
+vgpreload_core_ppc64_linux_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_ppc64_linux_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX)
+vgpreload_core_ppc64_linux_so_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC)
+vgpreload_core_ppc64_linux_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC64_LINUX)

-noinst_PROGRAMS = vgpreload_core-@VGCONF_ARCH_PRI@-@VGCONF_OS@.so
-if VGCONF_HAVE_PLATFORM_SEC
-noinst_PROGRAMS += vgpreload_core-@VGCONF_ARCH_SEC@-@VGCONF_OS@.so
-endif
+vgpreload_core_ppc32_aix5_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_ppc32_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5)
+vgpreload_core_ppc32_aix5_so_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC)
+vgpreload_core_ppc32_aix5_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC32_AIX5)

-if VGCONF_OS_IS_DARWIN
-noinst_DSYMS = $(noinst_PROGRAMS)
-endif
+vgpreload_core_ppc64_aix5_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_ppc64_aix5_so_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5)
+vgpreload_core_ppc64_aix5_so_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC)
+vgpreload_core_ppc64_aix5_so_LDFLAGS = $(PRELOAD_LDFLAGS_PPC64_AIX5)

-vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_SOURCES = vg_preloaded.c
-vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CPPFLAGS = \
-	$(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
-vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_CFLAGS = \
-	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
-vgpreload_core_@VGCONF_ARCH_PRI@_@VGCONF_OS@_so_LDFLAGS = \
-	$(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
-if VGCONF_HAVE_PLATFORM_SEC
-vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_SOURCES = vg_preloaded.c
-vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CPPFLAGS = \
-	$(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
-vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_CFLAGS = \
-	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
-vgpreload_core_@VGCONF_ARCH_SEC@_@VGCONF_OS@_so_LDFLAGS = \
-	$(PRELOAD_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
-endif
+vgpreload_core_x86_darwin_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_x86_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN)
+vgpreload_core_x86_darwin_so_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC)
+vgpreload_core_x86_darwin_so_LDFLAGS = $(PRELOAD_LDFLAGS_X86_DARWIN)

-#----------------------------------------------------------------------------
-# General stuff
-#----------------------------------------------------------------------------
+vgpreload_core_amd64_darwin_so_SOURCES = $(VGPRELOAD_CORE_SOURCES_COMMON)
+vgpreload_core_amd64_darwin_so_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN)
+vgpreload_core_amd64_darwin_so_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC)
+vgpreload_core_amd64_darwin_so_LDFLAGS = $(PRELOAD_LDFLAGS_AMD64_DARWIN)

 all-local:
inplace-noinst_PROGRAMS inplace-noinst_DSYMS @@ -530,11 +569,6 @@ $(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" clean rm -f $(mach_srcs) $(mach_server_srcs) $(mach_hdrs) -LIBVEX = libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a -if VGCONF_HAVE_PLATFORM_SEC -LIBVEX += libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a -endif - # Nb: The loop installs the libvex library for possible use by standalone # tools. install-exec-local: install-noinst_PROGRAMS install-noinst_DSYMS Modified: trunk/drd/tests/Makefile.am =================================================================== --- trunk/drd/tests/Makefile.am 2009-06-02 05:27:07 UTC (rev 10203) +++ trunk/drd/tests/Makefile.am 2009-06-02 06:54:57 UTC (rev 10204) @@ -241,7 +241,8 @@ sem_as_mutex \ sigalrm \ thread_name \ - trylock + trylock \ + tsan_unittest if HAVE_BOOST_1_35 check_PROGRAMS += boost_thread @@ -271,9 +272,6 @@ check_PROGRAMS += qt4_mutex qt4_rwlock qt4_semaphore endif -if ! VGCONF_OS_IS_DARWIN -check_PROGRAMS += tsan_unittest -endif AM_CFLAGS += $(AM_FLAG_M3264_PRI) @FLAG_W_EXTRA@ -Wno-inline -Wno-unused-parameter AM_CXXFLAGS += $(AM_FLAG_M3264_PRI) @FLAG_W_EXTRA@ -Wno-inline -Wno-unused-parameter |
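A note for readers tracing variables like `libreplacemalloc_toolpreload_x86_linux_a_SOURCES` back to targets like `libreplacemalloc_toolpreload-x86-linux.a` in the diffs above: automake canonicalizes a target's file name into its per-target variable prefix by replacing characters such as `-` and `.` with `_`. A minimal sketch of that convention, using a hypothetical `libdemo` library (not part of the Valgrind tree):

```makefile
# Hypothetical Makefile.am fragment. The archive is named libdemo-x86-linux.a;
# automake derives the variable prefix libdemo_x86_linux_a_* from it by
# canonicalizing '-' and '.' to '_'.
pkglib_LIBRARIES = libdemo-x86-linux.a

libdemo_x86_linux_a_SOURCES = demo.c
libdemo_x86_linux_a_CPPFLAGS = $(AM_CPPFLAGS)
libdemo_x86_linux_a_CFLAGS = $(AM_CFLAGS) -fpic
```

The same canonicalization explains the `vgpreload_core_*_so_*` variables above: the `.so` targets declared in `noinst_PROGRAMS` become `*_so_` variable prefixes.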
From: <sv...@va...> - 2009-06-02 05:27:13
Author: njn
Date: 2009-06-02 06:27:07 +0100 (Tue, 02 Jun 2009)
New Revision: 10203

Log:
Move some stuff around; no functional change.

Modified:
   trunk/coregrind/Makefile.am

Modified: trunk/coregrind/Makefile.am
===================================================================
--- trunk/coregrind/Makefile.am	2009-06-02 05:19:54 UTC (rev 10202)
+++ trunk/coregrind/Makefile.am	2009-06-02 05:27:07 UTC (rev 10203)
@@ -7,6 +7,10 @@
 include $(top_srcdir)/Makefile.flags.am
 include $(top_srcdir)/Makefile.core-tool.am
 
+#----------------------------------------------------------------------------
+# Basics, flags
+#----------------------------------------------------------------------------
+
 AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@ += \
 	-I$(top_srcdir)/coregrind \
 	-DVG_LIBDIR="\"$(valdir)"\" \
@@ -81,6 +85,21 @@
 	m_debuglog.c
 endif
 
+valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+
+no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
+no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
+no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
+no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
+no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
+
+#----------------------------------------------------------------------------
+# Darwin Mach stuff
+#----------------------------------------------------------------------------
+
 # Mach RPC interface definitions
 # Here are some more .defs files that are not used, but could be in the
 # future:
@@ -116,17 +135,6 @@
 $(mach_srcs) $(mach_hdrs): $(mach_files)
 	(cd m_mach && mig $(mach_files))
 
-valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
-
-no_op_client_for_valgrind_SOURCES = no_op_client_for_valgrind.c
-no_op_client_for_valgrind_CPPFLAGS = $(AM_CPPFLAGS_PRI)
-no_op_client_for_valgrind_CFLAGS = $(AM_CFLAGS_PRI)
-no_op_client_for_valgrind_CCASFLAGS = $(AM_CCASFLAGS_PRI)
-no_op_client_for_valgrind_LDFLAGS = $(AM_CFLAGS_PRI)
-
 #----------------------------------------------------------------------------
 # Headers
 #----------------------------------------------------------------------------
@@ -217,6 +225,10 @@
 	m_syswrap/priv_syswrap-main.h \
 	m_ume/priv_ume.h
 
+#----------------------------------------------------------------------------
+# libcoregrind_<platform>.so
+#----------------------------------------------------------------------------
+
 BUILT_SOURCES =
 CLEANFILES =
 if VGCONF_OS_IS_DARWIN
@@ -224,10 +236,6 @@
 CLEANFILES += $(COREGRIND_DARWIN_BUILT_SOURCES)
 endif
 
-#----------------------------------------------------------------------------
-# libcoregrind_<platform>.so
-#----------------------------------------------------------------------------
-
 COREGRIND_SOURCES_COMMON = \
 	m_commandline.c \
 	m_clientstate.c \
@@ -432,6 +440,28 @@
 libcoregrind_amd64_darwin_a_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN)
 libcoregrind_amd64_darwin_a_CCASFLAGS = $(AM_CCASFLAGS_AMD64_DARWIN)
 
+
+m_dispatch/dispatch-x86-linux.S: libvex_guest_offsets.h
+m_dispatch/dispatch-amd64-linux.S: libvex_guest_offsets.h
+m_dispatch/dispatch-ppc32-linux.S: libvex_guest_offsets.h
+m_dispatch/dispatch-ppc64-linux.S: libvex_guest_offsets.h
+m_dispatch/dispatch-ppc32-aix5.S: libvex_guest_offsets.h
+m_dispatch/dispatch-ppc64-aix5.S: libvex_guest_offsets.h
+m_dispatch/dispatch-x86-darwin.S: libvex_guest_offsets.h
+m_dispatch/dispatch-amd64-darwin.S: libvex_guest_offsets.h
+m_syswrap/syscall-x86-linux.S: libvex_guest_offsets.h
+m_syswrap/syscall-amd64-linux.S: libvex_guest_offsets.h
+m_syswrap/syscall-ppc32-linux.S: libvex_guest_offsets.h
+m_syswrap/syscall-ppc64-linux.S: libvex_guest_offsets.h
+m_syswrap/syscall-ppc32-aix5.S: libvex_guest_offsets.h
+m_syswrap/syscall-ppc64-aix5.S: libvex_guest_offsets.h
+m_syswrap/syscall-x86-darwin.S: libvex_guest_offsets.h
+m_syswrap/syscall-amd64-darwin.S: libvex_guest_offsets.h
+m_syswrap/syswrap-main.c: libvex_guest_offsets.h
+
+libvex_guest_offsets.h:
+	$(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" pub/libvex_guest_offsets.h
+
 #----------------------------------------------------------------------------
 # libreplacemalloc_toolpreload_<platform>.so
 #----------------------------------------------------------------------------
@@ -460,27 +490,6 @@
 libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
 libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru
 
-m_dispatch/dispatch-x86-linux.S: libvex_guest_offsets.h
-m_dispatch/dispatch-amd64-linux.S: libvex_guest_offsets.h
-m_dispatch/dispatch-ppc32-linux.S: libvex_guest_offsets.h
-m_dispatch/dispatch-ppc64-linux.S: libvex_guest_offsets.h
-m_dispatch/dispatch-ppc32-aix5.S: libvex_guest_offsets.h
-m_dispatch/dispatch-ppc64-aix5.S: libvex_guest_offsets.h
-m_dispatch/dispatch-x86-darwin.S: libvex_guest_offsets.h
-m_dispatch/dispatch-amd64-darwin.S: libvex_guest_offsets.h
-m_syswrap/syscall-x86-linux.S: libvex_guest_offsets.h
-m_syswrap/syscall-amd64-linux.S: libvex_guest_offsets.h
-m_syswrap/syscall-ppc32-linux.S: libvex_guest_offsets.h
-m_syswrap/syscall-ppc64-linux.S: libvex_guest_offsets.h
-m_syswrap/syscall-ppc32-aix5.S: libvex_guest_offsets.h
-m_syswrap/syscall-ppc64-aix5.S: libvex_guest_offsets.h
-m_syswrap/syscall-x86-darwin.S: libvex_guest_offsets.h
-m_syswrap/syscall-amd64-darwin.S: libvex_guest_offsets.h
-m_syswrap/syswrap-main.c: libvex_guest_offsets.h
-
-libvex_guest_offsets.h:
-	$(MAKE) -C @VEX_DIR@ CC="$(CC)" AR="$(AR)" pub/libvex_guest_offsets.h
-
 #----------------------------------------------------------------------------
 # vgpreload_core_<platform>.so
 #----------------------------------------------------------------------------
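The `libvex_guest_offsets.h` block that this commit moves illustrates a standard make idiom: each assembly or C file that includes a generated header is given an explicit prerequisite on it, and the header's own rule delegates to the subproject's makefile, forwarding the toolchain. A stripped-down sketch of the idiom, with illustrative file names rather than the real Valgrind ones:

```makefile
# Sketch: explicit prerequisites on a generated header prevent parallel
# builds from compiling consumers before the header exists.
dispatch.S: generated_offsets.h
syscall.S: generated_offsets.h
main.c: generated_offsets.h

# The header is produced by recursing into the subproject's own build,
# passing CC/AR through so both builds use a consistent toolchain.
generated_offsets.h:
	$(MAKE) -C subproj CC="$(CC)" AR="$(AR)" generated_offsets.h
```

Listing every consumer by hand is verbose but keeps the dependency graph correct without relying on automatic header dependency tracking, which does not cover `.S` files preprocessed by the assembler driver.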
From: <sv...@va...> - 2009-06-02 05:20:01
Author: njn
Date: 2009-06-02 06:19:54 +0100 (Tue, 02 Jun 2009)
New Revision: 10202

Log:
Avoid repetitive cut+paste code for libreplacemalloc_toolpreload.

Modified:
   trunk/coregrind/Makefile.am

Modified: trunk/coregrind/Makefile.am
===================================================================
--- trunk/coregrind/Makefile.am	2009-06-02 05:19:21 UTC (rev 10201)
+++ trunk/coregrind/Makefile.am	2009-06-02 05:19:54 UTC (rev 10202)
@@ -25,35 +25,35 @@
 pkglib_LIBRARIES =
 if VGCONF_PLATFORMS_INCLUDE_X86_LINUX
-pkglib_LIBRARIES += libcoregrind-x86-linux.a libreplacemalloc_toolpreload-x86-linux.a
+pkglib_LIBRARIES += libcoregrind-x86-linux.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_AMD64_LINUX
-pkglib_LIBRARIES += libcoregrind-amd64-linux.a libreplacemalloc_toolpreload-amd64-linux.a
+pkglib_LIBRARIES += libcoregrind-amd64-linux.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_PPC32_LINUX
-pkglib_LIBRARIES += libcoregrind-ppc32-linux.a libreplacemalloc_toolpreload-ppc32-linux.a
+pkglib_LIBRARIES += libcoregrind-ppc32-linux.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_PPC64_LINUX
-pkglib_LIBRARIES += libcoregrind-ppc64-linux.a libreplacemalloc_toolpreload-ppc64-linux.a
+pkglib_LIBRARIES += libcoregrind-ppc64-linux.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_PPC32_AIX5
-pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a libreplacemalloc_toolpreload-ppc32-aix5.a
+pkglib_LIBRARIES += libcoregrind-ppc32-aix5.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_PPC64_AIX5
-pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a libreplacemalloc_toolpreload-ppc64-aix5.a
+pkglib_LIBRARIES += libcoregrind-ppc64-aix5.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_X86_DARWIN
-pkglib_LIBRARIES += libcoregrind-x86-darwin.a libreplacemalloc_toolpreload-x86-darwin.a
+pkglib_LIBRARIES += libcoregrind-x86-darwin.a
 endif
 
 if VGCONF_PLATFORMS_INCLUDE_AMD64_DARWIN
-pkglib_LIBRARIES += libcoregrind-amd64-darwin.a libreplacemalloc_toolpreload-amd64-darwin.a
+pkglib_LIBRARIES += libcoregrind-amd64-darwin.a
 endif
 
 #----------------------------------------------------------------------------
@@ -436,40 +436,30 @@
 # libreplacemalloc_toolpreload_<platform>.so
 #----------------------------------------------------------------------------
 
-libreplacemalloc_toolpreload_x86_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_x86_linux_a_CPPFLAGS = $(AM_CPPFLAGS_X86_LINUX)
-libreplacemalloc_toolpreload_x86_linux_a_CFLAGS = $(AM_CFLAGS_X86_LINUX) $(AM_CFLAGS_PIC)
+pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
+if VGCONF_HAVE_PLATFORM_SEC
+pkglib_LIBRARIES += libreplacemalloc_toolpreload-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
+endif
 
-libreplacemalloc_toolpreload_amd64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_amd64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_LINUX)
-libreplacemalloc_toolpreload_amd64_linux_a_CFLAGS = $(AM_CFLAGS_AMD64_LINUX) $(AM_CFLAGS_PIC)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_SOURCES = \
+	m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
+if VGCONF_HAVE_PLATFORM_SEC
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES = \
+	m_replacemalloc/vg_replace_malloc.c
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS = \
+	$(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@)
+libreplacemalloc_toolpreload_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS = \
+	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
+endif
 
-libreplacemalloc_toolpreload_ppc32_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc32_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_LINUX)
-libreplacemalloc_toolpreload_ppc32_linux_a_CFLAGS = $(AM_CFLAGS_PPC32_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_ppc64_linux_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc64_linux_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_LINUX)
-libreplacemalloc_toolpreload_ppc64_linux_a_CFLAGS = $(AM_CFLAGS_PPC64_LINUX) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_ppc32_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc32_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC32_AIX5)
-libreplacemalloc_toolpreload_ppc32_aix5_a_CFLAGS = $(AM_CFLAGS_PPC32_AIX5) $(AM_CFLAGS_PIC)
+# Special AR for AIX.
 libreplacemalloc_toolpreload_ppc32_aix5_a_AR = $(AR) -X32 cru
-
-libreplacemalloc_toolpreload_ppc64_aix5_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_ppc64_aix5_a_CPPFLAGS = $(AM_CPPFLAGS_PPC64_AIX5)
-libreplacemalloc_toolpreload_ppc64_aix5_a_CFLAGS = $(AM_CFLAGS_PPC64_AIX5) $(AM_CFLAGS_PIC)
 libreplacemalloc_toolpreload_ppc64_aix5_a_AR = $(AR) -X64 cru
 
-libreplacemalloc_toolpreload_x86_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_x86_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_X86_DARWIN)
-libreplacemalloc_toolpreload_x86_darwin_a_CFLAGS = $(AM_CFLAGS_X86_DARWIN) $(AM_CFLAGS_PIC)
-
-libreplacemalloc_toolpreload_amd64_darwin_a_SOURCES = m_replacemalloc/vg_replace_malloc.c
-libreplacemalloc_toolpreload_amd64_darwin_a_CPPFLAGS = $(AM_CPPFLAGS_AMD64_DARWIN)
-libreplacemalloc_toolpreload_amd64_darwin_a_CFLAGS = $(AM_CFLAGS_AMD64_DARWIN) $(AM_CFLAGS_PIC)
-
 m_dispatch/dispatch-x86-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-amd64-linux.S: libvex_guest_offsets.h
 m_dispatch/dispatch-ppc32-linux.S: libvex_guest_offsets.h
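The refactoring in r10202 collapses eight hand-copied per-platform stanzas into one pair parameterized by configure-time substitutions: `@VGCONF_ARCH_PRI@`/`@VGCONF_OS@` name the primary platform, and the `VGCONF_HAVE_PLATFORM_SEC` automake conditional guards an optional secondary one. The shape of the pattern, reduced to a hypothetical `libdemo` library for illustration:

```makefile
# Sketch of Valgrind's primary/secondary platform pattern. configure
# substitutes @VGCONF_ARCH_PRI@, @VGCONF_ARCH_SEC@, @VGCONF_OS@ and the
# *_CAPS names, so one definition serves every platform combination.
pkglib_LIBRARIES += libdemo-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
if VGCONF_HAVE_PLATFORM_SEC
pkglib_LIBRARIES += libdemo-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
endif

libdemo_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_SOURCES = demo.c
libdemo_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS = \
	$(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) $(AM_CFLAGS_PIC)
if VGCONF_HAVE_PLATFORM_SEC
libdemo_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES = demo.c
libdemo_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS = \
	$(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) $(AM_CFLAGS_PIC)
endif
```

Platform-specific exceptions that cannot be parameterized this way, such as the AIX `-X32`/`-X64` archiver flags, stay as explicit per-platform lines, which is exactly what the diff does with the `*_aix5_a_AR` assignments.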