From: <sv...@va...> - 2005-12-22 23:13:31
Author: sewardj
Date: 2005-12-22 23:13:27 +0000 (Thu, 22 Dec 2005)
New Revision: 5414
Log:
Use rt_sigprocmask, and check for errors correctly. (Not yet done: amd64-linux).
Modified:
trunk/coregrind/m_syswrap/syscall-ppc32-linux.S
Modified: trunk/coregrind/m_syswrap/syscall-ppc32-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-ppc32-linux.S 2005-12-22 22:38:08 UTC (rev 5413)
+++ trunk/coregrind/m_syswrap/syscall-ppc32-linux.S 2005-12-22 23:13:27 UTC (rev 5414)
@@ -1,7 +1,7 @@

-##--------------------------------------------------------------------##
-##--- Support for doing system calls. syscall-ppc32-linux.S ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- Support for doing system calls. syscall-ppc32-linux.S ---*/
+/*--------------------------------------------------------------------*/

/*
This file is part of Valgrind, a dynamic binary instrumentation
@@ -50,9 +50,10 @@
back to regs->m_gpr[3]/m_xer/m_result on completion.

Returns 0 if the syscall was successfully called (even if the
- syscall itself failed), or a -ve error code if one of the
- sigprocmasks failed (there's no way to determine which one
- failed).
+ syscall itself failed), or a nonzero error code in the lowest
+ 8 bits if one of the sigprocmasks failed (there's no way to
+ determine which one failed). And there's no obvious way to
+ recover from that either, but nevertheless we want to know.

VG_(fixup_guest_state_after_syscall_interrupted) does the
thread state fixup in the case where we were interrupted by a
@@ -60,7 +61,7 @@

Prototype:

- Int ML_(do_syscall_for_client_WRK)(
+ UWord ML_(do_syscall_for_client_WRK)(
Int syscallno, // r3
void* guest_state, // r4
const vki_sigset_t *sysmask, // r5
@@ -85,10 +86,11 @@

/* set the signal mask for doing the system call */
/* set up for sigprocmask(SIG_SETMASK, sysmask, postmask) */
-1: li 0,__NR_sigprocmask
+1: li 0,__NR_rt_sigprocmask
   li 3,VKI_SIG_SETMASK
   mr 4,5
   mr 5,6
+  mr 6,7
   sc /* set the mask */
   bso 7f /* if the sigprocmask fails */

@@ -111,15 +113,15 @@

/* block signals again */
/* set up for sigprocmask(SIG_SETMASK, postmask, NULL) */
-4: li 0,__NR_sigprocmask
+4: li 0,__NR_rt_sigprocmask
   li 3,VKI_SIG_SETMASK
   mr 4,29
   li 5,0
   mr 6,28
   sc /* set the mask */
   bso 7f /* if the sigprocmask fails */
-
   /* now safe from signals */
+  li 3,0 /* SUCCESS */

/* pop off stack frame */
5: lwz 28,16(1)
@@ -129,8 +131,8 @@
   addi 1,1,32
   blr

-  /* failure: return -ve error code */
-7: neg 3,3
+  /* failure: return 0x8000 | error code */
+7: ori 3,3,0x8000 /* FAILURE -- ensure return value is nonzero */
   b 5b

.section .rodata
@@ -154,6 +156,6 @@
/* Let the linker know we don't need an executable stack */
.section .note.GNU-stack,"",@progbits

-##--------------------------------------------------------------------##
-##--- end ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- end ---*/
+/*--------------------------------------------------------------------*/
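The comment change in this revision describes the new return convention for ML_(do_syscall_for_client_WRK): 0 on success, otherwise an error code made nonzero by setting bit 15 (the `ori 3,3,0x8000` in the failure path). A minimal C sketch of that encoding and how a caller might decode it; the function names here are illustrative, not part of the Valgrind source:

```c
#include <assert.h>

/* Encode a sigprocmask failure the way the patched assembly does:
   OR the (positive) error code with 0x8000 so the result is nonzero
   and distinguishable from the 0 returned on success. */
static unsigned long encode_syscall_wrk_failure(unsigned long err)
{
    return err | 0x8000UL;
}

/* A caller can test bit 15 to detect failure... */
static int wrk_failed(unsigned long ret)
{
    return (ret & 0x8000UL) != 0;
}

/* ...and mask it off to recover the low bits of the error code. */
static unsigned long wrk_error_code(unsigned long ret)
{
    return ret & 0x7FFFUL;
}
```

The point of the scheme, per the comment, is only that the value is nonzero and carries the error in its low bits; nothing attempts to recover, it just wants to know.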
From: <sv...@va...> - 2005-12-22 22:38:15
Author: njn
Date: 2005-12-22 22:38:08 +0000 (Thu, 22 Dec 2005)
New Revision: 5413
Log:
Added a GC mechanism for removing old and stale nodes from the secondary GC
table. Also increased the size of each node in the sec-V-bit table to 16
bytes, which helps for programs that have long ranges of contiguous
partially-defined bytes (eg. perf/bz2).
Modified:
branches/COMPVBITS/memcheck/mc_main.c
Modified: branches/COMPVBITS/memcheck/mc_main.c
===================================================================
--- branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 20:24:12 UTC (rev 5412)
+++ branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 22:38:08 UTC (rev 5413)
@@ -476,17 +476,101 @@
   }
}

+/* --------------- Fundamental functions --------------- */
+
+static inline
+void insert_vabits8_into_vabits32 ( Addr a, UChar vabits8, UChar* vabits32 )
+{
+   UInt shift = (a & 3) << 1;          // shift by 0, 2, 4, or 6
+   *vabits32 &= ~(0x3 << shift);       // mask out the two old bits
+   *vabits32 |= (vabits8 << shift);    // mask in the two new bits
+}
+
+static inline
+void insert_vabits16_into_vabits32 ( Addr a, UChar vabits16, UChar* vabits32 )
+{
+   UInt shift;
+   tl_assert(VG_IS_2_ALIGNED(a));      // Must be 2-aligned
+   shift = (a & 2) << 1;               // shift by 0 or 4
+   *vabits32 &= ~(0xf << shift);       // mask out the four old bits
+   *vabits32 |= (vabits16 << shift);   // mask in the four new bits
+}
+
+static inline
+UChar extract_vabits8_from_vabits32 ( Addr a, UChar vabits32 )
+{
+   UInt shift = (a & 3) << 1;          // shift by 0, 2, 4, or 6
+   vabits32 >>= shift;                 // shift the two bits to the bottom
+   return 0x3 & vabits32;              // mask out the rest
+}
+
+static inline
+UChar extract_vabits16_from_vabits32 ( Addr a, UChar vabits32 )
+{
+   UInt shift;
+   tl_assert(VG_IS_2_ALIGNED(a));      // Must be 2-aligned
+   shift = (a & 2) << 1;               // shift by 0 or 4
+   vabits32 >>= shift;                 // shift the four bits to the bottom
+   return 0xf & vabits32;              // mask out the rest
+}
+
+// Note that these two are only used in slow cases. The fast cases do
+// clever things like combine the auxmap check (in
+// get_secmap_{read,writ}able) with alignment checks.
+
+static inline
+void set_vabits8 ( Addr a, UChar vabits8 )
+{
+   SecMap* sm = get_secmap_writable(a);
+   UWord sm_off = SM_OFF(a);
+   insert_vabits8_into_vabits32( a, vabits8, &(sm->vabits32[sm_off]) );
+}
+
+static inline
+UChar get_vabits8 ( Addr a )
+{
+   SecMap* sm = get_secmap_readable(a);
+   UWord sm_off = SM_OFF(a);
+   UChar vabits32 = sm->vabits32[sm_off];
+   return extract_vabits8_from_vabits32(a, vabits32);
+}
+
+
/* --------------- Secondary V bit table ------------ */

-// Note: the nodes in this table can become stale. Eg. if you write a
-// partially defined byte (PDB), then overwrite the same address with a
-// fully defined byte, the sec-V-bit node will not necessarily be removed.
-// This is because checking for whether removal is necessary would slow down
-// the fast paths. Hopefully this is not a problem. If it becomes a
-// problem, we may have to consider doing a clean-up pass every so often.
+// This table holds the full V bit pattern for partially-defined bytes
+// (PDBs) that are represented by VA_BITS8_OTHER in the main shadow memory.
+//
+// Note: the nodes in this table can become stale. Eg. if you write a PDB,
+// then overwrite the same address with a fully defined byte, the sec-V-bit
+// node will not necessarily be removed. This is because checking for
+// whether removal is necessary would slow down the fast paths.
+//
+// To avoid the stale nodes building up too much, we periodically (once the
+// table reaches a certain size) garbage collect (GC) the table by
+// traversing it and evicting any "sufficiently stale" nodes, ie. nodes that
+// are stale and haven't been touched for a certain number of collections.
+// If more than a certain proportion of nodes survived, we increase the
+// table size so that GCs occur less often.
+//
+// (So this a bit different to a traditional GC, where you definitely want
+// to remove any dead nodes. It's more like we have a resizable cache and
+// we're trying to find the right balance how many elements to evict and how
+// big to make the cache.)
+//
+// This policy is designed to avoid bad table bloat in the worst case where
+// a program creates huge numbers of stale PDBs -- we would get this bloat
+// if we had no GC -- while handling well the case where a node becomes
+// stale but shortly afterwards is rewritten with a PDB and so becomes
+// non-stale again (which happens quite often, eg. in perf/bz2). If we just
+// remove all stale nodes as soon as possible, we just end up re-adding a
+// lot of them in later again. The "sufficiently stale" approach avoids
+// this. (If a program has many live PDBs, performance will just suck,
+// there's no way around that.)

static OSet* secVBitTable;

+// Stats
static ULong sec_vbits_new_nodes = 0;
static ULong sec_vbits_updates = 0;

@@ -497,22 +581,113 @@
// the number of total nodes. In practice sometimes they are clustered (eg.
// perf/bz2 repeatedly writes then reads more than 20,000 in a contiguous
// row), but often not. So we choose something intermediate.
-#define BYTES_PER_SEC_VBIT_NODE sizeof(Addr)
+#define BYTES_PER_SEC_VBIT_NODE 16

+// We make the table bigger if more than this many nodes survive a GC.
+#define MAX_SURVIVOR_PROPORTION 0.5
+
+// Each time we make the table bigger, we increase it by this much.
+#define TABLE_GROWTH_FACTOR 2
+
+// This defines "sufficiently stale" -- any node that hasn't been touched in
+// this many GCs will be removed.
+#define MAX_STALE_AGE 2
+
+// We GC the table when it gets this many nodes in it, ie. it's effectively
+// the table size. It can change.
+static Int secVBitLimit = 1024;
+
+// The number of GCs done, used to age sec-V-bit nodes for eviction.
+// Because it's unsigned, wrapping doesn't matter -- the right answer will
+// come out anyway.
+static UInt GCs_done = 0;
+
typedef
   struct {
      Addr a;
      UChar vbits8[BYTES_PER_SEC_VBIT_NODE];
+     UInt last_touched;
   }
   SecVBitNode;

+static OSet* createSecVBitTable(void)
+{
+   return VG_(OSet_Create)( offsetof(SecVBitNode, a),
+                            NULL, // use fast comparisons
+                            VG_(malloc), VG_(free) );
+}
+
+static void gcSecVBitTable(void)
+{
+   OSet* secVBitTable2;
+   SecVBitNode* n;
+   Int i, n_nodes = 0, n_survivors = 0;
+
+   GCs_done++;
+
+   // Create the new table.
+   secVBitTable2 = createSecVBitTable();
+
+   // Traverse the table, moving fresh nodes into the new table.
+   VG_(OSet_ResetIter)(secVBitTable);
+   while ( (n = VG_(OSet_Next)(secVBitTable)) ) {
+      Bool keep = False;
+      if ( (GCs_done - n->last_touched) <= MAX_STALE_AGE ) {
+         // Keep node if it's been touched recently enough (regardless of
+         // freshness/staleness).
+         keep = True;
+      } else {
+         // Keep node if any of its bytes are non-stale. Using
+         // get_vabits8() for the lookup is not very efficient, but I don't
+         // think it matters.
+         for (i = 0; i < BYTES_PER_SEC_VBIT_NODE; i++) {
+            if (VA_BITS8_OTHER == get_vabits8(n->a + i)) {
+               keep = True; // Found a non-stale byte, so keep
+               break;
+            }
+         }
+      }
+
+      if ( keep ) {
+         // Insert a copy of the node into the new table.
+         SecVBitNode* n2 =
+            VG_(OSet_AllocNode)(secVBitTable2, sizeof(SecVBitNode));
+         *n2 = *n;
+         VG_(OSet_Insert)(secVBitTable2, n2);
+      }
+   }
+
+   // Get the before and after sizes.
+   n_nodes = VG_(OSet_Size)(secVBitTable);
+   n_survivors = VG_(OSet_Size)(secVBitTable2);
+
+   // Destroy the old table, and put the new one in its place.
+   VG_(OSet_Destroy)(secVBitTable);
+   secVBitTable = secVBitTable2;
+
+   if (VG_(clo_verbosity) > 1) {
+      Char percbuf[6];
+      VG_(percentify)(n_survivors, n_nodes, 1, 6, percbuf);
+      VG_(message)(Vg_DebugMsg, "memcheck GC: %d nodes, %d survivors (%s)",
+                   n_nodes, n_survivors, percbuf);
+   }
+
+   // Increase table size if necessary.
+   if (n_survivors > (secVBitLimit * MAX_SURVIVOR_PROPORTION)) {
+      secVBitLimit *= TABLE_GROWTH_FACTOR;
+      if (VG_(clo_verbosity) > 1)
+         VG_(message)(Vg_DebugMsg, "memcheck GC: increase table size to %d",
+                      secVBitLimit);
+   }
+}
+
static UWord get_sec_vbits8(Addr a)
{
   Addr aAligned = VG_ROUNDDN(a, BYTES_PER_SEC_VBIT_NODE);
   Int amod = a % BYTES_PER_SEC_VBIT_NODE;
   SecVBitNode* n = VG_(OSet_Lookup)(secVBitTable, &aAligned);
   UChar vbits8;
-  tl_assert(n);
+  tl_assert2(n, "get_sec_vbits8: no node for address %p (%p)\n", aAligned, a);
   // Shouldn't be fully defined or fully undefined -- those cases shouldn't
   // make it to the secondary V bits table.
   vbits8 = n->vbits8[amod];
@@ -529,7 +704,8 @@
   // make it to the secondary V bits table.
   tl_assert(V_BITS8_VALID != vbits8 && V_BITS8_INVALID != vbits8);
   if (n) {
-     n->vbits8[amod] = vbits8; // update
+     n->vbits8[amod] = vbits8; // update
+     n->last_touched = GCs_done;
      sec_vbits_updates++;
   } else {
      // New node: assign the specific byte, make the rest invalid (they
@@ -540,39 +716,20 @@
         n->vbits8[i] = V_BITS8_INVALID;
      }
      n->vbits8[amod] = vbits8;
+     n->last_touched = GCs_done;
+
+     // Do a table GC if necessary. Nb: do this before inserting the new
+     // node, to avoid erroneously GC'ing the new node.
+     if (secVBitLimit == VG_(OSet_Size)(secVBitTable)) {
+        gcSecVBitTable();
+     }
+
+     // Insert the new node.
      VG_(OSet_Insert)(secVBitTable, n);
      sec_vbits_new_nodes++;
   }
}

-// Remove the node if its V bytes (other than the one for 'a') are all fully
-// defined or fully undefined. We ignore the V byte for 'a' because it's
-// about to be overwritten with a fully defined or fully undefined value.
-__attribute__((unused))
-static void maybe_remove_sec_vbits8(Addr a)
-{
-   Addr aAligned = VG_ROUNDDN(a, BYTES_PER_SEC_VBIT_NODE);
-   Int i, amod = a % BYTES_PER_SEC_VBIT_NODE;
-   SecVBitNode* n = VG_(OSet_Lookup)(secVBitTable, &aAligned);
-   tl_assert(n);
-   for (i = 0; i < BYTES_PER_SEC_VBIT_NODE; i++) {
-      UChar vbits8 = n->vbits8[i];
-
-      // Ignore the V byte for 'a'.
-      if (i == amod)
-         continue;
-
-      // One of the other V bytes is still partially defined -- don't remove
-      // this entry from the table.
-      if (V_BITS8_VALID != vbits8 && V_BITS8_INVALID != vbits8)
-         return;
-   }
-   n = VG_(OSet_Remove)(secVBitTable, &aAligned);
-   VG_(OSet_FreeNode)(secVBitTable, n);
-   tl_assert(n);
-}
-
-
/* --------------- Endianness helpers --------------- */

/* Returns the offset in memory of the byteno-th most significant byte
@@ -582,67 +739,6 @@
   return bigendian ? (wordszB-1-byteno) : byteno;
}

-
-/* --------------- Fundamental functions --------------- */
-
-static INLINE
-void insert_vabits8_into_vabits32 ( Addr a, UChar vabits8, UChar* vabits32 )
-{
-   UInt shift = (a & 3) << 1;          // shift by 0, 2, 4, or 6
-   *vabits32 &= ~(0x3 << shift);       // mask out the two old bits
-   *vabits32 |= (vabits8 << shift);    // mask in the two new bits
-}
-
-static INLINE
-void insert_vabits16_into_vabits32 ( Addr a, UChar vabits16, UChar* vabits32 )
-{
-   UInt shift;
-   tl_assert(VG_IS_2_ALIGNED(a));      // Must be 2-aligned
-   shift = (a & 2) << 1;               // shift by 0 or 4
-   *vabits32 &= ~(0xf << shift);       // mask out the four old bits
-   *vabits32 |= (vabits16 << shift);   // mask in the four new bits
-}
-
-static INLINE
-UChar extract_vabits8_from_vabits32 ( Addr a, UChar vabits32 )
-{
-   UInt shift = (a & 3) << 1;          // shift by 0, 2, 4, or 6
-   vabits32 >>= shift;                 // shift the two bits to the bottom
-   return 0x3 & vabits32;              // mask out the rest
-}
-
-static INLINE
-UChar extract_vabits16_from_vabits32 ( Addr a, UChar vabits32 )
-{
-   UInt shift;
-   tl_assert(VG_IS_2_ALIGNED(a));      // Must be 2-aligned
-   shift = (a & 2) << 1;               // shift by 0 or 4
-   vabits32 >>= shift;                 // shift the four bits to the bottom
-   return 0xf & vabits32;              // mask out the rest
-}
-
-// Note that these two are only used in slow cases. The fast cases do
-// clever things like combine the auxmap check (in
-// get_secmap_{read,writ}able) with alignment checks.
-
-static INLINE
-void set_vabits8 ( Addr a, UChar vabits8 )
-{
-   SecMap* sm = get_secmap_writable(a);
-   UWord sm_off = SM_OFF(a);
-   insert_vabits8_into_vabits32( a, vabits8, &(sm->vabits32[sm_off]) );
-}
-
-static INLINE
-UChar get_vabits8 ( Addr a )
-{
-   SecMap* sm = get_secmap_readable(a);
-   UWord sm_off = SM_OFF(a);
-   UChar vabits32 = sm->vabits32[sm_off];
-   return extract_vabits8_from_vabits32(a, vabits32);
-}
-
-
/* --------------- Load/store slow cases. --------------- */

// Forward declarations
@@ -735,25 +831,10 @@
            // Addressable. Convert in-register format to in-memory format.
            // Also remove any existing sec V bit entry for the byte if no
            // longer necessary.
-           //
-           // XXX: the calls to maybe_remove_sec_vbits8() are commented out
-           // because they slow things down a bit (eg. 10% for perf/bz2)
-           // and the space saving is quite small (eg. 1--2% reduction in the
-           // size of the sec-V-bit-table?)
-           if ( V_BITS8_VALID == vbits8 ) {
-//            if (VA_BITS8_OTHER == vabits8)
-//               maybe_remove_sec_vbits8(ai);
-              vabits8 = VA_BITS8_READABLE;
-
-           } else if ( V_BITS8_INVALID == vbits8 ) {
-//            if (VA_BITS8_OTHER == vabits8)
-//               maybe_remove_sec_vbits8(ai);
-              vabits8 = VA_BITS8_WRITABLE;
-
-           } else {
-              vabits8 = VA_BITS8_OTHER;
-              set_sec_vbits8(ai, vbits8);
-           }
+           if      ( V_BITS8_VALID   == vbits8 ) { vabits8 = VA_BITS8_READABLE; }
+           else if ( V_BITS8_INVALID == vbits8 ) { vabits8 = VA_BITS8_WRITABLE; }
+           else                                  { vabits8 = VA_BITS8_OTHER;
+                                                   set_sec_vbits8(ai, vbits8); }
            set_vabits8(ai, vabits8);

         } else {
@@ -3274,9 +3355,7 @@
      no ... these are statically initialised */

   /* Secondary V bit table */
-  secVBitTable = VG_(OSet_Create)( offsetof(SecVBitNode, a),
-                                   NULL, // use fast comparisons
-                                   VG_(malloc), VG_(free) );
+  secVBitTable = createSecVBitTable();
}
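The eviction rule in gcSecVBitTable() above keeps a node if it was touched within MAX_STALE_AGE collections, and otherwise only if some byte it covers is still partially defined. The age test can be sketched standalone; as the patch's comment on GCs_done notes, unsigned subtraction makes counter wrap-around harmless. Names mirror the patch, but this is an illustrative sketch, not the Valgrind code:

```c
#include <assert.h>

#define MAX_STALE_AGE 2u

/* Is a node last touched at GC number 'last_touched' still recent
   enough to keep, given 'gcs_done' collections so far?  The unsigned
   subtraction yields the correct age modulo 2^32 even if the GC
   counter has wrapped around. */
static int touched_recently(unsigned gcs_done, unsigned last_touched)
{
    return (gcs_done - last_touched) <= MAX_STALE_AGE;
}
```

A node failing this test is not evicted unconditionally; the GC still scans its bytes and keeps it if any are VA_BITS8_OTHER, which is what makes the policy cache-like rather than a strict dead-node collector.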
From: <sv...@va...> - 2005-12-22 20:24:20
Author: sewardj
Date: 2005-12-22 20:24:12 +0000 (Thu, 22 Dec 2005)
New Revision: 5412
Log:
Add enough syscalls to run bzip2 on ppc64-linux.
Modified:
trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
trunk/coregrind/vki_unistd-ppc64-linux.h
Modified: trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 20:16:00 UTC (rev 5411)
+++ trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 20:24:12 UTC (rev 5412)
@@ -1173,22 +1173,22 @@
// _____(__NR_restart_syscall, sys_restart_syscall), // 0
   GENX_(__NR_exit, sys_exit), // 1
// _____(__NR_fork, sys_fork), // 2
-// _____(__NR_read, sys_read), // 3
+  GENXY(__NR_read, sys_read), // 3
   GENX_(__NR_write, sys_write), // 4

-// _____(__NR_open, sys_open), // 5
-// _____(__NR_close, sys_close), // 6
+  GENXY(__NR_open, sys_open), // 5
+  GENXY(__NR_close, sys_close), // 6
// _____(__NR_waitpid, sys_waitpid), // 7
// _____(__NR_creat, sys_creat), // 8
// _____(__NR_link, sys_link), // 9

-// _____(__NR_unlink, sys_unlink), // 10
+  GENX_(__NR_unlink, sys_unlink), // 10
// _____(__NR_execve, sys_execve), // 11
// _____(__NR_chdir, sys_chdir), // 12
// _____(__NR_time, sys_time), // 13
// _____(__NR_mknod, sys_mknod), // 14

-// _____(__NR_chmod, sys_chmod), // 15
+  GENX_(__NR_chmod, sys_chmod), // 15
// _____(__NR_lchown, sys_lchown), // 16
// _____(__NR_break, sys_break), // 17
// _____(__NR_oldstat, sys_oldstat), // 18
@@ -1206,7 +1206,7 @@
// _____(__NR_oldfstat, sys_oldfstat), // 28
// _____(__NR_pause, sys_pause), // 29

-// _____(__NR_utime, sys_utime), // 30
+  LINX_(__NR_utime, sys_utime), // 30
// _____(__NR_stty, sys_stty), // 31
// _____(__NR_gtty, sys_gtty), // 32
// _____(__NR_access, sys_access), // 33
@@ -1234,9 +1234,9 @@
// _____(__NR_acct, sys_acct), // 51
// _____(__NR_umount2, sys_umount2), // 52
// _____(__NR_lock, sys_lock), // 53
-// _____(__NR_ioctl, sys_ioctl), // 54
+  GENXY(__NR_ioctl, sys_ioctl), // 54

-// _____(__NR_fcntl, sys_fcntl), // 55
+  GENXY(__NR_fcntl, sys_fcntl), // 55
// _____(__NR_mpx, sys_mpx), // 56
// _____(__NR_setpgid, sys_setpgid), // 57
// _____(__NR_ulimit, sys_ulimit), // 58
@@ -1297,7 +1297,7 @@
// _____(__NR_setitimer, sys_setitimer), // 104

// _____(__NR_getitimer, sys_getitimer), // 105
-// _____(__NR_stat, sys_stat), // 106
+  GENXY(__NR_stat, sys_newstat), // 106
// _____(__NR_lstat, sys_lstat), // 107
   GENXY(__NR_fstat, sys_newfstat), // 108
// _____(__NR_olduname, sys_olduname), // 109
@@ -1338,7 +1338,7 @@
// _____(__NR_setfsuid, sys_setfsuid), // 138
// _____(__NR_setfsgid, sys_setfsgid), // 139

-// _____(__NR__llseek, sys__llseek), // 140
+  LINXY(__NR__llseek, sys_llseek), // 140
// _____(__NR_getdents, sys_getdents), // 141
// _____(__NR__newselect, sys__newselect), // 142
// _____(__NR_flock, sys_flock), // 143
@@ -1377,7 +1377,7 @@
// _____(__NR_getresgid, sys_getresgid), // 170
// _____(__NR_prctl, sys_prctl), // 171
// _____(__NR_rt_sigreturn, sys_rt_sigreturn), // 172
-// _____(__NR_rt_sigaction, sys_rt_sigaction), // 173
+  LINXY(__NR_rt_sigaction, sys_rt_sigaction), // 173
// _____(__NR_rt_sigprocmask, sys_rt_sigprocmask), // 174

// _____(__NR_rt_sigpending, sys_rt_sigpending), // 175
@@ -1387,7 +1387,7 @@
// _____(__NR_pread64, sys_pread64), // 179

// _____(__NR_pwrite64, sys_pwrite64), // 180
-// _____(__NR_chown, sys_chown), // 181
+  GENX_(__NR_chown, sys_chown), // 181
// _____(__NR_getcwd, sys_getcwd), // 182
// _____(__NR_capget, sys_capget), // 183
// _____(__NR_capset, sys_capset), // 184
@@ -1414,7 +1414,7 @@
// _____(__NR_multiplexer, sys_multiplexer), // 201
// _____(__NR_getdents64, sys_getdents64), // 202
// _____(__NR_pivot_root, sys_pivot_root), // 203
-// /* #define __NR_fcntl64 204 32bit only */
+  GENXY(__NR_fcntl64, sys_fcntl64), // 204 !!!!?? 32bit only */

// _____(__NR_madvise, sys_madvise), // 205
// _____(__NR_mincore, sys_mincore), // 206
Modified: trunk/coregrind/vki_unistd-ppc64-linux.h
===================================================================
--- trunk/coregrind/vki_unistd-ppc64-linux.h 2005-12-22 20:16:00 UTC (rev 5411)
+++ trunk/coregrind/vki_unistd-ppc64-linux.h 2005-12-22 20:24:12 UTC (rev 5412)
@@ -233,7 +233,7 @@
#define __NR_multiplexer 201
#define __NR_getdents64 202
#define __NR_pivot_root 203
-/* #define __NR_fcntl64 204 32bit only */
+#define __NR_fcntl64 204 /* ???!!! 32bit only */
#define __NR_madvise 205
#define __NR_mincore 206
#define __NR_gettid 207
From: <sv...@va...> - 2005-12-22 20:16:08
Author: sewardj
Date: 2005-12-22 20:16:00 +0000 (Thu, 22 Dec 2005)
New Revision: 5411
Log:
Properly return error codes resulting from sigprocmask failures.
Not yet done: amd64, ppc32.
Modified:
trunk/coregrind/m_syswrap/syscall-x86-linux.S
Modified: trunk/coregrind/m_syswrap/syscall-x86-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-x86-linux.S 2005-12-22 20:14:57 UTC (rev 5410)
+++ trunk/coregrind/m_syswrap/syscall-x86-linux.S 2005-12-22 20:16:00 UTC (rev 5411)
@@ -1,7 +1,7 @@

-##--------------------------------------------------------------------##
-##--- Support for doing system calls. syscall-x86-linux.S ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- Support for doing system calls. syscall-x86-linux.S ---*/
+/*--------------------------------------------------------------------*/

/*
This file is part of Valgrind, a dynamic binary instrumentation
@@ -51,9 +51,10 @@
back to regs->m_eax on completion.

Returns 0 if the syscall was successfully called (even if the
- syscall itself failed), or a -ve error code if one of the
- sigprocmasks failed (there's no way to determine which one
- failed).
+ syscall itself failed), or a nonzero error code in the lowest
+ 8 bits if one of the sigprocmasks failed (there's no way to
+ determine which one failed). And there's no obvious way to
+ recover from that either, but nevertheless we want to know.

VG_(fixup_guest_state_after_syscall_interrupted) does the
thread state fixup in the case where we were interrupted by a
@@ -61,7 +62,7 @@

Prototype:

- Int ML_(do_syscall_for_client_WRK)(
+ UWord ML_(do_syscall_for_client_WRK)(
   Int syscallno, // 0
   void* guest_state, // 4
   const vki_sigset_t *sysmask, // 8
@@ -94,7 +95,7 @@
   movl 16+FSZ(%esp), %esi
   int $0x80
   testl %eax, %eax
-  js 5f /* sigprocmask failed */
+  js 7f /* sigprocmask failed */

   movl 4+FSZ(%esp), %eax /* eax == ThreadState * */

@@ -122,16 +123,29 @@
   xorl %edx, %edx
   movl 16+FSZ(%esp), %esi
   int $0x80
+  testl %eax, %eax
+  js 7f /* sigprocmask failed */

5: /* now safe from signals */
-
+  movl $0, %eax /* SUCCESS */
   popl %ebp
   popl %ebx
   popl %edi
   popl %esi
-#undef FSZ
   ret

+7: /* failure: return 0x8000 | error code */
+  negl %eax
+  andl $0x7FFF, %eax
+  orl $0x8000, %eax
+  popl %ebp
+  popl %ebx
+  popl %edi
+  popl %esi
+  ret
+#undef FSZ
+
+
.section .rodata
/* export the ranges so that
   VG_(fixup_guest_state_after_syscall_interrupted) can do the
@@ -152,6 +166,6 @@
/* Let the linker know we don't need an executable stack */
.section .note.GNU-stack,"",@progbits

-##--------------------------------------------------------------------##
-##--- end ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- end ---*/
+/*--------------------------------------------------------------------*/
From: <sv...@va...> - 2005-12-22 20:15:07
Author: sewardj
Date: 2005-12-22 20:14:57 +0000 (Thu, 22 Dec 2005)
New Revision: 5410
Log:
Comment-only changes
Modified:
trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
Modified: trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 20:11:28 UTC (rev 5409)
+++ trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 20:14:57 UTC (rev 5410)
@@ -1,7 +1,7 @@

-##--------------------------------------------------------------------##
-##--- Support for doing system calls. syscall-ppc64-linux.S ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- Support for doing system calls. syscall-ppc64-linux.S ---*/
+/*--------------------------------------------------------------------*/

/*
This file is part of Valgrind, a dynamic binary instrumentation
@@ -50,7 +50,7 @@
back to regs->m_gpr[3]/m_xer/m_result on completion.

Returns 0 if the syscall was successfully called (even if the
- syscall itself failed), or an nonzero error code in the lowest
+ syscall itself failed), or a nonzero error code in the lowest
8 bits if one of the sigprocmasks failed (there's no way to
determine which one failed). And there's no obvious way to
recover from that either, but nevertheless we want to know.
@@ -139,7 +139,7 @@
   addi 1,1,80
   blr

-  /* failure: return -ve error code */
+  /* failure: return 0x8000 | error code */
7: ori 3,3,0x8000 /* FAILURE -- ensure return value is nonzero */
   b 5b

@@ -163,6 +163,6 @@
/* Let the linker know we don't need an executable stack */
.section .note.GNU-stack,"",@progbits

-##--------------------------------------------------------------------##
-##--- end ---##
-##--------------------------------------------------------------------##
+/*--------------------------------------------------------------------*/
+/*--- end ---*/
+/*--------------------------------------------------------------------*/
From: <sv...@va...> - 2005-12-22 20:11:33
Author: njn
Date: 2005-12-22 20:11:28 +0000 (Thu, 22 Dec 2005)
New Revision: 5409
Log:
Tweak stats gathering.
Modified:
branches/COMPVBITS/memcheck/mc_main.c
Modified: branches/COMPVBITS/memcheck/mc_main.c
===================================================================
--- branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 19:50:45 UTC (rev 5408)
+++ branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 20:11:28 UTC (rev 5409)
@@ -487,17 +487,16 @@

static OSet* secVBitTable;

-static ULong sec_vbits_bytes_allocd = 0;
-static ULong sec_vbits_bytes_freed = 0;
-static ULong sec_vbits_bytes_curr = 0;
-static ULong sec_vbits_bytes_peak = 0;
+static ULong sec_vbits_new_nodes = 0;
+static ULong sec_vbits_updates = 0;

-// sizeof(Addr) is the best value here.  We can go from 1 to sizeof(Addr)
-// for free -- it doesn't change the size of the SecVBitNode because of
-// padding.  If we make it larger, we have bigger nodes, but can possibly
-// fit more partially defined bytes in each node.  In practice it seems that
-// partially defined bytes are rarely clustered close to each other, so
-// going bigger than sizeof(Addr) does not save space.
+// This must be a power of two; this is checked in mc_pre_clo_init().
+// The size chosen here is a trade-off: if the nodes are bigger (ie. cover
+// a larger address range) they take more space but we can get multiple
+// partially-defined bytes in one if they are close to each other, reducing
+// the number of total nodes.  In practice sometimes they are clustered (eg.
+// perf/bz2 repeatedly writes then reads more than 20,000 in a contiguous
+// row), but often not.  So we choose something intermediate.
#define BYTES_PER_SEC_VBIT_NODE sizeof(Addr)

typedef
@@ -531,13 +530,10 @@
tl_assert(V_BITS8_VALID != vbits8 && V_BITS8_INVALID != vbits8);
if (n) {
n->vbits8[amod] = vbits8; // update
+ sec_vbits_updates++;
} else {
// New node: assign the specific byte, make the rest invalid (they
// should never be read as-is, but be cautious).
- sec_vbits_bytes_allocd += sizeof(SecVBitNode);
- sec_vbits_bytes_curr += sizeof(SecVBitNode);
- if (sec_vbits_bytes_curr > sec_vbits_bytes_peak)
- sec_vbits_bytes_peak = sec_vbits_bytes_curr;
n = VG_(OSet_AllocNode)(secVBitTable, sizeof(SecVBitNode));
n->a = aAligned;
for (i = 0; i < BYTES_PER_SEC_VBIT_NODE; i++) {
@@ -545,6 +541,7 @@
}
n->vbits8[amod] = vbits8;
VG_(OSet_Insert)(secVBitTable, n);
+ sec_vbits_new_nodes++;
}
}

@@ -572,8 +569,6 @@
}
n = VG_(OSet_Remove)(secVBitTable, &aAligned);
VG_(OSet_FreeNode)(secVBitTable, n);
- sec_vbits_bytes_freed += sizeof(SecVBitNode);
- sec_vbits_bytes_curr -= sizeof(SecVBitNode);
tl_assert(n);
}

@@ -3986,8 +3981,12 @@
n_accessible_dist * sizeof(SecMap) / (1024 * 1024) );

VG_(message)(Vg_DebugMsg,
- " memcheck: sec V bit entries: %d",
+ " memcheck: sec V bit nodes: %d",
VG_(OSet_Size)(secVBitTable) );
+ VG_(message)(Vg_DebugMsg,
+ " memcheck: set_sec_vbits8 calls: %llu (new: %llu, updates: %llu)",
+ sec_vbits_new_nodes + sec_vbits_updates,
+ sec_vbits_new_nodes, sec_vbits_updates );
}

if (0) {
@@ -4103,6 +4102,9 @@

// {LOADV,STOREV}[8421] will all fail horribly if this isn't true.
tl_assert(sizeof(UWord) == sizeof(Addr));
+
+ // BYTES_PER_SEC_VBIT_NODE must be a power of two.
+ tl_assert(-1 != VG_(log2)(BYTES_PER_SEC_VBIT_NODE));
}
VG_DETERMINE_INTERFACE_VERSION(mc_pre_clo_init)
|
|
From: <sv...@va...> - 2005-12-22 19:50:48
|
Author: njn
Date: 2005-12-22 19:50:45 +0000 (Thu, 22 Dec 2005)
New Revision: 5408
Log:
Add comment about log2().
Modified:
trunk/coregrind/m_libcbase.c
trunk/include/pub_tool_libcbase.h
Modified: trunk/coregrind/m_libcbase.c
===================================================================
--- trunk/coregrind/m_libcbase.c 2005-12-22 19:28:37 UTC (rev 5407)
+++ trunk/coregrind/m_libcbase.c 2005-12-22 19:50:45 UTC (rev 5408)
@@ -413,6 +413,7 @@
Misc useful functions
------------------------------------------------------------------ */

+/* Returns the base-2 logarithm of x.  Returns -1 if x is not a power of two. */
Int VG_(log2) ( Int x )
{
Int i;
Modified: trunk/include/pub_tool_libcbase.h
===================================================================
--- trunk/include/pub_tool_libcbase.h 2005-12-22 19:28:37 UTC (rev 5407)
+++ trunk/include/pub_tool_libcbase.h 2005-12-22 19:50:45 UTC (rev 5408)
@@ -111,7 +111,7 @@
extern void VG_(ssort)( void* base, SizeT nmemb, SizeT size,
Int (*compar)(void*, void*) );

-/* Returns the base-2 logarithm of x. */
+/* Returns the base-2 logarithm of x.  Returns -1 if x is not a power of two. */
extern Int VG_(log2) ( Int x );
// A pseudo-random number generator returning a random UInt. If pSeed
|
|
From: <sv...@va...> - 2005-12-22 19:28:42
|
Author: sewardj
Date: 2005-12-22 19:28:37 +0000 (Thu, 22 Dec 2005)
New Revision: 5407
Log:
Make async-style syscalls work on ppc64, by using rt_sigprocmask
instead of sigprocmask.
In the process, discover that error handling for
ML_(do_syscall_for_client_WRK) on all platforms has always been
broken, in the sense that the sigprocmasks (which are important) could
silently fail. This commit fixes that up too (only on ppc64-linux at
the moment, so all other platforms are probably broken now).
Modified:
trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
trunk/coregrind/m_syswrap/syswrap-main.c
Modified: trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 19:25:51 UTC (rev 5406)
+++ trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 19:28:37 UTC (rev 5407)
@@ -50,9 +50,10 @@
back to regs->m_gpr[3]/m_xer/m_result on completion.

Returns 0 if the syscall was successfully called (even if the
- syscall itself failed), or a -ve error code if one of the
- sigprocmasks failed (there's no way to determine which one
- failed).
+ syscall itself failed), or a nonzero error code in the lowest
+ 8 bits if one of the sigprocmasks failed (there's no way to
+ determine which one failed).  And there's no obvious way to
+ recover from that either, but nevertheless we want to know.

VG_(fixup_guest_state_after_syscall_interrupted) does the
thread state fixup in the case where we were interrupted by a
@@ -60,7 +61,7 @@

Prototype:

- Int ML_(do_syscall_for_client_WRK)(
+ UWord ML_(do_syscall_for_client_WRK)(
Int syscallno, // r3
void* guest_state, // r4
const vki_sigset_t *sysmask, // r5
@@ -93,10 +94,11 @@

/* set the signal mask for doing the system call */
/* set up for sigprocmask(SIG_SETMASK, sysmask, postmask) */
-1: li 0,__NR_sigprocmask
+1: li 0,__NR_rt_sigprocmask
li 3,VKI_SIG_SETMASK
mr 4,5
mr 5,6
+ mr 6,7
sc /* set the mask */
bso 7f /* if the sigprocmask fails */

@@ -119,15 +121,15 @@

/* block signals again */
/* set up for sigprocmask(SIG_SETMASK, postmask, NULL) */
-4: li 0,__NR_sigprocmask
+4: li 0,__NR_rt_sigprocmask
li 3,VKI_SIG_SETMASK
mr 4,29
li 5,0
mr 6,28
sc /* set the mask */
bso 7f /* if the sigprocmask fails */
-
/* now safe from signals */
+ li 3,0 /* SUCCESS */

/* pop off stack frame */
5: ld 28,48(1)
@@ -138,7 +140,7 @@
blr

/* failure: return -ve error code */
-7: neg 3,3
+7: ori 3,3,0x8000 /* FAILURE -- ensure return value is nonzero */
b 5b

.section .rodata
Modified: trunk/coregrind/m_syswrap/syswrap-main.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-main.c 2005-12-22 19:25:51 UTC (rev 5406)
+++ trunk/coregrind/m_syswrap/syswrap-main.c 2005-12-22 19:28:37 UTC (rev 5407)
@@ -223,11 +223,11 @@
VG_(fixup_guest_state_after_syscall_interrupted) below for details.
*/
extern
-void ML_(do_syscall_for_client_WRK)( Int syscallno,
- void* guest_state,
- const vki_sigset_t *syscall_mask,
- const vki_sigset_t *restore_mask,
- Int nsigwords );
+UWord ML_(do_syscall_for_client_WRK)( Int syscallno,
+ void* guest_state,
+ const vki_sigset_t *syscall_mask,
+ const vki_sigset_t *restore_mask,
+ Int nsigwords );

static
void do_syscall_for_client ( Int syscallno,
@@ -235,9 +235,15 @@
const vki_sigset_t* syscall_mask )
{
vki_sigset_t saved;
- ML_(do_syscall_for_client_WRK)(
- syscallno, &tst->arch.vex,
- syscall_mask, &saved, _VKI_NSIG_WORDS * sizeof(UWord)
+ UWord err
+ = ML_(do_syscall_for_client_WRK)(
+ syscallno, &tst->arch.vex,
+ syscall_mask, &saved, _VKI_NSIG_WORDS * sizeof(UWord)
+ );
+ vg_assert2(
+ err == 0,
+ "ML_(do_syscall_for_client_WRK): sigprocmask error %d",
+ (Int)(err & 0xFFF)
);
}

|
|
From: <sv...@va...> - 2005-12-22 19:25:55
|
Author: sewardj
Date: 2005-12-22 19:25:51 +0000 (Thu, 22 Dec 2005)
New Revision: 5406
Log:
More ppc64-linux syscalls.
Modified:
trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
Modified: trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 15:53:12 UTC (rev 5405)
+++ trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 19:25:51 UTC (rev 5406)
@@ -400,8 +400,8 @@
PRE/POST wrappers for ppc64/Linux-specific syscalls
------------------------------------------------------------------ */

-#define PRE(name) DEFN_PRE_TEMPLATE(ppc32_linux, name)
-#define POST(name) DEFN_POST_TEMPLATE(ppc32_linux, name)
+#define PRE(name) DEFN_PRE_TEMPLATE(ppc64_linux, name)
+#define POST(name) DEFN_POST_TEMPLATE(ppc64_linux, name)

/* Add prototypes for the wrappers declared here, so that gcc doesn't
harass us for not having prototypes.  Really this is a kludge --
@@ -410,7 +410,7 @@
magic. */

//zz DECL_TEMPLATE(ppc64_linux, sys_socketcall);
-//zz DECL_TEMPLATE(ppc64_linux, sys_mmap);
+DECL_TEMPLATE(ppc64_linux, sys_mmap);
//zz DECL_TEMPLATE(ppc64_linux, sys_mmap2);
//zz DECL_TEMPLATE(ppc64_linux, sys_stat64);
//zz DECL_TEMPLATE(ppc64_linux, sys_lstat64);
@@ -677,23 +677,23 @@
//zz # undef ARG2_4
//zz # undef ARG2_5
//zz }
-//zz
-//zz PRE(sys_mmap)
-//zz {
-//zz SysRes r;
-//zz
-//zz PRINT("sys_mmap ( %p, %llu, %d, %d, %d, %d )",
-//zz ARG1, (ULong)ARG2, ARG3, ARG4, ARG5, ARG6 );
-//zz PRE_REG_READ6(long, "mmap",
-//zz unsigned long, start, unsigned long, length,
-//zz unsigned long, prot, unsigned long, flags,
-//zz unsigned long, fd, unsigned long, offset);
-//zz
-//zz r = ML_(generic_PRE_sys_mmap)( tid, ARG1, ARG2, ARG3, ARG4, ARG5,
-//zz (Off64T)ARG6 );
-//zz SET_STATUS_from_SysRes(r);
-//zz }
-//zz
+
+PRE(sys_mmap)
+{
+ SysRes r;
+
+ PRINT("sys_mmap ( %p, %llu, %d, %d, %d, %d )",
+ ARG1, (ULong)ARG2, ARG3, ARG4, ARG5, ARG6 );
+ PRE_REG_READ6(long, "mmap",
+ unsigned long, start, unsigned long, length,
+ unsigned long, prot, unsigned long, flags,
+ unsigned long, fd, unsigned long, offset);
+
+ r = ML_(generic_PRE_sys_mmap)( tid, ARG1, ARG2, ARG3, ARG4, ARG5,
+ (Off64T)ARG6 );
+ SET_STATUS_from_SysRes(r);
+}
+
//zz PRE(sys_mmap2)
//zz {
//zz SysRes r;
@@ -1171,10 +1171,10 @@

const SyscallTableEntry ML_(syscall_table)[] = {
// _____(__NR_restart_syscall, sys_restart_syscall), // 0
-// _____(__NR_exit, sys_exit), // 1
+ GENX_(__NR_exit, sys_exit), // 1
// _____(__NR_fork, sys_fork), // 2
// _____(__NR_read, sys_read), // 3
-// _____(__NR_write, sys_write), // 4
+ GENX_(__NR_write, sys_write), // 4

// _____(__NR_open, sys_open), // 5
// _____(__NR_close, sys_close), // 6
@@ -1278,8 +1278,8 @@
// _____(__NR_reboot, sys_reboot), // 88
// _____(__NR_readdir, sys_readdir), // 89

-// _____(__NR_mmap, sys_mmap), // 90
-// _____(__NR_munmap, sys_munmap), // 91
+ PLAX_(__NR_mmap, sys_mmap), // 90
+ GENXY(__NR_munmap, sys_munmap), // 91
// _____(__NR_truncate, sys_truncate), // 92
// _____(__NR_ftruncate, sys_ftruncate), // 93
// _____(__NR_fchmod, sys_fchmod), // 94
@@ -1299,7 +1299,7 @@
// _____(__NR_getitimer, sys_getitimer), // 105
// _____(__NR_stat, sys_stat), // 106
// _____(__NR_lstat, sys_lstat), // 107
-// _____(__NR_fstat, sys_fstat), // 108
+ GENXY(__NR_fstat, sys_newfstat), // 108
// _____(__NR_olduname, sys_olduname), // 109

// _____(__NR_iopl, sys_iopl), // 110
@@ -1450,7 +1450,7 @@
// _____(__NR_io_cancel, sys_io_cancel), // 231
// _____(__NR_set_tid_address, sys_set_tid_address), // 232
// _____(__NR_fadvise64, sys_fadvise64), // 233
-// _____(__NR_exit_group, sys_exit_group), // 234
+ LINX_(__NR_exit_group, sys_exit_group), // 234

// _____(__NR_lookup_dcookie, sys_lookup_dcookie), // 235
// _____(__NR_epoll_create, sys_epoll_create), // 236
|
|
From: <sv...@va...> - 2005-12-22 18:22:03
|
Author: sewardj
Date: 2005-12-22 18:21:55 +0000 (Thu, 22 Dec 2005)
New Revision: 261
Log:
rm extraneous 's'
Modified:
trunk/index.html
Modified: trunk/index.html
===================================================================
--- trunk/index.html 2005-12-18 03:50:36 UTC (rev 260)
+++ trunk/index.html 2005-12-22 18:21:55 UTC (rev 261)
@@ -21,7 +21,7 @@
and reduce memory use of your programs.</p>

<p>The Valgrind distribution currently includes three tools: a
-memory error detectors, a cache (time) profiler
+memory error detector, a cache (time) profiler
and a heap (space) profiler.  It runs on the following platforms:
x86/Linux, AMD64/Linux, PPC32/Linux.</p>
|
From: <sv...@va...> - 2005-12-22 15:53:21
|
Author: cerion
Date: 2005-12-22 15:53:12 +0000 (Thu, 22 Dec 2005)
New Revision: 5405
Log:
fixed up ppc64 assembly with .opd sections
do_syscall_for_client_WRK() needed a bigger stack to avoid the linkage area.
always use dot_prefix for label calls
not wrapping assembly with
.section ".text"
...
.previous
- ppc64 doesn't like it... seems we can't 'stack' more than one section to pop off with .previous ?
Modified:
trunk/coregrind/m_libcassert.c
trunk/coregrind/m_machine.c
trunk/coregrind/m_main.c
trunk/coregrind/m_signals.c
trunk/coregrind/m_syscall.c
trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
trunk/coregrind/m_trampoline.S
trunk/coregrind/vki_unistd-ppc64-linux.h
trunk/docs/internals/performance.txt
Modified: trunk/coregrind/m_libcassert.c
===================================================================
--- trunk/coregrind/m_libcassert.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_libcassert.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -78,8 +78,8 @@
#elif defined(VGP_ppc64_linux)
# define GET_REAL_PC_SP_AND_FP(pc, sp, fp) \
asm("mflr 0;" /* r0 = lr */ \
- "bl m_libcassert_get_ip;" /* lr = pc */ \
- "m_libcassert_get_ip:\n" \
+ "bl .m_libcassert_get_ip;" /* lr = pc */ \
+ ".m_libcassert_get_ip:\n" \
"mflr %0;" \
"mtlr 0;" /* restore lr */ \
"mr %1,1;" \
Modified: trunk/coregrind/m_machine.c
===================================================================
--- trunk/coregrind/m_machine.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_machine.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -373,7 +373,7 @@
/* VG_(printf)("FP %d VMX %d\n", (Int)have_fp, (Int)have_vmx); */

/* We can only support 3 cases, not 4 (vmx but no fp).  So make
- fp a prerequisite for vmx. */
+ fp a prerequisite for vmx. */
if (have_vmx && !have_fp)
have_vmx = False;

Modified: trunk/coregrind/m_main.c
===================================================================
--- trunk/coregrind/m_main.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_main.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -2850,9 +2850,9 @@
);
#elif defined(VGP_ppc64_linux)
asm("\n"
- ".text\n"
/* PPC64 ELF ABI says '_start' points to a function descriptor.
So we must have one, and that is what goes into the .opd section. */
+ "\t.align 2\n"
"\t.global _start\n"
"\t.section \".opd\",\"aw\"\n"
"\t.align 3\n"
@@ -2886,7 +2886,6 @@
"\tbl ._start_in_C\n"
"\tnop\n"
"\ttrap\n"
- ".previous\n"
);
#else
#error "_start: needs implementation on this platform"
Modified: trunk/coregrind/m_signals.c
===================================================================
--- trunk/coregrind/m_signals.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_signals.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -477,11 +477,18 @@
".previous\n"
#elif defined(VGP_ppc64_linux)
# define _MYSIG(name) \
- ".text\n" \
+ ".align 2\n" \
+ ".globl my_sigreturn\n" \
+ ".section \".opd\",\"aw\"\n" \
+ ".align 3\n" \
"my_sigreturn:\n" \
+ ".quad .my_sigreturn,.TOC.@tocbase,0\n" \
+ ".previous\n" \
+ ".type .my_sigreturn,@function\n" \
+ ".globl .my_sigreturn\n" \
+ ".my_sigreturn:\n" \
" li 0, " #name "\n" \
- " sc\n" \
- ".previous\n"
+ " sc\n"
#else
# error Unknown platform
#endif
Modified: trunk/coregrind/m_syscall.c
===================================================================
--- trunk/coregrind/m_syscall.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_syscall.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -218,7 +218,15 @@
bottom but of [1]. */
extern void do_syscall_WRK ( ULong* argblock );
asm(
-".text\n"
+".align 2\n"
+".globl do_syscall_WRK\n"
+".section \".opd\",\"aw\"\n"
+".align 3\n"
+"do_syscall_WRK:\n"
+".quad .do_syscall_WRK,.TOC.@tocbase,0\n"
+".previous\n"
+".type .do_syscall_WRK,@function\n"
+".globl .do_syscall_WRK\n"
".do_syscall_WRK:\n"
" std 3,-16(1)\n" /* stash arg */
" ld 8, 48(3)\n" /* sc arg 6 */
@@ -236,7 +244,6 @@
" andi. 3,3,1\n"
" std 3,8(5)\n" /* argblock[1] = cr0.s0 & 1 */
" blr\n"
-".previous\n"
);
#else
# error Unknown platform
Modified: trunk/coregrind/m_syswrap/syscall-ppc64-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_syswrap/syscall-ppc64-linux.S 2005-12-22 15:53:12 UTC (rev 5405)
@@ -70,14 +70,22 @@
/* from vki_arch.h */
#define VKI_SIG_SETMASK 2

+.align 2
+.globl ML_(do_syscall_for_client_WRK)
+.section ".opd","aw"
+.align 3
+ML_(do_syscall_for_client_WRK):
+.quad .ML_(do_syscall_for_client_WRK),.TOC.@tocbase,0
+.previous
+.type .ML_(do_syscall_for_client_WRK),@function
.globl .ML_(do_syscall_for_client_WRK)
.ML_(do_syscall_for_client_WRK):
/* make a stack frame */
- stdu 1,-64(1)
- std 31,56(1)
- std 30,48(1)
- std 29,40(1)
- std 28,32(1)
+ stdu 1,-80(1)
+ std 31,72(1)
+ std 30,64(1)
+ std 29,56(1)
+ std 28,48(1)
mr 31,3 /* syscall number */
mr 30,4 /* guest_state */
mr 29,6 /* postmask */
@@ -122,11 +130,11 @@
/* now safe from signals */

/* pop off stack frame */
-5: ld 28,32(1)
- ld 29,40(1)
- ld 30,48(1)
- ld 31,56(1)
- addi 1,1,64
+5: ld 28,48(1)
+ ld 29,56(1)
+ ld 30,64(1)
+ ld 31,72(1)
+ addi 1,1,80
blr

/* failure: return -ve error code */
@@ -149,8 +157,7 @@
ML_(blksys_committed): .long 4b
ML_(blksys_finished): .long 5b

-.previous
-
+
/* Let the linker know we don't need an executable stack */
.section .note.GNU-stack,"",@progbits

Modified: trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 15:53:12 UTC (rev 5405)
@@ -74,7 +74,7 @@
address, the second word is the TOC ptr (r2), and the third word is
the static chain value. */
asm(
-".text\n"
+" .align 2\n"
" .globl vgModuleLocal_call_on_new_stack_0_1\n"
" .section \".opd\",\"aw\"\n"
" .align 3\n"
@@ -122,7 +122,6 @@
" mtcr 0\n\t" // CAB: Need this?
" bctr\n\t" // jump to dst
" trap\n" // should never get here
-".previous\n"
);


@@ -166,7 +165,15 @@
Int* parent_tid,
void/*vki_modify_ldt_t*/ * );
asm(
-".text\n"
+" .align 2\n"
+" .globl do_syscall_clone_ppc64_linux\n"
+" .section \".opd\",\"aw\"\n"
+" .align 3\n"
+"do_syscall_clone_ppc64_linux:\n"
+" .quad .do_syscall_clone_ppc64_linux,.TOC.@tocbase,0\n"
+" .previous\n"
+" .type .do_syscall_clone_ppc64_linux,@function\n"
+" .globl .do_syscall_clone_ppc64_linux\n"
".do_syscall_clone_ppc64_linux:\n"
" stdu 1,-64(1)\n"
" std 29,40(1)\n"
@@ -229,7 +236,6 @@
" ld 31,56(1)\n"
" addi 1,1,64\n"
" blr\n"
-".previous\n"
);

#undef __NR_CLONE
Modified: trunk/coregrind/m_trampoline.S
===================================================================
--- trunk/coregrind/m_trampoline.S 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/m_trampoline.S 2005-12-22 15:53:12 UTC (rev 5405)
@@ -296,11 +296,28 @@
/* a leading page of unexecutable code */
UD2_PAGE

+.align 2
.global VG_(trampoline_stuff_start)
+.section ".opd","aw"
+.align 3
VG_(trampoline_stuff_start):
+.quad .VG_(trampoline_stuff_start),.TOC.@tocbase,0
+.previous
+.type .VG_(trampoline_stuff_start),@function
+.global .VG_(trampoline_stuff_start)
+.VG_(trampoline_stuff_start):

+
+.align 2
.global VG_(trampoline_stuff_end)
+.section ".opd","aw"
+.align 3
VG_(trampoline_stuff_end):
+.quad .VG_(trampoline_stuff_end),.TOC.@tocbase,0
+.previous
+.type .VG_(trampoline_stuff_end),@function
+.global .VG_(trampoline_stuff_end)
+.VG_(trampoline_stuff_end):

# undef UD2_16
# undef UD2_64
Modified: trunk/coregrind/vki_unistd-ppc64-linux.h
===================================================================
--- trunk/coregrind/vki_unistd-ppc64-linux.h 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/coregrind/vki_unistd-ppc64-linux.h 2005-12-22 15:53:12 UTC (rev 5405)
@@ -309,4 +309,4 @@
#define __NR_inotify_rm_watch 277


-#endif /* __VKI_UNISTD_PPC32_LINUX_H */
+#endif /* __VKI_UNISTD_PPC64_LINUX_H */
Modified: trunk/docs/internals/performance.txt
===================================================================
--- trunk/docs/internals/performance.txt 2005-12-22 15:16:43 UTC (rev 5404)
+++ trunk/docs/internals/performance.txt 2005-12-22 15:53:12 UTC (rev 5405)
@@ -16,7 +16,7 @@
Saved 1--3% on a few programs.
- r5345,r5346,r5352: Julian improved the dispatcher so that x86 and
AMD64 use jumps instead of call/return for calling translations.
- Also, on x86, amd64 and ppc32, --profile-flags style profiling was
+ Also, on x86, amd64, ppc32 and ppc64, --profile-flags style profiling was
removed from the despatch loop unless --profile-flags is being used.
Improved Nulgrind performance typically by 10--20%, and Memcheck
performance typically by 2--20%.
|
|
From: Ashley P. <as...@qu...> - 2005-12-22 15:21:37
|
On Sat, 2005-12-17 at 10:11 +0100, Jeroen N. Witmond wrote:
> > On Tue, 2005-12-06 at 18:37 +0000, Ashley Pittman wrote:
> >> I'm still seeing both crashes within valgrind and no error reports where
> >> there should be some but I'm no further forward finding the cause of
> >> this yet.  A typical stack trace looks like this with what appears to be
> >> two assertion failures in mac_leakcheck.c, the first one at line 588
> >> "tl_assert(p_min != NULL)" and a second one at 539.
> >
> > I gave up trying to work with my library and went back to basics coding
> > up a reproducer from scratch, can somebody take a look at this for me
> > please.
> >
> > With the patch I posted yesterday it crashes with this error:
> >
> > Memcheck: mac_leakcheck.c:539 (full_report): Assertion
> > 'lc_markstack[i].state == IndirectLeak' failed.
> [snip stacktrace]
>
> I've hacked together a patch (for trunk, needing work for COMPVBITS) to
> produce leak checks for custom allocated blocks and mempools, as well as
> for malloc'd blocks.  I basically did this by separating the bookkeeping
> for custom blocks from that for malloc'd blocks, and invoking the leak
> detector for each type (malloc, custom or mempool).  The problem with this
> patch is that I cannot get it to work without disabling this
> 'tl_assert(lc_markstack[i].state == IndirectLeak' in 'static void
> full_report(ThreadId tid)'.  For custom blocks, the state I get is
> Unreached, 'definitely lost'.

I'd be interested in seeing that patch.  Is your patch so that leaks are
reported differently for custom and heap blocks?

> Can anybody tell me if the tl_assert() is too critical, or if this is some
> other bug?  Thanks.

Well it's been removed now so it's not *too* critical.  I found that when
I removed it I got different error totals depending on if I set the
--leak-check=full option or not :(  I don't really understand why this is
but hope to get time to at least look into in the new year.

Ashley,
|
From: <sv...@va...> - 2005-12-22 15:16:47
|
Author: sewardj
Date: 2005-12-22 15:16:43 +0000 (Thu, 22 Dec 2005)
New Revision: 5404
Log:
When switching threads on ppc64, clear the reservation pseudo-reg, as on ppc32.
Modified:
trunk/coregrind/m_scheduler/scheduler.c
Modified: trunk/coregrind/m_scheduler/scheduler.c
===================================================================
--- trunk/coregrind/m_scheduler/scheduler.c 2005-12-22 08:08:01 UTC (rev 5403)
+++ trunk/coregrind/m_scheduler/scheduler.c 2005-12-22 15:16:43 UTC (rev 5404)
@@ -395,7 +395,7 @@
vg_assert(sz_spill == LibVEX_N_SPILL_BYTES);
vg_assert(a_vex + 2 * sz_vex == a_spill);

-# if defined(VGA_ppc32)
+# if defined(VGA_ppc32) || defined(VGA_ppc64)
/* This is necessary due to the hacky way vex models reservations
on ppc.  It's really quite incorrect for each thread to have its
own reservation flag/address, since it's really something that
|
|
From: <sv...@va...> - 2005-12-22 14:32:41
|
Author: cerion
Date: 2005-12-22 14:32:35 +0000 (Thu, 22 Dec 2005)
New Revision: 1503
Log:
Implemented almost all of the remaining 64bit-mode insns.
Currently:
Not yet implemented: td(i)
Implemented, not tested: ldarx, stdcx.
All common-mode int & fp insns in 64bit-mode tested.
Altivec insns in 64bit-mode still to be tested.
Modified:
trunk/priv/guest-ppc32/toIR.c
trunk/priv/host-ppc32/hdefs.c
trunk/priv/host-ppc32/hdefs.h
trunk/priv/host-ppc32/isel.c
Modified: trunk/priv/guest-ppc32/toIR.c
===================================================================
--- trunk/priv/guest-ppc32/toIR.c 2005-12-22 03:01:17 UTC (rev 1502)
+++ trunk/priv/guest-ppc32/toIR.c 2005-12-22 14:32:35 UTC (rev 1503)
@@ -1216,7 +1216,7 @@
}
}

-// ROTL(src32/64, rot_amt5)
+// ROTL(src32/64, rot_amt5/6)
static IRExpr* /* :: Ity_I32/64 */ ROTL ( IRExpr* src,
IRExpr* rot_amt )
{
@@ -2086,8 +2086,6 @@


case /* 12 */ PPC32G_FLAG_OP_SRAD:
- vassert(0); // AWAITING TEST CASE
-
/* The shift amount is guaranteed to be in 0 .. 63 inclusive.
If it is <= 63, behave like SRADI; else XER.CA is the sign
bit of argL. */
@@ -2108,8 +2106,8 @@
= IRExpr_Mux0X(
/* shift amt > 63 ? */
unop(Iop_1Uto8, binop(Iop_CmpLT64U, mkU64(63), argR)),
- /* no -- be like srawi */
- unop(Iop_1Uto32, binop(Iop_CmpNE32, xer_ca, mkU32(0))),
+ /* no -- be like sradi */
+ unop(Iop_1Uto32, binop(Iop_CmpNE64, xer_ca, mkU64(0))),
/* yes -- get sign bit of argL */
unop(Iop_64to32, binop(Iop_Shr64, argL, mkU8(63)))
);
@@ -2900,14 +2898,12 @@
}
DIP("mulhd%s r%u,r%u,r%u\n", flag_rC ? "." : "",
rD_addr, rA_addr, rB_addr);
- DIP(" => not implemented\n");
- return False;
- /*
assign( rD, unop(Iop_128HIto64,
- binop(Iop_MullU64,
+ binop(Iop_MullS64,
mkexpr(rA), mkexpr(rB))) );
- */

+ break;
+
case 0x9: // mulhdu (Multiply High Double Word Unsigned, PPC64 p540)
if (flag_OE != 0) {
vex_printf("dis_int_arith(PPC32)(mulhdu,flagOE)\n");
@@ -2915,13 +2911,10 @@
}
DIP("mulhdu%s r%u,r%u,r%u\n", flag_rC ? "." : "",
rD_addr, rA_addr, rB_addr);
- DIP(" => not implemented\n");
- return False;
- /*
- assign( rD, unop(Iop_128HIto64,
- binop(Iop_MullU64,
- mkexpr(rA), mkexpr(rB))) );
- */
+ assign( rD, unop(Iop_128HIto64,
+ binop(Iop_MullU64,
+ mkexpr(rA), mkexpr(rB))) );
+ break;

case 0xE9: // mulld (Multiply Low Double Word, PPC64 p543)
DIP("mulld%s%s r%u,r%u,r%u\n",
@@ -2938,21 +2931,14 @@
DIP("divd%s%s r%u,r%u,r%u\n",
flag_OE ? "o" : "", flag_rC ? "." : "",
rD_addr, rA_addr, rB_addr);
- DIP(" => not implemented\n");
- return False;
- /*
- assign( rD, binop(Iop_DivS64, mkexpr(rA), mkexpr(rB)) );
- if (flag_OE) {
- set_XER_OV( ty, PPC32G_FLAG_OP_DIVW,
- mkexpr(rD), mkexpr(rA), mkexpr(rB) );
- }
-
- if invalid divide (rA==0x8000_0000_0000_0000 && rB==-1, OR rB==0)
- rD undefined, CR[LT,GT,EQ] undefined
- flag_OE ? XER: set OV
- */
+ assign( rD, binop(Iop_DivS64, mkexpr(rA), mkexpr(rB)) );
+ if (flag_OE) {
+ set_XER_OV( ty, PPC32G_FLAG_OP_DIVW,
+ mkexpr(rD), mkexpr(rA), mkexpr(rB) );
+ }
+ break;
/* Note:
- if (0x8000_0000 / -1) or (x / 0)
+ if (0x8000_0000_0000_0000 / -1) or (x / 0)
=> rD=undef, if(flag_rC) CR7=undef, if(flag_OE) XER_OV=1
=> But _no_ exception raised. */

@@ -2960,19 +2946,12 @@
DIP("divdu%s%s r%u,r%u,r%u\n",
flag_OE ? "o" : "", flag_rC ? "." : "",
rD_addr, rA_addr, rB_addr);
- DIP(" => not implemented\n");
- return False;
- /*
- assign( rD, binop(Iop_DivU64, mkexpr(rA), mkexpr(rB)) );
- if (flag_OE) {
- set_XER_OV( PPC32G_FLAG_OP_DIVWU,
- mkexpr(rD), mkexpr(rA), mkexpr(rB) );
- }
-
- if invalid divide (rB==0)
- rD undefined, CR[LT,GT,EQ] undefined
- flag_OE ? XER: set OV
- */
+ assign( rD, binop(Iop_DivU64, mkexpr(rA), mkexpr(rB)) );
+ if (flag_OE) {
+ set_XER_OV( ty, PPC32G_FLAG_OP_DIVWU,
+ mkexpr(rD), mkexpr(rA), mkexpr(rB) );
+ }
+ break;
/* Note: ditto comment divd, for (x / 0) */

default:
@@ -3301,15 +3280,12 @@
return False;
}
DIP("cntlzd%s r%u,r%u\n", flag_rC ? "." : "", rA_addr, rS_addr);
- DIP(" => not implemented\n");
- return False;
- /*
- // Iop_Clz64 undefined for arg==0, so deal with that case:
- irx = binop(Iop_CmpNE64, mkexpr(rS), mkU64(0));
- assign(rA, IRExpr_Mux0X( unop(Iop_1Uto8, irx),
- mkU64(64),
- unop(Iop_Clz64, mkexpr(rS)) ));
- */
+ // Iop_Clz64 undefined for arg==0, so deal with that case:
+ irx = binop(Iop_CmpNE64, mkexpr(rS), mkU64(0));
+ assign(rA, IRExpr_Mux0X( unop(Iop_1Uto8, irx),
+ mkU64(64),
+ unop(Iop_Clz64, mkexpr(rS)) ));
+ break;

default:
vex_printf("dis_int_logic(PPC32)(opc2)\n");
@@ -3473,28 +3449,26 @@
vassert( sh_imm < 64 );
=20
switch (opc2) {
 - case 0x4:
 - /*
 - n = lowest 6bits of rB
 - r = ROTL64(rS,n)
 - */
 + case 0x4: {
 + /* r = ROTL64( rS, rB_lo6) */
 + r = ROTL( mkexpr(rS), unop(Iop_64to8, mkexpr(rB)) );
 +
 if (b1 == 0) { // rldcl (Rotate Left DW then Clear Left, PPC64 p555)
 DIP("rldcl%s r%u,r%u,r%u,%u\n", flag_rC ? "." : "",
 rA_addr, rS_addr, rB_addr, msk_imm);
 - r = ROTL(mkexpr(rS), mkU8(sh_imm));
 + // note, ROTL does the masking, so we don't do it here
 mask64 = MASK64(0, 63-msk_imm);
 assign( rA, binop(Iop_And64, r, mkU64(mask64)) );
 break;
 } else { // rldcr (Rotate Left DW then Clear Right, PPC64 p556)
 DIP("rldcr%s r%u,r%u,r%u,%u\n", flag_rC ? "." : "",
 rA_addr, rS_addr, rB_addr, msk_imm);
 - r = ROTL(mkexpr(rS), mkU8(sh_imm));
 mask64 = MASK64(63-msk_imm, 63);
 assign( rA, binop(Iop_And64, r, mkU64(mask64)) );
 break;
 }
 break;
 -
 + }
case 0x2: // rldic (Rotate Left DW Imm then Clear, PPC64 p557)
DIP("rldic%s r%u,r%u,%u,%u\n", flag_rC ? "." : "",
rA_addr, rS_addr, sh_imm, msk_imm);
@@ -3528,16 +3502,18 @@
break;
 // later: deal with special case: (msk_imm == sh_imm) => SHL(sh_imm)

 - case 0x3: // rldimi (Rotate Left DW Imm then Mask Insert, PPC64 p560)
 + case 0x3: { // rldimi (Rotate Left DW Imm then Mask Insert, PPC64 p560)
 + IRTemp rA_orig = newTemp(ty);
 DIP("rldimi%s r%u,r%u,%u,%u\n", flag_rC ? "." : "",
 rA_addr, rS_addr, sh_imm, msk_imm);
 r = ROTL(mkexpr(rS), mkU8(sh_imm));
 mask64 = MASK64(sh_imm, 63-msk_imm);
 + assign( rA_orig, getIReg(rA_addr) );
 assign( rA, binop(Iop_Or64,
 binop(Iop_And64, mkU64(mask64), r),
 binop(Iop_And64, mkU64(~mask64), mkexpr(rA_orig))) );
 break;
 + }
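The rotate-then-insert-under-mask step rldimi performs can be sketched in C (illustrative model with plain low-bit numbering; the exact bit-ordering convention of the `MASK64` macro is not reproduced here):

```c
#include <stdint.h>

/* Rotate left by n (n taken mod 64; the n==0 case is guarded
   because x >> 64 is undefined in C). */
static uint64_t rotl64(uint64_t x, unsigned n)
{
    n &= 63;
    return n ? (x << n) | (x >> (64 - n)) : x;
}

/* rldimi: rotate rS, then keep the rotated bits under the mask and
   the original rA bits elsewhere. */
static uint64_t rldimi_model(uint64_t rA_orig, uint64_t rS,
                             unsigned sh, uint64_t mask)
{
    uint64_t r = rotl64(rS, sh);
    return (r & mask) | (rA_orig & ~mask);
}
```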
default:
vex_printf("dis_int_rot(PPC32)(opc2)\n");
return False;
@@ -4600,6 +4576,12 @@
=20
/*
Memory Synchronization Instructions
+
+ Note on Reservations:
+ We rely on the assumption that V will in fact only allow one thread at
+ once to run. In effect, a thread can make a reservation, but we don't
+ check any stores it does. Instead, the reservation is cancelled when
+ the scheduler switches to another thread (run_thread_for_a_while()).
*/
static Bool dis_memsync ( UInt theInstr )
{
@@ -4717,50 +4699,43 @@
return False;
}
DIP("ldarx r%u,r%u,r%u\n", rD_addr, rA_addr, rB_addr);
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - assign( EA, ea_standard(rA_addr, rB_addr) );
 - putIReg( rD_addr, loadBE(Ity_I64, mkexpr(EA)) );
 - // Take a reservation
 - stmt( IRStmt_Put( OFFB_RESVN, mkexpr(EA) ));
 - */
 + putIReg( rD_addr, loadBE(Ity_I64, mkexpr(EA)) );
 + // Take a reservation
 + putGST( PPC_GST_RESVN, mkexpr(EA) );
 + break;

 - case 0x0D6: // stdcx. (Store DW Condition Indexed, PPC64 p581)
 + case 0x0D6: { // stdcx. (Store DW Condition Indexed, PPC64 p581)
 + IRTemp resaddr = newTemp(ty);
 if (b0 != 1) {
 vex_printf("dis_memsync(PPC32)(stdcx.,b0)\n");
 return False;
 }
 DIP("stdcx. r%u,r%u,r%u\n", rS_addr, rA_addr, rB_addr);
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - assign( rS, getIReg(rS_addr) );
 - assign( EA, ea_standard(rA_addr, rB_addr) );
 + assign( rS, getIReg(rS_addr) );

 - // First set up as if the reservation failed
 - // Set CR0[LT GT EQ S0] = 0b000 || XER[SO]
 - putCR321(0, mkU8(0<<1));
 - putCR0(0, getXER_SO());
 + // First set up as if the reservation failed
 + // Set CR0[LT GT EQ S0] = 0b000 || XER[SO]
 + putCR321(0, mkU8(0<<1));
 + putCR0(0, getXER_SO());

 - // Get the reservation address into a temporary, then clear it.
 - assign( resaddr, IRExpr_Get(OFFB_RESVN, Ity_I64) );
 - stmt( IRStmt_Put( OFFB_RESVN, mkU64(0) ));
 + // Get the reservation address into a temporary, then clear it.
 + assign( resaddr, getGST(PPC_GST_RESVN) );
 + putGST( PPC_GST_RESVN, mkSzImm(ty, 0) );

 - // Skip the rest if the reservation really did fail.
 - stmt( IRStmt_Exit(
 - binop(Iop_CmpNE64, mkexpr(resaddr),
 - mkexpr(EA)),
 - Ijk_Boring,
 - IRConst_U32(guest_CIA_curr_instr + 4)) );
 + // Skip the rest if the reservation really did fail.
 + stmt( IRStmt_Exit( binop(Iop_CmpNE64, mkexpr(resaddr),
 + mkexpr(EA)),
 + Ijk_Boring,
 + IRConst_U64(nextInsnAddr())) );

 - // Success? Do the store
 - storeBE( mkexpr(EA), mkexpr(rS) );
 + // Success? Do the store
 + storeBE( mkexpr(EA), mkexpr(rS) );

 - // Set CR0[LT GT EQ S0] = 0b001 || XER[SO]
 - putCR321(0, mkU8(1<<1));
 - */
 + // Set CR0[LT GT EQ S0] = 0b001 || XER[SO]
 + putCR321(0, mkU8(1<<1));
 + break;
 + }
 +
 default:
 vex_printf("dis_memsync(PPC32)(opc2)\n");
 return False;
@@ -4793,15 +4768,17 @@
=20
 IRType ty = mode64 ? Ity_I64 : Ity_I32;
 IRTemp rA = newTemp(ty);
 + IRTemp rS = newTemp(ty);
 + IRTemp rB = newTemp(ty);
 IRTemp outofrange = newTemp(Ity_I8);
 -// IRTemp sh_amt = newTemp(Ity_I8);
 - IRTemp sh_amt32 = newTemp(Ity_I32);
 IRTemp rS_lo32 = newTemp(Ity_I32);
 IRTemp rB_lo32 = newTemp(Ity_I32);
 IRExpr* e_tmp;

 - assign( rS_lo32, mkSzNarrow32(ty, getIReg(rS_addr)) );
 - assign( rB_lo32, mkSzNarrow32(ty, getIReg(rB_addr)) );
 + assign( rS, getIReg(rS_addr) );
 + assign( rB, getIReg(rB_addr) );
 + assign( rS_lo32, mkSzNarrow32(ty, mkexpr(rS)) );
 + assign( rB_lo32, mkSzNarrow32(ty, mkexpr(rB)) );

 if (opc1 == 0x1F) {
switch (opc2) {
@@ -4829,7 +4806,8 @@
break;
}
=20
 - case 0x318: // sraw (Shift Right Algebraic Word, PPC32 p506)
 + case 0x318: { // sraw (Shift Right Algebraic Word, PPC32 p506)
 + IRTemp sh_amt = newTemp(Ity_I32);
 DIP("sraw%s r%u,r%u,r%u\n", flag_rC ? "." : "",
 rA_addr, rS_addr, rB_addr);
 /* JRS: my reading of the (poorly worded) PPC32 doc p506 is:
@@ -4837,24 +4815,25 @@
 rA = Sar32( rS, amt > 31 ? 31 : amt )
 XER.CA = amt > 31 ? sign-of-rS : (computation as per srawi)
 */
 - assign( sh_amt32, binop(Iop_And32, mkU32(0x3F), mkexpr(rB_lo32)) );
 + assign( sh_amt, binop(Iop_And32, mkU32(0x3F), mkexpr(rB_lo32)) );
 assign( outofrange,
 unop( Iop_1Uto8,
 - binop(Iop_CmpLT32U, mkU32(31), mkexpr(sh_amt32)) ));
 + binop(Iop_CmpLT32U, mkU32(31), mkexpr(sh_amt)) ));
 e_tmp = binop( Iop_Sar32,
 mkexpr(rS_lo32),
 unop( Iop_32to8,
 IRExpr_Mux0X( mkexpr(outofrange),
 - mkexpr(sh_amt32),
 + mkexpr(sh_amt),
 mkU32(31)) ) );
 assign( rA, mkSzWiden32(ty, e_tmp, /* Signed */True) );

 set_XER_CA( ty, PPC32G_FLAG_OP_SRAW,
 mkexpr(rA),
 mkSzWiden32(ty, mkexpr(rS_lo32), True),
 - mkSzWiden32(ty, mkexpr(sh_amt32), True ),
 + mkSzWiden32(ty, mkexpr(sh_amt), True ),
 mkSzWiden32(ty, getXER_CA32(), True) );
 break;
 + }
=20
 case 0x338: // srawi (Shift Right Algebraic Word Immediate, PPC32 p507)
 DIP("srawi%s r%u,r%u,%d\n", flag_rC ? "." : "",
@@ -4903,14 +4882,11 @@
 case 0x01B: // sld (Shift Left DW, PPC64 p568)
 DIP("sld%s r%u,r%u,r%u\n", flag_rC ? "." : "", rA_addr, rS_addr, rB_addr);
 /* rA = rS << rB */
 - /* ppc32 semantics are:
 + /* ppc64 semantics are:
 slw(x,y) = (x << (y & 63)) -- primary result
 & ~((y << 57) >>s 63) -- make result 0
 for y in 64 ..
 */
 - DIP(" => not implemented\n");
 - return False;
 - /*
 assign( rA,
 binop(
 Iop_And64,
@@ -4922,34 +4898,33 @@
 binop( Iop_Sar64,
 binop(Iop_Shl64, mkexpr(rB), mkU8(57)),
 mkU8(63)))) );
 - */
 + break;
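The sign-extension mask trick used here deserves a concrete reading: for a 7-bit shift amount y, `(int64_t)(y << 57) >> 63` is all-ones exactly when bit 6 of y is set (y >= 64), so ANDing with its complement zeroes the result for out-of-range shifts. A C sketch (illustrative, not VEX code; assumes arithmetic `>>` on signed values, as mainstream compilers provide):

```c
#include <stdint.h>

/* sld: shift left doubleword, result 0 for shift amounts 64..127. */
static uint64_t sld_model(uint64_t x, uint64_t y)
{
    uint64_t primary = x << (y & 63);                   /* low 6 bits  */
    uint64_t keep    = ~(uint64_t)((int64_t)(y << 57) >> 63);
    return primary & keep;                              /* kill y>=64  */
}
```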
=20
 - case 0x31A: // srad (Shift Right Algebraic DW, PPC64 p570)
 + case 0x31A: { // srad (Shift Right Algebraic DW, PPC64 p570)
 + IRTemp sh_amt = newTemp(Ity_I64);
 DIP("srad%s r%u,r%u,r%u\n", flag_rC ? "." : "", rA_addr, rS_addr, rB_addr);
 /* amt = rB & 127
 rA = Sar64( rS, amt > 63 ? 63 : amt )
 XER.CA = amt > 63 ? sign-of-rS : (computation as per srawi)
 */
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - assign( sh_amt64, binop(Iop_And64, mkU64(0x7F), mkexpr(rB)) );
 + assign( sh_amt, binop(Iop_And64, mkU64(0x7F), mkexpr(rB)) );
 assign( outofrange,
 unop( Iop_1Uto8,
 - binop(Iop_CmpLT64U, mkU64(63), mkexpr(sh_amt64)) ));
 + binop(Iop_CmpLT64U, mkU64(63), mkexpr(sh_amt)) ));
 assign( rA,
 binop( Iop_Sar64,
 mkexpr(rS),
 unop( Iop_64to8,
 IRExpr_Mux0X( mkexpr(outofrange),
 - mkexpr(sh_amt64),
 - mkU32(63)) ))
 + mkexpr(sh_amt),
 + mkU64(63)) ))
 );
 - set_XER_CA( PPC32G_FLAG_OP_SRAD,
 - mkexpr(rA), mkexpr(rS), mkexpr(sh_amt64),
 - getXER_CA32() );
 - */
 -
 + set_XER_CA( ty, PPC32G_FLAG_OP_SRAD,
 + mkexpr(rA), mkexpr(rS), mkexpr(sh_amt),
 + mkSzWiden32(ty, getXER_CA32(), /* Signed */False) );
 + break;
 + }
 +
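The clamp srad applies (shift amounts 64..127 behave like a shift by 63, leaving only sign bits — selected in the IR with the out-of-range `Mux0X`) can be modelled in C as follows. This is a sketch; it relies on `>>` of a negative value being arithmetic, which C leaves implementation-defined but every mainstream compiler provides:

```c
#include <stdint.h>

/* srad: arithmetic shift right doubleword, amount clamped to 63. */
static int64_t srad_model(int64_t rS, uint64_t rB)
{
    uint64_t amt = rB & 0x7F;                  /* low 7 bits of rB */
    unsigned sh  = (amt > 63) ? 63u : (unsigned)amt;
    return rS >> sh;
}
```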
 case 0x33A: case 0x33B: // sradi (Shift Right Algebraic DW Imm, PPC64 p571)
 sh_imm |= b1<<5;
 vassert(sh_imm < 64);
@@ -4965,15 +4940,12 @@

 case 0x21B: // srd (Shift Right DW, PPC64 p574)
 DIP("srd%s r%u,r%u,r%u\n", flag_rC ? "." : "", rA_addr, rS_addr, rB_addr);
 - DIP(" => not implemented\n");
 - return False;
 /* rA = rS >>u rB */
 - /* ppc32 semantics are:
 + /* ppc semantics are:
 srw(x,y) = (x >>u (y & 63)) -- primary result
 & ~((y << 57) >>s 63) -- make result 0
 for y in 64 .. 127
 */
 - /*
 assign( rA,
 binop(
 Iop_And64,
@@ -4985,7 +4957,7 @@
 binop( Iop_Sar64,
 binop(Iop_Shl64, mkexpr(rB), mkU8(57)),
 mkU8(63)))) );
 - */
 + break;
=20
default:
vex_printf("dis_int_shift(PPC32)(opc2)\n");
@@ -6113,7 +6085,8 @@
=20
 IRTemp frD = newTemp(Ity_F64);
 IRTemp frB = newTemp(Ity_F64);
 - IRTemp r_tmp = newTemp(Ity_I32);
 + IRTemp r_tmp32 = newTemp(Ity_I32);
 + IRTemp r_tmp64 = newTemp(Ity_I64);

 if (opc1 != 0x3F || b16to20 != 0) {
 vex_printf("dis_fp_round(PPC32)(instr)\n");
@@ -6130,47 +6103,37 @@

 case 0x00E: // fctiw (Float Conv to Int, PPC32 p404)
 DIP("fctiw%s fr%u,fr%u\n", flag_rC ? "." : "", frD_addr, frB_addr);
 - assign( r_tmp, binop(Iop_F64toI32, get_roundingmode(), mkexpr(frB)) );
 + assign( r_tmp32, binop(Iop_F64toI32, get_roundingmode(), mkexpr(frB)) );
 assign( frD, unop( Iop_ReinterpI64asF64,
 - unop( Iop_32Uto64, mkexpr(r_tmp))));
 + unop( Iop_32Uto64, mkexpr(r_tmp32))));
 break;

 case 0x00F: // fctiwz (Float Conv to Int, Round to Zero, PPC32 p405)
 DIP("fctiwz%s fr%u,fr%u\n", flag_rC ? "." : "", frD_addr, frB_addr);
 - assign( r_tmp, binop(Iop_F64toI32, mkU32(0x3), mkexpr(frB)) );
 + assign( r_tmp32, binop(Iop_F64toI32, mkU32(0x3), mkexpr(frB)) );
 assign( frD, unop( Iop_ReinterpI64asF64,
 - unop( Iop_32Uto64, mkexpr(r_tmp))));
 + unop( Iop_32Uto64, mkexpr(r_tmp32))));
 break;


 /* 64bit FP conversions */
 case 0x32E: // fctid (Float Conv to Int DW, PPC64 p437)
 DIP("fctid%s fr%u,fr%u\n", flag_rC ? "." : "", frD_addr, frB_addr);
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - assign( r_tmp, binop(Iop_F64toI64, get_roundingmode(), mkexpr(frB)) );
 - assign( frD, unop( Iop_ReinterpI64asF64, mkexpr(r_tmp)) );
 - */
 + assign( r_tmp64, binop(Iop_F64toI64, get_roundingmode(), mkexpr(frB)) );
 + assign( frD, unop( Iop_ReinterpI64asF64, mkexpr(r_tmp64)) );
 + break;

 case 0x32F: // fctidz (Float Conv to Int DW, Round to Zero, PPC64 p437)
 DIP("fctidz%s fr%u,fr%u\n", flag_rC ? "." : "", frD_addr, frB_addr);
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - assign( r_tmp, binop(Iop_F64toI64, mkU32(0x3), mkexpr(frB)) );
 - assign( frD, unop( Iop_ReinterpI64asF64, mkexpr(r_tmp)) );
 - */
 + assign( r_tmp64, binop(Iop_F64toI64, mkU32(0x3), mkexpr(frB)) );
 + assign( frD, unop( Iop_ReinterpI64asF64, mkexpr(r_tmp64)) );
 + break;

 case 0x34E: // fcfid (Float Conv from Int DW, PPC64 p434)
 DIP("fcfid%s fr%u,fr%u\n", flag_rC ? "." : "", frD_addr, frB_addr);
 - DIP(" => not implemented\n");
 - return False;
 - /*
 - ?
 - assign( r_tmp, unop( Iop_ReinterpF64asI64, mkexpr(rD)) );
 - assign( frD, binop(Iop_I64toF64, get_roundingmode(), mkexpr(frB)) );
 - */
 + assign( r_tmp64, unop( Iop_ReinterpF64asI64, mkexpr(frB)) );
 + assign( frD, binop(Iop_I64toF64, get_roundingmode(), mkexpr(r_tmp64)) );
 + break;
=20
default:
vex_printf("dis_fp_round(PPC32)(opc2)\n");
@@ -8722,7 +8685,7 @@
goto decode_failure;
=20
/* 64bit Integer Logical Instructions */
- case 0x3DA: case 0x03A: // extsw, cntlzw
+ case 0x3DA: case 0x03A: // extsw, cntlzd
if (!mode64) goto decode_failure;
if (dis_int_logic( theInstr )) goto decode_success;
goto decode_failure;
@@ -8735,7 +8698,7 @@
=20
/* 64bit Integer Shift Instructions */
case 0x01B: case 0x31A: // sld, srad
- case 0x33A: case 0x33B: // sradi_a, sradi_b
+ case 0x33A: case 0x33B: // sradi
case 0x21B: // srd
if (!mode64) goto decode_failure;
if (dis_int_shift( theInstr )) goto decode_success;
Modified: trunk/priv/host-ppc32/hdefs.c
===================================================================
--- trunk/priv/host-ppc32/hdefs.c 2005-12-22 03:01:17 UTC (rev 1502)
+++ trunk/priv/host-ppc32/hdefs.c 2005-12-22 14:32:35 UTC (rev 1503)
@@ -568,7 +568,8 @@
switch (op) {
case Pun_NOT: return "not";
case Pun_NEG: return "neg";
- case Pun_CLZ: return "cntlzw";
+ case Pun_CLZ32: return "cntlzw";
+ case Pun_CLZ64: return "cntlzd";
default: vpanic("showPPC32UnaryOp");
}
}
@@ -909,6 +910,20 @@
 i->Pin.FpF64toI32.src = src;
 return i;
 }
+PPC32Instr* PPC32Instr_FpF64toI64 ( HReg dst, HReg src ) {
+ PPC32Instr* i = LibVEX_Alloc(sizeof(PPC32Instr));
+ i->tag = Pin_FpF64toI64;
+ i->Pin.FpF64toI64.dst = dst;
+ i->Pin.FpF64toI64.src = src;
+ return i;
+}
+PPC32Instr* PPC32Instr_FpI64toF64 ( HReg dst, HReg src ) {
+ PPC32Instr* i = LibVEX_Alloc(sizeof(PPC32Instr));
+ i->tag = Pin_FpI64toF64;
+ i->Pin.FpI64toF64.dst = dst;
+ i->Pin.FpI64toF64.src = src;
+ return i;
+}
PPC32Instr* PPC32Instr_FpCMov ( PPC32CondCode cond, HReg dst, HReg src ) {
 PPC32Instr* i = LibVEX_Alloc(sizeof(PPC32Instr));
 i->tag = Pin_FpCMov;
@@ -1396,6 +1411,23 @@
ppHRegPPC32(i->Pin.FpF64toI32.dst);
vex_printf(",%%r0,%%r1");
return;
+ case Pin_FpF64toI64:
+ vex_printf("fctid %%fr7,");
+ ppHRegPPC32(i->Pin.FpF64toI64.src);
+ vex_printf("; stfdx %%fr7,%%r0,%%r1");
+ vex_printf("; ldx ");
+ ppHRegPPC32(i->Pin.FpF64toI64.dst);
+ vex_printf(",%%r0,%%r1");
+ return;
+ case Pin_FpI64toF64:
+ vex_printf("stdx ");
+ ppHRegPPC32(i->Pin.FpI64toF64.src);
+ vex_printf(",%%r0,%%r1");
+ vex_printf("; lfdx %%fr7,%%r0,%%r1");
+ vex_printf("; fcfid ");
+ ppHRegPPC32(i->Pin.FpI64toF64.dst);
+ vex_printf(",%%r7");
+ return;
case Pin_FpCMov:
vex_printf("fpcmov (%s) ", showPPC32CondCode(i->Pin.FpCMov.cond));
ppHRegPPC32(i->Pin.FpCMov.dst);
@@ -1731,6 +1763,16 @@
addHRegUse(u, HRmWrite, hregPPC32_FPR7());
addHRegUse(u, HRmRead, i->Pin.FpF64toI32.src);
return;
+ case Pin_FpF64toI64:
+ addHRegUse(u, HRmWrite, i->Pin.FpF64toI64.dst);
+ addHRegUse(u, HRmWrite, hregPPC32_FPR7());
+ addHRegUse(u, HRmRead, i->Pin.FpF64toI64.src);
+ return;
+ case Pin_FpI64toF64:
+ addHRegUse(u, HRmWrite, i->Pin.FpI64toF64.dst);
+ addHRegUse(u, HRmWrite, hregPPC32_FPR7());
+ addHRegUse(u, HRmRead, i->Pin.FpI64toF64.src);
+ return;
case Pin_FpCMov:
addHRegUse(u, HRmModify, i->Pin.FpCMov.dst);
addHRegUse(u, HRmRead, i->Pin.FpCMov.src);
@@ -1925,6 +1967,14 @@
mapReg(m, &i->Pin.FpF64toI32.dst);
mapReg(m, &i->Pin.FpF64toI32.src);
return;
+ case Pin_FpF64toI64:
+ mapReg(m, &i->Pin.FpF64toI64.dst);
+ mapReg(m, &i->Pin.FpF64toI64.src);
+ return;
+ case Pin_FpI64toF64:
+ mapReg(m, &i->Pin.FpI64toF64.dst);
+ mapReg(m, &i->Pin.FpI64toF64.src);
+ return;
case Pin_FpCMov:
mapReg(m, &i->Pin.FpCMov.dst);
mapReg(m, &i->Pin.FpCMov.src);
@@ -2728,9 +2778,12 @@
 case Pun_NEG: // neg r_dst,r_src
 p = mkFormXO(p, 31, r_dst, r_src, 0, 0, 104, 0);
 break;
 - case Pun_CLZ: // cntlzw r_dst, r_src
 + case Pun_CLZ32: // cntlzw r_dst, r_src
 p = mkFormX(p, 31, r_src, r_dst, 0, 26, 0);
 break;
 + case Pun_CLZ64: // cntlzd r_dst, r_src
 + p = mkFormX(p, 31, r_src, r_dst, 0, 58, 0);
 + break;
default: goto bad;
}
goto done;
@@ -3147,6 +3200,44 @@
goto done;
}
=20
+ case Pin_FpF64toI64: {
+ UInt r_dst = iregNo(i->Pin.FpF64toI64.dst, mode64);
+ UInt fr_src = fregNo(i->Pin.FpF64toI64.src);
+ UChar fr_tmp = 7; // Temp freg
+ PPC32AMode* am_addr;
+
+ // fctid (conv f64 to i64), PPC64 p437
+ p = mkFormX(p, 63, fr_tmp, 0, fr_src, 814, 0);
+
+ am_addr = PPC32AMode_RR( StackFramePtr(mode64),
+ hregPPC_GPR0(mode64) );
+
+ // stfdx (store fp64), PPC64 p589
+ p = doAMode_RR(p, 31, 727, fr_tmp, am_addr, mode64);
+
+ // ldx (load int64), PPC64 p476
+ p = doAMode_RR(p, 31, 21, r_dst, am_addr, mode64);
+ goto done;
+ }
+
+ case Pin_FpI64toF64: {
+ UInt r_src = iregNo(i->Pin.FpI64toF64.src, mode64);
+ UInt fr_dst = fregNo(i->Pin.FpI64toF64.dst);
+ UChar fr_tmp = 7; // Temp freg
+ PPC32AMode* am_addr = PPC32AMode_RR( StackFramePtr(mode64),
+ hregPPC_GPR0(mode64) );
+
+ // stdx r_src,r0,r1
+ p = doAMode_RR(p, 31, 149, r_src, am_addr, mode64);
+
+ // lfdx fr7,r0,r1
+ p = doAMode_RR(p, 31, 599, fr_tmp, am_addr, mode64);
+
+ // fcfid (conv i64 to f64), PPC64 p434
+ p = mkFormX(p, 63, fr_dst, 0, fr_tmp, 846, 0);
+ goto done;
+ }
+
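The stfdx/ldx (and stdx/lfdx) pairs emitted above move a value between an FP and an integer register through a stack slot without changing its bits. The C equivalent of that round-trip is a memcpy-based reinterpretation — a sketch of the semantics, not the backend code:

```c
#include <stdint.h>
#include <string.h>

/* View a double's bit pattern as a 64-bit integer. */
static uint64_t f64_to_bits(double d)
{
    uint64_t u;
    memcpy(&u, &d, sizeof u);
    return u;
}

/* And back again: same bits, reinterpreted as a double. */
static double bits_to_f64(uint64_t u)
{
    double d;
    memcpy(&d, &u, sizeof d);
    return d;
}
```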
case Pin_FpCMov: {
UInt fr_dst =3D fregNo(i->Pin.FpCMov.dst);
UInt fr_src =3D fregNo(i->Pin.FpCMov.src);
Modified: trunk/priv/host-ppc32/hdefs.h
===================================================================
--- trunk/priv/host-ppc32/hdefs.h 2005-12-22 03:01:17 UTC (rev 1502)
+++ trunk/priv/host-ppc32/hdefs.h 2005-12-22 14:32:35 UTC (rev 1503)
@@ -326,7 +326,8 @@
enum {
Pun_NEG,
Pun_NOT,
- Pun_CLZ
+ Pun_CLZ32,
+ Pun_CLZ64
}
PPC32UnaryOp;
=20
@@ -460,6 +461,8 @@
Pin_FpLdSt, /* FP load/store */
Pin_FpF64toF32, /* FP round IEEE754 double to IEEE754 single */
Pin_FpF64toI32, /* FP round IEEE754 double to 32-bit integer */
+ Pin_FpF64toI64, /* FP round IEEE754 double to 64-bit integer */
+ Pin_FpI64toF64, /* FP round IEEE754 64-bit integer to double */
Pin_FpCMov, /* FP floating point conditional move */
Pin_FpLdFPSCR, /* mtfsf */
Pin_FpCmp, /* FP compare, generating value into int reg */
@@ -645,6 +648,17 @@
HReg src;
HReg dst;
} FpF64toI32;
+ /* Ditto to 64-bit integer type. */
+ struct {
+ HReg src;
+ HReg dst;
+ } FpF64toI64;
+ /* By observing the current FPU rounding mode, reinterpret src from
+ a 64-bit integer to double type, and round into dst. */
+ struct {
+ HReg src;
+ HReg dst;
+ } FpI64toF64;
/* Mov src to dst on the given condition, which may not
be the bogus Xcc_ALWAYS. */
struct {
@@ -781,6 +795,8 @@
extern PPC32Instr* PPC32Instr_FpLdSt ( Bool isLoad, UChar sz, HReg, =
PPC32AMode* );
extern PPC32Instr* PPC32Instr_FpF64toF32 ( HReg dst, HReg src );
extern PPC32Instr* PPC32Instr_FpF64toI32 ( HReg dst, HReg src );
+extern PPC32Instr* PPC32Instr_FpF64toI64 ( HReg dst, HReg src );
+extern PPC32Instr* PPC32Instr_FpI64toF64 ( HReg dst, HReg src );
extern PPC32Instr* PPC32Instr_FpCMov ( PPC32CondCode, HReg dst, HReg src );
extern PPC32Instr* PPC32Instr_FpLdFPSCR ( HReg src );
extern PPC32Instr* PPC32Instr_FpCmp ( HReg dst, HReg srcL, HReg src=
R );
Modified: trunk/priv/host-ppc32/isel.c
===================================================================
--- trunk/priv/host-ppc32/isel.c 2005-12-22 03:01:17 UTC (rev 1502)
+++ trunk/priv/host-ppc32/isel.c 2005-12-22 14:32:35 UTC (rev 1503)
@@ -786,13 +786,10 @@
static
void set_FPU_rounding_mode ( ISelEnv* env, IRExpr* mode )
{
- HReg fr_src =3D newVRegF(env);
+ HReg fr_src = newVRegF(env);
 HReg r_src;

 - if (mode64)
 - vassert(typeOfIRExpr(env->type_env,mode) == Ity_I64);
 - else
 - vassert(typeOfIRExpr(env->type_env,mode) == Ity_I32);
 + vassert(typeOfIRExpr(env->type_env,mode) == Ity_I32);
=20
/* Only supporting the rounding-mode bits - the rest of FPSCR is 0x0
- so we can set the whole register at once (faster)
@@ -1324,7 +1321,22 @@
return r_dst;
}
=20
+ if (e->Iex.Binop.op == Iop_F64toI64) {
+ HReg fr_src = iselDblExpr(env, e->Iex.Binop.arg2);
+ HReg r_dst = newVRegI(env);
+ /* Set host rounding mode */
+ set_FPU_rounding_mode( env, e->Iex.Binop.arg1 );
=20
+ sub_from_sp( env, 16 );
+ addInstr(env, PPC32Instr_FpF64toI64(r_dst, fr_src));
+ add_to_sp( env, 16 );
+
+ /* Restore default FPU rounding. */
+ set_FPU_rounding_default( env );
+ return r_dst;
+ }
+
+
//.. /* C3210 flags following FPU partial remainder (fprem), both
//.. IEEE compliant (PREM1) and non-IEEE compliant (PREM). */
//.. if (e->Iex.Binop.op == Iop_PRemC3210F64
@@ -1538,19 +1550,14 @@
 r_dst, r_dst, PPC32RH_Imm(False,31)));
 return r_dst;
 }
 -
 -//.. case Iop_Ctz32: {
 -//.. /* Count trailing zeroes, implemented by x86 'bsfl' */
 -//.. HReg dst = newVRegI32(env);
 -//.. HReg src = iselIntExpr_R(env, e->Iex.Unop.arg);
 -//.. addInstr(env, X86Instr_Bsfr32(True,src,dst));
 -//.. return dst;
 -//.. }
 - case Iop_Clz32: {
 + case Iop_Clz32:
 + case Iop_Clz64: {
 + PPC32UnaryOp op_clz =
 + (e->Iex.Unop.op == Iop_Clz32) ? Pun_CLZ32 : Pun_CLZ64;
 /* Count leading zeroes. */
 HReg r_dst = newVRegI(env);
 HReg r_src = iselIntExpr_R(env, e->Iex.Unop.arg);
 - addInstr(env, PPC32Instr_Unary(Pun_CLZ,r_dst,r_src));
 + addInstr(env, PPC32Instr_Unary(op_clz,r_dst,r_src));
return r_dst;
}
case Iop_Neg8:
@@ -1590,6 +1597,27 @@
/* These are no-ops. */
return iselIntExpr_R(env, e->Iex.Unop.arg);
=20
+ /* ReinterpF64asI64(e) */
+ /* Given an IEEE754 double, produce an I64 with the same bit
+ pattern. */
+ case Iop_ReinterpF64asI64: {
+ PPC32AMode *am_addr;
+ HReg fr_src = iselDblExpr(env, e->Iex.Unop.arg);
+ HReg r_dst = newVRegI(env);
+ vassert(mode64);
+
+ sub_from_sp( env, 16 ); // Move SP down 16 bytes
+ am_addr = PPC32AMode_IR(0, StackFramePtr(mode64));
+
+ // store as F64
+ addInstr(env, PPC32Instr_FpLdSt( False/*store*/, 8, fr_src, am_addr ));
+ // load as Ity_I64
+ addInstr(env, PPC32Instr_Load( 8, False, r_dst, am_addr, mode64 ));
+
+ add_to_sp( env, 16 ); // Reset SP
+ return r_dst;
+ }
+
default:=20
break;
}
@@ -3159,18 +3187,16 @@
else
vpanic("iselDblExpr(ppc32): const");
=20
 - if (mode64) {
 - HReg r_src = newVRegI(env);
 - vassert(0);
 - // AWAITING TEST CASE
 - addInstr(env, PPC32Instr_LI(r_src, u.u64, mode64));
 - return mk_LoadR64toFPR( env, r_src ); // 1*I64 -> F64
 - } else { // mode32
 + if (!mode64) {
 HReg r_srcHi = newVRegI(env);
 HReg r_srcLo = newVRegI(env);
 addInstr(env, PPC32Instr_LI(r_srcHi, u.u32x2[1], mode64));
 addInstr(env, PPC32Instr_LI(r_srcLo, u.u32x2[0], mode64));
 return mk_LoadRR32toFPR( env, r_srcHi, r_srcLo );
 + } else { // mode64
 + HReg r_src = newVRegI(env);
 + addInstr(env, PPC32Instr_LI(r_src, u.u64, mode64));
 + return mk_LoadR64toFPR( env, r_src ); // 1*I64 -> F64
}
}
=20
@@ -3217,6 +3243,23 @@
 addInstr(env, PPC32Instr_FpBinary(fpop, r_dst, r_srcL, r_srcR));
 return r_dst;
 }
+
+ if (e->Iex.Binop.op == Iop_I64toF64) {
+ HReg fr_dst = newVRegF(env);
+ HReg r_src = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ vassert(mode64);
+
+ /* Set host rounding mode */
+ set_FPU_rounding_mode( env, e->Iex.Binop.arg1 );
+
+ sub_from_sp( env, 16 );
+ addInstr(env, PPC32Instr_FpI64toF64(fr_dst, r_src));
+ add_to_sp( env, 16 );
+
+ /* Restore default FPU rounding. */
+ set_FPU_rounding_default( env );
+ return fr_dst;
+ }
}
=20
//.. if (e->tag == Iex_Binop && e->Iex.Binop.op == Iop_RoundF64) {
@@ -3299,6 +3342,7 @@
//.. add_to_esp(env, 4);
//.. return dst;
//.. }
+
case Iop_ReinterpI64asF64: {
/* Given an I64, produce an IEEE754 double with the same
bit pattern. */
@@ -3307,8 +3351,8 @@
iselInt64Expr( &r_srcHi, &r_srcLo, env, e->Iex.Unop.arg);
return mk_LoadRR32toFPR( env, r_srcHi, r_srcLo );
} else {
- // TODO
- vassert(0);
+ HReg r_src = iselIntExpr_R(env, e->Iex.Unop.arg);
+ return mk_LoadR64toFPR( env, r_src );
}
}
case Iop_F32toF64: {
|
|
From: <sv...@va...> - 2005-12-22 08:08:07
|
Author: njn
Date: 2005-12-22 08:08:01 +0000 (Thu, 22 Dec 2005)
New Revision: 5403
Log:
Clean up the secmap access functions a little more.
Modified:
branches/COMPVBITS/memcheck/mc_main.c
Modified: branches/COMPVBITS/memcheck/mc_main.c
===================================================================
--- branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 06:20:59 UTC (rev 5402)
+++ branches/COMPVBITS/memcheck/mc_main.c 2005-12-22 08:08:01 UTC (rev 5403)
@@ -232,10 +232,14 @@
#define SM_OFF(aaa) (((aaa) & 0xffff) >> 2)
#define SM_OFF_64(aaa) (((aaa) & 0xffff) >> 3)
=20
-static inline Addr start_of_this_sm ( Addr a ) {
+// Paranoia: it's critical for performance that the requested inlining
+// occurs. So try extra hard.
+#define INLINE inline __attribute__((always_inline))
+
+static INLINE Addr start_of_this_sm ( Addr a ) {
return (a & (~SM_MASK));
}
-static inline Bool is_start_of_sm ( Addr a ) {
+static INLINE Bool is_start_of_sm ( Addr a ) {
 return (start_of_this_sm(a) == a);
}
=20
@@ -255,7 +259,7 @@
=20
static SecMap sm_distinguished[3];
=20
-static inline Bool is_distinguished_sm ( SecMap* sm ) {
+static INLINE Bool is_distinguished_sm ( SecMap* sm ) {
 return sm >= &sm_distinguished[0] && sm <= &sm_distinguished[2];
}
=20
@@ -381,16 +385,57 @@
=20
/* --------------- SecMap fundamentals --------------- */
=20
-__attribute__((always_inline))
-static inline SecMap* get_secmap_readable_low ( Addr a )
+// In all these, 'low' means it's definitely in the main primary map,
+// 'high' means it's definitely in the auxiliary table.
+
+static INLINE SecMap** get_secmap_low_ptr ( Addr a )
{
 UWord pm_off = a >> 16;
# if VG_DEBUG_MEMORY >= 1
tl_assert(pm_off < N_PRIMARY_MAP);
# endif
- return primary_map[ pm_off ];
+ return &primary_map[ pm_off ];
}
=20
+static INLINE SecMap** get_secmap_high_ptr ( Addr a )
+{
+ AuxMapEnt* am = find_or_alloc_in_auxmap(a);
+ return &am->sm;
+}
+
+static SecMap** get_secmap_ptr ( Addr a )
+{
+ return ( a <= MAX_PRIMARY_ADDRESS
+ ? get_secmap_low_ptr(a)
+ : get_secmap_high_ptr(a));
+}
+
+static INLINE SecMap* get_secmap_readable_low ( Addr a )
+{
+ return *get_secmap_low_ptr(a);
+}
+
+static INLINE SecMap* get_secmap_readable_high ( Addr a )
+{
+ return *get_secmap_high_ptr(a);
+}
+
+static INLINE SecMap* get_secmap_writable_low(Addr a)
+{
+ SecMap** p = get_secmap_low_ptr(a);
+ if (EXPECTED_NOT_TAKEN(is_distinguished_sm(*p)))
+ *p = copy_for_writing(*p);
+ return *p;
+}
+
+static INLINE SecMap* get_secmap_writable_high ( Addr a )
+{
+ SecMap** p = get_secmap_high_ptr(a);
+ if (EXPECTED_NOT_TAKEN(is_distinguished_sm(*p)))
+ *p = copy_for_writing(*p);
+ return *p;
+}
+
/* Produce the secmap for 'a', either from the primary map or by
ensuring there is an entry for it in the aux primary map. The
secmap may be a distinguished one as the caller will only want to
@@ -398,14 +443,25 @@
*/
static SecMap* get_secmap_readable ( Addr a )
{
 - if (a <= MAX_PRIMARY_ADDRESS) {
 - return get_secmap_readable_low(a);
 - } else {
 - AuxMapEnt* am = find_or_alloc_in_auxmap(a);
 - return am->sm;
 - }
 + return ( a <= MAX_PRIMARY_ADDRESS
+ ? get_secmap_readable_low (a)
+ : get_secmap_readable_high(a) );
}
=20
+/* Produce the secmap for 'a', either from the primary map or by
+ ensuring there is an entry for it in the aux primary map. The
+ secmap may not be a distinguished one, since the caller will want
+ to be able to write it. If it is a distinguished secondary, make a
+ writable copy of it, install it, and return the copy instead. (COW
+ semantics).
+*/
+static SecMap* get_secmap_writable ( Addr a )
+{
+ return ( a <= MAX_PRIMARY_ADDRESS
+ ? get_secmap_writable_low (a)
+ : get_secmap_writable_high(a) );
+}
+
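The copy-on-write pattern these writable lookups implement can be sketched as a toy model: many table entries alias one shared "distinguished" page, and the first write through an entry replaces it with a private copy. All names here are illustrative, not memcheck's:

```c
#include <stdlib.h>
#include <string.h>

enum { PAGE = 16, SLOTS = 4 };
static unsigned char dist_page[PAGE];           /* shared default page */
static unsigned char *table[SLOTS] = {
    dist_page, dist_page, dist_page, dist_page };

/* Return a writable page for slot idx, copying the shared page on
   first write (COW).  Error handling omitted for brevity. */
static unsigned char *get_writable(int idx)
{
    if (table[idx] == dist_page) {
        unsigned char *copy = malloc(PAGE);
        memcpy(copy, dist_page, PAGE);
        table[idx] = copy;
    }
    return table[idx];
}
```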
/* If 'a' has a SecMap, produce it. Else produce NULL. But don't
allocate one if one doesn't already exist. This is used by the
leak checker.
@@ -420,39 +476,6 @@
}
}
=20
-// Produce the secmap for 'a', where 'a' is known to be in the primary map.
-__attribute__((always_inline))
-static inline SecMap* get_secmap_writable_low(Addr a)
-{
- UWord pm_off = a >> 16;
-# if VG_DEBUG_MEMORY >= 1
- tl_assert(pm_off < N_PRIMARY_MAP);
-# endif
- if (EXPECTED_NOT_TAKEN(is_distinguished_sm(primary_map[pm_off])))
- primary_map[pm_off] = copy_for_writing(primary_map[pm_off]);
- return primary_map[pm_off];
-}
-
-/* Produce the secmap for 'a', either from the primary map or by
- ensuring there is an entry for it in the aux primary map. The
- secmap may not be a distinguished one, since the caller will want
- to be able to write it. If it is a distinguished secondary, make a
- writable copy of it, install it, and return the copy instead. (COW
- semantics).
-*/
-static SecMap* get_secmap_writable ( Addr a )
-{
- if (a <= MAX_PRIMARY_ADDRESS) {
- return get_secmap_writable_low(a);
- } else {
- AuxMapEnt* am = find_or_alloc_in_auxmap(a);
- if (is_distinguished_sm(am->sm))
- am->sm = copy_for_writing(am->sm);
- return am->sm;
- }
-}
-
-
/* --------------- Secondary V bit table ------------ */
=20
// Note: the nodes in this table can become stale. Eg. if you write a
@@ -559,7 +582,7 @@
=20
/* Returns the offset in memory of the byteno-th most significant byte
in a wordszB-sized word, given the specified endianness. */
-static inline UWord byte_offset_w ( UWord wordszB, Bool bigendian,
+static INLINE UWord byte_offset_w ( UWord wordszB, Bool bigendian,
UWord byteno ) {
return bigendian ? (wordszB-1-byteno) : byteno;
}
@@ -567,7 +590,7 @@
=20
/* --------------- Fundamental functions --------------- */
=20
-static inline
+static INLINE
void insert_vabits8_into_vabits32 ( Addr a, UChar vabits8, UChar* vabits32 )
{
 UInt shift = (a & 3) << 1; // shift by 0, 2, 4, or 6
 *vabits32 |= (vabits8 << shift); // mask in the two new bits
*vabits32 |=3D (vabits8 << shift); // mask in the two new bits
}
=20
-static inline
+static INLINE
void insert_vabits16_into_vabits32 ( Addr a, UChar vabits16, UChar* vabi=
ts32 )
{
UInt shift;
@@ -585,7 +608,7 @@
*vabits32 |=3D (vabits16 << shift); // mask in the four new bits
}
=20
-static inline
+static INLINE
UChar extract_vabits8_from_vabits32 ( Addr a, UChar vabits32 )
{
UInt shift =3D (a & 3) << 1; // shift by 0, 2, 4, or 6
@@ -593,7 +616,7 @@
return 0x3 & vabits32; // mask out the rest
}
=20
-static inline
+static INLINE
UChar extract_vabits16_from_vabits32 ( Addr a, UChar vabits32 )
{
UInt shift;
@@ -607,7 +630,7 @@
// clever things like combine the auxmap check (in
// get_secmap_{read,writ}able) with alignment checks.
=20
-static inline
+static INLINE
void set_vabits8 ( Addr a, UChar vabits8 )
{
SecMap* sm =3D get_secmap_writable(a);
@@ -615,7 +638,7 @@
insert_vabits8_into_vabits32( a, vabits8, &(sm->vabits32[sm_off]) );
}
=20
-static inline
+static INLINE
UChar get_vabits8 ( Addr a )
{
SecMap* sm =3D get_secmap_readable(a);
@@ -755,7 +778,7 @@
=20
//zz /* Reading/writing of the bitmaps, for aligned word-sized accesses. */
//zz=20
-//zz static __inline__ UChar get_abits4_ALIGNED ( Addr a )
+//zz static INLINE UChar get_abits4_ALIGNED ( Addr a )
//zz {
//zz SecMap* sm;
//zz UInt sm_off;
@@ -772,7 +795,7 @@
//zz return abits8;
//zz }
//zz=20
-//zz static UInt __inline__ get_vbytes4_ALIGNED ( Addr a )
+//zz static UInt INLINE get_vbytes4_ALIGNED ( Addr a )
//zz {
//zz SecMap* sm = primary_map[PM_IDX(a)];
//zz UInt sm_off = SM_OFF(a);
@@ -784,7 +807,7 @@
//zz }
//zz=20
//zz=20
-//zz static void __inline__ set_vbytes4_ALIGNED ( Addr a, UInt vbytes )
+//zz static void INLINE set_vbytes4_ALIGNED ( Addr a, UInt vbytes )
//zz {
//zz SecMap* sm;
//zz UInt sm_off;
@@ -803,33 +826,6 @@
/*--- Setting permissions over address ranges. ---*/
/*------------------------------------------------------------*/
=20
-/* Given address 'a', find the place where the pointer to a's
- secondary map lives. If a falls into the primary map, the returned
- value points to one of the entries in primary_map[]. Otherwise,
- the auxiliary primary map is searched for 'a', or an entry is
- created for it; either way, the returned value points to the
- relevant AuxMapEnt's .sm field.
-
- The point of this is to enable set_address_range_perms to assign
- secondary maps in a uniform way, without worrying about whether a
- given secondary map is pointed to from the main or auxiliary
- primary map.
-*/
-
-static SecMap** find_secmap_binder_for_addr ( Addr a )
-{
- if (a > MAX_PRIMARY_ADDRESS) {
- AuxMapEnt* am = find_or_alloc_in_auxmap(a);
- return &am->sm;
- } else {
- UWord sec_no = (UWord)(a >> 16);
-# if VG_DEBUG_MEMORY >= 1
- tl_assert(sec_no < N_PRIMARY_MAP);
-# endif
- return &primary_map[sec_no];
- }
-}
-
static void set_address_range_perms ( Addr a, SizeT lenT, UWord vabits64=
,
UWord dsm_num )
{
@@ -837,7 +833,7 @@
SizeT lenA, lenB, len_to_next_secmap;
Addr aNext;
SecMap* sm;
- SecMap** binder;
+ SecMap** sm_ptr;
SecMap* example_dsm;

PROF_EVENT(150, "set_address_range_perms");
@@ -936,19 +932,19 @@
//--------------------------------------------------------------------------

// If it's distinguished, make it undistinguished if necessary.
- binder = find_secmap_binder_for_addr(a);
- if (is_distinguished_sm(*binder)) {
- if (*binder == example_dsm) {
+ sm_ptr = get_secmap_ptr(a);
+ if (is_distinguished_sm(*sm_ptr)) {
+ if (*sm_ptr == example_dsm) {
// Sec-map already has the V+A bits that we want, so skip.
PROF_EVENT(154, "set_address_range_perms-dist-sm1-quick");
a = aNext;
lenA = 0;
} else {
PROF_EVENT(155, "set_address_range_perms-dist-sm1");
- *binder = copy_for_writing(*binder);
+ *sm_ptr = copy_for_writing(*sm_ptr);
}
}
- sm = *binder;
+ sm = *sm_ptr;

// 1 byte steps
while (True) {
@@ -993,16 +989,16 @@
if (lenB < SM_SIZE) break;
tl_assert(is_start_of_sm(a));
PROF_EVENT(159, "set_address_range_perms-loop64K");
- binder = find_secmap_binder_for_addr(a);
- if (!is_distinguished_sm(*binder)) {
+ sm_ptr = get_secmap_ptr(a);
+ if (!is_distinguished_sm(*sm_ptr)) {
PROF_EVENT(160, "set_address_range_perms-loop64K-free-dist-sm");
// Free the non-distinguished sec-map that we're replacing. This
// case happens moderately often, enough to be worthwhile.
- VG_(am_munmap_valgrind)((Addr)*binder, sizeof(SecMap));
+ VG_(am_munmap_valgrind)((Addr)*sm_ptr, sizeof(SecMap));
n_secmaps_deissued++; // Needed for the expensive sanity check
}
// Make the sec-map entry point to the example DSM
- *binder = example_dsm;
+ *sm_ptr = example_dsm;
lenB -= SM_SIZE;
a += SM_SIZE;
}
@@ -1018,18 +1014,18 @@
tl_assert(is_start_of_sm(a) && lenB < SM_SIZE);

// If it's distinguished, make it undistinguished if necessary.
- binder = find_secmap_binder_for_addr(a);
- if (is_distinguished_sm(*binder)) {
- if (*binder == example_dsm) {
+ sm_ptr = get_secmap_ptr(a);
+ if (is_distinguished_sm(*sm_ptr)) {
+ if (*sm_ptr == example_dsm) {
// Sec-map already has the V+A bits that we want, so stop.
PROF_EVENT(161, "set_address_range_perms-dist-sm2-quick");
return;
} else {
PROF_EVENT(162, "set_address_range_perms-dist-sm2");
- *binder = copy_for_writing(*binder);
+ *sm_ptr = copy_for_writing(*sm_ptr);
}
}
- sm = *binder;
+ sm = *sm_ptr;

// 8-aligned, 8 byte steps
while (True) {
@@ -1107,7 +1103,7 @@

/* --- Fast case permission setters, for dealing with stacks. --- */

-static __inline__
+static INLINE
void make_aligned_word32_writable ( Addr a )
{
UWord sm_off;
@@ -1132,7 +1128,7 @@
}


-static __inline__
+static INLINE
void make_aligned_word32_noaccess ( Addr a )
{
UWord sm_off;
@@ -1158,7 +1154,7 @@


/* Nb: by "aligned" here we mean 8-byte aligned */
-static __inline__
+static INLINE
void make_aligned_word64_writable ( Addr a )
{
UWord sm_off64;
@@ -1183,7 +1179,7 @@
}


-static __inline__
+static INLINE
void make_aligned_word64_noaccess ( Addr a )
{
UWord sm_off64;
@@ -1438,32 +1434,25 @@
/* Idea is: go fast when
* 8-aligned and length is 128
* the sm is available in the main primary map
- * the address range falls entirely with a single
- secondary map
- * the SM is modifiable
- If all those conditions hold, just update the V+A bits
- by writing directly into the vabits array.
+ * the address range falls entirely with a single secondary map
+ If all those conditions hold, just update the V+A bits by writing
+ directly into the vabits array. (If the sm was distinguished, this
+ will make a copy and then write to it.)
*/
- if (EXPECTED_TAKEN( len == 128
- && VG_IS_8_ALIGNED(base)
- )) {
+ if (EXPECTED_TAKEN( len == 128 && VG_IS_8_ALIGNED(base) )) {
/* Now we know the address range is suitably sized and aligned. */
- UWord a_lo = (UWord)base;
- UWord a_hi = (UWord)(base + 127);
- UWord sec_lo = a_lo >> 16;
- UWord sec_hi = a_hi >> 16;
-
- if (EXPECTED_TAKEN( sec_lo == sec_hi
- && sec_lo <= N_PRIMARY_MAP
- )) {
+ UWord a_lo = (UWord)(base);
+ UWord a_hi = (UWord)(base + 127);
+ tl_assert(a_lo < a_hi); // paranoia: detect overflow
+ if (a_hi < MAX_PRIMARY_ADDRESS) {
+ // Now we know the entire range is within the main primary map.
+ SecMap* sm = get_secmap_writable_low(a_lo);
+ SecMap* sm_hi = get_secmap_writable_low(a_hi);
/* Now we know that the entire address range falls within a
single secondary map, and that that secondary 'lives' in
the main primary map. */
- SecMap* sm = primary_map[sec_lo];
-
- if (EXPECTED_TAKEN( !is_distinguished_sm(sm) )) {
- /* And finally, now we know that the secondary in question
- is modifiable. */
+ if (EXPECTED_TAKEN(sm == sm_hi)) {
+ // Finally, we know that the range is entirely within one secmap.
UWord v_off = SM_OFF(a_lo);
UShort* p = (UShort*)(&sm->vabits32[v_off]);
p[ 0] = VA_BITS64_WRITABLE;
@@ -2646,7 +2635,7 @@

/* ------------------------ Size = 8 ------------------------ */

-static inline __attribute__((always_inline))
+static INLINE
ULong mc_LOADV8 ( Addr a, Bool isBigEndian )
{
UWord sm_off64, vabits64;
@@ -2692,7 +2681,7 @@
}


-static inline __attribute__((always_inline))
+static INLINE
void mc_STOREV8 ( Addr a, ULong vbytes, Bool isBigEndian )
{
UWord sm_off64, vabits64;
@@ -2754,7 +2743,7 @@

/* ------------------------ Size = 4 ------------------------ */

-static inline __attribute__((always_inline))
+static INLINE
UWord mc_LOADV4 ( Addr a, Bool isBigEndian )
{
UWord sm_off, vabits32;
@@ -2802,7 +2791,7 @@
}


-static inline __attribute__((always_inline))
+static INLINE
void mc_STOREV4 ( Addr a, UWord vbytes, Bool isBigEndian )
{
UWord sm_off, vabits32;
@@ -2896,7 +2885,7 @@

/* ------------------------ Size = 2 ------------------------ */

-static inline __attribute__((always_inline))
+static INLINE
UWord mc_LOADV2 ( Addr a, Bool isBigEndian )
{
UWord sm_off, vabits32;
@@ -2947,7 +2936,7 @@
}


-static inline __attribute__((always_inline))
+static INLINE
void mc_STOREV2 ( Addr a, UWord vbytes, Bool isBigEndian )
{
UWord sm_off, vabits32;
|
|
From: <sv...@va...> - 2005-12-22 06:21:03
|
Author: njn
Date: 2005-12-22 06:20:59 +0000 (Thu, 22 Dec 2005)
New Revision: 5402
Log:
Add a destructor function to OSet_Destroy() which can be called for each
node.
Modified:
trunk/coregrind/m_oset.c
trunk/include/pub_tool_oset.h
trunk/memcheck/tests/oset_test.c
Modified: trunk/coregrind/m_oset.c
===================================================================
--- trunk/coregrind/m_oset.c 2005-12-22 06:14:42 UTC (rev 5401)
+++ trunk/coregrind/m_oset.c 2005-12-22 06:20:59 UTC (rev 5402)
@@ -294,7 +294,7 @@
}

// Destructor, frees up all memory held by remaining nodes.
-void VG_(OSet_Destroy)(AvlTree* t)
+void VG_(OSet_Destroy)(AvlTree* t, OSetNodeDestroy_t destroyNode)
{
AvlNode* n;
Int i, sz = 0;
@@ -317,6 +317,7 @@
if (n->right) stackPush(t, n->right, 1);
break;
case 3:
+ if (destroyNode) destroyNode(n);
t->free(n);
sz++;
break;
Modified: trunk/include/pub_tool_oset.h
===================================================================
--- trunk/include/pub_tool_oset.h 2005-12-22 06:14:42 UTC (rev 5401)
+++ trunk/include/pub_tool_oset.h 2005-12-22 06:20:59 UTC (rev 5402)
@@ -65,9 +65,10 @@
typedef struct _OSet OSet;
typedef struct _OSetNode OSetNode;

-typedef Int (*OSetCmp_t) ( void* key, void* elem );
-typedef void* (*OSetAlloc_t) ( SizeT szB );
-typedef void (*OSetFree_t) ( void* p );
+typedef Int (*OSetCmp_t) ( void* key, void* elem );
+typedef void* (*OSetAlloc_t) ( SizeT szB );
+typedef void (*OSetFree_t) ( void* p );
+typedef void (*OSetNodeDestroy_t) ( void* elem );

/*--------------------------------------------------------------------*/
/*--- Creating and destroying OSets and OSet members ---*/
@@ -85,7 +86,9 @@
// If cmp is NULL, keyOff must be zero. This is checked.
//
// * Destroy: frees all nodes in the table, plus the memory used by
-// the table itself.
+// the table itself. The passed-in function is called on each node first
+// to allow the destruction of any attached resources; if NULL it is not
+// called.
//
// * AllocNode: Allocate and zero memory for a node to go into the OSet.
// Uses the alloc function given to VG_(OSet_Create)() to allocated a node
@@ -101,7 +104,7 @@

extern OSet* VG_(OSet_Create) ( OffT keyOff, OSetCmp_t cmp,
OSetAlloc_t alloc, OSetFree_t free );
-extern void VG_(OSet_Destroy) ( OSet* os );
+extern void VG_(OSet_Destroy) ( OSet* os, OSetNodeDestroy_t destroyNode );
extern void* VG_(OSet_AllocNode) ( OSet* os, SizeT elemSize );
extern void VG_(OSet_FreeNode) ( OSet* os, void* elem );

Modified: trunk/memcheck/tests/oset_test.c
===================================================================
--- trunk/memcheck/tests/oset_test.c 2005-12-22 06:14:42 UTC (rev 5401)
+++ trunk/memcheck/tests/oset_test.c 2005-12-22 06:20:59 UTC (rev 5402)
@@ -182,7 +182,7 @@
OSet_Print(oset1, "foo", intToStr);

// Destroy the OSet
- VG_(OSet_Destroy)(oset1);
+ VG_(OSet_Destroy)(oset1, NULL);
}


@@ -353,7 +353,7 @@
}

// Destroy the OSet
- VG_(OSet_Destroy)(oset2);
+ VG_(OSet_Destroy)(oset2, NULL);
}

//-----------------------------------------------------------------------
|
|
From: <sv...@va...> - 2005-12-22 06:14:45
|
Author: njn
Date: 2005-12-22 06:14:42 +0000 (Thu, 22 Dec 2005)
New Revision: 5401
Log:
Add the name of the input file to cg_annotate's output.
Modified:
trunk/cachegrind/cg_annotate.in
Modified: trunk/cachegrind/cg_annotate.in
===================================================================
--- trunk/cachegrind/cg_annotate.in 2005-12-22 03:35:12 UTC (rev 5400)
+++ trunk/cachegrind/cg_annotate.in 2005-12-22 06:14:42 UTC (rev 5401)
@@ -497,6 +497,7 @@
print($fancy);
print($desc);
print("Command: $cmd\n");
+ print("Data file: $input_file\n");
print("Events recorded: @events\n");
print("Events shown: @show_events\n");
print("Event sort order: @sort_events\n");
|
|
From: <js...@ac...> - 2005-12-22 03:59:43
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2005-12-22 03:30:01 GMT
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 208 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
From: Tom H. <th...@cy...> - 2005-12-22 03:55:47
|
Nightly build on aston ( x86_64, Fedora Core 3 ) started at 2005-12-22 03:05:15 GMT
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 227 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/mempool (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 227 tests, 6 stderr failures, 2 stdout failures =================
memcheck/tests/mempool (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/mremap2 (stdout)
none/tests/tls (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Thu Dec 22 03:33:32 2005
--- new.short Thu Dec 22 03:55:39 2005
***************
*** 8,10 ****
! == 227 tests, 6 stderr failures, 2 stdout failures =================
memcheck/tests/mempool (stderr)
--- 8,10 ----
! == 227 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/mempool (stderr)
***************
*** 14,16 ****
none/tests/mremap2 (stdout)
- none/tests/tls (stdout)
none/tests/x86/faultstatus (stderr)
--- 14,15 ----
|
From: <js...@ac...> - 2005-12-22 03:47:14
|
Nightly build on g5 ( YDL 4.0, ppc970 ) started at 2005-12-22 04:40:00 CET
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 176 tests, 15 stderr failures, 1 stdout failure =================
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/supp1 (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/xml1 (stderr)
massif/tests/toobig-allocs (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
|
From: Tom H. <to...@co...> - 2005-12-22 03:43:16
|
Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2005-12-22 03:30:04 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 210 tests, 7 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
From: <sv...@va...> - 2005-12-22 03:35:16
|
Author: sewardj
Date: 2005-12-22 03:35:12 +0000 (Thu, 22 Dec 2005)
New Revision: 5400
Log:
Make a start adding syscalls.
Modified:
trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
Modified: trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c
===================================================================
--- trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 03:33:16 UTC (rev 5399)
+++ trunk/coregrind/m_syswrap/syswrap-ppc64-linux.c 2005-12-22 03:35:12 UTC (rev 5400)
@@ -1164,236 +1164,236 @@
// (unknown).

const SyscallTableEntry ML_(syscall_table)[] = {
-// _____(__NR_restart_syscall, sys__NR_restart_syscall), // 0
-// _____(__NR_exit, sys__NR_exit), // 1
-// _____(__NR_fork, sys__NR_fork), // 2
-// _____(__NR_read, sys__NR_read), // 3
-// _____(__NR_write, sys__NR_write), // 4
+// _____(__NR_restart_syscall, sys_restart_syscall), // 0
+// _____(__NR_exit, sys_exit), // 1
+// _____(__NR_fork, sys_fork), // 2
+// _____(__NR_read, sys_read), // 3
+// _____(__NR_write, sys_write), // 4
 
-// _____(__NR_open, sys__NR_open), // 5
-// _____(__NR_close, sys__NR_close), // 6
-// _____(__NR_waitpid, sys__NR_waitpid), // 7
-// _____(__NR_creat, sys__NR_creat), // 8
-// _____(__NR_link, sys__NR_link), // 9
+// _____(__NR_open, sys_open), // 5
+// _____(__NR_close, sys_close), // 6
+// _____(__NR_waitpid, sys_waitpid), // 7
+// _____(__NR_creat, sys_creat), // 8
+// _____(__NR_link, sys_link), // 9
 
-// _____(__NR_unlink, sys__NR_unlink), // 10
-// _____(__NR_execve, sys__NR_execve), // 11
-// _____(__NR_chdir, sys__NR_chdir), // 12
-// _____(__NR_time, sys__NR_time), // 13
-// _____(__NR_mknod, sys__NR_mknod), // 14
+// _____(__NR_unlink, sys_unlink), // 10
+// _____(__NR_execve, sys_execve), // 11
+// _____(__NR_chdir, sys_chdir), // 12
+// _____(__NR_time, sys_time), // 13
+// _____(__NR_mknod, sys_mknod), // 14
 
-// _____(__NR_chmod, sys__NR_chmod), // 15
-// _____(__NR_lchown, sys__NR_lchown), // 16
-// _____(__NR_break, sys__NR_break), // 17
-// _____(__NR_oldstat, sys__NR_oldstat), // 18
-// _____(__NR_lseek, sys__NR_lseek), // 19
+// _____(__NR_chmod, sys_chmod), // 15
+// _____(__NR_lchown, sys_lchown), // 16
+// _____(__NR_break, sys_break), // 17
+// _____(__NR_oldstat, sys_oldstat), // 18
+// _____(__NR_lseek, sys_lseek), // 19
 
-// _____(__NR_getpid, sys__NR_getpid), // 20
-// _____(__NR_mount, sys__NR_mount), // 21
-// _____(__NR_umount, sys__NR_umount), // 22
-// _____(__NR_setuid, sys__NR_setuid), // 23
-// _____(__NR_getuid, sys__NR_getuid), // 24
+// _____(__NR_getpid, sys_getpid), // 20
+// _____(__NR_mount, sys_mount), // 21
+// _____(__NR_umount, sys_umount), // 22
+// _____(__NR_setuid, sys_setuid), // 23
+// _____(__NR_getuid, sys_getuid), // 24
 
-// _____(__NR_stime, sys__NR_stime), // 25
-// _____(__NR_ptrace, sys__NR_ptrace), // 26
-// _____(__NR_alarm, sys__NR_alarm), // 27
-// _____(__NR_oldfstat, sys__NR_oldfstat), // 28
-// _____(__NR_pause, sys__NR_pause), // 29
+// _____(__NR_stime, sys_stime), // 25
+// _____(__NR_ptrace, sys_ptrace), // 26
+// _____(__NR_alarm, sys_alarm), // 27
+// _____(__NR_oldfstat, sys_oldfstat), // 28
+// _____(__NR_pause, sys_pause), // 29
 
-// _____(__NR_utime, sys__NR_utime), // 30
-// _____(__NR_stty, sys__NR_stty), // 31
-// _____(__NR_gtty, sys__NR_gtty), // 32
-// _____(__NR_access, sys__NR_access), // 33
-// _____(__NR_nice, sys__NR_nice), // 34
+// _____(__NR_utime, sys_utime), // 30
+// _____(__NR_stty, sys_stty), // 31
+// _____(__NR_gtty, sys_gtty), // 32
+// _____(__NR_access, sys_access), // 33
+// _____(__NR_nice, sys_nice), // 34
 
-// _____(__NR_ftime, sys__NR_ftime), // 35
-// _____(__NR_sync, sys__NR_sync), // 36
-// _____(__NR_kill, sys__NR_kill), // 37
-// _____(__NR_rename, sys__NR_rename), // 38
-// _____(__NR_mkdir, sys__NR_mkdir), // 39
+// _____(__NR_ftime, sys_ftime), // 35
+// _____(__NR_sync, sys_sync), // 36
+// _____(__NR_kill, sys_kill), // 37
+// _____(__NR_rename, sys_rename), // 38
+// _____(__NR_mkdir, sys_mkdir), // 39
 
-// _____(__NR_rmdir, sys__NR_rmdir), // 40
-// _____(__NR_dup, sys__NR_dup), // 41
-// _____(__NR_pipe, sys__NR_pipe), // 42
-// _____(__NR_times, sys__NR_times), // 43
-// _____(__NR_prof, sys__NR_prof), // 44
+// _____(__NR_rmdir, sys_rmdir), // 40
+// _____(__NR_dup, sys_dup), // 41
+// _____(__NR_pipe, sys_pipe), // 42
+// _____(__NR_times, sys_times), // 43
+// _____(__NR_prof, sys_prof), // 44
 
-// _____(__NR_brk, sys__NR_brk), // 45
-// _____(__NR_setgid, sys__NR_setgid), // 46
-// _____(__NR_getgid, sys__NR_getgid), // 47
-// _____(__NR_signal, sys__NR_signal), // 48
-// _____(__NR_geteuid, sys__NR_geteuid), // 49
+ GENX_(__NR_brk, sys_brk), // 45
+// _____(__NR_setgid, sys_setgid), // 46
+// _____(__NR_getgid, sys_getgid), // 47
+// _____(__NR_signal, sys_signal), // 48
+// _____(__NR_geteuid, sys_geteuid), // 49
 
-// _____(__NR_getegid, sys__NR_getegid), // 50
-// _____(__NR_acct, sys__NR_acct), // 51
-// _____(__NR_umount2, sys__NR_umount2), // 52
-// _____(__NR_lock, sys__NR_lock), // 53
-// _____(__NR_ioctl, sys__NR_ioctl), // 54
+// _____(__NR_getegid, sys_getegid), // 50
+// _____(__NR_acct, sys_acct), // 51
+// _____(__NR_umount2, sys_umount2), // 52
+// _____(__NR_lock, sys_lock), // 53
+// _____(__NR_ioctl, sys_ioctl), // 54
 
-// _____(__NR_fcntl, sys__NR_fcntl), // 55
-// _____(__NR_mpx, sys__NR_mpx), // 56
-// _____(__NR_setpgid, sys__NR_setpgid), // 57
-// _____(__NR_ulimit, sys__NR_ulimit), // 58
-// _____(__NR_oldolduname, sys__NR_oldolduname), // 59
+// _____(__NR_fcntl, sys_fcntl), // 55
+// _____(__NR_mpx, sys_mpx), // 56
+// _____(__NR_setpgid, sys_setpgid), // 57
+// _____(__NR_ulimit, sys_ulimit), // 58
+// _____(__NR_oldolduname, sys_oldolduname), // 59
 
-// _____(__NR_umask, sys__NR_umask), // 60
-// _____(__NR_chroot, sys__NR_chroot), // 61
-// _____(__NR_ustat, sys__NR_ustat), // 62
-// _____(__NR_dup2, sys__NR_dup2), // 63
-// _____(__NR_getppid, sys__NR_getppid), // 64
+// _____(__NR_umask, sys_umask), // 60
+// _____(__NR_chroot, sys_chroot), // 61
+// _____(__NR_ustat, sys_ustat), // 62
+// _____(__NR_dup2, sys_dup2), // 63
+// _____(__NR_getppid, sys_getppid), // 64
 
-// _____(__NR_getpgrp, sys__NR_getpgrp), // 65
-// _____(__NR_setsid, sys__NR_setsid), // 66
-// _____(__NR_sigaction, sys__NR_sigaction), // 67
-// _____(__NR_sgetmask, sys__NR_sgetmask), // 68
-// _____(__NR_ssetmask, sys__NR_ssetmask), // 69
+// _____(__NR_getpgrp, sys_getpgrp), // 65
+// _____(__NR_setsid, sys_setsid), // 66
+// _____(__NR_sigaction, sys_sigaction), // 67
+// _____(__NR_sgetmask, sys_sgetmask), // 68
+// _____(__NR_ssetmask, sys_ssetmask), // 69
 
-// _____(__NR_setreuid, sys__NR_setreuid), // 70
-// _____(__NR_setregid, sys__NR_setregid), // 71
-// _____(__NR_sigsuspend, sys__NR_sigsuspend), // 72
-// _____(__NR_sigpending, sys__NR_sigpending), // 73
-// _____(__NR_sethostname, sys__NR_sethostname), // 74
+// _____(__NR_setreuid, sys_setreuid), // 70
+// _____(__NR_setregid, sys_setregid), // 71
+// _____(__NR_sigsuspend, sys_sigsuspend), // 72
+// _____(__NR_sigpending, sys_sigpending), // 73
+// _____(__NR_sethostname, sys_sethostname), // 74
 
-// _____(__NR_setrlimit, sys__NR_setrlimit), // 75
-// _____(__NR_getrlimit, sys__NR_getrlimit), // 76
-// _____(__NR_getrusage, sys__NR_getrusage), // 77
-// _____(__NR_gettimeofday, sys__NR_gettimeofday), // 78
-// _____(__NR_settimeofday, sys__NR_settimeofday), // 79
+// _____(__NR_setrlimit, sys_setrlimit), // 75
+// _____(__NR_getrlimit, sys_getrlimit), // 76
+// _____(__NR_getrusage, sys_getrusage), // 77
+// _____(__NR_gettimeofday, sys_gettimeofday), // 78
+// _____(__NR_settimeofday, sys_settimeofday), // 79
 
-// _____(__NR_getgroups, sys__NR_getgroups), // 80
-// _____(__NR_setgroups, sys__NR_setgroups), // 81
-// _____(__NR_select, sys__NR_select), // 82
-// _____(__NR_symlink, sys__NR_symlink), // 83
-// _____(__NR_oldlstat, sys__NR_oldlstat), // 84
+// _____(__NR_getgroups, sys_getgroups), // 80
+// _____(__NR_setgroups, sys_setgroups), // 81
+// _____(__NR_select, sys_select), // 82
+// _____(__NR_symlink, sys_symlink), // 83
+// _____(__NR_oldlstat, sys_oldlstat), // 84
 
-// _____(__NR_readlink, sys__NR_readlink), // 85
-// _____(__NR_uselib, sys__NR_uselib), // 86
-// _____(__NR_swapon, sys__NR_swapon), // 87
-// _____(__NR_reboot, sys__NR_reboot), // 88
-// _____(__NR_readdir, sys__NR_readdir), // 89
+// _____(__NR_readlink, sys_readlink), // 85
+// _____(__NR_uselib, sys_uselib), // 86
+// _____(__NR_swapon, sys_swapon), // 87
+// _____(__NR_reboot, sys_reboot), // 88
+// _____(__NR_readdir, sys_readdir), // 89
 
-// _____(__NR_mmap, sys__NR_mmap), // 90
-// _____(__NR_munmap, sys__NR_munmap), // 91
-// _____(__NR_truncate, sys__NR_truncate), // 92
-// _____(__NR_ftruncate, sys__NR_ftruncate), // 93
-// _____(__NR_fchmod, sys__NR_fchmod), // 94
+// _____(__NR_mmap, sys_mmap), // 90
+// _____(__NR_munmap, sys_munmap), // 91
+// _____(__NR_truncate, sys_truncate), // 92
+// _____(__NR_ftruncate, sys_ftruncate), // 93
+// _____(__NR_fchmod, sys_fchmod), // 94
 
-// _____(__NR_fchown, sys__NR_fchown), // 95
-// _____(__NR_getpriority, sys__NR_getpriority), // 96
-// _____(__NR_setpriority, sys__NR_setpriority), // 97
-// _____(__NR_profil, sys__NR_profil), // 98
-// _____(__NR_statfs, sys__NR_statfs), // 99
+// _____(__NR_fchown, sys_fchown), // 95
+// _____(__NR_getpriority, sys_getpriority), // 96
+// _____(__NR_setpriority, sys_setpriority), // 97
+// _____(__NR_profil, sys_profil), // 98
+// _____(__NR_statfs, sys_statfs), // 99
 
-// _____(__NR_fstatfs, sys__NR_fstatfs), // 100
-// _____(__NR_ioperm, sys__NR_ioperm), // 101
-// _____(__NR_socketcall, sys__NR_socketcall), // 102
-// _____(__NR_syslog, sys__NR_syslog), // 103
-// _____(__NR_setitimer, sys__NR_setitimer), // 104
+// _____(__NR_fstatfs, sys_fstatfs), // 100
+// _____(__NR_ioperm, sys_ioperm), // 101
+// _____(__NR_socketcall, sys_socketcall), // 102
+// _____(__NR_syslog, sys_syslog), // 103
+// _____(__NR_setitimer, sys_setitimer), // 104
 
-// _____(__NR_getitimer, sys__NR_getitimer), // 105
-// _____(__NR_stat, sys__NR_stat), // 106
-// _____(__NR_lstat, sys__NR_lstat), // 107
-// _____(__NR_fstat, sys__NR_fstat), // 108
-// _____(__NR_olduname, sys__NR_olduname), // 109
+// _____(__NR_getitimer, sys_getitimer), // 105
+// _____(__NR_stat, sys_stat), // 106
+// _____(__NR_lstat, sys_lstat), // 107
+// _____(__NR_fstat, sys_fstat), // 108
+// _____(__NR_olduname, sys_olduname), // 109
 
-// _____(__NR_iopl, sys__NR_iopl), // 110
-// _____(__NR_vhangup, sys__NR_vhangup), // 111
-// _____(__NR_idle, sys__NR_idle), // 112
-// _____(__NR_vm86, sys__NR_vm86), // 113
-// _____(__NR_wait4, sys__NR_wait4), // 114
+// _____(__NR_iopl, sys_iopl), // 110
+// _____(__NR_vhangup, sys_vhangup), // 111
+// _____(__NR_idle, sys_idle), // 112
+// _____(__NR_vm86, sys_vm86), // 113
+// _____(__NR_wait4, sys_wait4), // 114
 
-// _____(__NR_swapoff, sys__NR_swapoff), // 115
-// _____(__NR_sysinfo, sys__NR_sysinfo), // 116
-// _____(__NR_ipc, sys__NR_ipc), // 117
-// _____(__NR_fsync, sys__NR_fsync), // 118
-// _____(__NR_sigreturn, sys__NR_sigreturn), // 119
+// _____(__NR_swapoff, sys_swapoff), // 115
+// _____(__NR_sysinfo, sys_sysinfo), // 116
+// _____(__NR_ipc, sys_ipc), // 117
+// _____(__NR_fsync, sys_fsync), // 118
+// _____(__NR_sigreturn, sys_sigreturn), // 119
 
-// _____(__NR_clone, sys__NR_clone), // 120
-// _____(__NR_setdomainname, sys__NR_setdomainname), // 121
-// _____(__NR_uname, sys__NR_uname), // 122
-// _____(__NR_modify_ldt, sys__NR_modify_ldt), // 123
-// _____(__NR_adjtimex, sys__NR_adjtimex), // 124
+// _____(__NR_clone, sys_clone), // 120
+// _____(__NR_setdomainname, sys_setdomainname), // 121
+ GENXY(__NR_uname, sys_newuname), // 122
+// _____(__NR_modify_ldt, sys_modify_ldt), // 123
+// _____(__NR_adjtimex, sys_adjtimex), // 124
 
-// _____(__NR_mprotect, sys__NR_mprotect), // 125
-// _____(__NR_sigprocmask, sys__NR_sigprocmask), // 126
-// _____(__NR_create_module, sys__NR_create_module), // 127
-// _____(__NR_init_module, sys__NR_init_module), // 128
-// _____(__NR_delete_module, sys__NR_delete_module), // 129
+// _____(__NR_mprotect, sys_mprotect), // 125
+// _____(__NR_sigprocmask, sys_sigprocmask), // 126
+// _____(__NR_create_module, sys_create_module), // 127
+// _____(__NR_init_module, sys_init_module), // 128
+// _____(__NR_delete_module, sys_delete_module), // 129
 
-// _____(__NR_get_kernel_syms, sys__NR_get_kernel_syms), // 130
-// _____(__NR_quotactl, sys__NR_quotactl), // 131
-// _____(__NR_getpgid, sys__NR_getpgid), // 132
-// _____(__NR_fchdir, sys__NR_fchdir), // 133
-// _____(__NR_bdflush, sys__NR_bdflush), // 134
+// _____(__NR_get_kernel_syms, sys_get_kernel_syms), // 130
+// _____(__NR_quotactl, sys_quotactl), // 131
+// _____(__NR_getpgid, sys_getpgid), // 132
+// _____(__NR_fchdir, sys_fchdir), // 133
+// _____(__NR_bdflush, sys_bdflush), // 134
 
-// _____(__NR_sysfs, sys__NR_sysfs), // 135
-// _____(__NR_personality, sys__NR_personality), // 136
-// _____(__NR_afs_syscall, sys__NR_afs_syscall), // 137
-// _____(__NR_setfsuid, sys__NR_setfsuid), // 138
-// _____(__NR_setfsgid, sys__NR_setfsgid), // 139
+// _____(__NR_sysfs, sys_sysfs), // 135
+// _____(__NR_personality, sys_personality), // 136
+// _____(__NR_afs_syscall, sys_afs_syscall), // 137
+// _____(__NR_setfsuid, sys_setfsuid), // 138
+// _____(__NR_setfsgid, sys_setfsgid), // 139
 
-// _____(__NR__llseek, sys__NR__llseek), // 140
-// _____(__NR_getdents, sys__NR_getdents), // 141
-// _____(__NR__newselect, sys__NR__newselect), // 142
-// _____(__NR_flock, sys__NR_flock), // 143
-// _____(__NR_msync, sys__NR_msync), // 144
+// _____(__NR__llseek, sys__llseek), // 140
+// _____(__NR_getdents, sys_getdents), // 141
+// _____(__NR__newselect, sys__newselect), // 142
+// _____(__NR_flock, sys_flock), // 143
+// _____(__NR_msync, sys_msync), // 144
 
-// _____(__NR_readv, sys__NR_readv), // 145
-// _____(__NR_writev, sys__NR_writev), // 146
-// _____(__NR_getsid, sys__NR_getsid), // 147
-// _____(__NR_fdatasync, sys__NR_fdatasync), // 148
-// _____(__NR__sysctl, sys__NR__sysctl), // 149
+// _____(__NR_readv, sys_readv), // 145
+// _____(__NR_writev, sys_writev), // 146
+// _____(__NR_getsid, sys_getsid), // 147
+// _____(__NR_fdatasync, sys_fdatasync), // 148
+// _____(__NR__sysctl, sys__sysctl), // 149
 
-// _____(__NR_mlock, sys__NR_mlock), // 150
-// _____(__NR_munlock, sys__NR_munlock), // 151
-// _____(__NR_mlockall, sys__NR_mlockall), // 152
-// _____(__NR_munlockall, sys__NR_munlockall), // 153
-// _____(__NR_sched_setparam, sys__NR_sched_setparam), // 154
+// _____(__NR_mlock, sys_mlock), // 150
+// _____(__NR_munlock, sys_munlock), // 151
+// _____(__NR_mlockall, sys_mlockall), // 152
+// _____(__NR_munlockall, sys_munlockall), // 153
+// _____(__NR_sched_setparam, sys_sched_setparam), // 154
 
-// _____(__NR_sched_getparam, sys__NR_sched_getparam), // 155
-// _____(__NR_sched_setscheduler, sys__NR_sched_setscheduler), // 156
-// _____(__NR_sched_getscheduler, sys__NR_sched_getscheduler), // 157
-// _____(__NR_sched_yield, sys__NR_sched_yield), // 158
-// _____(__NR_sched_get_priority_max, sys__NR_sched_get_priority_max), // 159
+// _____(__NR_sched_getparam, sys_sched_getparam), // 155
+// _____(__NR_sched_setscheduler, sys_sched_setscheduler), // 156
+// _____(__NR_sched_getscheduler, sys_sched_getscheduler), // 157
+// _____(__NR_sched_yield, sys_sched_yield), // 158
+// _____(__NR_sched_get_priority_max, sys_sched_get_priority_max), // 159

-// _____(__NR_sched_get_priority_min, sys__NR_sched_get_priority_min), // 160
-// _____(__NR_sched_rr_get_interval, sys__NR_sched_rr_get_interval), // 161
-// _____(__NR_nanosleep, sys__NR_nanosleep), // 162
-// _____(__NR_mremap, sys__NR_mremap), // 163
-// _____(__NR_setresuid, sys__NR_setresuid), // 164
+// _____(__NR_sched_get_priority_min, sys_sched_get_priority_min), // 160
+// _____(__NR_sched_rr_get_interval, sys_sched_rr_get_interval), // 161
+// _____(__NR_nanosleep, sys_nanosleep), // 162
+// _____(__NR_mremap, sys_mremap), // 163
+// _____(__NR_setresuid, sys_setresuid), // 164
+// _____(__NR_nanosleep, sys_nanosleep), // 162
+// _____(__NR_mremap, sys_mremap), // 163
+// _____(__NR_setresuid, sys_setresuid), // 164
 
-// _____(__NR_getresuid, sys__NR_getresuid), // 165
-// _____(__NR_query_module, sys__NR_query_module), // 166
-// _____(__NR_poll, sys__NR_poll), // 167
-// _____(__NR_nfsservctl, sys__NR_nfsservctl), // 168
-// _____(__NR_setresgid, sys__NR_setresgid), // 169
+// _____(__NR_getresuid, sys_getresuid), // 165
+// _____(__NR_query_module, sys_query_module), // 166
+// _____(__NR_poll, sys_poll), // 167
+// _____(__NR_nfsservctl, sys_nfsservctl), // 168
+// _____(__NR_setresgid, sys_setresgid), // 169
 
-// _____(__NR_getresgid, sys__NR_getresgid), // 170
-// _____(__NR_prctl, sys__NR_prctl), // 171
-// _____(__NR_rt_sigreturn, sys__NR_rt_sigreturn), // 172
-// _____(__NR_rt_sigaction, sys__NR_rt_sigaction), // 173
-// _____(__NR_rt_sigprocmask, sys__NR_rt_sigprocmask), // 174
+// _____(__NR_getresgid, sys_getresgid), // 170
+// _____(__NR_prctl, sys_prctl), // 171
+// _____(__NR_rt_sigreturn, sys_rt_sigreturn), // 172
+// _____(__NR_rt_sigaction, sys_rt_sigaction), // 173
+// _____(__NR_rt_sigprocmask, sys_rt_sigprocmask), // 174
=20
-// _____(__NR_rt_sigpending, sys__NR_rt_sigpending), // 175
-// _____(__NR_rt_sigtimedwait, sys__NR_rt_sigtimedwait), // 176
-// _____(__NR_rt_sigqueueinfo, sys__NR_rt_sigqueueinfo), // 177
-// _____(__NR_rt_sigsuspend, sys__NR_rt_sigsuspend), // 178
-// _____(__NR_pread64, sys__NR_pread64), // 179
+// _____(__NR_rt_sigpending, sys_rt_sigpending), // 175
+// _____(__NR_rt_sigtimedwait, sys_rt_sigtimedwait), // 176
+// _____(__NR_rt_sigqueueinfo, sys_rt_sigqueueinfo), // 177
+// _____(__NR_rt_sigsuspend, sys_rt_sigsuspend), // 178
+// _____(__NR_pread64, sys_pread64), // 179
=20
-// _____(__NR_pwrite64, sys__NR_pwrite64), // 180
-// _____(__NR_chown, sys__NR_chown), // 181
-// _____(__NR_getcwd, sys__NR_getcwd), // 182
-// _____(__NR_capget, sys__NR_capget), // 183
-// _____(__NR_capset, sys__NR_capset), // 184
+// _____(__NR_pwrite64, sys_pwrite64), // 180
+// _____(__NR_chown, sys_chown), // 181
+// _____(__NR_getcwd, sys_getcwd), // 182
+// _____(__NR_capget, sys_capget), // 183
+// _____(__NR_capset, sys_capset), // 184
=20
-// _____(__NR_sigaltstack, sys__NR_sigaltstack), // 185
-// _____(__NR_sendfile, sys__NR_sendfile), // 186
-// _____(__NR_getpmsg, sys__NR_getpmsg), // 187
-// _____(__NR_putpmsg, sys__NR_putpmsg), // 188
-// _____(__NR_vfork, sys__NR_vfork), // 189
+// _____(__NR_sigaltstack, sys_sigaltstack), // 185
+// _____(__NR_sendfile, sys_sendfile), // 186
+// _____(__NR_getpmsg, sys_getpmsg), // 187
+// _____(__NR_putpmsg, sys_putpmsg), // 188
+// _____(__NR_vfork, sys_vfork), // 189
=20
-// _____(__NR_ugetrlimit, sys__NR_ugetrlimit), // 190
-// _____(__NR_readahead, sys__NR_readahead), // 191
+// _____(__NR_ugetrlimit, sys_ugetrlimit), // 190
+// _____(__NR_readahead, sys_readahead), // 191
// /* #define __NR_mmap2 192 32bit only */
// /* #define __NR_truncate64 193 32bit only */
// /* #define __NR_ftruncate64 194 32bit only */
@@ -1401,102 +1401,102 @@
// /* #define __NR_stat64 195 32bit only */
// /* #define __NR_lstat64 196 32bit only */
// /* #define __NR_fstat64 197 32bit only */
-// _____(__NR_pciconfig_read, sys__NR_pciconfig_read), // 198
-// _____(__NR_pciconfig_write, sys__NR_pciconfig_write), // 199
+// _____(__NR_pciconfig_read, sys_pciconfig_read), // 198
+// _____(__NR_pciconfig_write, sys_pciconfig_write), // 199
=20
-// _____(__NR_pciconfig_iobase, sys__NR_pciconfig_iobase), // 200
-// _____(__NR_multiplexer, sys__NR_multiplexer), // 201
-// _____(__NR_getdents64, sys__NR_getdents64), // 202
-// _____(__NR_pivot_root, sys__NR_pivot_root), // 203
+// _____(__NR_pciconfig_iobase, sys_pciconfig_iobase), // 200
+// _____(__NR_multiplexer, sys_multiplexer), // 201
+// _____(__NR_getdents64, sys_getdents64), // 202
+// _____(__NR_pivot_root, sys_pivot_root), // 203
// /* #define __NR_fcntl64 204 32bit only */
=20
-// _____(__NR_madvise, sys__NR_madvise), // 205
-// _____(__NR_mincore, sys__NR_mincore), // 206
-// _____(__NR_gettid, sys__NR_gettid), // 207
-// _____(__NR_tkill, sys__NR_tkill), // 208
-// _____(__NR_setxattr, sys__NR_setxattr), // 209
+// _____(__NR_madvise, sys_madvise), // 205
+// _____(__NR_mincore, sys_mincore), // 206
+// _____(__NR_gettid, sys_gettid), // 207
+// _____(__NR_tkill, sys_tkill), // 208
+// _____(__NR_setxattr, sys_setxattr), // 209
=20
-// _____(__NR_lsetxattr, sys__NR_lsetxattr), // 210
-// _____(__NR_fsetxattr, sys__NR_fsetxattr), // 211
-// _____(__NR_getxattr, sys__NR_getxattr), // 212
-// _____(__NR_lgetxattr, sys__NR_lgetxattr), // 213
-// _____(__NR_fgetxattr, sys__NR_fgetxattr), // 214
+// _____(__NR_lsetxattr, sys_lsetxattr), // 210
+// _____(__NR_fsetxattr, sys_fsetxattr), // 211
+// _____(__NR_getxattr, sys_getxattr), // 212
+// _____(__NR_lgetxattr, sys_lgetxattr), // 213
+// _____(__NR_fgetxattr, sys_fgetxattr), // 214
=20
-// _____(__NR_listxattr, sys__NR_listxattr), // 215
-// _____(__NR_llistxattr, sys__NR_llistxattr), // 216
-// _____(__NR_flistxattr, sys__NR_flistxattr), // 217
-// _____(__NR_removexattr, sys__NR_removexattr), // 218
-// _____(__NR_lremovexattr, sys__NR_lremovexattr), // 219
+// _____(__NR_listxattr, sys_listxattr), // 215
+// _____(__NR_llistxattr, sys_llistxattr), // 216
+// _____(__NR_flistxattr, sys_flistxattr), // 217
+// _____(__NR_removexattr, sys_removexattr), // 218
+// _____(__NR_lremovexattr, sys_lremovexattr), // 219
=20
-// _____(__NR_fremovexattr, sys__NR_fremovexattr), // 220
-// _____(__NR_futex, sys__NR_futex), // 221
-// _____(__NR_sched_setaffinity, sys__NR_sched_setaffinity), // 222
-// _____(__NR_sched_getaffinity, sys__NR_sched_getaffinity), // 223
+// _____(__NR_fremovexattr, sys_fremovexattr), // 220
+// _____(__NR_futex, sys_futex), // 221
+// _____(__NR_sched_setaffinity, sys_sched_setaffinity), // 222
+// _____(__NR_sched_getaffinity, sys_sched_getaffinity), // 223
// /* 224 currently unused */
=20
-// _____(__NR_tuxcall, sys__NR_tuxcall), // 225
+// _____(__NR_tuxcall, sys_tuxcall), // 225
// /* #define __NR_sendfile64 226 32bit only */
-// _____(__NR_io_setup, sys__NR_io_setup), // 227
-// _____(__NR_io_destroy, sys__NR_io_destroy), // 228
-// _____(__NR_io_getevents, sys__NR_io_getevents), // 229
+// _____(__NR_io_setup, sys_io_setup), // 227
+// _____(__NR_io_destroy, sys_io_destroy), // 228
+// _____(__NR_io_getevents, sys_io_getevents), // 229
=20
-// _____(__NR_io_submit, sys__NR_io_submit), // 230
-// _____(__NR_io_cancel, sys__NR_io_cancel), // 231
-// _____(__NR_set_tid_address, sys__NR_set_tid_address), // 232
-// _____(__NR_fadvise64, sys__NR_fadvise64), // 233
-// _____(__NR_exit_group, sys__NR_exit_group), // 234
+// _____(__NR_io_submit, sys_io_submit), // 230
+// _____(__NR_io_cancel, sys_io_cancel), // 231
+// _____(__NR_set_tid_address, sys_set_tid_address), // 232
+// _____(__NR_fadvise64, sys_fadvise64), // 233
+// _____(__NR_exit_group, sys_exit_group), // 234
=20
-// _____(__NR_lookup_dcookie, sys__NR_lookup_dcookie), // 235
-// _____(__NR_epoll_create, sys__NR_epoll_create), // 236
-// _____(__NR_epoll_ctl, sys__NR_epoll_ctl), // 237
-// _____(__NR_epoll_wait, sys__NR_epoll_wait), // 238
-// _____(__NR_remap_file_pages, sys__NR_remap_file_pages), // 239
+// _____(__NR_lookup_dcookie, sys_lookup_dcookie), // 235
+// _____(__NR_epoll_create, sys_epoll_create), // 236
+// _____(__NR_epoll_ctl, sys_epoll_ctl), // 237
+// _____(__NR_epoll_wait, sys_epoll_wait), // 238
+// _____(__NR_remap_file_pages, sys_remap_file_pages), // 239
=20
-// _____(__NR_timer_create, sys__NR_timer_create), // 240
-// _____(__NR_timer_settime, sys__NR_timer_settime), // 241
-// _____(__NR_timer_gettime, sys__NR_timer_gettime), // 242
-// _____(__NR_timer_getoverrun, sys__NR_timer_getoverrun), // 243
-// _____(__NR_timer_delete, sys__NR_timer_delete), // 244
+// _____(__NR_timer_create, sys_timer_create), // 240
+// _____(__NR_timer_settime, sys_timer_settime), // 241
+// _____(__NR_timer_gettime, sys_timer_gettime), // 242
+// _____(__NR_timer_getoverrun, sys_timer_getoverrun), // 243
+// _____(__NR_timer_delete, sys_timer_delete), // 244
=20
-// _____(__NR_clock_settime, sys__NR_clock_settime), // 245
-// _____(__NR_clock_gettime, sys__NR_clock_gettime), // 246
-// _____(__NR_clock_getres, sys__NR_clock_getres), // 247
-// _____(__NR_clock_nanosleep, sys__NR_clock_nanosleep), // 248
-// _____(__NR_swapcontext, sys__NR_swapcontext), // 249
+// _____(__NR_clock_settime, sys_clock_settime), // 245
+// _____(__NR_clock_gettime, sys_clock_gettime), // 246
+// _____(__NR_clock_getres, sys_clock_getres), // 247
+// _____(__NR_clock_nanosleep, sys_clock_nanosleep), // 248
+// _____(__NR_swapcontext, sys_swapcontext), // 249
=20
-// _____(__NR_tgkill, sys__NR_tgkill), // 250
-// _____(__NR_utimes, sys__NR_utimes), // 251
-// _____(__NR_statfs64, sys__NR_statfs64), // 252
-// _____(__NR_fstatfs64, sys__NR_fstatfs64), // 253
+// _____(__NR_tgkill, sys_tgkill), // 250
+// _____(__NR_utimes, sys_utimes), // 251
+// _____(__NR_statfs64, sys_statfs64), // 252
+// _____(__NR_fstatfs64, sys_fstatfs64), // 253
// /* #define __NR_fadvise64_64 254 32bit only */
=20
-// _____(__NR_rtas, sys__NR_rtas), // 255
+// _____(__NR_rtas, sys_rtas), // 255
// /* Number 256 is reserved for sys_debug_setcontext */
// /* Number 257 is reserved for vserver */
// /* 258 currently unused */
-// _____(__NR_mbind, sys__NR_mbind), // 259
+// _____(__NR_mbind, sys_mbind), // 259
=20
-// _____(__NR_get_mempolicy, sys__NR_get_mempolicy), // 260
-// _____(__NR_set_mempolicy, sys__NR_set_mempolicy), // 261
-// _____(__NR_mq_open, sys__NR_mq_open), // 262
-// _____(__NR_mq_unlink, sys__NR_mq_unlink), // 263
-// _____(__NR_mq_timedsend, sys__NR_mq_timedsend), // 264
+// _____(__NR_get_mempolicy, sys_get_mempolicy), // 260
+// _____(__NR_set_mempolicy, sys_set_mempolicy), // 261
+// _____(__NR_mq_open, sys_mq_open), // 262
+// _____(__NR_mq_unlink, sys_mq_unlink), // 263
+// _____(__NR_mq_timedsend, sys_mq_timedsend), // 264
=20
-// _____(__NR_mq_timedreceive, sys__NR_mq_timedreceive), // 265
-// _____(__NR_mq_notify, sys__NR_mq_notify), // 266
-// _____(__NR_mq_getsetattr, sys__NR_mq_getsetattr), // 267
-// _____(__NR_kexec_load, sys__NR_kexec_load), // 268
-// _____(__NR_add_key, sys__NR_add_key), // 269
+// _____(__NR_mq_timedreceive, sys_mq_timedreceive), // 265
+// _____(__NR_mq_notify, sys_mq_notify), // 266
+// _____(__NR_mq_getsetattr, sys_mq_getsetattr), // 267
+// _____(__NR_kexec_load, sys_kexec_load), // 268
+// _____(__NR_add_key, sys_add_key), // 269
=20
-// _____(__NR_request_key, sys__NR_request_key), // 270
-// _____(__NR_keyctl, sys__NR_keyctl), // 271
-// _____(__NR_waitid, sys__NR_waitid), // 272
-// _____(__NR_ioprio_set, sys__NR_ioprio_set), // 273
-// _____(__NR_ioprio_get, sys__NR_ioprio_get), // 274
+// _____(__NR_request_key, sys_request_key), // 270
+// _____(__NR_keyctl, sys_keyctl), // 271
+// _____(__NR_waitid, sys_waitid), // 272
+// _____(__NR_ioprio_set, sys_ioprio_set), // 273
+// _____(__NR_ioprio_get, sys_ioprio_get), // 274
=20
-// _____(__NR_inotify_init, sys__NR_inotify_init), // 275
-// _____(__NR_inotify_add_watch, sys__NR_inotify_add_watch), // 276
-// _____(__NR_inotify_rm_watch, sys__NR_inotify_rm_watch) // 277
+// _____(__NR_inotify_init, sys_inotify_init), // 275
+// _____(__NR_inotify_add_watch, sys_inotify_add_watch), // 276
+// _____(__NR_inotify_rm_watch, sys_inotify_rm_watch) // 277
};
=20
const UInt ML_(syscall_table_size) =3D=20
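The diff above renames each wrapper from sys__NR_xxx to sys_xxx while keeping the number-to-wrapper pairing intact. As a standalone illustration of that pairing pattern, here is a minimal sketch in C; every name in it (WRAPX_, SyscallWrapper, the wrapper bodies, find_wrapper) is invented for the example and is not Valgrind's real entry macro or wrapper type:

```c
#include <stddef.h>

/* Hypothetical sketch of a syscall-number -> wrapper table in the
   spirit of the one above.  All names here are invented. */

typedef long (*SyscallWrapper)(long arg1);

static long wrap_gettid(long arg1)     { (void)arg1; return 0; }
static long wrap_exit_group(long arg1) { (void)arg1; return -1; }

typedef struct {
   int            sysno;   /* the __NR_xxx number */
   SyscallWrapper fn;      /* handler for that syscall */
} SyscallTableEntry;

#define WRAPX_(sysno, fn) { sysno, fn }   /* plays the role of _____() */

static const SyscallTableEntry syscall_table[] = {
   WRAPX_(207, wrap_gettid),       /* __NR_gettid     on ppc64 */
   WRAPX_(234, wrap_exit_group),   /* __NR_exit_group on ppc64 */
};

/* Linear lookup; NULL means "no wrapper wired up yet", which is the
   role the commented-out _____() entries play in the table above. */
static SyscallWrapper find_wrapper(int sysno)
{
   size_t i;
   for (i = 0; i < sizeof(syscall_table)/sizeof(syscall_table[0]); i++)
      if (syscall_table[i].sysno == sysno)
         return syscall_table[i].fn;
   return NULL;
}
```

In this sketch an unhandled syscall simply yields NULL; a real tool would instead report the missing wrapper and fail the syscall in a defined way.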
From: <sv...@va...> - 2005-12-22 03:33:24
Author: sewardj
Date: 2005-12-22 03:33:16 +0000 (Thu, 22 Dec 2005)
New Revision: 5399
Log:
Save %CIA correctly (caused ppc64-linux to loop at the first syscall,
entertainingly).
Modified:
trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S
Modified: trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S 2005-12-21 20:22:52 UTC (rev 5398)
+++ trunk/coregrind/m_dispatch/dispatch-ppc64-linux.S 2005-12-22 03:33:16 UTC (rev 5399)
@@ -397,7 +397,7 @@
that holds the value we want to return to the scheduler.
Hence use %r5 transiently for the guest state pointer. */
ld 5,152(1) /* original guest_state ptr */
- std 3,OFFSET_ppc32_CIA(5)
+ std 3,OFFSET_ppc64_CIA(5)
mr 3,31 /* r3 = new gsp value */
b .run_innerloop_exit
/*NOTREACHED*/
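The one-line fix above changes the byte offset at which the dispatcher's `std` instruction stores the continuation address into the guest state. A C sketch of why the wrong offset makes the scheduler loop; the struct layouts and offset macros here are invented for illustration, not VEX's real guest-state layout:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical guest-state layouts: 32 GPRs followed by the current
   instruction address (CIA).  Not the real VEX structs. */
typedef struct { uint32_t gpr[32]; uint32_t CIA; } Guest32;
typedef struct { uint64_t gpr[32]; uint64_t CIA; } Guest64;

#define OFFSET_PPC32_CIA offsetof(Guest32, CIA)   /* 32*4 = 128 */
#define OFFSET_PPC64_CIA offsetof(Guest64, CIA)   /* 32*8 = 256 */

/* What "std 3,OFFSET(5)" amounts to: an 8-byte store of the
   continuation address at a byte offset into the guest state. */
static void save_cia(void *guest_state, size_t offset, uint64_t cia)
{
   memcpy((char *)guest_state + offset, &cia, sizeof cia);
}
```

With the ppc32 offset, the 8-byte store lands inside the GPR area of a 64-bit guest state and CIA keeps its stale value, so the dispatcher keeps resuming at the pre-syscall address: the "loop at the first syscall" the commit message describes.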
From: Tom H. <th...@cy...> - 2005-12-22 03:29:44
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2005-12-22 03:15:04 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 209 tests, 17 stderr failures, 0 stdout failures =================

memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
From: Tom H. <th...@cy...> - 2005-12-22 03:25:59
Nightly build on dellow ( x86_64, Fedora Core 4 ) started at 2005-12-22 03:10:12 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 227 tests, 6 stderr failures, 2 stdout failures =================

memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 227 tests, 5 stderr failures, 1 stdout failure =================

memcheck/tests/mempool (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short Thu Dec 22 03:20:34 2005
--- new.short Thu Dec 22 03:25:55 2005
***************
*** 8,11 ****
! == 227 tests, 5 stderr failures, 1 stdout failure =================
  memcheck/tests/mempool (stderr)
  memcheck/tests/x86/scalar (stderr)
--- 8,13 ----
! == 227 tests, 6 stderr failures, 2 stdout failures =================
! memcheck/tests/leakotron (stdout)
  memcheck/tests/mempool (stderr)
+ memcheck/tests/pointer-trace (stderr)
  memcheck/tests/x86/scalar (stderr)