From: Nicholas N. <nj...@cs...> - 2005-12-24 22:29:02
Hi,
I noticed that this check in coregrind/m_stacks.c:VG_(unknown_SP_update)()
is being triggered a lot in programs that don't do any stack switching:
/* Check if the stack pointer is still in the same stack as before. */
if (current_stack == NULL ||
new_SP < current_stack->start || new_SP > current_stack->end) {
VG_(printf)("new_SP = %p, curr->start = %p, curr->end = %p\n",
new_SP, current_stack->start, current_stack->end);
Stack* new_stack = find_stack_by_addr(new_SP);
if (new_stack && new_stack->id != current_stack->id) {
/* The stack pointer is now in another stack. Update the current
stack information and return without doing anything else. */
current_stack = new_stack;
return;
}
}
The problem is that in m_main.c the main stack is registered at a minimal
size, on my machine it's 0xBEFFF000--0xBEFFFFFF. And then it extends
beyond that, so the above "out of range" case matches for values like
0xBEFFEFE4, 0xBEFFEF64, 0xBEFFEF1C, etc. But the calls to
find_stack_by_addr() fail -- because there are no other stacks -- and so
it doesn't get changed. And then the cycle repeats.
This only occurs on the non-common SP changes -- for the common ones (eg.
increment/decrement by 4) stack membership is not tested for.
Basically the problem is that registered stacks cannot be extended. Or
perhaps that the main stack is not set up with a big enough range.
Perhaps we should make it 8MB to begin with (or whatever ulimit says it
can be)?
Thoughts?
Nick
From: John R.
Nicholas Nethercote wrote:
> In the recent Valgrind survey five people complained about the
> difficulty of tracking down the root cause of undefined value errors,
> caused by the fact that Memcheck waits until an undefined value can
> affect the visible behaviour of the program (eg. is used in a
> conditional branch, or a syscall input).
[snip]
> It has been suggested that an option be present to do this eager
> checking, but I'm not convinced it would be useful given the
> overwhelming number of false positives. I'm wondering what other
> people think.

Thank you, Nicholas, for continuing to explore eager undef checking.

One of the quality control policies that I deal with demands that an
application must never fetch uninitialized bits from memory. This policy
increases run-to-run repeatability when "unrelated" logic errors occur.
Such repeatability makes maintenance easier, and increases reliability
over the software lifecycle. The policy also increases compliance with
ISO C 1989, which says that any use of uninitialized bits makes
execution totally indeterminate. Language lawyers argue whether "mere
fetch" constitutes "use," but an addition operation whose inputs contain
uninit bits certainly is a use, even though non-eager Memcheck will not
complain until the sum affects I/O or flow of control.

The policy makes developers aware of alignment holes and padding in
structures. Often the response is "memset(&Struct, 0, sizeof(Struct));"
shortly after declaration. This can increase runtime efficiency,
particularly when the compiler "understands" memset(,0,) and thus elides
subsequent "Struct.member<k> = 0;", or when the hardware has special
instructions to clear entire cache lines. Of course, it can hurt on
processors such as i586 Intel Pentium Plain/MMX, where a write miss does
not allocate a cache line. But then memset can insert a fetch, for which
i586 does allocate a new cache line upon miss. And Memcheck can learn
this specific exception, just like Memcheck can learn about the
intentional fetch overruns in strlen, strcpy, memcpy, etc.

If the fundamental low-level language runtime libraries fetch uninit
bits in their internal operation, then eager Memcheck will notice, and
the resulting "noise" will be bothersome. As a contribution towards
eliminating this problem, from time to time I have "cleaned up" glibc so
that its own internal testcases fetch no uninit bits. See my web page
http://BitWagon.com/glibc-audit/glibc-audit.html

Just as the first encounter between an application and Memcheck often
causes dismay, so the first encounter with eager Memcheck is likely to
be even more daunting. But "thousands" of complaints can be handled with
just a few memset(), and this is heartening. Even junior team members
can be productive at this stage. Finding alignment holes or padding in
an application's "basic" structs can be a wakeup call for storage
efficiency. And if you keep track, within a week or two you might notice
that the number and frequency of non-reproducible behaviors is
decreasing. Bugs "cascade" less often; the original bug, the first
misuse, tends to become visible immediately rather than after infecting
several other areas of control or data.

--
From: Nicholas N. <nj...@cs...> - 2005-12-24 18:15:43
Hi,

In the recent Valgrind survey five people complained about the
difficulty of tracking down the root cause of undefined value errors,
caused by the fact that Memcheck waits until an undefined value can
affect the visible behaviour of the program (eg. is used in a
conditional branch, or a syscall input). A couple of people suggested
doing more eager checking, and this idea has come up before.

The problem is that the copying of undefined values is common, mostly
due to the practice of padding structs for alignment and bitfields. I
did some experimentation with eager checking a couple of years ago and
found that it caused large numbers of false positives. I repeated the
experiment again yesterday and saw the same results.

I changed Memcheck to complain about the loading of any undefined
values and tried various programs. For the empty C program that just
returns zero, I get 24 errors from 23 contexts, most just from the
dynamic linker. I get the following counts for the following programs:

  empty              1 errors from   1 context
  perf/bz2     8405487 errors from  30 contexts
  perf/tinycc  4647525 errors from 301 contexts

I had to use --error-limit=no for these otherwise Memcheck would have
stopped reporting errors after 100,000. These programs have no
(unsuppressed) errors when run with a normal Memcheck. If I suppress
the ones in the dynamic linker, I get:

  empty              1 errors from   1 context
  perf/bz2     8405464 errors from   8 contexts
  perf/tinycc  4647501 errors from 299 contexts

If I change things so that any undefined value loaded gets loaded as if
it was defined (to avoid possible cascading errors), I get:

  empty              1 errors from   1 context
  perf/bz2     4202624 errors from   2 contexts
  perf/tinycc  1137041 errors from 113 contexts

I've attached the output from that last tinycc run. Some extra
programs:

  vim     521 errors from 120 contexts
  gcc     384 errors from  53 contexts
  emacs  4876 errors from  63 contexts

It has been suggested that an option be present to do this eager
checking, but I'm not convinced it would be useful given the
overwhelming number of false positives. I'm wondering what other
people think.

If you want to try this out for yourself, I've attached the patch I
used. It's against the COMPVBITS branch; do this to check it out and
build:

  svn co svn://www.valgrind.org/valgrind/branches/COMPVBITS
  cd COMPVBITS
  sh ./autogen.sh
  ./configure --prefix=<...>
  patch -p0 < eager.diff
  make

Nick
From: <sv...@va...> - 2005-12-24 16:34:58
Author: sewardj
Date: 2005-12-24 16:34:49 +0000 (Sat, 24 Dec 2005)
New Revision: 5430
Log:
Fix ppc32 build.
Modified:
branches/COMPVBITS/coregrind/m_machine.c
branches/COMPVBITS/coregrind/m_transtab.c
Modified: branches/COMPVBITS/coregrind/m_machine.c
===================================================================
--- branches/COMPVBITS/coregrind/m_machine.c	2005-12-24 15:33:32 UTC (rev 5429)
+++ branches/COMPVBITS/coregrind/m_machine.c	2005-12-24 16:34:49 UTC (rev 5430)
@@ -411,11 +411,11 @@
    /* Either the value must not have been set yet (zero) or we can
       tolerate it being set to the same value multiple times, as the
       stack scanning logic in m_main is a bit stupid. */
-   vg_assert(vai.ppc32_cache_line_szB == 0
-             || vai.ppc32_cache_line_szB == szB);
+   vg_assert(vai.ppc_cache_line_szB == 0
+             || vai.ppc_cache_line_szB == szB);

    vg_assert(szB == 32 || szB == 128);
-   vai.ppc32_cache_line_szB = szB;
+   vai.ppc_cache_line_szB = szB;
 }
 #endif

Modified: branches/COMPVBITS/coregrind/m_transtab.c
===================================================================
--- branches/COMPVBITS/coregrind/m_transtab.c	2005-12-24 15:33:32 UTC (rev 5429)
+++ branches/COMPVBITS/coregrind/m_transtab.c	2005-12-24 16:34:49 UTC (rev 5430)
@@ -742,7 +742,7 @@
    VexArchInfo vai;

    VG_(machine_get_VexArchInfo)( NULL, &vai );
-   cls = vai.ppc32_cache_line_szB;
+   cls = vai.ppc_cache_line_szB;

    /* Stay sane .. */
    vg_assert(cls == 32 || cls == 128);
From: <sv...@va...> - 2005-12-24 15:33:35
Author: sewardj
Date: 2005-12-24 15:33:32 +0000 (Sat, 24 Dec 2005)
New Revision: 5429
Log:
Fix read-after-free in VG_(HT_destruct). This fixes
memcheck/tests/mempools. Thanks to Jeroen Witmond for tracking it
down.
Modified:
trunk/coregrind/m_hashtable.c
trunk/docs/internals/3_1_BUGSTATUS.txt
Modified: trunk/coregrind/m_hashtable.c
===================================================================
--- trunk/coregrind/m_hashtable.c	2005-12-24 12:55:48 UTC (rev 5428)
+++ trunk/coregrind/m_hashtable.c	2005-12-24 15:33:32 UTC (rev 5429)
@@ -234,11 +234,12 @@

 void VG_(HT_destruct)(VgHashTable table)
 {
-   UInt i;
-   VgHashNode* node;
+   UInt i;
+   VgHashNode *node, *node_next;

    for (i = 0; i < table->n_chains; i++) {
-      for (node = table->chains[i]; node != NULL; node = node->next) {
+      for (node = table->chains[i]; node != NULL; node = node_next) {
+         node_next = node->next;
          VG_(free)(node);
       }
    }

Modified: trunk/docs/internals/3_1_BUGSTATUS.txt
===================================================================
--- trunk/docs/internals/3_1_BUGSTATUS.txt	2005-12-24 12:55:48 UTC (rev 5428)
+++ trunk/docs/internals/3_1_BUGSTATUS.txt	2005-12-24 15:33:32 UTC (rev 5429)
@@ -23,7 +23,7 @@
 pending pending 118274 amd64: 0xDD #7 (fnsave)
 pending pending 118466 add %r,%r mishandled by memcheck
 pending pending n-i-bz VALGRIND_COUNT_LEAKS arg types (Olly Betts)
-pending pending n-i-bz memcheck/tests/mempool reads freed memory
+v5429 pending n-i-bz memcheck/tests/mempool reads freed memory
 v5366/67/70 pending n-i-bz AshleyP's custom-allocator assertion
 vx1501 pending n-i-bz Dirk strict-aliasing stuff
 v5368 pending n-i-bz More space for debugger cmd line (Dan Thaler)
From: <sv...@va...> - 2005-12-24 13:14:17
Author: cerion
Date: 2005-12-24 13:14:11 +0000 (Sat, 24 Dec 2005)
New Revision: 1511
Log:
Comment only changes - misc refs to ppc32 changed to ppc.
Modified:
trunk/priv/guest-generic/bb_to_IR.h
trunk/priv/guest-ppc/ghelpers.c
trunk/priv/guest-ppc/toIR.c
trunk/priv/host-ppc/hdefs.c
trunk/pub/libvex_ir.h
Modified: trunk/priv/guest-generic/bb_to_IR.h
===================================================================
--- trunk/priv/guest-generic/bb_to_IR.h	2005-12-24 12:39:47 UTC (rev 1510)
+++ trunk/priv/guest-generic/bb_to_IR.h	2005-12-24 13:14:11 UTC (rev 1511)
@@ -50,7 +50,7 @@

 /* This defines stuff needed by the guest insn disassemblers.
    It's a bit circular; is imported by
-   - the guest-specific toIR.c files (guest-{x86,amd64,ppc32,arm}/toIR.c)
+   - the guest-specific toIR.c files (guest-{x86,amd64,ppc,arm}/toIR.c)
    - the generic disassembly driver (bb_to_IR.c)
    - vex_main.c
 */

Modified: trunk/priv/guest-ppc/ghelpers.c
===================================================================
--- trunk/priv/guest-ppc/ghelpers.c	2005-12-24 12:39:47 UTC (rev 1510)
+++ trunk/priv/guest-ppc/ghelpers.c	2005-12-24 13:14:11 UTC (rev 1511)
@@ -604,7 +604,7 @@


 /*-----------------------------------------------------------*/
-/*--- Describing the ppc32 guest state, for the benefit ---*/
+/*--- Describing the ppc guest state, for the benefit ---*/
 /*--- of iropt and instrumenters. ---*/
 /*-----------------------------------------------------------*/

@@ -614,7 +614,7 @@

    By default we enforce precise exns for guest R1 (stack pointer),
    CIA (current insn address) and LR (link register). These are the
-   minimum needed to extract correct stack backtraces from ppc32
+   minimum needed to extract correct stack backtraces from ppc
    code. [[NB: not sure if keeping LR up to date is actually
    necessary.]]
 */

Modified: trunk/priv/guest-ppc/toIR.c
===================================================================
--- trunk/priv/guest-ppc/toIR.c	2005-12-24 12:39:47 UTC (rev 1510)
+++ trunk/priv/guest-ppc/toIR.c	2005-12-24 13:14:11 UTC (rev 1511)
@@ -1677,7 +1677,7 @@
 /* Incoming oldca is assumed to hold the values 0 or 1 only. This
    seems reasonable given that it's always generated by
    getXER_CA32(), which masks it accordingly. In any case it being
-   0 or 1 is an invariant of the ppc32 guest state representation;
+   0 or 1 is an invariant of the ppc guest state representation;
    if it has any other value, that invariant has been violated. */

 switch (op) {
@@ -1795,7 +1795,7 @@
 /* Incoming oldca is assumed to hold the values 0 or 1 only. This
    seems reasonable given that it's always generated by
    getXER_CA32(), which masks it accordingly. In any case it being
-   0 or 1 is an invariant of the ppc32 guest state representation;
+   0 or 1 is an invariant of the ppc guest state representation;
    if it has any other value, that invariant has been violated. */

 switch (op) {

Modified: trunk/priv/host-ppc/hdefs.c
===================================================================
--- trunk/priv/host-ppc/hdefs.c	2005-12-24 12:39:47 UTC (rev 1510)
+++ trunk/priv/host-ppc/hdefs.c	2005-12-24 13:14:11 UTC (rev 1511)
@@ -2109,7 +2109,7 @@
 }


-/* Generate ppc32 spill/reload instructions under the direction of the
+/* Generate ppc spill/reload instructions under the direction of the
    register allocator. Note it's critical these don't write the
    condition codes. */
 PPCInstr* genSpill_PPC ( HReg rreg, UShort offsetB, Bool mode64 )
@@ -2161,7 +2161,7 @@
 }


-/* --------- The ppc32 assembler (bleh.) --------- */
+/* --------- The ppc assembler (bleh.) --------- */

 static UInt iregNo ( HReg r, Bool mode64 )
 {
@@ -2203,7 +2203,7 @@
    return p;
 }

-/* The following mkForm[...] functions refer to PPC32 instruction forms
+/* The following mkForm[...] functions refer to ppc instruction forms
    as per PPC32 p576
 */

@@ -2530,7 +2530,7 @@
    Note that buf is not the insn's final place, and therefore it is
    imperative to emit position-independent code.

-   Note, dispatch should always be NULL since ppc32/ppc64 backends
+   Note, dispatch should always be NULL since ppc32/64 backends
    use a call-return scheme to get from the dispatcher to generated
    code and back.
 */

Modified: trunk/pub/libvex_ir.h
===================================================================
--- trunk/pub/libvex_ir.h	2005-12-24 12:39:47 UTC (rev 1510)
+++ trunk/pub/libvex_ir.h	2005-12-24 13:14:11 UTC (rev 1511)
@@ -472,7 +472,7 @@
    /* binary */
    Iop_Add32Fx4, Iop_Sub32Fx4, Iop_Mul32Fx4, Iop_Div32Fx4,
    Iop_Max32Fx4, Iop_Min32Fx4,
-   /* Note: For the following compares, the ppc32 front-end assumes a
+   /* Note: For the following compares, the ppc front-end assumes a
       nan in a lane of either argument returns zero for that lane. */
    Iop_CmpEQ32Fx4, Iop_CmpLT32Fx4, Iop_CmpLE32Fx4, Iop_CmpUN32Fx4,
    Iop_CmpGT32Fx4, Iop_CmpGE32Fx4,
@@ -857,7 +857,7 @@
    Ijk_TInval, /* Invalidate translations before continuing. */
    /* Unfortunately, various guest-dependent syscall kinds. They
       all mean: do a syscall before continuing. */
-   Ijk_Sys_syscall, /* amd64 'syscall', ppc32 'sc' */
+   Ijk_Sys_syscall, /* amd64 'syscall', ppc 'sc' */
    Ijk_Sys_int32, /* amd64/x86 'int $0x20' */
    Ijk_Sys_int128, /* amd64/x86 'int $0x80' */
    Ijk_Sys_sysenter /* x86 'sysenter'. guest_EIP becomes
From: <sv...@va...> - 2005-12-24 12:55:58
Author: cerion
Date: 2005-12-24 12:55:48 +0000 (Sat, 24 Dec 2005)
New Revision: 5428
Log:
Tidy up ppc32 jm-insns test some more.
In particular, hide the function pointer setup stuff, for the test loops, in init_function()
Modified:
trunk/none/tests/ppc32/jm-insns.c
trunk/none/tests/ppc32/jm-int.stdout.exp
Modified: trunk/none/tests/ppc32/jm-insns.c
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
--- trunk/none/tests/ppc32/jm-insns.c 2005-12-24 03:10:56 UTC (rev 5427)
+++ trunk/none/tests/ppc32/jm-insns.c 2005-12-24 12:55:48 UTC (rev 5428)
@@ -105,26 +105,26 @@
*
* Example:
* extern void test_addi (void);
- * asm(".text\n"
- * "test_addi:\n"
- * "\taddi 17, 14, 0\n"
- * "\tblr\n"
- * ".previous\n"
- * );
+ * asm(".section \".text\"\n"
+ * " .align 2\n"
+ * " .type test_addi,@function\n"
+ * "test_addi:\n"
+ * " addi\n"
+ * " blr\n"
+ * " .previous\n"
+ * );
*
* We are interested only in:
- * "\taddi 17, 14, 0\n"
- * "\tblr\n"
+ * " addi 17, 14, 0\n"
+ * " blr\n"
*
* In a loop test, we may see:
* uint32_t func_buf[2]; // our new stack based 'function'
- * uint32_t *p; // ptr to access insns by idx
* for imm... // loop over imm
- * p =3D (void *)func;
- * func_buf[1] =3D p[1]; // copy 'blr' to func_buf[1]
- * patch_op_imm16(func_buf, p, imm); // patched 'addi' -> func_buf[0]
- * func =3D (void *)func_buf; // ptr to stack based insns
- * (*func)(); // exec our rewritten code
+ * init_function( &func, func_buf ); // copy insns, set func ptr
+ * patch_op_imm16(&func_buf[0], imm); // patch 'addi' insn
+ * ...
+ * (*func)(); // exec our rewritten code
*
* patch_op_imm16() itself simply takes the uint32_t insn and overwrites
* the immediate field with the new value (which, for 'addi', is the
@@ -135,6 +135,9 @@
*
* after patch_op_imm16(), func_buf[0] becomes:
* 0x3A2E0009 =3D> addi 17, 14, 9
+ *
+ * Note: init_function() needs to be called on every iteration
+ * - don't ask me why!
*/
=20
=20
@@ -161,16 +164,22 @@
=20
#include <stdint.h>
=20
+
+/* Something of the same size as void*, so can be safely be coerced
+ to/from a pointer type. Also same size as the host's gp registers. */
+typedef uint32_t HWord_t;
+
+
register double f14 __asm__ ("f14");
register double f15 __asm__ ("f15");
register double f16 __asm__ ("f16");
register double f17 __asm__ ("f17");
register double f18 __asm__ ("f18");
-register uint32_t r14 __asm__ ("r14");
-register uint32_t r15 __asm__ ("r15");
-register uint32_t r16 __asm__ ("r16");
-register uint32_t r17 __asm__ ("r17");
-register uint32_t r18 __asm__ ("r18");
+register HWord_t r14 __asm__ ("r14");
+register HWord_t r15 __asm__ ("r15");
+register HWord_t r16 __asm__ ("r16");
+register HWord_t r17 __asm__ ("r17");
+register HWord_t r18 __asm__ ("r18");
=20
#if defined (HAS_ALTIVEC)
# include <altivec.h>
@@ -187,6 +196,7 @@
#include <malloc.h>
=20
=20
+
#define ASSEMBLY_FUNC(__fname, __insn) \
asm(".section \".text\"\n" \
"\t.align 2\n" \
@@ -3703,7 +3713,7 @@
static double *fargs;
static int nb_fargs;
static int nb_normal_fargs;
-static uint32_t *iargs;
+static HWord_t *iargs;
static int nb_iargs;
static uint16_t *ii16;
static int nb_ii16;
@@ -4101,7 +4111,8 @@
static void test_int_three_args (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i, j, k;
=20
for (i=3D0; i<nb_iargs; i++) {
@@ -4141,11 +4152,12 @@
static void test_int_two_args (const char* name, test_func_t func,
uint32_t test_flags)
{
- volatile uint32_t res, flags, xer, xer_orig, tmpcr, tmpxer;
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, xer_orig, tmpcr, tmpxer;
int i, j, is_div;
=20
// catches div, divwu, divo, divwu, divwuo, and . variants
- is_div =3D NULL !=3D strstr(name, "divw");
+ is_div =3D strstr(name, "divw") !=3D NULL;
=20
xer_orig =3D 0x00000000;
redo:
@@ -4192,7 +4204,8 @@
static void test_int_one_arg (const char* name, test_func_t func,
uint32_t test_flags)
{
- volatile uint32_t res, flags, xer, xer_orig, tmpcr, tmpxer;
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, xer_orig, tmpcr, tmpxer;
int i;
=20
xer_orig =3D 0x00000000;
@@ -4231,10 +4244,10 @@
=20
static inline void invalidate_icache ( void *ptr, int nbytes )
{
- unsigned int startaddr =3D (unsigned int) ptr;
- unsigned int endaddr =3D startaddr + nbytes;
- unsigned int cls =3D 32; /*VG_(cache_line_size_ppc32);*/
- unsigned int addr;
+ HWord_t startaddr =3D (HWord_t) ptr;
+ HWord_t endaddr =3D startaddr + nbytes;
+ HWord_t cls =3D 32; /*VG_(cache_line_size_ppc32);*/
+ HWord_t addr;
=20
startaddr &=3D ~(cls - 1);
for (addr =3D startaddr; addr < endaddr; addr +=3D cls)
@@ -4247,53 +4260,53 @@
=20
/* for god knows what reason, if this isn't inlined, the
program segfaults. */
-static inline void _patch_op_imm (void *out, void *in,
- uint16_t imm, int sh, int len)
+static inline
+void _patch_op_imm (uint32_t *p_insn, uint16_t imm, int sh, int len)
{
- volatile uint32_t *p, *q;
- =20
- p =3D out;
- q =3D in;
- *p =3D (*q & ~(((1 << len) - 1) << sh)) | ((imm & ((1 << len) - 1)) <=
< sh);
+ uint32_t mask =3D ((1 << len) - 1) << sh;
+ *p_insn =3D (*p_insn & ~mask) | ((imm<<sh) & mask);
}
=20
-static inline void patch_op_imm (void *out, void *in,
- uint16_t imm, int sh, int len)
+static inline
+void patch_op_imm (uint32_t* p_insn, uint16_t imm, int sh, int len)
{
- volatile uint32_t *p;
- =20
- p =3D out;
- _patch_op_imm(out, in, imm, sh, len);
- invalidate_icache(out, 4);
+ _patch_op_imm(p_insn, imm, sh, len);
+ invalidate_icache(p_insn, 4);
}
=20
-static inline void patch_op_imm16 (void *out, void *in, uint16_t imm)
+static inline
+void patch_op_imm16 (uint32_t *p_insn, uint16_t imm)
{
- patch_op_imm(out, in, imm, 0, 16);
+ patch_op_imm(p_insn, imm, 0, 16);
}
=20
+
+static inline
+void init_function( test_func_t *p_func, uint32_t func_buf[] )
+{
+ uint32_t *p;
+ p =3D (uint32_t *)*p_func;
+ func_buf[0] =3D p[0];
+ func_buf[1] =3D p[1];
+ *p_func =3D (void *)func_buf;
+}
+
+
static void test_int_one_reg_imm16 (const char* name,
test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
+ uint32_t func_buf[2];
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i, j;
- =20
+
for (i=3D0; i<nb_iargs; i++) {
for (j=3D0; j<nb_ii16; j++) {
- p =3D (void *)func;
-#if 0
- printf("copy func %s from %p to %p (%08x %08x)\n",
- name, func, func_buf, p[0], p[1]);
-#endif
- func_buf[1] =3D p[1];
- patch_op_imm16(func_buf, p, ii16[j]);
- func =3D (void *)func_buf;
-#if 0
- printf(" =3D> func %s from %p to %p (%08x %08x)\n",
- name, func, func_buf, func_buf[0], func_buf[1]);
-#endif
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], ii16[j]);
+
r14 =3D iargs[i];
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
@@ -4340,28 +4353,31 @@
static void rlwi_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
- int i, j, k, l;
+ uint32_t func_buf[2];
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
+ int i, j, k, l, arg_step;
=20
- int arg_step =3D (arg_list_size =3D=3D 0) ? 31 : 3;
+ arg_step =3D (arg_list_size =3D=3D 0) ? 31 : 3;
=20
for (i=3D0; i<nb_iargs; i++) {
for (j=3D0; j<32; j+=3Darg_step) {
for (k=3D0; k<32; k+=3Darg_step) {
for (l=3D0; l<32; l+=3Darg_step) {
- p =3D (void *)func;
- func_buf[1] =3D p[1];
- _patch_op_imm(func_buf, p, j, 11, 5);
- _patch_op_imm(func_buf, p, k, 6, 5);
- patch_op_imm(func_buf, p, l, 1, 5);
- func =3D (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ _patch_op_imm(&func_buf[0], j, 11, 5);
+ _patch_op_imm(&func_buf[0], k, 6, 5);
+ patch_op_imm(&func_buf[0], l, 1, 5);
+
r14 =3D iargs[i];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr =3D r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer =3D r18;
+
/* Set up flags for test */
r18 =3D 0;
__asm__ __volatile__ ("mtcr 18");
@@ -4372,12 +4388,13 @@
__asm__ __volatile__ ("mfxer 18");
xer =3D r18;
res =3D r17;
+
/* Restore flags */
r18 =3D tmpcr;
__asm__ __volatile__ ("mtcr 18");
r18 =3D tmpxer;
__asm__ __volatile__ ("mtxer 18");
- printf("%s %08x, %d, %d, %d =3D> %08x (%08x %08x)\n",
+ printf("%s %08x, %2d, %2d, %2d =3D> %08x (%08x %08x)\n",
name, iargs[i], j, k, l, res, flags, xer);
}
if (verbose) printf("\n");
@@ -4389,28 +4406,31 @@
static void rlwnm_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
- int i, j, k, l;
+ uint32_t func_buf[2];
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
+ int i, j, k, l, arg_step;
=20
- int arg_step =3D (arg_list_size =3D=3D 0) ? 31 : 3;
+ arg_step =3D (arg_list_size =3D=3D 0) ? 31 : 3;
=20
for (i=3D0; i<nb_iargs; i++) {
for (j=3D0; j<nb_iargs; j++) {
for (k=3D0; k<32; k+=3Darg_step) {
for (l=3D0; l<32; l+=3Darg_step) {
- p =3D (void *)func;
- func_buf[1] =3D p[1];
- _patch_op_imm(func_buf, p, k, 6, 5);
- patch_op_imm(func_buf, p, l, 1, 5);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ _patch_op_imm(&func_buf[0], k, 6, 5);
+ patch_op_imm(&func_buf[0], l, 1, 5);
+
r14 = iargs[i];
r15 = iargs[j];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer = r18;
+
/* Set up flags for test */
r18 = 0;
__asm__ __volatile__ ("mtcr 18");
@@ -4421,12 +4441,13 @@
__asm__ __volatile__ ("mfxer 18");
xer = r18;
res = r17;
+
/* Restore flags */
r18 = tmpcr;
__asm__ __volatile__ ("mtcr 18");
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");
- printf("%s %08x, %08x, %d, %d => %08x (%08x %08x)\n",
+ printf("%s %08x, %08x, %2d, %2d => %08x (%08x %08x)\n",
name, iargs[i], iargs[j], k, l, res, flags, xer);
}
if (verbose) printf("\n");
@@ -4438,24 +4459,27 @@
static void srawi_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
- int i, j;
+ uint32_t func_buf[2];
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
+ int i, j, arg_step;

- int arg_step = (arg_list_size == 0) ? 31 : 1;
+ arg_step = (arg_list_size == 0) ? 31 : 1;

for (i=0; i<nb_iargs; i++) {
for (j=0; j<32; j+=arg_step) {
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, j, 11, 5);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm(&func_buf[0], j, 11, 5);
+
r14 = iargs[i];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer = r18;
+
/* Set up flags for test */
r18 = 0;
__asm__ __volatile__ ("mtcr 18");
@@ -4466,12 +4490,13 @@
__asm__ __volatile__ ("mfxer 18");
xer = r18;
res = r17;
+
/* Restore flags */
r18 = tmpcr;
__asm__ __volatile__ ("mtcr 18");
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");
- printf("%s %08x, %d => %08x (%08x %08x)\n",
+ printf("%s %08x, %2d => %08x (%08x %08x)\n",
name, iargs[i], j, res, flags, xer);
}
if (verbose) printf("\n");
@@ -4481,26 +4506,29 @@
static void mcrf_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
volatile uint32_t flags, xer, tmpcr, tmpxer;
- int i, j, k;
+ int i, j, k, arg_step;

- int arg_step = (arg_list_size == 0) ? 7 : 1;
+
+ arg_step = (arg_list_size == 0) ? 7 : 1;

for (i=0; i<nb_iargs; i++) {
for (j=0; j<8; j+=arg_step) {
for (k=0; k<8; k+=arg_step) {
- p = (void *)func;
- func_buf[1] = p[1];
- _patch_op_imm(func_buf, p, j, 23, 3);
- patch_op_imm(func_buf, p, k, 18, 3);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ _patch_op_imm(&func_buf[0], j, 23, 3);
+ patch_op_imm(&func_buf[0], k, 18, 3);
+
r14 = iargs[i];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer = r18;
+
/* Set up flags for test */
r18 = 0;
__asm__ __volatile__ ("mtcr 14");
@@ -4510,6 +4538,7 @@
flags = r18;
__asm__ __volatile__ ("mfxer 18");
xer = r18;
+
/* Restore flags */
r18 = tmpcr;
__asm__ __volatile__ ("mtcr 18");
@@ -4533,25 +4562,27 @@
static void mcrxr_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
volatile uint32_t flags, xer, tmpcr, tmpxer;
- int i, j, k;
+ int i, j, k, arg_step;

- int arg_step = 1; //(arg_list_size == 0) ? 7 : 1;
+ arg_step = 1; //(arg_list_size == 0) ? 7 : 1;

for (i=0; i<16; i+=arg_step) {
j = i << 28;
for (k=0; k<8; k+=arg_step) {
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, k, 23, 3);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm(&func_buf[0], k, 23, 3);
+
r14 = j;
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer = r18;
+
/* Set up flags for test */
r18 = 0;
__asm__ __volatile__ ("mtcr 18");
@@ -4561,6 +4592,7 @@
flags = r18;
__asm__ __volatile__ ("mfxer 18");
xer = r18;
+
/* Restore flags */
r18 = tmpcr;
__asm__ __volatile__ ("mtcr 18");
@@ -4576,7 +4608,8 @@
static void mfcr_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i;

for (i=0; i<nb_iargs; i++) {
@@ -4612,9 +4645,7 @@
{
//volatile uint32_t res, flags, xer, ctr, lr, tmpcr, tmpxer;
int j, k, res;
-
- // Call func, just to stop compiler complaining
- (*func)();
+ func = func; // just to stop compiler complaining

// mtxer followed by mfxer
for (k=0; k<nb_iargs; k++) {
@@ -4625,7 +4656,7 @@
: /*out*/"=r"(res) : /*in*/"r"(j) : /*trashed*/"xer"
);
res &= 0xE000007F; /* rest of the bits are undefined */
- printf("%s: %08x -> mtxer -> mfxer => %08x\n",
+ printf("%s 1 (%08x) -> mtxer -> mfxer => %08x\n",
name, j, res);
}

@@ -4637,7 +4668,7 @@
"\tmflr %0"
: /*out*/"=3Dr"(res) : /*in*/"r"(j) : /*trashed*/"lr"=20
);
- printf("%s: %08x -> mtlr -> mflr =3D> %08x\n",
+ printf("%s 8 (%08x) -> mtlr -> mflr =3D> %08x\n",
name, j, res);
}
=20
@@ -4649,7 +4680,7 @@
"\tmfctr %0"
: /*out*/"=3Dr"(res) : /*in*/"r"(j) : /*trashed*/"ctr"=20
);
- printf("%s: %08x -> mtctr -> mfctr =3D> %08x\n",
+ printf("%s 9 (%08x) -> mtctr -> mfctr =3D> %08x\n",
name, j, res);
}
=20
@@ -4790,7 +4821,8 @@
// 1) TBU won't change for a while
// 2) TBL will have changed every loop iter
=20
- volatile uint32_t res, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t res;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i, j;
=20
i =3D 269;
@@ -4855,24 +4887,26 @@
static void mtcrf_cb (const char* name, test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
volatile uint32_t flags, xer, tmpcr, tmpxer;
- int i, j;
+ int i, j, arg_step;
=20
- int arg_step =3D (arg_list_size =3D=3D 0) ? 99 : 1;
+ arg_step =3D (arg_list_size =3D=3D 0) ? 99 : 1;
=20
for (i=3D0; i<nb_iargs; i++) {
for (j=3D0; j<256; j+=3Darg_step) {
- p =3D (void *)func;
- func_buf[1] =3D p[1];
- patch_op_imm(func_buf, p, j, 12, 8);
- func =3D (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm(&func_buf[0], j, 12, 8);
+
r14 =3D iargs[i];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr =3D r18;
__asm__ __volatile__ ("mfxer 18");
tmpxer =3D r18;
+
/* Set up flags for test */
r18 =3D 0;
__asm__ __volatile__ ("mtcr 18");
@@ -4882,12 +4916,13 @@
flags =3D r18;
__asm__ __volatile__ ("mfxer 18");
xer =3D r18;
+
/* Restore flags */
r18 =3D tmpcr;
__asm__ __volatile__ ("mtcr 18");
r18 =3D tmpxer;
__asm__ __volatile__ ("mtxer 18");
- printf("%s %d, %08x =3D> (%08x %08x)\n",
+ printf("%s %3d, %08x =3D> (%08x %08x)\n",
name, j, iargs[i], flags, xer);
}
if (verbose) printf("\n");
@@ -4899,12 +4934,11 @@
unused uint32_t test_flags)
{
#if 0
- volatile uint32_t flags, xer, ctr, lr, tmpcr, tmpxer;
+ volatile HWord_t ctr, lr;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int j, k;
+ func =3D func; // just to stop compiler complaining
=20
- // Call func, just to stop compiler complaining
- (*func)();
- =20
// mtxer
j = 1;
for (k=0; k<nb_iargs; k++) {
@@ -5151,19 +5185,21 @@
test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t res, rA, flags, xer, tmpcr, tmpxer;
+ uint32_t func_buf[2];
+ volatile HWord_t res, rA;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i, j;

// +ve d
for (i=0; i<nb_iargs; i++) {
- j = i * 4; // offset = i * sizeof(uint32_t)
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, j);
- func = (void *)func_buf;
- r14 = (uint32_t)&iargs[0]; // base reg = start of array
+ j = i * sizeof(HWord_t);

+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], j);
+
+ r14 = (HWord_t)&iargs[0]; // base reg = start of array
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
@@ -5188,20 +5224,21 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- printf("%s %d, (%08x) => %08x, (%08x %08x)\n",
+ printf("%s %2d, (%08x) => %08x (%08x %08x)\n",
name, j, /*&iargs[0], */ iargs[i], res, /*rA, */ flags, xer);
}
if (verbose) printf("\n");

// -ve d
for (i = -nb_iargs+1; i<=0; i++) {
- j = i * 4; // sizeof(uint32_t)
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, j);
- func = (void *)func_buf;
- r14 = (uint32_t)&iargs[nb_iargs-1];
+ j = i * sizeof(HWord_t);

+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], j);
+
+ r14 = (HWord_t)&iargs[nb_iargs-1];
+
/* Save flags */
__asm__ __volatile__ ("mfcr 18");
tmpcr = r18;
@@ -5226,7 +5263,7 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- printf("%s %d, (%08x) => %08x (%08x %08x)\n",
+ printf("%s %2d, (%08x) => %08x (%08x %08x)\n",
name, j, /*&iargs[nb_iargs-1], */ iargs[nb_iargs-1+i], res, /*rA, */ flags, xer);
}
}
@@ -5235,13 +5272,14 @@
test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t res, rA, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t res, rA;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
int i, j;

// +ve d
for (i=0; i<nb_iargs; i++) {
- j = i * 4; // sizeof(uint32_t)
- r14 = (uint32_t)&iargs[0];
+ j = i * sizeof(HWord_t);
+ r14 = (HWord_t)&iargs[0];
r15 = j;

/* Save flags */
@@ -5277,27 +5315,30 @@
test_func_t func,
unused uint32_t test_flags)
{
- uint32_t func_buf[2], *p;
- volatile uint32_t rA, flags, xer, tmpcr, tmpxer;
- int i, j;
- uint32_t *iargs_priv;
+ uint32_t func_buf[2];
+ volatile HWord_t rA;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
+ int i, j, k;
+ HWord_t *iargs_priv;

// private iargs table to store to
- iargs_priv = malloc(nb_iargs * sizeof(uint32_t));
- for (i=0; i<nb_iargs; i++)
- iargs_priv[i] = 0;
+ iargs_priv = malloc(nb_iargs * sizeof(HWord_t));

// __asm__ __volatile__ ("stwu 14,0(15)");

// +ve d
for (i=0; i<nb_iargs; i++) {
- j = i * 4; // sizeof(uint32_t)
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, j);
- func = (void *)func_buf;
+ for (k=0; k<nb_iargs; k++) // clear array
+ iargs_priv[k] = 0;
+
+ j = i * sizeof(HWord_t);
+
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], j);
+
r14 = iargs[i]; // read from iargs
- r15 = (uint32_t)&iargs_priv[0]; // store to r15 + j
+ r15 = (HWord_t)&iargs_priv[0]; // store to r15 + j

/* Save flags */
__asm__ __volatile__ ("mfcr 18");
@@ -5322,20 +5363,24 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- printf("%s %08x, %d => %08x, (%08x %08x)\n",
+ printf("%s %08x, %2d => %08x (%08x %08x)\n",
name, iargs[i], j, /*&iargs_priv[0], */ iargs_priv[i], /*rA, */ flags, xer);
}
if (verbose) printf("\n");

// -ve d
for (i = -nb_iargs+1; i<=0; i++) {
- j = i * 4; // sizeof(uint32_t)
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, j);
- func = (void *)func_buf;
+ for (k=0; k<nb_iargs; k++) // clear array
+ iargs_priv[k] = 0;
+
+ j = i * sizeof(HWord_t);
+
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], j);
+
r14 = iargs[nb_iargs-1+i]; // read from iargs
- r15 = (uint32_t)&iargs_priv[nb_iargs-1]; // store to r15 + j
+ r15 = (HWord_t)&iargs_priv[nb_iargs-1]; // store to r15 + j

/* Save flags */
__asm__ __volatile__ ("mfcr 18");
@@ -5360,7 +5405,7 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- printf("%s %08x, %d => %08x, (%08x %08x)\n",
+ printf("%s %08x, %2d => %08x (%08x %08x)\n",
name, iargs[nb_iargs-1+i], j, /*&iargs_priv[nb_iargs-1], */ iargs_priv[nb_iargs-1+i], /*rA, */ flags, xer);
}
free(iargs_priv);
@@ -5370,19 +5415,21 @@
test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t rA, flags, xer, tmpcr, tmpxer;
- int i, j;
- uint32_t *iargs_priv;
+ volatile HWord_t rA;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
+ int i, j, k;
+ HWord_t *iargs_priv;

// private iargs table to store to
- iargs_priv = malloc(nb_iargs * sizeof(uint32_t));
- for (i=0; i<nb_iargs; i++)
- iargs_priv[i] = 0;
+ iargs_priv = malloc(nb_iargs * sizeof(HWord_t));

for (i=0; i<nb_iargs; i++) {
- j = i * 4; // sizeof(uint32_t)
+ for (k=0; k<nb_iargs; k++) // clear array
+ iargs_priv[k] = 0;
+
+ j = i * sizeof(HWord_t);
r14 = iargs[i]; // read from iargs
- r15 = (uint32_t)&iargs_priv[0]; // store to r15 + j
+ r15 = (HWord_t)&iargs_priv[0]; // store to r15 + j
r16 = j;

/* Save flags */
@@ -5408,7 +5455,7 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- printf("%s %08x, %d => %08x, (%08x %08x)\n",
+ printf("%s %08x, %d => %08x (%08x %08x)\n",
name, iargs[i], /*&iargs_priv[0], */ j, iargs_priv[i], /*rA, */ flags, xer);
}
free(iargs_priv);
@@ -5547,11 +5594,11 @@
double res;
uint64_t u0, ur;
volatile uint32_t flags, tmpcr, tmpxer;
- int i;
+ int i, zap_hi_32bits;

/* if we're testing fctiw or fctiwz, zap the hi 32bits,
as they're undefined */
- unsigned char zap_hi_32bits = strstr(name,"fctiw") ? 1 : 0;
+ zap_hi_32bits = strstr(name,"fctiw") != NULL;

for (i=0; i<nb_fargs; i++) {
u0 = *(uint64_t *)(&fargs[i]);
@@ -5578,7 +5625,7 @@
r18 = tmpxer;
__asm__ __volatile__ ("mtxer 18");

- if (zap_hi_32bits != 0)
+ if (zap_hi_32bits)
ur &= 0xFFFFFFFFULL;

#if defined TEST_FLOAT_FLAGS
@@ -5654,7 +5701,7 @@
test_func_t func,
unused uint32_t test_flags)
{
- uint32_t base, func_buf[2], *p;
+ uint32_t base, func_buf[2];
volatile uint32_t flags, xer, tmpcr, tmpxer;
volatile double src, res;
int i, offs;
@@ -5664,16 +5711,15 @@
offs = i * 8; // offset = i * sizeof(double)
if (i < 0) {
src = fargs[nb_fargs-1 + i];
- base = (uint32_t)&fargs[nb_fargs-1];
+ base = (HWord_t)&fargs[nb_fargs-1];
} else {
src = fargs[i];
- base = (uint32_t)&fargs[0];
+ base = (HWord_t)&fargs[0];
}

- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, offs);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], offs);

// load from fargs[idx] => r14 + offs
r14 = base;
@@ -5718,7 +5764,8 @@
test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t base, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t base;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
volatile double src, res;
int i;

@@ -5727,10 +5774,10 @@
r15 = i * 8; // offset = i * sizeof(double)
if (i < 0) { // base reg = start of array
src = fargs[nb_fargs-1 + i];
- base = (uint32_t)&fargs[nb_fargs-1];
+ base = (HWord_t)&fargs[nb_fargs-1];
} else {
src = fargs[i];
- base = (uint32_t)&fargs[0];
+ base = (HWord_t)&fargs[0];
}

r14 = base;
@@ -5774,20 +5821,22 @@
test_func_t func,
unused uint32_t test_flags)
{
- uint32_t base, func_buf[2], *p;
+ HWord_t base;
+ uint32_t func_buf[2];
volatile uint32_t flags, xer, tmpcr, tmpxer;
double src, *p_dst;
int i, offs;
double *fargs_priv;
int nb_tmp_fargs = nb_fargs;

+
/* if we're storing an fp single-precision, don't want nans
- the vex implementation doesn't like them (yet)
Note: This is actually a bigger problem: the vex implementation
rounds these insns twice. This leads to many rounding errors.
For the small fargs set, however, this doesn't show up.
*/
- if (strstr(name, "stfs"))
+ if (strstr(name, "stfs") != NULL)
nb_tmp_fargs = nb_normal_fargs;


@@ -5800,18 +5849,17 @@
if (i < 0) {
src = fargs [nb_tmp_fargs-1 + i];
p_dst = &fargs_priv[nb_tmp_fargs-1 + i];
- base = (uint32_t)&fargs_priv[nb_tmp_fargs-1];
+ base = (HWord_t)&fargs_priv[nb_tmp_fargs-1];
} else {
src = fargs [i];
p_dst = &fargs_priv[i];
- base = (uint32_t)&fargs_priv[0];
+ base = (HWord_t)&fargs_priv[0];
}
*p_dst = 0; // clear dst

- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm16(func_buf, p, offs);
- func = (void *)func_buf;
+ init_function( &func, func_buf );
+ /* Patch up the instruction */
+ patch_op_imm16(&func_buf[0], offs);

// read from fargs[idx] => f14
// store to fargs_priv[idx] => r15 + offs
@@ -5857,7 +5905,8 @@
test_func_t func,
unused uint32_t test_flags)
{
- volatile uint32_t base, flags, xer, tmpcr, tmpxer;
+ volatile HWord_t base;
+ volatile uint32_t flags, xer, tmpcr, tmpxer;
double src, *p_dst;
int i, offs;
double *fargs_priv;
@@ -5869,7 +5918,7 @@
rounds these insns twice. This leads to many rounding errors.
For the small fargs set, however, this doesn't show up.
*/
- if (strstr(name, "stfs")) // stfs(u)(x)
+ if (strstr(name, "stfs") != NULL) // stfs(u)(x)
nb_tmp_fargs = nb_normal_fargs;


@@ -5883,11 +5932,11 @@
if (i < 0) {
src = fargs [nb_tmp_fargs-1 + i];
p_dst = &fargs_priv[nb_tmp_fargs-1 + i];
- base = (uint32_t)&fargs_priv[nb_tmp_fargs-1];
+ base = (HWord_t)&fargs_priv[nb_tmp_fargs-1];
} else {
src = fargs [i];
p_dst = &fargs_priv[i];
- base = (uint32_t)&fargs_priv[0];
+ base = (HWord_t)&fargs_priv[0];
}
*p_dst = 0; // clear dst

@@ -6252,7 +6301,7 @@
volatile uint32_t flags, tmpcr;
volatile vector unsigned int tmpvscr;
volatile vector unsigned int vec_in1, vec_out, vscr;
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
unsigned int *src1, *dst;
int i,j;
#if defined TEST_VSCR_SAT
@@ -6265,12 +6314,10 @@
for (j=0; j<16; j+=3) {
vec_out = (vector unsigned int){ 0,0,0,0 };

+ init_function( &func, func_buf );
/* Patch up the instruction */
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, j, 16, 5);
- func = (void *)func_buf;
-
+ patch_op_imm(&func_buf[0], j, 16, 5);
+
/* Save flags */
__asm__ __volatile__ ("mfcr %0" : "=r" (tmpcr));
__asm__ __volatile__ ("mfvscr %0" : "=vr" (tmpvscr));
@@ -6322,7 +6369,7 @@
volatile uint32_t flags, tmpcr;
volatile vector unsigned int tmpvscr;
volatile vector unsigned int vec_out, vscr;
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
unsigned int *dst;
int i;
#if defined TEST_VSCR_SAT
@@ -6332,11 +6379,9 @@
for (i=0; i<32; i++) {
vec_out = (vector unsigned int){ 0,0,0,0 };

+ init_function( &func, func_buf );
/* Patch up the instruction */
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, i, 16, 5);
- func = (void *)func_buf;
+ patch_op_imm(&func_buf[0], i, 16, 5);

/* Save flags */
__asm__ __volatile__ ("mfcr %0" : "=r" (tmpcr));
@@ -6381,7 +6426,7 @@
volatile uint32_t flags, tmpcr;
volatile vector unsigned int tmpvscr;
volatile vector unsigned int vec_in1, vec_in2, vec_out, vscr;
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
unsigned int *src1, *src2, *dst;
int i,j,k;
#if defined TEST_VSCR_SAT
@@ -6395,11 +6440,9 @@
for (k=0; k<16; k+=14) {
vec_out = (vector unsigned int){ 0,0,0,0 };

+ init_function( &func, func_buf );
/* Patch up the instruction */
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, k, 6, 4);
- func = (void *)func_buf;
+ patch_op_imm(&func_buf[0], k, 6, 4);

/* Save flags */
__asm__ __volatile__ ("mfcr %0" : "=r" (tmpcr));
@@ -6468,7 +6511,7 @@
vec_out = (vector unsigned int){ 0,0,0,0 };

// make sure start address is 16 aligned - use viargs[0]
- r15 = (uint32_t)&viargs[0];
+ r15 = (HWord_t)&viargs[0];
r14 = i;

/* Save flags */
@@ -6571,16 +6614,16 @@
int i,j, k, do_mask;

do_mask = 0;
- if (strstr(name, "lvebx")) do_mask = 1;
- if (strstr(name, "lvehx")) do_mask = 2;
- if (strstr(name, "lvewx")) do_mask = 4;
+ if (strstr(name, "lvebx") != NULL) do_mask = 1;
+ if (strstr(name, "lvehx") != NULL) do_mask = 2;
+ if (strstr(name, "lvewx") != NULL) do_mask = 4;

for (i=0; i<nb_viargs; i++) {
for (j=0; j<16; j+=7) {
vec_out = (vector unsigned int){ 0,0,0,0 };

// load from viargs array + some dis-alignment
- r15 = (uint32_t)&viargs[0];
+ r15 = (HWord_t)&viargs[0];
r14 = i*16 + j;

/* Save flags */
@@ -6666,7 +6709,7 @@
vec_in = (vector unsigned int)viargs[i];

// store to viargs_priv[0] + some dis-alignment
- r16 = (uint32_t)&viargs_priv[0];
+ r16 = (HWord_t)&viargs_priv[0];
r15 = i*16 + j;

/* Save flags */
@@ -6741,7 +6784,8 @@
bottom byte of the result as it's basically garbage, and differs
between cpus */
unsigned int mask
- = (strstr(name,"vrsqrtefp") || strstr(name,"vrefp"))
+ = (strstr(name,"vrsqrtefp") != NULL ||
+ strstr(name, "vrefp") != NULL)
? 0xFFFFFF00 : 0xFFFFFFFF;

for (i=0; i<nb_vfargs; i++) {
@@ -6937,7 +6981,7 @@
volatile uint32_t flags, tmpcr;
volatile vector unsigned int tmpvscr;
volatile vector unsigned int vec_in, vec_out, vscr;
- uint32_t func_buf[2], *p;
+ uint32_t func_buf[2];
unsigned int *src, *dst;
int i,j;
#if defined TEST_VSCR_SAT
@@ -6950,11 +6994,9 @@
for (j=0; j<32; j+=9) {
vec_out = (vector unsigned int){ 0,0,0,0 };

+ init_function( &func, func_buf );
/* Patch up the instruction */
- p = (void *)func;
- func_buf[1] = p[1];
- patch_op_imm(func_buf, p, j, 16, 5);
- func = (void *)func_buf;
+ patch_op_imm(&func_buf[0], j, 16, 5);

/* Save flags */
__asm__ __volatile__ ("mfcr %0" : "=r" (tmpcr));
Modified: trunk/none/tests/ppc32/jm-int.stdout.exp
===================================================================
--- trunk/none/tests/ppc32/jm-int.stdout.exp	2005-12-24 03:10:56 UTC (rev 5427)
+++ trunk/none/tests/ppc32/jm-int.stdout.exp	2005-12-24 12:55:48 UTC (rev 5428)
@@ -1080,208 +1080,208 @@
nego. ffffffff => 00000001 (40000000 00000000)

PPC logical insns with special forms:
- rlwimi 00000000, 0, 0, 0 => 00000001 (00000000 00000000)
- rlwimi 00000000, 0, 0, 31 => 00000000 (00000000 00000000)
- rlwimi 00000000, 0, 31, 0 => 00000000 (00000000 00000000)
- rlwimi 00000000, 0, 31, 31 => 00000000 (00000000 00000000)
- rlwimi 00000000, 31, 0, 0 => 00000000 (00000000 00000000)
- rlwimi 00000000, 31, 0, 31 => 00000000 (00000000 00000000)
- rlwimi 00000000, 31, 31, 0 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 0, 0, 0 => 00000001 (00000000 00000000)
+ rlwimi 00000000, 0, 0, 31 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 0, 31, 0 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 0, 31, 31 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 31, 0, 0 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 31, 0, 31 => 00000000 (00000000 00000000)
+ rlwimi 00000000, 31, 31, 0 => 00000000 (00000000 00000000)
rlwimi 00000000, 31, 31, 31 => 00000000 (00000000 00000000)
- rlwimi 000f423f, 0, 0, 0 => 00000000 (00000000 00000000)
- rlwimi 000f423f, 0, 0, 31 => 000f423f (00000000 00000000)
- rlwimi 000f423f, 0, 31, 0 => 000f423f (00000000 00000000)
- rlwimi 000f423f, 0, 31, 31 => 000f423f (00000000 00000000)
- rlwimi 000f423f, 31, 0, 0 => 800f423f (00000000 00000000)
- rlwimi 000f423f, 31, 0, 31 => 8007a11f (00000000 00000000)
- rlwimi 000f423f, 31, 31, 0 => 8007a11f (00000000 00000000)
+ rlwimi 000f423f, 0, 0, 0 => 00000000 (00000000 00000000)
+ rlwimi 000f423f, 0, 0, 31 => 000f423f (00000000 00000000)
+ rlwimi 000f423f, 0, 31, 0 => 000f423f (00000000 00000000)
+ rlwimi 000f423f, 0, 31, 31 => 000f423f (00000000 00000000)
+ rlwimi 000f423f, 31, 0, 0 => 800f423f (00000000 00000000)
+ rlwimi 000f423f, 31, 0, 31 => 8007a11f (00000000 00000000)
+ rlwimi 000f423f, 31, 31, 0 => 8007a11f (00000000 00000000)
rlwimi 000f423f, 31, 31, 31 => 8007a11f (00000000 00000000)
- rlwimi ffffffff, 0, 0, 0 => 8007a11f (00000000 00000000)
- rlwimi ffffffff, 0, 0, 31 => ffffffff (00000000 00000000)
- rlwimi ffffffff, 0, 31, 0 => ffffffff (00000000 00000000)
- rlwimi ffffffff, 0, 31, 31 => ffffffff (00000000 00000000)
- rlwimi ffffffff, 31, 0, 0 => ffffffff (00000000 00000000)
- rlwimi ffffffff, 31, 0, 31 => ffffffff (00000000 00000000)
- rlwimi ffffffff, 31, 31, 0 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 0, 0, 0 => 8007a11f (00000000 00000000)
+ rlwimi ffffffff, 0, 0, 31 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 0, 31, 0 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 0, 31, 31 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 31, 0, 0 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 31, 0, 31 => ffffffff (00000000 00000000)
+ rlwimi ffffffff, 31, 31, 0 => ffffffff (00000000 00000000)
rlwimi ffffffff, 31, 31, 31 => ffffffff (00000000 00000000)

- rlwinm 00000000, 0, 0, 0 => 00000000 (00000000 00000000)
- rlwinm 00000000, 0, 0, 31 => 00000000 (00000000 00000000)
- rlwinm 00000000, 0, 31, 0 => 00000000 (00000000 00000000)
- rlwinm 00000000, 0, 31, 31 => 00000000 (00000000 00000000)
- rlwinm 00000000, 31, 0, 0 => 00000000 (00000000 00000000)
- rlwinm 00000000, 31, 0, 31 => 00000000 (00000000 00000000)
- rlwinm 00000000, 31, 31, 0 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 0, 0, 0 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 0, 0, 31 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 0, 31, 0 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 0, 31, 31 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 31, 0, 0 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 31, 0, 31 => 00000000 (00000000 00000000)
+ rlwinm 00000000, 31, 31, 0 => 00000000 (00000000 00000000)
rlwinm 00000000, 31, 31, 31 => 00000000 (00000000 00000000)
- rlwinm 000f423f, 0, 0, 0 => 00000000 (00000000 00000000)
- rlwinm 000f423f, 0, 0, 31 => 000f423f (00000000 00000000)
- rlwinm 000f423f, 0, 31, 0 => 00000001 (00000000 00000000)
- rlwinm 000f423f, 0, 31, 31 => 00000001 (00000000 00000000)
- rlwinm 000f423f, 31, 0, 0 => 80000000 (00000000 00000000)
- rlwinm 000f423f, 31, 0, 31 => 8007a11f (00000000 00000000)
- rlwinm 000f423f, 31, 31, 0 => 80000001 (00000000 00000000)
+ rlwinm 000f423f, 0, 0, 0 => 00000000 (00000000 00000000)
+ rlwinm 000f423f, 0, 0, 31 => 000f423f (00000000 00000000)
+ rlwinm 000f423f, 0, 31, 0 => 00000001 (00000000 00000000)
+ rlwinm 000f423f, 0, 31, 31 => 00000001 (00000000 00000000)
+ rlwinm 000f423f, 31, 0, 0 => 80000000 (00000000 00000000)
+ rlwinm 000f423f, 31, 0, 31 => 8007a11f (00000000 00000000)
+ rlwinm 000f423f, 31, 31, 0 => 80000001 (00000000 00000000)
rlwinm 000f423f, 31, 31, 31 => 00000001 (00000000 00000000)
- rlwinm ffffffff, 0, 0, 0 => 80000000 (00000000 00000000)
- rlwinm ffffffff, 0, 0, 31 => ffffffff (00000000 00000000)
- rlwinm ffffffff, 0, 31, 0 => 80000001 (00000000 00000000)
- rlwinm ffffffff, 0, 31, 31 => 00000001 (00000000 00000000)
- rlwinm ffffffff, 31, 0, 0 => 80000000 (00000000 00000000)
- rlwinm ffffffff, 31, 0, 31 => ffffffff (00000000 00000000)
- rlwinm ffffffff, 31, 31, 0 => 80000001 (00000000 00000000)
+ rlwinm ffffffff, 0, 0, 0 => 80000000 (00000000 00000000)
+ rlwinm ffffffff, 0, 0, 31 => ffffffff (00000000 00000000)
+ rlwinm ffffffff, 0, 31, 0 => 80000001 (00000000 00000000)
+ rlwinm ffffffff, 0, 31, 31 => 00000001 (00000000 00000000)
+ rlwinm ffffffff, 31, 0, 0 => 80000000 (00000000 00000000)
+ rlwinm ffffffff, 31, 0, 31 => ffffffff (00000000 00000000)
+ rlwinm ffffffff, 31, 31, 0 => 80000001 (00000000 00000000)
rlwinm ffffffff, 31, 31, 31 => 00000001 (00000000 00000000)

- rlwnm 00000000, 00000000, 0, 0 => 00000000 (00000000 00000000)
- rlwnm 00000000, 00000000, 0, 31 => 00000000 (00000000 00000000)
- rlwnm 00000000, 00000000, 31, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 00000000, 0, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 00000000, 0, 31 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 00000000, 31, 0 => 00000000 (00000000 00000000)
rlwnm 00000000, 00000000, 31, 31 => 00000000 (00000000 00000000)
- rlwnm 00000000, 000f423f, 0, 0 => 00000000 (00000000 00000000)
- rlwnm 00000000, 000f423f, 0, 31 => 00000000 (00000000 00000000)
- rlwnm 00000000, 000f423f, 31, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 000f423f, 0, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 000f423f, 0, 31 => 00000000 (00000000 00000000)
+ rlwnm 00000000, 000f423f, 31, 0 => 00000000 (00000000 00000000)
rlwnm 00000000, 000f423f, 31, 31 => 00000000 (00000000 00000000)
- rlwnm 00000000, ffffffff, 0, 0 => 00000000 (00000000 00000000)
- rlwnm 00000000, ffffffff, 0, 31 => 00000000 (00000000 00000000)
- rlwnm 00000000, ffffffff, 31, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, ffffffff, 0, 0 => 00000000 (00000000 00000000)
+ rlwnm 00000000, ffffffff, 0, 31 => 00000000 (00000000 00000000)
+ rlwnm 00000000, ffffffff, 31, 0 => 00000000 (00000000 00000000)
rlwnm 00000000, ffffffff, 31, 31 => 00000000 (00000000 00000000)
- rlwnm 000f423f, 00000000, 0, 0 => 00000000 (00000000 00000000)
- rlwnm 000f423f, 00000000, 0, 31 => 000f423f (00000000 00000000)
- rlwnm 000f423f, 00000000, 31, 0 => 00000001 (00000000 00000000)
+ rlwnm 000f423f, 00000000, 0, 0 => 00000000 (00000000 00000000)
+ rlwnm 000f423f, 00000000, 0, 31 => 000f423f (00000000 00000000)
+ rlwnm 000f423f, 00000000, 31, 0 => 00000001 (00000000 00000000)
rlwnm 000f423f, 00000000, 31, 31 => 00000001 (00000000 00000000)
- rlwnm 000f423f, 000f423f, 0, 0 => 80000000 (00000000 00000000)
- rlwnm 000f423f, 000f423f, 0, 31 => 8007a11f (00000000 00000000)
- rlwnm 000f423f, 000f423f, 31, 0 => 80000001 (00000000 00000000)
+ rlwnm 000f423f, 000f423f, 0, 0 => 80000000 (00000000 00000000)
+ rlwnm 000f423f, 000f423f, 0, 31 => 8007a11f (00000000 00000000)
+ rlwnm 000f423f, 000f423f, 31, 0 => 80000001 (00000000 00000000)
rlwnm 000f423f, 000f423f, 31, 31 => 00000001 (00000000 00000000)
- rlwnm 000f423f, ffffffff, 0, 0 => 80000000 (00000000 00000000)
- rlwnm 000f423f, ffffffff, 0, 31 => 8007a11f (00000000 00000000)
- rlwnm 000f423f, ffffffff, 31, 0 => 80000001 (00000000 00000000)
+ rlwnm 000f423f, ffffffff, 0, 0 => 80000000 (00000000 00000000)
+ rlwnm 000f423f, ffffffff, 0, 31 => 8007a11f (00000000 00000000)
+ rlwnm 000f423f, ffffffff, 31, 0 => 80000001 (00000000 00000000)
rlwnm 000f423f, ffffffff, 31, 31 => 00000001 (00000000 00000000)
- rlwnm ffffffff, 00000000, 0, 0 => 80000000 (00000000 00000000)
- rlwnm ffffffff, 00000000, 0, 31 => ffffffff (00000000 00000000)
- rlwnm ffffffff, 00000000, 31, 0 => 80000001 (00000000 00000000)
+ rlwnm ffffffff, 00000000, 0, 0 => 80000000 (00000000 00000000)
+ rlwnm ffffffff, 00000000, 0, 31 => ffffffff (00000000 00000000)
+ rlwnm ffffffff, 00000000, 31, 0 => 80000001 (00000000 00000000)
rlwnm ffffffff, 00000000, 31, 31 => 00000001 (00000000 00000000)
- rlwnm ffffffff, 000f423f, 0, 0 => 80000000 (00000000 00000000)
- rlwnm ffffffff, 000f423f, 0, 31 => ffffffff (00000000 00000000)
- rlwnm ffffffff, 000f423f, 31, 0 => 80000001 (00000000 00000000)
+ rlwnm ffffffff, 000f423f, 0, 0 => 80000000 (00000000 00000000)
+ rlwnm ffffffff, 000f423f, 0, 31 => ffffffff (00000000 00000000)
+ rlwnm ffffffff, 000f423f, 31, 0 => 80000001 (00000000 00000000)
rlwnm ffffffff, 000f423f, 31, 31 => 00000001 (00000000 00000000)
- rlwnm ffffffff, ffffffff, 0, 0 => 80000000 (00000000 00000000)
- rlwnm ffffffff, ffffffff, 0, 31 => ffffffff (00000000 00000000)
- rlwnm ffffffff, ffffffff, 31, 0 => 80000001 (00000000 00000000)
+ rlwnm ffffffff, ffffffff, 0, 0 => 80000000 (00000000 00000000)
+ rlwnm ffffffff, ffffffff, 0, 31 => ffffffff (00000000 00000000)
+ rlwnm ffffffff, ffffffff, 31, 0 => 80000001 (00000000 00000000)
rlwnm ffffffff, ffffffff, 31, 31 => 00000001 (00000000 00000000)
=20
- srawi 00000000, 0 =3D> 00000000 (00000000 00000000)
+ srawi 00000000, 0 =3D> 00000000 (00000000 00000000)
srawi 00000000, 31 =3D> 00000000 (00000000 00000000)
- srawi 000f423f, 0 =3D> 000f423f (00000000 00000000)
+ srawi 000f423f, 0 =3D> 000f423f (00000000 00000000)
srawi 000f423f, 31 =3D> 00000000 (00000000 00000000)
- srawi ffffffff, 0 =3D> ffffffff (00000000 00000000)
+ srawi ffffffff, 0 =3D> ffffffff (00000000 00000000)
srawi ffffffff, 31 =3D> ffffffff (00000000 20000000)
=20
mfcr (00000000) =3D> 00000000 (00000000 00000000)
mfcr (000f423f) =3D> 000f423f (000f423f 00000000)
mfcr (ffffffff) =3D> ffffffff (ffffffff 00000000)
=20
- mfspr: 00000000 -> mtxer -> mfxer =3D> 00000000
- mfspr: 000f423f -> mtxer -> mfxer =3D> 0000003f
- mfspr: ffffffff -> mtxer -> mfxer =3D> e000007f
- mfspr: 00000000 -> mtlr -> mflr =3D> 00000000
- mfspr: 000f423f -> mtlr -> mflr =3D> 000f423f
- mfspr: ffffffff -> mtlr -> mflr =3D> ffffffff
- mfspr: 00000000 -> mtctr -> mfctr =3D> 00000000
- mfspr: 000f423f -> mtctr -> mfctr =3D> 000f423f
- mfspr: ffffffff -> mtctr -> mfctr =3D> ffffffff
+ mfspr 1 (00000000) -> mtxer -> mfxer =3D> 00000000
+ mfspr 1 (000f423f) -> mtxer -> mfxer =3D> 0000003f
+ mfspr 1 (ffffffff) -> mtxer -> mfxer =3D> e000007f
+ mfspr 8 (00000000) -> mtlr -> mflr =3D> 00000000
+ mfspr 8 (000f423f) -> mtlr -> mflr =3D> 000f423f
+ mfspr 8 (ffffffff) -> mtlr -> mflr =3D> ffffffff
+ mfspr 9 (00000000) -> mtctr -> mfctr =3D> 00000000
+ mfspr 9 (000f423f) -> mtctr -> mfctr =3D> 000f423f
+ mfspr 9 (ffffffff) -> mtctr -> mfctr =3D> ffffffff
=20
=20
PPC logical insns with special forms with flags update:
- rlwimi. 00000000, 0, 0, 0 => 20000000 (40000000 00000000)
- rlwimi. 00000000, 0, 0, 31 => 00000000 (20000000 00000000)
- rlwimi. 00000000, 0, 31, 0 => 00000000 (20000000 00000000)
- rlwimi. 00000000, 0, 31, 31 => 00000000 (20000000 00000000)
- rlwimi. 00000000, 31, 0, 0 => 00000000 (20000000 00000000)
- rlwimi. 00000000, 31, 0, 31 => 00000000 (20000000 00000000)
- rlwimi. 00000000, 31, 31, 0 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 0, 0, 0 => 7fffffff (40000000 00000000)
+ rlwimi. 00000000, 0, 0, 31 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 0, 31, 0 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 0, 31, 31 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 31, 0, 0 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 31, 0, 31 => 00000000 (20000000 00000000)
+ rlwimi. 00000000, 31, 31, 0 => 00000000 (20000000 00000000)
rlwimi. 00000000, 31, 31, 31 => 00000000 (20000000 00000000)
- rlwimi. 000f423f, 0, 0, 0 => 00000000 (20000000 00000000)
- rlwimi. 000f423f, 0, 0, 31 => 000f423f (40000000 00000000)
- rlwimi. 000f423f, 0, 31, 0 => 000f423f (40000000 00000000)
- rlwimi. 000f423f, 0, 31, 31 => 000f423f (40000000 00000000)
- rlwimi. 000f423f, 31, 0, 0 => 800f423f (80000000 00000000)
- rlwimi. 000f423f, 31, 0, 31 => 8007a11f (80000000 00000000)
- rlwimi. 000f423f, 31, 31, 0 => 8007a11f (80000000 00000000)
+ rlwimi. 000f423f, 0, 0, 0 => 00000000 (20000000 00000000)
+ rlwimi. 000f423f, 0, 0, 31 => 000f423f (40000000 00000000)
+ rlwimi. 000f423f, 0, 31, 0 => 000f423f (40000000 00000000)
+ rlwimi. 000f423f, 0, 31, 31 => 000f423f (40000000 00000000)
+ rlwimi. 000f423f, 31, 0, 0 => 800f423f (80000000 00000000)
+ rlwimi. 000f423f, 31, 0, 31 => 8007a11f (80000000 00000000)
+ rlwimi. 000f423f, 31, 31, 0 => 8007a11f (80000000 00000000)
rlwimi. 000f423f, 31, 31, 31 => 8007a11f (80000000 00000000)
- rlwimi. ffffffff, 0, 0, 0 => 8007a11f (80000000 00000000)
- rlwimi. ffffffff, 0, 0, 31 => ffffffff (80000000 00000000)
- rlwimi. ffffffff, 0, 31, 0 => ffffffff (80000000 00000000)
- rlwimi. ffffffff, 0, 31, 31 => ffffffff (80000000 00000000)
- rlwimi. ffffffff, 31, 0, 0 => ffffffff (80000000 00000000)
- rlwimi. ffffffff, 31, 0, 31 => ffffffff (80000000 00000000)
- rlwimi. ffffffff, 31, 31, 0 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 0, 0, 0 => 8007a11f (80000000 00000000)
+ rlwimi. ffffffff, 0, 0, 31 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 0, 31, 0 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 0, 31, 31 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 31, 0, 0 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 31, 0, 31 => ffffffff (80000000 00000000)
+ rlwimi. ffffffff, 31, 31, 0 => ffffffff (80000000 00000000)
rlwimi. ffffffff, 31, 31, 31 => ffffffff (80000000 00000000)

- rlwinm. 00000000, 0, 0, 0 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 0, 0, 31 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 0, 31, 0 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 0, 31, 31 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 31, 0, 0 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 31, 0, 31 => 00000000 (20000000 00000000)
- rlwinm. 00000000, 31, 31, 0 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 0, 0, 0 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 0, 0, 31 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 0, 31, 0 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 0, 31, 31 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 31, 0, 0 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 31, 0, 31 => 00000000 (20000000 00000000)
+ rlwinm. 00000000, 31, 31, 0 => 00000000 (20000000 00000000)
rlwinm. 00000000, 31, 31, 31 => 00000000 (20000000 00000000)
- rlwinm. 000f423f, 0, 0, 0 => 00000000 (20000000 00000000)
- rlwinm. 000f423f, 0, 0, 31 => 000f423f (40000000 00000000)
- rlwinm. 000f423f, 0, 31, 0 => 00000001 (40000000 00000000)
- rlwinm. 000f423f, 0, 31, 31 => 00000001 (40000000 00000000)
- rlwinm. 000f423f, 31, 0, 0 => 80000000 (80000000 00000000)
- rlwinm. 000f423f, 31, 0, 31 => 8007a11f (80000000 00000000)
- rlwinm. 000f423f, 31, 31, 0 => 80000001 (80000000 00000000)
+ rlwinm. 000f423f, 0, 0, 0 => 00000000 (20000000 00000000)
+ rlwinm. 000f423f, 0, 0, 31 => 000f423f (40000000 00000000)
+ rlwinm. 000f423f, 0, 31, 0 => 00000001 (40000000 00000000)
+ rlwinm. 000f423f, 0, 31, 31 => 00000001 (40000000 00000000)
+ rlwinm. 000f423f, 31, 0, 0 => 80000000 (80000000 00000000)
+ rlwinm. 000f423f, 31, 0, 31 => 8007a11f (80000000 00000000)
+ rlwinm. 000f423f, 31, 31, 0 => 80000001 (80000000 00000000)
rlwinm. 000f423f, 31, 31, 31 => 00000001 (40000000 00000000)
- rlwinm. ffffffff, 0, 0, 0 => 80000000 (80000000 00000000)
- rlwinm. ffffffff, 0, 0, 31 => ffffffff (80000000 00000000)
- rlwinm. ffffffff, 0, 31, 0 => 80000001 (80000000 00000000)
- rlwinm. ffffffff, 0, 31, 31 => 00000001 (40000000 00000000)
- rlwinm. ffffffff, 31, 0, 0 => 80000000 (80000000 00000000)
- rlwinm. ffffffff, 31, 0, 31 => ffffffff (80000000 00000000)
- rlwinm. ffffffff, 31, 31, 0 => 80000001 (80000000 00000000)
+ rlwinm. ffffffff, 0, 0, 0 => 80000000 (80000000 00000000)
+ rlwinm. ffffffff, 0, 0, 31 => ffffffff (80000000 00000000)
+ rlwinm. ffffffff, 0, 31, 0 => 80000001 (80000000 00000000)
+ rlwinm. ffffffff, 0, 31, 31 => 00000001 (40000000 00000000)
+ rlwinm. ffffffff, 31, 0, 0 => 80000000 (80000000 00000000)
+ rlwinm. ffffffff, 31, 0, 31 => ffffffff (80000000 00000000)
+ rlwinm. ffffffff, 31, 31, 0 => 80000001 (80000000 00000000)
rlwinm. ffffffff, 31, 31, 31 => 00000001 (40000000 00000000)

- rlwnm. 00000000, 00000000, 0, 0 => 00000000 (20000000 00000000)
- rlwnm. 00000000, 00000000, 0, 31 => 00000000 (20000000 00000000)
- rlwnm. 00000000, 00000000, 31, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 00000000, 0, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 00000000, 0, 31 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 00000000, 31, 0 => 00000000 (20000000 00000000)
rlwnm. 00000000, 00000000, 31, 31 => 00000000 (20000000 00000000)
- rlwnm. 00000000, 000f423f, 0, 0 => 00000000 (20000000 00000000)
- rlwnm. 00000000, 000f423f, 0, 31 => 00000000 (20000000 00000000)
- rlwnm. 00000000, 000f423f, 31, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 000f423f, 0, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 000f423f, 0, 31 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, 000f423f, 31, 0 => 00000000 (20000000 00000000)
rlwnm. 00000000, 000f423f, 31, 31 => 00000000 (20000000 00000000)
- rlwnm. 00000000, ffffffff, 0, 0 => 00000000 (20000000 00000000)
- rlwnm. 00000000, ffffffff, 0, 31 => 00000000 (20000000 00000000)
- rlwnm. 00000000, ffffffff, 31, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, ffffffff, 0, 0 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, ffffffff, 0, 31 => 00000000 (20000000 00000000)
+ rlwnm. 00000000, ffffffff, 31, 0 => 00000000 (20000000 00000000)
rlwnm. 00000000, ffffffff, 31, 31 => 00000000 (20000000 00000000)
- rlwnm. 000f423f, 00000000, 0, 0 => 00000000 (20000000 00000000)
- rlwnm. 000f423f, 00000000, 0, 31 => 000f423f (40000000 00000000)
- rlwnm. 000f423f, 00000000, 31, 0 => 00000001 (40000000 00000000)
+ rlwnm. 000f423f, 00000000, 0, 0 => 00000000 (20000000 00000000)
+ rlwnm. 000f423f, 00000000, 0, 31 => 000f423f (40000000 00000000)
+ rlwnm. 000f423f, 00000000, 31, 0 => 00000001 (40000000 00000000)
rlwnm. 000f423f, 00000000, 31, 31 => 00000001 (40000000 00000000)
- rlwnm. 000f423f, 000f423f, 0, 0 => 80000000 (80000000 00000000)
- rlwnm. 000f423f, 000f423f, 0, 31 => 8007a11f (80000000 00000000)
- rlwnm. 000f423f, 000f423f, 31, 0 => 80000001 (80000000 00000000)
+ rlwnm. 000f423f, 000f423f, 0, 0 => 80000000 (80000000 00000000)
+ rlwnm. 000f423f, 000f423f, 0, 31 => 8007a11f (80000000 00000000)
+ rlwnm. 000f423f, 000f423f, 31, 0 => 80000001 (80000000 00000000)
rlwnm. 000f423f, 000f423f, 31, 31 => 00000001 (40000000 00000000)
- rlwnm. 000f423f, ffffffff, 0, 0 => 80000000 (80000000 00000000)
- rlwnm. 000f423f, ffffffff, 0, 31 => 8007a11f (80000000 00000000)
- rlwnm. 000f423f, ffffffff, 31, 0 => 80000001 (80000000 00000000)
+ rlwnm. 000f423f, ffffffff, 0, 0 => 80000000 (80000000 00000000)
+ rlwnm. 000f423f, ffffffff, 0, 31 => 8007a11f (80000000 00000000)
+ rlwnm. 000f423f, ffffffff, 31, 0 => 80000001 (80000000 00000000)
rlwnm. 000f423f, ffffffff, 31, 31 => 00000001 (40000000 00000000)
- rlwnm. ffffffff, 00000000, 0, 0 => 80000000 (80000000 00000000)
- rlwnm. ffffffff, 00000000, 0, 31 => ffffffff (80000000 00000000)
- rlwnm. ffffffff, 00000000, 31, 0 => 80000001 (80000000 00000000)
+ rlwnm. ffffffff, 00000000, 0, 0 => 80000000 (80000000 00000000)
+ rlwnm. ffffffff, 00000000, 0, 31 => ffffffff (80000000 00000000)
+ rlwnm. ffffffff, 00000000, 31, 0 => 80000001 (80000000 00000000)
rlwnm. ffffffff, 00000000, 31, 31 => 00000001 (40000000 00000000)
- ...
[truncated message content]
From: <sv...@va...> - 2005-12-24 12:39:50
Author: cerion
Date: 2005-12-24 12:39:47 +0000 (Sat, 24 Dec 2005)
New Revision: 1510
Log:
Put mode64 in ISelEnv, removing global variable.
Modified:
trunk/priv/host-ppc/isel.c
Modified: trunk/priv/host-ppc/isel.c
===================================================================
--- trunk/priv/host-ppc/isel.c 2005-12-24 12:32:10 UTC (rev 1509)
+++ trunk/priv/host-ppc/isel.c 2005-12-24 12:39:47 UTC (rev 1510)
@@ -54,13 +54,10 @@
#include "host-generic/h_generic_regs.h"
#include "host-ppc/hdefs.h"

-/* Is our guest binary 32 or 64bit? Set at each call to
- iselBB_PPC below. */
-static Bool mode64 = False;
+/* GPR register class for ppc32/64 */
+#define HRcGPR(__mode64) (__mode64 ? HRcInt64 : HRcInt32)

-#define HRcIntWRDSZ (mode64 ? HRcInt64 : HRcInt32)

-
/*---------------------------------------------------------*/
/*--- Register Usage Conventions ---*/
/*---------------------------------------------------------*/
@@ -192,8 +189,11 @@
- The host subarchitecture we are selecting insns for.
This is set at the start and does not change.

- Note, this is all host-independent. (JRS 20050201: well, kinda
- ... not completely. Compare with ISelEnv for amd64.)
+ - A Bool to tell us if the host is 32 or 64bit.
+ This is set at the start and does not change.
+
+ Note, this is mostly host-independent.
+ (JRS 20050201: well, kinda... Compare with ISelEnv for amd64.)
*/

typedef
@@ -211,6 +211,8 @@
Int vreg_ctr;

VexSubArch subarch;
+
+ Bool mode64;
}
ISelEnv;

@@ -225,7 +227,7 @@
static void lookupIRTemp64 ( HReg* vrHI, HReg* vrLO,
ISelEnv* env, IRTemp tmp )
{
- vassert(!mode64);
+ vassert(!env->mode64);
vassert(tmp >= 0);
vassert(tmp < env->n_vregmap);
vassert(env->vregmapHI[tmp] != INVALID_HREG);
@@ -236,7 +238,7 @@
static void lookupIRTemp128 ( HReg* vrHI, HReg* vrLO,
ISelEnv* env, IRTemp tmp )
{
- vassert(mode64);
+ vassert(env->mode64);
vassert(tmp >= 0);
vassert(tmp < env->n_vregmap);
vassert(env->vregmapHI[tmp] != INVALID_HREG);
@@ -248,14 +250,15 @@
{
addHInstr(env->code, instr);
if (vex_traceflags & VEX_TRACE_VCODE) {
- ppPPCInstr(instr, mode64);
+ ppPPCInstr(instr, env->mode64);
vex_printf("\n");
}
}

static HReg newVRegI ( ISelEnv* env )
{
- HReg reg = mkHReg(env->vreg_ctr, HRcIntWRDSZ, True/*virtual reg*/);
+ HReg reg = mkHReg(env->vreg_ctr, HRcGPR(env->mode64),
+ True/*virtual reg*/);
env->vreg_ctr++;
return reg;
}
@@ -361,8 +364,9 @@

static PPCInstr* mk_iMOVds_RR ( HReg r_dst, HReg r_src )
{
- vassert(hregClass(r_dst) == HRcIntWRDSZ);
- vassert(hregClass(r_src) == HRcIntWRDSZ);
+ vassert(hregClass(r_dst) == hregClass(r_src));
+ vassert(hregClass(r_src) == HRcInt32 ||
+ hregClass(r_src) == HRcInt64);
return PPCInstr_Alu(Palu_OR, r_dst, r_src, PPCRH_Reg(r_src));
}

@@ -379,7 +383,7 @@

static void add_to_sp ( ISelEnv* env, UInt n )
{
- HReg sp = StackFramePtr(mode64);
+ HReg sp = StackFramePtr(env->mode64);
vassert(n < 256 && (n%16) == 0);
addInstr(env, PPCInstr_Alu( Palu_ADD, sp, sp,
PPCRH_Imm(True,toUShort(n)) ));
@@ -387,7 +391,7 @@

static void sub_from_sp ( ISelEnv* env, UInt n )
{
- HReg sp = StackFramePtr(mode64);
+ HReg sp = StackFramePtr(env->mode64);
vassert(n < 256 && (n%16) == 0);
addInstr(env, PPCInstr_Alu( Palu_SUB, sp, sp,
PPCRH_Imm(True,toUShort(n)) ));
@@ -403,12 +407,13 @@
{
HReg r = newVRegI(env);
HReg align16 = newVRegI(env);
- addInstr(env, mk_iMOVds_RR(r, StackFramePtr(mode64)));
+ addInstr(env, mk_iMOVds_RR(r, StackFramePtr(env->mode64)));
// add 16
addInstr(env, PPCInstr_Alu( Palu_ADD, r, r,
PPCRH_Imm(True,toUShort(16)) ));
// mask to quadword
- addInstr(env, PPCInstr_LI(align16, 0xFFFFFFFFFFFFFFF0ULL, mode64));
+ addInstr(env,
+ PPCInstr_LI(align16, 0xFFFFFFFFFFFFFFF0ULL, env->mode64));
addInstr(env, PPCInstr_Alu(Palu_AND, r,r, PPCRH_Reg(align16)));
return r;
}
@@ -422,17 +427,17 @@
HReg fr_dst = newVRegF(env);
PPCAMode *am_addr0, *am_addr1;

- vassert(!mode64);
+ vassert(!env->mode64);
vassert(hregClass(r_srcHi) == HRcInt32);
vassert(hregClass(r_srcLo) == HRcInt32);

sub_from_sp( env, 16 ); // Move SP down 16 bytes
- am_addr0 = PPCAMode_IR( 0, StackFramePtr(mode64) );
- am_addr1 = PPCAMode_IR( 4, StackFramePtr(mode64) );
+ am_addr0 = PPCAMode_IR( 0, StackFramePtr(env->mode64) );
+ am_addr1 = PPCAMode_IR( 4, StackFramePtr(env->mode64) );

// store hi,lo as Ity_I32's
- addInstr(env, PPCInstr_Store( 4, am_addr0, r_srcHi, mode64 ));
- addInstr(env, PPCInstr_Store( 4, am_addr1, r_srcLo, mode64 ));
+ addInstr(env, PPCInstr_Store( 4, am_addr0, r_srcHi, env->mode64 ));
+ addInstr(env, PPCInstr_Store( 4, am_addr1, r_srcLo, env->mode64 ));

// load as float
addInstr(env, PPCInstr_FpLdSt(True/*load*/, 8, fr_dst, am_addr0));
@@ -447,14 +452,14 @@
HReg fr_dst = newVRegF(env);
PPCAMode *am_addr0;

- vassert(mode64);
+ vassert(env->mode64);
vassert(hregClass(r_src) == HRcInt64);

sub_from_sp( env, 16 ); // Move SP down 16 bytes
- am_addr0 = PPCAMode_IR( 0, StackFramePtr(mode64) );
+ am_addr0 = PPCAMode_IR( 0, StackFramePtr(env->mode64) );

// store as Ity_I64
- addInstr(env, PPCInstr_Store( 8, am_addr0, r_src, mode64 ));
+ addInstr(env, PPCInstr_Store( 8, am_addr0, r_src, env->mode64 ));

// load as float
addInstr(env, PPCInstr_FpLdSt(True/*load*/, 8, fr_dst, am_addr0));
@@ -512,6 +517,7 @@
Int n_args, i, argreg;
UInt argiregs;
ULong target;
+ Bool mode64 = env->mode64;

/* Marshal args for a call and do the call.

@@ -741,8 +747,8 @@
- so we can set the whole register at once (faster)
note: upper 32 bits ignored by FpLdFPSCR
*/
- addInstr(env, PPCInstr_LI(r_src, 0x0, mode64));
- if (mode64) {
+ addInstr(env, PPCInstr_LI(r_src, 0x0, env->mode64));
+ if (env->mode64) {
fr_src = mk_LoadR64toFPR( env, r_src ); // 1*I64 -> F64
} else {
fr_src = mk_LoadRR32toFPR( env, r_src, r_src ); // 2*I32 -> F64
@@ -764,7 +770,7 @@
HReg r_rmPPC = newVRegI(env);
HReg r_tmp = newVRegI(env);

- vassert(hregClass(r_rmIR) == HRcIntWRDSZ);
+ vassert(hregClass(r_rmIR) == HRcGPR(env->mode64));

// AND r_rmIR,3 -- shouldn't be needed; paranoia
addInstr(env, PPCInstr_Alu( Palu_AND, r_rmIR, r_rmIR,
@@ -802,7 +808,7 @@
// Resolve rounding mode and convert to PPC representation
r_src = roundModeIRtoPPC( env, iselIntExpr_R(env, mode) );
// gpr -> fpr
- if (mode64) {
+ if (env->mode64) {
fr_src = mk_LoadR64toFPR( env, r_src ); // 1*I64 -> F64
} else {
fr_src = mk_LoadRR32toFPR( env, r_src, r_src ); // 2*I32 -> F64
@@ -900,7 +906,7 @@

/* no luck; use the Slow way. */
r_src = newVRegI(env);
- addInstr(env, PPCInstr_LI(r_src, (Long)simm32, mode64));
+ addInstr(env, PPCInstr_LI(r_src, (Long)simm32, env->mode64));
}
else {
r_src = ri->Pri.Reg;
@@ -921,7 +927,7 @@
am_off12 = PPCAMode_IR( 12, r_aligned16 );

/* Store r_src in low word of 16-aligned mem */
- addInstr(env, PPCInstr_Store( 4, am_off12, r_src, mode64 ));
+ addInstr(env, PPCInstr_Store( 4, am_off12, r_src, env->mode64 ));

/* Load src to vector[low lane] */
addInstr(env, PPCInstr_AvLdSt( True/*ld*/, 4, v_src, am_off12 ) );
@@ -987,7 +993,7 @@
vex_printf("\n"); ppIRExpr(e); vex_printf("\n");
# endif

- vassert(hregClass(r) == HRcIntWRDSZ);
+ vassert(hregClass(r) == HRcGPR(env->mode64));
vassert(hregIsVirtual(r));
return r;
}
@@ -995,6 +1001,7 @@
/* DO NOT CALL THIS DIRECTLY ! */
static HReg iselIntExpr_R_wrk ( ISelEnv* env, IRExpr* e )
{
+ Bool mode64 = env->mode64;
MatchInfo mi;
DECLARE_PATTERN(p_32to1_then_1Uto8);

@@ -1790,17 +1797,18 @@
return toBool(u == (UInt)i);
}

-static Bool sane_AMode ( PPCAMode* am )
+static Bool sane_AMode ( ISelEnv* env, PPCAMode* am )
{
+ Bool mode64 = env->mode64;
switch (am->tag) {
case Pam_IR:
- return toBool( hregClass(am->Pam.IR.base) == HRcIntWRDSZ &&
+ return toBool( hregClass(am->Pam.IR.base) == HRcGPR(mode64) &&
hregIsVirtual(am->Pam.IR.base) &&
fits16bits(am->Pam.IR.index) );
case Pam_RR:
- return toBool( hregClass(am->Pam.RR.base) == HRcIntWRDSZ &&
- hregIsVirtual(am->Pam.IR.base) &&
- hregClass(am->Pam.RR.index) == HRcIntWRDSZ &&
+ return toBool( hregClass(am->Pam.RR.base) == HRcGPR(mode64) &&
+ hregIsVirtual(am->Pam.RR.base) &&
+ hregClass(am->Pam.RR.index) == HRcGPR(mode64) &&
hregIsVirtual(am->Pam.IR.index) );
default:
vpanic("sane_AMode: unknown ppc amode tag");
@@ -1810,7 +1818,7 @@
static PPCAMode* iselIntExpr_AMode ( ISelEnv* env, IRExpr* e )
{
PPCAMode* am = iselIntExpr_AMode_wrk(env, e);
- vassert(sane_AMode(am));
+ vassert(sane_AMode(env, am));
return am;
}

@@ -1818,7 +1826,7 @@
static PPCAMode* iselIntExpr_AMode_wrk ( ISelEnv* env, IRExpr* e )
{
IRType ty = typeOfIRExpr(env->type_env,e);
- vassert(ty == (mode64 ? Ity_I64 : Ity_I32));
+ vassert(ty == (env->mode64 ? Ity_I64 : Ity_I32));

/* Add32(expr,i), where i == sign-extend of (i & 0xFFFF) */
if (e->tag == Iex_Binop
@@ -1866,7 +1874,7 @@
vassert(ri->Prh.Imm.imm16 != 0x8000);
return ri;
case Prh_Reg:
- vassert(hregClass(ri->Prh.Reg.reg) == HRcIntWRDSZ);
+ vassert(hregClass(ri->Prh.Reg.reg) == HRcGPR(env->mode64));
vassert(hregIsVirtual(ri->Prh.Reg.reg));
return ri;
default:
@@ -1881,7 +1889,7 @@
Long l;
IRType ty = typeOfIRExpr(env->type_env,e);
vassert(ty == Ity_I8 || ty == Ity_I16 ||
- ty == Ity_I32 || ((ty == Ity_I64) && mode64));
+ ty == Ity_I32 || ((ty == Ity_I64) && env->mode64));

/* special case: immediate */
if (e->tag == Iex_Const) {
@@ -1889,7 +1897,7 @@
/* What value are we aiming to generate? */
switch (con->tag) {
/* Note: Not sign-extending - we carry 'syned' around */
- case Ico_U64: vassert(mode64);
+ case Ico_U64: vassert(env->mode64);
u = con->Ico.U64; break;
case Ico_U32: u = 0xFFFFFFFF & con->Ico.U32; break;
case Ico_U16: u = 0x0000FFFF & con->Ico.U16; break;
@@ -1928,7 +1936,7 @@
case Pri_Imm:
return ri;
case Pri_Reg:
- vassert(hregClass(ri->Pri.Reg) == HRcIntWRDSZ);
+ vassert(hregClass(ri->Pri.Reg) == HRcGPR(env->mode64));
vassert(hregIsVirtual(ri->Pri.Reg));
return ri;
default:
@@ -1942,13 +1950,13 @@
Long l;
IRType ty = typeOfIRExpr(env->type_env,e);
vassert(ty == Ity_I8 || ty == Ity_I16 ||
- ty == Ity_I32 || ((ty == Ity_I64) && mode64));
+ ty == Ity_I32 || ((ty == Ity_I64) && env->mode64));

/* special case: immediate */
if (e->tag == Iex_Const) {
IRConst* con = e->Iex.Const.con;
switch (con->tag) {
- case Ico_U64: vassert(mode64);
+ case Ico_U64: vassert(env->mode64);
l = (Long) con->Ico.U64; break;
case Ico_U32: l = (Long)(Int) con->Ico.U32; break;
case Ico_U16: l = (Long)(Int)(Short)con->Ico.U16; break;
@@ -1982,7 +1990,7 @@
vassert(!ri->Prh.Imm.syned);
return ri;
case Prh_Reg:
- vassert(hregClass(ri->Prh.Reg.reg) == HRcIntWRDSZ);
+ vassert(hregClass(ri->Prh.Reg.reg) == HRcGPR(env->mode64));
vassert(hregIsVirtual(ri->Prh.Reg.reg));
return ri;
default:
@@ -2028,7 +2036,7 @@
vassert(!ri->Prh.Imm.syned);
return ri;
case Prh_Reg:
- vassert(hregClass(ri->Prh.Reg.reg) == HRcIntWRDSZ);
+ vassert(hregClass(ri->Prh.Reg.reg) == HRcGPR(env->mode64));
vassert(hregIsVirtual(ri->Prh.Reg.reg));
return ri;
default:
@@ -2085,7 +2093,7 @@
if (e->tag == Iex_Const && e->Iex.Const.con->Ico.U1 == True) {
// Make a compare that will always be true:
HReg r_zero = newVRegI(env);
- addInstr(env, PPCInstr_LI(r_zero, 0, mode64));
+ addInstr(env, PPCInstr_LI(r_zero, 0, env->mode64));
addInstr(env, PPCInstr_Cmp(False/*unsigned*/, True/*32bit cmp*/,
7/*cr*/, r_zero, PPCRH_Reg(r_zero)));
return mk_PPCCondCode( Pct_TRUE, Pcf_7EQ );
@@ -2241,7 +2249,7 @@
e->Iex.Binop.op == Iop_CmpLE64S);
HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
PPCRH* ri2 = iselIntExpr_RH(env, syned, e->Iex.Binop.arg2);
- vassert(mode64);
+ vassert(env->mode64);
addInstr(env, PPCInstr_Cmp(syned, False/*64bit cmp*/,
7/*cr*/, r1, ri2));

@@ -2307,7 +2315,7 @@
/* CmpNEZ64 */
if (e->tag == Iex_Unop
&& e->Iex.Unop.op == Iop_CmpNEZ64) {
- if (!mode64) {
+ if (!env->mode64) {
HReg hi, lo;
HReg tmp = newVRegI(env);
iselInt64Expr( &hi, &lo, env, e->Iex.Unop.arg );
@@ -2355,14 +2363,14 @@
static void iselInt128Expr ( HReg* rHi, HReg* rLo,
ISelEnv* env, IRExpr* e )
{
- vassert(mode64);
+ vassert(env->mode64);
iselInt128Expr_wrk(rHi, rLo, env, e);
# if 0
vex_printf("\n"); ppIRExpr(e); vex_printf("\n");
# endif
- vassert(hregClass(*rHi) == HRcIntWRDSZ);
+ vassert(hregClass(*rHi) == HRcGPR(env->mode64));
vassert(hregIsVirtual(*rHi));
- vassert(hregClass(*rLo) == HRcIntWRDSZ);
+ vassert(hregClass(*rLo) == HRcGPR(env->mode64));
vassert(hregIsVirtual(*rLo));
}

@@ -2439,7 +2447,7 @@
static void iselInt64Expr ( HReg* rHi, HReg* rLo,
ISelEnv* env, IRExpr* e )
{
- vassert(!mode64);
+ vassert(!env->mode64);
iselInt64Expr_wrk(rHi, rLo, env, e);
# if 0
vex_printf("\n"); ppIRExpr(e); vex_printf("\n");
@@ -2454,6 +2462,7 @@
static void iselInt64Expr_wrk ( HReg* rHi, HReg* rLo,
ISelEnv* env, IRExpr* e )
{
+ Bool mode64 = env->mode64;
// HWord fn = 0; /* helper fn for most SIMD64 stuff */
vassert(e);
vassert(typeOfIRExpr(env->type_env,e) == Ity_I64);
@@ -3168,7 +3177,7 @@
if (e->tag == Iex_Get) {
HReg r_dst = newVRegF(env);
PPCAMode* am_addr = PPCAMode_IR( e->Iex.Get.offset,
- GuestStatePtr(mode64) );
+ GuestStatePtr(env->mode64) );
addInstr(env, PPCInstr_FpLdSt( True/*load*/, 4, r_dst, am_addr ));
return r_dst;
}
@@ -3235,6 +3244,7 @@
/* DO NOT CALL THIS DIRECTLY */
static HReg iselDblExpr_wrk ( ISelEnv* env, IRExpr* e )
{
+ Bool mode64 = env->mode64;
IRType ty = typeOfIRExpr(env->type_env,e);
vassert(e);
vassert(ty == Ity_F64);
@@ -3482,6 +3492,7 @@
/* DO NOT CALL THIS DIRECTLY */
static HReg iselVecExpr_wrk ( ISelEnv* env, IRExpr* e )
{
+ Bool mode64 = env->mode64;
//.. Bool arg1isEReg = False;
PPCAvOp op = Pav_INVALID;
IRType ty = typeOfIRExpr(env->type_env,e);
@@ -4089,6 +4100,7 @@

static void iselStmt ( ISelEnv* env, IRStmt* stmt )
{
+ Bool mode64 = env->mode64;
if (vex_traceflags & VEX_TRACE_VCODE) {
vex_printf("\n -- ");
ppIRStmt(stmt);
@@ -4393,6 +4405,7 @@
HReg hreg, hregHI;
ISelEnv* env;
VexSubArch subarch_host = archinfo_host->subarch;
+ Bool mode64;

/* Figure out whether we're being ppc32 or ppc64 today. */
switch (subarch_host) {
@@ -4413,6 +4426,9 @@
env = LibVEX_Alloc(sizeof(ISelEnv));
env->vreg_ctr = 0;

+ /* Are we being ppc32 or ppc64? */
+ env->mode64 = mode64;
+
/* Set up output code array. */
env->code = newHInstrArray();

From: <sv...@va...> - 2005-12-24 12:32:17
Author: cerion
Date: 2005-12-24 12:32:10 +0000 (Sat, 24 Dec 2005)
New Revision: 1509
Log:
Fix AltiVec load/store on ppc64 - was only considering lo32 bits of address.
Modified:
trunk/priv/guest-ppc/toIR.c
Modified: trunk/priv/guest-ppc/toIR.c
===================================================================
--- trunk/priv/guest-ppc/toIR.c 2005-12-23 12:46:16 UTC (rev 1508)
+++ trunk/priv/guest-ppc/toIR.c 2005-12-24 12:32:10 UTC (rev 1509)
@@ -1145,8 +1145,7 @@
}

vassert(typeOfIRExpr(irbb->tyenv,addr) == ty);
- return binop( mkSzOp(ty, Iop_And8),
- addr, mkSzImm(ty, mask) );
+ return binop( mkSzOp(ty, Iop_And8), addr, mkSzImm(ty, mask) );
}


@@ -6306,27 +6305,28 @@
UInt opc2 = ifieldOPClo10(theInstr);
UChar b0 = ifieldBIT0(theInstr);

- IRType ty = mode64 ? Ity_I64 : Ity_I32;
- IRTemp EA_lo32 = newTemp(Ity_I32);
- IRTemp addr_align16 = newTemp(ty);
+ IRType ty = mode64 ? Ity_I64 : Ity_I32;
+ IRTemp EA = newTemp(ty);
+ IRTemp EA_align16 = newTemp(ty);

if (opc1 != 0x1F || b0 != 0) {
vex_printf("dis_av_load(ppc)(instr)\n");
return False;
}

- assign( EA_lo32, mkSzNarrow32(ty, ea_rAor0_idxd(rA_addr, rB_addr)) );
- assign( addr_align16, addr_align( mkexpr(EA_lo32), 16 ) );
+ assign( EA, ea_rAor0_idxd(rA_addr, rB_addr) );
+ assign( EA_align16, addr_align( mkexpr(EA), 16 ) );

switch (opc2) {

case 0x006: { // lvsl (Load Vector for Shift Left, AV p123)
+ IRDirty* d;
UInt vD_off = vectorGuestRegOffset(vD_addr);
IRExpr** args = mkIRExprVec_3(
mkU32(vD_off),
- binop(Iop_And32, mkexpr(EA_lo32), mkU32(0xF)),
+ binop(Iop_And32, mkSzNarrow32(ty, mkexpr(EA)),
+ mkU32(0xF)),
mkU32(0)/*left*/ );
- IRDirty* d;
if (!mode64) {
d = unsafeIRDirty_0_N ( 0/*regparms*/,
"ppc32g_dirtyhelper_LVS",
@@ -6351,12 +6351,13 @@
break;
}
case 0x026: { // lvsr (Load Vector for Shift Right, AV p125)
+ IRDirty* d;
UInt vD_off = vectorGuestRegOffset(vD_addr);
IRExpr** args = mkIRExprVec_3(
mkU32(vD_off),
- binop(Iop_And32, mkexpr(EA_lo32), mkU32(0xF)),
+ binop(Iop_And32, mkSzNarrow32(ty, mkexpr(EA)),
+ mkU32(0xF)),
mkU32(1)/*right*/ );
- IRDirty* d;
if (!mode64) {
d = unsafeIRDirty_0_N ( 0/*regparms*/,
"ppc32g_dirtyhelper_LVS",
@@ -6385,24 +6386,24 @@
/* loads addressed byte into vector[EA[0:3]
since all other destination bytes are undefined,
can simply load entire vector from 16-aligned EA */
- putVReg( vD_addr, loadBE(Ity_V128, mkexpr(addr_align16)) );
+ putVReg( vD_addr, loadBE(Ity_V128, mkexpr(EA_align16)) );
break;

case 0x027: // lvehx (Load Vector Element Half Word Indexed, AV p121)
DIP("lvehx v%d,r%u,r%u\n", vD_addr, rA_addr, rB_addr);
/* see note for lvebx */
- putVReg( vD_addr, loadBE(Ity_V128, mkexpr(addr_align16)) );
+ putVReg( vD_addr, loadBE(Ity_V128, mkexpr(EA_align16)) );
break;

case 0x047: // lvewx (Load Vector Element Word Indexed, AV p122)
DIP("lvewx v%d,r%u,r%u\n", vD_addr, rA_addr, rB_addr);
/* see note for lvebx */
- putVReg( vD_addr, loadBE(Ity_V128, mkexpr(addr_align16)) );
+ putVReg( vD_addr, loadBE(Ity_V128, mkexpr(EA_align16)) );
break;

case 0x067: // lvx (Load Vector Indexed, AV p127)
DIP("lvx v%d,r%u,r%u\n", vD_addr, rA_addr, rB_addr);
- putVReg( vD_addr, loadBE(Ity_V128, mkexpr(addr_align16)) );
+ putVReg( vD_addr, loadBE(Ity_V128, mkexpr(EA_align16)) );
break;

case 0x167: // lvxl (Load Vector Indexed LRU, AV p128)
From: Cerion Armour-B. <ce...@op...> - 2005-12-24 12:21:58
On Saturday 24 December 2005 08:36, Jeroen N. Witmond wrote:
> > To check the branch out and build it, use:
> >
> > svn co svn://www.valgrind.org/valgrind/branches/COMPVBITS
>
> Just being curious: Shouldn't the hostname be svn.valgrind.org?

Doesn't really matter as both on same ip addr, but 'svn.' is preferred:
1) apache::svn setup (if it ever gets going again - seems like no-one misses it?) would be via http://svn.valgrind.org
2) in case the svn server is moved in future.

Cerion
From: Jeroen N. W. <jn...@xs...> - 2005-12-24 07:36:24
> To check the branch out and build it, use:
>
> svn co svn://www.valgrind.org/valgrind/branches/COMPVBITS

Just being curious: Shouldn't the hostname be svn.valgrind.org?

Jeroen.
From: <js...@ac...> - 2005-12-24 03:59:39
Nightly build on phoenix ( SuSE 10.0 ) started at 2005-12-24 03:30:01 GMT

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 208 tests, 6 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
From: Nicholas N. <nj...@cs...> - 2005-12-24 03:51:17
Hi,

The COMPVBITS branch holds an improved version of Memcheck that runs significantly faster than the trunk (roughly 10--15% faster, and the trunk itself is 10--20% faster than 3.1.0, but the numbers vary greatly across different programs) and uses much less memory (the basic memory overhead is 2 bits per byte rather than 9 bits). It is intended to become the version of Memcheck that will be in release 3.2.0. It will also avoid the need for Addrcheck; if you want Addrcheck-like behaviour (ie. faster but with no definedness errors) use the --undef-value-errors=no flag.

It would be great to have people test this branch out. It works well for me on x86, but I haven't tested it on AMD64 or PPC32. You might even like to switch to this version for your day-to-day use to take advantage of the speed improvements.

To check the branch out and build it, use:

  svn co svn://www.valgrind.org/valgrind/branches/COMPVBITS
  cd COMPVBITS
  sh ./autogen.sh
  ./configure --prefix=<...>
  make

I'm most interested in making sure that it is functionally identical to the current trunk; please let me know if this isn't the case. I'm also interested in seeing how its performance compares against the current trunk and maybe 3.1.0. To do so, run "make check" and then this command:

  perl perf/vg_perf --vg=<dir1> --vg=<dir2> --vg=<dir3> perf

where <dir1>, <dir2> and <dir3> are the directories holding the three different Valgrind versions. Please post the results, and try to avoid line wrapping if you can :)

Below are the results I get on my dual 3GHz P4 (Prescott). The run will take 15 minutes or more depending on machine speed; if you do it please try to minimise other activity on the machine while the tests are running. I'd love to see performance numbers for any other apps you might have, particularly big ones. Thanks.

Nick

Versions are:
- 3.1.X branch (performance-wise equivalent to 3.1.0)
- current trunk
- current COMPVBITS
- current COMPVBITS with --undef-value-errors=no baked in (Addrcheck mode)

'gcc' is a benchmark that's not checked into SVN, so you won't see it if you run the suite yourself.

-- bigcode1 --
bigcode1 vg-3.1.X : 0.2s nl: 7.1s (33.7x, -----) mc:12.8s (61.1x, -----)
bigcode1 trunk1 : 0.2s nl: 5.7s (27.1x, 19.5%) mc: 9.4s (44.7x, 26.9%)
bigcode1 compvbits : 0.2s nl: 5.5s (26.0x, 22.9%) mc: 8.8s (41.9x, 31.5%)
bigcode1 compvbits2: 0.2s nl: 5.4s (25.8x, 23.4%) mc: 6.6s (31.3x, 48.8%)
-- bigcode2 --
bigcode2 vg-3.1.X : 0.2s nl:13.2s (65.8x, -----) mc:25.5s (127.3x, -----)
bigcode2 trunk1 : 0.2s nl:11.8s (59.0x, 10.3%) mc:20.6s (103.2x, 19.0%)
bigcode2 compvbits : 0.2s nl:11.1s (55.6x, 15.6%) mc:19.9s (99.5x, 21.8%)
bigcode2 compvbits2: 0.2s nl:11.1s (55.7x, 15.4%) mc:15.2s (75.8x, 40.5%)
-- bz2 --
bz2 vg-3.1.X : 1.3s nl: 9.3s ( 7.2x, -----) mc:26.3s (20.3x, -----)
bz2 trunk1 : 1.3s nl: 7.3s ( 5.6x, 22.0%) mc:28.8s (22.2x, -9.5%)
bz2 compvbits : 1.3s nl: 7.2s ( 5.5x, 22.7%) mc:21.1s (16.2x, 19.8%)
bz2 compvbits2: 1.3s nl: 7.2s ( 5.5x, 22.9%) mc:16.1s (12.4x, 39.0%)
-- fbench --
fbench vg-3.1.X : 1.1s nl: 4.9s ( 4.4x, -----) mc:13.6s (12.0x, -----)
fbench trunk1 : 1.1s nl: 4.5s ( 3.9x, 9.6%) mc:12.2s (10.8x, 10.6%)
fbench compvbits : 1.1s nl: 4.5s ( 4.0x, 8.3%) mc:11.2s ( 9.9x, 17.7%)
fbench compvbits2: 1.1s nl: 4.5s ( 4.0x, 8.3%) mc: 8.6s ( 7.6x, 37.0%)
-- ffbench --
ffbench vg-3.1.X : 0.8s nl: 4.1s ( 4.9x, -----) mc:11.4s (13.6x, -----)
ffbench trunk1 : 0.8s nl: 4.0s ( 4.8x, 1.7%) mc:10.9s (13.0x, 3.9%)
ffbench compvbits : 0.8s nl: 3.9s ( 4.6x, 6.6%) mc: 8.8s (10.5x, 22.7%)
ffbench compvbits2: 0.8s nl: 3.7s ( 4.4x, 10.7%) mc: 7.3s ( 8.7x, 35.7%)
-- gcc --
gcc vg-3.1.X : 0.3s nl:12.8s (39.8x, -----) mc:34.5s (107.9x, -----)
gcc trunk1 : 0.3s nl:10.9s (34.1x, 14.4%) mc:29.7s (92.9x, 13.9%)
gcc compvbits : 0.3s nl:10.8s (33.7x, 15.5%) mc:29.0s (90.7x, 16.0%)
gcc compvbits2: 0.3s nl:10.8s (33.8x, 15.2%) mc:22.1s (69.0x, 36.1%)
-- heap --
heap vg-3.1.X : 0.4s nl: 3.2s ( 7.8x, -----) mc:27.2s (66.3x, -----)
heap trunk1 : 0.4s nl: 2.2s ( 5.3x, 31.6%) mc:19.0s (46.4x, 30.1%)
heap compvbits : 0.4s nl: 2.3s ( 5.6x, 28.1%) mc:16.6s (40.6x, 38.8%)
heap compvbits2: 0.4s nl: 2.2s ( 5.4x, 30.6%) mc:16.3s (39.7x, 40.2%)
-- sarp --
sarp vg-3.1.X : 0.1s nl: 0.9s (12.4x, -----) mc:11.0s (157.0x, -----)
sarp trunk1 : 0.1s nl: 0.5s ( 6.4x, 48.3%) mc:10.9s (155.7x, 0.8%)
sarp compvbits : 0.1s nl: 0.5s ( 6.4x, 48.3%) mc: 4.1s (58.7x, 62.6%)
sarp compvbits2: 0.1s nl: 0.4s ( 6.3x, 49.4%) mc: 3.8s (54.1x, 65.5%)
-- tinycc --
tinycc vg-3.1.X : 0.8s nl:10.3s (12.2x, -----) mc:43.4s (51.7x, -----)
tinycc trunk1 : 0.8s nl: 7.3s ( 8.7x, 29.1%) mc:41.2s (49.0x, 5.2%)
tinycc compvbits : 0.8s nl: 7.3s ( 8.7x, 29.2%) mc:38.6s (46.0x, 11.1%)
tinycc compvbits2: 0.8s nl: 7.2s ( 8.6x, 29.9%) mc:33.0s (39.3x, 24.0%)
-- Finished tests in perf ----------------------------------------------
|
From: Tom H. <to...@co...> - 2005-12-24 03:43:11
|
Nightly build on dunsmere ( athlon, Fedora Core 4 ) started at 2005-12-24 03:30:05 GMT

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 210 tests, 7 stderr failures, 1 stdout failure =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
From: <js...@ac...> - 2005-12-24 03:40:11
|
Nightly build on g5 ( YDL 4.0, ppc970 ) started at 2005-12-24 04:40:00 CET

Checking out vex source tree ... failed

Last 20 lines of log.verbose follow

Nightly build on g5 ( YDL 4.0, ppc970 ) started at 2005-12-24 04:40:00 CET
svn: Won't delete locally modified directory 'vex/priv'
svn: Left locally modified or unversioned files
|
From: <sv...@va...> - 2005-12-24 03:11:21
|
Author: njn
Date: 2005-12-24 03:10:56 +0000 (Sat, 24 Dec 2005)
New Revision: 5427
Log:
Fix a nasty 64-bit-uncleanness bug in OSet spotted by Julian -- for fast
comparisons it was only considering the bottom 32-bits of the key.
Modified:
trunk/cachegrind/cg_main.c
trunk/coregrind/m_oset.c
trunk/include/pub_tool_oset.h
Modified: trunk/cachegrind/cg_main.c
===================================================================
--- trunk/cachegrind/cg_main.c 2005-12-23 23:46:31 UTC (rev 5426)
+++ trunk/cachegrind/cg_main.c 2005-12-24 03:10:56 UTC (rev 5427)
@@ -92,9 +92,9 @@
};

// First compare file, then fn, then line.
-static Int cmp_CodeLoc_LineCC(void *vloc, void *vcc)
+static Word cmp_CodeLoc_LineCC(void *vloc, void *vcc)
{
- Int res;
+ Word res;
CodeLoc* a = (CodeLoc*)vloc;
CodeLoc* b = &(((LineCC*)vcc)->loc);

@@ -162,7 +162,7 @@
/*--- String table operations ---*/
/*------------------------------------------------------------*/

-static Int stringCmp( void* key, void* elem )
+static Word stringCmp( void* key, void* elem )
{
return VG_(strcmp)(*(Char**)key, *(Char**)elem);
}
Modified: trunk/coregrind/m_oset.c
===================================================================
--- trunk/coregrind/m_oset.c 2005-12-23 23:46:31 UTC (rev 5426)
+++ trunk/coregrind/m_oset.c 2005-12-24 03:10:56 UTC (rev 5427)
@@ -171,13 +171,13 @@
}

// Compare the first word of each element.  Inlining is *crucial*.
-static inline Int fast_cmp(void* k, AvlNode* n)
+static inline Word fast_cmp(void* k, AvlNode* n)
{
- return ( *(Int*)k - *(Int*)elem_of_node(n) );
+ return ( *(Word*)k - *(Word*)elem_of_node(n) );
}

// Compare a key and an element.  Inlining is *crucial*.
-static inline Int slow_cmp(AvlTree* t, void* k, AvlNode* n)
+static inline Word slow_cmp(AvlTree* t, void* k, AvlNode* n)
{
return t->cmp(k, elem_of_node(n));
}
@@ -349,7 +349,7 @@
/*--- Insertion ---*/
/*--------------------------------------------------------------------*/

-static inline Int cmp_key_root(AvlTree* t, AvlNode* n)
+static inline Word cmp_key_root(AvlTree* t, AvlNode* n)
{
return t->cmp
? slow_cmp(t, slow_key_of_node(t, n), t->root)
@@ -360,7 +360,7 @@
// Returns True if the depth of the tree has grown.
static Bool avl_insert(AvlTree* t, AvlNode* n)
{
- Int cmpres = cmp_key_root(t, n);
+ Word cmpres = cmp_key_root(t, n);

if (cmpres < 0) {
// Insert into the left subtree.
@@ -464,7 +464,7 @@
// Find the *node* in t matching k, or NULL if not found.
static AvlNode* avl_lookup(AvlTree* t, void* k)
{
- Int cmpres;
+ Word cmpres;
AvlNode* curr = t->root;

if (t->cmp) {
@@ -481,10 +481,10 @@
// elem_of_node because it saves about 10% on lookup time.  This
// shouldn't be very dangerous because each node will have been
// checked on insertion.
- Int kk = *(Int*)k;
+ Word kk = *(Word*)k;
while (True) {
if (curr == NULL) return NULL;
- cmpres = kk - *(Int*)elem_of_node_no_check(curr);
+ cmpres = kk - *(Word*)elem_of_node_no_check(curr);
if (cmpres < 0) curr = curr->left; else
if (cmpres > 0) curr = curr->right; else
return curr;
@@ -533,7 +533,7 @@
static Bool avl_remove(AvlTree* t, AvlNode* n)
{
Bool ch;
- Int cmpres = cmp_key_root(t, n);
+ Word cmpres = cmp_key_root(t, n);

if (cmpres < 0) {
AvlTree left_subtree;
@@ -616,7 +616,7 @@
// Returns True if the depth of the tree has shrunk.
static Bool avl_removeroot(AvlTree* t)
{
- Int ch;
+ Bool ch;
AvlNode* n;

if (!t->root->left) {
Modified: trunk/include/pub_tool_oset.h
===================================================================
--- trunk/include/pub_tool_oset.h 2005-12-23 23:46:31 UTC (rev 5426)
+++ trunk/include/pub_tool_oset.h 2005-12-24 03:10:56 UTC (rev 5427)
@@ -65,7 +65,7 @@
typedef struct _OSet OSet;
typedef struct _OSetNode OSetNode;

-typedef Int (*OSetCmp_t) ( void* key, void* elem );
+typedef Word (*OSetCmp_t) ( void* key, void* elem );
typedef void* (*OSetAlloc_t) ( SizeT szB );
typedef void (*OSetFree_t) ( void* p );
typedef void (*OSetNodeDestroy_t) ( void* elem );
|