From: Tom H. <th...@cy...> - 2007-08-26 02:15:54
Nightly build on dellow ( x86_64, Fedora 7 ) started at 2007-08-26 03:10:03 BST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 292 tests, 221 stderr failures, 105 stdout failures, 0 posttest failures ==
memcheck/tests/addressable (stdout)
memcheck/tests/addressable (stderr)
memcheck/tests/amd64/bt_everything (stdout)
memcheck/tests/amd64/bt_everything (stderr)
memcheck/tests/amd64/bug132146 (stdout)
memcheck/tests/amd64/bug132146 (stderr)
memcheck/tests/amd64/defcfaexpr (stderr)
memcheck/tests/amd64/fxsave-amd64 (stdout)
memcheck/tests/amd64/fxsave-amd64 (stderr)
memcheck/tests/amd64/insn_basic (stdout)
memcheck/tests/amd64/insn_basic (stderr)
memcheck/tests/amd64/insn_fpu (stdout)
memcheck/tests/amd64/insn_fpu (stderr)
memcheck/tests/amd64/insn_mmx (stdout)
memcheck/tests/amd64/insn_mmx (stderr)
memcheck/tests/amd64/insn_sse (stdout)
memcheck/tests/amd64/insn_sse (stderr)
memcheck/tests/amd64/insn_sse2 (stdout)
memcheck/tests/amd64/insn_sse2 (stderr)
memcheck/tests/amd64/int3-amd64 (stdout)
memcheck/tests/amd64/int3-amd64 (stderr)
memcheck/tests/amd64/more_x87_fp (stdout)
memcheck/tests/amd64/more_x87_fp (stderr)
memcheck/tests/amd64/sse_memory (stdout)
memcheck/tests/amd64/sse_memory (stderr)
memcheck/tests/amd64/xor-undef-amd64 (stdout)
memcheck/tests/amd64/xor-undef-amd64 (stderr)
memcheck/tests/badaddrvalue (stdout)
memcheck/tests/badaddrvalue (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badfree (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/badloop (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/badrw (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/brk2 (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/clientperm (stdout)
memcheck/tests/clientperm (stderr)
memcheck/tests/custom_alloc (stderr)
memcheck/tests/deep_templates (stdout)
memcheck/tests/deep_templates (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/doublefree (stderr)
memcheck/tests/erringfds (stdout)
memcheck/tests/erringfds (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/errs1 (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/exitprog (stderr)
memcheck/tests/fprw (stderr)
memcheck/tests/fwrite (stderr)
memcheck/tests/inits (stderr)
memcheck/tests/inline (stdout)
memcheck/tests/inline (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-pool-0 (stderr)
memcheck/tests/leak-pool-1 (stderr)
memcheck/tests/leak-pool-2 (stderr)
memcheck/tests/leak-pool-3 (stderr)
memcheck/tests/leak-pool-4 (stderr)
memcheck/tests/leak-pool-5 (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/long_namespace_xml (stdout)
memcheck/tests/long_namespace_xml (stderr)
memcheck/tests/malloc1 (stderr)
memcheck/tests/malloc2 (stderr)
memcheck/tests/malloc3 (stdout)
memcheck/tests/malloc3 (stderr)
memcheck/tests/malloc_usable (stderr)
memcheck/tests/manuel1 (stdout)
memcheck/tests/manuel1 (stderr)
memcheck/tests/manuel2 (stdout)
memcheck/tests/manuel2 (stderr)
memcheck/tests/manuel3 (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/memalign2 (stderr)
memcheck/tests/memalign_test (stderr)
memcheck/tests/memcmptest (stdout)
memcheck/tests/memcmptest (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/metadata (stdout)
memcheck/tests/metadata (stderr)
memcheck/tests/mismatches (stderr)
memcheck/tests/mmaptest (stderr)
memcheck/tests/nanoleak (stderr)
memcheck/tests/nanoleak2 (stderr)
memcheck/tests/nanoleak_supp (stderr)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/new_override (stdout)
memcheck/tests/new_override (stderr)
memcheck/tests/null_socket (stderr)
memcheck/tests/oset_test (stdout)
memcheck/tests/oset_test (stderr)
memcheck/tests/overlap (stdout)
memcheck/tests/overlap (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stdout)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pdb-realloc (stderr)
memcheck/tests/pdb-realloc2 (stdout)
memcheck/tests/pdb-realloc2 (stderr)
memcheck/tests/pipe (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/post-syscall (stdout)
memcheck/tests/post-syscall (stderr)
memcheck/tests/realloc1 (stderr)
memcheck/tests/realloc2 (stderr)
memcheck/tests/realloc3 (stderr)
memcheck/tests/sh-mem-random (stdout)
memcheck/tests/sh-mem-random (stderr)
memcheck/tests/sh-mem (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/signal2 (stdout)
memcheck/tests/signal2 (stderr)
memcheck/tests/sigprocmask (stderr)
memcheck/tests/stack_changes (stdout)
memcheck/tests/stack_changes (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/str_tester (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/supp1 (stderr)
memcheck/tests/supp2 (stderr)
memcheck/tests/supp_unknown (stderr)
memcheck/tests/suppfree (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/trivialleak (stderr)
memcheck/tests/vcpu_bz2 (stdout)
memcheck/tests/vcpu_bz2 (stderr)
memcheck/tests/vcpu_fbench (stdout)
memcheck/tests/vcpu_fbench (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/vcpu_fnfns (stderr)
memcheck/tests/with-space (stdout)
memcheck/tests/with-space (stderr)
memcheck/tests/wrap1 (stdout)
memcheck/tests/wrap1 (stderr)
memcheck/tests/wrap2 (stdout)
memcheck/tests/wrap2 (stderr)
memcheck/tests/wrap3 (stdout)
memcheck/tests/wrap3 (stderr)
memcheck/tests/wrap4 (stdout)
memcheck/tests/wrap4 (stderr)
memcheck/tests/wrap5 (stdout)
memcheck/tests/wrap5 (stderr)
memcheck/tests/wrap6 (stdout)
memcheck/tests/wrap6 (stderr)
memcheck/tests/wrap7 (stdout)
memcheck/tests/wrap7 (stderr)
memcheck/tests/wrap8 (stdout)
memcheck/tests/wrap8 (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stdout)
memcheck/tests/xml1 (stderr)
memcheck/tests/zeropage (stdout)
memcheck/tests/zeropage (stderr)
cachegrind/tests/chdir (stderr)
cachegrind/tests/clreq (stderr)
cachegrind/tests/dlclose (stdout)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/wrap5 (stdout)
cachegrind/tests/wrap5 (stderr)
callgrind/tests/clreq (stderr)
callgrind/tests/simwork1 (stdout)
callgrind/tests/simwork1 (stderr)
callgrind/tests/simwork2 (stdout)
callgrind/tests/simwork2 (stderr)
callgrind/tests/simwork3 (stdout)
callgrind/tests/simwork3 (stderr)
callgrind/tests/threads (stderr)
massif/tests/basic_malloc (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
lackey/tests/true (stderr)
none/tests/amd64/bug127521-64 (stdout)
none/tests/amd64/bug127521-64 (stderr)
none/tests/amd64/bug132813-amd64 (stdout)
none/tests/amd64/bug132813-amd64 (stderr)
none/tests/amd64/bug132918 (stdout)
none/tests/amd64/bug132918 (stderr)
none/tests/amd64/clc (stdout)
none/tests/amd64/clc (stderr)
none/tests/amd64/fcmovnu (stdout)
none/tests/amd64/fcmovnu (stderr)
none/tests/amd64/fxtract (stdout)
none/tests/amd64/fxtract (stderr)
none/tests/amd64/insn_basic (stdout)
none/tests/amd64/insn_basic (stderr)
none/tests/amd64/insn_fpu (stdout)
none/tests/amd64/insn_fpu (stderr)
none/tests/amd64/insn_mmx (stdout)
none/tests/amd64/insn_mmx (stderr)
none/tests/amd64/insn_sse (stdout)
none/tests/amd64/insn_sse (stderr)
none/tests/amd64/insn_sse2 (stdout)
none/tests/amd64/insn_sse2 (stderr)
none/tests/amd64/jrcxz (stdout)
none/tests/amd64/jrcxz (stderr)
none/tests/amd64/looper (stdout)
none/tests/amd64/looper (stderr)
none/tests/amd64/nibz_bennee_mmap (stdout)
none/tests/amd64/nibz_bennee_mmap (stderr)
none/tests/amd64/rcl-amd64 (stdout)
none/tests/amd64/rcl-amd64 (stderr)
none/tests/amd64/shrld (stdout)
none/tests/amd64/shrld (stderr)
none/tests/amd64/slahf-amd64 (stdout)
none/tests/amd64/slahf-amd64 (stderr)
none/tests/amd64/smc1 (stdout)
none/tests/amd64/smc1 (stderr)
none/tests/ansi (stderr)
none/tests/args (stdout)
none/tests/args (stderr)
none/tests/async-sigs (stdout)
none/tests/async-sigs (stderr)
none/tests/bitfield1 (stderr)
none/tests/blockfault (stderr)
none/tests/bug129866 (stdout)
none/tests/bug129866 (stderr)
none/tests/closeall (stderr)
none/tests/coolo_sigaction (stdout)
none/tests/coolo_sigaction (stderr)
none/tests/coolo_strlen (stderr)
none/tests/discard (stdout)
none/tests/discard (stderr)
none/tests/exec-sigmask (stderr)
none/tests/execve (stderr)
none/tests/faultstatus (stderr)
none/tests/fcntl_setown (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/fdleak_creat (stderr)
none/tests/fdleak_dup (stderr)
none/tests/fdleak_dup2 (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/fdleak_ipv4 (stdout)
none/tests/fdleak_ipv4 (stderr)
none/tests/fdleak_open (stderr)
none/tests/fdleak_pipe (stderr)
none/tests/fdleak_socketpair (stderr)
none/tests/floored (stdout)
none/tests/floored (stderr)
none/tests/fork (stdout)
none/tests/fork (stderr)
none/tests/fucomip (stderr)
none/tests/gxx304 (stderr)
none/tests/manythreads (stdout)
none/tests/manythreads (stderr)
none/tests/map_unaligned (stderr)
none/tests/map_unmap (stdout)
none/tests/map_unmap (stderr)
none/tests/mq (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/mremap2 (stderr)
none/tests/munmap_exe (stderr)
none/tests/nestedfns (stdout)
none/tests/nestedfns (stderr)
none/tests/pending (stdout)
none/tests/pending (stderr)
none/tests/pth_atfork1 (stdout)
none/tests/pth_atfork1 (stderr)
none/tests/pth_blockedsig (stdout)
none/tests/pth_blockedsig (stderr)
none/tests/pth_cancel1 (stdout)
none/tests/pth_cancel1 (stderr)
none/tests/pth_cancel2 (stderr)
none/tests/pth_cvsimple (stdout)
none/tests/pth_cvsimple (stderr)
none/tests/pth_detached (stdout)
none/tests/pth_detached (stderr)
none/tests/pth_empty (stderr)
none/tests/pth_exit (stderr)
none/tests/pth_exit2 (stderr)
none/tests/pth_mutexspeed (stdout)
none/tests/pth_mutexspeed (stderr)
none/tests/pth_once (stdout)
none/tests/pth_once (stderr)
none/tests/pth_rwlock (stderr)
none/tests/pth_stackalign (stdout)
none/tests/pth_stackalign (stderr)
none/tests/rcrl (stdout)
none/tests/rcrl (stderr)
none/tests/readline1 (stdout)
none/tests/readline1 (stderr)
none/tests/res_search (stdout)
none/tests/res_search (stderr)
none/tests/resolv (stdout)
none/tests/resolv (stderr)
none/tests/rlimit_nofile (stderr)
none/tests/sem (stderr)
none/tests/semlimit (stderr)
none/tests/sha1_test (stderr)
none/tests/shell (stdout)
none/tests/shell (stderr)
none/tests/shell_valid1 (stderr)
none/tests/shell_valid2 (stderr)
none/tests/shell_valid3 (stderr)
none/tests/shortpush (stderr)
none/tests/shorts (stderr)
none/tests/sigstackgrowth (stdout)
none/tests/sigstackgrowth (stderr)
none/tests/stackgrowth (stdout)
none/tests/stackgrowth (stderr)
none/tests/syscall-restart1 (stderr)
none/tests/syscall-restart2 (stderr)
none/tests/system (stderr)
none/tests/thread-exits (stdout)
none/tests/thread-exits (stderr)
none/tests/threaded-fork (stdout)
none/tests/threaded-fork (stderr)
none/tests/threadederrno (stdout)
none/tests/threadederrno (stderr)
none/tests/tls (stdout)
none/tests/tls (stderr)
none/tests/vgprintf (stdout)
none/tests/vgprintf (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed
Last 20 lines of verbose log follow echo
mc_translate.c:416: error: 'Iop_Neg8' undeclared (first use in this function)
mc_translate.c:416: error: (Each undeclared identifier is reported only once
mc_translate.c:416: error: for each function it appears in.)
mc_translate.c: In function 'mkLeft16':
mc_translate.c:425: error: 'Iop_Neg16' undeclared (first use in this function)
mc_translate.c: In function 'mkLeft32':
mc_translate.c:434: error: 'Iop_Neg32' undeclared (first use in this function)
mc_translate.c: In function 'mkLeft64':
mc_translate.c:443: error: 'Iop_Neg64' undeclared (first use in this function)
mc_translate.c: In function 'expr2vbits_Unop':
mc_translate.c:2527: error: 'Iop_Neg8' undeclared (first use in this function)
mc_translate.c:2529: error: 'Iop_Neg16' undeclared (first use in this function)
mc_translate.c:2531: error: 'Iop_Neg32' undeclared (first use in this function)
make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
make: *** [all] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Aug 26 03:11:21 2007
--- new.short Sun Aug 26 03:15:45 2007
***************
*** 3,26 ****
Configuring valgrind ... done
! Building valgrind ... failed
- Last 20 lines of verbose log follow echo
- mc_translate.c:416: error: 'Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:416: error: (Each undeclared identifier is reported only once
- mc_translate.c:416: error: for each function it appears in.)
- mc_translate.c: In function 'mkLeft16':
- mc_translate.c:425: error: 'Iop_Neg16' undeclared (first use in this function)
- mc_translate.c: In function 'mkLeft32':
- mc_translate.c:434: error: 'Iop_Neg32' undeclared (first use in this function)
- mc_translate.c: In function 'mkLeft64':
- mc_translate.c:443: error: 'Iop_Neg64' undeclared (first use in this function)
- mc_translate.c: In function 'expr2vbits_Unop':
- mc_translate.c:2527: error: 'Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:2529: error: 'Iop_Neg16' undeclared (first use in this function)
- mc_translate.c:2531: error: 'Iop_Neg32' undeclared (first use in this function)
- make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
- make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[2]: *** [all-recursive] Error 1
- make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[1]: *** [all-recursive] Error 1
- make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
- make: *** [all] Error 2
--- 3,336 ----
Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
!
! Regression test results follow
!
! == 292 tests, 221 stderr failures, 105 stdout failures, 0 posttest failures ==
! memcheck/tests/addressable (stdout)
! memcheck/tests/addressable (stderr)
! memcheck/tests/amd64/bt_everything (stdout)
! memcheck/tests/amd64/bt_everything (stderr)
! memcheck/tests/amd64/bug132146 (stdout)
! memcheck/tests/amd64/bug132146 (stderr)
! memcheck/tests/amd64/defcfaexpr (stderr)
! memcheck/tests/amd64/fxsave-amd64 (stdout)
! memcheck/tests/amd64/fxsave-amd64 (stderr)
! memcheck/tests/amd64/insn_basic (stdout)
! memcheck/tests/amd64/insn_basic (stderr)
! memcheck/tests/amd64/insn_fpu (stdout)
! memcheck/tests/amd64/insn_fpu (stderr)
! memcheck/tests/amd64/insn_mmx (stdout)
! memcheck/tests/amd64/insn_mmx (stderr)
! memcheck/tests/amd64/insn_sse (stdout)
! memcheck/tests/amd64/insn_sse (stderr)
! memcheck/tests/amd64/insn_sse2 (stdout)
! memcheck/tests/amd64/insn_sse2 (stderr)
! memcheck/tests/amd64/int3-amd64 (stdout)
! memcheck/tests/amd64/int3-amd64 (stderr)
! memcheck/tests/amd64/more_x87_fp (stdout)
! memcheck/tests/amd64/more_x87_fp (stderr)
! memcheck/tests/amd64/sse_memory (stdout)
! memcheck/tests/amd64/sse_memory (stderr)
! memcheck/tests/amd64/xor-undef-amd64 (stdout)
! memcheck/tests/amd64/xor-undef-amd64 (stderr)
! memcheck/tests/badaddrvalue (stdout)
! memcheck/tests/badaddrvalue (stderr)
! memcheck/tests/badfree-2trace (stderr)
! memcheck/tests/badfree (stderr)
! memcheck/tests/badjump (stderr)
! memcheck/tests/badjump2 (stderr)
! memcheck/tests/badloop (stderr)
! memcheck/tests/badpoll (stderr)
! memcheck/tests/badrw (stderr)
! memcheck/tests/brk (stderr)
! memcheck/tests/brk2 (stderr)
! memcheck/tests/buflen_check (stderr)
! memcheck/tests/clientperm (stdout)
! memcheck/tests/clientperm (stderr)
! memcheck/tests/custom_alloc (stderr)
! memcheck/tests/deep_templates (stdout)
! memcheck/tests/deep_templates (stderr)
! memcheck/tests/describe-block (stderr)
! memcheck/tests/doublefree (stderr)
! memcheck/tests/erringfds (stdout)
! memcheck/tests/erringfds (stderr)
! memcheck/tests/error_counts (stdout)
! memcheck/tests/errs1 (stderr)
! memcheck/tests/execve (stderr)
! memcheck/tests/execve2 (stderr)
! memcheck/tests/exitprog (stderr)
! memcheck/tests/fprw (stderr)
! memcheck/tests/fwrite (stderr)
! memcheck/tests/inits (stderr)
! memcheck/tests/inline (stdout)
! memcheck/tests/inline (stderr)
! memcheck/tests/leak-0 (stderr)
! memcheck/tests/leak-cycle (stderr)
! memcheck/tests/leak-pool-0 (stderr)
! memcheck/tests/leak-pool-1 (stderr)
! memcheck/tests/leak-pool-2 (stderr)
! memcheck/tests/leak-pool-3 (stderr)
! memcheck/tests/leak-pool-4 (stderr)
! memcheck/tests/leak-pool-5 (stderr)
! memcheck/tests/leak-regroot (stderr)
! memcheck/tests/leak-tree (stderr)
! memcheck/tests/leakotron (stdout)
! memcheck/tests/long_namespace_xml (stdout)
! memcheck/tests/long_namespace_xml (stderr)
! memcheck/tests/malloc1 (stderr)
! memcheck/tests/malloc2 (stderr)
! memcheck/tests/malloc3 (stdout)
! memcheck/tests/malloc3 (stderr)
! memcheck/tests/malloc_usable (stderr)
! memcheck/tests/manuel1 (stdout)
! memcheck/tests/manuel1 (stderr)
! memcheck/tests/manuel2 (stdout)
! memcheck/tests/manuel2 (stderr)
! memcheck/tests/manuel3 (stderr)
! memcheck/tests/match-overrun (stderr)
! memcheck/tests/memalign2 (stderr)
! memcheck/tests/memalign_test (stderr)
! memcheck/tests/memcmptest (stdout)
! memcheck/tests/memcmptest (stderr)
! memcheck/tests/mempool (stderr)
! memcheck/tests/metadata (stdout)
! memcheck/tests/metadata (stderr)
! memcheck/tests/mismatches (stderr)
! memcheck/tests/mmaptest (stderr)
! memcheck/tests/nanoleak (stderr)
! memcheck/tests/nanoleak2 (stderr)
! memcheck/tests/nanoleak_supp (stderr)
! memcheck/tests/new_nothrow (stderr)
! memcheck/tests/new_override (stdout)
! memcheck/tests/new_override (stderr)
! memcheck/tests/null_socket (stderr)
! memcheck/tests/oset_test (stdout)
! memcheck/tests/oset_test (stderr)
! memcheck/tests/overlap (stdout)
! memcheck/tests/overlap (stderr)
! memcheck/tests/partial_load_dflt (stderr)
! memcheck/tests/partial_load_ok (stderr)
! memcheck/tests/partiallydefinedeq (stdout)
! memcheck/tests/partiallydefinedeq (stderr)
! memcheck/tests/pdb-realloc (stderr)
! memcheck/tests/pdb-realloc2 (stdout)
! memcheck/tests/pdb-realloc2 (stderr)
! memcheck/tests/pipe (stderr)
! memcheck/tests/pointer-trace (stderr)
! memcheck/tests/post-syscall (stdout)
! memcheck/tests/post-syscall (stderr)
! memcheck/tests/realloc1 (stderr)
! memcheck/tests/realloc2 (stderr)
! memcheck/tests/realloc3 (stderr)
! memcheck/tests/sh-mem-random (stdout)
! memcheck/tests/sh-mem-random (stderr)
! memcheck/tests/sh-mem (stderr)
! memcheck/tests/sigaltstack (stderr)
! memcheck/tests/sigkill (stderr)
! memcheck/tests/signal2 (stdout)
! memcheck/tests/signal2 (stderr)
! memcheck/tests/sigprocmask (stderr)
! memcheck/tests/stack_changes (stdout)
! memcheck/tests/stack_changes (stderr)
! memcheck/tests/stack_switch (stderr)
! memcheck/tests/str_tester (stderr)
! memcheck/tests/strchr (stderr)
! memcheck/tests/supp1 (stderr)
! memcheck/tests/supp2 (stderr)
! memcheck/tests/supp_unknown (stderr)
! memcheck/tests/suppfree (stderr)
! memcheck/tests/toobig-allocs (stderr)
! memcheck/tests/trivialleak (stderr)
! memcheck/tests/vcpu_bz2 (stdout)
! memcheck/tests/vcpu_bz2 (stderr)
! memcheck/tests/vcpu_fbench (stdout)
! memcheck/tests/vcpu_fbench (stderr)
! memcheck/tests/vcpu_fnfns (stdout)
! memcheck/tests/vcpu_fnfns (stderr)
! memcheck/tests/with-space (stdout)
! memcheck/tests/with-space (stderr)
! memcheck/tests/wrap1 (stdout)
! memcheck/tests/wrap1 (stderr)
! memcheck/tests/wrap2 (stdout)
! memcheck/tests/wrap2 (stderr)
! memcheck/tests/wrap3 (stdout)
! memcheck/tests/wrap3 (stderr)
! memcheck/tests/wrap4 (stdout)
! memcheck/tests/wrap4 (stderr)
! memcheck/tests/wrap5 (stdout)
! memcheck/tests/wrap5 (stderr)
! memcheck/tests/wrap6 (stdout)
! memcheck/tests/wrap6 (stderr)
! memcheck/tests/wrap7 (stdout)
! memcheck/tests/wrap7 (stderr)
! memcheck/tests/wrap8 (stdout)
! memcheck/tests/wrap8 (stderr)
! memcheck/tests/writev (stderr)
! memcheck/tests/x86/scalar (stderr)
! memcheck/tests/xml1 (stdout)
! memcheck/tests/xml1 (stderr)
! memcheck/tests/zeropage (stdout)
! memcheck/tests/zeropage (stderr)
! cachegrind/tests/chdir (stderr)
! cachegrind/tests/clreq (stderr)
! cachegrind/tests/dlclose (stdout)
! cachegrind/tests/dlclose (stderr)
! cachegrind/tests/wrap5 (stdout)
! cachegrind/tests/wrap5 (stderr)
! callgrind/tests/clreq (stderr)
! callgrind/tests/simwork1 (stdout)
! callgrind/tests/simwork1 (stderr)
! callgrind/tests/simwork2 (stdout)
! callgrind/tests/simwork2 (stderr)
! callgrind/tests/simwork3 (stdout)
! callgrind/tests/simwork3 (stderr)
! callgrind/tests/threads (stderr)
! massif/tests/basic_malloc (stderr)
! massif/tests/toobig-allocs (stderr)
! massif/tests/true_html (stderr)
! massif/tests/true_text (stderr)
! lackey/tests/true (stderr)
! none/tests/amd64/bug127521-64 (stdout)
! none/tests/amd64/bug127521-64 (stderr)
! none/tests/amd64/bug132813-amd64 (stdout)
! none/tests/amd64/bug132813-amd64 (stderr)
! none/tests/amd64/bug132918 (stdout)
! none/tests/amd64/bug132918 (stderr)
! none/tests/amd64/clc (stdout)
! none/tests/amd64/clc (stderr)
! none/tests/amd64/fcmovnu (stdout)
! none/tests/amd64/fcmovnu (stderr)
! none/tests/amd64/fxtract (stdout)
! none/tests/amd64/fxtract (stderr)
! none/tests/amd64/insn_basic (stdout)
! none/tests/amd64/insn_basic (stderr)
! none/tests/amd64/insn_fpu (stdout)
! none/tests/amd64/insn_fpu (stderr)
! none/tests/amd64/insn_mmx (stdout)
! none/tests/amd64/insn_mmx (stderr)
! none/tests/amd64/insn_sse (stdout)
! none/tests/amd64/insn_sse (stderr)
! none/tests/amd64/insn_sse2 (stdout)
! none/tests/amd64/insn_sse2 (stderr)
! none/tests/amd64/jrcxz (stdout)
! none/tests/amd64/jrcxz (stderr)
! none/tests/amd64/looper (stdout)
! none/tests/amd64/looper (stderr)
! none/tests/amd64/nibz_bennee_mmap (stdout)
! none/tests/amd64/nibz_bennee_mmap (stderr)
! none/tests/amd64/rcl-amd64 (stdout)
! none/tests/amd64/rcl-amd64 (stderr)
! none/tests/amd64/shrld (stdout)
! none/tests/amd64/shrld (stderr)
! none/tests/amd64/slahf-amd64 (stdout)
! none/tests/amd64/slahf-amd64 (stderr)
! none/tests/amd64/smc1 (stdout)
! none/tests/amd64/smc1 (stderr)
! none/tests/ansi (stderr)
! none/tests/args (stdout)
! none/tests/args (stderr)
! none/tests/async-sigs (stdout)
! none/tests/async-sigs (stderr)
! none/tests/bitfield1 (stderr)
! none/tests/blockfault (stderr)
! none/tests/bug129866 (stdout)
! none/tests/bug129866 (stderr)
! none/tests/closeall (stderr)
! none/tests/coolo_sigaction (stdout)
! none/tests/coolo_sigaction (stderr)
! none/tests/coolo_strlen (stderr)
! none/tests/discard (stdout)
! none/tests/discard (stderr)
! none/tests/exec-sigmask (stderr)
! none/tests/execve (stderr)
! none/tests/faultstatus (stderr)
! none/tests/fcntl_setown (stderr)
! none/tests/fdleak_cmsg (stderr)
! none/tests/fdleak_creat (stderr)
! none/tests/fdleak_dup (stderr)
! none/tests/fdleak_dup2 (stderr)
! none/tests/fdleak_fcntl (stderr)
! none/tests/fdleak_ipv4 (stdout)
! none/tests/fdleak_ipv4 (stderr)
! none/tests/fdleak_open (stderr)
! none/tests/fdleak_pipe (stderr)
! none/tests/fdleak_socketpair (stderr)
! none/tests/floored (stdout)
! none/tests/floored (stderr)
! none/tests/fork (stdout)
! none/tests/fork (stderr)
! none/tests/fucomip (stderr)
! none/tests/gxx304 (stderr)
! none/tests/manythreads (stdout)
! none/tests/manythreads (stderr)
! none/tests/map_unaligned (stderr)
! none/tests/map_unmap (stdout)
! none/tests/map_unmap (stderr)
! none/tests/mq (stderr)
! none/tests/mremap (stderr)
! none/tests/mremap2 (stdout)
! none/tests/mremap2 (stderr)
! none/tests/munmap_exe (stderr)
! none/tests/nestedfns (stdout)
! none/tests/nestedfns (stderr)
! none/tests/pending (stdout)
! none/tests/pending (stderr)
! none/tests/pth_atfork1 (stdout)
! none/tests/pth_atfork1 (stderr)
! none/tests/pth_blockedsig (stdout)
! none/tests/pth_blockedsig (stderr)
! none/tests/pth_cancel1 (stdout)
! none/tests/pth_cancel1 (stderr)
! none/tests/pth_cancel2 (stderr)
! none/tests/pth_cvsimple (stdout)
! none/tests/pth_cvsimple (stderr)
! none/tests/pth_detached (stdout)
! none/tests/pth_detached (stderr)
! none/tests/pth_empty (stderr)
! none/tests/pth_exit (stderr)
! none/tests/pth_exit2 (stderr)
! none/tests/pth_mutexspeed (stdout)
! none/tests/pth_mutexspeed (stderr)
! none/tests/pth_once (stdout)
! none/tests/pth_once (stderr)
! none/tests/pth_rwlock (stderr)
! none/tests/pth_stackalign (stdout)
! none/tests/pth_stackalign (stderr)
! none/tests/rcrl (stdout)
! none/tests/rcrl (stderr)
! none/tests/readline1 (stdout)
! none/tests/readline1 (stderr)
! none/tests/res_search (stdout)
! none/tests/res_search (stderr)
! none/tests/resolv (stdout)
! none/tests/resolv (stderr)
! none/tests/rlimit_nofile (stderr)
! none/tests/sem (stderr)
! none/tests/semlimit (stderr)
! none/tests/sha1_test (stderr)
! none/tests/shell (stdout)
! none/tests/shell (stderr)
! none/tests/shell_valid1 (stderr)
! none/tests/shell_valid2 (stderr)
! none/tests/shell_valid3 (stderr)
! none/tests/shortpush (stderr)
! none/tests/shorts (stderr)
! none/tests/sigstackgrowth (stdout)
! none/tests/sigstackgrowth (stderr)
! none/tests/stackgrowth (stdout)
! none/tests/stackgrowth (stderr)
! none/tests/syscall-restart1 (stderr)
! none/tests/syscall-restart2 (stderr)
! none/tests/system (stderr)
! none/tests/thread-exits (stdout)
! none/tests/thread-exits (stderr)
! none/tests/threaded-fork (stdout)
! none/tests/threaded-fork (stderr)
! none/tests/threadederrno (stdout)
! none/tests/threadederrno (stderr)
! none/tests/tls (stdout)
! none/tests/tls (stderr)
! none/tests/vgprintf (stdout)
! none/tests/vgprintf (stderr)
From: Tom H. <th...@cy...> - 2007-08-26 02:12:29
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2007-08-26 03:05:04 BST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 292 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed
Last 20 lines of verbose log follow echo
mc_translate.c:416: error: `Iop_Neg8' undeclared (first use in this function)
mc_translate.c:416: error: (Each undeclared identifier is reported only once
mc_translate.c:416: error: for each function it appears in.)
mc_translate.c: In function `mkLeft16':
mc_translate.c:425: error: `Iop_Neg16' undeclared (first use in this function)
mc_translate.c: In function `mkLeft32':
mc_translate.c:434: error: `Iop_Neg32' undeclared (first use in this function)
mc_translate.c: In function `mkLeft64':
mc_translate.c:443: error: `Iop_Neg64' undeclared (first use in this function)
mc_translate.c: In function `expr2vbits_Unop':
mc_translate.c:2527: error: `Iop_Neg8' undeclared (first use in this function)
mc_translate.c:2529: error: `Iop_Neg16' undeclared (first use in this function)
mc_translate.c:2531: error: `Iop_Neg32' undeclared (first use in this function)
make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
make: *** [all] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Aug 26 03:06:16 2007
--- new.short Sun Aug 26 03:12:21 2007
***************
*** 3,26 ****
Configuring valgrind ... done
! Building valgrind ... failed
- Last 20 lines of verbose log follow echo
- mc_translate.c:416: error: `Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:416: error: (Each undeclared identifier is reported only once
- mc_translate.c:416: error: for each function it appears in.)
- mc_translate.c: In function `mkLeft16':
- mc_translate.c:425: error: `Iop_Neg16' undeclared (first use in this function)
- mc_translate.c: In function `mkLeft32':
- mc_translate.c:434: error: `Iop_Neg32' undeclared (first use in this function)
- mc_translate.c: In function `mkLeft64':
- mc_translate.c:443: error: `Iop_Neg64' undeclared (first use in this function)
- mc_translate.c: In function `expr2vbits_Unop':
- mc_translate.c:2527: error: `Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:2529: error: `Iop_Neg16' undeclared (first use in this function)
- mc_translate.c:2531: error: `Iop_Neg32' undeclared (first use in this function)
- make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
- make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[2]: *** [all-recursive] Error 1
- make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[1]: *** [all-recursive] Error 1
- make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
- make: *** [all] Error 2
--- 3,17 ----
Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
!
! Regression test results follow
!
! == 292 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
! memcheck/tests/pointer-trace (stderr)
! memcheck/tests/stack_switch (stderr)
! memcheck/tests/x86/scalar (stderr)
! memcheck/tests/x86/scalar_supp (stderr)
! memcheck/tests/xml1 (stderr)
! none/tests/mremap (stderr)
! none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-08-26 02:06:50
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-08-26 03:00:03 BST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 294 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed
Last 20 lines of verbose log follow
mc_translate.c:416: error: `Iop_Neg8' undeclared (first use in this function)
mc_translate.c:416: error: (Each undeclared identifier is reported only once
mc_translate.c:416: error: for each function it appears in.)
mc_translate.c: In function `mkLeft16':
mc_translate.c:425: error: `Iop_Neg16' undeclared (first use in this function)
mc_translate.c: In function `mkLeft32':
mc_translate.c:434: error: `Iop_Neg32' undeclared (first use in this function)
mc_translate.c: In function `mkLeft64':
mc_translate.c:443: error: `Iop_Neg64' undeclared (first use in this function)
mc_translate.c: In function `expr2vbits_Unop':
mc_translate.c:2527: error: `Iop_Neg8' undeclared (first use in this function)
mc_translate.c:2529: error: `Iop_Neg16' undeclared (first use in this function)
mc_translate.c:2531: error: `Iop_Neg32' undeclared (first use in this function)
make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
make: *** [all] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Aug 26 03:01:17 2007
--- new.short Sun Aug 26 03:06:42 2007
***************
*** 3,26 ****
Configuring valgrind ... done
! Building valgrind ... failed
- Last 20 lines of verbose log follow echo
- mc_translate.c:416: error: `Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:416: error: (Each undeclared identifier is reported only once
- mc_translate.c:416: error: for each function it appears in.)
- mc_translate.c: In function `mkLeft16':
- mc_translate.c:425: error: `Iop_Neg16' undeclared (first use in this function)
- mc_translate.c: In function `mkLeft32':
- mc_translate.c:434: error: `Iop_Neg32' undeclared (first use in this function)
- mc_translate.c: In function `mkLeft64':
- mc_translate.c:443: error: `Iop_Neg64' undeclared (first use in this function)
- mc_translate.c: In function `expr2vbits_Unop':
- mc_translate.c:2527: error: `Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:2529: error: `Iop_Neg16' undeclared (first use in this function)
- mc_translate.c:2531: error: `Iop_Neg32' undeclared (first use in this function)
- make[3]: *** [memcheck_x86_linux-mc_translate.o] Error 1
- make[3]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[2]: *** [all-recursive] Error 1
- make[2]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind/memcheck'
- make[1]: *** [all-recursive] Error 1
- make[1]: Leaving directory `/tmp/vgtest/2007-08-26/valgrind'
- make: *** [all] Error 2
--- 3,17 ----
Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
!
! Regression test results follow
!
! == 294 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
! memcheck/tests/pointer-trace (stderr)
! memcheck/tests/stack_switch (stderr)
! memcheck/tests/x86/scalar (stderr)
! memcheck/tests/x86/scalar_supp (stderr)
! none/tests/fdleak_fcntl (stderr)
! none/tests/mremap (stderr)
! none/tests/mremap2 (stdout)
From: <js...@ac...> - 2007-08-26 00:10:28
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-08-26 02:00:01 CEST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 226 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... failed
Last 20 lines of verbose log follow
mc_translate.c:416: error: 'Iop_Neg8' undeclared (first use in this function)
mc_translate.c:416: error: (Each undeclared identifier is reported only once
mc_translate.c:416: error: for each function it appears in.)
mc_translate.c: In function 'mkLeft16':
mc_translate.c:425: error: 'Iop_Neg16' undeclared (first use in this function)
mc_translate.c: In function 'mkLeft32':
mc_translate.c:434: error: 'Iop_Neg32' undeclared (first use in this function)
mc_translate.c: In function 'mkLeft64':
mc_translate.c:443: error: 'Iop_Neg64' undeclared (first use in this function)
mc_translate.c: In function 'expr2vbits_Unop':
mc_translate.c:2527: error: 'Iop_Neg8' undeclared (first use in this function)
mc_translate.c:2529: error: 'Iop_Neg16' undeclared (first use in this function)
mc_translate.c:2531: error: 'Iop_Neg32' undeclared (first use in this function)
make[3]: *** [memcheck_ppc32_linux-mc_translate.o] Error 1
make[3]: Leaving directory `/home/sewardj/Nightly/valgrind/memcheck'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/sewardj/Nightly/valgrind/memcheck'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/sewardj/Nightly/valgrind'
make: *** [all] Error 2
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Sun Aug 26 02:01:59 2007
--- new.short Sun Aug 26 02:10:17 2007
***************
*** 3,26 ****
Configuring valgrind ... done
! Building valgrind ... failed
- Last 20 lines of verbose log follow echo
- mc_translate.c:416: error: 'Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:416: error: (Each undeclared identifier is reported only once
- mc_translate.c:416: error: for each function it appears in.)
- mc_translate.c: In function 'mkLeft16':
- mc_translate.c:425: error: 'Iop_Neg16' undeclared (first use in this function)
- mc_translate.c: In function 'mkLeft32':
- mc_translate.c:434: error: 'Iop_Neg32' undeclared (first use in this function)
- mc_translate.c: In function 'mkLeft64':
- mc_translate.c:443: error: 'Iop_Neg64' undeclared (first use in this function)
- mc_translate.c: In function 'expr2vbits_Unop':
- mc_translate.c:2527: error: 'Iop_Neg8' undeclared (first use in this function)
- mc_translate.c:2529: error: 'Iop_Neg16' undeclared (first use in this function)
- mc_translate.c:2531: error: 'Iop_Neg32' undeclared (first use in this function)
- make[3]: *** [memcheck_ppc32_linux-mc_translate.o] Error 1
- make[3]: Leaving directory `/home/sewardj/Nightly/valgrind/memcheck'
- make[2]: *** [all-recursive] Error 1
- make[2]: Leaving directory `/home/sewardj/Nightly/valgrind/memcheck'
- make[1]: *** [all-recursive] Error 1
- make[1]: Leaving directory `/home/sewardj/Nightly/valgrind'
- make: *** [all] Error 2
--- 3,18 ----
Configuring valgrind ... done
! Building valgrind ... done
! Running regression tests ... failed
!
! Regression test results follow
!
! == 226 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
! memcheck/tests/deep_templates (stdout)
! memcheck/tests/leak-cycle (stderr)
! memcheck/tests/leak-tree (stderr)
! memcheck/tests/pointer-trace (stderr)
! none/tests/faultstatus (stderr)
! none/tests/fdleak_cmsg (stderr)
! none/tests/mremap (stderr)
! none/tests/mremap2 (stdout)
From: <sv...@va...> - 2007-08-25 23:25:01
Author: sewardj
Date: 2007-08-26 00:25:00 +0100 (Sun, 26 Aug 2007)
New Revision: 6780
Log:
Merge from CGTUNE branch:
r6738:
Unroll memset; apparently is popular in some places (kpdf).
Modified:
trunk/memcheck/mc_replace_strmem.c
Modified: trunk/memcheck/mc_replace_strmem.c
===================================================================
--- trunk/memcheck/mc_replace_strmem.c 2007-08-25 23:09:36 UTC (rev 6779)
+++ trunk/memcheck/mc_replace_strmem.c 2007-08-25 23:25:00 UTC (rev 6780)
@@ -475,10 +475,17 @@
void* VG_REPLACE_FUNCTION_ZU(soname,fnname)(void *s, Int c, SizeT n) \
{ \
unsigned char *cp = s; \
- \
- while(n--) \
+ while (n >= 4) { \
+ cp[0] = c; \
+ cp[1] = c; \
+ cp[2] = c; \
+ cp[3] = c; \
+ cp += 4; \
+ n -= 4; \
+ } \
+ while (n--) { \
*cp++ = c; \
- \
+ } \
return s; \
}
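[Editor's note] The unrolled memset merged above can be sketched as plain C, stripped of the VG_REPLACE_FUNCTION_ZU macro wrapper and Valgrind's Int/SizeT typedefs; the function name here is illustrative, not Valgrind's:

```c
#include <stddef.h>

/* Minimal sketch of the unrolled memset from the diff above: write
   four bytes per iteration, then mop up the 0..3 trailing bytes. */
static void *sketch_memset(void *s, int c, size_t n)
{
    unsigned char *cp = s;
    while (n >= 4) {                 /* unrolled by 4 */
        cp[0] = (unsigned char)c;
        cp[1] = (unsigned char)c;
        cp[2] = (unsigned char)c;
        cp[3] = (unsigned char)c;
        cp += 4;
        n -= 4;
    }
    while (n--)                      /* remainder, byte at a time */
        *cp++ = (unsigned char)c;
    return s;
}
```

The unrolling trades a little code size for fewer loop-condition tests on large fills, which is why it pays off for memset-heavy workloads such as kpdf mentioned in the log message.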
From: <sv...@va...> - 2007-08-25 23:21:08
Author: sewardj
Date: 2007-08-26 00:21:08 +0100 (Sun, 26 Aug 2007)
New Revision: 1781
Log:
Merge from CGTUNE branch, code generation improvements for amd64:
r1772:
When generating code for helper calls, be more aggressive about
computing values directly into argument registers, thereby avoiding
some reg-reg shuffling. This reduces the amount of code (on amd64)
generated by Cachegrind by about 6% and has zero or marginal benefit
for other tools.
r1773:
Emit 64-bit branch targets using 32-bit short forms when possible.
Since (with V's default amd64 load address of 0x38000000) this is
usually possible, it saves about 7% in code size for Memcheck and even
more for Cachegrind.
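[Editor's note] The r1773 short-form test reduces to one predicate: can the full 64-bit target be recovered by sign-extending its low 32 bits? A standalone sketch (the function name here is illustrative; the commit's own helper appears as fitsIn32Bits in the diff that follows):

```c
#include <stdint.h>

/* Does sign-extending the low 32 bits of x reproduce the whole 64-bit
   value?  Equivalently: are the top 33 bits all 0 or all 1?  When this
   holds, a 7-byte "movl sign-extend(imm32), %r11" can replace the
   10-byte "movabsq $imm64, %r11". */
static int fits_in_32_bits(uint64_t x)
{
    int64_t y = (int64_t)(int32_t)(uint32_t)x;  /* sign-extend low 32 */
    return (uint64_t)y == x;
}
```

With Valgrind's default amd64 load address of 0x38000000 the predicate almost always holds, which is where the quoted ~7% code-size saving comes from.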
Modified:
trunk/priv/host-amd64/hdefs.c
trunk/priv/host-amd64/isel.c
Modified: trunk/priv/host-amd64/hdefs.c
===================================================================
--- trunk/priv/host-amd64/hdefs.c 2007-08-25 23:07:44 UTC (rev 1780)
+++ trunk/priv/host-amd64/hdefs.c 2007-08-25 23:21:08 UTC (rev 1781)
@@ -1991,6 +1991,17 @@
Int i32 = (Int)w32;
return toBool(i32 == ((i32 << 24) >> 24));
}
+/* Can the lower 32 bits be signedly widened to produce the whole
+ 64-bit value? In other words, are the top 33 bits either all 0 or
+ all 1 ? */
+static Bool fitsIn32Bits ( ULong x )
+{
+ Long y0 = (Long)x;
+ Long y1 = y0;
+ y1 <<= 32;
+ y1 >>=/*s*/ 32;
+ return toBool(x == y1);
+}
/* Forming mod-reg-rm bytes and scale-index-base bytes.
@@ -2601,25 +2612,36 @@
goto bad;
}
- case Ain_Call:
+ case Ain_Call: {
/* As per detailed comment for Ain_Call in
getRegUsage_AMD64Instr above, %r11 is used as an address
temporary. */
/* jump over the following two insns if the condition does not
hold */
+ Bool shortImm = fitsIn32Bits(i->Ain.Call.target);
if (i->Ain.Call.cond != Acc_ALWAYS) {
*p++ = toUChar(0x70 + (0xF & (i->Ain.Call.cond ^ 1)));
- *p++ = 13; /* 13 bytes in the next two insns */
+ *p++ = shortImm ? 10 : 13;
+ /* 10 or 13 bytes in the next two insns */
}
- /* movabsq $target, %r11 */
- *p++ = 0x49;
- *p++ = 0xBB;
- p = emit64(p, i->Ain.Call.target);
- /* call *%r11 */
+ if (shortImm) {
+ /* 7 bytes: movl sign-extend(imm32), %r11 */
+ *p++ = 0x49;
+ *p++ = 0xC7;
+ *p++ = 0xC3;
+ p = emit32(p, (UInt)i->Ain.Call.target);
+ } else {
+ /* 10 bytes: movabsq $target, %r11 */
+ *p++ = 0x49;
+ *p++ = 0xBB;
+ p = emit64(p, i->Ain.Call.target);
+ }
+ /* 3 bytes: call *%r11 */
*p++ = 0x41;
*p++ = 0xFF;
*p++ = 0xD3;
goto done;
+ }
case Ain_Goto:
/* Use ptmp for backpatching conditional jumps. */
@@ -2701,11 +2723,19 @@
destined for %rax immediately prior to this Ain_Goto. */
vassert(sizeof(ULong) == sizeof(void*));
vassert(dispatch != NULL);
- /* movabsq $imm64, %rdx */
- *p++ = 0x48;
- *p++ = 0xBA;
- p = emit64(p, Ptr_to_ULong(dispatch));
+ if (fitsIn32Bits(Ptr_to_ULong(dispatch))) {
+ /* movl sign-extend(imm32), %rdx */
+ *p++ = 0x48;
+ *p++ = 0xC7;
+ *p++ = 0xC2;
+ p = emit32(p, (UInt)Ptr_to_ULong(dispatch));
+ } else {
+ /* movabsq $imm64, %rdx */
+ *p++ = 0x48;
+ *p++ = 0xBA;
+ p = emit64(p, Ptr_to_ULong(dispatch));
+ }
/* jmp *%rdx */
*p++ = 0xFF;
*p++ = 0xE2;
Modified: trunk/priv/host-amd64/isel.c
===================================================================
--- trunk/priv/host-amd64/isel.c 2007-08-25 23:07:44 UTC (rev 1780)
+++ trunk/priv/host-amd64/isel.c 2007-08-25 23:21:08 UTC (rev 1781)
@@ -372,20 +372,54 @@
//.. }
-/* Used only in doHelperCall. See big comment in doHelperCall re
- handling of register-parameter args. This function figures out
- whether evaluation of an expression might require use of a fixed
- register. If in doubt return True (safe but suboptimal).
-*/
-static
-Bool mightRequireFixedRegs ( IRExpr* e )
+/* Used only in doHelperCall. If possible, produce a single
+ instruction which computes 'e' into 'dst'. If not possible, return
+ NULL. */
+
+static AMD64Instr* iselIntExpr_single_instruction ( ISelEnv* env,
+ HReg dst,
+ IRExpr* e )
{
- switch (e->tag) {
- case Iex_RdTmp: case Iex_Const: case Iex_Get:
- return False;
- default:
- return True;
+ vassert(typeOfIRExpr(env->type_env, e) == Ity_I64);
+
+ if (e->tag == Iex_Const) {
+ vassert(e->Iex.Const.con->tag == Ico_U64);
+ if (fitsIn32Bits(e->Iex.Const.con->Ico.U64)) {
+ return AMD64Instr_Alu64R(
+ Aalu_MOV,
+ AMD64RMI_Imm(toUInt(e->Iex.Const.con->Ico.U64)),
+ dst
+ );
+ } else {
+ return AMD64Instr_Imm64(e->Iex.Const.con->Ico.U64, dst);
+ }
}
+
+ if (e->tag == Iex_RdTmp) {
+ HReg src = lookupIRTemp(env, e->Iex.RdTmp.tmp);
+ return mk_iMOVsd_RR(src, dst);
+ }
+
+ if (e->tag == Iex_Get) {
+ vassert(e->Iex.Get.ty == Ity_I64);
+ return AMD64Instr_Alu64R(
+ Aalu_MOV,
+ AMD64RMI_Mem(
+ AMD64AMode_IR(e->Iex.Get.offset,
+ hregAMD64_RBP())),
+ dst);
+ }
+
+ if (e->tag == Iex_Unop
+ && e->Iex.Unop.op == Iop_32Uto64
+ && e->Iex.Unop.arg->tag == Iex_RdTmp) {
+ HReg src = lookupIRTemp(env, e->Iex.Unop.arg->Iex.RdTmp.tmp);
+ return AMD64Instr_MovZLQ(src, dst);
+ }
+
+ if (0) { ppIRExpr(e); vex_printf("\n"); }
+
+ return NULL;
}
@@ -401,7 +435,7 @@
AMD64CondCode cc;
HReg argregs[6];
HReg tmpregs[6];
- Bool go_fast;
+ AMD64Instr* fastinstrs[6];
Int n_args, i, argreg;
/* Marshal args for a call and do the call.
@@ -471,12 +505,13 @@
tmpregs[0] = tmpregs[1] = tmpregs[2] =
tmpregs[3] = tmpregs[4] = tmpregs[5] = INVALID_HREG;
+ fastinstrs[0] = fastinstrs[1] = fastinstrs[2] =
+ fastinstrs[3] = fastinstrs[4] = fastinstrs[5] = NULL;
+
/* First decide which scheme (slow or fast) is to be used. First
assume the fast scheme, and select slow if any contraindications
(wow) appear. */
- go_fast = True;
-
if (guard) {
if (guard->tag == Iex_Const
&& guard->Iex.Const.con->tag == Ico_U1
@@ -484,91 +519,94 @@
/* unconditional */
} else {
/* Not manifestly unconditional -- be conservative. */
- go_fast = False;
+ goto slowscheme;
}
}
- if (go_fast) {
- for (i = 0; i < n_args; i++) {
- if (mightRequireFixedRegs(args[i])) {
- go_fast = False;
- break;
- }
- }
+ /* Ok, let's try for the fast scheme. If it doesn't pan out, we'll
+ use the slow scheme. Because this is tentative, we can't call
+ addInstr (that is, commit to) any instructions until we're
+ handled all the arguments. So park the resulting instructions
+ in a buffer and emit that if we're successful. */
+
+ /* FAST SCHEME */
+ argreg = 0;
+ if (passBBP) {
+ fastinstrs[argreg] = mk_iMOVsd_RR( hregAMD64_RBP(), argregs[argreg]);
+ argreg++;
}
- /* At this point the scheme to use has been established. Generate
- code to get the arg values into the argument rregs. */
+ for (i = 0; i < n_args; i++) {
+ vassert(argreg < 6);
+ vassert(typeOfIRExpr(env->type_env, args[i]) == Ity_I64);
+ fastinstrs[argreg]
+ = iselIntExpr_single_instruction( env, argregs[argreg], args[i] );
+ if (fastinstrs[argreg] == NULL)
+ goto slowscheme;
+ argreg++;
+ }
- if (go_fast) {
+ /* Looks like we're in luck. Emit the accumulated instructions and
+ move on to doing the call itself. */
+ vassert(argreg <= 6);
+ for (i = 0; i < argreg; i++)
+ addInstr(env, fastinstrs[i]);
- /* FAST SCHEME */
- argreg = 0;
- if (passBBP) {
- addInstr(env, mk_iMOVsd_RR( hregAMD64_RBP(), argregs[argreg]));
- argreg++;
- }
+ /* Fast scheme only applies for unconditional calls. Hence: */
+ cc = Acc_ALWAYS;
- for (i = 0; i < n_args; i++) {
- vassert(argreg < 6);
- vassert(typeOfIRExpr(env->type_env, args[i]) == Ity_I64);
- addInstr(env, AMD64Instr_Alu64R(
- Aalu_MOV,
- iselIntExpr_RMI(env, args[i]),
- argregs[argreg]
- )
- );
- argreg++;
- }
+ goto handle_call;
- /* Fast scheme only applies for unconditional calls. Hence: */
- cc = Acc_ALWAYS;
- } else {
+ /* SLOW SCHEME; move via temporaries */
+ slowscheme:
+#if 0
+if (n_args > 0) {for (i = 0; args[i]; i++) {
+ppIRExpr(args[i]); vex_printf(" "); }
+vex_printf("\n");}
+#endif
+ argreg = 0;
- /* SLOW SCHEME; move via temporaries */
- argreg = 0;
+ if (passBBP) {
+ /* This is pretty stupid; better to move directly to rdi
+ after the rest of the args are done. */
+ tmpregs[argreg] = newVRegI(env);
+ addInstr(env, mk_iMOVsd_RR( hregAMD64_RBP(), tmpregs[argreg]));
+ argreg++;
+ }
- if (passBBP) {
- /* This is pretty stupid; better to move directly to rdi
- after the rest of the args are done. */
- tmpregs[argreg] = newVRegI(env);
- addInstr(env, mk_iMOVsd_RR( hregAMD64_RBP(), tmpregs[argreg]));
- argreg++;
- }
+ for (i = 0; i < n_args; i++) {
+ vassert(argreg < 6);
+ vassert(typeOfIRExpr(env->type_env, args[i]) == Ity_I64);
+ tmpregs[argreg] = iselIntExpr_R(env, args[i]);
+ argreg++;
+ }
- for (i = 0; i < n_args; i++) {
- vassert(argreg < 6);
- vassert(typeOfIRExpr(env->type_env, args[i]) == Ity_I64);
- tmpregs[argreg] = iselIntExpr_R(env, args[i]);
- argreg++;
+ /* Now we can compute the condition. We can't do it earlier
+ because the argument computations could trash the condition
+ codes. Be a bit clever to handle the common case where the
+ guard is 1:Bit. */
+ cc = Acc_ALWAYS;
+ if (guard) {
+ if (guard->tag == Iex_Const
+ && guard->Iex.Const.con->tag == Ico_U1
+ && guard->Iex.Const.con->Ico.U1 == True) {
+ /* unconditional -- do nothing */
+ } else {
+ cc = iselCondCode( env, guard );
}
+ }
- /* Now we can compute the condition. We can't do it earlier
- because the argument computations could trash the condition
- codes. Be a bit clever to handle the common case where the
- guard is 1:Bit. */
- cc = Acc_ALWAYS;
- if (guard) {
- if (guard->tag == Iex_Const
- && guard->Iex.Const.con->tag == Ico_U1
- && guard->Iex.Const.con->Ico.U1 == True) {
- /* unconditional -- do nothing */
- } else {
- cc = iselCondCode( env, guard );
- }
- }
+ /* Move the args to their final destinations. */
+ for (i = 0; i < argreg; i++) {
+ /* None of these insns, including any spill code that might
+ be generated, may alter the condition codes. */
+ addInstr( env, mk_iMOVsd_RR( tmpregs[i], argregs[i] ) );
+ }
- /* Move the args to their final destinations. */
- for (i = 0; i < argreg; i++) {
- /* None of these insns, including any spill code that might
- be generated, may alter the condition codes. */
- addInstr( env, mk_iMOVsd_RR( tmpregs[i], argregs[i] ) );
- }
- }
-
/* Finally, the call itself. */
+ handle_call:
addInstr(env, AMD64Instr_Call(
cc,
Ptr_to_ULong(cee->addr),
From: <sv...@va...> - 2007-08-25 23:09:35
Author: sewardj
Date: 2007-08-26 00:09:36 +0100 (Sun, 26 Aug 2007)
New Revision: 6779
Log:
Merge from CGTUNE branch:
r6736:
Hook up Memcheck to the new Left and CmpwNEZ primops defined in vex r1769.
r6737:
Track vex r1770 (removal of Iop_Neg64/32/16/8 primops)
Modified:
trunk/memcheck/mc_translate.c
Modified: trunk/memcheck/mc_translate.c
===================================================================
--- trunk/memcheck/mc_translate.c 2007-08-25 07:19:08 UTC (rev 6778)
+++ trunk/memcheck/mc_translate.c 2007-08-25 23:09:36 UTC (rev 6779)
@@ -411,38 +411,22 @@
static IRAtom* mkLeft8 ( MCEnv* mce, IRAtom* a1 ) {
tl_assert(isShadowAtom(mce,a1));
- /* It's safe to duplicate a1 since it's only an atom */
- return assignNew(mce, Ity_I8,
- binop(Iop_Or8, a1,
- assignNew(mce, Ity_I8,
- unop(Iop_Neg8, a1))));
+ return assignNew(mce, Ity_I8, unop(Iop_Left8, a1));
}
static IRAtom* mkLeft16 ( MCEnv* mce, IRAtom* a1 ) {
tl_assert(isShadowAtom(mce,a1));
- /* It's safe to duplicate a1 since it's only an atom */
- return assignNew(mce, Ity_I16,
- binop(Iop_Or16, a1,
- assignNew(mce, Ity_I16,
- unop(Iop_Neg16, a1))));
+ return assignNew(mce, Ity_I16, unop(Iop_Left16, a1));
}
static IRAtom* mkLeft32 ( MCEnv* mce, IRAtom* a1 ) {
tl_assert(isShadowAtom(mce,a1));
- /* It's safe to duplicate a1 since it's only an atom */
- return assignNew(mce, Ity_I32,
- binop(Iop_Or32, a1,
- assignNew(mce, Ity_I32,
- unop(Iop_Neg32, a1))));
+ return assignNew(mce, Ity_I32, unop(Iop_Left32, a1));
}
static IRAtom* mkLeft64 ( MCEnv* mce, IRAtom* a1 ) {
tl_assert(isShadowAtom(mce,a1));
- /* It's safe to duplicate a1 since it's only an atom */
- return assignNew(mce, Ity_I64,
- binop(Iop_Or64, a1,
- assignNew(mce, Ity_I64,
- unop(Iop_Neg64, a1))));
+ return assignNew(mce, Ity_I64, unop(Iop_Left64, a1));
}
/* --------- 'Improvement' functions for AND/OR. --------- */
@@ -557,14 +541,28 @@
static IRAtom* mkPCastTo( MCEnv* mce, IRType dst_ty, IRAtom* vbits )
{
- IRType ty;
+ IRType src_ty;
IRAtom* tmp1;
/* Note, dst_ty is a shadow type, not an original type. */
/* First of all, collapse vbits down to a single bit. */
tl_assert(isShadowAtom(mce,vbits));
- ty = typeOfIRExpr(mce->bb->tyenv, vbits);
- tmp1 = NULL;
- switch (ty) {
+ src_ty = typeOfIRExpr(mce->bb->tyenv, vbits);
+
+ /* Fast-track some common cases */
+ if (src_ty == Ity_I32 && dst_ty == Ity_I32)
+ return assignNew(mce, Ity_I32, unop(Iop_CmpwNEZ32, vbits));
+
+ if (src_ty == Ity_I64 && dst_ty == Ity_I64)
+ return assignNew(mce, Ity_I64, unop(Iop_CmpwNEZ64, vbits));
+
+ if (src_ty == Ity_I32 && dst_ty == Ity_I64) {
+ IRAtom* tmp = assignNew(mce, Ity_I32, unop(Iop_CmpwNEZ32, vbits));
+ return assignNew(mce, Ity_I64, binop(Iop_32HLto64, tmp, tmp));
+ }
+
+ /* Else do it the slow way .. */
+ tmp1 = NULL;
+ switch (src_ty) {
case Ity_I1:
tmp1 = vbits;
break;
@@ -591,7 +589,7 @@
break;
}
default:
- ppIRType(ty);
+ ppIRType(src_ty);
VG_(tool_panic)("mkPCastTo(1)");
}
tl_assert(tmp1);
@@ -1315,12 +1313,30 @@
IRAtom* mkLazyN ( MCEnv* mce,
IRAtom** exprvec, IRType finalVtype, IRCallee* cee )
{
- Int i;
+ Int i;
IRAtom* here;
- IRAtom* curr = definedOfType(Ity_I32);
+ IRAtom* curr;
+ IRType mergeTy;
+ IRType mergeTy64 = True;
+
+ /* Decide on the type of the merge intermediary. If all relevant
+ args are I64, then it's I64. In all other circumstances, use
+ I32. */
for (i = 0; exprvec[i]; i++) {
tl_assert(i < 32);
tl_assert(isOriginalAtom(mce, exprvec[i]));
+ if (cee->mcx_mask & (1<<i))
+ continue;
+ if (typeOfIRExpr(mce->bb->tyenv, exprvec[i]) != Ity_I64)
+ mergeTy64 = False;
+ }
+
+ mergeTy = mergeTy64 ? Ity_I64 : Ity_I32;
+ curr = definedOfType(mergeTy);
+
+ for (i = 0; exprvec[i]; i++) {
+ tl_assert(i < 32);
+ tl_assert(isOriginalAtom(mce, exprvec[i]));
/* Only take notice of this arg if the callee's mc-exclusion
mask does not say it is to be excluded. */
if (cee->mcx_mask & (1<<i)) {
@@ -1330,8 +1346,10 @@
} else {
/* calculate the arg's definedness, and pessimistically merge
it in. */
- here = mkPCastTo( mce, Ity_I32, expr2vbits(mce, exprvec[i]) );
- curr = mkUifU32(mce, here, curr);
+ here = mkPCastTo( mce, mergeTy, expr2vbits(mce, exprvec[i]) );
+ curr = mergeTy64
+ ? mkUifU64(mce, here, curr)
+ : mkUifU32(mce, here, curr);
}
}
return mkPCastTo(mce, finalVtype, curr );
@@ -2520,17 +2538,6 @@
case Iop_Not1:
return vatom;
- /* Neg* really fall under the Add/Sub banner, and as such you
- might think would qualify for the 'expensive add/sub'
- treatment. However, in this case since the implied literal
- is zero (0 - arg), we just do the cheap thing anyway. */
- case Iop_Neg8:
- return mkLeft8(mce, vatom);
- case Iop_Neg16:
- return mkLeft16(mce, vatom);
- case Iop_Neg32:
- return mkLeft32(mce, vatom);
-
default:
ppIROp(op);
VG_(tool_panic)("memcheck:expr2vbits_Unop");
From: <sv...@va...> - 2007-08-25 23:07:49
Author: sewardj
Date: 2007-08-26 00:07:44 +0100 (Sun, 26 Aug 2007)
New Revision: 1780
Log:
Merge from CGTUNE branch:
r1769:
This commit provides a bunch of enhancements to the IR optimiser
(iropt) and to the various backend instruction selectors.
Unfortunately the changes are interrelated and cannot easily be
committed in pieces in any meaningful way. Between them and the
already-committed register allocation enhancements (r1765, r1767)
performance of Memcheck is improved by 0%-10%. Improvements are also
applicable to other tools to lesser extents.
Main changes are:
* Add new IR primops Iop_Left64/32/16/8 and Iop_CmpwNEZ64/32/16/8
which Memcheck uses to express some primitive operations on
definedness (V) bits:
Left(x) = set all bits to the left of the rightmost 1 bit to 1
CmpwNEZ(x) = if x == 0 then 0 else 0xFF...FF
Left and CmpwNEZ are detailed in the Usenix 2005 paper (in which
CmpwNEZ is called PCast). The new primops expose opportunities for
IR optimisation at tree-build time. Prior to this change Memcheck
expressed Left and CmpwNEZ in terms of lower level primitives
(logical or, negation, compares, various casts) which was simpler
but hindered further optimisation.
* Enhance the IR optimiser's tree builder so it can rewrite trees
as they are constructed, according to useful identities, for example:
CmpwNEZ64( Or64 ( CmpwNEZ64(x), y ) ) --> CmpwNEZ64( Or64( x, y ) )
which gets rid of a CmpwNEZ64 operation - a win as they are relatively
expensive. See functions fold_IRExpr_Binop and fold_IRExpr_Unop.
Allowing the tree builder to rewrite trees also makes it possible to
have a single implementation of certain transformation rules which
were previously duplicated in the x86, amd64 and ppc instruction
selectors. For example
32to1(1Uto32(x)) --> x
This simplifies the instruction selectors and gives a central place
to put such IR-level transformations, which is a Good Thing.
* Various minor refinements to the instruction selectors:
- ppc64 generates 32Sto64 into 1 instruction instead of 2
- x86 can now generate movsbl
- x86 handles 64-bit integer Mux0X better for cases typically
arising from Memchecking of FP code
- misc other patterns handled better
Overall these changes are a straight win - vex generates less code,
and does so a bit faster since its register allocator has to chew
through fewer instructions. The main risk is that of correctness:
making Left and CmpwNEZ explicit, and adding rewrite rules for them,
is a substantial change in the way Memcheck deals with undefined value
tracking, and I am concerned to ensure that the changes do not cause
false negatives. I _think_ it's all correct so far.
r1770:
Get rid of Iop_Neg64/32/16/8 as they are no longer used by Memcheck,
and any uses as generated by the front ends are so infrequent that
generating the equivalent Sub(0, ..) is good enough. This gets rid of
quite a few lines of code. Add isel cases for Sub(0, ..) patterns so
that the x86/amd64 backends still generate negl/negq where possible.
r1771:
Handle Left64. Fixes failure on none/tests/x86/insn_sse2.
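[Editor's note] A minimal model of the two primops described above, on 32-bit values only (the function names are illustrative, not the VEX Iop_ identifiers):

```c
#include <stdint.h>

/* Left(x): set the rightmost 1 bit and every bit to its left.
   Computable as x | -x, which is exactly the Or(a, Neg(a)) form
   Memcheck used before these primops existed.  Left(0) == 0. */
static uint32_t left32(uint32_t x)
{
    return x | (uint32_t)(0u - x);
}

/* CmpwNEZ(x): 0 if x == 0, otherwise all ones ("PCast" in the
   Usenix 2005 paper). */
static uint32_t cmpw_nez32(uint32_t x)
{
    return x == 0 ? 0 : 0xFFFFFFFFu;
}
```

Making these explicit as single IR ops (rather than the or/neg/compare expansions) is what lets the tree builder apply rewrites such as CmpwNEZ64(Or64(CmpwNEZ64(x), y)) --> CmpwNEZ64(Or64(x, y)).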
Modified:
trunk/priv/guest-ppc/toIR.c
trunk/priv/guest-x86/toIR.c
trunk/priv/host-amd64/isel.c
trunk/priv/host-ppc/hdefs.c
trunk/priv/host-ppc/isel.c
trunk/priv/host-x86/hdefs.c
trunk/priv/host-x86/isel.c
trunk/priv/ir/irdefs.c
trunk/priv/ir/iropt.c
trunk/priv/main/vex_util.c
trunk/pub/libvex_ir.h
Modified: trunk/priv/guest-ppc/toIR.c
===================================================================
--- trunk/priv/guest-ppc/toIR.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/guest-ppc/toIR.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -783,7 +783,7 @@
op8 == Iop_Or8 || op8 == Iop_And8 || op8 == Iop_Xor8 ||
op8 == Iop_Shl8 || op8 == Iop_Shr8 || op8 == Iop_Sar8 ||
op8 == Iop_CmpEQ8 || op8 == Iop_CmpNE8 ||
- op8 == Iop_Not8 || op8 == Iop_Neg8 );
+ op8 == Iop_Not8 );
adj = ty==Ity_I8 ? 0 : (ty==Ity_I16 ? 1 : (ty==Ity_I32 ? 2 : 3));
return adj + op8;
}
Modified: trunk/priv/guest-x86/toIR.c
===================================================================
--- trunk/priv/guest-x86/toIR.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/guest-x86/toIR.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -684,7 +684,7 @@
|| op8 == Iop_Or8 || op8 == Iop_And8 || op8 == Iop_Xor8
|| op8 == Iop_Shl8 || op8 == Iop_Shr8 || op8 == Iop_Sar8
|| op8 == Iop_CmpEQ8 || op8 == Iop_CmpNE8
- || op8 == Iop_Not8 || op8 == Iop_Neg8);
+ || op8 == Iop_Not8);
adj = ty==Ity_I8 ? 0 : (ty==Ity_I16 ? 1 : 2);
return adj + op8;
}
@@ -2631,7 +2631,7 @@
dst1 = newTemp(ty);
assign(dst0, mkU(ty,0));
assign(src, getIReg(sz,eregOfRM(modrm)));
- assign(dst1, unop(mkSizedOp(ty,Iop_Neg8), mkexpr(src)));
+ assign(dst1, binop(mkSizedOp(ty,Iop_Sub8), mkexpr(dst0), mkexpr(src)));
setFlags_DEP1_DEP2(Iop_Sub8, dst0, src, ty);
putIReg(sz, eregOfRM(modrm), mkexpr(dst1));
DIP("neg%c %s\n", nameISize(sz), nameIReg(sz, eregOfRM(modrm)));
@@ -2693,7 +2693,7 @@
dst1 = newTemp(ty);
assign(dst0, mkU(ty,0));
assign(src, mkexpr(t1));
- assign(dst1, unop(mkSizedOp(ty,Iop_Neg8), mkexpr(src)));
+ assign(dst1, binop(mkSizedOp(ty,Iop_Sub8), mkexpr(dst0), mkexpr(src)));
setFlags_DEP1_DEP2(Iop_Sub8, dst0, src, ty);
storeLE( mkexpr(addr), mkexpr(dst1) );
DIP("neg%c %s\n", nameISize(sz), dis_buf);
Modified: trunk/priv/host-amd64/isel.c
===================================================================
--- trunk/priv/host-amd64/isel.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/host-amd64/isel.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -278,15 +278,22 @@
return toBool(x == y1);
}
-//.. /* Is this a 32-bit zero expression? */
-//..
-//.. static Bool isZero32 ( IRExpr* e )
-//.. {
-//.. return e->tag == Iex_Const
-//.. && e->Iex.Const.con->tag == Ico_U32
-//.. && e->Iex.Const.con->Ico.U32 == 0;
-//.. }
+/* Is this a 64-bit zero expression? */
+static Bool isZeroU64 ( IRExpr* e )
+{
+ return e->tag == Iex_Const
+ && e->Iex.Const.con->tag == Ico_U64
+ && e->Iex.Const.con->Ico.U64 == 0ULL;
+}
+
+static Bool isZeroU32 ( IRExpr* e )
+{
+ return e->tag == Iex_Const
+ && e->Iex.Const.con->tag == Ico_U32
+ && e->Iex.Const.con->Ico.U32 == 0;
+}
+
/* Make a int reg-reg move. */
static AMD64Instr* mk_iMOVsd_RR ( HReg src, HReg dst )
@@ -841,16 +848,17 @@
AMD64AluOp aluOp;
AMD64ShiftOp shOp;
-//..
-//.. /* Pattern: Sub32(0,x) */
-//.. if (e->Iex.Binop.op == Iop_Sub32 && isZero32(e->Iex.Binop.arg1)) {
-//.. HReg dst = newVRegI(env);
-//.. HReg reg = iselIntExpr_R(env, e->Iex.Binop.arg2);
-//.. addInstr(env, mk_iMOVsd_RR(reg,dst));
-//.. addInstr(env, X86Instr_Unary32(Xun_NEG,X86RM_Reg(dst)));
-//.. return dst;
-//.. }
-//..
+ /* Pattern: Sub64(0,x) */
+ /* and: Sub32(0,x) */
+ if ((e->Iex.Binop.op == Iop_Sub64 && isZeroU64(e->Iex.Binop.arg1))
+ || (e->Iex.Binop.op == Iop_Sub32 && isZeroU32(e->Iex.Binop.arg1))) {
+ HReg dst = newVRegI(env);
+ HReg reg = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ addInstr(env, mk_iMOVsd_RR(reg,dst));
+ addInstr(env, AMD64Instr_Unary64(Aun_NEG,dst));
+ return dst;
+ }
+
/* Is it an addition or logical style op? */
switch (e->Iex.Binop.op) {
case Iop_Add8: case Iop_Add16: case Iop_Add32: case Iop_Add64:
@@ -1449,17 +1457,44 @@
AMD64RMI_Reg(tmp), dst));
return dst;
}
- case Iop_Neg8:
- case Iop_Neg16:
- case Iop_Neg32:
- case Iop_Neg64: {
+
+ case Iop_CmpwNEZ64: {
HReg dst = newVRegI(env);
- HReg reg = iselIntExpr_R(env, e->Iex.Unop.arg);
- addInstr(env, mk_iMOVsd_RR(reg,dst));
+ HReg src = iselIntExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, mk_iMOVsd_RR(src,dst));
addInstr(env, AMD64Instr_Unary64(Aun_NEG,dst));
+ addInstr(env, AMD64Instr_Alu64R(Aalu_OR,
+ AMD64RMI_Reg(src), dst));
+ addInstr(env, AMD64Instr_Sh64(Ash_SAR, 63, dst));
return dst;
}
+ case Iop_CmpwNEZ32: {
+ HReg src = newVRegI(env);
+ HReg dst = newVRegI(env);
+ HReg pre = iselIntExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, mk_iMOVsd_RR(pre,src));
+ addInstr(env, AMD64Instr_MovZLQ(src,src));
+ addInstr(env, mk_iMOVsd_RR(src,dst));
+ addInstr(env, AMD64Instr_Unary64(Aun_NEG,dst));
+ addInstr(env, AMD64Instr_Alu64R(Aalu_OR,
+ AMD64RMI_Reg(src), dst));
+ addInstr(env, AMD64Instr_Sh64(Ash_SAR, 63, dst));
+ return dst;
+ }
+
+ case Iop_Left8:
+ case Iop_Left16:
+ case Iop_Left32:
+ case Iop_Left64: {
+ HReg dst = newVRegI(env);
+ HReg src = iselIntExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, mk_iMOVsd_RR(src, dst));
+ addInstr(env, AMD64Instr_Unary64(Aun_NEG, dst));
+ addInstr(env, AMD64Instr_Alu64R(Aalu_OR, AMD64RMI_Reg(src), dst));
+ return dst;
+ }
+
case Iop_V128to32: {
HReg dst = newVRegI(env);
HReg vec = iselVecExpr(env, e->Iex.Unop.arg);
@@ -1965,11 +2000,7 @@
static AMD64CondCode iselCondCode_wrk ( ISelEnv* env, IRExpr* e )
{
MatchInfo mi;
-//.. DECLARE_PATTERN(p_1Uto32_then_32to1);
-//.. DECLARE_PATTERN(p_1Sto32_then_32to1);
- DECLARE_PATTERN(p_1Uto64_then_64to1);
-
vassert(e);
vassert(typeOfIRExpr(env->type_env,e) == Ity_I1);
@@ -2002,30 +2033,6 @@
/* --- patterns rooted at: 64to1 --- */
- /* 64to1(1Uto64(expr1)) ==> expr1 */
- DEFINE_PATTERN( p_1Uto64_then_64to1,
- unop(Iop_64to1, unop(Iop_1Uto64, bind(0))) );
- if (matchIRExpr(&mi,p_1Uto64_then_64to1,e)) {
- IRExpr* expr1 = mi.bindee[0];
- return iselCondCode(env, expr1);
- }
-
-//.. /* 32to1(1Uto32(expr1)) -- the casts are pointless, ignore them */
-//.. DEFINE_PATTERN(p_1Uto32_then_32to1,
-//.. unop(Iop_32to1,unop(Iop_1Uto32,bind(0))));
-//.. if (matchIRExpr(&mi,p_1Uto32_then_32to1,e)) {
-//.. IRExpr* expr1 = mi.bindee[0];
-//.. return iselCondCode(env, expr1);
-//.. }
-//..
-//.. /* 32to1(1Sto32(expr1)) -- the casts are pointless, ignore them */
-//.. DEFINE_PATTERN(p_1Sto32_then_32to1,
-//.. unop(Iop_32to1,unop(Iop_1Sto32,bind(0))));
-//.. if (matchIRExpr(&mi,p_1Sto32_then_32to1,e)) {
-//.. IRExpr* expr1 = mi.bindee[0];
-//.. return iselCondCode(env, expr1);
-//.. }
-
/* 64to1 */
if (e->tag == Iex_Unop && e->Iex.Unop.op == Iop_64to1) {
HReg reg = iselIntExpr_R(env, e->Iex.Unop.arg);
@@ -2168,53 +2175,6 @@
}
}
-//.. /* CmpNE64(1Sto64(b), 0) ==> b */
-//.. {
-//.. DECLARE_PATTERN(p_CmpNE64_1Sto64);
-//.. DEFINE_PATTERN(
-//.. p_CmpNE64_1Sto64,
-//.. binop(Iop_CmpNE64, unop(Iop_1Sto64,bind(0)), mkU64(0)));
-//.. if (matchIRExpr(&mi, p_CmpNE64_1Sto64, e)) {
-//.. return iselCondCode(env, mi.bindee[0]);
-//.. }
-//.. }
-//..
-//.. /* CmpNE64(x, 0) */
-//.. {
-//.. DECLARE_PATTERN(p_CmpNE64_x_zero);
-//.. DEFINE_PATTERN(
-//.. p_CmpNE64_x_zero,
-//.. binop(Iop_CmpNE64, bind(0), mkU64(0)) );
-//.. if (matchIRExpr(&mi, p_CmpNE64_x_zero, e)) {
-//.. HReg hi, lo;
-//.. IRExpr* x = mi.bindee[0];
-//.. HReg tmp = newVRegI(env);
-//.. iselInt64Expr( &hi, &lo, env, x );
-//.. addInstr(env, mk_iMOVsd_RR(hi, tmp));
-//.. addInstr(env, X86Instr_Alu32R(Xalu_OR,X86RMI_Reg(lo), tmp));
-//.. return Xcc_NZ;
-//.. }
-//.. }
-//..
-//.. /* CmpNE64 */
-//.. if (e->tag == Iex_Binop
-//.. && e->Iex.Binop.op == Iop_CmpNE64) {
-//.. HReg hi1, hi2, lo1, lo2;
-//.. HReg tHi = newVRegI(env);
-//.. HReg tLo = newVRegI(env);
-//.. iselInt64Expr( &hi1, &lo1, env, e->Iex.Binop.arg1 );
-//.. iselInt64Expr( &hi2, &lo2, env, e->Iex.Binop.arg2 );
-//.. addInstr(env, mk_iMOVsd_RR(hi1, tHi));
-//.. addInstr(env, X86Instr_Alu32R(Xalu_XOR,X86RMI_Reg(hi2), tHi));
-//.. addInstr(env, mk_iMOVsd_RR(lo1, tLo));
-//.. addInstr(env, X86Instr_Alu32R(Xalu_XOR,X86RMI_Reg(lo2), tLo));
-//.. addInstr(env, X86Instr_Alu32R(Xalu_OR,X86RMI_Reg(tHi), tLo));
-//.. switch (e->Iex.Binop.op) {
-//.. case Iop_CmpNE64: return Xcc_NZ;
-//.. default: vpanic("iselCondCode(x86): CmpXX64");
-//.. }
-//.. }
-
ppIRExpr(e);
vpanic("iselCondCode(amd64)");
}
Modified: trunk/priv/host-ppc/hdefs.c
===================================================================
--- trunk/priv/host-ppc/hdefs.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/host-ppc/hdefs.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -2706,7 +2706,13 @@
/* srawi (PPC32 p507) */
UInt n = srcR->Prh.Imm.imm16;
vassert(!srcR->Prh.Imm.syned);
- vassert(n > 0 && n < 32);
+ /* In 64-bit mode, we allow right shifts by zero bits
+ as that is a handy way to sign extend the lower 32
+ bits into the upper 32 bits. */
+ if (mode64)
+ vassert(n < 32); /* n >= 0 holds trivially: n is unsigned */
+ else
+ vassert(n > 0 && n < 32);
p = mkFormX(p, 31, r_srcL, r_dst, n, 824, 0);
} else {
/* sraw (PPC32 p506) */
Modified: trunk/priv/host-ppc/isel.c
===================================================================
--- trunk/priv/host-ppc/isel.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/host-ppc/isel.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -1569,8 +1569,7 @@
return r_dst;
}
case Iop_8Sto64:
- case Iop_16Sto64:
- case Iop_32Sto64: {
+ case Iop_16Sto64: {
HReg r_dst = newVRegI(env);
HReg r_src = iselWordExpr_R(env, e->Iex.Unop.arg);
UShort amt = toUShort(op_unop==Iop_8Sto64 ? 56 :
@@ -1584,6 +1583,17 @@
r_dst, r_dst, PPCRH_Imm(False,amt)));
return r_dst;
}
+ case Iop_32Sto64: {
+ HReg r_dst = newVRegI(env);
+ HReg r_src = iselWordExpr_R(env, e->Iex.Unop.arg);
+ vassert(mode64);
+ /* According to the IBM docs, in 64-bit mode, srawi r,r,0
+ sign extends the lower 32 bits into the upper 32 bits. */
+ addInstr(env,
+ PPCInstr_Shft(Pshft_SAR, True/*32bit shift*/,
+ r_dst, r_src, PPCRH_Imm(False,0)));
+ return r_dst;
+ }
case Iop_Not8:
case Iop_Not16:
case Iop_Not32:
@@ -1695,18 +1705,41 @@
addInstr(env, PPCInstr_Unary(op_clz,r_dst,r_src));
return r_dst;
}
- case Iop_Neg8:
- case Iop_Neg16:
- case Iop_Neg32:
- case Iop_Neg64: {
+
+ case Iop_Left8:
+ case Iop_Left32:
+ case Iop_Left64: {
+ HReg r_src, r_dst;
+ if (op_unop == Iop_Left64 && !mode64)
+ goto irreducible;
+ r_dst = newVRegI(env);
+ r_src = iselWordExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, PPCInstr_Unary(Pun_NEG,r_dst,r_src));
+ addInstr(env, PPCInstr_Alu(Palu_OR, r_dst, r_dst, PPCRH_Reg(r_src)));
+ return r_dst;
+ }
+
+ case Iop_CmpwNEZ32: {
HReg r_dst = newVRegI(env);
HReg r_src = iselWordExpr_R(env, e->Iex.Unop.arg);
- if (op_unop == Iop_Neg64 && !mode64)
- goto irreducible;
addInstr(env, PPCInstr_Unary(Pun_NEG,r_dst,r_src));
+ addInstr(env, PPCInstr_Alu(Palu_OR, r_dst, r_dst, PPCRH_Reg(r_src)));
+ addInstr(env, PPCInstr_Shft(Pshft_SAR, True/*32bit shift*/,
+ r_dst, r_dst, PPCRH_Imm(False, 31)));
return r_dst;
}
+ case Iop_CmpwNEZ64: {
+ HReg r_dst = newVRegI(env);
+ HReg r_src = iselWordExpr_R(env, e->Iex.Unop.arg);
+ if (!mode64) goto irreducible;
+ addInstr(env, PPCInstr_Unary(Pun_NEG,r_dst,r_src));
+ addInstr(env, PPCInstr_Alu(Palu_OR, r_dst, r_dst, PPCRH_Reg(r_src)));
+ addInstr(env, PPCInstr_Shft(Pshft_SAR, False/*64bit shift*/,
+ r_dst, r_dst, PPCRH_Imm(False, 63)));
+ return r_dst;
+ }
+
case Iop_V128to32: {
HReg r_aligned16;
HReg dst = newVRegI(env);
@@ -1761,8 +1794,6 @@
case Iop_32to16:
case Iop_64to8:
/* These are no-ops. */
- if (op_unop == Iop_Neg64 && !mode64)
- goto irreducible;
return iselWordExpr_R(env, e->Iex.Unop.arg);
/* ReinterpF64asI64(e) */
@@ -2685,6 +2716,24 @@
if (e->tag == Iex_Unop) {
switch (e->Iex.Unop.op) {
+ /* CmpwNEZ64(e) */
+ case Iop_CmpwNEZ64: {
+ HReg argHi, argLo;
+ HReg tmp1 = newVRegI(env);
+ HReg tmp2 = newVRegI(env);
+ iselInt64Expr(&argHi, &argLo, env, e->Iex.Unop.arg);
+ /* tmp1 = argHi | argLo */
+ addInstr(env, PPCInstr_Alu(Palu_OR, tmp1, argHi, PPCRH_Reg(argLo)));
+ /* tmp2 = (tmp1 | -tmp1) >>s 31 */
+ addInstr(env, PPCInstr_Unary(Pun_NEG,tmp2,tmp1));
+ addInstr(env, PPCInstr_Alu(Palu_OR, tmp2, tmp2, PPCRH_Reg(tmp1)));
+ addInstr(env, PPCInstr_Shft(Pshft_SAR, True/*32bit shift*/,
+ tmp2, tmp2, PPCRH_Imm(False, 31)));
+ *rHi = tmp2;
+ *rLo = tmp2; /* yes, really tmp2 */
+ return;
+ }
+
/* 32Sto64(e) */
case Iop_32Sto64: {
HReg tHi = newVRegI(env);
@@ -2754,22 +2803,6 @@
*rLo = tLo;
return;
}
-
- case Iop_Neg64: {
- HReg yLo, yHi;
- HReg zero = newVRegI(env);
- HReg tLo = newVRegI(env);
- HReg tHi = newVRegI(env);
- iselInt64Expr(&yHi, &yLo, env, e->Iex.Unop.arg);
- addInstr(env, PPCInstr_LI(zero, 0, False/*mode32*/));
- addInstr(env, PPCInstr_AddSubC( False/*sub*/, True/*set carry*/,
- tLo, zero, yLo));
- addInstr(env, PPCInstr_AddSubC( False/*sub*/, False/*read carry*/,
- tHi, zero, yHi));
- *rHi = tHi;
- *rLo = tLo;
- return;
- }
/* ReinterpF64asI64(e) */
/* Given an IEEE754 double, produce an I64 with the same bit
Modified: trunk/priv/host-x86/hdefs.c
===================================================================
--- trunk/priv/host-x86/hdefs.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/host-x86/hdefs.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -1612,7 +1612,7 @@
/* The given instruction reads the specified vreg exactly once, and
that vreg is currently located at the given spill offset. If
- possible, return a variant of the instruction which instead
+ possible, convert the instruction to one which instead
references the spill slot directly. */
X86Instr* directReload_X86( X86Instr* i, HReg vreg, Short spill_off )
@@ -2407,6 +2407,13 @@
p = doAMode_M(p, i->Xin.LoadEX.dst, i->Xin.LoadEX.src);
goto done;
}
+ if (i->Xin.LoadEX.szSmall == 1 && i->Xin.LoadEX.syned) {
+ /* movsbl */
+ *p++ = 0x0F;
+ *p++ = 0xBE;
+ p = doAMode_M(p, i->Xin.LoadEX.dst, i->Xin.LoadEX.src);
+ goto done;
+ }
break;
case Xin_Set32:
Modified: trunk/priv/host-x86/isel.c
===================================================================
--- trunk/priv/host-x86/isel.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/host-x86/isel.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -120,7 +120,21 @@
&& e->Iex.Const.con->Ico.U8 == 0;
}
+static Bool isZeroU32 ( IRExpr* e )
+{
+ return e->tag == Iex_Const
+ && e->Iex.Const.con->tag == Ico_U32
+ && e->Iex.Const.con->Ico.U32 == 0;
+}
+static Bool isZeroU64 ( IRExpr* e )
+{
+ return e->tag == Iex_Const
+ && e->Iex.Const.con->tag == Ico_U64
+ && e->Iex.Const.con->Ico.U64 == 0ULL;
+}
+
+
/*---------------------------------------------------------*/
/*--- ISelEnv ---*/
/*---------------------------------------------------------*/
@@ -730,7 +744,6 @@
static HReg iselIntExpr_R_wrk ( ISelEnv* env, IRExpr* e )
{
MatchInfo mi;
- DECLARE_PATTERN(p_32to1_then_1Uto8);
IRType ty = typeOfIRExpr(env->type_env,e);
vassert(ty == Ity_I32 || ty == Ity_I16 || ty == Ity_I8);
@@ -799,6 +812,15 @@
X86AluOp aluOp;
X86ShiftOp shOp;
+ /* Pattern: Sub32(0,x) */
+ if (e->Iex.Binop.op == Iop_Sub32 && isZeroU32(e->Iex.Binop.arg1)) {
+ HReg dst = newVRegI(env);
+ HReg reg = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ addInstr(env, mk_iMOVsd_RR(reg,dst));
+ addInstr(env, X86Instr_Unary32(Xun_NEG,dst));
+ return dst;
+ }
+
/* Is it an addition or logical style op? */
switch (e->Iex.Binop.op) {
case Iop_Add8: case Iop_Add16: case Iop_Add32:
@@ -1011,21 +1033,53 @@
/* --------- UNARY OP --------- */
case Iex_Unop: {
+
/* 1Uto8(32to1(expr32)) */
- DEFINE_PATTERN(p_32to1_then_1Uto8,
- unop(Iop_1Uto8,unop(Iop_32to1,bind(0))));
- if (matchIRExpr(&mi,p_32to1_then_1Uto8,e)) {
- IRExpr* expr32 = mi.bindee[0];
- HReg dst = newVRegI(env);
- HReg src = iselIntExpr_R(env, expr32);
- addInstr(env, mk_iMOVsd_RR(src,dst) );
- addInstr(env, X86Instr_Alu32R(Xalu_AND,
- X86RMI_Imm(1), dst));
- return dst;
+ if (e->Iex.Unop.op == Iop_1Uto8) {
+ DECLARE_PATTERN(p_32to1_then_1Uto8);
+ DEFINE_PATTERN(p_32to1_then_1Uto8,
+ unop(Iop_1Uto8,unop(Iop_32to1,bind(0))));
+ if (matchIRExpr(&mi,p_32to1_then_1Uto8,e)) {
+ IRExpr* expr32 = mi.bindee[0];
+ HReg dst = newVRegI(env);
+ HReg src = iselIntExpr_R(env, expr32);
+ addInstr(env, mk_iMOVsd_RR(src,dst) );
+ addInstr(env, X86Instr_Alu32R(Xalu_AND,
+ X86RMI_Imm(1), dst));
+ return dst;
+ }
}
+ /* 8Uto32(LDle(expr32)) */
+ if (e->Iex.Unop.op == Iop_8Uto32) {
+ DECLARE_PATTERN(p_LDle8_then_8Uto32);
+ DEFINE_PATTERN(p_LDle8_then_8Uto32,
+ unop(Iop_8Uto32,
+ IRExpr_Load(Iend_LE,Ity_I8,bind(0))) );
+ if (matchIRExpr(&mi,p_LDle8_then_8Uto32,e)) {
+ HReg dst = newVRegI(env);
+ X86AMode* amode = iselIntExpr_AMode ( env, mi.bindee[0] );
+ addInstr(env, X86Instr_LoadEX(1,False,amode,dst));
+ return dst;
+ }
+ }
+
+ /* 8Sto32(LDle(expr32)) */
+ if (e->Iex.Unop.op == Iop_8Sto32) {
+ DECLARE_PATTERN(p_LDle8_then_8Sto32);
+ DEFINE_PATTERN(p_LDle8_then_8Sto32,
+ unop(Iop_8Sto32,
+ IRExpr_Load(Iend_LE,Ity_I8,bind(0))) );
+ if (matchIRExpr(&mi,p_LDle8_then_8Sto32,e)) {
+ HReg dst = newVRegI(env);
+ X86AMode* amode = iselIntExpr_AMode ( env, mi.bindee[0] );
+ addInstr(env, X86Instr_LoadEX(1,True,amode,dst));
+ return dst;
+ }
+ }
+
/* 16Uto32(LDle(expr32)) */
- {
+ if (e->Iex.Unop.op == Iop_16Uto32) {
DECLARE_PATTERN(p_LDle16_then_16Uto32);
DEFINE_PATTERN(p_LDle16_then_16Uto32,
unop(Iop_16Uto32,
@@ -1038,6 +1092,34 @@
}
}
+ /* 8Uto32(GET:I8) */
+ if (e->Iex.Unop.op == Iop_8Uto32) {
+ if (e->Iex.Unop.arg->tag == Iex_Get) {
+ HReg dst;
+ X86AMode* amode;
+ vassert(e->Iex.Unop.arg->Iex.Get.ty == Ity_I8);
+ dst = newVRegI(env);
+ amode = X86AMode_IR(e->Iex.Unop.arg->Iex.Get.offset,
+ hregX86_EBP());
+ addInstr(env, X86Instr_LoadEX(1,False,amode,dst));
+ return dst;
+ }
+ }
+
+ /* 16Uto32(GET:I16) */
+ if (e->Iex.Unop.op == Iop_16Uto32) {
+ if (e->Iex.Unop.arg->tag == Iex_Get) {
+ HReg dst;
+ X86AMode* amode;
+ vassert(e->Iex.Unop.arg->Iex.Get.ty == Ity_I16);
+ dst = newVRegI(env);
+ amode = X86AMode_IR(e->Iex.Unop.arg->Iex.Get.offset,
+ hregX86_EBP());
+ addInstr(env, X86Instr_LoadEX(2,False,amode,dst));
+ return dst;
+ }
+ }
+
switch (e->Iex.Unop.op) {
case Iop_8Uto16:
case Iop_8Uto32:
@@ -1128,15 +1210,27 @@
X86RMI_Reg(tmp), dst));
return dst;
}
- case Iop_Neg8:
- case Iop_Neg16:
- case Iop_Neg32: {
+
+ case Iop_CmpwNEZ32: {
HReg dst = newVRegI(env);
- HReg reg = iselIntExpr_R(env, e->Iex.Unop.arg);
- addInstr(env, mk_iMOVsd_RR(reg,dst));
+ HReg src = iselIntExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, mk_iMOVsd_RR(src,dst));
addInstr(env, X86Instr_Unary32(Xun_NEG,dst));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(src), dst));
+ addInstr(env, X86Instr_Sh32(Xsh_SAR, 31, dst));
return dst;
}
+ case Iop_Left8:
+ case Iop_Left16:
+ case Iop_Left32: {
+ HReg dst = newVRegI(env);
+ HReg src = iselIntExpr_R(env, e->Iex.Unop.arg);
+ addInstr(env, mk_iMOVsd_RR(src, dst));
+ addInstr(env, X86Instr_Unary32(Xun_NEG, dst));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR, X86RMI_Reg(src), dst));
+ return dst;
+ }
case Iop_V128to32: {
HReg dst = newVRegI(env);
@@ -1547,9 +1641,6 @@
static X86CondCode iselCondCode_wrk ( ISelEnv* env, IRExpr* e )
{
MatchInfo mi;
- DECLARE_PATTERN(p_32to1);
- DECLARE_PATTERN(p_1Uto32_then_32to1);
- DECLARE_PATTERN(p_1Sto32_then_32to1);
vassert(e);
vassert(typeOfIRExpr(env->type_env,e) == Ity_I1);
@@ -1582,28 +1673,9 @@
/* --- patterns rooted at: 32to1 --- */
- /* 32to1(1Uto32(e)) ==> e */
- DEFINE_PATTERN(p_1Uto32_then_32to1,
- unop(Iop_32to1,unop(Iop_1Uto32,bind(0))));
- if (matchIRExpr(&mi,p_1Uto32_then_32to1,e)) {
- IRExpr* expr1 = mi.bindee[0];
- return iselCondCode(env, expr1);
- }
-
- /* 32to1(1Sto32(e)) ==> e */
- DEFINE_PATTERN(p_1Sto32_then_32to1,
- unop(Iop_32to1,unop(Iop_1Sto32,bind(0))));
- if (matchIRExpr(&mi,p_1Sto32_then_32to1,e)) {
- IRExpr* expr1 = mi.bindee[0];
- return iselCondCode(env, expr1);
- }
-
- /* 32to1(expr32) */
- DEFINE_PATTERN(p_32to1,
- unop(Iop_32to1,bind(0))
- );
- if (matchIRExpr(&mi,p_32to1,e)) {
- X86RM* rm = iselIntExpr_RM(env, mi.bindee[0]);
+ if (e->tag == Iex_Unop
+ && e->Iex.Unop.op == Iop_32to1) {
+ X86RM* rm = iselIntExpr_RM(env, e->Iex.Unop.arg);
addInstr(env, X86Instr_Test32(1,rm));
return Xcc_NZ;
}
@@ -1630,16 +1702,6 @@
/* --- patterns rooted at: CmpNEZ32 --- */
- /* CmpNEZ32(1Sto32(b)) ==> b */
- {
- DECLARE_PATTERN(p_CmpNEZ32_1Sto32);
- DEFINE_PATTERN(p_CmpNEZ32_1Sto32,
- unop(Iop_CmpNEZ32, unop(Iop_1Sto32,bind(0))));
- if (matchIRExpr(&mi, p_CmpNEZ32_1Sto32, e)) {
- return iselCondCode(env, mi.bindee[0]);
- }
- }
-
/* CmpNEZ32(And32(x,y)) */
{
DECLARE_PATTERN(p_CmpNEZ32_And32);
@@ -1670,6 +1732,16 @@
}
}
+ /* CmpNEZ32(GET(..):I32) */
+ if (e->tag == Iex_Unop
+ && e->Iex.Unop.op == Iop_CmpNEZ32
+ && e->Iex.Unop.arg->tag == Iex_Get) {
+ X86AMode* am = X86AMode_IR(e->Iex.Unop.arg->Iex.Get.offset,
+ hregX86_EBP());
+ addInstr(env, X86Instr_Alu32M(Xalu_CMP, X86RI_Imm(0), am));
+ return Xcc_NZ;
+ }
+
/* CmpNEZ32(x) */
if (e->tag == Iex_Unop
&& e->Iex.Unop.op == Iop_CmpNEZ32) {
@@ -1681,17 +1753,6 @@
/* --- patterns rooted at: CmpNEZ64 --- */
- /* CmpNEZ64(1Sto64(b)) ==> b */
- {
- DECLARE_PATTERN(p_CmpNEZ64_1Sto64);
- DEFINE_PATTERN(
- p_CmpNEZ64_1Sto64,
- unop(Iop_CmpNEZ64, unop(Iop_1Sto64,bind(0))));
- if (matchIRExpr(&mi, p_CmpNEZ64_1Sto64, e)) {
- return iselCondCode(env, mi.bindee[0]);
- }
- }
-
/* CmpNEZ64(Or64(x,y)) */
{
DECLARE_PATTERN(p_CmpNEZ64_Or64);
@@ -1839,6 +1900,7 @@
/* DO NOT CALL THIS DIRECTLY ! */
static void iselInt64Expr_wrk ( HReg* rHi, HReg* rLo, ISelEnv* env, IRExpr* e )
{
+ MatchInfo mi;
HWord fn = 0; /* helper fn for most SIMD64 stuff */
vassert(e);
vassert(typeOfIRExpr(env->type_env,e) == Ity_I64);
@@ -1915,18 +1977,59 @@
return;
}
- /* 64-bit Mux0X */
+ /* 64-bit Mux0X: Mux0X(g, expr, 0:I64) */
+ if (e->tag == Iex_Mux0X && isZeroU64(e->Iex.Mux0X.exprX)) {
+ X86RM* r8;
+ HReg e0Lo, e0Hi;
+ HReg tLo = newVRegI(env);
+ HReg tHi = newVRegI(env);
+ X86AMode* zero_esp = X86AMode_IR(0, hregX86_ESP());
+ iselInt64Expr(&e0Hi, &e0Lo, env, e->Iex.Mux0X.expr0);
+ r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ addInstr(env, mk_iMOVsd_RR( e0Hi, tHi ) );
+ addInstr(env, mk_iMOVsd_RR( e0Lo, tLo ) );
+ addInstr(env, X86Instr_Push(X86RMI_Imm(0)));
+ addInstr(env, X86Instr_Test32(0xFF, r8));
+ addInstr(env, X86Instr_CMov32(Xcc_NZ,X86RM_Mem(zero_esp),tHi));
+ addInstr(env, X86Instr_CMov32(Xcc_NZ,X86RM_Mem(zero_esp),tLo));
+ add_to_esp(env, 4);
+ *rHi = tHi;
+ *rLo = tLo;
+ return;
+ }
+ /* 64-bit Mux0X: Mux0X(g, 0:I64, expr) */
+ if (e->tag == Iex_Mux0X && isZeroU64(e->Iex.Mux0X.expr0)) {
+ X86RM* r8;
+ HReg e0Lo, e0Hi;
+ HReg tLo = newVRegI(env);
+ HReg tHi = newVRegI(env);
+ X86AMode* zero_esp = X86AMode_IR(0, hregX86_ESP());
+ iselInt64Expr(&e0Hi, &e0Lo, env, e->Iex.Mux0X.exprX);
+ r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ addInstr(env, mk_iMOVsd_RR( e0Hi, tHi ) );
+ addInstr(env, mk_iMOVsd_RR( e0Lo, tLo ) );
+ addInstr(env, X86Instr_Push(X86RMI_Imm(0)));
+ addInstr(env, X86Instr_Test32(0xFF, r8));
+ addInstr(env, X86Instr_CMov32(Xcc_Z,X86RM_Mem(zero_esp),tHi));
+ addInstr(env, X86Instr_CMov32(Xcc_Z,X86RM_Mem(zero_esp),tLo));
+ add_to_esp(env, 4);
+ *rHi = tHi;
+ *rLo = tLo;
+ return;
+ }
+
+ /* 64-bit Mux0X: Mux0X(g, expr, expr) */
if (e->tag == Iex_Mux0X) {
- X86RM* rm8;
- HReg e0Lo, e0Hi, eXLo, eXHi;
- HReg tLo = newVRegI(env);
- HReg tHi = newVRegI(env);
+ X86RM* r8;
+ HReg e0Lo, e0Hi, eXLo, eXHi;
+ HReg tLo = newVRegI(env);
+ HReg tHi = newVRegI(env);
iselInt64Expr(&e0Hi, &e0Lo, env, e->Iex.Mux0X.expr0);
iselInt64Expr(&eXHi, &eXLo, env, e->Iex.Mux0X.exprX);
addInstr(env, mk_iMOVsd_RR(eXHi, tHi));
addInstr(env, mk_iMOVsd_RR(eXLo, tLo));
- rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
- addInstr(env, X86Instr_Test32(0xFF, rm8));
+ r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ addInstr(env, X86Instr_Test32(0xFF, r8));
/* This assumes the first cmov32 doesn't trash the condition
codes, so they are still available for the second cmov32 */
addInstr(env, X86Instr_CMov32(Xcc_Z,X86RM_Reg(e0Hi),tHi));
@@ -1992,10 +2095,10 @@
: e->Iex.Binop.op==Iop_And64 ? Xalu_AND
: Xalu_XOR;
iselInt64Expr(&xHi, &xLo, env, e->Iex.Binop.arg1);
+ iselInt64Expr(&yHi, &yLo, env, e->Iex.Binop.arg2);
addInstr(env, mk_iMOVsd_RR(xHi, tHi));
+ addInstr(env, X86Instr_Alu32R(op, X86RMI_Reg(yHi), tHi));
addInstr(env, mk_iMOVsd_RR(xLo, tLo));
- iselInt64Expr(&yHi, &yLo, env, e->Iex.Binop.arg2);
- addInstr(env, X86Instr_Alu32R(op, X86RMI_Reg(yHi), tHi));
addInstr(env, X86Instr_Alu32R(op, X86RMI_Reg(yLo), tLo));
*rHi = tHi;
*rLo = tLo;
@@ -2398,8 +2501,8 @@
return;
}
- /* Neg64(e) */
- case Iop_Neg64: {
+ /* Left64(e) */
+ case Iop_Left64: {
HReg yLo, yHi;
HReg tLo = newVRegI(env);
HReg tHi = newVRegI(env);
@@ -2411,11 +2514,75 @@
/* tHi = 0 - yHi - carry */
addInstr(env, X86Instr_Alu32R(Xalu_MOV, X86RMI_Imm(0), tHi));
addInstr(env, X86Instr_Alu32R(Xalu_SBB, X86RMI_Reg(yHi), tHi));
+ /* So now we have tHi:tLo = -arg. To finish off, or 'arg'
+ back in, so as to give the final result
+ tHi:tLo = arg | -arg. */
+ addInstr(env, X86Instr_Alu32R(Xalu_OR, X86RMI_Reg(yLo), tLo));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR, X86RMI_Reg(yHi), tHi));
*rHi = tHi;
*rLo = tLo;
return;
}
+ /* --- patterns rooted at: CmpwNEZ64 --- */
+
+ /* CmpwNEZ64(e) */
+ case Iop_CmpwNEZ64: {
+
+ DECLARE_PATTERN(p_CmpwNEZ64_Or64);
+ DEFINE_PATTERN(p_CmpwNEZ64_Or64,
+ unop(Iop_CmpwNEZ64,binop(Iop_Or64,bind(0),bind(1))));
+ if (matchIRExpr(&mi, p_CmpwNEZ64_Or64, e)) {
+ /* CmpwNEZ64(Or64(x,y)) */
+ HReg xHi,xLo,yHi,yLo;
+ HReg xBoth = newVRegI(env);
+ HReg merged = newVRegI(env);
+ HReg tmp2 = newVRegI(env);
+
+ iselInt64Expr(&xHi,&xLo, env, mi.bindee[0]);
+ addInstr(env, mk_iMOVsd_RR(xHi,xBoth));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(xLo),xBoth));
+
+ iselInt64Expr(&yHi,&yLo, env, mi.bindee[1]);
+ addInstr(env, mk_iMOVsd_RR(yHi,merged));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(yLo),merged));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(xBoth),merged));
+
+ /* tmp2 = (merged | -merged) >>s 31 */
+ addInstr(env, mk_iMOVsd_RR(merged,tmp2));
+ addInstr(env, X86Instr_Unary32(Xun_NEG,tmp2));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(merged), tmp2));
+ addInstr(env, X86Instr_Sh32(Xsh_SAR, 31, tmp2));
+ *rHi = tmp2;
+ *rLo = tmp2;
+ return;
+ } else {
+ /* CmpwNEZ64(e) */
+ HReg srcLo, srcHi;
+ HReg tmp1 = newVRegI(env);
+ HReg tmp2 = newVRegI(env);
+ /* srcHi:srcLo = arg */
+ iselInt64Expr(&srcHi, &srcLo, env, e->Iex.Unop.arg);
+ /* tmp1 = srcHi | srcLo */
+ addInstr(env, mk_iMOVsd_RR(srcHi,tmp1));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(srcLo), tmp1));
+ /* tmp2 = (tmp1 | -tmp1) >>s 31 */
+ addInstr(env, mk_iMOVsd_RR(tmp1,tmp2));
+ addInstr(env, X86Instr_Unary32(Xun_NEG,tmp2));
+ addInstr(env, X86Instr_Alu32R(Xalu_OR,
+ X86RMI_Reg(tmp1), tmp2));
+ addInstr(env, X86Instr_Sh32(Xsh_SAR, 31, tmp2));
+ *rHi = tmp2;
+ *rLo = tmp2;
+ return;
+ }
+ }
+
/* ReinterpF64asI64(e) */
/* Given an IEEE754 double, produce an I64 with the same bit
pattern. */
@@ -2829,12 +2996,12 @@
if (e->tag == Iex_Mux0X) {
if (ty == Ity_F64
&& typeOfIRExpr(env->type_env,e->Iex.Mux0X.cond) == Ity_I8) {
- X86RM* rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
- HReg rX = iselDblExpr(env, e->Iex.Mux0X.exprX);
- HReg r0 = iselDblExpr(env, e->Iex.Mux0X.expr0);
- HReg dst = newVRegF(env);
+ X86RM* r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ HReg rX = iselDblExpr(env, e->Iex.Mux0X.exprX);
+ HReg r0 = iselDblExpr(env, e->Iex.Mux0X.expr0);
+ HReg dst = newVRegF(env);
addInstr(env, X86Instr_FpUnary(Xfp_MOV,rX,dst));
- addInstr(env, X86Instr_Test32(0xFF, rm8));
+ addInstr(env, X86Instr_Test32(0xFF, r8));
addInstr(env, X86Instr_FpCMov(Xcc_Z,r0,dst));
return dst;
}
@@ -3350,12 +3517,12 @@
} /* if (e->tag == Iex_Binop) */
if (e->tag == Iex_Mux0X) {
- X86RM* rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
- HReg rX = iselVecExpr(env, e->Iex.Mux0X.exprX);
- HReg r0 = iselVecExpr(env, e->Iex.Mux0X.expr0);
- HReg dst = newVRegV(env);
+ X86RM* r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ HReg rX = iselVecExpr(env, e->Iex.Mux0X.exprX);
+ HReg r0 = iselVecExpr(env, e->Iex.Mux0X.expr0);
+ HReg dst = newVRegV(env);
addInstr(env, mk_vMOVsd_RR(rX,dst));
- addInstr(env, X86Instr_Test32(0xFF, rm8));
+ addInstr(env, X86Instr_Test32(0xFF, r8));
addInstr(env, X86Instr_SseCMov(Xcc_Z,r0,dst));
return dst;
}
Modified: trunk/priv/ir/irdefs.c
===================================================================
--- trunk/priv/ir/irdefs.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/ir/irdefs.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -203,17 +203,20 @@
case Iop_CmpNEZ32: vex_printf("CmpNEZ32"); return;
case Iop_CmpNEZ64: vex_printf("CmpNEZ64"); return;
+ case Iop_CmpwNEZ32: vex_printf("CmpwNEZ32"); return;
+ case Iop_CmpwNEZ64: vex_printf("CmpwNEZ64"); return;
+
+ case Iop_Left8: vex_printf("Left8"); return;
+ case Iop_Left16: vex_printf("Left16"); return;
+ case Iop_Left32: vex_printf("Left32"); return;
+ case Iop_Left64: vex_printf("Left64"); return;
+
case Iop_CmpORD32U: vex_printf("CmpORD32U"); return;
case Iop_CmpORD32S: vex_printf("CmpORD32S"); return;
case Iop_CmpORD64U: vex_printf("CmpORD64U"); return;
case Iop_CmpORD64S: vex_printf("CmpORD64S"); return;
- case Iop_Neg8: vex_printf("Neg8"); return;
- case Iop_Neg16: vex_printf("Neg16"); return;
- case Iop_Neg32: vex_printf("Neg32"); return;
- case Iop_Neg64: vex_printf("Neg64"); return;
-
case Iop_DivU32: vex_printf("DivU32"); return;
case Iop_DivS32: vex_printf("DivS32"); return;
case Iop_DivU64: vex_printf("DivU64"); return;
@@ -1517,14 +1520,13 @@
case Iop_Shl64: case Iop_Shr64: case Iop_Sar64:
BINARY(Ity_I64,Ity_I8, Ity_I64);
- case Iop_Not8: case Iop_Neg8:
+ case Iop_Not8:
UNARY(Ity_I8, Ity_I8);
- case Iop_Not16: case Iop_Neg16:
+ case Iop_Not16:
UNARY(Ity_I16, Ity_I16);
- case Iop_Not32: case Iop_Neg32:
+ case Iop_Not32:
UNARY(Ity_I32, Ity_I32);
- case Iop_Neg64:
case Iop_Not64:
case Iop_CmpNEZ32x2: case Iop_CmpNEZ16x4: case Iop_CmpNEZ8x8:
UNARY(Ity_I64, Ity_I64);
@@ -1547,6 +1549,11 @@
case Iop_CmpNEZ32: UNARY_COMPARISON(Ity_I32);
case Iop_CmpNEZ64: UNARY_COMPARISON(Ity_I64);
+ case Iop_Left8: UNARY(Ity_I8, Ity_I8);
+ case Iop_Left16: UNARY(Ity_I16,Ity_I16);
+ case Iop_CmpwNEZ32: case Iop_Left32: UNARY(Ity_I32,Ity_I32);
+ case Iop_CmpwNEZ64: case Iop_Left64: UNARY(Ity_I64,Ity_I64);
+
case Iop_MullU8: case Iop_MullS8:
BINARY(Ity_I8,Ity_I8, Ity_I16);
case Iop_MullU16: case Iop_MullS16:
Modified: trunk/priv/ir/iropt.c
===================================================================
--- trunk/priv/ir/iropt.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/ir/iropt.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -1011,19 +1011,6 @@
notBool(e->Iex.Unop.arg->Iex.Const.con->Ico.U1)));
break;
- case Iop_Neg64:
- e2 = IRExpr_Const(IRConst_U64(
- - (e->Iex.Unop.arg->Iex.Const.con->Ico.U64)));
- break;
- case Iop_Neg32:
- e2 = IRExpr_Const(IRConst_U32(
- - (e->Iex.Unop.arg->Iex.Const.con->Ico.U32)));
- break;
- case Iop_Neg8:
- e2 = IRExpr_Const(IRConst_U8(toUChar(
- - (e->Iex.Unop.arg->Iex.Const.con->Ico.U8))));
- break;
-
case Iop_64to8: {
ULong w64 = e->Iex.Unop.arg->Iex.Const.con->Ico.U64;
w64 &= 0xFFULL;
@@ -1072,6 +1059,39 @@
)));
break;
+ case Iop_CmpwNEZ32: {
+ UInt w32 = e->Iex.Unop.arg->Iex.Const.con->Ico.U32;
+ if (w32 == 0)
+ e2 = IRExpr_Const(IRConst_U32( 0 ));
+ else
+ e2 = IRExpr_Const(IRConst_U32( 0xFFFFFFFF ));
+ break;
+ }
+ case Iop_CmpwNEZ64: {
+ ULong w64 = e->Iex.Unop.arg->Iex.Const.con->Ico.U64;
+ if (w64 == 0)
+ e2 = IRExpr_Const(IRConst_U64( 0 ));
+ else
+ e2 = IRExpr_Const(IRConst_U64( 0xFFFFFFFFFFFFFFFFULL ));
+ break;
+ }
+
+ case Iop_Left32: {
+ UInt u32 = e->Iex.Unop.arg->Iex.Const.con->Ico.U32;
+ Int s32 = (Int)(u32 & 0xFFFFFFFF);
+ s32 = (s32 | (-s32));
+ e2 = IRExpr_Const( IRConst_U32( (UInt)s32 ));
+ break;
+ }
+
+ case Iop_Left64: {
+ ULong u64 = e->Iex.Unop.arg->Iex.Const.con->Ico.U64;
+ Long s64 = (Long)u64;
+ s64 = (s64 | (-s64));
+ e2 = IRExpr_Const( IRConst_U64( (ULong)s64 ));
+ break;
+ }
+
default:
goto unhandled;
}
@@ -1465,13 +1485,20 @@
e2 = IRExpr_Const(IRConst_U32(0));
} else
- /* And32(0,x) ==> 0 */
- if (e->Iex.Binop.op == Iop_And32
+ /* And32/Shl32(0,x) ==> 0 */
+ if ((e->Iex.Binop.op == Iop_And32 || e->Iex.Binop.op == Iop_Shl32)
&& e->Iex.Binop.arg1->tag == Iex_Const
&& e->Iex.Binop.arg1->Iex.Const.con->Ico.U32 == 0) {
e2 = IRExpr_Const(IRConst_U32(0));
} else
+ /* Or8(0,x) ==> x */
+ if (e->Iex.Binop.op == Iop_Or8
+ && e->Iex.Binop.arg1->tag == Iex_Const
+ && e->Iex.Binop.arg1->Iex.Const.con->Ico.U8 == 0) {
+ e2 = e->Iex.Binop.arg2;
+ } else
+
/* Or32(0,x) ==> x */
if (e->Iex.Binop.op == Iop_Or32
&& e->Iex.Binop.arg1->tag == Iex_Const
@@ -3698,6 +3725,94 @@
'single-shot', so once a binding is used, it is marked as no longer
available, by setting its .bindee field to NULL. */
+static inline Bool is_Unop ( IRExpr* e, IROp op ) {
+ return e->tag == Iex_Unop && e->Iex.Unop.op == op;
+}
+static inline Bool is_Binop ( IRExpr* e, IROp op ) {
+ return e->tag == Iex_Binop && e->Iex.Binop.op == op;
+}
+
+static IRExpr* fold_IRExpr_Binop ( IROp op, IRExpr* a1, IRExpr* a2 )
+{
+ switch (op) {
+ case Iop_Or32:
+ /* Or32( CmpwNEZ32(x), CmpwNEZ32(y) ) --> CmpwNEZ32( Or32( x, y ) ) */
+ if (is_Unop(a1, Iop_CmpwNEZ32) && is_Unop(a2, Iop_CmpwNEZ32))
+ return IRExpr_Unop( Iop_CmpwNEZ32,
+ IRExpr_Binop( Iop_Or32, a1->Iex.Unop.arg,
+ a2->Iex.Unop.arg ) );
+ break;
+ default:
+ break;
+ }
+ /* no reduction rule applies */
+ return IRExpr_Binop( op, a1, a2 );
+}
+
+static IRExpr* fold_IRExpr_Unop ( IROp op, IRExpr* aa )
+{
+ switch (op) {
+ case Iop_CmpwNEZ64:
+ /* CmpwNEZ64( Or64 ( CmpwNEZ64(x), y ) ) --> CmpwNEZ64( Or64( x, y ) ) */
+ if (is_Binop(aa, Iop_Or64)
+ && is_Unop(aa->Iex.Binop.arg1, Iop_CmpwNEZ64))
+ return fold_IRExpr_Unop(
+ Iop_CmpwNEZ64,
+ IRExpr_Binop(Iop_Or64,
+ aa->Iex.Binop.arg1->Iex.Unop.arg,
+ aa->Iex.Binop.arg2));
+ /* CmpwNEZ64( Or64 ( x, CmpwNEZ64(y) ) ) --> CmpwNEZ64( Or64( x, y ) ) */
+ if (is_Binop(aa, Iop_Or64)
+ && is_Unop(aa->Iex.Binop.arg2, Iop_CmpwNEZ64))
+ return fold_IRExpr_Unop(
+ Iop_CmpwNEZ64,
+ IRExpr_Binop(Iop_Or64,
+ aa->Iex.Binop.arg1,
+ aa->Iex.Binop.arg2->Iex.Unop.arg));
+ break;
+ case Iop_CmpNEZ64:
+ /* CmpNEZ64( Left64(x) ) --> CmpNEZ64(x) */
+ if (is_Unop(aa, Iop_Left64))
+ return IRExpr_Unop(Iop_CmpNEZ64, aa->Iex.Unop.arg);
+ break;
+ case Iop_CmpwNEZ32:
+ /* CmpwNEZ32( CmpwNEZ32 ( x ) ) --> CmpwNEZ32 ( x ) */
+ if (is_Unop(aa, Iop_CmpwNEZ32))
+ return IRExpr_Unop( Iop_CmpwNEZ32, aa->Iex.Unop.arg );
+ break;
+ case Iop_CmpNEZ32:
+ /* CmpNEZ32( Left32(x) ) --> CmpNEZ32(x) */
+ if (is_Unop(aa, Iop_Left32))
+ return IRExpr_Unop(Iop_CmpNEZ32, aa->Iex.Unop.arg);
+ break;
+ case Iop_Left32:
+ /* Left32( Left32(x) ) --> Left32(x) */
+ if (is_Unop(aa, Iop_Left32))
+ return IRExpr_Unop( Iop_Left32, aa->Iex.Unop.arg );
+ break;
+ case Iop_32to1:
+ /* 32to1( 1Uto32 ( x ) ) --> x */
+ if (is_Unop(aa, Iop_1Uto32))
+ return aa->Iex.Unop.arg;
+ /* 32to1( CmpwNEZ32 ( x )) --> CmpNEZ32(x) */
+ if (is_Unop(aa, Iop_CmpwNEZ32))
+ return IRExpr_Unop( Iop_CmpNEZ32, aa->Iex.Unop.arg );
+ break;
+ case Iop_64to1:
+ /* 64to1( 1Uto64 ( x ) ) --> x */
+ if (is_Unop(aa, Iop_1Uto64))
+ return aa->Iex.Unop.arg;
+ /* 64to1( CmpwNEZ64 ( x )) --> CmpNEZ64(x) */
+ if (is_Unop(aa, Iop_CmpwNEZ64))
+ return IRExpr_Unop( Iop_CmpNEZ64, aa->Iex.Unop.arg );
+ break;
+ default:
+ break;
+ }
+ /* no reduction rule applies */
+ return IRExpr_Unop( op, aa );
+}
+
static IRExpr* atbSubst_Expr ( ATmpInfo* env, IRExpr* e )
{
IRExpr* e2;
@@ -3740,13 +3855,13 @@
atbSubst_Expr(env, e->Iex.Triop.arg3)
);
case Iex_Binop:
- return IRExpr_Binop(
+ return fold_IRExpr_Binop(
e->Iex.Binop.op,
atbSubst_Expr(env, e->Iex.Binop.arg1),
atbSubst_Expr(env, e->Iex.Binop.arg2)
);
case Iex_Unop:
- return IRExpr_Unop(
+ return fold_IRExpr_Unop(
e->Iex.Unop.op,
atbSubst_Expr(env, e->Iex.Unop.arg)
);
Modified: trunk/priv/main/vex_util.c
===================================================================
--- trunk/priv/main/vex_util.c 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/priv/main/vex_util.c 2007-08-25 23:07:44 UTC (rev 1780)
@@ -441,6 +441,10 @@
PAD(len1); PUT('0'); PUT('x'); PUTSTR(intbuf); PAD(len3);
break;
}
+ case '%': {
+ PUT('%');
+ break;
+ }
default:
/* no idea what it is. Print the format literally and
move on. */
Modified: trunk/pub/libvex_ir.h
===================================================================
--- trunk/pub/libvex_ir.h 2007-08-25 21:29:03 UTC (rev 1779)
+++ trunk/pub/libvex_ir.h 2007-08-25 23:07:44 UTC (rev 1780)
@@ -422,7 +422,6 @@
Iop_CmpNE8, Iop_CmpNE16, Iop_CmpNE32, Iop_CmpNE64,
/* Tags for unary ops */
Iop_Not8, Iop_Not16, Iop_Not32, Iop_Not64,
- Iop_Neg8, Iop_Neg16, Iop_Neg32, Iop_Neg64,
/* -- Ordering not important after here. -- */
@@ -445,6 +444,8 @@
/* As a sop to Valgrind-Memcheck, the following are useful. */
Iop_CmpNEZ8, Iop_CmpNEZ16, Iop_CmpNEZ32, Iop_CmpNEZ64,
+ Iop_CmpwNEZ32, Iop_CmpwNEZ64, /* all-0s -> all-0s; other -> all-1s */
+ Iop_Left8, Iop_Left16, Iop_Left32, Iop_Left64, /* \x -> x | -x */
/* PowerPC-style 3-way integer comparisons. Without them it is
difficult to simulate PPC efficiently.
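As a quick sanity check on the reduction rules introduced in this patch, the bit-level identities behind CmpwNEZ32 and Left32 can be verified on sample values. This is an illustrative Python stand-in for the IROps (helper names are made up here, not part of VEX):

```python
MASK32 = 0xFFFFFFFF

def cmpw_nez32(x):
    # CmpwNEZ32: all-0s -> all-0s; anything else -> all-1s
    return MASK32 if (x & MASK32) != 0 else 0

def left32(x):
    # Left32: \x -> x | -x  (sets every bit from the lowest set bit upward)
    return (x | (-x)) & MASK32

samples = [0, 1, 2, 0x80000000, 0xFFFFFFFF, 0x12345678]
for x in samples:
    for y in samples:
        # Or32( CmpwNEZ32(x), CmpwNEZ32(y) ) == CmpwNEZ32( Or32(x, y) )
        assert (cmpw_nez32(x) | cmpw_nez32(y)) == cmpw_nez32(x | y)
    # CmpwNEZ32( CmpwNEZ32(x) ) == CmpwNEZ32(x)
    assert cmpw_nez32(cmpw_nez32(x)) == cmpw_nez32(x)
    # Left32( Left32(x) ) == Left32(x)
    assert left32(left32(x)) == left32(x)
    # CmpNEZ32( Left32(x) ) == CmpNEZ32(x): Left32 preserves nonzero-ness
    assert (left32(x) != 0) == (x != 0)
```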
From: <sv...@va...> - 2007-08-25 21:29:03
Author: sewardj
Date: 2007-08-25 22:29:03 +0100 (Sat, 25 Aug 2007)
New Revision: 1779
Log:
Merge, from CGTUNE branch:
r1768:
Cosmetic (non-functional) changes associated with r1767.
r1767:
Add a second spill-code-avoidance optimisation, which could be called
'directReload' for lack of a better name.
If an instruction reads exactly one vreg which is currently in a spill
slot, and this is the last use of that vreg, see if the instruction can be
converted into one that reads directly from the spill slot. This is
clearly only possible for x86 and amd64 targets, since ppc is a
load-store architecture. So, for example,
orl %vreg, %dst
where %vreg is in a spill slot, and this is its last use, would
previously be converted to
movl $spill-offset(%ebp), %tmp
orl %tmp, %dst
whereas now it becomes
orl $spill-offset(%ebp), %dst
This not only avoids an instruction, it eliminates the need for a
reload temporary (%tmp in this example) and so potentially further
reduces spilling.
Implementation is in two parts: an architecture independent part, in
reg_alloc2.c, which finds candidate instructions, and a host dependent
function (directReload_ARCH) for each arch supporting the
optimisation. The directReload_ function does the instruction form
conversion, when possible. Currently only x86 hosts are supported.
As a side effect, change the form of the X86_Test32 instruction from
reg-only to reg/mem so it can participate in such transformations.
This gives a code size reduction of 0.6% for perf/bz2 on x86 memcheck,
but tends to be more effective for long blocks of x86 FP code.
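The candidate test described above (the instruction reads exactly one vreg, that vreg is currently spilled, and this is its last use) can be modelled in a few lines. This is a hypothetical Python sketch of the selection logic only, not the reg_alloc2.c code:

```python
def direct_reload_candidate(reg_usage, spilled, dead_before, ii):
    # reg_usage: list of (vreg, mode) pairs for instruction ii, where mode
    # is 'read', 'write' or 'modify'; spilled: set of vregs currently in a
    # spill slot; dead_before: vreg -> instruction index after its last use.
    if len(reg_usage) > 2:
        return None                      # instruction mentions too many regs
    cand = None
    nreads = 0
    for vreg, mode in reg_usage:
        if mode != 'read':
            continue
        nreads += 1
        # spilled, and this instruction is its last use?
        if vreg in spilled and dead_before[vreg] == ii + 1 and cand is None:
            cand = vreg
    return cand if nreads == 1 else None

# orl %v1, %dst where %v1 is spilled and dies here: convertible
assert direct_reload_candidate([('v1', 'read'), ('v2', 'modify')],
                               {'v1'}, {'v1': 6, 'v2': 9}, 5) == 'v1'
# two reads: not convertible
assert direct_reload_candidate([('v1', 'read'), ('v2', 'read')],
                               {'v1'}, {'v1': 6, 'v2': 9}, 5) is None
```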
Modified:
trunk/priv/host-generic/h_generic_regs.h
trunk/priv/host-generic/reg_alloc2.c
trunk/priv/host-x86/hdefs.c
trunk/priv/host-x86/hdefs.h
trunk/priv/host-x86/isel.c
trunk/priv/main/vex_main.c
Modified: trunk/priv/host-generic/h_generic_regs.h
===================================================================
--- trunk/priv/host-generic/h_generic_regs.h 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/host-generic/h_generic_regs.h 2007-08-25 21:29:03 UTC (rev 1779)
@@ -266,9 +266,10 @@
void (*mapRegs) (HRegRemap*, HInstr*, Bool),
/* Return an insn to spill/restore a real reg to a spill slot
- offset. */
+ offset. And optionally a function to do direct reloads. */
HInstr* (*genSpill) ( HReg, Int, Bool ),
HInstr* (*genReload) ( HReg, Int, Bool ),
+ HInstr* (*directReload) ( HInstr*, HReg, Short ),
Int guest_sizeB,
/* For debug printing only. */
Modified: trunk/priv/host-generic/reg_alloc2.c
===================================================================
--- trunk/priv/host-generic/reg_alloc2.c 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/host-generic/reg_alloc2.c 2007-08-25 21:29:03 UTC (rev 1779)
@@ -323,10 +323,14 @@
/* Apply a reg-reg mapping to an insn. */
void (*mapRegs) ( HRegRemap*, HInstr*, Bool ),
- /* Return an insn to spill/restore a real reg to a spill slot
- byte offset. */
+ /* Return an insn to spill/restore a real reg to a spill slot byte
+ offset. Also (optionally) a 'directReload' function, which
+ attempts to replace a given instruction by one which reads
+ directly from a specified spill slot. May be NULL, in which
+ case the optimisation is not attempted. */
HInstr* (*genSpill) ( HReg, Int, Bool ),
HInstr* (*genReload) ( HReg, Int, Bool ),
+ HInstr* (*directReload) ( HInstr*, HReg, Short ),
Int guest_sizeB,
/* For debug printing only. */
@@ -1162,6 +1166,76 @@
initHRegRemap(&remap);
+ /* ------------ BEGIN directReload optimisation ----------- */
+
+ /* If the instruction reads exactly one vreg which is currently
+ in a spill slot, and this is the last use of that vreg, see if we
+ can convert the instruction into one which reads directly from the
+ spill slot. This is clearly only possible for x86 and amd64
+ targets, since ppc is a load-store architecture. If
+ successful, replace instrs_in->arr[ii] with this new
+ instruction, and recompute its reg usage, so that the change
+ is invisible to the standard-case handling that follows. */
+
+ if (directReload && reg_usage.n_used <= 2) {
+ Bool debug_direct_reload = True && False;
+ HReg cand = INVALID_HREG;
+ Int nreads = 0;
+ Short spilloff = 0;
+
+ for (j = 0; j < reg_usage.n_used; j++) {
+
+ vreg = reg_usage.hreg[j];
+
+ if (!hregIsVirtual(vreg))
+ continue;
+
+ if (reg_usage.mode[j] == HRmRead) {
+ nreads++;
+ m = hregNumber(vreg);
+ vassert(IS_VALID_VREGNO(m));
+ k = vreg_state[m];
+ if (!IS_VALID_RREGNO(k)) {
+ /* ok, it is spilled. Now, is this its last use? */
+ vassert(vreg_lrs[m].dead_before >= ii+1);
+ if (vreg_lrs[m].dead_before == ii+1
+ && cand == INVALID_HREG) {
+ spilloff = vreg_lrs[m].spill_offset;
+ cand = vreg;
+ }
+ }
+ }
+ }
+
+ if (nreads == 1 && cand != INVALID_HREG) {
+ HInstr* reloaded;
+ if (reg_usage.n_used == 2)
+ vassert(reg_usage.hreg[0] != reg_usage.hreg[1]);
+
+ reloaded = directReload ( instrs_in->arr[ii], cand, spilloff );
+ if (debug_direct_reload && !reloaded) {
+ vex_printf("[%3d] ", spilloff); ppHReg(cand); vex_printf(" ");
+ ppInstr(instrs_in->arr[ii], mode64);
+ }
+ if (reloaded) {
+ /* Update info about the insn, so it looks as if it had
+ been in this form all along. */
+ instrs_in->arr[ii] = reloaded;
+ (*getRegUsage)( &reg_usage, instrs_in->arr[ii], mode64 );
+ if (debug_direct_reload) {
+ vex_printf(" --> ");
+ ppInstr(reloaded, mode64);
+ }
+ }
+
+ if (debug_direct_reload && !reloaded)
+ vex_printf("\n");
+ }
+
+ }
+
+ /* ------------ END directReload optimisation ------------ */
+
/* for each reg mentioned in the insn ... */
for (j = 0; j < reg_usage.n_used; j++) {
Modified: trunk/priv/host-x86/hdefs.c
===================================================================
--- trunk/priv/host-x86/hdefs.c 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/host-x86/hdefs.c 2007-08-25 21:29:03 UTC (rev 1779)
@@ -598,7 +598,7 @@
i->Xin.Sh32.dst = dst;
return i;
}
-X86Instr* X86Instr_Test32 ( UInt imm32, HReg dst ) {
+X86Instr* X86Instr_Test32 ( UInt imm32, X86RM* dst ) {
X86Instr* i = LibVEX_Alloc(sizeof(X86Instr));
i->tag = Xin_Test32;
i->Xin.Test32.imm32 = imm32;
@@ -908,7 +908,7 @@
return;
case Xin_Test32:
vex_printf("testl $%d,", (Int)i->Xin.Test32.imm32);
- ppHRegX86(i->Xin.Test32.dst);
+ ppX86RM(i->Xin.Test32.dst);
return;
case Xin_Unary32:
vex_printf("%sl ", showX86UnaryOp(i->Xin.Unary32.op));
@@ -1173,7 +1173,7 @@
addHRegUse(u, HRmRead, hregX86_ECX());
return;
case Xin_Test32:
- addHRegUse(u, HRmRead, i->Xin.Test32.dst);
+ addRegUsage_X86RM(u, i->Xin.Test32.dst, HRmRead);
return;
case Xin_Unary32:
addHRegUse(u, HRmModify, i->Xin.Unary32.dst);
@@ -1402,7 +1402,7 @@
mapReg(m, &i->Xin.Sh32.dst);
return;
case Xin_Test32:
- mapReg(m, &i->Xin.Test32.dst);
+ mapRegs_X86RM(m, i->Xin.Test32.dst);
return;
case Xin_Unary32:
mapReg(m, &i->Xin.Unary32.dst);
@@ -1610,7 +1610,83 @@
}
}
+/* The given instruction reads the specified vreg exactly once, and
+ that vreg is currently located at the given spill offset. If
+ possible, return a variant of the instruction which instead
+ references the spill slot directly. */
+X86Instr* directReload_X86( X86Instr* i, HReg vreg, Short spill_off )
+{
+ vassert(spill_off >= 0 && spill_off < 10000); /* let's say */
+
+ /* Deal with form: src=RMI_Reg, dst=Reg where src == vreg
+ Convert to: src=RMI_Mem, dst=Reg
+ */
+ if (i->tag == Xin_Alu32R
+ && (i->Xin.Alu32R.op == Xalu_MOV || i->Xin.Alu32R.op == Xalu_OR
+ || i->Xin.Alu32R.op == Xalu_XOR)
+ && i->Xin.Alu32R.src->tag == Xrmi_Reg
+ && i->Xin.Alu32R.src->Xrmi.Reg.reg == vreg) {
+ vassert(i->Xin.Alu32R.dst != vreg);
+ return X86Instr_Alu32R(
+ i->Xin.Alu32R.op,
+ X86RMI_Mem( X86AMode_IR( spill_off, hregX86_EBP())),
+ i->Xin.Alu32R.dst
+ );
+ }
+
+ /* Deal with form: src=RMI_Imm, dst=Reg where dst == vreg
+ Convert to: src=RI_Imm, dst=Mem
+ */
+ if (i->tag == Xin_Alu32R
+ && (i->Xin.Alu32R.op == Xalu_CMP)
+ && i->Xin.Alu32R.src->tag == Xrmi_Imm
+ && i->Xin.Alu32R.dst == vreg) {
+ return X86Instr_Alu32M(
+ i->Xin.Alu32R.op,
+ X86RI_Imm( i->Xin.Alu32R.src->Xrmi.Imm.imm32 ),
+ X86AMode_IR( spill_off, hregX86_EBP())
+ );
+ }
+
+ /* Deal with form: Push(RMI_Reg)
+ Convert to: Push(RMI_Mem)
+ */
+ if (i->tag == Xin_Push
+ && i->Xin.Push.src->tag == Xrmi_Reg
+ && i->Xin.Push.src->Xrmi.Reg.reg == vreg) {
+ return X86Instr_Push(
+ X86RMI_Mem( X86AMode_IR( spill_off, hregX86_EBP()))
+ );
+ }
+
+ /* Deal with form: CMov32(src=RM_Reg, dst) where vreg == src
+ Convert to CMov32(RM_Mem, dst) */
+ if (i->tag == Xin_CMov32
+ && i->Xin.CMov32.src->tag == Xrm_Reg
+ && i->Xin.CMov32.src->Xrm.Reg.reg == vreg) {
+ vassert(i->Xin.CMov32.dst != vreg);
+ return X86Instr_CMov32(
+ i->Xin.CMov32.cond,
+ X86RM_Mem( X86AMode_IR( spill_off, hregX86_EBP() )),
+ i->Xin.CMov32.dst
+ );
+ }
+
+ /* Deal with form: Test32(imm,RM_Reg vreg) -> Test32(imm,amode) */
+ if (i->tag == Xin_Test32
+ && i->Xin.Test32.dst->tag == Xrm_Reg
+ && i->Xin.Test32.dst->Xrm.Reg.reg == vreg) {
+ return X86Instr_Test32(
+ i->Xin.Test32.imm32,
+ X86RM_Mem( X86AMode_IR( spill_off, hregX86_EBP() ) )
+ );
+ }
+
+ return NULL;
+}
+
+
/* --------- The x86 assembler (bleh.) --------- */
static UChar iregNo ( HReg r )
@@ -2010,6 +2086,7 @@
switch (i->Xin.Alu32M.op) {
case Xalu_ADD: opc = 0x01; subopc_imm = 0; break;
case Xalu_SUB: opc = 0x29; subopc_imm = 5; break;
+ case Xalu_CMP: opc = 0x39; subopc_imm = 7; break;
default: goto bad;
}
switch (i->Xin.Alu32M.src->tag) {
@@ -2054,11 +2131,19 @@
goto done;
case Xin_Test32:
- /* testl $imm32, %reg */
- *p++ = 0xF7;
- p = doAMode_R(p, fake(0), i->Xin.Test32.dst);
- p = emit32(p, i->Xin.Test32.imm32);
- goto done;
+ if (i->Xin.Test32.dst->tag == Xrm_Reg) {
+ /* testl $imm32, %reg */
+ *p++ = 0xF7;
+ p = doAMode_R(p, fake(0), i->Xin.Test32.dst->Xrm.Reg.reg);
+ p = emit32(p, i->Xin.Test32.imm32);
+ goto done;
+ } else {
+ /* testl $imm32, amode */
+ *p++ = 0xF7;
+ p = doAMode_M(p, fake(0), i->Xin.Test32.dst->Xrm.Mem.am);
+ p = emit32(p, i->Xin.Test32.imm32);
+ goto done;
+ }
case Xin_Unary32:
if (i->Xin.Unary32.op == Xun_NOT) {
Modified: trunk/priv/host-x86/hdefs.h
===================================================================
--- trunk/priv/host-x86/hdefs.h 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/host-x86/hdefs.h 2007-08-25 21:29:03 UTC (rev 1779)
@@ -351,7 +351,7 @@
Xin_Alu32R, /* 32-bit mov/arith/logical, dst=REG */
Xin_Alu32M, /* 32-bit mov/arith/logical, dst=MEM */
Xin_Sh32, /* 32-bit shift/rotate, dst=REG */
- Xin_Test32, /* 32-bit test of REG against imm32 (AND, set
+ Xin_Test32, /* 32-bit test of REG or MEM against imm32 (AND, set
flags, discard result) */
Xin_Unary32, /* 32-bit not and neg */
Xin_Lea32, /* 32-bit compute EA into a reg */
@@ -413,8 +413,8 @@
HReg dst;
} Sh32;
struct {
- UInt imm32;
- HReg dst; /* not written, only read */
+ UInt imm32;
+ X86RM* dst; /* not written, only read */
} Test32;
/* Not and Neg */
struct {
@@ -624,7 +624,7 @@
extern X86Instr* X86Instr_Lea32 ( X86AMode* am, HReg dst );
extern X86Instr* X86Instr_Sh32 ( X86ShiftOp, UInt, HReg );
-extern X86Instr* X86Instr_Test32 ( UInt imm32, HReg dst );
+extern X86Instr* X86Instr_Test32 ( UInt imm32, X86RM* dst );
extern X86Instr* X86Instr_MulL ( Bool syned, X86RM* );
extern X86Instr* X86Instr_Div ( Bool syned, X86RM* );
extern X86Instr* X86Instr_Sh3232 ( X86ShiftOp, UInt amt, HReg src, HReg dst );
@@ -672,6 +672,8 @@
Bool, void* dispatch );
extern X86Instr* genSpill_X86 ( HReg rreg, Int offset, Bool );
extern X86Instr* genReload_X86 ( HReg rreg, Int offset, Bool );
+extern X86Instr* directReload_X86 ( X86Instr* i,
+ HReg vreg, Short spill_off );
extern void getAllocableRegs_X86 ( Int*, HReg** );
extern HInstrArray* iselSB_X86 ( IRSB*, VexArch,
VexArchInfo*,
Modified: trunk/priv/host-x86/isel.c
===================================================================
--- trunk/priv/host-x86/isel.c 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/host-x86/isel.c 2007-08-25 21:29:03 UTC (rev 1779)
@@ -113,6 +113,12 @@
return IRExpr_Binder(binder);
}
+static Bool isZeroU8 ( IRExpr* e )
+{
+ return e->tag == Iex_Const
+ && e->Iex.Const.con->tag == Ico_U8
+ && e->Iex.Const.con->Ico.U8 == 0;
+}
/*---------------------------------------------------------*/
@@ -1248,12 +1254,12 @@
case Iex_Mux0X: {
if ((ty == Ity_I32 || ty == Ity_I16 || ty == Ity_I8)
&& typeOfIRExpr(env->type_env,e->Iex.Mux0X.cond) == Ity_I8) {
- HReg r8;
- HReg rX = iselIntExpr_R(env, e->Iex.Mux0X.exprX);
- X86RM* r0 = iselIntExpr_RM(env, e->Iex.Mux0X.expr0);
- HReg dst = newVRegI(env);
+ X86RM* r8;
+ HReg rX = iselIntExpr_R(env, e->Iex.Mux0X.exprX);
+ X86RM* r0 = iselIntExpr_RM(env, e->Iex.Mux0X.expr0);
+ HReg dst = newVRegI(env);
addInstr(env, mk_iMOVsd_RR(rX,dst));
- r8 = iselIntExpr_R(env, e->Iex.Mux0X.cond);
+ r8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
addInstr(env, X86Instr_Test32(0xFF, r8));
addInstr(env, X86Instr_CMov32(Xcc_Z,r0,dst));
return dst;
@@ -1552,7 +1558,7 @@
if (e->tag == Iex_RdTmp) {
HReg r32 = lookupIRTemp(env, e->Iex.RdTmp.tmp);
/* Test32 doesn't modify r32; so this is OK. */
- addInstr(env, X86Instr_Test32(1,r32));
+ addInstr(env, X86Instr_Test32(1,X86RM_Reg(r32)));
return Xcc_NZ;
}
@@ -1597,8 +1603,8 @@
unop(Iop_32to1,bind(0))
);
if (matchIRExpr(&mi,p_32to1,e)) {
- HReg r = iselIntExpr_R(env, mi.bindee[0]);
- addInstr(env, X86Instr_Test32(1,r));
+ X86RM* rm = iselIntExpr_RM(env, mi.bindee[0]);
+ addInstr(env, X86Instr_Test32(1,rm));
return Xcc_NZ;
}
@@ -1607,8 +1613,8 @@
/* CmpNEZ8(x) */
if (e->tag == Iex_Unop
&& e->Iex.Unop.op == Iop_CmpNEZ8) {
- HReg r = iselIntExpr_R(env, e->Iex.Unop.arg);
- addInstr(env, X86Instr_Test32(0xFF,r));
+ X86RM* rm = iselIntExpr_RM(env, e->Iex.Unop.arg);
+ addInstr(env, X86Instr_Test32(0xFF,rm));
return Xcc_NZ;
}
@@ -1617,8 +1623,8 @@
/* CmpNEZ16(x) */
if (e->tag == Iex_Unop
&& e->Iex.Unop.op == Iop_CmpNEZ16) {
- HReg r = iselIntExpr_R(env, e->Iex.Unop.arg);
- addInstr(env, X86Instr_Test32(0xFFFF,r));
+ X86RM* rm = iselIntExpr_RM(env, e->Iex.Unop.arg);
+ addInstr(env, X86Instr_Test32(0xFFFF,rm));
return Xcc_NZ;
}
@@ -1721,16 +1727,26 @@
if (e->tag == Iex_Binop
&& (e->Iex.Binop.op == Iop_CmpEQ8
|| e->Iex.Binop.op == Iop_CmpNE8)) {
- HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
- X86RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
- HReg r = newVRegI(env);
- addInstr(env, mk_iMOVsd_RR(r1,r));
- addInstr(env, X86Instr_Alu32R(Xalu_XOR,rmi2,r));
- addInstr(env, X86Instr_Test32(0xFF,r));
- switch (e->Iex.Binop.op) {
- case Iop_CmpEQ8: return Xcc_Z;
- case Iop_CmpNE8: return Xcc_NZ;
- default: vpanic("iselCondCode(x86): CmpXX8");
+ if (isZeroU8(e->Iex.Binop.arg2)) {
+ HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ addInstr(env, X86Instr_Test32(0xFF,X86RM_Reg(r1)));
+ switch (e->Iex.Binop.op) {
+ case Iop_CmpEQ8: return Xcc_Z;
+ case Iop_CmpNE8: return Xcc_NZ;
+ default: vpanic("iselCondCode(x86): CmpXX8(expr,0:I8)");
+ }
+ } else {
+ HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ X86RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
+ HReg r = newVRegI(env);
+ addInstr(env, mk_iMOVsd_RR(r1,r));
+ addInstr(env, X86Instr_Alu32R(Xalu_XOR,rmi2,r));
+ addInstr(env, X86Instr_Test32(0xFF,X86RM_Reg(r)));
+ switch (e->Iex.Binop.op) {
+ case Iop_CmpEQ8: return Xcc_Z;
+ case Iop_CmpNE8: return Xcc_NZ;
+ default: vpanic("iselCondCode(x86): CmpXX8(expr,expr)");
+ }
}
}
@@ -1743,7 +1759,7 @@
HReg r = newVRegI(env);
addInstr(env, mk_iMOVsd_RR(r1,r));
addInstr(env, X86Instr_Alu32R(Xalu_XOR,rmi2,r));
- addInstr(env, X86Instr_Test32(0xFFFF,r));
+ addInstr(env, X86Instr_Test32(0xFFFF,X86RM_Reg(r)));
switch (e->Iex.Binop.op) {
case Iop_CmpEQ16: return Xcc_Z;
case Iop_CmpNE16: return Xcc_NZ;
@@ -1901,15 +1917,16 @@
/* 64-bit Mux0X */
if (e->tag == Iex_Mux0X) {
- HReg e0Lo, e0Hi, eXLo, eXHi, r8;
- HReg tLo = newVRegI(env);
- HReg tHi = newVRegI(env);
+ X86RM* rm8;
+ HReg e0Lo, e0Hi, eXLo, eXHi;
+ HReg tLo = newVRegI(env);
+ HReg tHi = newVRegI(env);
iselInt64Expr(&e0Hi, &e0Lo, env, e->Iex.Mux0X.expr0);
iselInt64Expr(&eXHi, &eXLo, env, e->Iex.Mux0X.exprX);
addInstr(env, mk_iMOVsd_RR(eXHi, tHi));
addInstr(env, mk_iMOVsd_RR(eXLo, tLo));
- r8 = iselIntExpr_R(env, e->Iex.Mux0X.cond);
- addInstr(env, X86Instr_Test32(0xFF, r8));
+ rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ addInstr(env, X86Instr_Test32(0xFF, rm8));
/* This assumes the first cmov32 doesn't trash the condition
codes, so they are still available for the second cmov32 */
addInstr(env, X86Instr_CMov32(Xcc_Z,X86RM_Reg(e0Hi),tHi));
@@ -2047,7 +2064,7 @@
and those regs are legitimately modifiable. */
addInstr(env, X86Instr_Sh3232(Xsh_SHL, 0/*%cl*/, tLo, tHi));
addInstr(env, X86Instr_Sh32(Xsh_SHL, 0/*%cl*/, tLo));
- addInstr(env, X86Instr_Test32(32, hregX86_ECX()));
+ addInstr(env, X86Instr_Test32(32, X86RM_Reg(hregX86_ECX())));
addInstr(env, X86Instr_CMov32(Xcc_NZ, X86RM_Reg(tLo), tHi));
addInstr(env, X86Instr_Alu32R(Xalu_MOV, X86RMI_Imm(0), tTemp));
addInstr(env, X86Instr_CMov32(Xcc_NZ, X86RM_Reg(tTemp), tLo));
@@ -2089,7 +2106,7 @@
and those regs are legitimately modifiable. */
addInstr(env, X86Instr_Sh3232(Xsh_SHR, 0/*%cl*/, tHi, tLo));
addInstr(env, X86Instr_Sh32(Xsh_SHR, 0/*%cl*/, tHi));
- addInstr(env, X86Instr_Test32(32, hregX86_ECX()));
+ addInstr(env, X86Instr_Test32(32, X86RM_Reg(hregX86_ECX())));
addInstr(env, X86Instr_CMov32(Xcc_NZ, X86RM_Reg(tHi), tLo));
addInstr(env, X86Instr_Alu32R(Xalu_MOV, X86RMI_Imm(0), tTemp));
addInstr(env, X86Instr_CMov32(Xcc_NZ, X86RM_Reg(tTemp), tHi));
@@ -2812,12 +2829,12 @@
if (e->tag == Iex_Mux0X) {
if (ty == Ity_F64
&& typeOfIRExpr(env->type_env,e->Iex.Mux0X.cond) == Ity_I8) {
- HReg r8 = iselIntExpr_R(env, e->Iex.Mux0X.cond);
- HReg rX = iselDblExpr(env, e->Iex.Mux0X.exprX);
- HReg r0 = iselDblExpr(env, e->Iex.Mux0X.expr0);
- HReg dst = newVRegF(env);
+ X86RM* rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ HReg rX = iselDblExpr(env, e->Iex.Mux0X.exprX);
+ HReg r0 = iselDblExpr(env, e->Iex.Mux0X.expr0);
+ HReg dst = newVRegF(env);
addInstr(env, X86Instr_FpUnary(Xfp_MOV,rX,dst));
- addInstr(env, X86Instr_Test32(0xFF, r8));
+ addInstr(env, X86Instr_Test32(0xFF, rm8));
addInstr(env, X86Instr_FpCMov(Xcc_Z,r0,dst));
return dst;
}
@@ -3333,12 +3350,12 @@
} /* if (e->tag == Iex_Binop) */
if (e->tag == Iex_Mux0X) {
- HReg r8 = iselIntExpr_R(env, e->Iex.Mux0X.cond);
- HReg rX = iselVecExpr(env, e->Iex.Mux0X.exprX);
- HReg r0 = iselVecExpr(env, e->Iex.Mux0X.expr0);
- HReg dst = newVRegV(env);
+ X86RM* rm8 = iselIntExpr_RM(env, e->Iex.Mux0X.cond);
+ HReg rX = iselVecExpr(env, e->Iex.Mux0X.exprX);
+ HReg r0 = iselVecExpr(env, e->Iex.Mux0X.expr0);
+ HReg dst = newVRegV(env);
addInstr(env, mk_vMOVsd_RR(rX,dst));
- addInstr(env, X86Instr_Test32(0xFF, r8));
+ addInstr(env, X86Instr_Test32(0xFF, rm8));
addInstr(env, X86Instr_SseCMov(Xcc_Z,r0,dst));
return dst;
}
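For reference, the Mux0X lowering touched in the hunks above always follows the same pattern: copy exprX into the destination, test the low 8 bits of the condition, and conditionally move expr0 in on the Z flag. A small Python sketch of those selection semantics (illustrative only):

```python
def mux0x(cond8, expr0, exprX):
    # Matches the lowering above: dst starts as exprX; the
    # 'testl $0xFF, cond' / cmov-on-Z pair moves expr0 in when the
    # condition's low byte is zero.
    dst = exprX
    if (cond8 & 0xFF) == 0:
        dst = expr0
    return dst

assert mux0x(0, "zero-case", "nonzero-case") == "zero-case"
assert mux0x(1, "zero-case", "nonzero-case") == "nonzero-case"
assert mux0x(0x100, "zero-case", "nonzero-case") == "zero-case"  # only low byte tested
```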
Modified: trunk/priv/main/vex_main.c
===================================================================
--- trunk/priv/main/vex_main.c 2007-08-25 21:11:33 UTC (rev 1778)
+++ trunk/priv/main/vex_main.c 2007-08-25 21:29:03 UTC (rev 1779)
@@ -186,17 +186,18 @@
from the target instruction set. */
HReg* available_real_regs;
Int n_available_real_regs;
- Bool (*isMove) ( HInstr*, HReg*, HReg* );
- void (*getRegUsage) ( HRegUsage*, HInstr*, Bool );
- void (*mapRegs) ( HRegRemap*, HInstr*, Bool );
- HInstr* (*genSpill) ( HReg, Int, Bool );
- HInstr* (*genReload) ( HReg, Int, Bool );
- void (*ppInstr) ( HInstr*, Bool );
- void (*ppReg) ( HReg );
- HInstrArray* (*iselSB) ( IRSB*, VexArch, VexArchInfo*,
- VexAbiInfo* );
- Int (*emit) ( UChar*, Int, HInstr*, Bool, void* );
- IRExpr* (*specHelper) ( HChar*, IRExpr** );
+ Bool (*isMove) ( HInstr*, HReg*, HReg* );
+ void (*getRegUsage) ( HRegUsage*, HInstr*, Bool );
+ void (*mapRegs) ( HRegRemap*, HInstr*, Bool );
+ HInstr* (*genSpill) ( HReg, Int, Bool );
+ HInstr* (*genReload) ( HReg, Int, Bool );
+ HInstr* (*directReload) ( HInstr*, HReg, Short );
+ void (*ppInstr) ( HInstr*, Bool );
+ void (*ppReg) ( HReg );
+ HInstrArray* (*iselSB) ( IRSB*, VexArch, VexArchInfo*,
+ VexAbiInfo* );
+ Int (*emit) ( UChar*, Int, HInstr*, Bool, void* );
+ IRExpr* (*specHelper) ( HChar*, IRExpr** );
Bool (*preciseMemExnsFn) ( Int, Int );
DisOneInstrFn disInstrFn;
@@ -221,6 +222,7 @@
mapRegs = NULL;
genSpill = NULL;
genReload = NULL;
+ directReload = NULL;
ppInstr = NULL;
ppReg = NULL;
iselSB = NULL;
@@ -246,18 +248,19 @@
switch (vta->arch_host) {
case VexArchX86:
- mode64 = False;
+ mode64 = False;
getAllocableRegs_X86 ( &n_available_real_regs,
&available_real_regs );
- isMove = (Bool(*)(HInstr*,HReg*,HReg*)) isMove_X86Instr;
- getRegUsage = (void(*)(HRegUsage*,HInstr*, Bool)) getRegUsage_X86Instr;
- mapRegs = (void(*)(HRegRemap*,HInstr*, Bool)) mapRegs_X86Instr;
- genSpill = (HInstr*(*)(HReg,Int, Bool)) genSpill_X86;
- genReload = (HInstr*(*)(HReg,Int, Bool)) genReload_X86;
- ppInstr = (void(*)(HInstr*, Bool)) ppX86Instr;
- ppReg = (void(*)(HReg)) ppHRegX86;
- iselSB = iselSB_X86;
- emit = (Int(*)(UChar*,Int,HInstr*,Bool,void*)) emit_X86Instr;
+ isMove = (Bool(*)(HInstr*,HReg*,HReg*)) isMove_X86Instr;
+ getRegUsage = (void(*)(HRegUsage*,HInstr*, Bool)) getRegUsage_X86Instr;
+ mapRegs = (void(*)(HRegRemap*,HInstr*, Bool)) mapRegs_X86Instr;
+ genSpill = (HInstr*(*)(HReg,Int, Bool)) genSpill_X86;
+ genReload = (HInstr*(*)(HReg,Int, Bool)) genReload_X86;
+ directReload = (HInstr*(*)(HInstr*,HReg,Short)) directReload_X86;
+ ppInstr = (void(*)(HInstr*, Bool)) ppX86Instr;
+ ppReg = (void(*)(HReg)) ppHRegX86;
+ iselSB = iselSB_X86;
+ emit = (Int(*)(UChar*,Int,HInstr*,Bool,void*)) emit_X86Instr;
host_is_bigendian = False;
host_word_type = Ity_I32;
vassert(are_valid_hwcaps(VexArchX86, vta->archinfo_host.hwcaps));
@@ -581,7 +584,8 @@
rcode = doRegisterAllocation ( vcode, available_real_regs,
n_available_real_regs,
isMove, getRegUsage, mapRegs,
- genSpill, genReload, guest_sizeB,
+ genSpill, genReload, directReload,
+ guest_sizeB,
ppInstr, ppReg, mode64 );
vexAllocSanityCheck();
From: <sv...@va...> - 2007-08-25 21:11:34
Author: sewardj
Date: 2007-08-25 22:11:33 +0100 (Sat, 25 Aug 2007)
New Revision: 1778
Log:
Merge, from CGTUNE branch:
r1765:
During register allocation, keep track of which (real) registers have
the same value as their associated spill slot. Then, if a register
needs to be freed up for some reason, and that register has the same
value as its spill slot, there is no need to produce a spill store.
This substantially reduces the number of spill store instructions
created. Overall gives a 1.9% generated code size reduction for
perf/bz2 running on x86.
r1766:
Followup to r1765: fix some comments, and rearrange fields in struct
RRegState so as to fit it into 16 bytes.
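The effect of the eq_spill_slot flag described above can be sketched as a toy model (Python, with illustrative names rather than the VEX structures): a spill store is emitted only when the register's value may differ from the copy in its spill slot, and the flag is set after any spill or reload and cleared on a write.

```python
class RRegState:
    """Toy model of the r1765 optimisation: skip redundant spill stores."""
    def __init__(self):
        self.eq_spill_slot = False   # does the reg equal its spill slot copy?
        self.spills_emitted = 0

    def spill(self):
        if not self.eq_spill_slot:   # store only if the values may differ
            self.spills_emitted += 1
        self.eq_spill_slot = True    # reg and slot now agree

    def reload(self):
        self.eq_spill_slot = True    # reload copies slot -> reg

    def write(self):
        self.eq_spill_slot = False   # reg value diverges from the slot

r = RRegState()
r.write();  r.spill()                # value diverged: spill store emitted
r.reload(); r.spill()                # slot already up to date: store skipped
assert r.spills_emitted == 1
```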
Modified:
trunk/priv/host-generic/reg_alloc2.c
Modified: trunk/priv/host-generic/reg_alloc2.c
===================================================================
--- trunk/priv/host-generic/reg_alloc2.c 2007-08-23 19:02:47 UTC (rev 1777)
+++ trunk/priv/host-generic/reg_alloc2.c 2007-08-25 21:11:33 UTC (rev 1778)
@@ -56,9 +56,6 @@
/* TODO 27 Oct 04:
- (Critical): Need a way to statically establish the vreg classes,
- else we can't allocate spill slots properly.
-
Better consistency checking from what isMove tells us.
We can possibly do V-V coalescing even when the src is spilled,
@@ -66,10 +63,6 @@
Note that state[].hreg is the same as the available real regs.
- Check whether rreg preferencing has any beneficial effect.
-
- Remove preferencing fields in VRegInfo, if not used.
-
Generally rationalise data structures. */
@@ -109,23 +102,30 @@
updated as the allocator processes instructions. */
typedef
struct {
- /* FIELDS WHICH DO NOT CHANGE */
+ /* ------ FIELDS WHICH DO NOT CHANGE ------ */
/* Which rreg is this for? */
HReg rreg;
/* Is this involved in any HLRs? (only an optimisation hint) */
Bool has_hlrs;
- /* FIELDS WHICH DO CHANGE */
+ /* ------ FIELDS WHICH DO CHANGE ------ */
+ /* 6 May 07: rearranged fields below so the whole struct fits
+ into 16 bytes on both x86 and amd64. */
+ /* Used when .disp == Bound and we are looking for vregs to
+ spill. */
+ Bool is_spill_cand;
+ /* Optimisation: used when .disp == Bound. Indicates when the
+ rreg has the same value as the spill slot for the associated
+ vreg. Is safely left at False, and becomes True after a
+ spill store or reload for this rreg. */
+ Bool eq_spill_slot;
/* What's its current disposition? */
enum { Free, /* available for use */
Unavail, /* in a real-reg live range */
Bound /* in use (holding value of some vreg) */
}
disp;
- /* If RRegBound, what vreg is it bound to? */
+ /* If .disp == Bound, what vreg is it bound to? */
HReg vreg;
- /* Used when .disp == Bound and we are looking for vregs to
- spill. */
- Bool is_spill_cand;
}
RRegState;
@@ -339,6 +339,8 @@
{
# define N_SPILL64S (LibVEX_N_SPILL_BYTES / 8)
+ const Bool eq_spill_opt = True;
+
/* Iterators and temporaries. */
Int ii, j, k, m, spillee, k_suboptimal;
HReg rreg, vreg, vregS, vregD;
@@ -462,6 +464,7 @@
rreg_state[j].disp = Free;
rreg_state[j].vreg = INVALID_HREG;
rreg_state[j].is_spill_cand = False;
+ rreg_state[j].eq_spill_slot = False;
}
for (j = 0; j < n_vregs; j++)
@@ -783,7 +786,7 @@
two spill slots.
Do a rank-based allocation of vregs to spill slot numbers. We
- put as few values as possible in spill slows, but nevertheless
+ put as few values as possible in spill slots, but nevertheless
need to have a spill slot available for all vregs, just in case.
*/
/* max_ss_no = -1; */
@@ -956,8 +959,10 @@
/* Sanity check 3: all vreg-rreg bindings must bind registers
of the same class. */
for (j = 0; j < n_rregs; j++) {
- if (rreg_state[j].disp != Bound)
+ if (rreg_state[j].disp != Bound) {
+ vassert(rreg_state[j].eq_spill_slot == False);
continue;
+ }
vassert(hregClass(rreg_state[j].rreg)
== hregClass(rreg_state[j].vreg));
vassert( hregIsVirtual(rreg_state[j].vreg));
@@ -1033,6 +1038,10 @@
vreg_state[hregNumber(vregD)] = toShort(m);
vreg_state[hregNumber(vregS)] = INVALID_RREG_NO;
+ /* This rreg has become associated with a different vreg and
+ hence with a different spill slot. Play safe. */
+ rreg_state[m].eq_spill_slot = False;
+
/* Move on to the next insn. We skip the post-insn stuff for
fixed registers, since this move should not interact with
them in any way. */
@@ -1052,6 +1061,7 @@
vassert(IS_VALID_VREGNO(vreg));
if (vreg_lrs[vreg].dead_before <= ii) {
rreg_state[j].disp = Free;
+ rreg_state[j].eq_spill_slot = False;
m = hregNumber(rreg_state[j].vreg);
vassert(IS_VALID_VREGNO(m));
vreg_state[m] = INVALID_RREG_NO;
@@ -1115,13 +1125,17 @@
vreg_state[m] = INVALID_RREG_NO;
if (vreg_lrs[m].dead_before > ii) {
vassert(vreg_lrs[m].reg_class != HRcINVALID);
- EMIT_INSTR( (*genSpill)( rreg_state[k].rreg,
- vreg_lrs[m].spill_offset,
- mode64 ) );
+ if ((!eq_spill_opt) || !rreg_state[k].eq_spill_slot) {
+ EMIT_INSTR( (*genSpill)( rreg_state[k].rreg,
+ vreg_lrs[m].spill_offset,
+ mode64 ) );
+ }
+ rreg_state[k].eq_spill_slot = True;
}
}
rreg_state[k].disp = Unavail;
rreg_state[k].vreg = INVALID_HREG;
+ rreg_state[k].eq_spill_slot = False;
/* check for further rregs entering HLRs at this point */
rreg_lrs_la_next++;
@@ -1170,6 +1184,10 @@
if (IS_VALID_RREGNO(k)) {
vassert(rreg_state[k].disp == Bound);
addToHRegRemap(&remap, vreg, rreg_state[k].rreg);
+ /* If this rreg is written or modified, mark it as different
+ from any spill slot value. */
+ if (reg_usage.mode[j] != HRmRead)
+ rreg_state[k].eq_spill_slot = False;
continue;
} else {
vassert(k == INVALID_RREG_NO);
@@ -1205,13 +1223,22 @@
vassert(IS_VALID_VREGNO(m));
vreg_state[m] = toShort(k);
addToHRegRemap(&remap, vreg, rreg_state[k].rreg);
- /* Generate a reload if needed. */
+ /* Generate a reload if needed. This only creates needed
+ reloads because the live range builder for vregs will
+ guarantee that the first event for a vreg is a write.
+ Hence, if this reference is not a write, it cannot be
+ the first reference for this vreg, and so a reload is
+ indeed needed. */
if (reg_usage.mode[j] != HRmWrite) {
vassert(vreg_lrs[m].reg_class != HRcINVALID);
EMIT_INSTR( (*genReload)( rreg_state[k].rreg,
vreg_lrs[m].spill_offset,
mode64 ) );
+ rreg_state[k].eq_spill_slot = True;
+ } else {
+ rreg_state[k].eq_spill_slot = False;
}
+
continue;
}
@@ -1272,15 +1299,19 @@
live vreg. */
vassert(vreg_lrs[m].dead_before > ii);
vassert(vreg_lrs[m].reg_class != HRcINVALID);
- EMIT_INSTR( (*genSpill)( rreg_state[spillee].rreg,
- vreg_lrs[m].spill_offset,
- mode64 ) );
+ if ((!eq_spill_opt) || !rreg_state[spillee].eq_spill_slot) {
+ EMIT_INSTR( (*genSpill)( rreg_state[spillee].rreg,
+ vreg_lrs[m].spill_offset,
+ mode64 ) );
+ }
/* Update the rreg_state to reflect the new assignment for this
rreg. */
rreg_state[spillee].vreg = vreg;
vreg_state[m] = INVALID_RREG_NO;
+ rreg_state[spillee].eq_spill_slot = False; /* be safe */
+
m = hregNumber(vreg);
vassert(IS_VALID_VREGNO(m));
vreg_state[m] = toShort(spillee);
@@ -1292,6 +1323,7 @@
EMIT_INSTR( (*genReload)( rreg_state[spillee].rreg,
vreg_lrs[m].spill_offset,
mode64 ) );
+ rreg_state[spillee].eq_spill_slot = True;
}
/* So after much twisting and turning, we have vreg mapped to
@@ -1344,6 +1376,7 @@
vassert(rreg_state[k].disp == Unavail);
rreg_state[k].disp = Free;
rreg_state[k].vreg = INVALID_HREG;
+ rreg_state[k].eq_spill_slot = False;
/* check for further rregs leaving HLRs at this point */
rreg_lrs_db_next++;
From: <js...@ac...> - 2007-08-25 11:25:04
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-08-25 09:00:01 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 219 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
From: <sv...@va...> - 2007-08-25 07:19:10
Author: sewardj
Date: 2007-08-25 08:19:08 +0100 (Sat, 25 Aug 2007)
New Revision: 6778
Log:
Changes to m_hashtable:
Allow hashtables to dynamically resize (patch from Christoph
Bartoschek). Results in the following interface changes:
* HT_construct: no need to supply an initial table size.
Instead, supply a text string used to "name" the table, so
that debugging messages ("resizing the table") can say which
one they are resizing.
* Remove VG_(HT_get_node). This exposes the chain structure to
callers (via the next_ptr parameter), which is a problem since
callers could get some info about the chain structure which then
changes when the table is resized. Fortunately it is not used.
* Remove VG_(HT_first_match) and VG_(HT_apply_to_all_nodes) as
they are unused.
* Make the iteration mechanism more paranoid, so any adding or
deleting of nodes part way through an iteration causes VG_(HT_Next)
to assert.
* Fix the comment on VG_(HT_to_array) so it no longer speaks
specifically about MC's leak detector.
Modified:
trunk/coregrind/m_hashtable.c
trunk/helgrind/hg_main.c
trunk/include/pub_tool_hashtable.h
trunk/massif/ms_main.c
trunk/memcheck/mc_main.c
trunk/memcheck/mc_malloc_wrappers.c
Modified: trunk/coregrind/m_hashtable.c
===================================================================
--- trunk/coregrind/m_hashtable.c 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/coregrind/m_hashtable.c 2007-08-25 07:19:08 UTC (rev 6778)
@@ -29,6 +29,7 @@
*/
#include "pub_core_basics.h"
+#include "pub_core_debuglog.h"
#include "pub_core_hashtable.h"
#include "pub_core_libcassert.h"
#include "pub_core_mallocfree.h"
@@ -40,35 +41,99 @@
#define CHAIN_NO(key,tbl) (((UWord)(key)) % tbl->n_chains)
struct _VgHashTable {
- UInt n_chains; // should be prime
- VgHashNode* iterNode; // current iterator node
- UInt iterChain; // next chain to be traversed by the iterator
- VgHashNode* chains[0]; // must be last field in the struct!
+ UInt n_chains; // should be prime
+ UInt n_elements;
+ VgHashNode* iterNode; // current iterator node
+ UInt iterChain; // next chain to be traversed by the iterator
+ VgHashNode** chains; // expanding array of hash chains
+ Bool iterOK; // table safe to iterate over?
+ HChar* name; // name of table (for debugging only)
};
+#define N_HASH_PRIMES 20
+
+static SizeT primes[N_HASH_PRIMES] = {
+ 769UL, 1543UL, 3079UL, 6151UL,
+ 12289UL, 24593UL, 49157UL, 98317UL,
+ 196613UL, 393241UL, 786433UL, 1572869UL,
+ 3145739UL, 6291469UL, 12582917UL, 25165843UL,
+ 50331653UL, 100663319UL, 201326611UL, 402653189UL
+};
+
/*--------------------------------------------------------------------*/
/*--- Functions ---*/
/*--------------------------------------------------------------------*/
-VgHashTable VG_(HT_construct)(UInt n_chains)
+VgHashTable VG_(HT_construct) ( HChar* name )
{
/* Initialises to zero, ie. all entries NULL */
- SizeT sz = sizeof(struct _VgHashTable) + n_chains*sizeof(VgHashNode*);
- VgHashTable table = VG_(calloc)(1, sz);
- table->n_chains = n_chains;
+ SizeT n_chains = primes[0];
+ SizeT sz = n_chains * sizeof(VgHashNode*);
+ VgHashTable table = VG_(calloc)(1, sizeof(struct _VgHashTable));
+ table->chains = VG_(calloc)(1, sz);
+ table->n_chains = n_chains;
+ table->n_elements = 0;
+ table->iterOK = True;
+ table->name = name;
+ vg_assert(name);
return table;
}
Int VG_(HT_count_nodes) ( VgHashTable table )
{
- VgHashNode* node;
- UInt chain;
- Int n = 0;
+ return table->n_elements;
+}
- for (chain = 0; chain < table->n_chains; chain++)
- for (node = table->chains[chain]; node != NULL; node = node->next)
- n++;
- return n;
+static void resize ( VgHashTable table )
+{
+ Int i;
+ SizeT sz;
+ SizeT old_chains = table->n_chains;
+ SizeT new_chains = old_chains + 1;
+ VgHashNode** chains;
+ VgHashNode * node;
+
+ /* If we've run out of primes, do nothing. */
+ if (old_chains == primes[N_HASH_PRIMES-1])
+ return;
+
+ vg_assert(old_chains >= primes[0]
+ && old_chains < primes[N_HASH_PRIMES-1]);
+
+ for (i = 0; i < N_HASH_PRIMES; i++) {
+ if (primes[i] > new_chains) {
+ new_chains = primes[i];
+ break;
+ }
+ }
+
+ vg_assert(new_chains > old_chains);
+ vg_assert(new_chains > primes[0]
+ && new_chains <= primes[N_HASH_PRIMES-1]);
+
+ VG_(debugLog)(
+ 1, "hashtable",
+ "resizing table `%s' from %lu to %lu (total elems %lu)\n",
+ table->name, (UWord)old_chains, (UWord)new_chains,
+ (UWord)table->n_elements );
+
+ table->n_chains = new_chains;
+ sz = new_chains * sizeof(VgHashNode*);
+ chains = VG_(calloc)(1, sz);
+
+ for (i = 0; i < old_chains; i++) {
+ node = table->chains[i];
+ while (node != NULL) {
+ VgHashNode* next = node->next;
+ UWord chain = CHAIN_NO(node->key, table);
+ node->next = chains[chain];
+ chains[chain] = node;
+ node = next;
+ }
+ }
+
+ VG_(free)(table->chains);
+ table->chains = chains;
}
/* Puts a new, heap allocated VgHashNode, into the VgHashTable. Prepends
@@ -76,39 +141,16 @@
void VG_(HT_add_node) ( VgHashTable table, void* vnode )
{
VgHashNode* node = (VgHashNode*)vnode;
- UInt chain = CHAIN_NO(node->key, table);
+ UWord chain = CHAIN_NO(node->key, table);
node->next = table->chains[chain];
table->chains[chain] = node;
-}
-
-/* Looks up a VgHashNode in the table. Also returns the address of
- the previous node's 'next' pointer which allows it to be removed from the
- list later without having to look it up again. */
-void* VG_(HT_get_node) ( VgHashTable table, UWord key,
- /*OUT*/VgHashNode*** next_ptr )
-{
- VgHashNode *prev, *curr;
- Int chain;
-
- chain = CHAIN_NO(key, table);
-
- prev = NULL;
- curr = table->chains[chain];
- while (True) {
- if (curr == NULL)
- break;
- if (key == curr->key)
- break;
- prev = curr;
- curr = curr->next;
+ table->n_elements++;
+ if ( (1 * (ULong)table->n_elements) > (1 * (ULong)table->n_chains) ) {
+ resize(table);
}
- if (NULL == prev)
- *next_ptr = & (table->chains[chain]);
- else
- *next_ptr = & (prev->next);
-
- return curr;
+ /* Table has been modified; hence HT_Next should assert. */
+ table->iterOK = False;
}
/* Looks up a VgHashNode in the table. Returns NULL if not found. */
@@ -128,13 +170,17 @@
/* Removes a VgHashNode from the table. Returns NULL if not found. */
void* VG_(HT_remove) ( VgHashTable table, UWord key )
{
- Int chain = CHAIN_NO(key, table);
+ UWord chain = CHAIN_NO(key, table);
VgHashNode* curr = table->chains[chain];
VgHashNode** prev_next_ptr = &(table->chains[chain]);
+ /* Table has been modified; hence HT_Next should assert. */
+ table->iterOK = False;
+
while (curr) {
if (key == curr->key) {
*prev_next_ptr = curr->next;
+ table->n_elements--;
return curr;
}
prev_next_ptr = &(curr->next);
@@ -143,26 +189,27 @@
return NULL;
}
-/* Allocates a suitably-sized array, copies all the malloc'd block
- shadows into it, then returns both the array and the size of it. This is
- used by the memory-leak detector.
+/* Allocates a suitably-sized array, copies all the hashtable elements
+ into it, then returns both the array and the size of it. This is
+ used by the memory-leak detector. The array must be freed with
+ VG_(free).
*/
-VgHashNode** VG_(HT_to_array) ( VgHashTable table, /*OUT*/ UInt* n_shadows )
+VgHashNode** VG_(HT_to_array) ( VgHashTable table, /*OUT*/ UInt* n_elems )
{
UInt i, j;
VgHashNode** arr;
VgHashNode* node;
- *n_shadows = 0;
+ *n_elems = 0;
for (i = 0; i < table->n_chains; i++) {
for (node = table->chains[i]; node != NULL; node = node->next) {
- (*n_shadows)++;
+ (*n_elems)++;
}
}
- if (*n_shadows == 0)
+ if (*n_elems == 0)
return NULL;
- arr = VG_(malloc)( *n_shadows * sizeof(VgHashNode*) );
+ arr = VG_(malloc)( *n_elems * sizeof(VgHashNode*) );
j = 0;
for (i = 0; i < table->n_chains; i++) {
@@ -170,53 +217,28 @@
arr[j++] = node;
}
}
- vg_assert(j == *n_shadows);
+ vg_assert(j == *n_elems);
return arr;
}
-/* Return the first VgHashNode satisfying the predicate p. */
-void* VG_(HT_first_match) ( VgHashTable table,
- Bool (*p) ( VgHashNode*, void* ),
- void* d )
-{
- UInt i;
- VgHashNode* node;
-
- for (i = 0; i < table->n_chains; i++)
- for (node = table->chains[i]; node != NULL; node = node->next)
- if ( p(node, d) )
- return node;
-
- return NULL;
-}
-
-void VG_(HT_apply_to_all_nodes)( VgHashTable table,
- void (*f)(VgHashNode*, void*),
- void* d )
-{
- UInt i;
- VgHashNode* node;
-
- for (i = 0; i < table->n_chains; i++) {
- for (node = table->chains[i]; node != NULL; node = node->next) {
- f(node, d);
- }
- }
-}
-
void VG_(HT_ResetIter)(VgHashTable table)
{
vg_assert(table);
table->iterNode = NULL;
table->iterChain = 0;
+ table->iterOK = True;
}
void* VG_(HT_Next)(VgHashTable table)
{
Int i;
vg_assert(table);
-
+ /* See long comment on HT_Next prototype in pub_tool_hashtable.h.
+ In short if this fails, it means the caller tried to modify the
+ table whilst iterating over it, which is a bug. */
+ vg_assert(table->iterOK);
+
if (table->iterNode && table->iterNode->next) {
table->iterNode = table->iterNode->next;
return table->iterNode;
@@ -236,13 +258,14 @@
{
UInt i;
VgHashNode *node, *node_next;
-
+
for (i = 0; i < table->n_chains; i++) {
for (node = table->chains[i]; node != NULL; node = node_next) {
node_next = node->next;
VG_(free)(node);
}
}
+ VG_(free)(table->chains);
VG_(free)(table);
}
Modified: trunk/helgrind/hg_main.c
===================================================================
--- trunk/helgrind/hg_main.c 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/helgrind/hg_main.c 2007-08-25 07:19:08 UTC (rev 6778)
@@ -1948,9 +1948,10 @@
{
HG_Chunk* hc;
HG_Chunk** prev_chunks_next_ptr;
-
+ /* Commented out 25 Aug 07 as VG_(HT_get_node) no longer exists.
hc = (HG_Chunk*)VG_(HT_get_node) ( hg_malloc_list, (UWord)p,
(VgHashNode***)&prev_chunks_next_ptr );
+ */
if (hc == NULL) {
return;
}
@@ -1978,9 +1979,10 @@
HG_Chunk **prev_chunks_next_ptr;
/* First try and find the block. */
+ /* Commented out 25 Aug 07 as VG_(HT_get_node) no longer exists.
hc = (HG_Chunk*)VG_(HT_get_node) ( hg_malloc_list, (UWord)p,
(VgHashNode***)&prev_chunks_next_ptr );
-
+ */
if (hc == NULL) {
return NULL;
}
@@ -2419,6 +2421,7 @@
putting the result in ai. */
/* Callback for searching malloc'd and free'd lists */
+/*
static Bool addr_is_in_block(VgHashNode *node, void *ap)
{
HG_Chunk* hc2 = (HG_Chunk*)node;
@@ -2426,6 +2429,7 @@
return (hc2->data <= a && a < hc2->data + hc2->size);
}
+*/
static void describe_addr ( Addr a, AddrInfo* ai )
{
@@ -2467,7 +2471,9 @@
}
/* Search for a currently malloc'd block which might bracket it. */
+ /* Commented out 25 Aug 07 as VG_(HT_first_match) no longer exists.
hc = (HG_Chunk*)VG_(HT_first_match)(hg_malloc_list, addr_is_in_block, &a);
+ */
if (NULL != hc) {
ai->akind = Mallocd;
ai->blksize = hc->size;
@@ -3478,7 +3484,7 @@
}
init_shadow_memory();
- hg_malloc_list = VG_(HT_construct)( 80021 ); // prime, big
+ hg_malloc_list = VG_(HT_construct)( "Helgrind's malloc list" );
}
VG_DETERMINE_INTERFACE_VERSION(hg_pre_clo_init)
Modified: trunk/include/pub_tool_hashtable.h
===================================================================
--- trunk/include/pub_tool_hashtable.h 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/include/pub_tool_hashtable.h 2007-08-25 07:19:08 UTC (rev 6778)
@@ -39,8 +39,6 @@
// Problems with this data structure:
// - Separate chaining gives bad cache behaviour. Hash tables with linear
// probing give better cache behaviour.
-// - It's not very abstract, eg. deleting nodes exposes more internals than
-// I'd like.
typedef
struct _VgHashNode {
@@ -51,9 +49,11 @@
typedef struct _VgHashTable * VgHashTable;
-/* Make a new table. Allocates the memory with VG_(calloc)(), so can be freed
- with VG_(free)(). n_chains should be prime. */
-extern VgHashTable VG_(HT_construct) ( UInt n_chains );
+/* Make a new table. Allocates the memory with VG_(calloc)(), so can
+ be freed with VG_(free)(). The table starts small but will
+ periodically be expanded. This is transparent to the users of this
+ module. */
+extern VgHashTable VG_(HT_construct) ( HChar* name );
/* Count the number of nodes in a table. */
extern Int VG_(HT_count_nodes) ( VgHashTable table );
@@ -61,41 +61,32 @@
/* Add a node to the table. */
extern void VG_(HT_add_node) ( VgHashTable t, void* node );
-/* Looks up a node in the hash table. Also returns the address of the
- previous node's `next' pointer which allows it to be removed from the
- list later without having to look it up again. */
-extern void* VG_(HT_get_node) ( VgHashTable t, UWord key,
- /*OUT*/VgHashNode*** next_ptr );
-
/* Looks up a VgHashNode in the table. Returns NULL if not found. */
extern void* VG_(HT_lookup) ( VgHashTable table, UWord key );
/* Removes a VgHashNode from the table. Returns NULL if not found. */
extern void* VG_(HT_remove) ( VgHashTable table, UWord key );
-/* Allocates an array of pointers to all the shadow chunks of malloc'd
- blocks. Must be freed with VG_(free)(). */
-extern VgHashNode** VG_(HT_to_array) ( VgHashTable t, /*OUT*/ UInt* n_shadows );
+/* Allocates a suitably-sized array, copies all the hashtable elements
+ into it, then returns both the array and the size of it. This is
+ used by the memory-leak detector. The array must be freed with
+ VG_(free). */
+extern VgHashNode** VG_(HT_to_array) ( VgHashTable t, /*OUT*/ UInt* n_elems );
-/* Returns first node that matches predicate `p', or NULL if none do.
- Extra arguments can be implicitly passed to `p' using `d' which is an
- opaque pointer passed to `p' each time it is called. */
-extern void* VG_(HT_first_match) ( VgHashTable t,
- Bool (*p)(VgHashNode*, void*),
- void* d );
-
-/* Applies a function f() once to each node. Again, `d' can be used
- to pass extra information to the function. */
-extern void VG_(HT_apply_to_all_nodes)( VgHashTable t,
- void (*f)(VgHashNode*, void*),
- void* d );
-
/* Reset the table's iterator to point to the first element. */
extern void VG_(HT_ResetIter) ( VgHashTable table );
-/* Return the element pointed to by the iterator and move on to the next
- one. Returns NULL if the last one has been passed, or if HT_ResetIter()
- has not been called previously. */
+/* Return the element pointed to by the iterator and move on to the
+ next one. Returns NULL if the last one has been passed, or if
+ HT_ResetIter() has not been called previously. Asserts if the
+ table has been modified (HT_add_node, HT_remove) since
+ HT_ResetIter. This guarantees that callers cannot screw up by
+ modifying the table whilst iterating over it (and is necessary to
+ make the implementation safe; specifically we must guarantee that
+ the table will not get resized whilst iteration is happening.
+ Since resizing only happens as a result of calling HT_add_node,
+ disallowing HT_add_node during iteration should give the required
+ assurance. */
extern void* VG_(HT_Next) ( VgHashTable table );
/* Destroy a table. */
Modified: trunk/massif/ms_main.c
===================================================================
--- trunk/massif/ms_main.c 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/massif/ms_main.c 2007-08-25 07:19:08 UTC (rev 6778)
@@ -1753,7 +1753,7 @@
VG_(track_die_mem_stack_signal)( die_mem_stack_signal );
// HP_Chunks
- malloc_list = VG_(HT_construct)( 80021 ); // prime, big
+ malloc_list = VG_(HT_construct)( "Massif's malloc list" );
// Dummy node at top of the context structure.
alloc_xpt = new_XPt(0, NULL, /*is_bottom*/False);
Modified: trunk/memcheck/mc_main.c
===================================================================
--- trunk/memcheck/mc_main.c 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/memcheck/mc_main.c 2007-08-25 07:19:08 UTC (rev 6778)
@@ -5059,8 +5059,8 @@
VG_(track_post_reg_write_clientcall_return)( mc_post_reg_write_clientcall );
init_shadow_memory();
- MC_(malloc_list) = VG_(HT_construct)( 80021 ); // prime, big
- MC_(mempool_list) = VG_(HT_construct)( 1009 ); // prime, not so big
+ MC_(malloc_list) = VG_(HT_construct)( "MC_(malloc_list)" );
+ MC_(mempool_list) = VG_(HT_construct)( "MC_(mempool_list)" );
init_prof_mem();
tl_assert( mc_expensive_sanity_check() );
Modified: trunk/memcheck/mc_malloc_wrappers.c
===================================================================
--- trunk/memcheck/mc_malloc_wrappers.c 2007-08-24 20:37:09 UTC (rev 6777)
+++ trunk/memcheck/mc_malloc_wrappers.c 2007-08-25 07:19:08 UTC (rev 6778)
@@ -416,7 +416,7 @@
mp->pool = pool;
mp->rzB = rzB;
mp->is_zeroed = is_zeroed;
- mp->chunks = VG_(HT_construct)( 3001 ); // prime, not so big
+ mp->chunks = VG_(HT_construct)( "MC_(create_mempool)" );
/* Paranoia ... ensure this area is off-limits to the client, so
the mp->data field isn't visible to the leak checker. If memory
|
From: Tom H. <th...@cy...> - 2007-08-25 02:31:37
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-08-25 03:15:02 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 256 tests, 27 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-08-25 02:19:15
Nightly build on dellow ( x86_64, Fedora 7 ) started at 2007-08-25 03:10:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 292 tests, 221 stderr failures, 105 stdout failures, 0 posttest failures ==
memcheck/tests/addressable (stdout) memcheck/tests/addressable (stderr) memcheck/tests/amd64/bt_everything (stdout) memcheck/tests/amd64/bt_everything (stderr) memcheck/tests/amd64/bug132146 (stdout) memcheck/tests/amd64/bug132146 (stderr) memcheck/tests/amd64/defcfaexpr (stderr) memcheck/tests/amd64/fxsave-amd64 (stdout) memcheck/tests/amd64/fxsave-amd64 (stderr) memcheck/tests/amd64/insn_basic (stdout) memcheck/tests/amd64/insn_basic (stderr) memcheck/tests/amd64/insn_fpu (stdout) memcheck/tests/amd64/insn_fpu (stderr) memcheck/tests/amd64/insn_mmx (stdout) memcheck/tests/amd64/insn_mmx (stderr) memcheck/tests/amd64/insn_sse (stdout) memcheck/tests/amd64/insn_sse (stderr) memcheck/tests/amd64/insn_sse2 (stdout) memcheck/tests/amd64/insn_sse2 (stderr) memcheck/tests/amd64/int3-amd64 (stdout) memcheck/tests/amd64/int3-amd64 (stderr) memcheck/tests/amd64/more_x87_fp (stdout) memcheck/tests/amd64/more_x87_fp (stderr) memcheck/tests/amd64/sse_memory (stdout) memcheck/tests/amd64/sse_memory (stderr) memcheck/tests/amd64/xor-undef-amd64 (stdout) memcheck/tests/amd64/xor-undef-amd64 (stderr) memcheck/tests/badaddrvalue (stdout) memcheck/tests/badaddrvalue (stderr) memcheck/tests/badfree-2trace (stderr) memcheck/tests/badfree (stderr) memcheck/tests/badjump (stderr) memcheck/tests/badjump2 (stderr) memcheck/tests/badloop (stderr) memcheck/tests/badpoll (stderr) memcheck/tests/badrw (stderr) memcheck/tests/brk (stderr) memcheck/tests/brk2 (stderr) memcheck/tests/buflen_check (stderr) memcheck/tests/clientperm (stdout) memcheck/tests/clientperm (stderr) memcheck/tests/custom_alloc (stderr) memcheck/tests/deep_templates
(stdout) memcheck/tests/deep_templates (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/doublefree (stderr) memcheck/tests/erringfds (stdout) memcheck/tests/erringfds (stderr) memcheck/tests/error_counts (stdout) memcheck/tests/errs1 (stderr) memcheck/tests/execve (stderr) memcheck/tests/execve2 (stderr) memcheck/tests/exitprog (stderr) memcheck/tests/fprw (stderr) memcheck/tests/fwrite (stderr) memcheck/tests/inits (stderr) memcheck/tests/inline (stdout) memcheck/tests/inline (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/long_namespace_xml (stdout) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc1 (stderr) memcheck/tests/malloc2 (stderr) memcheck/tests/malloc3 (stdout) memcheck/tests/malloc3 (stderr) memcheck/tests/malloc_usable (stderr) memcheck/tests/manuel1 (stdout) memcheck/tests/manuel1 (stderr) memcheck/tests/manuel2 (stdout) memcheck/tests/manuel2 (stderr) memcheck/tests/manuel3 (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/memalign2 (stderr) memcheck/tests/memalign_test (stderr) memcheck/tests/memcmptest (stdout) memcheck/tests/memcmptest (stderr) memcheck/tests/mempool (stderr) memcheck/tests/metadata (stdout) memcheck/tests/metadata (stderr) memcheck/tests/mismatches (stderr) memcheck/tests/mmaptest (stderr) memcheck/tests/nanoleak (stderr) memcheck/tests/nanoleak2 (stderr) memcheck/tests/nanoleak_supp (stderr) memcheck/tests/new_nothrow (stderr) memcheck/tests/new_override (stdout) memcheck/tests/new_override (stderr) memcheck/tests/null_socket (stderr) memcheck/tests/oset_test (stdout) memcheck/tests/oset_test (stderr) memcheck/tests/overlap 
(stdout) memcheck/tests/overlap (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stdout) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pdb-realloc (stderr) memcheck/tests/pdb-realloc2 (stdout) memcheck/tests/pdb-realloc2 (stderr) memcheck/tests/pipe (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/post-syscall (stdout) memcheck/tests/post-syscall (stderr) memcheck/tests/realloc1 (stderr) memcheck/tests/realloc2 (stderr) memcheck/tests/realloc3 (stderr) memcheck/tests/sh-mem-random (stdout) memcheck/tests/sh-mem-random (stderr) memcheck/tests/sh-mem (stderr) memcheck/tests/sigaltstack (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/signal2 (stdout) memcheck/tests/signal2 (stderr) memcheck/tests/sigprocmask (stderr) memcheck/tests/stack_changes (stdout) memcheck/tests/stack_changes (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/str_tester (stderr) memcheck/tests/strchr (stderr) memcheck/tests/supp1 (stderr) memcheck/tests/supp2 (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/suppfree (stderr) memcheck/tests/toobig-allocs (stderr) memcheck/tests/trivialleak (stderr) memcheck/tests/vcpu_bz2 (stdout) memcheck/tests/vcpu_bz2 (stderr) memcheck/tests/vcpu_fbench (stdout) memcheck/tests/vcpu_fbench (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/vcpu_fnfns (stderr) memcheck/tests/with-space (stdout) memcheck/tests/with-space (stderr) memcheck/tests/wrap1 (stdout) memcheck/tests/wrap1 (stderr) memcheck/tests/wrap2 (stdout) memcheck/tests/wrap2 (stderr) memcheck/tests/wrap3 (stdout) memcheck/tests/wrap3 (stderr) memcheck/tests/wrap4 (stdout) memcheck/tests/wrap4 (stderr) memcheck/tests/wrap5 (stdout) memcheck/tests/wrap5 (stderr) memcheck/tests/wrap6 (stdout) memcheck/tests/wrap6 (stderr) memcheck/tests/wrap7 (stdout) memcheck/tests/wrap7 (stderr) memcheck/tests/wrap8 (stdout) memcheck/tests/wrap8 (stderr) memcheck/tests/writev 
(stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stdout) memcheck/tests/xml1 (stderr) memcheck/tests/zeropage (stdout) memcheck/tests/zeropage (stderr) cachegrind/tests/chdir (stderr) cachegrind/tests/clreq (stderr) cachegrind/tests/dlclose (stdout) cachegrind/tests/dlclose (stderr) cachegrind/tests/wrap5 (stdout) cachegrind/tests/wrap5 (stderr) callgrind/tests/clreq (stderr) callgrind/tests/simwork1 (stdout) callgrind/tests/simwork1 (stderr) callgrind/tests/simwork2 (stdout) callgrind/tests/simwork2 (stderr) callgrind/tests/simwork3 (stdout) callgrind/tests/simwork3 (stderr) callgrind/tests/threads (stderr) massif/tests/basic_malloc (stderr) massif/tests/toobig-allocs (stderr) massif/tests/true_html (stderr) massif/tests/true_text (stderr) lackey/tests/true (stderr) none/tests/amd64/bug127521-64 (stdout) none/tests/amd64/bug127521-64 (stderr) none/tests/amd64/bug132813-amd64 (stdout) none/tests/amd64/bug132813-amd64 (stderr) none/tests/amd64/bug132918 (stdout) none/tests/amd64/bug132918 (stderr) none/tests/amd64/clc (stdout) none/tests/amd64/clc (stderr) none/tests/amd64/fcmovnu (stdout) none/tests/amd64/fcmovnu (stderr) none/tests/amd64/fxtract (stdout) none/tests/amd64/fxtract (stderr) none/tests/amd64/insn_basic (stdout) none/tests/amd64/insn_basic (stderr) none/tests/amd64/insn_fpu (stdout) none/tests/amd64/insn_fpu (stderr) none/tests/amd64/insn_mmx (stdout) none/tests/amd64/insn_mmx (stderr) none/tests/amd64/insn_sse (stdout) none/tests/amd64/insn_sse (stderr) none/tests/amd64/insn_sse2 (stdout) none/tests/amd64/insn_sse2 (stderr) none/tests/amd64/jrcxz (stdout) none/tests/amd64/jrcxz (stderr) none/tests/amd64/looper (stdout) none/tests/amd64/looper (stderr) none/tests/amd64/nibz_bennee_mmap (stdout) none/tests/amd64/nibz_bennee_mmap (stderr) none/tests/amd64/rcl-amd64 (stdout) none/tests/amd64/rcl-amd64 (stderr) none/tests/amd64/shrld (stdout) none/tests/amd64/shrld (stderr) none/tests/amd64/slahf-amd64 (stdout) none/tests/amd64/slahf-amd64 
(stderr) none/tests/amd64/smc1 (stdout) none/tests/amd64/smc1 (stderr) none/tests/ansi (stderr) none/tests/args (stdout) none/tests/args (stderr) none/tests/async-sigs (stdout) none/tests/async-sigs (stderr) none/tests/bitfield1 (stderr) none/tests/blockfault (stderr) none/tests/bug129866 (stdout) none/tests/bug129866 (stderr) none/tests/closeall (stderr) none/tests/coolo_sigaction (stdout) none/tests/coolo_sigaction (stderr) none/tests/coolo_strlen (stderr) none/tests/discard (stdout) none/tests/discard (stderr) none/tests/exec-sigmask (stderr) none/tests/execve (stderr) none/tests/faultstatus (stderr) none/tests/fcntl_setown (stderr) none/tests/fdleak_cmsg (stderr) none/tests/fdleak_creat (stderr) none/tests/fdleak_dup (stderr) none/tests/fdleak_dup2 (stderr) none/tests/fdleak_fcntl (stderr) none/tests/fdleak_ipv4 (stdout) none/tests/fdleak_ipv4 (stderr) none/tests/fdleak_open (stderr) none/tests/fdleak_pipe (stderr) none/tests/fdleak_socketpair (stderr) none/tests/floored (stdout) none/tests/floored (stderr) none/tests/fork (stdout) none/tests/fork (stderr) none/tests/fucomip (stderr) none/tests/gxx304 (stderr) none/tests/manythreads (stdout) none/tests/manythreads (stderr) none/tests/map_unaligned (stderr) none/tests/map_unmap (stdout) none/tests/map_unmap (stderr) none/tests/mq (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/mremap2 (stderr) none/tests/munmap_exe (stderr) none/tests/nestedfns (stdout) none/tests/nestedfns (stderr) none/tests/pending (stdout) none/tests/pending (stderr) none/tests/pth_atfork1 (stdout) none/tests/pth_atfork1 (stderr) none/tests/pth_blockedsig (stdout) none/tests/pth_blockedsig (stderr) none/tests/pth_cancel1 (stdout) none/tests/pth_cancel1 (stderr) none/tests/pth_cancel2 (stderr) none/tests/pth_cvsimple (stdout) none/tests/pth_cvsimple (stderr) none/tests/pth_detached (stdout) none/tests/pth_detached (stderr) none/tests/pth_empty (stderr) none/tests/pth_exit (stderr) none/tests/pth_exit2 (stderr) 
none/tests/pth_mutexspeed (stdout) none/tests/pth_mutexspeed (stderr) none/tests/pth_once (stdout) none/tests/pth_once (stderr) none/tests/pth_rwlock (stderr) none/tests/pth_stackalign (stdout) none/tests/pth_stackalign (stderr) none/tests/rcrl (stdout) none/tests/rcrl (stderr) none/tests/readline1 (stdout) none/tests/readline1 (stderr) none/tests/res_search (stdout) none/tests/res_search (stderr) none/tests/resolv (stdout) none/tests/resolv (stderr) none/tests/rlimit_nofile (stderr) none/tests/sem (stderr) none/tests/semlimit (stderr) none/tests/sha1_test (stderr) none/tests/shell (stdout) none/tests/shell (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) none/tests/shortpush (stderr) none/tests/shorts (stderr) none/tests/sigstackgrowth (stdout) none/tests/sigstackgrowth (stderr) none/tests/stackgrowth (stdout) none/tests/stackgrowth (stderr) none/tests/syscall-restart1 (stderr) none/tests/syscall-restart2 (stderr) none/tests/system (stderr) none/tests/thread-exits (stdout) none/tests/thread-exits (stderr) none/tests/threaded-fork (stdout) none/tests/threaded-fork (stderr) none/tests/threadederrno (stdout) none/tests/threadederrno (stderr) none/tests/tls (stdout) none/tests/tls (stderr) none/tests/vgprintf (stdout) none/tests/vgprintf (stderr) |
From: Tom H. <th...@cy...> - 2007-08-25 02:17:40
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2007-08-25 03:05:06 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 292 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/xml1 (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
From: Tom H. <th...@cy...> - 2007-08-25 02:11:11
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-08-25 03:00:03 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 294 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout)
From: Nicholas N. <nj...@cs...> - 2007-08-25 01:02:35
On Fri, 24 Aug 2007, Stephen McCamant wrote:

> If the vex_shadow field is going to be variable-length, my first
> impulse was that it should move to the end of the ThreadArchState
> structure. In retrospect, though, I think it might be less disruptive
> to leave it in its current position, in between the regular guest
> state (the "vex" field) and the VEX spill area. Leaving it in place
> means the tool's view of the guest state stays the same: they can
> still start allocating shadow value right after the regular guest
> state. The disadvantage is that putting a variable length structure
> before it makes the start of the spill area unpredictable, but that
> would just require a change to the VEX interface; nothing on the
> Valgrind side really cares where the spills go.

This sounds, at first thought, like the best way to go.

> Another set of fixed-length structures that currently hold a copy of
> the shadow information is the sigframes. Since these are allocated
> specially, it wasn't immediately clear to me whether it would be safe
> to add a level of indirection to them.

I don't know about them...

> I'm not sure now whether the better default for the multiplier would
> be 0 or 1. Most tools don't seem to use shadow registers, so 0 would
> save space for them, but 1 would be more backwards-compatible.

Don't worry about backwards compatibility, since you'll probably be
breaking that anyway :) No point wasting space if you don't have to.

> Given the complexities of adding indirection, it seems somewhat in the
> spirit of the rest of Valgrind to just hard-code a larger maximum; in
> that case, I'd propose a multiplier of 10. But 300 threads * 300 bytes
> guest space * 10 copies = 900k is not a trivial amount of memory to
> waste.

I think specifying the multiplier would be better.

But one issue that worries me: even if the shadow size is only double
the normal size, will all the register stuff work in the IR? For
example, if the code is dealing with registers of type F64 or V128,
there's no way to talk about shadow registers of twice that size, is
there? It's probably ok for the integer registers, but that's only
part of the story.

Nick
|
From: <js...@ac...> - 2007-08-25 00:22:03
|
Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2007-08-25 02:00:01 CEST

Results differ from 24 hours ago

Checking out valgrind source tree ... failed
Last 20 lines of verbose log follow

echo Checking out valgrind source tree ...
svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-08-25T02:00:01} valgrind
svn: Unknown hostname 'svn.valgrind.org'

=================================================
==  Results from 24 hours ago                  ==
=================================================

Checking out valgrind source tree ... failed
Last 20 lines of verbose log follow

echo Checking out valgrind source tree ...
svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-08-24T02:00:01} valgrind
svn: Unknown hostname 'svn.valgrind.org'

=================================================
==  Difference between 24 hours ago and now    ==
=================================================

*** old.short	Sat Aug 25 02:00:50 2007
--- new.short	Sat Aug 25 02:01:35 2007
***************
*** 5,7 ****
! Checking out valgrind source tree ...
! svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-08-24T02:00:01} valgrind
! svn: Unknown hostname 'svn.valgrind.org'
--- 5,7 ----
! Checking out valgrind source tree ...
! svn co svn://svn.valgrind.org/valgrind/trunk -r {2007-08-25T02:00:01} valgrind
! svn: Unknown hostname 'svn.valgrind.org'
|
From: Stephen M.
|
>>>>> "SMcC" == Stephen McCamant <smcc@CSAIL.MIT.EDU> writes:

SMcC> The shadow guest state is a part of the Valgrind machine state
SMcC> used by tools to record shadow information associated with
SMcC> registers: for instance, Memcheck uses it to store the V bits of
SMcC> values in registers. At the moment, the size of the shadow guest
SMcC> state is hard-coded to be equal in size to the regular guest
SMcC> state it shadows, which happens to work well for Memcheck
SMcC> because (uncompressed) V bits use one bit for each data bit.

SMcC> However, I've now worked on a couple of other shadow-value-style
SMcC> tools where this limitation was a problem, since I wanted to
SMcC> store more shadow state with each register. I ended up finding
SMcC> the right set of places to change to hard-code a larger limit,
SMcC> but I think a better fix would be to make this configurable on a
SMcC> per-tool basis. The most obvious interface to me would be for
SMcC> the tool to specify an integer multiplier; then the shadow guest
SMcC> state size would be that multiple of the regular guest state
SMcC> size. Memcheck would supply 1 as the multiplier, and tools that
SMcC> don't need shadow register values could use 0 (which also seems
SMcC> like the best default).

SMcC> Does this sound like a good idea? If so, I'd be willing to work
SMcC> on a patch.

I did some work on this today, and found that the changes required (at
least for the approach I tried first) were more extensive than I'd
hoped. I've got a set of changes that pass the regression tests, but
perhaps some more design discussion is in order.

Right now, the ThreadArchState structure that holds the guest state is
inlined into the ThreadState object. This has to change if
ThreadArchState is going to be variable-length, and that's the largest
source of code changes: "arch." has to change to "arch->" in a whole
lot of places.

If the vex_shadow field is going to be variable-length, my first
impulse was that it should move to the end of the ThreadArchState
structure. In retrospect, though, I think it might be less disruptive
to leave it in its current position, in between the regular guest
state (the "vex" field) and the VEX spill area. Leaving it in place
means the tool's view of the guest state stays the same: they can
still start allocating shadow values right after the regular guest
state. The disadvantage is that putting a variable-length structure
before it makes the start of the spill area unpredictable, but that
would just require a change to the VEX interface; nothing on the
Valgrind side really cares where the spills go.

A slightly crazier idea would be to make the guest state indexing
bidirectional, say using positive offsets for guest state and shadow
guest state, and negative offsets for spills. That would let both the
shadow state size and the number of spills vary more flexibly, but it
doesn't make the view of the structure from C code any cleaner.

Another set of fixed-length structures that currently hold a copy of
the shadow information is the sigframes. Since these are allocated
specially, it wasn't immediately clear to me whether it would be safe
to add a level of indirection to them.

I'm not sure now whether the better default for the multiplier would
be 0 or 1. Most tools don't seem to use shadow registers, so 0 would
save space for them, but 1 would be more backwards-compatible.

Given the complexities of adding indirection, it seems somewhat in the
spirit of the rest of Valgrind to just hard-code a larger maximum; in
that case, I'd propose a multiplier of 10. But 300 threads * 300 bytes
guest space * 10 copies = 900k is not a trivial amount of memory to
waste.

Any thoughts on the above issues?
If you'd like to look at the patches I've been working on so far,
they're at:

http://people.csail.mit.edu/smcc/valgrind-patches/guest-resize-try1-vg.patch
http://people.csail.mit.edu/smcc/valgrind-patches/guest-resize-try1-vex.patch

(svn diffs for Valgrind proper and VEX respectively).

 -- Stephen
|
From: <sv...@va...> - 2007-08-24 20:37:22
|
Author: sewardj
Date: 2007-08-24 21:37:09 +0100 (Fri, 24 Aug 2007)
New Revision: 6777
Log:
gcc-4.3 compile fixes.
Modified:
trunk/coregrind/m_dispatch/dispatch-x86-linux.S
trunk/coregrind/m_syswrap/syscall-x86-linux.S
trunk/coregrind/m_trampoline.S
Modified: trunk/coregrind/m_dispatch/dispatch-x86-linux.S
===================================================================
--- trunk/coregrind/m_dispatch/dispatch-x86-linux.S 2007-08-23 10:24:30 UTC (rev 6776)
+++ trunk/coregrind/m_dispatch/dispatch-x86-linux.S 2007-08-24 20:37:09 UTC (rev 6777)
@@ -121,7 +121,7 @@
/* try a fast lookup in the translation cache */
movl %eax, %ebx /* next guest addr */
- andl $VG_TT_FAST_MASK, %ebx /* entry# */
+ andl $ VG_TT_FAST_MASK, %ebx /* entry# */
movl 0+VG_(tt_fast)(,%ebx,8), %esi /* .guest */
movl 4+VG_(tt_fast)(,%ebx,8), %edi /* .host */
cmpl %eax, %esi
@@ -157,7 +157,7 @@
/* try a fast lookup in the translation cache */
movl %eax, %ebx /* next guest addr */
- andl $VG_TT_FAST_MASK, %ebx /* entry# */
+ andl $ VG_TT_FAST_MASK, %ebx /* entry# */
movl 0+VG_(tt_fast)(,%ebx,8), %esi /* .guest */
movl 4+VG_(tt_fast)(,%ebx,8), %edi /* .host */
cmpl %eax, %esi
@@ -199,7 +199,7 @@
/* %EIP is up to date here */
/* back out decrement of the dispatch counter */
addl $1, VG_(dispatch_ctr)
- movl $VG_TRC_INNER_COUNTERZERO, %eax
+ movl $ VG_TRC_INNER_COUNTERZERO, %eax
jmp run_innerloop_exit
/*NOTREACHED*/
@@ -207,7 +207,7 @@
/* %EIP is up to date here */
/* back out decrement of the dispatch counter */
addl $1, VG_(dispatch_ctr)
- movl $VG_TRC_INNER_FASTMISS, %eax
+ movl $ VG_TRC_INNER_FASTMISS, %eax
jmp run_innerloop_exit
/*NOTREACHED*/
@@ -240,7 +240,7 @@
jmp run_innerloop_exit_REALLY
invariant_violation:
- movl $VG_TRC_INVARIANT_FAILED, %eax
+ movl $ VG_TRC_INVARIANT_FAILED, %eax
jmp run_innerloop_exit_REALLY
run_innerloop_exit_REALLY:
Modified: trunk/coregrind/m_syswrap/syscall-x86-linux.S
===================================================================
--- trunk/coregrind/m_syswrap/syscall-x86-linux.S 2007-08-23 10:24:30 UTC (rev 6776)
+++ trunk/coregrind/m_syswrap/syscall-x86-linux.S 2007-08-24 20:37:09 UTC (rev 6777)
@@ -88,8 +88,8 @@
If eip is in the range [1,2), the syscall hasn't been started yet */
/* Set the signal mask which should be current during the syscall. */
- movl $__NR_rt_sigprocmask, %eax
- movl $VKI_SIG_SETMASK, %ebx
+ movl $ __NR_rt_sigprocmask, %eax
+ movl $ VKI_SIG_SETMASK, %ebx
movl 8+FSZ(%esp), %ecx
movl 12+FSZ(%esp), %edx
movl 16+FSZ(%esp), %esi
@@ -117,8 +117,8 @@
4: /* Re-block signals. If eip is in [4,5), then the syscall is complete and
we needn't worry about it. */
- movl $__NR_rt_sigprocmask, %eax
- movl $VKI_SIG_SETMASK, %ebx
+ movl $ __NR_rt_sigprocmask, %eax
+ movl $ VKI_SIG_SETMASK, %ebx
movl 12+FSZ(%esp), %ecx
xorl %edx, %edx
movl 16+FSZ(%esp), %esi
Modified: trunk/coregrind/m_trampoline.S
===================================================================
--- trunk/coregrind/m_trampoline.S 2007-08-23 10:24:30 UTC (rev 6776)
+++ trunk/coregrind/m_trampoline.S 2007-08-24 20:37:09 UTC (rev 6777)
@@ -67,14 +67,14 @@
x86_fallback_frame_state() in
gcc-4.1.0/gcc/config/i386/linux-unwind.h */
popl %eax
- movl $__NR_sigreturn, %eax
+ movl $ __NR_sigreturn, %eax
int $0x80
ud2
.global VG_(x86_linux_SUBST_FOR_rt_sigreturn)
VG_(x86_linux_SUBST_FOR_rt_sigreturn):
/* Likewise for rt signal frames */
- movl $__NR_rt_sigreturn, %eax
+ movl $ __NR_rt_sigreturn, %eax
int $0x80
ud2
|
|
From: <js...@ac...> - 2007-08-24 11:24:44
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2007-08-24 09:00:01 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 219 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree                 (stderr)
memcheck/tests/leakotron                 (stdout)
memcheck/tests/pointer-trace             (stderr)
memcheck/tests/stack_changes             (stderr)
memcheck/tests/xml1                      (stderr)
none/tests/faultstatus                   (stderr)
none/tests/fdleak_cmsg                   (stderr)
none/tests/mremap                        (stderr)
none/tests/mremap2                       (stdout)
none/tests/ppc32/jm-fp                   (stdout)
none/tests/ppc32/jm-fp                   (stderr)
none/tests/ppc32/round                   (stdout)
none/tests/ppc32/round                   (stderr)
none/tests/ppc32/test_fx                 (stdout)
none/tests/ppc32/test_fx                 (stderr)
none/tests/ppc32/test_gx                 (stdout)
|
From: Tom H. <th...@cy...> - 2007-08-24 02:39:42
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2007-08-24 03:15:02 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 256 tests, 27 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable               (stderr)
memcheck/tests/badjump                   (stderr)
memcheck/tests/describe-block            (stderr)
memcheck/tests/erringfds                 (stderr)
memcheck/tests/leak-0                    (stderr)
memcheck/tests/leak-cycle                (stderr)
memcheck/tests/leak-pool-0               (stderr)
memcheck/tests/leak-pool-1               (stderr)
memcheck/tests/leak-pool-2               (stderr)
memcheck/tests/leak-pool-3               (stderr)
memcheck/tests/leak-pool-4               (stderr)
memcheck/tests/leak-pool-5               (stderr)
memcheck/tests/leak-regroot              (stderr)
memcheck/tests/leak-tree                 (stderr)
memcheck/tests/long_namespace_xml        (stderr)
memcheck/tests/match-overrun             (stderr)
memcheck/tests/partial_load_dflt         (stderr)
memcheck/tests/partial_load_ok           (stderr)
memcheck/tests/partiallydefinedeq        (stderr)
memcheck/tests/pointer-trace             (stderr)
memcheck/tests/sigkill                   (stderr)
memcheck/tests/stack_changes             (stderr)
memcheck/tests/x86/scalar                (stderr)
memcheck/tests/x86/scalar_supp           (stderr)
memcheck/tests/x86/xor-undef-x86         (stderr)
memcheck/tests/xml1                      (stderr)
none/tests/mremap                        (stderr)
none/tests/mremap2                       (stdout)
|
From: Tom H. <th...@cy...> - 2007-08-24 02:20:51
|
Nightly build on lloyd ( x86_64, Fedora Core 3 ) started at 2007-08-24 03:05:07 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 292 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace             (stderr)
memcheck/tests/stack_switch              (stderr)
memcheck/tests/x86/scalar                (stderr)
memcheck/tests/x86/scalar_supp           (stderr)
memcheck/tests/xml1                      (stderr)
none/tests/mremap                        (stderr)
none/tests/mremap2                       (stdout)
|
From: Tom H. <th...@cy...> - 2007-08-24 02:18:50
|
Nightly build on dellow ( x86_64, Fedora 7 ) started at 2007-08-24 03:10:04 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 292 tests, 221 stderr failures, 105 stdout failures, 0 posttest failures ==
memcheck/tests/addressable (stdout) memcheck/tests/addressable (stderr) memcheck/tests/amd64/bt_everything (stdout) memcheck/tests/amd64/bt_everything (stderr) memcheck/tests/amd64/bug132146 (stdout) memcheck/tests/amd64/bug132146 (stderr) memcheck/tests/amd64/defcfaexpr (stderr) memcheck/tests/amd64/fxsave-amd64 (stdout) memcheck/tests/amd64/fxsave-amd64 (stderr) memcheck/tests/amd64/insn_basic (stdout) memcheck/tests/amd64/insn_basic (stderr) memcheck/tests/amd64/insn_fpu (stdout) memcheck/tests/amd64/insn_fpu (stderr) memcheck/tests/amd64/insn_mmx (stdout) memcheck/tests/amd64/insn_mmx (stderr) memcheck/tests/amd64/insn_sse (stdout) memcheck/tests/amd64/insn_sse (stderr) memcheck/tests/amd64/insn_sse2 (stdout) memcheck/tests/amd64/insn_sse2 (stderr) memcheck/tests/amd64/int3-amd64 (stdout) memcheck/tests/amd64/int3-amd64 (stderr) memcheck/tests/amd64/more_x87_fp (stdout) memcheck/tests/amd64/more_x87_fp (stderr) memcheck/tests/amd64/sse_memory (stdout) memcheck/tests/amd64/sse_memory (stderr) memcheck/tests/amd64/xor-undef-amd64 (stdout) memcheck/tests/amd64/xor-undef-amd64 (stderr) memcheck/tests/badaddrvalue (stdout) memcheck/tests/badaddrvalue (stderr) memcheck/tests/badfree-2trace (stderr) memcheck/tests/badfree (stderr) memcheck/tests/badjump (stderr) memcheck/tests/badjump2 (stderr) memcheck/tests/badloop (stderr) memcheck/tests/badpoll (stderr) memcheck/tests/badrw (stderr) memcheck/tests/brk (stderr) memcheck/tests/brk2 (stderr) memcheck/tests/buflen_check (stderr) memcheck/tests/clientperm (stdout) memcheck/tests/clientperm (stderr) memcheck/tests/custom_alloc (stderr) memcheck/tests/deep_templates
(stdout) memcheck/tests/deep_templates (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/doublefree (stderr) memcheck/tests/erringfds (stdout) memcheck/tests/erringfds (stderr) memcheck/tests/error_counts (stdout) memcheck/tests/errs1 (stderr) memcheck/tests/execve (stderr) memcheck/tests/execve2 (stderr) memcheck/tests/exitprog (stderr) memcheck/tests/fprw (stderr) memcheck/tests/fwrite (stderr) memcheck/tests/inits (stderr) memcheck/tests/inline (stdout) memcheck/tests/inline (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/long_namespace_xml (stdout) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/malloc1 (stderr) memcheck/tests/malloc2 (stderr) memcheck/tests/malloc3 (stdout) memcheck/tests/malloc3 (stderr) memcheck/tests/malloc_usable (stderr) memcheck/tests/manuel1 (stdout) memcheck/tests/manuel1 (stderr) memcheck/tests/manuel2 (stdout) memcheck/tests/manuel2 (stderr) memcheck/tests/manuel3 (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/memalign2 (stderr) memcheck/tests/memalign_test (stderr) memcheck/tests/memcmptest (stdout) memcheck/tests/memcmptest (stderr) memcheck/tests/mempool (stderr) memcheck/tests/metadata (stdout) memcheck/tests/metadata (stderr) memcheck/tests/mismatches (stderr) memcheck/tests/mmaptest (stderr) memcheck/tests/nanoleak (stderr) memcheck/tests/nanoleak2 (stderr) memcheck/tests/nanoleak_supp (stderr) memcheck/tests/new_nothrow (stderr) memcheck/tests/new_override (stdout) memcheck/tests/new_override (stderr) memcheck/tests/null_socket (stderr) memcheck/tests/oset_test (stdout) memcheck/tests/oset_test (stderr) memcheck/tests/overlap 
(stdout) memcheck/tests/overlap (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stdout) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pdb-realloc (stderr) memcheck/tests/pdb-realloc2 (stdout) memcheck/tests/pdb-realloc2 (stderr) memcheck/tests/pipe (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/post-syscall (stdout) memcheck/tests/post-syscall (stderr) memcheck/tests/realloc1 (stderr) memcheck/tests/realloc2 (stderr) memcheck/tests/realloc3 (stderr) memcheck/tests/sh-mem-random (stdout) memcheck/tests/sh-mem-random (stderr) memcheck/tests/sh-mem (stderr) memcheck/tests/sigaltstack (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/signal2 (stdout) memcheck/tests/signal2 (stderr) memcheck/tests/sigprocmask (stderr) memcheck/tests/stack_changes (stdout) memcheck/tests/stack_changes (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/str_tester (stderr) memcheck/tests/strchr (stderr) memcheck/tests/supp1 (stderr) memcheck/tests/supp2 (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/suppfree (stderr) memcheck/tests/toobig-allocs (stderr) memcheck/tests/trivialleak (stderr) memcheck/tests/vcpu_bz2 (stdout) memcheck/tests/vcpu_bz2 (stderr) memcheck/tests/vcpu_fbench (stdout) memcheck/tests/vcpu_fbench (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/vcpu_fnfns (stderr) memcheck/tests/with-space (stdout) memcheck/tests/with-space (stderr) memcheck/tests/wrap1 (stdout) memcheck/tests/wrap1 (stderr) memcheck/tests/wrap2 (stdout) memcheck/tests/wrap2 (stderr) memcheck/tests/wrap3 (stdout) memcheck/tests/wrap3 (stderr) memcheck/tests/wrap4 (stdout) memcheck/tests/wrap4 (stderr) memcheck/tests/wrap5 (stdout) memcheck/tests/wrap5 (stderr) memcheck/tests/wrap6 (stdout) memcheck/tests/wrap6 (stderr) memcheck/tests/wrap7 (stdout) memcheck/tests/wrap7 (stderr) memcheck/tests/wrap8 (stdout) memcheck/tests/wrap8 (stderr) memcheck/tests/writev 
(stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stdout) memcheck/tests/xml1 (stderr) memcheck/tests/zeropage (stdout) memcheck/tests/zeropage (stderr) cachegrind/tests/chdir (stderr) cachegrind/tests/clreq (stderr) cachegrind/tests/dlclose (stdout) cachegrind/tests/dlclose (stderr) cachegrind/tests/wrap5 (stdout) cachegrind/tests/wrap5 (stderr) callgrind/tests/clreq (stderr) callgrind/tests/simwork1 (stdout) callgrind/tests/simwork1 (stderr) callgrind/tests/simwork2 (stdout) callgrind/tests/simwork2 (stderr) callgrind/tests/simwork3 (stdout) callgrind/tests/simwork3 (stderr) callgrind/tests/threads (stderr) massif/tests/basic_malloc (stderr) massif/tests/toobig-allocs (stderr) massif/tests/true_html (stderr) massif/tests/true_text (stderr) lackey/tests/true (stderr) none/tests/amd64/bug127521-64 (stdout) none/tests/amd64/bug127521-64 (stderr) none/tests/amd64/bug132813-amd64 (stdout) none/tests/amd64/bug132813-amd64 (stderr) none/tests/amd64/bug132918 (stdout) none/tests/amd64/bug132918 (stderr) none/tests/amd64/clc (stdout) none/tests/amd64/clc (stderr) none/tests/amd64/fcmovnu (stdout) none/tests/amd64/fcmovnu (stderr) none/tests/amd64/fxtract (stdout) none/tests/amd64/fxtract (stderr) none/tests/amd64/insn_basic (stdout) none/tests/amd64/insn_basic (stderr) none/tests/amd64/insn_fpu (stdout) none/tests/amd64/insn_fpu (stderr) none/tests/amd64/insn_mmx (stdout) none/tests/amd64/insn_mmx (stderr) none/tests/amd64/insn_sse (stdout) none/tests/amd64/insn_sse (stderr) none/tests/amd64/insn_sse2 (stdout) none/tests/amd64/insn_sse2 (stderr) none/tests/amd64/jrcxz (stdout) none/tests/amd64/jrcxz (stderr) none/tests/amd64/looper (stdout) none/tests/amd64/looper (stderr) none/tests/amd64/nibz_bennee_mmap (stdout) none/tests/amd64/nibz_bennee_mmap (stderr) none/tests/amd64/rcl-amd64 (stdout) none/tests/amd64/rcl-amd64 (stderr) none/tests/amd64/shrld (stdout) none/tests/amd64/shrld (stderr) none/tests/amd64/slahf-amd64 (stdout) none/tests/amd64/slahf-amd64 
(stderr) none/tests/amd64/smc1 (stdout) none/tests/amd64/smc1 (stderr) none/tests/ansi (stderr) none/tests/args (stdout) none/tests/args (stderr) none/tests/async-sigs (stdout) none/tests/async-sigs (stderr) none/tests/bitfield1 (stderr) none/tests/blockfault (stderr) none/tests/bug129866 (stdout) none/tests/bug129866 (stderr) none/tests/closeall (stderr) none/tests/coolo_sigaction (stdout) none/tests/coolo_sigaction (stderr) none/tests/coolo_strlen (stderr) none/tests/discard (stdout) none/tests/discard (stderr) none/tests/exec-sigmask (stderr) none/tests/execve (stderr) none/tests/faultstatus (stderr) none/tests/fcntl_setown (stderr) none/tests/fdleak_cmsg (stderr) none/tests/fdleak_creat (stderr) none/tests/fdleak_dup (stderr) none/tests/fdleak_dup2 (stderr) none/tests/fdleak_fcntl (stderr) none/tests/fdleak_ipv4 (stdout) none/tests/fdleak_ipv4 (stderr) none/tests/fdleak_open (stderr) none/tests/fdleak_pipe (stderr) none/tests/fdleak_socketpair (stderr) none/tests/floored (stdout) none/tests/floored (stderr) none/tests/fork (stdout) none/tests/fork (stderr) none/tests/fucomip (stderr) none/tests/gxx304 (stderr) none/tests/manythreads (stdout) none/tests/manythreads (stderr) none/tests/map_unaligned (stderr) none/tests/map_unmap (stdout) none/tests/map_unmap (stderr) none/tests/mq (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/mremap2 (stderr) none/tests/munmap_exe (stderr) none/tests/nestedfns (stdout) none/tests/nestedfns (stderr) none/tests/pending (stdout) none/tests/pending (stderr) none/tests/pth_atfork1 (stdout) none/tests/pth_atfork1 (stderr) none/tests/pth_blockedsig (stdout) none/tests/pth_blockedsig (stderr) none/tests/pth_cancel1 (stdout) none/tests/pth_cancel1 (stderr) none/tests/pth_cancel2 (stderr) none/tests/pth_cvsimple (stdout) none/tests/pth_cvsimple (stderr) none/tests/pth_detached (stdout) none/tests/pth_detached (stderr) none/tests/pth_empty (stderr) none/tests/pth_exit (stderr) none/tests/pth_exit2 (stderr) 
none/tests/pth_mutexspeed (stdout) none/tests/pth_mutexspeed (stderr) none/tests/pth_once (stdout) none/tests/pth_once (stderr) none/tests/pth_rwlock (stderr) none/tests/pth_stackalign (stdout) none/tests/pth_stackalign (stderr) none/tests/rcrl (stdout) none/tests/rcrl (stderr) none/tests/readline1 (stdout) none/tests/readline1 (stderr) none/tests/res_search (stdout) none/tests/res_search (stderr) none/tests/resolv (stdout) none/tests/resolv (stderr) none/tests/rlimit_nofile (stderr) none/tests/sem (stderr) none/tests/semlimit (stderr) none/tests/sha1_test (stderr) none/tests/shell (stdout) none/tests/shell (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) none/tests/shortpush (stderr) none/tests/shorts (stderr) none/tests/sigstackgrowth (stdout) none/tests/sigstackgrowth (stderr) none/tests/stackgrowth (stdout) none/tests/stackgrowth (stderr) none/tests/syscall-restart1 (stderr) none/tests/syscall-restart2 (stderr) none/tests/system (stderr) none/tests/thread-exits (stdout) none/tests/thread-exits (stderr) none/tests/threaded-fork (stdout) none/tests/threaded-fork (stderr) none/tests/threadederrno (stdout) none/tests/threadederrno (stderr) none/tests/tls (stdout) none/tests/tls (stderr) none/tests/vgprintf (stdout) none/tests/vgprintf (stderr) |
|
From: Tom H. <th...@cy...> - 2007-08-24 02:13:58
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2007-08-24 03:00:02 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow

== 294 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace             (stderr)
memcheck/tests/stack_switch              (stderr)
memcheck/tests/x86/scalar                (stderr)
memcheck/tests/x86/scalar_supp           (stderr)
none/tests/fdleak_fcntl                  (stderr)
none/tests/mremap                        (stderr)
none/tests/mremap2                       (stdout)