From: Greg P. <gp...@us...> - 2005-02-19 21:28:04
|
Nicholas Nethercote writes:
> I believe that on x86/Linux %eax gets the return value, and you can tell
> if it's an error if the value is in the range -4096 <= ret <= -1, and that
> the current Valgrind code assumes this when using VG_(do_syscall)().
>
> So if that's not true for Solaris then perhaps the way VG_(do_syscall)()
> works needs to change a little, eg. return 2 values, one for the return
> value, and one a boolean that indicates whether it's an error code. But
> I'm not certain about that.

Mac OS X definitely doesn't do it this way. The VG_(is_kerror)() model doesn't work at all. Returning two values would work. However, I've found it easier to use VG_(do_syscall) only for syscalls on Valgrind's behalf, and write a different dispatcher for client syscalls.

The PowerPC's syscall instruction always returns a result in register r3. The difference between success and error is the instruction at which execution resumes after the syscall: (syscall+4) means error, and (syscall+8) means success. For native libc, the success case simply returns, and the error case branches to some code that moves the syscall result to errno and returns -1.

My client syscall dispatcher takes a ThreadState argument, and reads the syscall and syscall parameters from that ThreadState. On return, the result (good or bad) is left in virtual r3, and a per-ThreadState syscall_failed is set on error. In addition, the wrapper updates the virtual PC to the correct destination. The syscall wrappers use syscall_failed and the virtual r3 directly.

(Throwing a wrench into the mix is the distinction between BSD syscalls and Mach traps. The Mach traps use the same syscall instruction but return to (syscall+4) unconditionally, which looks like the BSD failure case. So there are in fact two client dispatchers, one for BSD and one for Mach, based on the syscall number. This could probably be unified if the Mach wrappers carefully ignored syscall_failed, but I haven't bothered to rearrange it that way yet.)

-- Greg Parker gp...@us...
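The resume-address convention Greg describes can be modelled in a few lines of C. This is a toy sketch, not the actual Valgrind or Mac OS X code; the names (ThreadState, post_bsd_syscall, the guest_* fields) are all illustrative inventions:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the client-syscall dispatch described above.  On PowerPC
   the kernel leaves the result in r3 either way; success vs. error is
   encoded in where execution resumes relative to the syscall
   instruction (+4 = error, +8 = success). */
typedef struct {
    unsigned long guest_r3;      /* virtual r3: syscall result, good or bad */
    unsigned long guest_pc;      /* virtual program counter */
    bool          syscall_failed;
} ThreadState;

/* Record the outcome of a BSD-style syscall issued at sc_addr that
   resumed at sc_addr + resume_offset, with the result in r3. */
void post_bsd_syscall(ThreadState *tst, unsigned long sc_addr,
                      unsigned long resume_offset, unsigned long result)
{
    tst->guest_r3       = result;
    tst->syscall_failed = (resume_offset == 4);  /* +4 is the error path */
    tst->guest_pc       = sc_addr + resume_offset;
}

/* Mach traps resume at +4 unconditionally, so a Mach dispatcher must
   not interpret that as failure -- hence the separate entry point. */
void post_mach_trap(ThreadState *tst, unsigned long sc_addr,
                    unsigned long result)
{
    tst->guest_r3       = result;
    tst->syscall_failed = false;
    tst->guest_pc       = sc_addr + 4;
}
```

The split into two functions mirrors the two client dispatchers in the email: the same hardware event (resuming at +4) means "error" for BSD syscalls but is simply the normal return for Mach traps.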
|
From: <js...@ac...> - 2005-02-19 04:00:21
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2005-02-19 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 198 tests, 11 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <to...@co...> - 2005-02-19 03:29:53
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2005-02-19 03:20:05 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 205 tests, 10 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-02-19 03:22:43
|
Nightly build on audi ( Red Hat 9 ) started at 2005-02-19 03:15:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 204 tests, 6 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-02-19 03:17:03
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2005-02-19 03:10:04 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 203 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-02-19 03:16:08
|
Nightly build on standard ( Red Hat 7.2 ) started at 2005-02-19 03:00:05 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 203 tests, 9 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-02-19 03:12:57
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2005-02-19 03:05:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
== 203 tests, 16 stderr failures, 0 stdout failures =================
addrcheck/tests/addressable (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/post-syscall (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Jeremy F. <je...@go...> - 2005-02-19 02:28:18
|
Nicholas Nethercote wrote:
> That's strange. For non-PIE builds, that LOAD VirtAddr should be
> 0xb0000000, right?
Yep, but for PIE it doesn't matter, so we don't set anything, and ld is
using its defaults.
> I think the problem is that my GCC seems to be using binutils 2.13.2.1
> (which doesn't recognise --pie) not the one in my PATH, which is
> 2.15. I guess GCC was configured to use that binutils upon
> installation? Any idea how to change it? Maybe I'll have to install
> a new GCC, and point it to a more recent binutils.
Maybe. Sounds like a bit of a mess. I'm not sure if PIE is a
RedHat/Fedora local change, or something which is in the baseline
tools. I notice that SuSE 9.2 doesn't support PIE at all.
J
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 01:37:05
|
On Wed, 16 Feb 2005, Jeremy Fitzhardinge wrote:
>> - Section 2.8 of the manual talks about support for threads. It's out of
>> date, I think it should be nuked. Anyone disagree?
>
> Hm, some of it is still true, but a lot of the details are wrong.

I've cut it down a lot. I'm not sure if the numbers for the round-robin scheduling are still right.

N
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 01:23:42
|
On Wed, 16 Feb 2005, Jeremy Fitzhardinge wrote:
>> The bad address was 0xb805ab88, ie. within stage2, which doesn't seem
>> unreasonable.
>
> It looks like it has been linked as a normal executable, which we're
> relocating to somewhere it doesn't expect.
>
>> ELF Header:
>>   Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
>>   Class:                             ELF32
>>   Data:                              2's complement, little endian
>>   Version:                           1 (current)
>>   OS/ABI:                            UNIX - System V
>>   ABI Version:                       0
>>   Type:                              EXEC (Executable file)
>
> Hm, this is a normal executable, not an -fpie executable (which would be
> DYN)...
>
>>   Type   Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
>>   PHDR   0x000034 0x08048034 0x08048034 0x000e0 0x000e0 R E 0x4
>>   INTERP 0x000114 0x08048114 0x08048114 0x00013 0x00013 R   0x1
>>       [Requesting program interpreter: /lib/ld-linux.so.2]
>>   LOAD   0x000000 0x08048000 0x08048000 0xba440 0xba440 R E 0x1000
>
> ...loaded at the normal executable address.

That's strange. For non-PIE builds, that LOAD VirtAddr should be 0xb0000000, right?

I think the problem is that my GCC seems to be using binutils 2.13.2.1 (which doesn't recognise --pie), not the one in my PATH, which is 2.15. I guess GCC was configured to use that binutils upon installation? Any idea how to change it? Maybe I'll have to install a new GCC, and point it to a more recent binutils.

N
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 01:09:39
|
CVS commit by nethercote:
Moved --command-line-only to the debugging options list, since
non-developers won't need to use it.
Also used VG_BOOL_CLO macro to shorten code.
Also improved a comment.
M +7 -8 coregrind/vg_main.c 1.252
M +0 -1 none/tests/cmdline1.stdout.exp 1.12
M +1 -1 none/tests/cmdline2.stdout.exp 1.13
--- valgrind/coregrind/vg_main.c #1.251:1.252
@@ -650,14 +650,12 @@ static void get_command_line( int argc,
for (vg_argc0 = 1; vg_argc0 < argc; vg_argc0++) {
- if (argv[vg_argc0][0] != '-') /* exe name */
+ Char* arg = argv[vg_argc0];
+ if (arg[0] != '-') /* exe name */
break;
- if (VG_STREQ(argv[vg_argc0], "--")) { /* dummy arg */
+ if (VG_STREQ(arg, "--")) { /* dummy arg */
vg_argc0++;
break;
}
- if (VG_CLO_STREQ(argv[vg_argc0], "--command-line-only=yes"))
- augment = False;
- else if (VG_CLO_STREQ(argv[vg_argc0], "--command-line-only=no"))
- augment = True;
+ VG_BOOL_CLO("--command-line-only", augment)
}
cl_argv = &argv[vg_argc0];
@@ -665,5 +663,6 @@ static void get_command_line( int argc,
/* Get extra args from VALGRIND_OPTS and .valgrindrc files.
Note we don't do this if getting args from VALGRINDCLO, as
- those extra args will already be present in VALGRINDCLO. */
+ those extra args will already be present in VALGRINDCLO.
+ (We also don't do it when --command-line-only=yes.) */
if (augment)
augment_command_line(&vg_argc0, &vg_argv0);
@@ -1487,5 +1486,4 @@ void usage ( Bool debug_help )
" --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]\n"
" --pointercheck=no|yes enforce client address space limits [yes]\n"
-" --command-line-only=no|yes only use command line options [no]\n"
"\n"
" user options for Valgrind tools that report errors:\n"
@@ -1520,4 +1518,5 @@ void usage ( Bool debug_help )
" --wait-for-gdb=yes|no pause on startup to wait for gdb attach\n"
" --model-pthreads=yes|no model the pthreads library [no]\n"
+" --command-line-only=no|yes only use command line options [no]\n"
"\n"
" debugging options for Valgrind tools that report errors\n"
--- valgrind/none/tests/cmdline1.stdout.exp #1.11:1.12
@@ -16,5 +16,4 @@
--weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--pointercheck=no|yes enforce client address space limits [yes]
- --command-line-only=no|yes only use command line options [no]
user options for Valgrind tools that report errors:
--- valgrind/none/tests/cmdline2.stdout.exp #1.12:1.13
@@ -16,5 +16,4 @@
--weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--pointercheck=no|yes enforce client address space limits [yes]
- --command-line-only=no|yes only use command line options [no]
user options for Valgrind tools that report errors:
@@ -49,4 +48,5 @@
--wait-for-gdb=yes|no pause on startup to wait for gdb attach
--model-pthreads=yes|no model the pthreads library [no]
+ --command-line-only=no|yes only use command line options [no]
debugging options for Valgrind tools that report errors
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 01:09:19
|
CVS commit by fitzhardinge:
Page protections, as set by mprotect(), have nothing to do with memcheck's
notion of "readable" or "writable", or memcheck and addrcheck's notion
of "addressable". Never use the page permissions when determining the
addressability of a piece of memory.
BUGS:99568
A addrcheck/tests/addressable.stderr.exp 1.1
A addrcheck/tests/addressable.stdout.exp 1.1
A addrcheck/tests/addressable.vgtest 1.1
A memcheck/tests/addressable.c 1.1 [no copyright]
A memcheck/tests/addressable.stderr.exp 1.1
A memcheck/tests/addressable.stdout.exp 1.1
A memcheck/tests/addressable.vgtest 1.1
M +3 -8 addrcheck/ac_main.c 1.77
M +1 -0 addrcheck/tests/Makefile.am 1.13
M +3 -9 memcheck/mc_main.c 1.63
M +3 -0 memcheck/tests/Makefile.am 1.66
--- valgrind/memcheck/mc_main.c #1.62:1.63
@@ -791,11 +791,7 @@ void mc_new_mem_heap ( Addr a, SizeT len
static
-void mc_set_perms (Addr a, SizeT len, Bool rr, Bool ww, Bool xx)
+void mc_new_mem_mmap ( Addr a, SizeT len, Bool rr, Bool ww, Bool xx )
{
- DEBUG("mc_set_perms(%p, %llu, rr=%u ww=%u, xx=%u)\n",
- a, (ULong)len, rr, ww, xx);
- if (rr) mc_make_readable(a, len);
- else if (ww) mc_make_writable(a, len);
- else mc_make_noaccess(a, len);
+ mc_make_readable(a, len);
}
@@ -1963,8 +1958,7 @@ void SK_(pre_clo_init)(void)
VG_(init_new_mem_stack_signal) ( & mc_make_writable );
VG_(init_new_mem_brk) ( & mc_make_writable );
- VG_(init_new_mem_mmap) ( & mc_set_perms );
+ VG_(init_new_mem_mmap) ( & mc_new_mem_mmap );
VG_(init_copy_mem_remap) ( & mc_copy_address_range_state );
- VG_(init_change_mem_mprotect) ( & mc_set_perms );
VG_(init_die_mem_stack_signal) ( & mc_make_noaccess );
--- valgrind/addrcheck/ac_main.c #1.76:1.77
@@ -653,13 +653,9 @@ void ac_new_mem_heap ( Addr a, SizeT len
static
-void ac_set_perms (Addr a, SizeT len, Bool rr, Bool ww, Bool xx)
+void ac_new_mem_mmap (Addr a, SizeT len, Bool rr, Bool ww, Bool xx)
{
DEBUG("ac_set_perms(%p, %u, rr=%u ww=%u, xx=%u)\n",
a, len, rr, ww, xx);
- if (rr || ww || xx) {
ac_make_accessible(a, len);
- } else {
- ac_make_noaccess(a, len);
- }
}
@@ -1314,8 +1310,7 @@ void SK_(pre_clo_init)(void)
VG_(init_new_mem_stack_signal) ( & ac_make_accessible );
VG_(init_new_mem_brk) ( & ac_make_accessible );
- VG_(init_new_mem_mmap) ( & ac_set_perms );
+ VG_(init_new_mem_mmap) ( & ac_new_mem_mmap );
VG_(init_copy_mem_remap) ( & ac_copy_address_range_state );
- VG_(init_change_mem_mprotect) ( & ac_set_perms );
VG_(init_die_mem_stack_signal) ( & ac_make_noaccess );
--- valgrind/memcheck/tests/Makefile.am #1.65:1.66
@@ -7,4 +7,5 @@
EXTRA_DIST = $(noinst_SCRIPTS) \
+ addressable.stderr.exp addressable.stdout.exp addressable.vgtest \
badaddrvalue.stderr.exp \
badaddrvalue.stdout.exp badaddrvalue.vgtest \
@@ -79,4 +80,5 @@
check_PROGRAMS = \
+ addressable \
badaddrvalue badfree badjump badjump2 \
badloop badpoll badrw brk brk2 buflen_check \
@@ -104,4 +106,5 @@
# C ones
+addressable_SOURCES = addressable.c
badaddrvalue_SOURCES = badaddrvalue.c
badfree_SOURCES = badfree.c
--- valgrind/addrcheck/tests/Makefile.am #1.12:1.13
@@ -4,4 +4,5 @@
EXTRA_DIST = $(noinst_SCRIPTS) \
+ addressable.vgtest addressable.stderr.exp addressable.stdout.exp \
badrw.stderr.exp badrw.vgtest \
fprw.stderr.exp fprw.vgtest \
|
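The rule this commit enforces, that page protections and addressability are independent, can be demonstrated outside Valgrind with a small standalone program. This is only a sketch in plain POSIX C, not the addressable.c test the commit adds:

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <string.h>
#include <assert.h>

/* Map a page read-write, fill it, drop all page permissions with
   mprotect(), then restore them.  The permission changes say nothing
   about whether the bytes are "addressable" in memcheck's sense: the
   memory is still mapped and still legitimately owned by the program,
   and its contents survive the round trip.  Returns 0 on success. */
int mprotect_roundtrip(void)
{
    const size_t pagesz = 4096;
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    memset(p, 0x55, pagesz);

    /* Page now inaccessible to the hardware, but not "unaddressable". */
    if (mprotect(p, pagesz, PROT_NONE) != 0)
        return 2;

    /* Restore access; the contents must be untouched. */
    if (mprotect(p, pagesz, PROT_READ | PROT_WRITE) != 0)
        return 3;
    if (p[0] != 0x55 || p[pagesz - 1] != 0x55)
        return 4;

    munmap(p, pagesz);
    return 0;
}
```

Before the fix, a tool that fed mprotect() permissions into its addressability state would have wrongly flagged accesses to this page after the permissions were restored.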
|
From: Jeremy F. <je...@go...> - 2005-02-19 01:06:10
|
CVS commit by fitzhardinge:
Implement a simple optimisation to memcheck & addrcheck which saves lots
of shadow memory. Rather than having a single distinguished secondary
map, this patch introduces 4 (2 for addrcheck). These secondary maps
represent the states of "unaddressable", "addressable + not valid" and
"addressable + valid". If all the bytes in a secondary map are in one of
these states, the primary map just points to the distinguished secondary
for that state rather than allocating new shadow memory.
This results in significant memory savings; for example, shadow memory
use for OpenOffice writer drops from ~200Mbytes to ~50Mbytes.
M +66 -28 addrcheck/ac_main.c 1.76
M +6 -9 coregrind/vg_syscalls.c 1.249
M +8 -5 memcheck/mac_shared.h 1.32
M +104 -37 memcheck/mc_main.c 1.62
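The idea in the commit message can be illustrated with a toy model. This is not the memcheck code; the sizes, names, and two-state encoding here are all made up, and it uses the same GNU range-designator initialisers as the patch itself:

```c
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <assert.h>

/* A primary table maps address chunks to secondary shadow maps.  Chunks
   that are uniformly in one state all point at a single shared read-only
   "distinguished" secondary instead of owning private shadow memory. */
#define SEC_SIZE  64              /* bytes of shadow per secondary (toy) */
#define N_PRIMARY 16              /* entries in the toy primary map */

typedef struct { unsigned char state[SEC_SIZE]; } SecMap;

enum { NOACCESS = 0xFF, DEFINED = 0x00 };

static const SecMap dsm_noaccess = { { [0 ... SEC_SIZE-1] = NOACCESS } };
static const SecMap dsm_defined  = { { [0 ... SEC_SIZE-1] = DEFINED  } };

static SecMap *primary[N_PRIMARY];

static bool is_distinguished(const SecMap *sm)
{
    return sm == &dsm_noaccess || sm == &dsm_defined;
}

void shadow_init(void)
{
    for (int i = 0; i < N_PRIMARY; i++)
        primary[i] = (SecMap *)&dsm_noaccess;  /* nothing addressable yet */
}

/* Mark a whole chunk defined: if it currently shares a distinguished
   map, just repoint the primary entry -- no allocation at all. */
void mark_chunk_defined(int chunk)
{
    if (is_distinguished(primary[chunk]))
        primary[chunk] = (SecMap *)&dsm_defined;
    else
        memset(primary[chunk]->state, DEFINED, SEC_SIZE);
}

/* Changing a single byte's state forces a private secondary, seeded
   (copy-on-write style) from whichever map was in place. */
void set_byte_state(int chunk, int off, unsigned char st)
{
    if (is_distinguished(primary[chunk])) {
        SecMap *fresh = malloc(sizeof(SecMap));
        if (!fresh) abort();
        memcpy(fresh, primary[chunk], sizeof(SecMap));
        primary[chunk] = fresh;
    }
    primary[chunk]->state[off] = st;
}
```

The memory saving comes from mark_chunk_defined: large uniform regions (the common case after a big mmap) cost one pointer per chunk instead of a full secondary map each, which is how shadow use for OpenOffice writer could drop from ~200MB to ~50MB.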
--- valgrind/addrcheck/ac_main.c #1.75:1.76
@@ -147,26 +147,41 @@ static void ac_fpu_ACCESS_check_SLOWLY (
typedef
struct {
- UChar abits[8192];
+ UChar abits[SECONDARY_SIZE / 8];
}
AcSecMap;
-static AcSecMap* primary_map[ /*65536*/ 262144 ];
-static AcSecMap distinguished_secondary_map;
+static AcSecMap* primary_map[ /*PRIMARY_SIZE*/ PRIMARY_SIZE*4 ];
+static const AcSecMap distinguished_secondary_maps[2] = {
+ [ VGM_BIT_INVALID ] = { { [0 ... (SECONDARY_SIZE/8) - 1] = VGM_BYTE_INVALID } },
+ [ VGM_BIT_VALID ] = { { [0 ... (SECONDARY_SIZE/8) - 1] = VGM_BYTE_VALID } },
+};
+#define N_SECONDARY_MAPS (sizeof(distinguished_secondary_maps)/sizeof(*distinguished_secondary_maps))
+
+#define DSM_IDX(a) ((a) & 1)
+
+#define DSM(a) ((AcSecMap *)&distinguished_secondary_maps[DSM_IDX(a)])
+
+#define DSM_NOTADDR DSM(VGM_BIT_INVALID)
+#define DSM_ADDR DSM(VGM_BIT_VALID)
static void init_shadow_memory ( void )
{
- Int i;
+ Int i, a;
- for (i = 0; i < 8192; i++) /* Invalid address */
- distinguished_secondary_map.abits[i] = VGM_BYTE_INVALID;
+ /* check construction of the distinguished secondaries */
+ sk_assert(VGM_BIT_INVALID == 1);
+ sk_assert(VGM_BIT_VALID == 0);
+
+ for(a = 0; a <= 1; a++)
+ sk_assert(distinguished_secondary_maps[DSM_IDX(a)].abits[0] == BIT_EXPAND(a));
/* These entries gradually get overwritten as the used address
space expands. */
- for (i = 0; i < 65536; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = 0; i < PRIMARY_SIZE; i++)
+ primary_map[i] = DSM_NOTADDR;
/* These ones should never change; it's a bug in Valgrind if they do. */
- for (i = 65536; i < 262144; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ primary_map[i] = DSM_NOTADDR;
}
@@ -178,14 +193,12 @@ static void init_shadow_memory ( void )
static AcSecMap* alloc_secondary_map ( __attribute__ ((unused))
- Char* caller )
+ Char* caller,
+ const AcSecMap *prototype)
{
AcSecMap* map;
- UInt i;
PROF_EVENT(10);
- /* Mark all bytes as invalid access and invalid value. */
map = (AcSecMap *)VG_(shadow_alloc)(sizeof(AcSecMap));
- for (i = 0; i < 8192; i++)
- map->abits[i] = VGM_BYTE_INVALID; /* Invalid address */
+ VG_(memcpy)(map, prototype, sizeof(*map));
/* VG_(printf)("ALLOC_2MAP(%s)\n", caller ); */
@@ -313,13 +326,12 @@ void set_address_range_perms ( Addr a, S
sk_assert((a % 8) == 0 && len > 0);
- /* Once aligned, go fast. */
- for (; len >= 8; a += 8, len -= 8) {
+ /* Once aligned, go fast up to primary boundary. */
+ for (; (a & SECONDARY_MASK) && len >= 8; a += 8, len -= 8) {
PROF_EVENT(32);
- /* If we're setting the addressability to "invalid", and the
- secondary map is the distinguished_secondary_map, don't
- allocate a new secondary map, since the distinguished map is
- all-invalid anyway. */
- if (abyte8 == VGM_BYTE_INVALID && IS_DISTINGUISHED(a))
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit))
continue;
ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
@@ -329,9 +341,33 @@ void set_address_range_perms ( Addr a, S
}
- if (len == 0) {
- VGP_POPCC(VgpSetMem);
- return;
+ /* Now set whole secondary maps to the right distinguished value.
+
+ Note that if the primary already points to a non-distinguished
+ secondary, then don't replace the reference. That would just
+ leak memory.
+ */
+ for(; len >= SECONDARY_SIZE; a += SECONDARY_SIZE, len -= SECONDARY_SIZE) {
+ sm = primary_map[PM_IDX(a)];
+
+ if (IS_DISTINGUISHED_SM(sm))
+ primary_map[PM_IDX(a)] = DSM(example_a_bit);
+ else
+ VG_(memset)(sm->abits, abyte8, sizeof(sm->abits));
+ }
+
+ /* Now finished the remains. */
+ for (; len >= 8; a += 8, len -= 8) {
+ PROF_EVENT(32);
+
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit))
+ continue;
+ ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
+ sm->abits[sm_off >> 3] = abyte8;
}
- sk_assert((a % 8) == 0 && len > 0 && len < 8);
/* Finish the upper fragment. */
@@ -1141,13 +1177,15 @@ Bool SK_(expensive_sanity_check) ( void
Int i;
+#if 0
/* Make sure nobody changed the distinguished secondary. */
for (i = 0; i < 8192; i++)
if (distinguished_secondary_map.abits[i] != VGM_BYTE_INVALID)
return False;
+#endif
/* Make sure that the upper 3/4 of the primary map hasn't
been messed with. */
- for (i = 65536; i < 262144; i++)
- if (primary_map[i] != & distinguished_secondary_map)
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ if (primary_map[i] != DSM_NOTADDR)
return False;
--- valgrind/memcheck/mac_shared.h #1.31:1.32
@@ -193,6 +193,6 @@ extern UInt MAC_(event_ctr)[N_PROF_EVENT
/*------------------------------------------------------------*/
-/* expand 1 bit -> 32 */
-#define BIT_EXPAND(b) (~(((UInt)(b) & 1) - 1))
+/* expand 1 bit -> 8 */
+#define BIT_EXPAND(b) ((~(((UChar)(b) & 1) - 1)) & 0xFF)
#define SECONDARY_SHIFT 16
@@ -200,9 +200,12 @@ extern UInt MAC_(event_ctr)[N_PROF_EVENT
#define SECONDARY_MASK (SECONDARY_SIZE - 1)
+#define PRIMARY_SIZE (1 << (32 - SECONDARY_SHIFT))
+
#define SM_OFF(addr) ((addr) & SECONDARY_MASK)
#define PM_IDX(addr) ((addr) >> SECONDARY_SHIFT)
#define IS_DISTINGUISHED_SM(smap) \
- ((smap) == &distinguished_secondary_map)
+ ((smap) >= &distinguished_secondary_maps[0] && \
+ (smap) < &distinguished_secondary_maps[N_SECONDARY_MAPS])
#define IS_DISTINGUISHED(addr) (IS_DISTINGUISHED_SM(primary_map[PM_IDX(addr)]))
@@ -211,5 +214,5 @@ extern UInt MAC_(event_ctr)[N_PROF_EVENT
do { \
if (IS_DISTINGUISHED(addr)) { \
- primary_map[PM_IDX(addr)] = alloc_secondary_map(caller); \
+ primary_map[PM_IDX(addr)] = alloc_secondary_map(caller, primary_map[PM_IDX(addr)]); \
/* VG_(printf)("new 2map because of %p\n", addr); */ \
} \
--- valgrind/memcheck/mc_main.c #1.61:1.62
@@ -65,4 +65,8 @@
distinguished map.
+ There are actually 4 distinguished secondaries. These are used to
+ represent a memory range which is either not addressable (validity
+ doesn't matter), addressable+not valid, addressable+valid.
+
[...] lots of stuff deleted due to out of date-ness
@@ -109,29 +113,63 @@ static void mc_fpu_write_check_SLOWLY (
typedef
struct {
- UChar abits[8192];
- UChar vbyte[65536];
+ UChar abits[SECONDARY_SIZE/8];
+ UChar vbyte[SECONDARY_SIZE];
}
SecMap;
-static SecMap* primary_map[ /*65536*/ 262144 ];
-static SecMap distinguished_secondary_map;
+
+static SecMap* primary_map[ /*PRIMARY_SIZE*/ PRIMARY_SIZE*4 ];
+
+#define DSM_IDX(a, v) ((((a)&1) << 1) + ((v)&1))
+
+/* 4 secondary maps, but one is redundant (because the !addressable &&
+ valid state is meaningless) */
+static const SecMap distinguished_secondary_maps[4] = {
+#define INIT(a, v) \
+ [ DSM_IDX(a, v) ] = { { [0 ... (SECONDARY_SIZE/8)-1] = BIT_EXPAND(a) }, \
+ { [0 ... SECONDARY_SIZE-1] = BIT_EXPAND(a|v) } }
+ INIT(VGM_BIT_VALID, VGM_BIT_VALID),
+ INIT(VGM_BIT_VALID, VGM_BIT_INVALID),
+ INIT(VGM_BIT_INVALID, VGM_BIT_VALID),
+ INIT(VGM_BIT_INVALID, VGM_BIT_INVALID),
+#undef INIT
+};
+#define N_SECONDARY_MAPS (sizeof(distinguished_secondary_maps)/sizeof(*distinguished_secondary_maps))
+
+#define DSM(a,v) ((SecMap *)&distinguished_secondary_maps[DSM_IDX(a, v)])
+
+#define DSM_NOTADDR DSM(VGM_BIT_INVALID, VGM_BIT_INVALID)
+#define DSM_ADDR_NOTVALID DSM(VGM_BIT_VALID, VGM_BIT_INVALID)
+#define DSM_ADDR_VALID DSM(VGM_BIT_VALID, VGM_BIT_VALID)
static void init_shadow_memory ( void )
{
- Int i;
+ Int i, a, v;
- for (i = 0; i < SECONDARY_SIZE/8; i++) /* Invalid address */
- distinguished_secondary_map.abits[i] = VGM_BYTE_INVALID;
- for (i = 0; i < SECONDARY_SIZE; i++) /* Invalid Value */
- distinguished_secondary_map.vbyte[i] = VGM_BYTE_INVALID;
+ /* check construction of the 4 distinguished secondaries */
+ sk_assert(VGM_BIT_INVALID == 1);
+ sk_assert(VGM_BIT_VALID == 0);
+
+ for(a = 0; a <= 1; a++)
+ for(v = 0; v <= 1; v++) {
+ if (DSM(a,v)->abits[0] != BIT_EXPAND(a))
+ VG_(printf)("DSM(%d,%d)[%d]->abits[0] == %x not %x\n",
+ a,v,DSM_IDX(a,v),DSM(a,v)->abits[0], BIT_EXPAND(a));
+ if (DSM(a,v)->vbyte[0] != BIT_EXPAND(a|v))
+ VG_(printf)("DSM(%d,%d)[%d]->vbyte[0] == %x not %x\n",
+ a,v,DSM_IDX(a,v),DSM(a,v)->vbyte[0], BIT_EXPAND(a|v));
+
+ sk_assert(DSM(a,v)->abits[0] == BIT_EXPAND(a));
+ sk_assert(DSM(a,v)->vbyte[0] == BIT_EXPAND(v|a));
+ }
/* These entries gradually get overwritten as the used address
space expands. */
- for (i = 0; i < 65536; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = 0; i < PRIMARY_SIZE; i++)
+ primary_map[i] = DSM_NOTADDR;
/* These ones should never change; it's a bug in Valgrind if they do. */
- for (i = 65536; i < 262144; i++)
- primary_map[i] = &distinguished_secondary_map;
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ primary_map[i] = DSM_NOTADDR;
}
@@ -143,17 +181,13 @@ static void init_shadow_memory ( void )
static SecMap* alloc_secondary_map ( __attribute__ ((unused))
- Char* caller )
+ Char* caller,
+ const SecMap *prototype)
{
SecMap* map;
- UInt i;
PROF_EVENT(10);
- /* Mark all bytes as invalid access and invalid value. */
map = (SecMap *)VG_(shadow_alloc)(sizeof(SecMap));
- for (i = 0; i < 8192; i++)
- map->abits[i] = VGM_BYTE_INVALID; /* Invalid address */
- for (i = 0; i < 65536; i++)
- map->vbyte[i] = VGM_BYTE_INVALID; /* Invalid Value */
+ VG_(memcpy)(map, prototype, sizeof(*map));
/* VG_(printf)("ALLOC_2MAP(%s)\n", caller ); */
@@ -343,14 +377,14 @@ static void set_address_range_perms ( Ad
sk_assert((a % 8) == 0 && len > 0);
- /* Once aligned, go fast. */
- for (; len >= 8; a += 8, len -= 8) {
+ /* Now align to the next primary_map entry */
+ for (; (a & SECONDARY_MASK) && len >= 8; a += 8, len -= 8) {
PROF_EVENT(32);
- /* If we're setting the addressability to "invalid", and the
- secondary map is the distinguished_secondary_map, don't
- allocate a new secondary map, since the distinguished map is
- all-invalid anyway. */
- if (abyte8 == VGM_BYTE_INVALID && IS_DISTINGUISHED(a))
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit, example_v_bit))
continue;
+
ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
sm = primary_map[PM_IDX(a)];
@@ -361,9 +395,38 @@ static void set_address_range_perms ( Ad
}
- if (len == 0) {
- VGP_POPCC(VgpSetMem);
- return;
+ /* Now set whole secondary maps to the right distinguished value.
+
+ Note that if the primary already points to a non-distinguished
+ secondary, then don't replace the reference. That would just
+ leak memory.
+ */
+ for(; len >= SECONDARY_SIZE; a += SECONDARY_SIZE, len -= SECONDARY_SIZE) {
+ sm = primary_map[PM_IDX(a)];
+
+ if (IS_DISTINGUISHED_SM(sm))
+ primary_map[PM_IDX(a)] = DSM(example_a_bit, example_v_bit);
+ else {
+ VG_(memset)(sm->abits, abyte8, sizeof(sm->abits));
+ VG_(memset)(sm->vbyte, vbyte, sizeof(sm->vbyte));
+ }
+ }
+
+ /* Now finish off any remains */
+ for (; len >= 8; a += 8, len -= 8) {
+ PROF_EVENT(32);
+
+ /* If the primary is already pointing to a distinguished map
+ with the same properties as we're trying to set, then leave
+ it that way. */
+ if (primary_map[PM_IDX(a)] == DSM(example_a_bit, example_v_bit))
+ continue;
+
+ ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
+ sm->abits[sm_off >> 3] = abyte8;
+ ((UInt*)(sm->vbyte))[(sm_off >> 2) + 0] = vword4;
+ ((UInt*)(sm->vbyte))[(sm_off >> 2) + 1] = vword4;
}
- sk_assert((a % 8) == 0 && len > 0 && len < 8);
/* Finish the upper fragment. */
@@ -841,5 +904,5 @@ void MC_(helperc_STOREV4) ( Addr a, UInt
abits &= 15;
PROF_EVENT(61);
- if (abits == VGM_NIBBLE_VALID) {
+ if (!IS_DISTINGUISHED_SM(sm) && abits == VGM_NIBBLE_VALID) {
/* Handle common case quickly: a is suitably aligned, is mapped,
and is addressible. */
@@ -886,5 +949,5 @@ void MC_(helperc_STOREV2) ( Addr a, UInt
UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(63);
- if (sm->abits[a_off] == VGM_BYTE_VALID) {
+ if (!IS_DISTINGUISHED_SM(sm) && sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
UInt v_off = SM_OFF(a);
@@ -930,5 +993,5 @@ void MC_(helperc_STOREV1) ( Addr a, UInt
UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(65);
- if (sm->abits[a_off] == VGM_BYTE_VALID) {
+ if (!IS_DISTINGUISHED_SM(sm) && sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
UInt v_off = SM_OFF(a);
@@ -1516,5 +1579,8 @@ Bool SK_(expensive_sanity_check) ( void
Int i;
- /* Make sure nobody changed the distinguished secondary. */
+ /* Make sure nobody changed the distinguished secondary.
+ They're in read-only memory, so that would be hard.
+ */
+#if 0
for (i = 0; i < 8192; i++)
if (distinguished_secondary_map.abits[i] != VGM_BYTE_INVALID)
@@ -1524,9 +1590,10 @@ Bool SK_(expensive_sanity_check) ( void
if (distinguished_secondary_map.vbyte[i] != VGM_BYTE_INVALID)
return False;
+#endif
/* Make sure that the upper 3/4 of the primary map hasn't
been messed with. */
- for (i = 65536; i < 262144; i++)
- if (primary_map[i] != & distinguished_secondary_map)
+ for (i = PRIMARY_SIZE; i < PRIMARY_SIZE*4; i++)
+ if (primary_map[i] != DSM_NOTADDR)
return False;
--- valgrind/coregrind/vg_syscalls.c #1.248:1.249
@@ -174,4 +174,8 @@ void mmap_segment ( Addr a, SizeT len, U
{
UInt flags;
+ Bool rr, ww, xx;
+ rr = prot & VKI_PROT_READ;
+ ww = prot & VKI_PROT_WRITE;
+ xx = prot & VKI_PROT_EXEC;
flags = SF_MMAP;
@@ -188,12 +192,5 @@ void mmap_segment ( Addr a, SizeT len, U
VG_(map_fd_segment)(a, len, prot, flags, fd, offset, NULL);
- if (prot != VKI_PROT_NONE) {
- Bool rr, ww, xx;
- rr = prot & VKI_PROT_READ;
- ww = prot & VKI_PROT_WRITE;
- xx = prot & VKI_PROT_EXEC;
-
VG_TRACK( new_mem_mmap, a, len, rr, ww, xx );
- }
}
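The three-phase loop above (align up to a secondary-map boundary, point whole secondaries at a shared read-only distinguished map, then mop up the tail) can be sketched as a standalone model. This is a simplified illustration with tiny made-up sizes and names, not Valgrind's actual code:

```c
#include <assert.h>
#include <string.h>

/* Simplified model: a primary map of pointers to small "secondary"
   maps, with one shared distinguished map standing in for ranges that
   are uniformly 0xFF ("invalid").  All names and sizes are made up. */
#define SEC_SHIFT 4
#define SEC_SIZE  (1 << SEC_SHIFT)
#define SEC_MASK  (SEC_SIZE - 1)
#define PM_IDX(a) ((a) >> SEC_SHIFT)
#define SM_OFF(a) ((a) & SEC_MASK)

typedef struct { unsigned char byte[SEC_SIZE]; } SecMap;

static SecMap dsm_invalid;               /* all-0xFF distinguished map */
static SecMap *primary[8];
static SecMap pool[8]; static int pool_used;

static int is_distinguished(SecMap *sm) { return sm == &dsm_invalid; }

static SecMap *ensure_mappable(unsigned a) {
    if (is_distinguished(primary[PM_IDX(a)])) {
        SecMap *sm = &pool[pool_used++];      /* stands in for shadow_alloc */
        memcpy(sm, &dsm_invalid, sizeof *sm); /* copy the prototype */
        primary[PM_IDX(a)] = sm;
    }
    return primary[PM_IDX(a)];
}

static void set_range(unsigned a, unsigned len, unsigned char v) {
    /* Phase 1: advance to the next secondary-map boundary. */
    for (; (a & SEC_MASK) && len > 0; a++, len--)
        ensure_mappable(a)->byte[SM_OFF(a)] = v;
    /* Phase 2: whole secondaries; if the slot is still distinguished
       and the value matches the distinguished map, leave it alone. */
    for (; len >= SEC_SIZE; a += SEC_SIZE, len -= SEC_SIZE) {
        if (is_distinguished(primary[PM_IDX(a)]) && v == 0xFF)
            continue;                /* already points at the right DSM */
        memset(ensure_mappable(a)->byte, v, SEC_SIZE);
    }
    /* Phase 3: the remaining tail. */
    for (; len > 0; a++, len--)
        ensure_mappable(a)->byte[SM_OFF(a)] = v;
}
```

The payoff modelled here is the same as in the patch: marking an already-invalid whole secondary as invalid allocates nothing.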
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 01:01:20
|
CVS commit by fitzhardinge:
Parameterise the primary/secondary map sizes in memcheck+addrcheck.
M +26 -33 addrcheck/ac_main.c 1.75
M +12 -2 memcheck/mac_shared.h 1.31
M +57 -64 memcheck/mc_main.c 1.61
--- valgrind/addrcheck/ac_main.c #1.74:1.75
@@ -198,6 +198,6 @@ static AcSecMap* alloc_secondary_map ( _
static __inline__ UChar get_abit ( Addr a )
{
- AcSecMap* sm = primary_map[a >> 16];
- UInt sm_off = a & 0xFFFF;
+ AcSecMap* sm = primary_map[PM_IDX(a)];
+ UInt sm_off = SM_OFF(a);
PROF_EVENT(20);
# if 0
@@ -216,6 +216,6 @@ static /* __inline__ */ void set_abit (
PROF_EVENT(22);
ENSURE_MAPPABLE(a, "set_abit");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
if (abit)
BITARR_SET(sm->abits, sm_off);
@@ -236,6 +236,6 @@ static __inline__ UChar get_abits4_ALIGN
sk_assert(IS_ALIGNED4_ADDR(a));
# endif
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
abits8 = sm->abits[sm_off >> 3];
abits8 >>= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
@@ -283,12 +283,5 @@ void set_address_range_perms ( Addr a, S
/* In order that we can charge through the address space at 8
bytes/main-loop iteration, make up some perms. */
- abyte8 = (example_a_bit << 7)
- | (example_a_bit << 6)
- | (example_a_bit << 5)
- | (example_a_bit << 4)
- | (example_a_bit << 3)
- | (example_a_bit << 2)
- | (example_a_bit << 1)
- | (example_a_bit << 0);
+ abyte8 = BIT_EXPAND(example_a_bit);
# ifdef VG_DEBUG_MEMORY
@@ -331,6 +324,6 @@ void set_address_range_perms ( Addr a, S
continue;
ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = abyte8;
}
@@ -384,6 +377,6 @@ void make_aligned_word_noaccess(Addr a)
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
mask = 0x0F;
mask <<= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
@@ -402,6 +395,6 @@ void make_aligned_word_accessible(Addr a
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_accessible");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
mask = 0x0F;
mask <<= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
@@ -421,6 +414,6 @@ void make_aligned_doubleword_accessible(
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_accessible");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_VALID;
VGP_POPCC(VgpESPAdj);
@@ -435,6 +428,6 @@ void make_aligned_doubleword_noaccess(Ad
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_INVALID;
VGP_POPCC(VgpESPAdj);
@@ -666,5 +659,5 @@ static __inline__ void ac_helperc_ACCESS
UInt sec_no = rotateRight16(a) & 0x3FFFF;
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
UChar abits = sm->abits[a_off];
abits >>= (a & 4);
@@ -689,5 +682,5 @@ static __inline__ void ac_helperc_ACCESS
UInt sec_no = rotateRight16(a) & 0x1FFFF;
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(67);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
@@ -708,5 +701,5 @@ static __inline__ void ac_helperc_ACCESS
UInt sec_no = shiftRight16(a);
AcSecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(68);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
@@ -863,6 +856,6 @@ void ac_fpu_ACCESS_check ( Addr addr, Si
PROF_EVENT(91);
/* Properly aligned. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow4;
@@ -880,12 +873,12 @@ void ac_fpu_ACCESS_check ( Addr addr, Si
addr4 = addr + 4;
/* First half. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* First half properly aligned and addressible. */
/* Second half. */
- sm = primary_map[addr4 >> 16];
- sm_off = addr4 & 0xFFFF;
+ sm = primary_map[PM_IDX(addr4)];
+ sm_off = SM_OFF(addr4);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
--- valgrind/memcheck/mac_shared.h #1.30:1.31
@@ -193,13 +193,23 @@ extern UInt MAC_(event_ctr)[N_PROF_EVENT
/*------------------------------------------------------------*/
+/* expand 1 bit -> 32 */
+#define BIT_EXPAND(b) (~(((UInt)(b) & 1) - 1))
+
+#define SECONDARY_SHIFT 16
+#define SECONDARY_SIZE (1 << SECONDARY_SHIFT)
+#define SECONDARY_MASK (SECONDARY_SIZE - 1)
+
+#define SM_OFF(addr) ((addr) & SECONDARY_MASK)
+#define PM_IDX(addr) ((addr) >> SECONDARY_SHIFT)
+
#define IS_DISTINGUISHED_SM(smap) \
((smap) == &distinguished_secondary_map)
-#define IS_DISTINGUISHED(addr) (IS_DISTINGUISHED_SM(primary_map[(addr) >> 16]))
+#define IS_DISTINGUISHED(addr) (IS_DISTINGUISHED_SM(primary_map[PM_IDX(addr)]))
#define ENSURE_MAPPABLE(addr,caller) \
do { \
if (IS_DISTINGUISHED(addr)) { \
- primary_map[(addr) >> 16] = alloc_secondary_map(caller); \
+ primary_map[PM_IDX(addr)] = alloc_secondary_map(caller); \
/* VG_(printf)("new 2map because of %p\n", addr); */ \
} \
--- valgrind/memcheck/mc_main.c #1.60:1.61
@@ -121,7 +121,7 @@ static void init_shadow_memory ( void )
Int i;
- for (i = 0; i < 8192; i++) /* Invalid address */
+ for (i = 0; i < SECONDARY_SIZE/8; i++) /* Invalid address */
distinguished_secondary_map.abits[i] = VGM_BYTE_INVALID;
- for (i = 0; i < 65536; i++) /* Invalid Value */
+ for (i = 0; i < SECONDARY_SIZE; i++) /* Invalid Value */
distinguished_secondary_map.vbyte[i] = VGM_BYTE_INVALID;
@@ -166,6 +166,6 @@ static SecMap* alloc_secondary_map ( __a
static __inline__ UChar get_abit ( Addr a )
{
- SecMap* sm = primary_map[a >> 16];
- UInt sm_off = a & 0xFFFF;
+ SecMap* sm = primary_map[PM_IDX(a)];
+ UInt sm_off = SM_OFF(a);
PROF_EVENT(20);
# if 0
@@ -180,6 +180,6 @@ static __inline__ UChar get_abit ( Addr
static __inline__ UChar get_vbyte ( Addr a )
{
- SecMap* sm = primary_map[a >> 16];
- UInt sm_off = a & 0xFFFF;
+ SecMap* sm = primary_map[PM_IDX(a)];
+ UInt sm_off = SM_OFF(a);
PROF_EVENT(21);
# if 0
@@ -197,6 +197,6 @@ static /* __inline__ */ void set_abit (
PROF_EVENT(22);
ENSURE_MAPPABLE(a, "set_abit");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
if (abit)
BITARR_SET(sm->abits, sm_off);
@@ -211,6 +211,6 @@ static __inline__ void set_vbyte ( Addr
PROF_EVENT(23);
ENSURE_MAPPABLE(a, "set_vbyte");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->vbyte[sm_off] = vbyte;
}
@@ -228,6 +228,6 @@ static __inline__ UChar get_abits4_ALIGN
sk_assert(IS_ALIGNED4_ADDR(a));
# endif
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
abits8 = sm->abits[sm_off >> 3];
abits8 >>= (a & 4 /* 100b */); /* a & 4 is either 0 or 4 */
@@ -238,6 +238,6 @@ static __inline__ UChar get_abits4_ALIGN
static UInt __inline__ get_vbytes4_ALIGNED ( Addr a )
{
- SecMap* sm = primary_map[a >> 16];
- UInt sm_off = a & 0xFFFF;
+ SecMap* sm = primary_map[PM_IDX(a)];
+ UInt sm_off = SM_OFF(a);
PROF_EVENT(25);
# ifdef VG_DEBUG_MEMORY
@@ -253,6 +253,6 @@ static void __inline__ set_vbytes4_ALIGN
UInt sm_off;
ENSURE_MAPPABLE(a, "set_vbytes4_ALIGNED");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
PROF_EVENT(23);
# ifdef VG_DEBUG_MEMORY
@@ -311,12 +311,5 @@ static void set_address_range_perms ( Ad
/* In order that we can charge through the address space at 8
bytes/main-loop iteration, make up some perms. */
- abyte8 = (example_a_bit << 7)
- | (example_a_bit << 6)
- | (example_a_bit << 5)
- | (example_a_bit << 4)
- | (example_a_bit << 3)
- | (example_a_bit << 2)
- | (example_a_bit << 1)
- | (example_a_bit << 0);
+ abyte8 = BIT_EXPAND(example_a_bit);
vword4 = (vbyte << 24) | (vbyte << 16) | (vbyte << 8) | vbyte;
@@ -361,6 +354,6 @@ static void set_address_range_perms ( Ad
continue;
ENSURE_MAPPABLE(a, "set_address_range_perms(fast)");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = abyte8;
((UInt*)(sm->vbyte))[(sm_off >> 2) + 0] = vword4;
@@ -424,6 +417,6 @@ void make_aligned_word_writable(Addr a)
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_writable");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
((UInt*)(sm->vbyte))[sm_off >> 2] = VGM_WORD_INVALID;
mask = 0x0F;
@@ -443,6 +436,6 @@ void make_aligned_word_noaccess(Addr a)
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_word_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
((UInt*)(sm->vbyte))[sm_off >> 2] = VGM_WORD_INVALID;
mask = 0x0F;
@@ -462,6 +455,6 @@ void make_aligned_doubleword_writable(Ad
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_writable");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_VALID;
((UInt*)(sm->vbyte))[(sm_off >> 2) + 0] = VGM_WORD_INVALID;
@@ -478,6 +471,6 @@ void make_aligned_doubleword_noaccess(Ad
VGP_PUSHCC(VgpESPAdj);
ENSURE_MAPPABLE(a, "make_aligned_doubleword_noaccess");
- sm = primary_map[a >> 16];
- sm_off = a & 0xFFFF;
+ sm = primary_map[PM_IDX(a)];
+ sm_off = SM_OFF(a);
sm->abits[sm_off >> 3] = VGM_BYTE_INVALID;
((UInt*)(sm->vbyte))[(sm_off >> 2) + 0] = VGM_WORD_INVALID;
@@ -818,5 +811,5 @@ UInt MC_(helperc_LOADV4) ( Addr a )
UInt sec_no = rotateRight16(a) & 0x3FFFF;
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
UChar abits = sm->abits[a_off];
abits >>= (a & 4);
@@ -826,5 +819,5 @@ UInt MC_(helperc_LOADV4) ( Addr a )
/* Handle common case quickly: a is suitably aligned, is mapped,
and is addressible. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
return ((UInt*)(sm->vbyte))[ v_off >> 2 ];
} else {
@@ -843,5 +836,5 @@ void MC_(helperc_STOREV4) ( Addr a, UInt
UInt sec_no = rotateRight16(a) & 0x3FFFF;
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
UChar abits = sm->abits[a_off];
abits >>= (a & 4);
@@ -851,5 +844,5 @@ void MC_(helperc_STOREV4) ( Addr a, UInt
/* Handle common case quickly: a is suitably aligned, is mapped,
and is addressible. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
((UInt*)(sm->vbyte))[ v_off >> 2 ] = vbytes;
} else {
@@ -868,9 +861,9 @@ UInt MC_(helperc_LOADV2) ( Addr a )
UInt sec_no = rotateRight16(a) & 0x1FFFF;
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(62);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
return 0xFFFF0000
|
@@ -891,9 +884,9 @@ void MC_(helperc_STOREV2) ( Addr a, UInt
UInt sec_no = rotateRight16(a) & 0x1FFFF;
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(63);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
((UShort*)(sm->vbyte))[ v_off >> 1 ] = vbytes & 0x0000FFFF;
} else {
@@ -912,9 +905,9 @@ UInt MC_(helperc_LOADV1) ( Addr a )
UInt sec_no = shiftRight16(a);
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(64);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
return 0xFFFFFF00
|
@@ -935,9 +928,9 @@ void MC_(helperc_STOREV1) ( Addr a, UInt
UInt sec_no = shiftRight16(a);
SecMap* sm = primary_map[sec_no];
- UInt a_off = (a & 0xFFFF) >> 3;
+ UInt a_off = (SM_OFF(a)) >> 3;
PROF_EVENT(65);
if (sm->abits[a_off] == VGM_BYTE_VALID) {
/* Handle common case quickly. */
- UInt v_off = a & 0xFFFF;
+ UInt v_off = SM_OFF(a);
((UChar*)(sm->vbyte))[ v_off ] = vbytes & 0x000000FF;
} else {
@@ -1180,10 +1173,10 @@ void MC_(fpu_read_check) ( Addr addr, Si
PROF_EVENT(81);
/* Properly aligned. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow4;
/* Properly aligned and addressible. */
- v_off = addr & 0xFFFF;
+ v_off = SM_OFF(addr);
if (((UInt*)(sm->vbyte))[ v_off >> 2 ] != VGM_WORD_VALID)
goto slow4;
@@ -1201,19 +1194,19 @@ void MC_(fpu_read_check) ( Addr addr, Si
addr4 = addr + 4;
/* First half. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* First half properly aligned and addressible. */
- v_off = addr & 0xFFFF;
+ v_off = SM_OFF(addr);
if (((UInt*)(sm->vbyte))[ v_off >> 2 ] != VGM_WORD_VALID)
goto slow8;
/* Second half. */
- sm = primary_map[addr4 >> 16];
- sm_off = addr4 & 0xFFFF;
+ sm = primary_map[PM_IDX(addr4)];
+ sm_off = SM_OFF(addr4);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* Second half properly aligned and addressible. */
- v_off = addr4 & 0xFFFF;
+ v_off = SM_OFF(addr4);
if (((UInt*)(sm->vbyte))[ v_off >> 2 ] != VGM_WORD_VALID)
goto slow8;
@@ -1268,10 +1261,10 @@ void MC_(fpu_write_check) ( Addr addr, S
PROF_EVENT(86);
/* Properly aligned. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow4;
/* Properly aligned and addressible. Make valid. */
- v_off = addr & 0xFFFF;
+ v_off = SM_OFF(addr);
((UInt*)(sm->vbyte))[ v_off >> 2 ] = VGM_WORD_VALID;
return;
@@ -1287,18 +1280,18 @@ void MC_(fpu_write_check) ( Addr addr, S
addr4 = addr + 4;
/* First half. */
- sm = primary_map[addr >> 16];
- sm_off = addr & 0xFFFF;
+ sm = primary_map[PM_IDX(addr)];
+ sm_off = SM_OFF(addr);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* First half properly aligned and addressible. Make valid. */
- v_off = addr & 0xFFFF;
+ v_off = SM_OFF(addr);
((UInt*)(sm->vbyte))[ v_off >> 2 ] = VGM_WORD_VALID;
/* Second half. */
- sm = primary_map[addr4 >> 16];
- sm_off = addr4 & 0xFFFF;
+ sm = primary_map[PM_IDX(addr4)];
+ sm_off = SM_OFF(addr4);
a_off = sm_off >> 3;
if (sm->abits[a_off] != VGM_BYTE_VALID) goto slow8;
/* Second half properly aligned and addressible. */
- v_off = addr4 & 0xFFFF;
+ v_off = SM_OFF(addr4);
((UInt*)(sm->vbyte))[ v_off >> 2 ] = VGM_WORD_VALID;
/* Properly aligned, addressible and with valid data. */
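The PM_IDX/SM_OFF/BIT_EXPAND macros this commit adds to mac_shared.h are easy to sanity-check in isolation. A minimal sketch, assuming UInt is 32 bits (uint32_t stands in for it here):

```c
#include <assert.h>
#include <stdint.h>

/* The parameterised address split from mac_shared.h. */
#define BIT_EXPAND(b)   (~(((uint32_t)(b) & 1) - 1))  /* 1 bit -> 32 bits */
#define SECONDARY_SHIFT 16
#define SECONDARY_SIZE  (1u << SECONDARY_SHIFT)
#define SECONDARY_MASK  (SECONDARY_SIZE - 1)
#define SM_OFF(addr)    ((addr) & SECONDARY_MASK)
#define PM_IDX(addr)    ((addr) >> SECONDARY_SHIFT)

/* Reassembling the two halves must give back the original address. */
static uint32_t reassemble(uint32_t a) {
    return (PM_IDX(a) << SECONDARY_SHIFT) | SM_OFF(a);
}
```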
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 01:00:29
|
CVS commit by fitzhardinge:
Add missing <pthread.h>
M +1 -0 vg_intercept.c 1.31
--- valgrind/coregrind/vg_intercept.c #1.30:1.31
@@ -41,4 +41,5 @@
#include "core.h"
#include <unistd.h>
+#include <pthread.h>
#include <dlfcn.h>
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 00:50:29
|
CVS commit by nethercote:
Remove some out of date stuff:
- vg_intercept.c no longer needs to be generated using gen_intercepts.pl.
- Section 2.8 of the manual (about threads) has been greatly streamlined.
M +4 -8 Makefile.am 1.109
M +36 -64 vg_intercept.c 1.30
M +5 -43 docs/coregrind_core.html 1.37
R vg_intercept.c.base 1.4
--- valgrind/coregrind/Makefile.am #1.108:1.109
@@ -31,8 +31,8 @@
valgrind.vs \
gen_toolint.pl toolfuncs.def \
- gen_intercepts.pl vg_replace_malloc.c.base vg_intercept.c.base
+ gen_intercepts.pl vg_replace_malloc.c.base
BUILT_SOURCES = vg_toolint.c vg_toolint.h
-CLEANFILES = vg_toolint.c vg_toolint.h vg_replace_malloc.c vg_intercept.c
+CLEANFILES = vg_toolint.c vg_toolint.h vg_replace_malloc.c
valgrind_SOURCES = \
@@ -56,4 +56,5 @@
vg_hashtable.c \
vg_instrument.c \
+ vg_intercept.c \
vg_main.c \
vg_malloc2.c \
@@ -109,8 +110,4 @@
-vg_intercept.c: $(srcdir)/gen_intercepts.pl $(srcdir)/vg_intercept.c.base
- rm -f $@
- $(PERL) $(srcdir)/gen_intercepts.pl < $(srcdir)/vg_intercept.c.base > $@
-
vg_replace_malloc.c: $(srcdir)/gen_intercepts.pl $(srcdir)/vg_replace_malloc.c.base
rm -f $@
@@ -130,6 +127,5 @@
$(PERL) $(srcdir)/gen_toolint.pl struct < $(srcdir)/toolfuncs.def >> $@ || rm -f $@
-vg_inject_so_SOURCES = \
- vg_intercept.c
+vg_inject_so_SOURCES =
vg_inject_so_CFLAGS = $(AM_CFLAGS) -fpic
vg_inject_so_LDADD = -ldl
--- valgrind/coregrind/docs/coregrind_core.html #1.36:1.37
@@ -1023,19 +1023,9 @@
<h3>2.8 Support for POSIX Pthreads</h3>
-Valgrind supports programs which use POSIX pthreads. Getting this to work was
-technically challenging but it all works well enough for significant threaded
-applications to work.
-<p>
-It works as follows: threaded apps are (dynamically) linked against
-<code>libpthread.so</code>. Usually this is the one installed with
-your Linux distribution. Valgrind, however, supplies its own
-<code>libpthread.so</code> and automatically connects your program to
-it instead.
-<p>
-The fake <code>libpthread.so</code> and Valgrind cooperate to
-implement a user-space pthreads package. This approach avoids the
-horrible implementation problems of implementing a truly
-multiprocessor version of Valgrind, but it does mean that threaded
-apps run only on one CPU, even if you have a multiprocessor machine.
+Valgrind supports programs which use POSIX pthreads. However, it runs
+multi-threaded programs in such a way that only one thread runs at a time.
+This approach avoids the horrible implementation problems of implementing a
+truly multiprocessor version of Valgrind, but it does mean that threaded
+apps only utilise one CPU, even if you have a multiprocessor machine.
<p>
Valgrind schedules your threads in a round-robin fashion, with all
@@ -1046,32 +1036,4 @@
if you have some kind of concurrency, critical race, locking, or
similar, bugs.
-<p>
-As of the Valgrind-1.0 release, the state of pthread support was as follows:
-<ul>
-<li>Mutexes, condition variables, thread-specific data,
- <code>pthread_once</code>, reader-writer locks, semaphores,
- cleanup stacks, cancellation and thread detaching currently work.
- Various attribute-like calls are handled but ignored; you get a
- warning message.
-<p>
-<li>Currently the following syscalls are thread-safe (nonblocking):
- <code>write</code> <code>read</code> <code>nanosleep</code>
- <code>sleep</code> <code>select</code> <code>poll</code>
- <code>recvmsg</code> and
- <code>accept</code>.
-<p>
-<li>Signals in pthreads are now handled properly(ish):
- <code>pthread_sigmask</code>, <code>pthread_kill</code>,
- <code>sigwait</code> and <code>raise</code> are now implemented.
- Each thread has its own signal mask, as POSIX requires.
- It's a bit kludgey -- there's a system-wide pending signal set,
- rather than one for each thread. But hey.
-</ul>
-
-As of 18 May 02, the following threaded programs now work fine on my
-RedHat 7.2 box: Opera 6.0Beta2, KNode in KDE 3.0, Mozilla-0.9.2.1 and
-Galeon-0.11.3, both as supplied with RedHat 7.2. Also Mozilla 1.0RC2.
-OpenOffice 1.0. MySQL 3.something (the current stable release).
-
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 00:35:17
|
Naveen Kumar wrote:
>Hi,
> Why is stage1 linked to be a static executable ? It
>didn't seem to be so during 2.2.0. The reason I ask is
>that on x86-solaris there doesnt seem to be any auxv
>info for static executables.
>
It's static to make sure it is all together in one place. If it were
dynamic, then its libraries might be placed where we want to put stage2.
Can you just make up a complete auxv? It doesn't really need to see the
kernel's one.
> I was initially making
>changes off 2.2.0 for x86-solaris but decided to see
>if I could track cvs latest. It wasnt too difficult
>and within a day I am at the same point that I was at
>for 2.2.0. The platform related partitioning changes
>seems to be pretty good.
>
>
Good.
>The system call number goes in %eax and the lcall does
>the system call. Return value is in %eax however if
>the condition flag is set then this signals an error
>and %eax contains the error number(positive value). My
>question is how do I convey all this to the calling
>function ? Do I move the error value to errno and then
>use vk_(is_kerror)(errno) to decide what value the
>calling function returns ?
>
>
BSD seems to have a similar (identical?) syscall convention. I haven't
really thought about it, but we could change do_syscall to something
like "int VG_(do_syscall)(int *ret, int syscall, ...)", and have it
return 0 if OK or 1 on error; *ret would always get the syscall return
value. Does Solaris ever return 64 bit results in eax:edx?
J
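A plain-C model of the two conventions in play: the proposal sketched here (return a separate error flag, with *ret always holding the raw result), plus the Linux in-band error range mentioned elsewhere in this thread. Everything below is a hypothetical sketch, not the interface Valgrind actually adopted:

```c
#include <assert.h>

/* Linux-style: the raw result is an error iff it falls in the
   range quoted in this thread, -4096 <= ret <= -1. */
static int is_kerror(long res) { return res >= -4096 && res <= -1; }

/* Proposed portable shape: the return value reports failure (on
   Solaris/BSD the carry flag plays this role), and *ret always
   receives the raw result - a value on success, an errno-style
   positive number on failure.  Names are hypothetical. */
static int do_syscall_model(long *ret, long raw_result, int carry_set) {
    *ret = raw_result;
    return carry_set ? 1 : 0;
}
```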
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 00:29:48
|
CVS commit by nethercote:
Remove now-irrelevant FAQ.
M +0 -63 FAQ.txt 1.24
--- valgrind/FAQ.txt #1.23:1.24
@@ -126,67 +126,4 @@
might not if the jump happens to land in addressable memory.
------------------------------------------------------------------
-
-3.4. My program dies like this:
-
- error: /lib/librt.so.1: symbol __pthread_clock_settime, version
- GLIBC_PRIVATE not defined in file libpthread.so.0 with link time
- reference
-
-This is a total swamp. Nevertheless there is a way out. It's a problem
-which is not easy to fix. Really the problem is that /lib/librt.so.1
-refers to some symbols __pthread_clock_settime and
-__pthread_clock_gettime in /lib/libpthread.so which are not intended to
-be exported, ie they are private.
-
-Best solution is to ensure your program does not use /lib/librt.so.1.
-
-However .. since you're probably not using it directly, or even
-knowingly, that's hard to do. You might instead be able to fix it by
-playing around with coregrind/vg_libpthread.vs. Things to try:
-
-Remove this
-
- GLIBC_PRIVATE {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- };
-
-or maybe remove this
-
- GLIBC_2.2.3 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
-or maybe add this
-
- GLIBC_2.2.4 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
- GLIBC_2.2.5 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
-or some combination of the above. After each change you need to delete
-coregrind/libpthread.so and do make && make install.
-
-I just don't know if any of the above will work. If you can find a
-solution which works, I would be interested to hear it.
-
-To which someone replied:
-
- I deleted this:
-
- GLIBC_2.2.3 {
- __pthread_clock_gettime;
- __pthread_clock_settime;
- } GLIBC_2.2;
-
- and it worked.
-
-----------------------------------------------------------------
|
|
From: Nicholas N. <nj...@cs...> - 2005-02-19 00:25:00
|
On Fri, 18 Feb 2005, Naveen Kumar wrote:
> Why is stage1 linked to be a static executable ? It
> didn't seem to be so during 2.2.0. The reason I ask is
> that on x86-solaris there doesnt seem to be any auxv
> info for static executables. I was initially making
> changes off 2.2.0 for x86-solaris but decided to see
> if I could track cvs latest. It wasnt too difficult
> and within a day I am at the same point that I was at
> for 2.2.0. The platform related partitioning changes
> seems to be pretty good.
I don't know why it is a static executable, but I would say that
tracking CVS is a very good idea.
> On another note I was wondering if anyone could
> suggest a way of implementing vg_(do_syscall) on
> x86-solaris. Currently my implementation is something
> like this
>
> vg_(do_syscall)
>     popl %edx
>     popl %eax
>     pushl %edx
>     lcall $0x27, $0x0
>     push (%esp)
>     ret
>
> The system call number goes in %eax and the lcall does
> the system call. Return value is in %eax however if
> the condition flag is set then this signals an error
> and %eax contains the error number(positive value). My
> question is how do I convey all this to the calling
> function ? Do I move the error value to errno and then
> use vk_(is_kerror)(errno) to decide what value the
> calling function returns ?
I believe that on x86/Linux %eax gets the return value, and you can
tell if it's an error if the value is in the range -4096 <= ret <= -1,
and that the current Valgrind code assumes this when using
VG_(do_syscall)(). So if that's not true for Solaris then perhaps the
way VG_(do_syscall)() works needs to change a little, eg. return 2
values, one for the return value, and one a boolean that indicates
whether it's an error code. But I'm not certain about that.
N
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 00:20:55
|
CVS commit by fitzhardinge:
Print stats about shadow memory usage with -v -v.
M +2 -0 core.h 1.86
M +2 -0 vg_main.c 1.251
M +9 -1 vg_memory.c 1.90
--- valgrind/coregrind/vg_memory.c #1.89:1.90
@@ -910,7 +910,7 @@ void VG_(init_shadow_range)(Addr p, UInt
}
+static Addr shadow_alloc = 0;
void *VG_(shadow_alloc)(UInt size)
{
- static Addr shadow_alloc = 0;
void *ret;
@@ -941,4 +941,12 @@ void *VG_(shadow_alloc)(UInt size)
}
+void VG_(print_shadow_stats)()
+{
+ SizeT used = (shadow_alloc ? shadow_alloc : VG_(shadow_base)) - VG_(shadow_base);
+ SizeT total = VG_(shadow_end) - VG_(shadow_base);
+
+ VG_(message)(Vg_DebugMsg, "Total shadow reserved: %d Mbytes, %d Mbytes used (%d%%)",
+ total / (1024*1024), used / (1024*1024), (UInt)(total ? (used * 100ull / total) : 0));
+}
/*--------------------------------------------------------------------*/
--- valgrind/coregrind/core.h #1.85:1.86
@@ -1257,4 +1257,6 @@ extern Bool VG_(sanity_check_memory)(voi
extern const Char *VG_(prot_str)(UInt prot);
+extern void VG_(print_shadow_stats)();
+
/* ---------------------------------------------------------------------
Exports of vg_syscalls.c
--- valgrind/coregrind/vg_main.c #1.250:1.251
@@ -207,4 +207,6 @@ static void print_all_stats ( void )
VG_(print_all_arena_stats)();
VG_(message)(Vg_DebugMsg, "");
+ VG_(print_shadow_stats)();
+ VG_(message)(Vg_DebugMsg, "");
VG_(message)(Vg_DebugMsg,
"------ Valgrind's ExeContext management stats follow ------" );
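A side note on the percentage computation in this patch: the 100ull literal promotes the multiply to 64 bits, which matters once 'used' exceeds UINT_MAX/100 (about 42MB). A small demonstration of the difference, with illustrative helper names:

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit multiply wraps for large 'used'; the 64-bit intermediate,
   as in (used * 100ull / total), does not. */
static uint32_t pct32(uint32_t used, uint32_t total) {
    return used * 100u / total;                 /* can wrap mod 2^32 */
}
static uint32_t pct64(uint32_t used, uint32_t total) {
    return (uint32_t)(used * 100ull / total);   /* safe */
}
```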
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 00:18:13
|
CVS commit by fitzhardinge:
Remove now unnecessary --num-callers=4.
M +0 -2 vg_regtest.in 1.31
--- valgrind/tests/vg_regtest.in #1.30:1.31
@@ -276,6 +276,4 @@
}
- # *.exp have been generated assuming this
- $vgopts = '--num-callers=4 '.$vgopts;
printf("%-16s valgrind $vgopts $prog $args\n", "$name:");
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 00:17:41
|
CVS commit by fitzhardinge:
Implement the "into" instruction. Has been seen in the wild (apparently
generated by some Java compiler or something).
M +15 -8 coregrind/vg_to_ucode.c 1.155
M +5 -4 none/tests/faultstatus.c 1.4
M +1 -3 none/tests/faultstatus.stderr.exp 1.6
--- valgrind/coregrind/vg_to_ucode.c #1.154:1.155
@@ -5526,8 +5526,15 @@ static Addr disInstr ( UCodeBlock* cb, A
case 0xCC: /* INT3 - breakpoint */
case 0xCD: /* INT imm8 */
- if (opc == 0xCD)
- d32 = getUChar(eip++);
- else
- d32 = 3;
+ case 0xCE: /* INTo */
+ d32 = 0; /* shut up, gcc */
+ switch(opc) {
+ case 0xCC: d32 = 3; break;
+ case 0xCD: d32 = getUChar(eip++); break;
+ case 0xCE:
+ /* Conditional trap on overflow */
+ jcc_lit(cb, eip, CondNO);
+ d32 = 4; /* #OF */
+ break;
+ }
/* It's important that all ArchRegs carry their up-to-date value
@@ -5546,8 +5553,8 @@ static Addr disInstr ( UCodeBlock* cb, A
LAST_UINSTR(cb).jmpkind = JmpSyscall;
- if (opc == 0xCD) {
- DIP("int $0x%02x\n", d32);
- } else {
- DIP("int3\n");
+ switch(d32) {
+ case 3: DIP("int3\n"); break;
+ case 4: DIP("intO\n"); break;
+ default: DIP("int $0x%02x\n", d32); break;
}
break;
--- valgrind/none/tests/faultstatus.stderr.exp #1.5:1.6
@@ -7,7 +7,5 @@
Test 6: PASS
Test 7: PASS
-Test 8: disInstr: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........
- at 0x........: test8 (faultstatus.c:118)
- FAIL: expected signal 11, not 4
+Test 8: PASS
Test 9: disInstr: unhandled instruction bytes: 0x........ 0x........ 0x........ 0x........
at 0x........: test9 (faultstatus.c:127)
--- valgrind/none/tests/faultstatus.c #1.3:1.4
@@ -164,9 +165,9 @@ int main()
T(6, SIGTRAP, 128, 0), /* TRAP_BRKPT? */
T(7, SIGSEGV, 128, 0),
-
- /* These 2 are an expected failure - Valgrind
- doesn't implement their instructions, and
- so issues a SIGILL instead. */
T(8, SIGSEGV, 128, 0),
+
+ /* This is an expected failure - Valgrind
+ doesn't implement the BOUND instruction,
+ and so issues a SIGILL instead. */
T(9, SIGSEGV, 128, 0),
#endif /* __i386__ */
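The semantics being implemented above: 'into' traps to vector 4 (#OF) only when the overflow flag is set, which the patch models as a conditional jump guarding the trap. As a rough illustration, the OF condition for 32-bit signed addition can be computed in portable C:

```c
#include <assert.h>
#include <limits.h>

/* OF after a+b is set when both operands share a sign that the
   result doesn't; computed here via unsigned arithmetic to stay
   free of signed-overflow UB. */
static int add_sets_of(int a, int b) {
    unsigned ua = (unsigned)a, ub = (unsigned)b, ur = ua + ub;
    return (int)((ua ^ ur) & (ub ^ ur)) < 0;
}
```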
|
|
From: Jeremy F. <je...@go...> - 2005-02-19 00:16:48
CVS commit by fitzhardinge:
Implement sys_clock_nanosleep.
M +1 -0 core.h 1.85
M +15 -0 vg_syscalls.c 1.248
M +1 -1 x86-linux/syscalls.c 1.22
--- valgrind/coregrind/vg_syscalls.c #1.247:1.248
@@ -5885,4 +5885,19 @@ POST(sys_clock_getres)
}
+PRE(sys_clock_nanosleep, MayBlock)
+{
+ PRINT("sys_clock_nanosleep( %d, %x, %p, %p )", arg1, arg2, arg3, arg4);
+ PRE_REG_READ4(long, "clock_nanosleep", vki_clockid_t, which_clock, int, flags,
+ const struct vki_timespec *, rqtp, struct vki_timespec *, rmtp);
+ SYS_PRE_MEM_READ("clock_nanosleep(rqtp)", arg3, sizeof(struct vki_timespec));
+ if (arg4)
+ SYS_PRE_MEM_WRITE("clock_nanosleep(rmtp)", arg4, sizeof(struct vki_timespec));
+}
+
+POST(sys_clock_nanosleep)
+{
+ if (arg4)
+ POST_MEM_WRITE(arg4, sizeof(struct vki_timespec));
+}
/* ---------------------------------------------------------------------
--- valgrind/coregrind/core.h #1.84:1.85
@@ -1394,4 +1394,5 @@ GEN_SYSCALL_WRAPPER(sys_clock_settime);
GEN_SYSCALL_WRAPPER(sys_clock_gettime);
GEN_SYSCALL_WRAPPER(sys_clock_getres);
+GEN_SYSCALL_WRAPPER(sys_clock_nanosleep);
GEN_SYSCALL_WRAPPER(sys_getcwd);
GEN_SYSCALL_WRAPPER(sys_symlink);
--- valgrind/coregrind/x86-linux/syscalls.c #1.21:1.22
@@ -1015,5 +1015,5 @@ const struct SyscallTableEntry VGA_(sysc
GENXY(__NR_clock_gettime, sys_clock_gettime), // (timer_create+6)
GENXY(__NR_clock_getres, sys_clock_getres), // (timer_create+7)
- // (__NR_clock_nanosleep, sys_clock_nanosleep),// (timer_create+8) */*
+ GENXY(__NR_clock_nanosleep, sys_clock_nanosleep),// (timer_create+8) */*
GENXY(__NR_statfs64, sys_statfs64), // 268
GENXY(__NR_fstatfs64, sys_fstatfs64), // 269
From: Jeremy F. <je...@go...> - 2005-02-19 00:16:04
CVS commit by fitzhardinge:
Make sure the signal is not one of ours in sigqueueinfo.
M +2 -0 vg_syscalls.c 1.247
--- valgrind/coregrind/vg_syscalls.c #1.246:1.247
@@ -5475,4 +5475,6 @@ PRE(sys_rt_sigqueueinfo, 0)
if (arg2 != 0)
SYS_PRE_MEM_READ( "rt_sigqueueinfo(uinfo)", arg3, sizeof(vki_siginfo_t) );
+ if (!VG_(client_signal_OK)(arg2))
+ set_result( -VKI_EINVAL );
}
From: Jeremy F. <je...@go...> - 2005-02-19 00:15:10
CVS commit by fitzhardinge:
On exit, the exiting thread interrupts any other threads that are
blocked in syscalls. Unfortunately, they weren't cleaning up properly
after themselves, which caused a panic if __libc_freeres ended up doing
a syscall.
M +1 -0 vg_signals.c 1.123
--- valgrind/coregrind/vg_signals.c #1.122:1.123
@@ -1876,4 +1876,5 @@ static void sigvgkill_handler(int signo,
VG_(set_running)(tid);
+ VG_(post_syscall)(tid);
VG_(resume_scheduler)(tid);