On Tue, 2015-05-12 at 22:21 +0200, Florian Krohm wrote:
> On 12.05.2015 21:11, Philippe Waroquiers wrote:
> > On Tue, 2015-05-12 at 18:22 +0000, sv...@va... wrote:
> >
> >> + The KIND parameter tells what kind of ADDR is allowed:
> >> + 'M' ADDR must be mapped
> >> + 'U' ADDR must be unmapped
> >> + '*' ADDR can be mapped or unmapped
> >> +*/
> >> +Bool VG_(am_addr_is_in_extensible_client_stack)( Addr addr, HChar kind )
> > Any reason why an HChar is used, rather than the classical enum ?
> > (which is usually more readable?)
>
> I felt that the encoding characters were good enough and it wasn't worth
> inventing a new type and thinking about good enumerator names.
I understand that using an HChar means inventing fewer
names, but IMO that should be limited to very local use,
not public interfaces (where it is less readable).
Tools also do not work with character constants the way they
work with enums (e.g. emacs find-tag does not find HChar
constants, but does find enum values).
So, instead, why not use e.g.
typedef
enum { SzMapped = 0x01, // Addr in mapped stack zone
SzResvn = 0x02 // Addr in reserved (unmapped) stack zone
} StackZone;
Philippe
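A minimal sketch of what the enum-based interface Philippe suggests might look like. The StackZone values are taken from his mail; the function name mirrors the one under review, and everything else (the Addr stand-in, the placeholder logic) is purely illustrative, not Valgrind's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for Valgrind's Addr type. */
typedef unsigned long Addr;

typedef enum {
   SzMapped = 0x01,  /* Addr in mapped stack zone */
   SzResvn  = 0x02   /* Addr in reserved (unmapped) stack zone */
} StackZone;

/* With an enum the accepted kinds are discoverable and greppable;
   "mapped or unmapped" is expressed by OR-ing the flags rather than
   a magic '*' character. Placeholder logic: pretend addresses below
   0x1000 fall in the reserved zone and everything else is mapped. */
bool addr_is_in_extensible_client_stack(Addr addr, StackZone kinds)
{
   StackZone zone = (addr < 0x1000) ? SzResvn : SzMapped;
   return (kinds & zone) != 0;
}
```

A caller asking "mapped or unmapped" would pass `SzMapped | SzResvn`, which tools and readers can grep for, unlike a bare `'*'`.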
From: <sv...@va...> - 2015-05-13 21:46:55
Author: carll
Date: Wed May 13 22:46:47 2015
New Revision: 15228
Log:
Patch 2 in a revised series of cleanup patches from Will Schmidt
Add deep-D test .exp values for ppc64.
Depending on the system and the system's endianness, there are variances
in the library reference and in the specific line number within the library.
I was able to add and modify existing filters to cover most of the variations,
but did need to add a .exp to cover the additional call stack entry seen
on Power.
This change allows the ppc64 targets to pass the massif/deep-D test.
This patch fixes Valgrind bugzilla 347686.
Added:
trunk/massif/tests/deep-D.post.exp-ppc64
Modified:
trunk/massif/tests/Makefile.am
trunk/tests/filter_libc
Modified: trunk/massif/tests/Makefile.am
==============================================================================
--- trunk/massif/tests/Makefile.am (original)
+++ trunk/massif/tests/Makefile.am Wed May 13 22:46:47 2015
@@ -14,6 +14,7 @@
deep-B.post.exp deep-B.stderr.exp deep-B.vgtest \
deep-C.post.exp deep-C.stderr.exp deep-C.vgtest \
deep-D.post.exp deep-D.stderr.exp deep-D.vgtest \
+ deep-D.post.exp-ppc64 \
culling1.stderr.exp culling1.vgtest \
culling2.stderr.exp culling2.vgtest \
custom_alloc.post.exp custom_alloc.stderr.exp custom_alloc.vgtest \
Added: trunk/massif/tests/deep-D.post.exp-ppc64
==============================================================================
--- trunk/massif/tests/deep-D.post.exp-ppc64 (added)
+++ trunk/massif/tests/deep-D.post.exp-ppc64 Wed May 13 22:46:47 2015
@@ -0,0 +1,55 @@
+--------------------------------------------------------------------------------
+Command: ./deep
+Massif arguments: --stacks=no --time-unit=B --alloc-fn=a1 --alloc-fn=a2 --alloc-fn=a3 --alloc-fn=a4 --alloc-fn=a5 --alloc-fn=a6 --alloc-fn=a7 --alloc-fn=a8 --alloc-fn=a9 --alloc-fn=a10 --alloc-fn=a11 --alloc-fn=a12 --alloc-fn=main --depth=20 --massif-out-file=massif.out --ignore-fn=__part_load_locale --ignore-fn=__time_load_locale --ignore-fn=dwarf2_unwind_dyld_add_image_hook --ignore-fn=get_or_create_key_element
+ms_print arguments: massif.out
+--------------------------------------------------------------------------------
+
+
+ KB
+3.984^ :
+ | :
+ | @@@@@@@:
+ | @ :
+ | :::::::@ :
+ | : @ :
+ | :::::::: @ :
+ | : : @ :
+ | :::::::: : @ :
+ | : : : @ :
+ | :::::::: : : @ :
+ | : : : : @ :
+ | ::::::::: : : : @ :
+ | : : : : : @ :
+ | :::::::: : : : : @ :
+ | : : : : : : @ :
+ | :::::::: : : : : : @ :
+ | : : : : : : : @ :
+ | :::::::: : : : : : : @ :
+ | : : : : : : : : @ :
+ 0 +----------------------------------------------------------------------->KB
+ 0 3.984
+
+Number of snapshots: 11
+ Detailed snapshots: [9]
+
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 0 0 0 0 0 0
+ 1 408 408 400 8 0
+ 2 816 816 800 16 0
+ 3 1,224 1,224 1,200 24 0
+ 4 1,632 1,632 1,600 32 0
+ 5 2,040 2,040 2,000 40 0
+ 6 2,448 2,448 2,400 48 0
+ 7 2,856 2,856 2,800 56 0
+ 8 3,264 3,264 3,200 64 0
+ 9 3,672 3,672 3,600 72 0
+98.04% (3,600B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+->98.04% (3,600B) 0x........: generic_start_main.isra.0 (in /...libc...)
+ ->98.04% (3,600B) 0x........: (below main)
+
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 10 4,080 4,080 4,000 80 0
Modified: trunk/tests/filter_libc
==============================================================================
--- trunk/tests/filter_libc (original)
+++ trunk/tests/filter_libc Wed May 13 22:46:47 2015
@@ -22,6 +22,9 @@
# libc, on some (eg. Darwin) it will be in the main executable.
s/\(below main\) \(.+\)$/(below main)/;
+ # filter out the exact libc-start.c:### line number. (ppc64*)
+ s/\(libc-start.c:[0-9]*\)$/(in \/...libc...)/;
+
# Merge the different C++ operator variations.
s/(at.*)__builtin_new/$1...operator new.../;
s/(at.*)operator new\(unsigned(| int| long)\)/$1...operator new.../;
From: <sv...@va...> - 2015-05-13 21:26:53
Author: florian
Date: Wed May 13 22:26:46 2015
New Revision: 15227
Log:
Changes for tilegx: Use VKI_AT_FDCWD not AT_FDCWD.
Modified:
trunk/coregrind/m_aspacemgr/aspacemgr-common.c
trunk/coregrind/m_libcfile.c
trunk/include/vki/vki-scnums-tilegx-linux.h
Modified: trunk/coregrind/m_aspacemgr/aspacemgr-common.c
==============================================================================
--- trunk/coregrind/m_aspacemgr/aspacemgr-common.c (original)
+++ trunk/coregrind/m_aspacemgr/aspacemgr-common.c Wed May 13 22:26:46 2015
@@ -247,7 +247,7 @@
SysRes res = VG_(do_syscall4)(__NR_openat,
VKI_AT_FDCWD, (UWord)pathname, flags, mode);
# elif defined(VGP_tilegx_linux)
- SysRes res = VG_(do_syscall4)(__NR_openat, AT_FDCWD, (UWord)pathname,
+ SysRes res = VG_(do_syscall4)(__NR_openat, VKI_AT_FDCWD, (UWord)pathname,
flags, mode);
# else
SysRes res = VG_(do_syscall3)(__NR_open, (UWord)pathname, flags, mode);
@@ -273,8 +273,8 @@
res = VG_(do_syscall4)(__NR_readlinkat, VKI_AT_FDCWD,
(UWord)path, (UWord)buf, bufsiz);
# elif defined(VGP_tilegx_linux)
- res = VG_(do_syscall4)(__NR_readlinkat, AT_FDCWD, (UWord)path, (UWord)buf,
- bufsiz);
+ res = VG_(do_syscall4)(__NR_readlinkat, VKI_AT_FDCWD, (UWord)path,
+ (UWord)buf, bufsiz);
# else
res = VG_(do_syscall3)(__NR_readlink, (UWord)path, (UWord)buf, bufsiz);
# endif
Modified: trunk/coregrind/m_libcfile.c
==============================================================================
--- trunk/coregrind/m_libcfile.c (original)
+++ trunk/coregrind/m_libcfile.c Wed May 13 22:26:46 2015
@@ -435,7 +435,7 @@
Int VG_(rename) ( const HChar* old_name, const HChar* new_name )
{
# if defined(VGP_tilegx_linux)
- SysRes res = VG_(do_syscall3)(__NR_renameat, AT_FDCWD,
+ SysRes res = VG_(do_syscall3)(__NR_renameat, VKI_AT_FDCWD,
(UWord)old_name, (UWord)new_name);
# else
SysRes res = VG_(do_syscall2)(__NR_rename, (UWord)old_name, (UWord)new_name);
Modified: trunk/include/vki/vki-scnums-tilegx-linux.h
==============================================================================
--- trunk/include/vki/vki-scnums-tilegx-linux.h (original)
+++ trunk/include/vki/vki-scnums-tilegx-linux.h Wed May 13 22:26:46 2015
@@ -33,8 +33,6 @@
#ifndef __VKI_SCNUMS_TILEGX_LINUX_H
#define __VKI_SCNUMS_TILEGX_LINUX_H
-#define AT_FDCWD -100
-
/* From tilegx linux/include/asm-generic/unistd.h */
#define __NR_io_setup 0
From: <sv...@va...> - 2015-05-13 21:10:19
Author: carll
Date: Wed May 13 22:10:12 2015
New Revision: 15226
Log:
Patch 1 in a revised series of cleanup patches from Will Schmidt
Update the massif/big-alloc test for ppc64*.
In comparison to the existing .exp files, the time, total, and extra-heap
values generated on ppc64* vary from the other architectures.
This .exp allows the ppc64 targets to pass the test.
This patch fixes Valgrind bugzilla 347322.
Added:
trunk/massif/tests/big-alloc.post.exp-ppc64
Modified:
trunk/massif/tests/Makefile.am
Modified: trunk/massif/tests/Makefile.am
==============================================================================
--- trunk/massif/tests/Makefile.am (original)
+++ trunk/massif/tests/Makefile.am Wed May 13 22:10:12 2015
@@ -8,7 +8,7 @@
alloc-fns-B.post.exp alloc-fns-B.stderr.exp alloc-fns-B.vgtest \
basic.post.exp basic.stderr.exp basic.vgtest \
basic2.post.exp basic2.stderr.exp basic2.vgtest \
- big-alloc.post.exp big-alloc.post.exp-64bit \
+ big-alloc.post.exp big-alloc.post.exp-64bit big-alloc.post.exp-ppc64 \
big-alloc.stderr.exp big-alloc.vgtest \
deep-A.post.exp deep-A.stderr.exp deep-A.vgtest \
deep-B.post.exp deep-B.stderr.exp deep-B.vgtest \
Added: trunk/massif/tests/big-alloc.post.exp-ppc64
==============================================================================
--- trunk/massif/tests/big-alloc.post.exp-ppc64 (added)
+++ trunk/massif/tests/big-alloc.post.exp-ppc64 Wed May 13 22:10:12 2015
@@ -0,0 +1,54 @@
+--------------------------------------------------------------------------------
+Command: ./big-alloc
+Massif arguments: --stacks=no --time-unit=B --massif-out-file=massif.out --ignore-fn=__part_load_locale --ignore-fn=__time_load_locale --ignore-fn=dwarf2_unwind_dyld_add_image_hook --ignore-fn=get_or_create_key_element
+ms_print arguments: massif.out
+--------------------------------------------------------------------------------
+
+
+ MB
+100.6^ :
+ | :
+ | @@@@@@@:
+ | @ :
+ | :::::::@ :
+ | : @ :
+ | :::::::: @ :
+ | : : @ :
+ | :::::::: : @ :
+ | : : : @ :
+ | :::::::: : : @ :
+ | : : : : @ :
+ | ::::::::: : : : @ :
+ | : : : : : @ :
+ | :::::::: : : : : @ :
+ | : : : : : : @ :
+ | :::::::: : : : : : @ :
+ | : : : : : : : @ :
+ | :::::::: : : : : : : @ :
+ | : : : : : : : : @ :
+ 0 +----------------------------------------------------------------------->MB
+ 0 100.6
+
+Number of snapshots: 11
+ Detailed snapshots: [9]
+
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 0 0 0 0 0 0
+ 1 10,551,240 10,551,240 10,485,760 65,480 0
+ 2 21,102,480 21,102,480 20,971,520 130,960 0
+ 3 31,653,720 31,653,720 31,457,280 196,440 0
+ 4 42,204,960 42,204,960 41,943,040 261,920 0
+ 5 52,756,200 52,756,200 52,428,800 327,400 0
+ 6 63,307,440 63,307,440 62,914,560 392,880 0
+ 7 73,858,680 73,858,680 73,400,320 458,360 0
+ 8 84,409,920 84,409,920 83,886,080 523,840 0
+ 9 94,961,160 94,961,160 94,371,840 589,320 0
+99.38% (94,371,840B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
+->99.38% (94,371,840B) 0x........: main (big-alloc.c:12)
+
+--------------------------------------------------------------------------------
+ n time(B) total(B) useful-heap(B) extra-heap(B) stacks(B)
+--------------------------------------------------------------------------------
+ 10 105,512,400 105,512,400 104,857,600 654,800 0
From: Julian S. <js...@ac...> - 2015-05-13 19:40:36
Really, I1 is intended to be something that you could generate by doing a comparison and then use by looking at the condition codes. We could (and sometimes do) generate I1 values into a register, but the code is ugly.

The easy way to do what you want is to have your helper return an I32 and do CmpNE(returned_value, mkU32(0)) for the IRStmt test. That should even become reasonably good code in the back end.

J

On 13/05/15 14:46, Ivo Raisr wrote:
> Dear developers,
>
> Please could you shed some light on why it is not allowed
> to return Ity_I1 from a clean helper (CCall).
> There is an explicit check in ir_defs.c which barfs:
> "Iex.CCall.retty: cannot return :: Ity_I1"
>
> I would like to use the return value from a clean helper
> as the guard condition for IRStmt_Exit. This requires
> an IRExpr of type Ity_I1.
>
> Thank you for any insights,
> I.
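Julian's recipe — make the helper return a full-width integer and derive the 1-bit guard with a compare against zero — is the same pattern as in plain C, sketched below with hypothetical names. (In the real tool, the helper call would be built with IRExpr_CCall and the guard with a CmpNE32 binop; this standalone model only shows the shape of the logic.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical "clean helper": returns a 32-bit value rather than a
   1-bit one, since VEX clean helpers cannot return Ity_I1. */
static uint32_t helper_check_addr(uint32_t addr)
{
   /* Example predicate: "is the address page-aligned?" */
   return (addr & 0xfffu) == 0 ? 1 : 0;
}

/* The IRStmt_Exit guard is then CmpNE32(result, 0), i.e. the C-level
   test "result != 0", which yields the required 1-bit value. */
static bool guard_for_exit(uint32_t addr)
{
   return helper_check_addr(addr) != 0;
}
```

The back end can usually turn the compare-against-zero into a flags test, which is why Julian notes the generated code is reasonable.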
From: <sv...@va...> - 2015-05-13 16:03:36
Author: florian
Date: Wed May 13 17:03:29 2015
New Revision: 15225
Log:
Remove segAddr_to_index as its behaviour is undefined.
Modified:
branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
Modified: branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c Wed May 13 17:03:29 2015
@@ -1124,33 +1124,21 @@
}
-/* Map segment pointer to segment index. */
-static Int segAddr_to_index ( const NSegment* seg )
-{
- aspacem_assert(seg >= &nsegments[0] && seg < &nsegments[nsegments_used]);
-
- return seg - &nsegments[0];
-}
-
-
/* Find the next segment along from 'here', if it is a non-SkFree segment. */
NSegment const * VG_(am_next_nsegment) ( const NSegment* here, Bool fwds )
{
- Int i = segAddr_to_index(here);
+ const NSegment *next;
if (fwds) {
- i++;
- if (i >= nsegments_used)
+ next = here + 1;
+ if (next >= nsegments + nsegments_used)
return NULL;
} else {
- i--;
- if (i < 0)
+ if (here == nsegments)
return NULL;
+ next = here - 1;
}
- if (nsegments[i].kind == SkFree)
- return NULL;
- else
- return &nsegments[i];
+ return (next->kind == SkFree) ? NULL : next;
}
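The r15225 refactor above replaces index arithmetic (via the removed segAddr_to_index) with direct pointer arithmetic over the segment array. A minimal standalone model of the patched VG_(am_next_nsegment) logic, with names simplified and SkFree reduced to a flag:

```c
#include <assert.h>
#include <stddef.h>

enum { SkFree = 0, SkInUse = 1 };
typedef struct { int kind; } Seg;   /* toy stand-in for NSegment */

static Seg segs[4] = { {SkInUse}, {SkFree}, {SkInUse}, {SkInUse} };
static size_t segs_used = 4;

/* Mirror of the patched logic: step one element forward or backward,
   bounds-check against the array itself (no index needed), and return
   NULL when the neighbour is SkFree or out of range. */
static const Seg* next_seg(const Seg* here, int fwds)
{
   const Seg* next;
   if (fwds) {
      next = here + 1;
      if (next >= segs + segs_used)
         return NULL;
   } else {
      if (here == segs)
         return NULL;
      next = here - 1;
   }
   return (next->kind == SkFree) ? NULL : next;
}
```

Because `here` already points into the array, the old segment-pointer-to-index conversion (and its pointer-comparison assertion, whose behaviour was the stated problem) is unnecessary.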
Author: florian
Date: Wed May 13 15:30:09 2015
New Revision: 15224
Log:
Merge from trunk.
Added:
branches/ASPACEM_TWEAKS/helgrind/tests/shmem_abits.c
- copied unchanged from r15223, trunk/helgrind/tests/shmem_abits.c
branches/ASPACEM_TWEAKS/helgrind/tests/shmem_abits.stderr.exp
- copied unchanged from r15223, trunk/helgrind/tests/shmem_abits.stderr.exp
branches/ASPACEM_TWEAKS/helgrind/tests/shmem_abits.stdout.exp
- copied unchanged from r15223, trunk/helgrind/tests/shmem_abits.stdout.exp
branches/ASPACEM_TWEAKS/helgrind/tests/shmem_abits.vgtest
- copied unchanged from r15223, trunk/helgrind/tests/shmem_abits.vgtest
branches/ASPACEM_TWEAKS/none/tests/darwin/bug254164.c
- copied unchanged from r15223, trunk/none/tests/darwin/bug254164.c
branches/ASPACEM_TWEAKS/none/tests/darwin/bug254164.stderr.exp
- copied unchanged from r15223, trunk/none/tests/darwin/bug254164.stderr.exp
branches/ASPACEM_TWEAKS/none/tests/darwin/bug254164.vgtest
- copied unchanged from r15223, trunk/none/tests/darwin/bug254164.vgtest
branches/ASPACEM_TWEAKS/none/tests/linux/mremap4.c
- copied unchanged from r15223, trunk/none/tests/linux/mremap4.c
branches/ASPACEM_TWEAKS/none/tests/linux/mremap4.stderr.exp
- copied unchanged from r15223, trunk/none/tests/linux/mremap4.stderr.exp
branches/ASPACEM_TWEAKS/none/tests/linux/mremap4.vgtest
- copied unchanged from r15223, trunk/none/tests/linux/mremap4.vgtest
branches/ASPACEM_TWEAKS/none/tests/linux/mremap5.c
- copied unchanged from r15223, trunk/none/tests/linux/mremap5.c
branches/ASPACEM_TWEAKS/none/tests/linux/mremap5.stderr.exp
- copied unchanged from r15223, trunk/none/tests/linux/mremap5.stderr.exp
branches/ASPACEM_TWEAKS/none/tests/linux/mremap5.vgtest
- copied unchanged from r15223, trunk/none/tests/linux/mremap5.vgtest
Modified:
branches/ASPACEM_TWEAKS/ (props changed)
branches/ASPACEM_TWEAKS/NEWS
branches/ASPACEM_TWEAKS/README_DEVELOPERS
branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
branches/ASPACEM_TWEAKS/coregrind/m_gdbserver/server.c
branches/ASPACEM_TWEAKS/coregrind/m_mallocfree.c
branches/ASPACEM_TWEAKS/coregrind/m_redir.c
branches/ASPACEM_TWEAKS/coregrind/m_syswrap/priv_syswrap-darwin.h
branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-darwin.c
branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-generic.c
branches/ASPACEM_TWEAKS/coregrind/m_xarray.c
branches/ASPACEM_TWEAKS/darwin14.supp
branches/ASPACEM_TWEAKS/helgrind/helgrind.h
branches/ASPACEM_TWEAKS/helgrind/hg_main.c
branches/ASPACEM_TWEAKS/helgrind/libhb.h
branches/ASPACEM_TWEAKS/helgrind/libhb_core.c
branches/ASPACEM_TWEAKS/helgrind/tests/ (props changed)
branches/ASPACEM_TWEAKS/helgrind/tests/Makefile.am
branches/ASPACEM_TWEAKS/none/tests/darwin/ (props changed)
branches/ASPACEM_TWEAKS/none/tests/darwin/Makefile.am
branches/ASPACEM_TWEAKS/none/tests/linux/ (props changed)
branches/ASPACEM_TWEAKS/none/tests/linux/Makefile.am
Modified: branches/ASPACEM_TWEAKS/NEWS
==============================================================================
--- branches/ASPACEM_TWEAKS/NEWS (original)
+++ branches/ASPACEM_TWEAKS/NEWS Wed May 13 15:30:09 2015
@@ -70,6 +70,8 @@
211926 Avoid compilation warnings in valgrind.h with -pedantic
226609 Crediting upstream authors in man page
231257 Valgrind omits path when executing script from shebang line
+254164 OS X task_info: UNKNOWN task message [id 3405, to mach_task_self(),
+ reply 0x........]
269360 s390x: Fix addressing mode selection for compare-and-swap
333051 mmap of huge pages fails due to incorrect alignment
== 339163
@@ -180,6 +182,7 @@
n-i-bz Fix compilation on distros with glibc < 2.5
n-i-bz (vex 3098) Avoid generation of Neon insns on non-Neon hosts
n-i-bz Enable rt_sigpending syscall on ppc64 linux.
+n-i-bz mremap did not work properly on shared memory
Release 3.10.1 (25 November 2014)
Modified: branches/ASPACEM_TWEAKS/README_DEVELOPERS
==============================================================================
--- branches/ASPACEM_TWEAKS/README_DEVELOPERS (original)
+++ branches/ASPACEM_TWEAKS/README_DEVELOPERS Wed May 13 15:30:09 2015
@@ -250,6 +250,7 @@
To compare the performance of multiple Valgrind versions, do :
perl perf/vg_perf --outer-valgrind=../outer/.../bin/valgrind \
+ --outer-tool=callgrind \
--vg=../inner_xxxx --vg=../inner_yyyy perf
(where inner_xxxx and inner_yyyy are the toplevel directories of
the versions to compare).
Modified: branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c Wed May 13 15:30:09 2015
@@ -3134,7 +3134,8 @@
NSegment *seg = nsegments + ix;
- aspacem_assert(seg->kind == SkFileC || seg->kind == SkAnonC);
+ aspacem_assert(seg->kind == SkFileC || seg->kind == SkAnonC ||
+ seg->kind == SkShmC);
aspacem_assert(delta > 0 && VG_IS_PAGE_ALIGNED(delta)) ;
xStart = seg->end+1;
@@ -3212,7 +3213,8 @@
if (iLo != iHi)
return False;
- if (nsegments[iLo].kind != SkFileC && nsegments[iLo].kind != SkAnonC)
+ if (nsegments[iLo].kind != SkFileC && nsegments[iLo].kind != SkAnonC &&
+ nsegments[iLo].kind != SkShmC)
return False;
sres = ML_(am_do_relocate_nooverlap_mapping_NO_NOTIFY)
Modified: branches/ASPACEM_TWEAKS/coregrind/m_gdbserver/server.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_gdbserver/server.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_gdbserver/server.c Wed May 13 15:30:09 2015
@@ -157,9 +157,10 @@
VG_(message)(Vg_DebugMsg,
"------ Valgrind's internal memory use stats follow ------\n" );
VG_(sanity_check_malloc_all)();
- VG_(message)(Vg_DebugMsg,
- "------ %llu bytes have already been mmap-ed ANONYMOUS.\n",
- VG_(am_get_anonsize_total)());
+ VG_(message)
+ (Vg_DebugMsg,
+ "------ %'13llu bytes have already been mmap-ed ANONYMOUS.\n",
+ VG_(am_get_anonsize_total)());
VG_(print_all_arena_stats)();
if (VG_(clo_profile_heap))
VG_(print_arena_cc_analysis) ();
@@ -386,7 +387,7 @@
VG_(gdbserver_status_output)();
break;
case 4: /* memory */
- VG_(printf) ("%llu bytes have already been mmap-ed ANONYMOUS.\n",
+ VG_(printf) ("%'13llu bytes have already been mmap-ed ANONYMOUS.\n",
VG_(am_get_anonsize_total)());
VG_(print_all_arena_stats) ();
if (VG_(clo_profile_heap))
Modified: branches/ASPACEM_TWEAKS/coregrind/m_mallocfree.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_mallocfree.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_mallocfree.c Wed May 13 15:30:09 2015
@@ -625,9 +625,9 @@
for (i = 0; i < VG_N_ARENAS; i++) {
Arena* a = arenaId_to_ArenaP(i);
VG_(message)(Vg_DebugMsg,
- "%-8s: %8lu/%8lu max/curr mmap'd, "
+ "%-8s: %'13lu/%'13lu max/curr mmap'd, "
"%llu/%llu unsplit/split sb unmmap'd, "
- "%8lu/%8lu max/curr, "
+ "%'13lu/%'13lu max/curr, "
"%10llu/%10llu totalloc-blocks/bytes,"
" %10llu searches %lu rzB\n",
a->name,
@@ -758,7 +758,7 @@
"\n"
" Valgrind's memory management: out of memory:\n"
" %s's request for %llu bytes failed.\n"
- " %llu bytes have already been mmap-ed ANONYMOUS.\n"
+ " %'13llu bytes have already been mmap-ed ANONYMOUS.\n"
" Valgrind cannot continue. Sorry.\n\n"
" There are several possible reasons for this.\n"
" - You have some kind of memory limit in place. Look at the\n"
Modified: branches/ASPACEM_TWEAKS/coregrind/m_redir.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_redir.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_redir.c Wed May 13 15:30:09 2015
@@ -1537,7 +1537,8 @@
{
NSegment const* seg = VG_(am_find_nsegment)(a);
return seg != NULL
- && (seg->kind == SkAnonC || seg->kind == SkFileC)
+ && (seg->kind == SkAnonC || seg->kind == SkFileC ||
+ seg->kind == SkShmC)
&& (seg->hasX || seg->hasR); /* crude x86-specific hack */
}
Modified: branches/ASPACEM_TWEAKS/coregrind/m_syswrap/priv_syswrap-darwin.h
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_syswrap/priv_syswrap-darwin.h (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_syswrap/priv_syswrap-darwin.h Wed May 13 15:30:09 2015
@@ -595,6 +595,7 @@
DECL_TEMPLATE(darwin, task_policy_set);
DECL_TEMPLATE(darwin, mach_ports_register);
DECL_TEMPLATE(darwin, mach_ports_lookup);
+DECL_TEMPLATE(darwin, task_info);
DECL_TEMPLATE(darwin, task_threads);
DECL_TEMPLATE(darwin, task_suspend);
DECL_TEMPLATE(darwin, task_resume);
Modified: branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-darwin.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-darwin.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-darwin.c Wed May 13 15:30:09 2015
@@ -5774,7 +5774,7 @@
Request *req = (Request *)ARG1;
- PRINT("task_policy_set(%s) flacor:%d", name_for_port(MACH_REMOTE), req->flavor);
+ PRINT("task_policy_set(%s) flavor:%d", name_for_port(MACH_REMOTE), req->flavor);
AFTER = POST_FN(task_policy_set);
}
@@ -5869,6 +5869,44 @@
}
+PRE(task_info)
+{
+#pragma pack(4)
+ typedef struct {
+ mach_msg_header_t Head;
+ NDR_record_t NDR;
+ task_flavor_t flavor;
+ mach_msg_type_number_t task_info_outCnt;
+ } Request;
+#pragma pack()
+
+ Request *req = (Request *)ARG1;
+
+ PRINT("task_info(%s) flavor:%d", name_for_port(MACH_REMOTE), req->flavor);
+
+ AFTER = POST_FN(task_info);
+}
+
+POST(task_info)
+{
+#pragma pack(4)
+ typedef struct {
+ mach_msg_header_t Head;
+ NDR_record_t NDR;
+ kern_return_t RetCode;
+ mach_msg_type_number_t task_info_outCnt;
+ integer_t task_info_out[52];
+ } Reply;
+#pragma pack()
+
+ Reply *reply = (Reply *)ARG1;
+ if (!reply->RetCode) {
+ } else {
+ PRINT("mig return %d", reply->RetCode);
+ }
+}
+
+
PRE(task_threads)
{
#pragma pack(4)
@@ -7758,6 +7796,10 @@
CALL_PRE(mach_ports_lookup);
return;
+ case 3405:
+ CALL_PRE(task_info);
+ return;
+
case 3407:
CALL_PRE(task_suspend);
return;
Modified: branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-generic.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-generic.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_syswrap/syswrap-generic.c Wed May 13 15:30:09 2015
@@ -326,7 +326,8 @@
old_seg = VG_(am_find_nsegment)( old_addr );
if (old_addr < old_seg->start || old_addr+old_len-1 > old_seg->end)
goto eINVAL;
- if (old_seg->kind != SkAnonC && old_seg->kind != SkFileC)
+ if (old_seg->kind != SkAnonC && old_seg->kind != SkFileC &&
+ old_seg->kind != SkShmC)
goto eINVAL;
vg_assert(old_len > 0);
Modified: branches/ASPACEM_TWEAKS/coregrind/m_xarray.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_xarray.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_xarray.c Wed May 13 15:30:09 2015
@@ -128,8 +128,9 @@
inline void* VG_(indexXA) ( const XArray* xa, Word n )
{
vg_assert(xa);
- vg_assert(n >= 0);
- vg_assert(n < xa->usedsizeE);
+ // vg_assert(n >= 0); If n negative, the UWord conversion will make
+ // it bigger than usedsizeE.
+ vg_assert((UWord)n < xa->usedsizeE);
return ((char*)xa->arr) + n * xa->elemSzB;
}
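The VG_(indexXA) change above relies on a standard trick: casting the signed index to unsigned before the comparison folds the `n >= 0` and `n < usedsizeE` checks into one, because a negative `n` wraps to a value larger than any plausible array size. A standalone illustration with hypothetical names (assuming, as on the platforms Valgrind targets, that the unsigned type is at least as wide as the signed one):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* One unsigned comparison covers both "n >= 0" and "n < used":
   a negative n converts to a huge size_t and fails the test. */
static bool index_in_bounds(long n, size_t used)
{
   return (size_t)n < used;
}
```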
Modified: branches/ASPACEM_TWEAKS/darwin14.supp
==============================================================================
--- branches/ASPACEM_TWEAKS/darwin14.supp (original)
+++ branches/ASPACEM_TWEAKS/darwin14.supp Wed May 13 15:30:09 2015
@@ -77,15 +77,15 @@
...
}
-#{
-# OSX1010:8-Leak
-# Memcheck:Leak
-# match-leak-kinds: indirect
-# fun:?alloc
-# ...
-# fun:libSystem_initializer
-# ...
-#}
+{
+ OSX1010:8-Leak
+ Memcheck:Leak
+ match-leak-kinds: definite
+ fun:?alloc
+ ...
+ fun:libSystem_initializer
+ ...
+}
{
OSX1010:9-Leak
Modified: branches/ASPACEM_TWEAKS/helgrind/helgrind.h
==============================================================================
--- branches/ASPACEM_TWEAKS/helgrind/helgrind.h (original)
+++ branches/ASPACEM_TWEAKS/helgrind/helgrind.h Wed May 13 15:30:09 2015
@@ -118,8 +118,8 @@
_VG_USERREQ__HG_CLEAN_MEMORY_HEAPBLOCK, /* Addr start_of_block */
_VG_USERREQ__HG_PTHREAD_COND_INIT_POST, /* pth_cond_t*, pth_cond_attr_t*/
_VG_USERREQ__HG_GNAT_MASTER_HOOK, /* void*d,void*m,Word ml */
- _VG_USERREQ__HG_GNAT_MASTER_COMPLETED_HOOK /* void*s,Word ml */
-
+ _VG_USERREQ__HG_GNAT_MASTER_COMPLETED_HOOK,/* void*s,Word ml */
+ _VG_USERREQ__HG_GET_ABITS /* Addr a,Addr abits, ulong len */
} Vg_TCheckClientRequest;
@@ -157,7 +157,7 @@
#define DO_CREQ_W_W(_resF, _dfltF, _creqF, _ty1F,_arg1F) \
do { \
- long int arg1; \
+ long int _arg1; \
/* assert(sizeof(_ty1F) == sizeof(long int)); */ \
_arg1 = (long int)(_arg1F); \
_qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR( \
@@ -194,6 +194,23 @@
_arg1,_arg2,_arg3,0,0); \
} while (0)
+#define DO_CREQ_W_WWW(_resF, _dfltF, _creqF, _ty1F,_arg1F, \
+ _ty2F,_arg2F, _ty3F, _arg3F) \
+ do { \
+ long int _qzz_res; \
+ long int _arg1, _arg2, _arg3; \
+ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \
+ _arg1 = (long int)(_arg1F); \
+ _arg2 = (long int)(_arg2F); \
+ _arg3 = (long int)(_arg3F); \
+ _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR( \
+ (_dfltF), \
+ (_creqF), \
+ _arg1,_arg2,_arg3,0,0); \
+ _resF = _qzz_res; \
+ } while (0)
+
+
#define _HG_CLIENTREQ_UNIMP(_qzz_str) \
DO_CREQ_v_W(_VG_USERREQ__HG_CLIENTREQ_UNIMP, \
@@ -367,6 +384,38 @@
unsigned long,(_qzz_len))
+#define VALGRIND_HG_ENABLE_CHECKING(_qzz_start, _qzz_len) \
+ DO_CREQ_v_WW(_VG_USERREQ__HG_ARANGE_MAKE_TRACKED, \
+ void*,(_qzz_start), \
+ unsigned long,(_qzz_len))
+
+
+/* Checks the accessibility bits for addresses [zza..zza+zznbytes-1].
+ If zzabits array is provided, copy the accessibility bits in zzabits.
+ Return values:
+ -2 if not running on helgrind
+ -1 if any parts of zzabits is not addressable
+ >= 0 : success.
+ When success, it returns the nr of addressable bytes found.
+ So, to check that a whole range is addressable, check
+ VALGRIND_HG_GET_ABITS(addr,NULL,len) == len
+ In addition, if you want to examine the addressability of each
+ byte of the range, you need to provide a non NULL ptr as
+ second argument, pointing to an array of unsigned char
+ of length len.
+ Addressable bytes are indicated with 0xff.
+ Non-addressable bytes are indicated with 0x00.
+*/
+#define VALGRIND_HG_GET_ABITS(zza,zzabits,zznbytes) \
+ (__extension__ \
+ ({long int _res; \
+ DO_CREQ_W_WWW(_res, (-2)/*default*/, \
+ _VG_USERREQ__HG_GET_ABITS, \
+ void*,(zza), void*,(zzabits), \
+ unsigned long,(zznbytes)); \
+ _res; \
+ }))
+
/*----------------------------------------------------------------*/
/*--- ---*/
/*--- ThreadSanitizer-compatible requests ---*/
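The comment block added to helgrind.h above spells out the VALGRIND_HG_GET_ABITS contract (return the count of addressable bytes; optionally fill an abits array with 0xff/0x00). The sketch below is a plain-C model of that documented contract — not helgrind's implementation — so the whole-range idiom from the comment (`result == len`) can be seen working:

```c
#include <assert.h>

/* Stand-in model: mem_ok[i] nonzero means byte i is addressable.
   Mirrors the documented behaviour: fill abits (if non-NULL) with
   0xff/0x00 per byte and return the number of addressable bytes. */
static long hg_get_abits_model(const unsigned char* mem_ok,
                               unsigned char* abits,
                               unsigned long len)
{
   unsigned long i, n = 0;
   for (i = 0; i < len; i++) {
      unsigned char a = mem_ok[i] ? 0xff : 0x00;
      if (abits)
         abits[i] = a;
      if (a)
         n++;
   }
   return (long)n;
}
```

Under this model, checking that an entire range is addressable is exactly the comparison the comment recommends: the return value equals `len` only when no byte was inaccessible.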
Modified: branches/ASPACEM_TWEAKS/helgrind/hg_main.c
==============================================================================
--- branches/ASPACEM_TWEAKS/helgrind/hg_main.c (original)
+++ branches/ASPACEM_TWEAKS/helgrind/hg_main.c Wed May 13 15:30:09 2015
@@ -1813,7 +1813,28 @@
guarantee that the reference happens before the free. */
shadow_mem_cwrite_range(thr, a, len);
}
- shadow_mem_make_NoAccess_NoFX( thr, a, len );
+ shadow_mem_make_NoAccess_AHAE( thr, a, len );
+      /* We used to call instead
+           shadow_mem_make_NoAccess_NoFX( thr, a, len );
+         A non-buggy application will not access the freed memory
+         anymore, so marking it no-access is in theory useless.
+         Not marking freed memory would avoid the overhead for
+         applications doing mostly malloc/free, as the freed memory
+         should then be recycled very quickly after marking.
+         We nevertheless mark it no-access, for the following reasons:
+         * the accessibility bits then always correctly represent the
+           memory status (e.g. for the client request
+           VALGRIND_HG_GET_ABITS).
+         * the overhead is reasonable (about 5 seconds per GB in
+           1000-byte blocks, on a ppc64le, for an unrealistic workload
+           of an application doing only malloc/free).
+         * marking no-access allows the SecMap to be GC-ed, which
+           might improve performance and/or memory usage.
+         * we might detect more application bugs when memory is marked
+           no-access.
+         If needed, we could support an option --free-is-noaccess=yes|no
+         here, for applications that need to avoid the no-access
+         marking overhead. */
+
if (len >= SCE_BIGRANGE_T && (HG_(clo_sanity_flags) & SCE_BIGRANGE))
all__sanity_check("evh__pre_mem_read-post");
}
@@ -4885,6 +4906,20 @@
}
break;
+ case _VG_USERREQ__HG_GET_ABITS:
+ if (0) VG_(printf)("HG_GET_ABITS(%#lx,%#lx,%ld)\n",
+ args[1], args[2], args[3]);
+ UChar *zzabit = (UChar *) args[2];
+ if (zzabit == NULL
+ || VG_(am_is_valid_for_client)((Addr)zzabit, (SizeT)args[3],
+ VKI_PROT_READ|VKI_PROT_WRITE))
+ *ret = (UWord) libhb_srange_get_abits ((Addr) args[1],
+ (UChar*) args[2],
+ (SizeT) args[3]);
+ else
+ *ret = -1;
+ break;
+
/* --- --- Client requests for Helgrind's use only --- --- */
/* Some thread is telling us its pthread_t value. Record the
Modified: branches/ASPACEM_TWEAKS/helgrind/libhb.h
==============================================================================
--- branches/ASPACEM_TWEAKS/helgrind/libhb.h (original)
+++ branches/ASPACEM_TWEAKS/helgrind/libhb.h Wed May 13 15:30:09 2015
@@ -131,6 +131,13 @@
void libhb_srange_noaccess_NoFX ( Thr*, Addr, SizeT ); /* IS IGNORED */
void libhb_srange_noaccess_AHAE ( Thr*, Addr, SizeT ); /* IS NOT IGNORED */
+/* Counts the number of addressable bytes in the range [a, a+len[
+   (i.e. a+len excluded) and returns that number.
+   If abits != NULL, abits must point to a block of memory of length
+   len; in this array, each addressable byte will be indicated with
+   0xff. Non-addressable bytes are indicated with 0x00. */
+UWord libhb_srange_get_abits (Addr a, /*OUT*/UChar *abits, SizeT len);
+
/* Get and set the hgthread (pointer to corresponding Thread
structure). */
Thread* libhb_get_Thr_hgthread ( Thr* );
Modified: branches/ASPACEM_TWEAKS/helgrind/libhb_core.c
==============================================================================
--- branches/ASPACEM_TWEAKS/helgrind/libhb_core.c (original)
+++ branches/ASPACEM_TWEAKS/helgrind/libhb_core.c Wed May 13 15:30:09 2015
@@ -365,6 +365,8 @@
static inline VtsID SVal__unC_Rmin ( SVal s );
static inline VtsID SVal__unC_Wmin ( SVal s );
static inline SVal SVal__mkC ( VtsID rmini, VtsID wmini );
+static inline void SVal__rcinc ( SVal s );
+static inline void SVal__rcdec ( SVal s );
/* A double linked list of all the SO's. */
SO* admin_SO;
@@ -383,7 +385,7 @@
#define __HB_ZSM_H
/* Initialise the library. Once initialised, it will (or may) call
- rcinc and rcdec in response to all the calls below, in order to
+ SVal__rcinc and SVal__rcdec in response to all the calls below, in order to
allow the user to do reference counting on the SVals stored herein.
It is important to understand, however, that due to internal
caching, the reference counts are in general inaccurate, and can be
@@ -394,13 +396,14 @@
To make the reference counting exact and therefore non-pointless,
call zsm_flush_cache. Immediately after it returns, the reference
counts for all items, as deduced by the caller by observing calls
- to rcinc and rcdec, will be correct, and so any items with a zero
- reference count may be freed (or at least considered to be
+ to SVal__rcinc and SVal__rcdec, will be correct, and so any items with a
+ zero reference count may be freed (or at least considered to be
unreferenced by this library).
*/
-static void zsm_init ( void(*rcinc)(SVal), void(*rcdec)(SVal) );
+static void zsm_init ( void );
static void zsm_sset_range ( Addr, SizeT, SVal );
+static void zsm_sset_range_SMALL ( Addr a, SizeT len, SVal svNew );
static void zsm_scopy_range ( Addr, Addr, SizeT );
static void zsm_flush_cache ( void );
@@ -412,12 +415,19 @@
/* Round a down to the next multiple of N. N must be a power of 2 */
#define ROUNDDN(a, N) ((a) & ~(N-1))
-
-
-/* ------ User-supplied RC functions ------ */
-static void(*rcinc)(SVal) = NULL;
-static void(*rcdec)(SVal) = NULL;
-
+/* True if a is in the range [start, start + szB[
+   (i.e. start + szB is excluded). */
+static inline Bool address_in_range (Addr a, Addr start, SizeT szB)
+{
+ /* Checking start <= a && a < start + szB.
+ As start and a are unsigned addresses, the condition can
+ be simplified. */
+ if (CHECK_ZSM)
+ tl_assert ((a - start < szB)
+ == (start <= a
+ && a < start + szB));
+ return a - start < szB;
+}
/* ------ CacheLine ------ */
@@ -517,6 +527,9 @@
#define SecMap_MAGIC 0x571e58cbU
+// (UInt) `echo "Free SecMap" | md5sum`
+#define SecMap_free_MAGIC 0x5a977f30U
+
__attribute__((unused))
static inline Bool is_sane_SecMap ( SecMap* sm ) {
return sm != NULL && sm->magic == SecMap_MAGIC;
@@ -558,18 +571,20 @@
static UWord stats__secmaps_search = 0; // # SM finds
static UWord stats__secmaps_search_slow = 0; // # SM lookupFMs
static UWord stats__secmaps_allocd = 0; // # SecMaps issued
+static UWord stats__secmaps_in_map_shmem = 0; // # SecMaps 'live'
+static UWord stats__secmaps_scanGC = 0; // # nr of scan GC done.
+static UWord stats__secmaps_scanGCed = 0; // # SecMaps GC-ed via scan
+static UWord stats__secmaps_ssetGCed = 0; // # SecMaps GC-ed via setnoaccess
static UWord stats__secmap_ga_space_covered = 0; // # ga bytes covered
static UWord stats__secmap_linesZ_allocd = 0; // # LineZ's issued
static UWord stats__secmap_linesZ_bytes = 0; // .. using this much storage
static UWord stats__secmap_linesF_allocd = 0; // # LineF's issued
static UWord stats__secmap_linesF_bytes = 0; // .. using this much storage
-static UWord stats__secmap_iterator_steppings = 0; // # calls to stepSMIter
static UWord stats__cache_Z_fetches = 0; // # Z lines fetched
static UWord stats__cache_Z_wbacks = 0; // # Z lines written back
static UWord stats__cache_F_fetches = 0; // # F lines fetched
static UWord stats__cache_F_wbacks = 0; // # F lines written back
-static UWord stats__cache_invals = 0; // # cache invals
-static UWord stats__cache_flushes = 0; // # cache flushes
+static UWord stats__cache_flushes_invals = 0; // # cache flushes and invals
static UWord stats__cache_totrefs = 0; // # total accesses
static UWord stats__cache_totmisses = 0; // # misses
static ULong stats__cache_make_New_arange = 0; // total arange made New
@@ -599,6 +614,8 @@
static UWord stats__vts__join = 0; // # calls to VTS__join
static UWord stats__vts__cmpLEQ = 0; // # calls to VTS__cmpLEQ
static UWord stats__vts__cmp_structural = 0; // # calls to VTS__cmp_structural
+static UWord stats__vts_tab_GC = 0; // # nr of vts_tab GC
+static UWord stats__vts_pruning = 0; // # nr of vts pruning
// # calls to VTS__cmp_structural w/ slow case
static UWord stats__vts__cmp_structural_slow = 0;
@@ -657,10 +674,59 @@
return shmem__bigchunk_next - n;
}
-static SecMap* shmem__alloc_SecMap ( void )
+/* SecMaps that have become fully SVal_NOACCESS are inserted into a
+   list of recycled SecMaps. When a new SecMap is needed, a recycled
+   SecMap will be used in preference to allocating a new one. */
+/* We make a linked list of SecMaps; the linesF pointer is re-used to
+   implement the links. */
+static SecMap *SecMap_freelist = NULL;
+static UWord SecMap_freelist_length(void)
+{
+ SecMap *sm;
+ UWord n = 0;
+
+ sm = SecMap_freelist;
+ while (sm) {
+ n++;
+ sm = (SecMap*)sm->linesF;
+ }
+ return n;
+}
+
+static void push_SecMap_on_freelist(SecMap* sm)
+{
+ if (0) VG_(message)(Vg_DebugMsg, "%p push\n", sm);
+ sm->magic = SecMap_free_MAGIC;
+ sm->linesF = (LineF*)SecMap_freelist;
+ SecMap_freelist = sm;
+}
+/* Returns a free SecMap if there is one.
+ Otherwise, returns NULL. */
+static SecMap *pop_SecMap_from_freelist(void)
+{
+ SecMap *sm;
+
+ sm = SecMap_freelist;
+ if (sm) {
+ tl_assert (sm->magic == SecMap_free_MAGIC);
+ SecMap_freelist = (SecMap*)sm->linesF;
+ if (0) VG_(message)(Vg_DebugMsg, "%p pop\n", sm);
+ }
+ return sm;
+}
+
+static SecMap* shmem__alloc_or_recycle_SecMap ( void )
{
Word i, j;
- SecMap* sm = shmem__bigchunk_alloc( sizeof(SecMap) );
+ SecMap* sm = pop_SecMap_from_freelist();
+
+ if (!sm) {
+ sm = shmem__bigchunk_alloc( sizeof(SecMap) );
+ stats__secmaps_allocd++;
+ stats__secmap_ga_space_covered += N_SECMAP_ARANGE;
+ stats__secmap_linesZ_allocd += N_SECMAP_ZLINES;
+ stats__secmap_linesZ_bytes += N_SECMAP_ZLINES * sizeof(LineZ);
+ }
if (0) VG_(printf)("alloc_SecMap %p\n",sm);
tl_assert(sm);
sm->magic = SecMap_MAGIC;
@@ -674,10 +740,6 @@
}
sm->linesF = NULL;
sm->linesF_size = 0;
- stats__secmaps_allocd++;
- stats__secmap_ga_space_covered += N_SECMAP_ARANGE;
- stats__secmap_linesZ_allocd += N_SECMAP_ZLINES;
- stats__secmap_linesZ_bytes += N_SECMAP_ZLINES * sizeof(LineZ);
return sm;
}
@@ -719,17 +781,120 @@
return sm;
}
+/* Scans the SecMaps and counts how many could be GC-ed.
+   If really is True, actually GCs them. */
+/* NOT TO BE CALLED FROM WITHIN libzsm. */
+static UWord next_SecMap_GC_at = 1000;
+__attribute__((noinline))
+static UWord shmem__SecMap_do_GC(Bool really)
+{
+ UWord secmapW = 0;
+ Addr gaKey;
+ UWord examined = 0;
+ UWord ok_GCed = 0;
+
+ /* First invalidate the smCache */
+ smCache[0].gaKey = 1;
+ smCache[1].gaKey = 1;
+ smCache[2].gaKey = 1;
+ STATIC_ASSERT (3 == sizeof(smCache)/sizeof(smCache[0]));
+
+ VG_(initIterFM)( map_shmem );
+ while (VG_(nextIterFM)( map_shmem, &gaKey, &secmapW )) {
+ UWord i;
+ UWord j;
+ SecMap* sm = (SecMap*)secmapW;
+ tl_assert(sm->magic == SecMap_MAGIC);
+ Bool ok_to_GC = True;
+
+ examined++;
+
+ /* Deal with the LineZs */
+ for (i = 0; i < N_SECMAP_ZLINES && ok_to_GC; i++) {
+ LineZ* lineZ = &sm->linesZ[i];
+ ok_to_GC = lineZ->dict[0] == SVal_INVALID
+ || (lineZ->dict[0] == SVal_NOACCESS
+ && !SVal__isC (lineZ->dict[1])
+ && !SVal__isC (lineZ->dict[2])
+ && !SVal__isC (lineZ->dict[3]));
+ }
+ /* Deal with the LineFs */
+ for (i = 0; i < sm->linesF_size && ok_to_GC; i++) {
+ LineF* lineF = &sm->linesF[i];
+ if (!lineF->inUse)
+ continue;
+ for (j = 0; j < N_LINE_ARANGE && ok_to_GC; j++)
+ ok_to_GC = lineF->w64s[j] == SVal_NOACCESS;
+ }
+ if (ok_to_GC)
+ ok_GCed++;
+ if (ok_to_GC && really) {
+ SecMap *fm_sm;
+ Addr fm_gaKey;
+ /* We cannot remove a SecMap from map_shmem while iterating.
+ So, stop iteration, remove from map_shmem, recreate the iteration
+ on the next SecMap. */
+ VG_(doneIterFM) ( map_shmem );
+ /* No need to rcdec linesZ or linesF, these are all SVal_NOACCESS or
+ not in use. We just need to free the linesF. */
+ if (sm->linesF_size > 0) {
+ HG_(free)(sm->linesF);
+ stats__secmap_linesF_allocd -= sm->linesF_size;
+ stats__secmap_linesF_bytes -= sm->linesF_size * sizeof(LineF);
+ }
+ if (!VG_(delFromFM)(map_shmem, &fm_gaKey, (UWord*)&fm_sm, gaKey))
+ tl_assert (0);
+ stats__secmaps_in_map_shmem--;
+ tl_assert (gaKey == fm_gaKey);
+ tl_assert (sm == fm_sm);
+ stats__secmaps_scanGCed++;
+ push_SecMap_on_freelist (sm);
+ VG_(initIterAtFM) (map_shmem, gaKey + N_SECMAP_ARANGE);
+ }
+ }
+ VG_(doneIterFM)( map_shmem );
+
+ if (really) {
+ stats__secmaps_scanGC++;
+ /* Next GC when we approach the max allocated */
+ next_SecMap_GC_at = stats__secmaps_allocd - 1000;
+      /* Unless we GC-ed less than 10%: we then allow 10% more
+         allocation before GC-ing again. This avoids doing a lot of
+         costly GCs in the worst case: the 'growing phase' of an
+         application that allocates a lot of memory.
+         The worst case can be reproduced e.g. by
+            perf/memrw -t 30000000 -b 1000 -r 1 -l 1
+         which allocates around 30GB of memory. */
+ if (ok_GCed < stats__secmaps_allocd/10)
+ next_SecMap_GC_at = stats__secmaps_allocd + stats__secmaps_allocd/10;
+
+ }
+
+ if (VG_(clo_stats) && really) {
+ VG_(message)(Vg_DebugMsg,
+ "libhb: SecMap GC: #%lu scanned %lu, GCed %lu,"
+ " next GC at %lu\n",
+ stats__secmaps_scanGC, examined, ok_GCed,
+ next_SecMap_GC_at);
+ }
+
+ return ok_GCed;
+}
+
static SecMap* shmem__find_or_alloc_SecMap ( Addr ga )
{
SecMap* sm = shmem__find_SecMap ( ga );
if (LIKELY(sm)) {
+ if (CHECK_ZSM) tl_assert(is_sane_SecMap(sm));
return sm;
} else {
/* create a new one */
Addr gaKey = shmem__round_to_SecMap_base(ga);
- sm = shmem__alloc_SecMap();
+ sm = shmem__alloc_or_recycle_SecMap();
tl_assert(sm);
VG_(addToFM)( map_shmem, (UWord)gaKey, (UWord)sm );
+ stats__secmaps_in_map_shmem++;
+ if (CHECK_ZSM) tl_assert(is_sane_SecMap(sm));
return sm;
}
}
@@ -741,30 +906,30 @@
UWord i;
tl_assert(lineF->inUse);
for (i = 0; i < N_LINE_ARANGE; i++)
- rcinc(lineF->w64s[i]);
+ SVal__rcinc(lineF->w64s[i]);
}
static void rcdec_LineF ( LineF* lineF ) {
UWord i;
tl_assert(lineF->inUse);
for (i = 0; i < N_LINE_ARANGE; i++)
- rcdec(lineF->w64s[i]);
+ SVal__rcdec(lineF->w64s[i]);
}
static void rcinc_LineZ ( LineZ* lineZ ) {
tl_assert(lineZ->dict[0] != SVal_INVALID);
- rcinc(lineZ->dict[0]);
- if (lineZ->dict[1] != SVal_INVALID) rcinc(lineZ->dict[1]);
- if (lineZ->dict[2] != SVal_INVALID) rcinc(lineZ->dict[2]);
- if (lineZ->dict[3] != SVal_INVALID) rcinc(lineZ->dict[3]);
+ SVal__rcinc(lineZ->dict[0]);
+ if (lineZ->dict[1] != SVal_INVALID) SVal__rcinc(lineZ->dict[1]);
+ if (lineZ->dict[2] != SVal_INVALID) SVal__rcinc(lineZ->dict[2]);
+ if (lineZ->dict[3] != SVal_INVALID) SVal__rcinc(lineZ->dict[3]);
}
static void rcdec_LineZ ( LineZ* lineZ ) {
tl_assert(lineZ->dict[0] != SVal_INVALID);
- rcdec(lineZ->dict[0]);
- if (lineZ->dict[1] != SVal_INVALID) rcdec(lineZ->dict[1]);
- if (lineZ->dict[2] != SVal_INVALID) rcdec(lineZ->dict[2]);
- if (lineZ->dict[3] != SVal_INVALID) rcdec(lineZ->dict[3]);
+ SVal__rcdec(lineZ->dict[0]);
+ if (lineZ->dict[1] != SVal_INVALID) SVal__rcdec(lineZ->dict[1]);
+ if (lineZ->dict[2] != SVal_INVALID) SVal__rcdec(lineZ->dict[2]);
+ if (lineZ->dict[3] != SVal_INVALID) SVal__rcdec(lineZ->dict[3]);
}
inline
@@ -1138,10 +1303,11 @@
UShort descr;
/* pre: incoming tree[0..7] does not have any invalid shvals, in
particular no zeroes. */
- if (UNLIKELY(tree[7] == SVal_INVALID || tree[6] == SVal_INVALID
- || tree[5] == SVal_INVALID || tree[4] == SVal_INVALID
- || tree[3] == SVal_INVALID || tree[2] == SVal_INVALID
- || tree[1] == SVal_INVALID || tree[0] == SVal_INVALID))
+ if (CHECK_ZSM
+ && UNLIKELY(tree[7] == SVal_INVALID || tree[6] == SVal_INVALID
+ || tree[5] == SVal_INVALID || tree[4] == SVal_INVALID
+ || tree[3] == SVal_INVALID || tree[2] == SVal_INVALID
+ || tree[1] == SVal_INVALID || tree[0] == SVal_INVALID))
tl_assert(0);
descr = TREE_DESCR_8_7 | TREE_DESCR_8_6 | TREE_DESCR_8_5
@@ -1449,28 +1615,44 @@
stats__cache_F_fetches++;
} else {
for (i = 0; i < N_LINE_ARANGE; i++) {
- SVal sv;
UWord ix = read_twobit_array( lineZ->ix2s, i );
- /* correct, but expensive: tl_assert(ix >= 0 && ix <= 3); */
- sv = lineZ->dict[ix];
- tl_assert(sv != SVal_INVALID);
- cl->svals[i] = sv;
+ if (CHECK_ZSM) tl_assert(ix >= 0 && ix <= 3);
+ cl->svals[i] = lineZ->dict[ix];
+ if (CHECK_ZSM) tl_assert(cl->svals[i] != SVal_INVALID);
}
stats__cache_Z_fetches++;
}
normalise_CacheLine( cl );
}
-static void shmem__invalidate_scache ( void ) {
+/* Invalidates the cachelines corresponding to the given range, which
+   must start and end on a cacheline boundary. */
+static void shmem__invalidate_scache_range (Addr ga, SizeT szB)
+{
Word wix;
- if (0) VG_(printf)("%s","scache inval\n");
- tl_assert(!is_valid_scache_tag(1));
- for (wix = 0; wix < N_WAY_NENT; wix++) {
- cache_shmem.tags0[wix] = 1/*INVALID*/;
+
+ /* ga must be on a cacheline boundary. */
+ tl_assert (is_valid_scache_tag (ga));
+ /* szB must be a multiple of cacheline size. */
+ tl_assert (0 == (szB & (N_LINE_ARANGE - 1)));
+
+
+ Word ga_ix = (ga >> N_LINE_BITS) & (N_WAY_NENT - 1);
+ Word nwix = szB / N_LINE_ARANGE;
+
+ if (nwix > N_WAY_NENT)
+      nwix = N_WAY_NENT; // no need to check the same entry several times.
+
+ for (wix = 0; wix < nwix; wix++) {
+ if (address_in_range(cache_shmem.tags0[ga_ix], ga, szB))
+ cache_shmem.tags0[ga_ix] = 1/*INVALID*/;
+ ga_ix++;
+ if (ga_ix == N_WAY_NENT)
+ ga_ix = 0;
}
- stats__cache_invals++;
}
+
static void shmem__flush_and_invalidate_scache ( void ) {
Word wix;
Addr tag;
@@ -1486,8 +1668,7 @@
}
cache_shmem.tags0[wix] = 1/*INVALID*/;
}
- stats__cache_flushes++;
- stats__cache_invals++;
+ stats__cache_flushes_invals++;
}
@@ -1747,18 +1928,19 @@
}
-static void zsm_init ( void(*p_rcinc)(SVal), void(*p_rcdec)(SVal) )
+static void zsm_init ( void )
{
tl_assert( sizeof(UWord) == sizeof(Addr) );
- rcinc = p_rcinc;
- rcdec = p_rcdec;
-
tl_assert(map_shmem == NULL);
map_shmem = VG_(newFM)( HG_(zalloc), "libhb.zsm_init.1 (map_shmem)",
HG_(free),
NULL/*unboxed UWord cmp*/);
- shmem__invalidate_scache();
+ /* Invalidate all cache entries. */
+ tl_assert(!is_valid_scache_tag(1));
+ for (UWord wix = 0; wix < N_WAY_NENT; wix++) {
+ cache_shmem.tags0[wix] = 1/*INVALID*/;
+ }
/* a SecMap must contain an integral number of CacheLines */
tl_assert(0 == (N_SECMAP_ARANGE % N_LINE_ARANGE));
@@ -2817,12 +2999,13 @@
VG_(printf)("<<GC ends, next gc at %ld>>\n", vts_next_GC_at);
}
+ stats__vts_tab_GC++;
if (VG_(clo_stats)) {
- static UInt ctr = 1;
tl_assert(nTab > 0);
VG_(message)(Vg_DebugMsg,
- "libhb: VTS GC: #%u old size %lu live %lu (%2llu%%)\n",
- ctr++, nTab, nLive, (100ULL * (ULong)nLive) / (ULong)nTab);
+ "libhb: VTS GC: #%lu old size %lu live %lu (%2llu%%)\n",
+ stats__vts_tab_GC,
+ nTab, nLive, (100ULL * (ULong)nLive) / (ULong)nTab);
}
/* ---------- END VTS GC ---------- */
@@ -3129,14 +3312,14 @@
}
/* And we're done. Bwahahaha. Ha. Ha. Ha. */
+ stats__vts_pruning++;
if (VG_(clo_stats)) {
- static UInt ctr = 1;
tl_assert(nTab > 0);
VG_(message)(
Vg_DebugMsg,
- "libhb: VTS PR: #%u before %lu (avg sz %lu) "
+ "libhb: VTS PR: #%lu before %lu (avg sz %lu) "
"after %lu (avg sz %lu)\n",
- ctr++,
+ stats__vts_pruning,
nBeforePruning, nSTSsBefore / (nBeforePruning ? nBeforePruning : 1),
nAfterPruning, nSTSsAfter / (nAfterPruning ? nAfterPruning : 1)
);
@@ -3797,7 +3980,7 @@
}
/* Direct callback from lib_zsm. */
-static void SVal__rcinc ( SVal s ) {
+static inline void SVal__rcinc ( SVal s ) {
if (SVal__isC(s)) {
VtsID__rcinc( SVal__unC_Rmin(s) );
VtsID__rcinc( SVal__unC_Wmin(s) );
@@ -3805,7 +3988,7 @@
}
/* Direct callback from lib_zsm. */
-static void SVal__rcdec ( SVal s ) {
+static inline void SVal__rcdec ( SVal s ) {
if (SVal__isC(s)) {
VtsID__rcdec( SVal__unC_Rmin(s) );
VtsID__rcdec( SVal__unC_Wmin(s) );
@@ -5941,7 +6124,7 @@
VtsID__invalidate_caches();
// initialise shadow memory
- zsm_init( SVal__rcinc, SVal__rcdec );
+ zsm_init( );
thr = Thr__new();
vi = VtsID__mk_Singleton( thr, 1 );
@@ -6006,8 +6189,17 @@
VG_(printf)(" linesF: %'10lu allocd (%'12lu bytes occupied)\n",
stats__secmap_linesF_allocd,
stats__secmap_linesF_bytes);
- VG_(printf)(" secmaps: %'10lu iterator steppings\n",
- stats__secmap_iterator_steppings);
+ VG_(printf)(" secmaps: %'10lu in map (can be scanGCed %'5lu)"
+ " #%lu scanGC \n",
+ stats__secmaps_in_map_shmem,
+                  shmem__SecMap_do_GC(False /* don't really do the GC, just count */),
+ stats__secmaps_scanGC);
+ tl_assert (VG_(sizeFM) (map_shmem) == stats__secmaps_in_map_shmem);
+ VG_(printf)(" secmaps: %'10lu in freelist,"
+ " total (scanGCed %'lu, ssetGCed %'lu)\n",
+ SecMap_freelist_length(),
+ stats__secmaps_scanGCed,
+ stats__secmaps_ssetGCed);
VG_(printf)(" secmaps: %'10lu searches (%'12lu slow)\n",
stats__secmaps_search, stats__secmaps_search_slow);
@@ -6018,8 +6210,8 @@
stats__cache_Z_fetches, stats__cache_F_fetches );
VG_(printf)(" cache: %'14lu Z-wback, %'14lu F-wback\n",
stats__cache_Z_wbacks, stats__cache_F_wbacks );
- VG_(printf)(" cache: %'14lu invals, %'14lu flushes\n",
- stats__cache_invals, stats__cache_flushes );
+ VG_(printf)(" cache: %'14lu flushes_invals\n",
+ stats__cache_flushes_invals );
VG_(printf)(" cache: %'14llu arange_New %'14llu direct-to-Zreps\n",
stats__cache_make_New_arange,
stats__cache_make_New_inZrep);
@@ -6044,17 +6236,19 @@
stats__cline_swrite08s );
VG_(printf)(" cline: s rd1s %'lu, s copy1s %'lu\n",
stats__cline_sread08s, stats__cline_scopy08s );
- VG_(printf)(" cline: splits: 8to4 %'12lu 4to2 %'12lu 2to1 %'12lu\n",
- stats__cline_64to32splits,
- stats__cline_32to16splits,
- stats__cline_16to8splits );
- VG_(printf)(" cline: pulldowns: 8to4 %'12lu 4to2 %'12lu 2to1 %'12lu\n",
- stats__cline_64to32pulldown,
- stats__cline_32to16pulldown,
- stats__cline_16to8pulldown );
+ VG_(printf)(" cline: splits: 8to4 %'12lu 4to2 %'12lu"
+ " 2to1 %'12lu\n",
+ stats__cline_64to32splits, stats__cline_32to16splits,
+ stats__cline_16to8splits );
+ VG_(printf)(" cline: pulldowns: 8to4 %'12lu 4to2 %'12lu"
+ " 2to1 %'12lu\n",
+ stats__cline_64to32pulldown, stats__cline_32to16pulldown,
+ stats__cline_16to8pulldown );
if (0)
- VG_(printf)(" cline: sizeof(CacheLineZ) %ld, covers %ld bytes of arange\n",
- (Word)sizeof(LineZ), (Word)N_LINE_ARANGE);
+ VG_(printf)(" cline: sizeof(CacheLineZ) %ld,"
+ " covers %ld bytes of arange\n",
+ (Word)sizeof(LineZ),
+ (Word)N_LINE_ARANGE);
VG_(printf)("%s","\n");
@@ -6068,21 +6262,23 @@
stats__join2_queries, stats__join2_misses);
VG_(printf)("%s","\n");
- VG_(printf)( " libhb: VTSops: tick %'lu, join %'lu, cmpLEQ %'lu\n",
- stats__vts__tick, stats__vts__join, stats__vts__cmpLEQ );
- VG_(printf)( " libhb: VTSops: cmp_structural %'lu (%'lu slow)\n",
- stats__vts__cmp_structural, stats__vts__cmp_structural_slow );
- VG_(printf)( " libhb: VTSset: find__or__clone_and_add %'lu (%'lu allocd)\n",
+ VG_(printf)(" libhb: VTSops: tick %'lu, join %'lu, cmpLEQ %'lu\n",
+ stats__vts__tick, stats__vts__join, stats__vts__cmpLEQ );
+ VG_(printf)(" libhb: VTSops: cmp_structural %'lu (%'lu slow)\n",
+ stats__vts__cmp_structural, stats__vts__cmp_structural_slow);
+ VG_(printf)(" libhb: VTSset: find__or__clone_and_add %'lu"
+ " (%'lu allocd)\n",
stats__vts_set__focaa, stats__vts_set__focaa_a );
VG_(printf)( " libhb: VTSops: indexAt_SLOW %'lu\n",
stats__vts__indexat_slow );
- show_vts_stats ("libhb stats");
VG_(printf)("%s","\n");
VG_(printf)(
" libhb: %ld entries in vts_table (approximately %lu bytes)\n",
VG_(sizeXA)( vts_tab ), VG_(sizeXA)( vts_tab ) * sizeof(VtsTE)
);
+ VG_(printf)(" libhb: #%lu vts_tab GC #%lu vts pruning\n",
+ stats__vts_tab_GC, stats__vts_pruning);
VG_(printf)( " libhb: %lu entries in vts_set\n",
VG_(sizeFM)( vts_set ) );
@@ -6440,22 +6636,289 @@
/* do nothing */
}
+
+/* Set lines zix_start to zix_end (inclusive) to NOACCESS. */
+static void zsm_secmap_line_range_noaccess (SecMap *sm,
+ UInt zix_start, UInt zix_end)
+{
+ for (UInt lz = zix_start; lz <= zix_end; lz++) {
+ LineZ* lineZ;
+ LineF* lineF;
+ lineZ = &sm->linesZ[lz];
+ if (lineZ->dict[0] != SVal_INVALID) {
+ rcdec_LineZ(lineZ);
+ } else {
+ UInt fix = (UInt)lineZ->dict[1];
+ tl_assert(sm->linesF);
+ tl_assert(sm->linesF_size > 0);
+ tl_assert(fix >= 0 && fix < sm->linesF_size);
+ lineF = &sm->linesF[fix];
+ rcdec_LineF(lineF);
+ lineF->inUse = False;
+ }
+ lineZ->dict[0] = SVal_NOACCESS;
+ lineZ->dict[1] = lineZ->dict[2] = lineZ->dict[3] = SVal_INVALID;
+ for (UInt i = 0; i < N_LINE_ARANGE/4; i++)
+ lineZ->ix2s[i] = 0; /* all refer to dict[0] */
+ }
+}
+
+/* Set the given range to SVal_NOACCESS in-place in the secmap.
+   a must be cacheline aligned. len must be a multiple of the
+   cacheline size and must be < N_SECMAP_ARANGE. */
+static void zsm_sset_range_noaccess_in_secmap(Addr a, SizeT len)
+{
+ tl_assert (is_valid_scache_tag (a));
+ tl_assert (0 == (len & (N_LINE_ARANGE - 1)));
+ tl_assert (len < N_SECMAP_ARANGE);
+
+ SecMap *sm1 = shmem__find_SecMap (a);
+ SecMap *sm2 = shmem__find_SecMap (a + len - 1);
+ UWord zix_start = shmem__get_SecMap_offset(a ) >> N_LINE_BITS;
+ UWord zix_end = shmem__get_SecMap_offset(a + len - 1) >> N_LINE_BITS;
+
+ if (sm1) {
+ if (CHECK_ZSM) tl_assert(is_sane_SecMap(sm1));
+ zsm_secmap_line_range_noaccess (sm1, zix_start,
+ sm1 == sm2 ? zix_end : N_SECMAP_ZLINES-1);
+ }
+ if (sm2 && sm1 != sm2) {
+ if (CHECK_ZSM) tl_assert(is_sane_SecMap(sm2));
+ zsm_secmap_line_range_noaccess (sm2, 0, zix_end);
+ }
+}
+
+/* Set the given address range to SVal_NOACCESS.
+   SecMaps fully set to SVal_NOACCESS will be pushed onto the
+   SecMap_freelist. */
+static void zsm_sset_range_noaccess (Addr addr, SizeT len)
+{
+ /*
+ BPC = Before, Partial Cacheline, = addr
+ (i.e. starting inside a cacheline/inside a SecMap)
+ BFC = Before, Full Cacheline(s), but not full SecMap
+ (i.e. starting inside a SecMap)
+ FSM = Full SecMap(s)
+ (i.e. starting a SecMap)
+ AFC = After, Full Cacheline(s), but not full SecMap
+ (i.e. first address after the full SecMap(s))
+ APC = After, Partial Cacheline, i.e. first address after the
+ full CacheLines).
+ ARE = After Range End = addr+len = first address not part of the range.
+
+ If addr starts a Cacheline, then BPC == BFC.
+ If addr starts a SecMap, then BPC == BFC == FSM.
+ If addr+len starts a SecMap, then APC == ARE == AFC
+ If addr+len starts a Cacheline, then APC == ARE
+ */
+ Addr ARE = addr + len;
+ Addr BPC = addr;
+ Addr BFC = ROUNDUP(BPC, N_LINE_ARANGE);
+ Addr FSM = ROUNDUP(BPC, N_SECMAP_ARANGE);
+ Addr AFC = ROUNDDN(ARE, N_SECMAP_ARANGE);
+ Addr APC = ROUNDDN(ARE, N_LINE_ARANGE);
+ SizeT Plen = len; // Plen will be split between the following:
+ SizeT BPClen;
+ SizeT BFClen;
+ SizeT FSMlen;
+ SizeT AFClen;
+ SizeT APClen;
+
+  /* Consumes from Plen the number of bytes between from and to.
+     from and to must be aligned on a multiple of round.
+     The length consumed will be a multiple of round, with
+     a maximum of Plen. */
+# define PlenCONSUME(from, to, round, consumed) \
+ do { \
+ if (from < to) { \
+ if (to - from < Plen) \
+ consumed = to - from; \
+ else \
+ consumed = ROUNDDN(Plen, round); \
+ } else { \
+ consumed = 0; \
+ } \
+ Plen -= consumed; } while (0)
+
+ PlenCONSUME(BPC, BFC, 1, BPClen);
+ PlenCONSUME(BFC, FSM, N_LINE_ARANGE, BFClen);
+ PlenCONSUME(FSM, AFC, N_SECMAP_ARANGE, FSMlen);
+ PlenCONSUME(AFC, APC, N_LINE_ARANGE, AFClen);
+ PlenCONSUME(APC, ARE, 1, APClen);
+
+ if (0)
+ VG_(printf) ("addr %p[%ld] ARE %p"
+ " BPC %p[%ld] BFC %p[%ld] FSM %p[%ld]"
+ " AFC %p[%ld] APC %p[%ld]\n",
+ (void*)addr, len, (void*)ARE,
+ (void*)BPC, BPClen, (void*)BFC, BFClen, (void*)FSM, FSMlen,
+ (void*)AFC, AFClen, (void*)APC, APClen);
+
+ tl_assert (Plen == 0);
+
+ /* Set to NOACCESS pieces before and after not covered by entire SecMaps. */
+
+ /* First we set the partial cachelines. This is done through the cache. */
+ if (BPClen > 0)
+ zsm_sset_range_SMALL (BPC, BPClen, SVal_NOACCESS);
+ if (APClen > 0)
+ zsm_sset_range_SMALL (APC, APClen, SVal_NOACCESS);
+
+ /* After this, we will not use the cache anymore. We will directly work
+ in-place on the z shadow memory in SecMap(s).
+ So, we invalidate the cachelines for the whole range we are setting
+ to NOACCESS below. */
+ shmem__invalidate_scache_range (BFC, APC - BFC);
+
+ if (BFClen > 0)
+ zsm_sset_range_noaccess_in_secmap (BFC, BFClen);
+ if (AFClen > 0)
+ zsm_sset_range_noaccess_in_secmap (AFC, AFClen);
+
+ if (FSMlen > 0) {
+ /* Set to NOACCESS all the SecMaps, pushing the SecMaps to the
+ free list. */
+ Addr sm_start = FSM;
+ while (sm_start < AFC) {
+ SecMap *sm = shmem__find_SecMap (sm_start);
+ if (sm) {
+ Addr gaKey;
+ SecMap *fm_sm;
+
+ if (CHECK_ZSM) tl_assert(is_sane_SecMap(sm));
+ for (UInt lz = 0; lz < N_SECMAP_ZLINES; lz++) {
+ if (sm->linesZ[lz].dict[0] != SVal_INVALID)
+ rcdec_LineZ(&sm->linesZ[lz]);
+ }
+ for (UInt lf = 0; lf < sm->linesF_size; lf++) {
+ if (sm->linesF[lf].inUse)
+ rcdec_LineF (&sm->linesF[lf]);
+ }
+ if (sm->linesF_size > 0) {
+ HG_(free)(sm->linesF);
+ stats__secmap_linesF_allocd -= sm->linesF_size;
+ stats__secmap_linesF_bytes -= sm->linesF_size * sizeof(LineF);
+ }
+ if (!VG_(delFromFM)(map_shmem, &gaKey, (UWord*)&fm_sm, sm_start))
+ tl_assert (0);
+ stats__secmaps_in_map_shmem--;
+ tl_assert (gaKey == sm_start);
+ tl_assert (sm == fm_sm);
+ stats__secmaps_ssetGCed++;
+ push_SecMap_on_freelist (sm);
+ }
+ sm_start += N_SECMAP_ARANGE;
+ }
+ tl_assert (sm_start == AFC);
+
+    /* The above loop might have left copies of freed SecMaps in the
+       smCache, so clear them. */
+ if (address_in_range(smCache[0].gaKey, FSM, FSMlen)) {
+ smCache[0].gaKey = 1;
+ smCache[0].sm = NULL;
+ }
+ if (address_in_range(smCache[1].gaKey, FSM, FSMlen)) {
+ smCache[1].gaKey = 1;
+ smCache[1].sm = NULL;
+ }
+ if (address_in_range(smCache[2].gaKey, FSM, FSMlen)) {
+ smCache[2].gaKey = 1;
+ smCache[2].sm = NULL;
+ }
+ STATIC_ASSERT (3 == sizeof(smCache)/sizeof(SMCacheEnt));
+ }
+}
+
void libhb_srange_noaccess_AHAE ( Thr* thr, Addr a, SizeT szB )
{
/* This really does put the requested range in NoAccess. It's
expensive though. */
SVal sv = SVal_NOACCESS;
tl_assert(is_sane_SVal_C(sv));
- zsm_sset_range( a, szB, sv );
+ if (LIKELY(szB < 2 * N_LINE_ARANGE))
+ zsm_sset_range_SMALL (a, szB, SVal_NOACCESS);
+ else
+ zsm_sset_range_noaccess (a, szB);
Filter__clear_range( thr->filter, a, szB );
}
+/* Works byte at a time. Can be optimised if needed. */
+UWord libhb_srange_get_abits (Addr a, UChar *abits, SizeT len)
+{
+ UWord anr = 0; // nr of bytes addressable.
+
+  /* Get the accessibility of each byte. Take care not to create a
+     SecMap or LineZ when checking whether a byte is addressable.
+
+     Note: this is used for a client request, so performance is deemed
+     not critical. For simplicity, we therefore work byte by byte.
+     Performance could be improved by working with full cachelines
+     or full SecMaps when reaching a cacheline or SecMap boundary. */
+ for (SizeT i = 0; i < len; i++) {
+ SVal sv = SVal_INVALID;
+ Addr b = a + i;
+ Addr tag = b & ~(N_LINE_ARANGE - 1);
+ UWord wix = (b >> N_LINE_BITS) & (N_WAY_NENT - 1);
+ UWord cloff = get_cacheline_offset(b);
+
+      /* Note: we do not use get_cacheline(b), to avoid creating
+         cachelines and/or SecMaps for non-addressable bytes. */
+ if (tag == cache_shmem.tags0[wix]) {
+ CacheLine copy = cache_shmem.lyns0[wix];
+ /* We work on a copy of the cacheline, as we do not want to
+ record the client request as a real read.
+ The below is somewhat similar to zsm_sapply08__msmcread but
+ avoids side effects on the cache. */
+ UWord toff = get_tree_offset(b); /* == 0 .. 7 */
+ UWord tno = get_treeno(b);
+ UShort descr = copy.descrs[tno];
+ if (UNLIKELY( !(descr & (TREE_DESCR_8_0 << toff)) )) {
+          SVal* tree = &copy.svals[tno << 3];
+ copy.descrs[tno] = pulldown_to_8(tree, toff, descr);
+ }
+ sv = copy.svals[cloff];
+ } else {
+ /* Byte not found in the cacheline. Search for a SecMap. */
+ SecMap *sm = shmem__find_SecMap(b);
+ LineZ *lineZ;
+ if (sm == NULL)
+ sv = SVal_NOACCESS;
+ else {
+ UWord zix = shmem__get_SecMap_offset(b) >> N_LINE_BITS;
+ lineZ = &sm->linesZ[zix];
+ if (lineZ->dict[0] == SVal_INVALID) {
+ UInt fix = (UInt)lineZ->dict[1];
+ sv = sm->linesF[fix].w64s[cloff];
+ } else {
+ UWord ix = read_twobit_array( lineZ->ix2s, cloff );
+ sv = lineZ->dict[ix];
+ }
+ }
+ }
+
+ tl_assert (sv != SVal_INVALID);
+ if (sv == SVal_NOACCESS) {
+ if (abits)
+ abits[i] = 0x00;
+ } else {
+ if (abits)
+ abits[i] = 0xff;
+ anr++;
+ }
+ }
+
+ return anr;
+}
+
+
void libhb_srange_untrack ( Thr* thr, Addr a, SizeT szB )
{
SVal sv = SVal_NOACCESS;
tl_assert(is_sane_SVal_C(sv));
if (0 && TRACEME(a,szB)) trace(thr,a,szB,"untrack-before");
- zsm_sset_range( a, szB, sv );
+ if (LIKELY(szB < 2 * N_LINE_ARANGE))
+ zsm_sset_range_SMALL (a, szB, SVal_NOACCESS);
+ else
+ zsm_sset_range_noaccess (a, szB);
Filter__clear_range( thr->filter, a, szB );
if (0 && TRACEME(a,szB)) trace(thr,a,szB,"untrack-after ");
}
@@ -6492,18 +6955,26 @@
*/
if (UNLIKELY(stats__ctxt_tab_curr > N_RCEC_TAB/2
&& stats__ctxt_tab_curr + 1000 >= stats__ctxt_tab_max
- && stats__ctxt_tab_curr * 0.75 > RCEC_referenced))
+ && (stats__ctxt_tab_curr * 3)/4 > RCEC_referenced))
do_RCEC_GC();
- /* If there are still freelist entries available, no need for a
- GC. */
- if (vts_tab_freelist != VtsID_INVALID)
- return;
- /* So all the table entries are full, and we're having to expand
- the table. But did we hit the threshhold point yet? */
- if (VG_(sizeXA)( vts_tab ) < vts_next_GC_at)
- return;
- vts_tab__do_GC( False/*don't show stats*/ );
+ /* If no free entries are available (all the table entries are full)
+ and we have hit the threshold point, then do a GC. */
+ Bool vts_tab_GC = vts_tab_freelist == VtsID_INVALID
+ && VG_(sizeXA)( vts_tab ) >= vts_next_GC_at;
+ if (UNLIKELY (vts_tab_GC))
+ vts_tab__do_GC( False/*don't show stats*/ );
+
+ /* Do a scan GC of the SecMaps when
+ (1) there is no SecMap in the freelist
+ and (2) the current nr of live SecMaps exceeds the threshold. */
+ if (UNLIKELY(SecMap_freelist == NULL
+ && stats__secmaps_in_map_shmem >= next_SecMap_GC_at)) {
+ // If we did a vts tab GC, then no need to flush the cache again.
+ if (!vts_tab_GC)
+ zsm_flush_cache();
+ shmem__SecMap_do_GC(True);
+ }
/* Check the reference counts (expensive) */
if (CHECK_CEM)
Modified: branches/ASPACEM_TWEAKS/helgrind/tests/Makefile.am
==============================================================================
--- branches/ASPACEM_TWEAKS/helgrind/tests/Makefile.am (original)
+++ branches/ASPACEM_TWEAKS/helgrind/tests/Makefile.am Wed May 13 15:30:09 2015
@@ -49,6 +49,7 @@
pth_spinlock.vgtest pth_spinlock.stdout.exp pth_spinlock.stderr.exp \
rwlock_race.vgtest rwlock_race.stdout.exp rwlock_race.stderr.exp \
rwlock_test.vgtest rwlock_test.stdout.exp rwlock_test.stderr.exp \
+ shmem_abits.vgtest shmem_abits.stdout.exp shmem_abits.stderr.exp \
stackteardown.vgtest stackteardown.stdout.exp stackteardown.stderr.exp \
t2t_laog.vgtest t2t_laog.stdout.exp t2t_laog.stderr.exp \
tc01_simple_race.vgtest tc01_simple_race.stdout.exp \
@@ -125,6 +126,7 @@
locked_vs_unlocked2 \
locked_vs_unlocked3 \
pth_destroy_cond \
+ shmem_abits \
stackteardown \
t2t \
tc01_simple_race \
Modified: branches/ASPACEM_TWEAKS/none/tests/darwin/Makefile.am
==============================================================================
--- branches/ASPACEM_TWEAKS/none/tests/darwin/Makefile.am (original)
+++ branches/ASPACEM_TWEAKS/none/tests/darwin/Makefile.am Wed May 13 15:30:09 2015
@@ -6,11 +6,13 @@
EXTRA_DIST = \
access_extended.stderr.exp access_extended.vgtest \
apple-main-arg.stderr.exp apple-main-arg.vgtest \
+ bug254164.stderr.exp bug254164.vgtest \
rlimit.stderr.exp rlimit.vgtest
check_PROGRAMS = \
access_extended \
apple-main-arg \
+ bug254164 \
rlimit
Modified: branches/ASPACEM_TWEAKS/none/tests/linux/Makefile.am
==============================================================================
--- branches/ASPACEM_TWEAKS/none/tests/linux/Makefile.am (original)
+++ branches/ASPACEM_TWEAKS/none/tests/linux/Makefile.am Wed May 13 15:30:09 2015
@@ -11,6 +11,8 @@
mremap.vgtest \
mremap2.stderr.exp mremap2.stdout.exp mremap2.vgtest \
mremap3.stderr.exp mremap3.stdout.exp mremap3.vgtest \
+ mremap4.stderr.exp mremap4.vgtest \
+ mremap5.stderr.exp mremap5.vgtest \
pthread-stack.stderr.exp pthread-stack.vgtest \
stack-overflow.stderr.exp stack-overflow.vgtest
@@ -21,6 +23,8 @@
mremap \
mremap2 \
mremap3 \
+ mremap4 \
+ mremap5 \
pthread-stack \
stack-overflow
|
Author: florian
Date: Wed May 13 15:02:17 2015
New Revision: 15223
Log:
Replace NSegment::isCH with NSegment::whatsit which tells what this
segment is part of: heap, stack, break area. So it's a generalisation
of sorts.
This is handy occasionally and also improves the segment listing.
The change affects the mergeability of segments. Previously, the
SkAnonC segment that makes up the mapped part of the extensible client stack
could be merged with an adjacent SkAnonC segment. This is no longer the case.
And likewise for the SkAnonC segment that is the client break area.
I'm not unduly concerned. Even before this change it was possible for
two neighbouring SkAnonC segments to exist, namely, if one was part of the
client heap (isCH == 1) and the other was not (isCH == 0).
So it was never (well.... since r4789) true that segments were "maximally
merged".
Modified:
branches/ASPACEM_TWEAKS/coregrind/m_addrinfo.c
branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
branches/ASPACEM_TWEAKS/include/pub_tool_aspacemgr.h
branches/ASPACEM_TWEAKS/memcheck/mc_leakcheck.c
Modified: branches/ASPACEM_TWEAKS/coregrind/m_addrinfo.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_addrinfo.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_addrinfo.c Wed May 13 15:02:17 2015
@@ -264,10 +264,7 @@
const NSegment *seg = VG_(am_find_nsegment) (a);
/* Special case to detect the brk data segment. */
- if (seg != NULL
- && seg->kind == SkAnonC
- && VG_(brk_limit) >= seg->start
- && VG_(brk_limit) <= seg->end+1) {
+ if (seg != NULL && seg->whatsit == WiClientBreak) {
/* Address a is in an Anon Client segment which contains
VG_(brk_limit). So, this segment is the brk data segment
as initimg-linux.c:setup_client_dataseg maps an anonymous
Modified: branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c
==============================================================================
--- branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c (original)
+++ branches/ASPACEM_TWEAKS/coregrind/m_aspacemgr/aspacemgr-linux.c Wed May 13 15:02:17 2015
@@ -436,7 +436,7 @@
(ULong)seg->start, (ULong)seg->end, len_buf,
seg->hasR ? 'r' : '-', seg->hasW ? 'w' : '-',
seg->hasX ? 'x' : '-', seg->hasT ? 'T' : '-',
- seg->isCH ? 'H' : '-',
+ seg->whatsit,
show_ShrinkMode(seg->smode),
seg->dev, seg->ino, seg->offset,
ML_(am_segname_get_seqnr)(seg->fnIdx), seg->fnIdx,
@@ -471,7 +471,7 @@
(ULong)seg->start, (ULong)seg->end, len_buf,
seg->hasR ? 'r' : '-', seg->hasW ? 'w' : '-',
seg->hasX ? 'x' : '-', seg->hasT ? 'T' : '-',
- seg->isCH ? 'H' : '-'
+ seg->whatsit
);
break;
@@ -484,7 +484,7 @@
(ULong)seg->start, (ULong)seg->end, len_buf,
seg->hasR ? 'r' : '-', seg->hasW ? 'w' : '-',
seg->hasX ? 'x' : '-', seg->hasT ? 'T' : '-',
- seg->isCH ? 'H' : '-',
+ seg->whatsit,
seg->dev, seg->ino, seg->offset,
ML_(am_segname_get_seqnr)(seg->fnIdx), seg->fnIdx
);
@@ -498,7 +498,7 @@
(ULong)seg->start, (ULong)seg->end, len_buf,
seg->hasR ? 'r' : '-', seg->hasW ? 'w' : '-',
seg->hasX ? 'x' : '-', seg->hasT ? 'T' : '-',
- seg->isCH ? 'H' : '-',
+ seg->whatsit,
show_ShrinkMode(seg->smode)
);
break;
@@ -610,25 +610,26 @@
s->smode == SmFixed
&& s->dev == 0 && s->ino == 0 && s->offset == 0 && s->fnIdx == -1
&& !s->hasR && !s->hasW && !s->hasX && !s->hasT
- && !s->isCH;
+ && s->whatsit == WiUnknown;
case SkAnonC: case SkAnonV: case SkShmC:
return
s->smode == SmFixed
&& s->dev == 0 && s->ino == 0 && s->offset == 0 && s->fnIdx == -1
- && (s->kind==SkAnonC ? True : !s->isCH);
+ && (s->kind==SkAnonC ? True : s->whatsit == WiUnknown);
case SkFileC: case SkFileV:
return
s->smode == SmFixed
&& ML_(am_sane_segname)(s->fnIdx)
- && !s->isCH;
+ && s->whatsit == WiUnknown;
case SkResvn:
return
s->dev == 0 && s->ino == 0 && s->offset == 0 && s->fnIdx == -1
&& !s->hasR && !s->hasW && !s->hasX && !s->hasT
- && !s->isCH;
+ && (s->whatsit == WiClientBreak || s->whatsit == WiClientStack ||
+ s->whatsit == WiUnknown);
default:
return False;
@@ -660,7 +661,7 @@
case SkAnonC: case SkAnonV:
if (s1->hasR == s2->hasR && s1->hasW == s2->hasW
- && s1->hasX == s2->hasX && s1->isCH == s2->isCH) {
+ && s1->hasX == s2->hasX && s1->whatsit == s2->whatsit) {
s1->end = s2->end;
s1->hasT |= s2->hasT;
return True;
@@ -1452,7 +1453,8 @@
seg->mode = 0;
seg->offset = 0;
seg->fnIdx = -1;
- seg->hasR = seg->hasW = seg->hasX = seg->hasT = seg->isCH = False;
+ seg->hasR = seg->hasW = seg->hasX = seg->hasT = False;
+ seg->whatsit = WiUnknown;
}
/* Make an NSegment which holds a reservation. */
@@ -2592,7 +2594,7 @@
Addr addr = sr_Res(res);
Int ix = find_nsegment_idx(addr);
- nsegments[ix].isCH = True;
+ nsegments[ix].whatsit = WiClientHeap;
}
return res;
}
@@ -2735,8 +2737,8 @@
falls entirely within a single free segment. The returned Bool
indicates whether the creation succeeded. */
-static Bool create_reservation (Addr start, SizeT length,
- ShrinkMode smode, SSizeT extra)
+static Bool create_reservation (Addr start, SizeT length, ShrinkMode smode,
+ SSizeT extra, WhatsIt whatsit)
{
Int startI, endI;
NSegment seg;
@@ -2780,6 +2782,8 @@
reservation. */
seg.end = end1;
seg.smode = smode;
+ seg.whatsit = whatsit;
+
add_segment( &seg );
AM_SANITY_CHECK;
@@ -2931,7 +2935,8 @@
/* Try to create the data seg and associated reservation where
BASE says. */
- ok = create_reservation(resvn_start, resvn_size, SmLower, anon_size);
+ ok = create_reservation(resvn_start, resvn_size, SmLower, anon_size,
+ WiClientBreak);
if (!ok) {
/* Hmm, that didn't work. Well, let aspacem suggest an address
@@ -2940,7 +2945,8 @@
( 0/*floating*/, anon_size + resvn_size, &ok );
if (ok) {
resvn_start = anon_start + anon_size;
- ok = create_reservation(resvn_start, resvn_size, SmLower, anon_size);
+ ok = create_reservation(resvn_start, resvn_size, SmLower, anon_size,
+ WiClientBreak);
}
}
@@ -2949,7 +2955,12 @@
if (!ok)
return VG_(mk_SysRes_Error)( VKI_ENOMEM );
- return VG_(am_mmap_anon_fixed_client)( anon_start, anon_size, prot );
+ SysRes sres = VG_(am_mmap_anon_fixed_client)( anon_start, anon_size, prot );
+ if (! sr_isError(sres)) {
+ NSegment *seg = nsegments + find_nsegment_idx(sr_Res(sres));
+ seg->whatsit = WiClientBreak;
+ }
+ return sres;
}
/* Resize the client brk segment from OLDBRK to NEWBRK. Return an error if
@@ -3044,9 +3055,16 @@
/* Create a shrinkable reservation followed by an anonymous
segment. Together these constitute a growdown stack. */
- ok = create_reservation(resvn_start, resvn_size, SmUpper, anon_size);
- if (ok)
- return VG_(am_mmap_anon_fixed_client)( anon_start, anon_size, prot );
+ ok = create_reservation(resvn_start, resvn_size, SmUpper, anon_size,
+ WiClientStack);
+ if (ok) {
+ SysRes sres = VG_(am_mmap_anon_fixed_client)(anon_start, anon_size, prot);
+ if (! sr_isError(sres)) {
+ NSegment *seg = nsegments + find_nsegment_idx(sr_Res(sres));
+ seg->whatsit = WiClientStack;
+ }
+ return sres;
+ }
return VG_(mk_SysRes_Error)( VKI_ENOMEM );
}
@@ -3066,11 +3084,9 @@
const NSegment *seg = VG_(am_find_nsegment)(addr);
aspacem_assert(seg != NULL);
- /* TODO: the test "seg->kind == SkAnonC" is really inadequate,
- because although it tests whether the segment is mapped
- _somehow_, it doesn't check that it has the right permissions
- (r,w, maybe x) ? */
if (seg->kind == SkAnonC) {
+ /* Ought to be the extensible stack segment */
+ aspacem_assert(seg->whatsit == WiClientStack);
/* ADDR is already mapped. Nothing to do. */
*new_stack_base = seg->start; // not really "new" :)
return sres;
Modified: branches/ASPACEM_TWEAKS/include/pub_tool_aspacemgr.h
==============================================================================
--- branches/ASPACEM_TWEAKS/include/pub_tool_aspacemgr.h (original)
+++ branches/ASPACEM_TWEAKS/include/pub_tool_aspacemgr.h Wed May 13 15:02:17 2015
@@ -59,6 +59,17 @@
}
ShrinkMode;
+/* Describes what this segment is part of. The encoding was chosen
+ to simplify printing. */
+typedef
+ enum {
+ WiClientHeap = 'H', // SkAnonC ONLY
+ WiClientStack = 'S', // SkAnonC and SkResvn ONLY
+ WiClientBreak = 'B', // SkAnonC and SkResvn ONLY
+ WiUnknown = '-'
+ }
+ WhatsIt;
+
/* Describes a segment. Invariants:
kind == SkFree:
@@ -111,7 +122,8 @@
Bool hasX;
Bool hasT; // True --> translations have (or MAY have)
// been taken from this segment
- Bool isCH; // True --> is client heap (SkAnonC ONLY)
+ /* Identifies what this segment is part of */
+ WhatsIt whatsit;
}
NSegment;
Modified: branches/ASPACEM_TWEAKS/memcheck/mc_leakcheck.c
==============================================================================
--- branches/ASPACEM_TWEAKS/memcheck/mc_leakcheck.c (original)
+++ branches/ASPACEM_TWEAKS/memcheck/mc_leakcheck.c Wed May 13 15:02:17 2015
@@ -1629,7 +1629,7 @@
seg->kind == SkShmC);
if (!(seg->hasR && seg->hasW)) continue;
- if (seg->isCH) continue;
+ if (seg->whatsit == WiClientHeap) continue;
// Don't poke around in device segments as this may cause
// hangs. Include /dev/zero just in case someone allocated
|
|
From: Ivo R. <iv...@iv...> - 2015-05-13 12:46:07
|
Dear developers,

Please could you shed some light on why it is not allowed to return Ity_I1 from a clean helper (CCall)? There is an explicit check in ir_defs.c which barfs: "Iex.CCall.retty: cannot return :: Ity_I1".

I would like to use the return value from a clean helper as the guard condition for IRStmt_Exit. This requires an IRExpr of type Ity_I1.

Thank you for any insights,
I. |
|
From: <sv...@va...> - 2015-05-13 08:12:04
|
Author: florian
Date: Wed May 13 09:11:56 2015
New Revision: 15222
Log:
Function is_plausible_guest_addr should also consider SkShmC
segments.
Modified:
trunk/coregrind/m_redir.c
Modified: trunk/coregrind/m_redir.c
==============================================================================
--- trunk/coregrind/m_redir.c (original)
+++ trunk/coregrind/m_redir.c Wed May 13 09:11:56 2015
@@ -1537,7 +1537,8 @@
{
NSegment const* seg = VG_(am_find_nsegment)(a);
return seg != NULL
- && (seg->kind == SkAnonC || seg->kind == SkFileC)
+ && (seg->kind == SkAnonC || seg->kind == SkFileC ||
+ seg->kind == SkShmC)
&& (seg->hasX || seg->hasR); /* crude x86-specific hack */
}
|