From: Julian S. <js...@ac...> - 2005-01-20 00:44:55
|
> That's more like how I had envisaged function wrapping working. Use
> the existing intercept machinery to redirect the original function
> call, somehow passing the original function address as we do so.
>
> The wrapper would then call the real function, ensuring that this
> time the address didn't get redirected during translation. It would
> then get control again when the real function returned.

Exactly. This is the point I arrived at. The only problem -- and one I
cannot immediately see a clean solution for -- is how to know what the
real (non-redirected) function address is.

> The only problem then is the longjmp/exception case.

Do we even need to handle this case, for libpthread? For that matter,
can we also ignore recursion?

J
|
From: Eyal L. <ey...@ey...> - 2005-01-20 00:32:47
|
Jeremy Fitzhardinge wrote:
> On Thu, 2005-01-20 at 10:07 +1100, Eyal Lebedinsky wrote:
>> I get this report from a run:
>>
>> ==2005-01-20 08:04:14.204 32619== Thread 9:
>> ==2005-01-20 08:04:14.220 32619== Syscall param socketcall.send(msg) points to uninitialised byte(s)
>> ==2005-01-20 08:04:14.220 32619==    at 0x1C043A8E: send (in /lib/tls/libpthread-0.60.so)
>> ==2005-01-20 08:04:14.220 32619== Address 0x219C9749 is 57 bytes inside a block of size 12288 alloc'd
>> ==2005-01-20 08:04:14.220 32619==    at 0x1B906FE5: calloc (vg_replace_malloc.c:175)
>>
>> I know that I am sending uninitialised data, but in the past I got
>> a proper stack trace rather than just the 'send' message. Even the
>> 'calloc' message, without a stack, is not so helpful.
>>
>> Am I missing a new option? or is there a reason for this change?
>
> I think libpthread is compiled with -fomit-frame-pointer, which makes it
> hard to get good stack traces. I'm thinking about experimenting with
> libunwind to see if we can use it for stack traces; it understands the
> unwind info that gcc puts into new .o files, which should make it
> possible to get good backtraces in these cases.
>
> I'm not sure why calloc isn't getting a bit more backtrace. Make sure
> there are no -fomit-frame-pointers in the Valgrind makefiles.

For vg I do a different build than normal. I build with '-O0' and
nothing else (just some extra warn requests):

    -W -Wall -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align
    -Wconversion -Wredundant-decls -ansi
    -D_XOPEN_SOURCE=1 -D_GNU_SOURCE=1 -O0 -fno-inline -g

I should say I used to get the trace, this laconic report is recent.

> Oh, and that you're not using --num-callers=1.

I use '--num-callers=32' which I find good enough.

> J

--
Eyal Lebedinsky (ey...@ey...) <http://samba.org/eyal/>
	If attaching .zip rename to .dat
|
From: Tom H. <th...@cy...> - 2005-01-20 00:10:28
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> Why do we need general function wrapping? Currently all we care about
> is intercepting libpthread calls. I would prefer to write, in C, a
> libpthread stub library, and use the existing intercept mechanism to
> route all calls there. The stub library emits events -- using the
> client request mechanism -- to those who want to know, and calls onwards
> to the real pthread functions (my hands wave here). No need to mess with
> calling conventions, guest state layout or magic run-time code modification.
That's more like how I had envisaged function wrapping working. Use
the existing intercept machinery to redirect the original function
call, somehow passing the original function address as we do so.
The wrapper would then call the real function, ensuring that this
time the address didn't get redirected during translation. It would
then get control again when the real function returned. The only
problem then is the longjmp/exception case.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Julian S. <js...@ac...> - 2005-01-20 00:00:19
|
> I think, however, that it is a
> vast improvement over the outright functional bugs (and maintenance
> problems) which vg_libpthread had. And certainly better than not
> reporting anything as we do now.

I agree. We should make this work if we can.

> We could take advantage of the codegen. If we're generating code for
> the first basic block of a wrapped function, we could generate in the
> preamble:
>
>    call wrap_before_func
>
> wrap_before_func would then be able to inspect %ESP and get both the
> args and the return address. The value of TID+ESP+RETADDR will give us
> a unique cookie key to match the call to the return.

Who writes wrap_before_func? That has to understand the baseblock
layout and also the calling conventions to extract esp and retaddr,
and so is going to be machine specific.

> Inserting the call to wrap_after_func at R is very easy; it doesn't even
> require regenerating the BB. Currently, the first 16 bytes of each BB
> is a preamble which is solely concerned with decrementing and testing
> VG_(dispatch_ctr); we can easily do this in wrap_after_func, so we can
> just patch over the preamble with the call to wrap_after_func (and nop
> out the rest).

That will change drastically .. the new JIT (1) translates multiple
BBs at a time, and (2) actually doesn't do translation chaining as I
could not think of a clean way to do this portably.

The proposal leaves me with a nasty feeling that it will introduce all
sorts of complex inter-component dependencies and generally be a
maintenance and portability problem later.

I would prefer a solution which didn't involve so much magic in the
JIT. Why do we need general function wrapping? Currently all we care
about is intercepting libpthread calls. I would prefer to write, in C,
a libpthread stub library, and use the existing intercept mechanism to
route all calls there. The stub library emits events -- using the
client request mechanism -- to those who want to know, and calls
onwards to the real pthread functions (my hands wave here). No need to
mess with calling conventions, guest state layout or magic run-time
code modification.

The cookie idea seems like the kernel of something useful -- that is,
a clean statement of the semantics of function wrapping in the
presence of recursion, threads, and functions which don't necessarily
return.

J
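[Editor's note: the stub-library idea above -- emit an event, then call onwards to the real function -- can be sketched in plain C. All names here are illustrative; in Valgrind the event would be a client request and the forwarding target would be the real, non-redirected pthread function.]

```c
#include <stdio.h>

/* Event callback type: stands in for the client request mechanism. */
typedef void (*event_fn)(const char *name);

static event_fn emit_event;   /* whoever wants to know registers here */
static int lock_count;        /* lets us observe the forwarded call   */

/* Stands in for the real (non-redirected) pthread_mutex_lock. */
static int real_mutex_lock(void)
{
    lock_count++;
    return 0;
}

/* Stub-library entry point: emit an event, then call onwards. */
static int stub_mutex_lock(void)
{
    if (emit_event)
        emit_event("pthread_mutex_lock");
    return real_mutex_lock();
}
```

The hard part, as noted in the reply above, is obtaining the real function's address without it being redirected again during translation; this sketch simply assumes it is known.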
|
From: Jeremy F. <je...@go...> - 2005-01-19 23:37:07
|
CVS commit by fitzhardinge:
Make sure both spinning threads have started before sleeping. Yet another attempt to get
something useful out of this test.
M +11 -6 yield.c 1.5
--- valgrind/none/tests/yield.c #1.4:1.5
@@ -10,6 +10,7 @@
static pthread_mutex_t m_go = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c_go = PTHREAD_COND_INITIALIZER;
+static pthread_cond_t c_running = PTHREAD_COND_INITIALIZER;
-static volatile int alive;
+static volatile int alive, running;
static int spin;
@@ -21,4 +22,6 @@ static void *spinner(void *v)
while(!alive)
pthread_cond_wait(&c_go, &m_go);
+ running++;
+ pthread_cond_signal(&c_running);
pthread_mutex_unlock(&m_go);
@@ -34,4 +37,6 @@ static void *rep_nopper(void *v)
while(!alive)
pthread_cond_wait(&c_go, &m_go);
+ running++;
+ pthread_cond_signal(&c_running);
pthread_mutex_unlock(&m_go);
@@ -59,4 +64,8 @@ int main()
alive = 1;
pthread_cond_broadcast(&c_go);
+
+ /* make sure they both get started */
+ while(running < 2)
+ pthread_cond_wait(&c_running, &m_go);
pthread_mutex_unlock(&m_go);
@@ -71,9 +80,5 @@ int main()
spin, rep_nop, (float)rep_nop / spin);
- /* We expect that spinning was faster than rep_nop, but that
- rep_nop made at least .1% progress of the spin. (This is
- fairly pessimistic, but the non-determinism of this test
- makes it hard to be more precise.) */
- if (spin > rep_nop && ((float)rep_nop / spin) >= .001)
+ if (spin > rep_nop)
printf("PASS\n");
else
|
|
From: Jeremy F. <je...@go...> - 2005-01-19 23:15:31
|
On Thu, 2005-01-20 at 10:07 +1100, Eyal Lebedinsky wrote:
> I get this report from a run:
>
> ==2005-01-20 08:04:14.204 32619== Thread 9:
> ==2005-01-20 08:04:14.220 32619== Syscall param socketcall.send(msg) points to uninitialised byte(s)
> ==2005-01-20 08:04:14.220 32619==    at 0x1C043A8E: send (in /lib/tls/libpthread-0.60.so)
> ==2005-01-20 08:04:14.220 32619== Address 0x219C9749 is 57 bytes inside a block of size 12288 alloc'd
> ==2005-01-20 08:04:14.220 32619==    at 0x1B906FE5: calloc (vg_replace_malloc.c:175)
>
> I know that I am sending uninitialised data, but in the past I got
> a proper stack trace rather than just the 'send' message. Even the
> 'calloc' message, without a stack, is not so helpful.
>
> Am I missing a new option? or is there a reason for this change?

I think libpthread is compiled with -fomit-frame-pointer, which makes
it hard to get good stack traces. I'm thinking about experimenting
with libunwind to see if we can use it for stack traces; it
understands the unwind info that gcc puts into new .o files, which
should make it possible to get good backtraces in these cases.

I'm not sure why calloc isn't getting a bit more backtrace. Make sure
there are no -fomit-frame-pointers in the Valgrind makefiles.

Oh, and that you're not using --num-callers=1.

J
|
From: Eyal L. <ey...@ey...> - 2005-01-19 23:07:27
|
I get this report from a run:

==2005-01-20 08:04:14.204 32619== Thread 9:
==2005-01-20 08:04:14.220 32619== Syscall param socketcall.send(msg) points to uninitialised byte(s)
==2005-01-20 08:04:14.220 32619==    at 0x1C043A8E: send (in /lib/tls/libpthread-0.60.so)
==2005-01-20 08:04:14.220 32619== Address 0x219C9749 is 57 bytes inside a block of size 12288 alloc'd
==2005-01-20 08:04:14.220 32619==    at 0x1B906FE5: calloc (vg_replace_malloc.c:175)

I know that I am sending uninitialised data, but in the past I got
a proper stack trace rather than just the 'send' message. Even the
'calloc' message, without a stack, is not so helpful.

Am I missing a new option? or is there a reason for this change?

--
Eyal Lebedinsky (ey...@ey...) <http://samba.org/eyal/>
	If attaching .zip rename to .dat
|
From: Jeremy F. <je...@go...> - 2005-01-19 22:55:05
|
CVS commit by fitzhardinge:
Previous change to fix bug 97407 was not really correct. This is better.
M +2 -1 vg_scheduler.c 1.215
M +4 -0 linux/core_os.h 1.4
M +7 -4 linux/sema.c 1.3
--- valgrind/coregrind/vg_scheduler.c #1.214:1.215
@@ -606,5 +606,6 @@ static void sched_fork_cleanup(ThreadId
}
- /* re-init the sema */
+ /* re-init and take the sema */
+ VG_(sema_deinit)(&run_sema);
VG_(sema_init)(&run_sema);
VG_(sema_down)(&run_sema);
--- valgrind/coregrind/linux/sema.c #1.2:1.3
@@ -58,8 +58,4 @@ void VG_(sema_init)(vg_sema_t *sema)
void VG_(sema_init)(vg_sema_t *sema)
{
- if (sema->pipe[0] >= VG_(fd_hard_limit)) {
- VG_(close)(sema->pipe[0]);
- VG_(close)(sema->pipe[1]);
- }
VG_(pipe)(sema->pipe);
sema->pipe[0] = VG_(safe_fd)(sema->pipe[0]);
@@ -72,4 +68,11 @@ void VG_(sema_init)(vg_sema_t *sema)
}
+void VG_(sema_deinit)(vg_sema_t *sema)
+{
+ VG_(close)(sema->pipe[0]);
+ VG_(close)(sema->pipe[1]);
+ sema->pipe[0] = sema->pipe[1] = -1;
+}
+
/* get a token */
void VG_(sema_down)(vg_sema_t *sema)
--- valgrind/coregrind/linux/core_os.h #1.3:1.4
@@ -100,4 +100,7 @@ extern Int __futex_up_slow(vg_sema_t *);
void VG_(sema_init)(vg_sema_t *);
+static inline void VG_(sema_deinit)(vg_sema_t *)
+{
+}
static inline void VG_(sema_down)(vg_sema_t *futx)
@@ -141,4 +144,5 @@ typedef struct {
void VG_(sema_init)(vg_sema_t *);
+void VG_(sema_deinit)(vg_sema_t *);
void VG_(sema_down)(vg_sema_t *sema);
void VG_(sema_up)(vg_sema_t *sema);
|
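[Editor's note: the pipe-based semaphore being fixed in the commit above can be illustrated with a minimal user-space sketch. The `demo_sema_*` names are hypothetical; the real code is the VG_(sema_*) family in coregrind/linux/sema.c, which additionally routes the fds through VG_(safe_fd).]

```c
#include <unistd.h>

typedef struct { int fd[2]; } demo_sema;

/* Create the pipe and prime it with one token byte, so the first
   sema_down succeeds immediately. */
static int demo_sema_init(demo_sema *s)
{
    if (pipe(s->fd) < 0)
        return -1;
    return write(s->fd[1], "T", 1) == 1 ? 0 : -1;
}

/* Take a token: blocks in read() until a byte is available. */
static int demo_sema_down(demo_sema *s)
{
    char c;
    return read(s->fd[0], &c, 1) == 1 ? 0 : -1;
}

/* Return a token. */
static int demo_sema_up(demo_sema *s)
{
    return write(s->fd[1], "T", 1) == 1 ? 0 : -1;
}

/* Close both ends, as the sema_deinit added in sema.c 1.3 does --
   which is why re-init after fork no longer leaks the old pipe. */
static void demo_sema_deinit(demo_sema *s)
{
    close(s->fd[0]);
    close(s->fd[1]);
    s->fd[0] = s->fd[1] = -1;
}
```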
|
From: Jeremy F. <je...@go...> - 2005-01-19 22:50:37
|
Hi all,

Is anyone using VALGRIND_MALLOCLIKE_BLOCK/FREELIKE_BLOCK? If so, how?

It seems to me that these requests are basically useless, because
there's no way for Valgrind to tell when your malloc implementation is
manipulating its metadata, and when the client is trashing it. If you
mark your memory regions returned by your malloc-like function,
Valgrind will complain when you touch memory near it with your
free-like function.

I think we need to add another pair of requests,
VALGRIND_ENTER_MALLOCLIKE_FUNCTION/LEAVE_MALLOCLIKE_FUNCTION, which
tell Valgrind not to complain about touching memory outside of blocks
which have been declared MALLOCLIKE.

J
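[Editor's note: for context, this is roughly how the existing request pair is used by a custom allocator. The bump allocator below is illustrative; the no-op fallback macros stand in for <valgrind/valgrind.h> so the sketch builds without Valgrind installed. This pool keeps no metadata adjacent to returned blocks, which is exactly the case that avoids the complaint described above.]

```c
#include <stdlib.h>

/* In a real build these come from <valgrind/valgrind.h>; defined as
   no-ops here so the sketch also compiles without Valgrind. */
#ifndef VALGRIND_MALLOCLIKE_BLOCK
# define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) ((void)0)
# define VALGRIND_FREELIKE_BLOCK(addr, rzB) ((void)0)
#endif

static char pool[4096];
static size_t pool_off;

/* Bump allocator: tell Memcheck each returned chunk is heap-like. */
static void *pool_alloc(size_t n)
{
    if (pool_off + n > sizeof pool)
        return NULL;
    void *p = pool + pool_off;
    pool_off += n;
    VALGRIND_MALLOCLIKE_BLOCK(p, n, /*rzB=*/0, /*is_zeroed=*/0);
    return p;
}

static void pool_free(void *p)
{
    /* An allocator that touched headers or free-list links stored
       next to p would trigger the complaints the message describes. */
    VALGRIND_FREELIKE_BLOCK(p, 0);
}
```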
|
From: Jeremy F. <je...@go...> - 2005-01-19 21:33:34
|
I've been thinking about how to restore the pthreads functionality which
was lost as a result of the recent threading changes.
It seems to me that the only feasible approach is to wrap the standard
libpthread functions to generate a stream of events, and use that to
maintain an abstract model of the state of the threads, locks, etc.
The downside of this parallel model-keeping is that if it gets out of
sync with the real state of the threads library, it will start reporting
bogus errors (or missing real errors). I think, however, that it is a
vast improvement over the outright functional bugs (and maintenance
problems) which vg_libpthread had. And certainly better than not
reporting anything as we do now.
General function wrapping would be useful in other places too. For
example, we could wrap libc malloc rather than implementing our own. Or
we could provide a facility for clients to install their own wrappers.
So, how to wrap functions? Function wrapping basically requires
intercepting a pair of edges in the program's control flow graph, and
breaking each of them in two:
Normal: Wrapped:
------ R------ ------ R-----
| ^ ==> | ^
V | V |
S--------- B- A-
v ^
S----------
Key: S - subroutine
R - return address
B - before wrapper
A - after wrapper
I think the basic requirements are:
* the "before" function has access to all of the subroutine's
arguments
* the "after" function has access to the return value
* some state is passed between "before" and "after" so that
matching operations can be performed
* the mechanism can cope with wrapping any function with
call/return semantics and a single entrypoint
* it can cope with varargs
* it can cope with unknown numbers of parameters
* it can cope with recursion
* it can cope with multithreading
* wrapped functions can call other wrapped functions
Another wart is that functions can finish without returning to their
caller if they use longjmp/exceptions.
(Note that the existing mechanism interception is much simpler than
this, since it just redirects one edge of the CFG, and doesn't have to
worry about returns at all. There isn't much overlap in functionality.)
So, how to implement this?
An obvious way is how you'd do it in C:
int wrap_foo(int a, int b, struct bar c)
{
int ret;
void *cookie;
cookie = before_foo(a, b, c);
ret = foo(a, b, c);
after_foo(cookie, ret);
return ret;
}
The trouble with this is that it requires knowing in advance how many
arguments the function has, and then copying them for the calls to
before_foo() and foo(). It doesn't work for varargs functions unless
you can work out how many args there are (by parsing the printf format
string, for example).
So that's out.
[ From here on, I'm handwaving and thinking out loud. ]
We could take advantage of the codegen. If we're generating code for
the first basic block of a wrapped function, we could generate in the
preamble:
call wrap_before_func
wrap_before_func would then be able to inspect %ESP and get both the
args and the return address. The value of TID+ESP+RETADDR will give us
a unique cookie key to match the call to the return.
Using this, the wrap_before_func can install a hook at the beginning of
the basic block at RETADDR (point 'R' in the diagram above), which does:
call wrap_after_func
wrap_after_func gets to see the return value in %EAX, and can use TID
+ESP+EIP to generate the key to find the cookie value generated by
wrap_before_func; once used, the cookie is deleted so that the "after"
wrapper is only called once (consider the case of where the return BB
address is also the head of a loop).
Inserting the call to wrap_after_func at R is very easy; it doesn't even
require regenerating the BB. Currently, the first 16 bytes of each BB
is a preamble which is solely concerned with decrementing and testing
VG_(dispatch_ctr); we can easily do this in wrap_after_func, so we can
just patch over the preamble with the call to wrap_after_func (and nop
out the rest).
Another subtle point is what if a particular basic block is both the
start of a wrapped function and the target of a wrapped function return.
It shouldn't happen in normal code, but it could happen. This is easily
dealt with; the resulting preamble would look like:
call wrap_after_func
call wrap_before_func
rest of BB...
OK, so that's normal call-return: how to deal with longjmp/exceptions?
Well, we could just ignore it. If you call a wrapped function, and it
longjmps back, it means that the "before" function is called but not the
"after", and the cookie store fills up with junk. That's not optimal.
One thing to note is that everything below %ESP is, by definition,
undefined, and so if %ESP for a particular TID moves above the TID+ESP
encoded in a cookie, that cookie becomes invalid, (or, effectively,
returned). We can call an wrap_after_func variant to indicate that a
function returned with longjmp/exception rather than normally. This
runs into the old problem posed by user-space threading libraries, since
we would need to be able to distinguish between a switching stacks and a
normal return/longjmp.
If we don't explicitly track every ESP change, we can still periodically
sweep through the cookie list and mop up anything which has become
stale.
You know, that all looks pretty sound to me. Somewhat complex, but not
deeply intrusive. It would need:
1. Machinery for registering wrappers - it can probably make use of
the existing intercept machinery.
2. Generate the call to wrap_before_func. vg_from_ucode would do
this as part of generating the BB preamble; it will know that a
function needs to be wrapped at codegen time (obviously you need
to declare a function is to be wrapped before its first called,
though you could invalidate the TC).
3. Generate the call to wrap_after_func, just by overwriting the
standard preamble.
4. Implement a cookie list: just a skiplist. To implement
longjmp/exceptions, it needs to be searchable with a partial
key.
5. Implement wrap_before/after_func - they'll be called from
generated code, and will have non-standard calling convention,
so they'll probably be in assembler. But they would call C code
to do all the real work.
6. Hook into ESP tracking to detect longjmps (this is potentially
very expensive, so maybe it should be an option).
7. Housekeeping to mop up stale cookies, either because we're not
doing ESP tracking or because a thread exits.
Comments? What have I forgotten?
J
|
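[Editor's note: the TID+ESP+RETADDR cookie matching proposed above can be sketched as a tiny store. A fixed array stands in for the skiplist of point 4, and a single `unsigned long` key stands in for the combined TID+ESP+RETADDR value; all names are illustrative.]

```c
#include <stddef.h>

typedef struct {
    unsigned long key;   /* stands in for TID+ESP+RETADDR */
    void *cookie;        /* whatever the "before" wrapper produced */
    int used;
} wrap_slot;

#define MAX_SLOTS 64
static wrap_slot slots[MAX_SLOTS];

/* Called on function entry: remember the cookie under its key. */
static int wrap_before(unsigned long key, void *cookie)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (!slots[i].used) {
            slots[i].key = key;
            slots[i].cookie = cookie;
            slots[i].used = 1;
            return 0;
        }
    }
    return -1;   /* store full: stale cookies need mopping up */
}

/* Called at the return address: find and delete the cookie, so the
   "after" hook fires only once even if the return BB is also the
   head of a loop. */
static void *wrap_after(unsigned long key)
{
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (slots[i].used && slots[i].key == key) {
            slots[i].used = 0;
            return slots[i].cookie;
        }
    }
    return NULL;   /* no matching call: longjmp'd past, or consumed */
}
```

Deleting on lookup is what makes recursion and threads work: each nested or concurrent call gets its own (key, cookie) pair, and the longjmp case shows up as cookies that are never consumed.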
|
From: Tom H. <th...@cy...> - 2005-01-19 17:15:41
|
Nightly build on standard ( Red Hat 7.2 ) started at 2005-01-19 03:00:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

cpuid: valgrind --num-callers=4 ./cpuid
dastest: valgrind --num-callers=4 ./dastest
fpu_lazy_eflags: valgrind --num-callers=4 ./fpu_lazy_eflags
insn_basic: valgrind --num-callers=4 ./insn_basic
insn_cmov: valgrind --num-callers=4 ./insn_cmov
insn_fpu: valgrind --num-callers=4 ./insn_fpu
insn_mmx: valgrind --num-callers=4 ./insn_mmx
insn_mmxext: valgrind --num-callers=4 ./insn_mmxext
insn_sse: valgrind --num-callers=4 ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind --num-callers=4 ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
Could not read `yield.stderr.exp'
make: *** [regtest] Error 2
|
From: Tom H. <th...@cy...> - 2005-01-19 12:09:26
|
Nightly build on audi ( Red Hat 9 ) started at 2005-01-19 03:15:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

cpuid: valgrind --num-callers=4 ./cpuid
dastest: valgrind --num-callers=4 ./dastest
fpu_lazy_eflags: valgrind --num-callers=4 ./fpu_lazy_eflags
insn_basic: valgrind --num-callers=4 ./insn_basic
insn_cmov: valgrind --num-callers=4 ./insn_cmov
insn_fpu: valgrind --num-callers=4 ./insn_fpu
insn_mmx: valgrind --num-callers=4 ./insn_mmx
insn_mmxext: valgrind --num-callers=4 ./insn_mmxext
insn_sse: valgrind --num-callers=4 ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind --num-callers=4 ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
Could not read `yield.stderr.exp'
make: *** [regtest] Error 2
|
From: Jeremy F. <je...@go...> - 2005-01-19 10:33:40
|
On Tue, 2005-01-18 at 22:49 +0100, Josef Weidendorfer wrote:
> On Monday 17 January 2005 23:47, Jeremy Fitzhardinge wrote:
>> On Tue, 2005-01-18 at 09:12 +1100, Eyal Lebedinsky wrote:
>>> abort.log is a copy+paste off my xterm of the abort I had last night.
>>
>> It just quietly died with SIGSEGV? And then kept doing that once it
>> started? Very odd.
>
> Is this on Suse 9.2 ?
> Sometimes on Suse 9.2 (every kernel until now, currently 2.6.8-24.10) the
> kernel starts to give back the wrong faulting address to the Segfault handler
> (always address 0 instead of the real one). This of course kills valgrind.
> This behaviour is on user basis. Strangely, if you log out and in again it
> works again...

I've noticed that with the stock 2.6.10 FC3 kernel as well. It fixed
itself after a short period of time, which doesn't make me feel any
better... The kernel.org kernels seem fine.

J
|
From: Josef W. <Jos...@gm...> - 2005-01-19 10:13:29
|
On Monday 17 January 2005 23:47, Jeremy Fitzhardinge wrote:
> On Tue, 2005-01-18 at 09:12 +1100, Eyal Lebedinsky wrote:
>> abort.log is a copy+paste off my xterm of the abort I had last night.
>
> It just quietly died with SIGSEGV? And then kept doing that once it
> started? Very odd.

Is this on Suse 9.2 ?

Sometimes on Suse 9.2 (every kernel until now, currently 2.6.8-24.10)
the kernel starts to give back the wrong faulting address to the
Segfault handler (always address 0 instead of the real one). This of
course kills valgrind. This behaviour is on user basis. Strangely, if
you log out and in again it works again...

Josef
|
From: Jeremy F. <je...@go...> - 2005-01-19 09:42:09
|
CVS commit by fitzhardinge:
Close the old semaphore pipe before creating a new one.
BUG: 97407
M +4 -0 sema.c 1.2
--- valgrind/coregrind/linux/sema.c #1.1:1.2
@@ -58,4 +58,8 @@ void VG_(sema_init)(vg_sema_t *sema)
void VG_(sema_init)(vg_sema_t *sema)
{
+ if (sema->pipe[0] >= VG_(fd_hard_limit)) {
+ VG_(close)(sema->pipe[0]);
+ VG_(close)(sema->pipe[1]);
+ }
VG_(pipe)(sema->pipe);
sema->pipe[0] = VG_(safe_fd)(sema->pipe[0]);
|
|
From: Jeremy F. <je...@go...> - 2005-01-19 09:31:17
|
CVS commit by fitzhardinge:
Some ioctls create new memory mappings. Unless Valgrind has special
support for these ioctls, it doesn't know about these new mappings,
and gets confused. This change adds --weird-hacks=ioctl-mmap, which
makes Valgrind search /proc/self/maps after every unknown ioctl,
looking for new mappings (or unmappings).
M +4 -0 coregrind/core.h 1.71
M +1 -1 coregrind/vg_main.c 1.239
M +101 -1 coregrind/vg_memory.c 1.87
M +8 -2 coregrind/vg_syscalls.c 1.237
M +1 -1 none/tests/cmdline1.stdout.exp 1.9
M +1 -1 none/tests/cmdline2.stdout.exp 1.9
--- valgrind/coregrind/vg_main.c #1.238:1.239
@@ -1530,5 +1530,5 @@ void usage ( Bool debug_help )
" uncommon user options for all Valgrind tools:\n"
" --run-libc-freeres=no|yes free up glibc memory at exit? [yes]\n"
-" --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]\n"
+" --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]\n"
" --lowlat-signals=no|yes improve thread signal wake-up latency [no]\n"
" --lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]\n"
--- valgrind/coregrind/vg_memory.c #1.86:1.87
@@ -933,4 +933,105 @@ void *VG_(shadow_alloc)(UInt size)
}
+
+/*--------------------------------------------------------------------*/
+/*--- Sync maps ---*/
+/*--------------------------------------------------------------------*/
+
+/* Search /proc/self/maps looking for changes which aren't reflected
+ in the segment list */
+
+static Segment *next_segment;
+
+static void sync_maps(Addr addr, SizeT len, UInt prot,
+ UInt dev, UInt ino, ULong foff, const UChar *filename)
+{
+ static const Bool debug = 0;
+
+ Addr end = addr+len;
+ Segment *seg, *first, *last;
+ UInt flags = (addr < VG_(client_end)) ? 0 : SF_VALGRIND;
+
+ seg = next_segment;
+
+ if (debug)
+ VG_(printf)("SYNC: map %p-%p\n", addr, end);
+
+ /* Traverse any segments which are before this mapping... */
+ first = seg;
+ while(seg && (seg->addr < addr))
+ seg = VG_(next_segment)(seg);
+
+ /* ...and remove them */
+ if (first && first->addr < addr) {
+ if (debug)
+ VG_(printf)("SYNC: removing %p-%p\n", first->addr, addr-first->addr);
+ VG_(unmap_range)(first->addr, addr - first->addr);
+ VG_TRACK( die_mem_munmap, first->addr, addr-first->addr );
+
+ seg = VG_(find_segment_after)(addr);
+ }
+
+ if (seg == NULL || end <= seg->addr) {
+ /* floating mapping with no segments */
+ if (debug)
+ VG_(printf)("SYNC: inserting %p-%p %s\n", addr, end, VG_(prot_str)(prot));
+ VG_(map_file_segment)(addr, len, prot, flags | SF_MMAP, dev, ino, foff, filename);
+
+ VG_TRACK ( new_mem_mmap, addr, len,
+ prot & VKI_PROT_READ, prot & VKI_PROT_WRITE, prot & VKI_PROT_EXEC );
+
+ if (addr >= VG_(client_end) && VG_(clo_pointercheck)) {
+ VG_(message)(Vg_UserMsg, "Warning: inserted mapping at %p-%p, but it is",
+ addr, addr+len);
+ VG_(message)(Vg_UserMsg, " inaccessible because pointer-checking is enabled "
+ "(expect a SIGSEGV)");
+ }
+ }
+
+ /* traverse segments covering mapping */
+ for(last = NULL; seg && seg->addr < end;
+ last = seg, seg = VG_(next_segment)(seg)) {
+ if (last && (last->addr+last->len) > addr && (last->addr+last->len) != seg->addr) {
+ /* gap fill */
+ if (debug)
+ VG_(printf)("SYNC: gap-fill %p-%p\n", last->addr+last->len, seg->addr);
+ VG_(map_file_segment)(last->addr+last->len, seg->addr - (last->addr+last->len),
+ prot, flags | SF_MMAP, dev, ino, foff, filename);
+ last = VG_(find_segment_containing)(last->addr+last->len);
+ }
+ seg->prot = prot;
+ }
+
+ next_segment = seg;
+}
+
+void VG_(sync_segments)(void)
+{
+ static const Bool debug = 0;
+ Segment *seg;
+
+ next_segment = VG_(first_segment)();
+
+ VG_(parse_procselfmaps)(sync_maps);
+
+ if (next_segment != NULL) {
+ /* Found some segments after the end of the mappings */
+ Addr first, last;
+
+ first = next_segment->addr;
+ last = next_segment->addr + next_segment->len;
+
+ for(seg = next_segment; seg; seg = VG_(next_segment)(seg))
+ last = seg->addr + seg->len;
+
+ if (debug)
+ VG_(printf)("SYNC: remove tail %p-%p\n", first, last);
+ VG_(unmap_range)(first, last-first);
+ }
+
+ if (debug)
+ VG_(sanity_check_memory)();
+}
+
/*--------------------------------------------------------------------*/
/*--- Sanity checking ---*/
@@ -957,5 +1058,4 @@ const Char *VG_(prot_str)(UInt prot)
}
-static Segment *next_segment;
static Bool segment_maps_ok;
static Addr prevmapstart, prevmapend;
--- valgrind/coregrind/vg_syscalls.c #1.236:1.237
@@ -4147,4 +4147,10 @@ POST(sys_ioctl)
&& arg3 != (Addr)NULL)
VG_TRACK( post_mem_write,arg3, size);
+
+ if (VG_(strstr)(VG_(clo_weird_hacks), "ioctl-mmap") != NULL) {
+ /* ioctls may spontaneously create memory mappings, so go
+ search for them */
+ VG_(sync_segments)();
+ }
break;
}
--- valgrind/coregrind/core.h #1.70:1.71
@@ -1259,4 +1259,8 @@ extern REGPARM(1)
void VG_(unknown_SP_update) ( Addr new_SP );
+/* Search /proc/self/maps for changes which aren't reflected in the
+ segment list */
+extern void VG_(sync_segments)(void);
+
/* Check vg_memory structures for sanity */
extern Bool VG_(sanity_check_memory)(void);
--- valgrind/none/tests/cmdline1.stdout.exp #1.8:1.9
@@ -14,5 +14,5 @@
uncommon user options for all Valgrind tools:
--run-libc-freeres=no|yes free up glibc memory at exit? [yes]
- --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]
+ --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--lowlat-signals=no|yes improve thread signal wake-up latency [no]
--lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]
--- valgrind/none/tests/cmdline2.stdout.exp #1.8:1.9
@@ -14,5 +14,5 @@
uncommon user options for all Valgrind tools:
--run-libc-freeres=no|yes free up glibc memory at exit? [yes]
- --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls [none]
+ --weird-hacks=hack1,hack2,... recognised hacks: lax-ioctls,ioctl-mmap [none]
--lowlat-signals=no|yes improve thread signal wake-up latency [no]
--lowlat-syscalls=no|yes improve thread syscall wake-up latency [no]
|
|
From: Jeremy F. <je...@go...> - 2005-01-19 09:31:07
|
CVS commit by fitzhardinge:
Add a flag so that internal errors are reported properly.
M +3 -0 core.h 1.70
M +8 -0 vg_scheduler.c 1.214
M +10 -9 vg_signals.c 1.113
--- valgrind/coregrind/core.h #1.69:1.70
@@ -803,4 +803,7 @@ void VG_(save_thread_state) ( ThreadId t
void VG_(load_thread_state) ( ThreadId tid );
+/* If true, a fault is Valgrind-internal (ie, a bug) */
+extern Bool VG_(my_fault);
+
/* The red-zone size which we put at the bottom (highest address) of
thread stacks, for paranoia reasons. This can be arbitrary, and
--- valgrind/coregrind/vg_signals.c #1.112:1.113
@@ -1738,14 +1738,15 @@ void vg_sync_signalhandler ( Int sigNo,
}
+ if (!VG_(my_fault)) {
/* Can't continue; must longjmp back to the scheduler and thus
enter the sighandler immediately. */
VG_(deliver_signal)(tid, info);
VG_(resume_scheduler)(tid);
+ }
-
-
- /* If resume_scheduler returns, it means we don't have longjmp
- set up, implying that we weren't running client code, and
- therefore it was actually generated by Valgrind internally.
+ /* If resume_scheduler returns or it's our fault, it means we
+ don't have longjmp set up, implying that we weren't running
+ client code, and therefore it was actually generated by
+ Valgrind internally.
*/
VG_(message)(Vg_DebugMsg,
--- valgrind/coregrind/vg_scheduler.c #1.213:1.214
@@ -73,4 +73,7 @@
ThreadState VG_(threads)[VG_N_THREADS];
+/* If true, a fault is Valgrind-internal (ie, a bug) */
+Bool VG_(my_fault) = True;
+
/* The tid of the thread currently in VG_(baseBlock). */
static ThreadId vg_tid_currently_in_baseBlock = VG_INVALID_THREADID;
@@ -523,6 +526,11 @@ UInt run_thread_for_a_while ( ThreadId t
//VG_(printf)("running EIP = %p ESP=%p\n", VG_(threads)[tid].arch.m_eip, VG_(threads)[tid].arch.m_esp);
+ vg_assert(VG_(my_fault));
+ VG_(my_fault) = False;
+
SCHEDSETJMP(tid, jumped, trc = VG_(run_innerloop)());
+ VG_(my_fault) = True;
+
if (jumped) {
/* We get here if the client took a fault, which caused our
|
|
From: Nicholas N. <nj...@ca...> - 2005-01-19 04:15:32
|
On Tue, 18 Jan 2005, Jeremy Fitzhardinge wrote:
> CVS commit by fitzhardinge:
>
> Make the use of the skiplist find functions a bit clearer, which extends
> to the segment-finding functions.

Ah, nice one! That's a big improvement.

N
|
|
From: <js...@ac...> - 2005-01-19 03:56:35
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2005-01-19 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
== 193 tests, 15 stderr failures, 1 stdout failure =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
none/tests/yield (stdout)
make: *** [regtest] Error 1
|
|
From: Tom H. <to...@co...> - 2005-01-19 03:24:23
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2005-01-19 03:20:04 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
yield: valgrind --num-callers=4 ./yield
*** yield failed (stdout) ***
-- Finished tests in none/tests ----------------------------------------
== 200 tests, 12 stderr failures, 1 stdout failure =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/yield (stdout)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-01-19 03:14:00
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2005-01-19 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
yield: valgrind --num-callers=4 ./yield
*** yield failed (stdout) ***
-- Finished tests in none/tests ----------------------------------------
== 198 tests, 12 stderr failures, 1 stdout failure =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
none/tests/yield (stdout)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2005-01-19 03:09:21
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2005-01-19 03:05:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests ----------------------------------------
== 198 tests, 14 stderr failures, 1 stdout failure =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/post-syscall (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/yield (stdout)
make: *** [regtest] Error 1
|
|
From: Jeremy F. <je...@go...> - 2005-01-19 03:05:00
|
On Tue, 2005-01-18 at 09:12 +1100, Eyal Lebedinsky wrote:
> uninited.log is the log I get this morning with the error (but no crash).

I just checked in a fix for this. If wait*() was interrupted by SIGCHLD,
Valgrind would fail to correctly note that *status had been written to.
It shouldn't have caused any functional problems though.

J
|
|
From: Jeremy F. <je...@go...> - 2005-01-18 21:29:50
|
CVS commit by fitzhardinge:
Make the use of the skiplist find functions a bit clearer, which extends
to the segment-finding functions.
M +4 -5 coregrind/core.h 1.69
M +6 -4 coregrind/vg_main.c 1.238
M +29 -20 coregrind/vg_memory.c 1.86
M +3 -2 coregrind/vg_scheduler.c 1.213
M +3 -16 coregrind/vg_signals.c 1.112
M +26 -1 coregrind/vg_skiplist.c 1.8
M +7 -7 coregrind/vg_symtab2.c 1.99
M +8 -7 coregrind/vg_syscalls.c 1.236
M +2 -5 coregrind/vg_translate.c 1.95
M +12 -3 coregrind/linux/syscalls.c 1.7
M +2 -3 coregrind/x86/signal.c 1.11
M +2 -2 coregrind/x86-linux/syscalls.c 1.17
M +7 -3 include/tool.h.base 1.20
M +1 -0 include/x86-linux/vki_arch.h 1.12
--- valgrind/coregrind/vg_skiplist.c #1.7:1.8
@@ -340,5 +340,6 @@ static SkipNode *SkipList__Find(const Sk
}
-void *VG_(SkipList_Find)(const SkipList *l, void *k)
+/* Return list element which is <= k, or NULL if there is none. */
+void *VG_(SkipList_Find_Before)(const SkipList *l, void *k)
{
SkipNode *n = SkipList__Find(l, k, NULL);
@@ -349,4 +350,28 @@ void *VG_(SkipList_Find)(const SkipList
}
+/* Return the list element which == k, or NULL if none */
+void *VG_(SkipList_Find_Exact)(const SkipList *l, void *k)
+{
+ SkipNode *n = SkipList__Find(l, k, NULL);
+
+ if (n != NULL && (l->cmp)(key_of_node(l, n), k) == 0)
+ return data_of_node(l, n);
+ return NULL;
+}
+
+/* Return the list element which is >= k, or NULL if none */
+void *VG_(SkipList_Find_After)(const SkipList *l, void *k)
+{
+ SkipNode *n = SkipList__Find(l, k, NULL);
+
+ if (n != NULL && (l->cmp)(key_of_node(l, n), k) < 0)
+ n = n->next[0];
+
+ if (n != NULL)
+ return data_of_node(l, n);
+
+ return NULL;
+}
+
void VG_(SkipList_Insert)(SkipList *l, void *data)
{
--- valgrind/coregrind/core.h #1.68:1.69
@@ -1235,13 +1235,12 @@ extern void VG_(mprotect_range)(Addr add
extern Addr VG_(find_map_space)(Addr base, SizeT len, Bool for_client);
-/* Find the segment containing or before 'a', or NULL if there isn't
- one. Would be better named "find_segment_before". */
-extern Segment *VG_(find_segment)(Addr a);
+/* Find the segment containing or before 'a', or NULL if there isn't one. */
+extern Segment *VG_(find_segment_before)(Addr a);
/* Find the segment containing or after 'a', or NULL if there isn't one. */
extern Segment *VG_(find_segment_after)(Addr a);
-/* Find the segment returning exactly 'a'. */
-extern Segment *VG_(find_segment_exact)(Addr a);
+/* Find the segment containing 'a', or NULL if there isn't one. */
+extern Segment *VG_(find_segment_containing)(Addr a);
extern Segment *VG_(first_segment)(void);
--- valgrind/coregrind/vg_memory.c #1.85:1.86
@@ -126,5 +126,5 @@ static inline Segment *allocseg()
Segment *VG_(split_segment)(Addr a)
{
- Segment *s = VG_(SkipList_Find)(&sk_segments, &a);
+ Segment *s = VG_(SkipList_Find_Before)(&sk_segments, &a);
Segment *ns;
Int delta;
@@ -191,5 +191,5 @@ void VG_(unmap_range)(Addr addr, SizeT l
vg_assert((len & (VKI_PAGE_SIZE-1)) == 0);
- for(s = VG_(SkipList_Find)(&sk_segments, &addr);
+ for(s = VG_(SkipList_Find_Before)(&sk_segments, &addr);
s != NULL && s->addr < (addr+len);
s = next) {
@@ -312,5 +312,5 @@ static void merge_segments(Addr a, SizeT
len += VKI_PAGE_SIZE;
- for(s = VG_(SkipList_Find)(&sk_segments, &a);
+ for(s = VG_(SkipList_Find_Before)(&sk_segments, &a);
s != NULL && s->addr < (a+len);) {
next = VG_(SkipNode_Next)(&sk_segments, s);
@@ -353,5 +353,5 @@ void VG_(map_file_segment)(Addr addr, Si
/* First look to see what already exists around here */
- s = VG_(SkipList_Find)(&sk_segments, &addr);
+ s = VG_(find_segment_containing)(addr);
if (s != NULL && s->addr == addr && s->len == len) {
@@ -500,5 +500,5 @@ void VG_(mprotect_range)(Addr a, SizeT l
if (debug) {
- s = VG_(find_segment)(a);
+ s = VG_(find_segment_before)(a);
VG_(printf)(" split: s1=%p-%p s2=%p-%p s(%p)=%p-%p\n",
s1 ? s1->addr : 0, s1 ? (s1->addr+s1->len) : 0,
@@ -508,5 +508,5 @@ void VG_(mprotect_range)(Addr a, SizeT l
}
- for(s = VG_(find_segment)(a);
+ for(s = VG_(find_segment_before)(a);
s != NULL && s->addr < a+len;
s = next)
@@ -553,5 +553,5 @@ Addr VG_(find_map_space)(Addr addr, Size
ret, ret+len, for_client);
- s = VG_(SkipList_Find)(&sk_segments, &ret);
+ s = VG_(SkipList_Find_Before)(&sk_segments, &ret);
if (s == NULL)
s = VG_(SkipNode_First)(&sk_segments);
@@ -601,5 +601,5 @@ Addr VG_(find_map_space)(Addr addr, Size
void VG_(pad_address_space)(Addr start)
{
- Addr addr = start == 0 ? VG_(client_base) : start;
+ Addr addr = (start == 0) ? VG_(client_base) : start;
Segment *s = VG_(find_segment_after)(addr);
Addr ret;
@@ -608,5 +608,6 @@ void VG_(pad_address_space)(Addr start)
if (addr < s->addr) {
PLATFORM_DO_MMAP(ret, addr, s->addr - addr, 0,
- VKI_MAP_FIXED | VKI_MAP_PRIVATE | VKI_MAP_ANONYMOUS,
+ VKI_MAP_FIXED | VKI_MAP_PRIVATE |
+ VKI_MAP_ANONYMOUS | VKI_MAP_NORESERVE,
-1, 0);
}
@@ -629,6 +630,6 @@ void VG_(pad_address_space)(Addr start)
void VG_(unpad_address_space)(Addr start)
{
- Addr addr = start == 0 ? VG_(client_base) : start;
- Segment *s = VG_(find_segment)(addr);
+ Addr addr = (start == 0) ? VG_(client_base) : start;
+ Segment *s = VG_(find_segment_after)(addr);
Int ret;
@@ -649,16 +650,17 @@ void VG_(unpad_address_space)(Addr start
}
-Segment *VG_(find_segment)(Addr a)
+Segment *VG_(find_segment_before)(Addr a)
{
- return VG_(SkipList_Find)(&sk_segments, &a);
+ return VG_(SkipList_Find_Before)(&sk_segments, &a);
}
/* Return the segment starting at exactly address 'a' */
-Segment *VG_(find_segment_exact)(Addr a)
+Segment *VG_(find_segment_containing)(Addr a)
{
- Segment *seg = VG_(find_segment)(a);
- if (seg && seg->addr != a)
+ Segment *seg = VG_(find_segment_before)(a);
+
+ if (seg && ((a < seg->addr) || (seg->addr + seg->len) <= a))
seg = NULL;
- return NULL;
+ return seg;
}
@@ -666,5 +668,12 @@ Segment *VG_(find_segment_exact)(Addr a)
Segment *VG_(find_segment_after)(Addr a)
{
- Segment *seg = VG_(find_segment)(a);
+ Segment *seg = VG_(find_segment_before)(a);
+
+ if (seg == NULL) {
+ // If there's nothing before the address, then the next segment
+ // is the first
+ seg = VG_(first_segment)();
+ }
+
while (seg && a >= (seg->addr+seg->len))
seg = VG_(next_segment)(seg);
@@ -764,5 +773,5 @@ Bool VG_(is_addressable)(Addr p, SizeT s
return False;
- for(seg = VG_(find_segment)(p);
+ for(seg = VG_(find_segment_containing)(p);
size > 0 &&
seg &&
@@ -807,5 +816,5 @@ Addr VG_(client_alloc)(Addr addr, SizeT
void VG_(client_free)(Addr addr)
{
- Segment *s = VG_(find_segment)(addr);
+ Segment *s = VG_(find_segment_containing)(addr);
if (s == NULL || s->addr != addr || !(s->flags & SF_CORE)) {
--- valgrind/coregrind/vg_scheduler.c #1.212:1.213
@@ -383,5 +383,6 @@ void VG_(exit_thread)(ThreadId tid)
the stack after thread death... */
if (0 && VG_(threads)[tid].stack_base) {
- Segment *seg = VG_(find_segment)( VG_(threads)[tid].stack_base );
+ Segment *seg = VG_(find_segment_containing)( VG_(threads)[tid].stack_base );
+ if (seg)
VG_TRACK( die_mem_stack, seg->addr, seg->len );
}
--- valgrind/coregrind/vg_main.c #1.237:1.238
@@ -2325,4 +2325,7 @@ static void build_valgrind_map_callback
SF_MMAP|SF_NOSYMS|SF_VALGRIND,
dev, ino, foffset, filename);
+ /* update VG_(valgrind_last) if it looks wrong */
+ if (start+size > VG_(valgrind_last))
+ VG_(valgrind_last) = start+size-1;
}
}
@@ -2365,5 +2368,5 @@ static void build_segment_map_callback (
if (start >= VG_(client_end) && start < VG_(valgrind_last)) {
- Segment *s = VG_(find_segment)(start);
+ Segment *s = VG_(find_segment_before)(start);
/* We have to be a bit careful about inserting new mappings into
@@ -2745,7 +2748,6 @@ int main(int argc, char **argv, char **e
/* Make sure this segment isn't treated as stack */
- seg = VG_(find_segment)(VG_(client_trampoline_code));
- if (seg && VG_(seg_contains)(seg, VG_(client_trampoline_code),
- VG_(trampoline_code_length)))
+ seg = VG_(find_segment_containing)(VG_(client_trampoline_code));
+ if (seg)
seg->flags &= ~(SF_STACK | SF_GROWDOWN);
}
--- valgrind/coregrind/vg_signals.c #1.111:1.112
@@ -1594,11 +1594,7 @@ Bool VG_(extend_stack)(Addr addr, UInt m
/* Find the next Segment above addr */
- seg = VG_(find_segment)(addr);
- if (seg == NULL)
- seg = VG_(first_segment)();
- else if (VG_(seg_contains)(seg, addr, sizeof(void *)))
+ seg = VG_(find_segment_after)(addr);
+ if (seg && VG_(seg_contains)(seg, addr, sizeof(void *)))
return True;
- else
- seg = VG_(next_segment)(seg);
/* If there isn't one, or it isn't growable, fail */
@@ -1675,16 +1671,7 @@ void vg_sync_signalhandler ( Int sigNo,
? VG_(baseBlock)[VGOFF_STACK_PTR]
: ARCH_STACK_PTR(VG_(threads)[tid].arch);
- Segment *seg;
-
- /* If the fault happened between segments, find the segment
- after the fault. This is because we want to see if we can
- grow this segment down to cover the fault address. */
- seg = VG_(find_segment)(fault);
- if (seg == NULL)
- seg = VG_(first_segment)();
- else if (seg->addr+seg->len <= fault)
- seg = VG_(next_segment)(seg);
if (VG_(clo_trace_signals)) {
+ Segment *seg = VG_(find_segment_containing)(fault);
if (seg == NULL)
VG_(message)(Vg_DebugMsg,
--- valgrind/coregrind/vg_symtab2.c #1.98:1.99
@@ -1323,5 +1323,5 @@ Bool vg_read_lib_symbols ( SegInfo* si )
si->start+newsz, newsz);
- for(seg = VG_(find_segment)(si->start);
+ for(seg = VG_(find_segment_containing)(si->start);
seg != NULL && VG_(seg_overlaps)(seg, si->start, si->size);
seg = VG_(next_segment)(seg)) {
@@ -1723,7 +1723,7 @@ static void search_all_symtabs ( Addr pt
VGP_PUSHCC(VgpSearchSyms);
- s = VG_(find_segment)(ptr);
+ s = VG_(find_segment_containing)(ptr);
- if (s == NULL || !VG_(seg_overlaps)(s, ptr, 0) || s->symtab == NULL)
+ if (s == NULL || s->symtab == NULL)
goto not_found;
@@ -2393,7 +2393,7 @@ static Bool resolve_redir(CodeRedirect *
{
- CodeRedirect *r = VG_(SkipList_Find)(&sk_resolved_redir, &redir->from_addr);
+ CodeRedirect *r = VG_(SkipList_Find_Exact)(&sk_resolved_redir, &redir->from_addr);
- if (r == NULL || r->from_addr != redir->from_addr)
+ if (r == NULL)
VG_(SkipList_Insert)(&sk_resolved_redir, redir);
else if (verbose_redir)
@@ -2493,7 +2493,7 @@ static void add_redirect_addr(const Char
Addr VG_(code_redirect)(Addr a)
{
- CodeRedirect *r = VG_(SkipList_Find)(&sk_resolved_redir, &a);
+ CodeRedirect *r = VG_(SkipList_Find_Exact)(&sk_resolved_redir, &a);
- if (r == NULL || r->from_addr != a)
+ if (r == NULL)
return a;
--- valgrind/coregrind/vg_syscalls.c #1.235:1.236
@@ -236,8 +236,8 @@ Addr mremap_segment ( Addr old_addr, Siz
return old_addr;
- seg = VG_(find_segment)(old_addr);
+ seg = VG_(find_segment_containing)(old_addr);
/* range must be contained within segment */
- if (seg == NULL || !VG_(seg_contains)(seg, old_addr, old_size))
+ if (seg == NULL)
return -VKI_EINVAL;
@@ -906,7 +906,8 @@ static Addr do_brk(Addr newbrk)
/* brk isn't allowed to grow over anything else */
- seg = VG_(find_segment)(VG_(brk_limit));
+ seg = VG_(find_segment_before)(VG_(brk_limit));
- vg_assert(seg != NULL);
+ if (seg == NULL)
+ return VG_(brk_limit); /* brk unmapped - no change */
if (0)
@@ -916,5 +917,5 @@ static Addr do_brk(Addr newbrk)
seg = VG_(next_segment)(seg);
- if (seg != NULL && newbrk > seg->addr)
+ if (seg != NULL && newbrk > seg->addr) /* brk crashes into next segment - no change */
return VG_(brk_limit);
@@ -2766,7 +2767,7 @@ POST(sys_ipc)
case 22: /* IPCOP_shmdt */
{
- Segment *s = VG_(find_segment)(arg5);
+ Segment *s = VG_(find_segment_containing)(arg5);
- if (s != NULL && (s->flags & SF_SHM) && VG_(seg_contains)(s, arg5, 1)) {
+ if (s != NULL && (s->flags & SF_SHM)) {
VG_TRACK( die_mem_munmap, s->addr, s->len );
VG_(unmap_range)(s->addr, s->len);
--- valgrind/coregrind/vg_translate.c #1.94:1.95
@@ -2468,5 +2468,5 @@ Bool VG_(translate) ( ThreadId tid, Addr
notrace_until_done = VG_(get_bbs_translated)() >= notrace_until_limit;
- seg = VG_(find_segment)(orig_addr);
+ seg = VG_(find_segment_containing)(orig_addr);
if (!debugging_translation)
@@ -2474,10 +2474,7 @@ Bool VG_(translate) ( ThreadId tid, Addr
if (seg == NULL ||
- !VG_(seg_contains)(seg, orig_addr, 1) ||
(seg->prot & (VKI_PROT_READ|VKI_PROT_EXEC)) == 0) {
/* Code address is bad - deliver a signal instead */
- vg_assert(!VG_(is_addressable)(orig_addr, 1, VKI_PROT_EXEC));
-
- if (seg != NULL && VG_(seg_contains)(seg, orig_addr, 1)) {
+ if (seg != NULL) {
vg_assert((seg->prot & VKI_PROT_EXEC) == 0);
VG_(synth_fault_perms)(tid, orig_addr);
--- valgrind/include/tool.h.base #1.19:1.20
@@ -1686,6 +1686,8 @@
/* List operations:
- SkipList_Find searchs a list. If it can't find an exact match, it either
- returns NULL or a pointer to the element before where k would go
+ SkipList_Find_* search a list. The 3 variants are:
+ Before: returns a node which is <= key, or NULL if none
+ Exact: returns a node which is == key, or NULL if none
+ After: returns a node which is >= key, or NULL if none
SkipList_Insert inserts a new element into the list. Duplicates are
forbidden. The element must have been created with SkipList_Alloc!
@@ -1693,5 +1695,7 @@
doesn't free the memory.
*/
-extern void *VG_(SkipList_Find) (const SkipList *l, void *key);
+extern void *VG_(SkipList_Find_Before) (const SkipList *l, void *key);
+extern void *VG_(SkipList_Find_Exact) (const SkipList *l, void *key);
+extern void *VG_(SkipList_Find_After) (const SkipList *l, void *key);
extern void VG_(SkipList_Insert)( SkipList *l, void *data);
extern void *VG_(SkipList_Remove)( SkipList *l, void *key);
--- valgrind/coregrind/x86/signal.c #1.10:1.11
@@ -356,5 +356,5 @@ static Bool extend(ThreadState *tst, Add
if (VG_(extend_stack)(addr, tst->stack_size)) {
- stackseg = VG_(find_segment)(addr);
+ stackseg = VG_(find_segment_containing)(addr);
if (0 && stackseg)
VG_(printf)("frame=%p seg=%p-%p\n",
@@ -362,6 +362,5 @@ static Bool extend(ThreadState *tst, Add
}
- if (stackseg == NULL ||
- !VG_(is_addressable)(addr, size, VKI_PROT_READ|VKI_PROT_WRITE)) {
+ if (stackseg == NULL || (stackseg->prot & (VKI_PROT_READ|VKI_PROT_WRITE)) == 0) {
VG_(message)(Vg_UserMsg,
"Can't extend stack to %p during signal delivery for thread %d:",
--- valgrind/coregrind/x86-linux/syscalls.c #1.16:1.17
@@ -323,6 +323,6 @@ static Int do_clone(ThreadId ptid,
assume that esp starts near its highest possible value, and can
only go down to the start of the mmaped segment. */
- seg = VG_(find_segment)((Addr)esp);
- if (VG_(seg_contains)(seg, (Addr)esp, sizeof(UInt))) {
+ seg = VG_(find_segment_containing)((Addr)esp);
+ if (seg) {
ctst->stack_base = seg->addr;
ctst->stack_size = (Addr)PGROUNDUP(esp) - seg->addr;
--- valgrind/coregrind/linux/syscalls.c #1.6:1.7
@@ -500,5 +500,11 @@ PRE(sys_io_setup, Special)
arg1*sizeof(struct vki_io_event));
addr = VG_(find_map_space)(0, size, True);
- VG_(map_segment)(addr, size, VKI_PROT_READ|VKI_PROT_EXEC, SF_FIXED);
+
+ if (addr == 0) {
+ set_result( -VKI_ENOMEM );
+ return;
+ }
+
+ VG_(map_segment)(addr, size, VKI_PROT_READ|VKI_PROT_WRITE, SF_FIXED);
VG_(pad_address_space)(0);
@@ -525,7 +531,10 @@ PRE(sys_io_setup, Special)
// know that we must look at the aio_ring structure because Tom inspected the
// kernel and glibc sources to see what they do, yuk.)
+//
+// XXX This segment can be implicitly unmapped when aio
+// file-descriptors are closed...
PRE(sys_io_destroy, Special)
{
- Segment *s = VG_(find_segment)(arg1);
+ Segment *s = VG_(find_segment_containing)(arg1);
struct vki_aio_ring *r;
SizeT size;
@@ -542,5 +551,5 @@ PRE(sys_io_destroy, Special)
set_result( VG_(do_syscall)(SYSNO, arg1) );
- if (res == 0 && s != NULL && VG_(seg_contains)(s, arg1, size)) {
+ if (res == 0 && s != NULL) {
VG_TRACK( die_mem_munmap, arg1, size );
VG_(unmap_range)(arg1, size);
--- valgrind/include/x86-linux/vki_arch.h #1.11:1.12
@@ -262,4 +262,5 @@ struct vki_sigcontext {
#define VKI_MAP_FIXED 0x10 /* Interpret addr exactly */
#define VKI_MAP_ANONYMOUS 0x20 /* don't use a file */
+#define VKI_MAP_NORESERVE 0x4000 /* don't check for reservations */
//----------------------------------------------------------------------
|
|
From: Jeremy F. <je...@go...> - 2005-01-18 21:25:53
|
CVS commit by fitzhardinge:
Add a self-running test, which is predicated on building with PIE.
A selfrun.c 1.1 [no copyright]
A selfrun.stderr.exp 1.1
A selfrun.stdout.exp 1.1
A selfrun.vgtest 1.1
M +3 -1 Makefile.am 1.59
--- valgrind/none/tests/Makefile.am #1.58:1.59
@@ -45,4 +45,5 @@
resolv.stderr.exp resolv.stdout.exp resolv.vgtest \
rlimit_nofile.stderr.exp rlimit_nofile.stdout.exp rlimit_nofile.vgtest \
+ selfrun.stderr.exp selfrun.stdout.exp selfrun.vgtest \
sem.stderr.exp sem.stdout.exp sem.vgtest \
semlimit.stderr.exp semlimit.stdout.exp semlimit.vgtest \
@@ -66,5 +67,5 @@
fucomip getseg \
munmap_exe map_unaligned map_unmap mq mremap rcrl readline1 \
- resolv rlimit_nofile sem semlimit sha1_test \
+ resolv rlimit_nofile selfrun sem semlimit sha1_test \
shortpush shorts sigcontext \
stackgrowth sigstackgrowth \
@@ -105,4 +106,5 @@
resolv_SOURCES = resolv.c
rlimit_nofile_SOURCES = rlimit_nofile.c
+selfrun_SOURCES = selfrun.c
sem_SOURCES = sem.c
semlimit_SOURCES = semlimit.c
|