From: Jeremy F. <je...@go...> - 2005-02-28 23:23:48
|
Julian Seward wrote:
>What happens if you try to create a chunk which straddles the boundary
>of some other chunk?
>
>
That would indicate a bug in your allocator, so it would report a
problem at the very least. The options would be:
1. ignore the request and do nothing
2. split the new chunk into properly formed pieces
3. remove conflicting chunks and let the new chunk stand unmolested
Not sure which would be the best option. 1 is definitely simplest.
>I'm loathe to allow more complexity like this into the code base without
>a clear customer-needs-this case. Although it's an interesting idea,
>I am very concerned with spiralling complexity and the resulting
>maintenance burden it gives. At some point we must stop adding
>functionality and instead concentrate on stabilising and making
>portable what we already have. And that point is now.
>
>If anything is to happen to functionality, I'd prefer to prune
>away little-used features to make V easier to maintain.
>
>
Yeah, I'd agree with that (ie: VALGRIND_MAKE_*).
The MALLOCLIKE/FREELIKE stuff clearly serves a need for anyone who has
their own allocator (Valgrind, for example), but I'm guessing they
haven't been used much. The mempool changes are presumably useful,
otherwise Robert wouldn't have bothered with them. I'm looking to
condense the normal heap stuff and the mempool stuff into a single
unified mechanism, and make that mechanism actually work in a useful way.
But I'm not planning on doing anything beyond bug-fixing for now (which
I consider the MAKE_* issue to be).
J
|
|
From: Julian S. <js...@ac...> - 2005-02-28 21:54:00
|
> It seems to me that if we allow heap chunks to be nested, it would give
> us a lot of extra power. I think it would subsume the mempool
> functionality in a more general way, and allow us to define sub-chunks of
> memory for all sorts of purposes.
>
> The semantics would be:
>
> 1. A new heap chunk is either not enclosed by another chunk, or is
>    completely contained by an outer chunk.
>      * It follows that if a chunk has sub-chunks, they're
>        completely contained.
> 2. If you free a chunk, it and all its subchunks are freed.
>
> 1 means that chunks are always properly nested. 2 defines the lifespan
> of a chunk (never longer than its containing chunks).

What happens if you try to create a chunk which straddles the boundary
of some other chunk?

> I'm not precisely sure what the semantics of leak-checking would be in
> the presence of nested chunks. It would depend on how they're being used:
>
> 1. If you have a memory pool, where all memory in the pool is freed
>    together, then the nested chunks should be scanned and marked if
>    the outer chunks are.
> 2. If you have an allocator which is carving a superblock into
>    independent allocations, then scanning and marking should avoid
>    sub-chunks; sub-chunks need to be explicitly referenced to be
>    found not leaked.

I'm loathe to allow more complexity like this into the code base without
a clear customer-needs-this case. Although it's an interesting idea,
I am very concerned with spiralling complexity and the resulting
maintenance burden it gives. At some point we must stop adding
functionality and instead concentrate on stabilising and making
portable what we already have. And that point is now.

If anything is to happen to functionality, I'd prefer to prune
away little-used features to make V easier to maintain.

J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 21:48:33
|
Julian Seward wrote:
>I think a better approach is to forget about Special. Let the
>pre-routine run. If it turns out to have assigned a result
>value already, stop at that point; else give it to the kernel.
>
>
Sure, that would work.
J
|
|
From: Julian S. <js...@ac...> - 2005-02-28 21:45:10
|
>>But what about setrlimit: it intercepts and completes RLIMIT_NOFILE,
>>RLIMIT_DATA and RLIMIT_STACK; but the rest are handed off to the kernel.
>>So a static Special/non-Special division doesn't seem to make
>>sense here, and setrlimit is not marked Special.
>
> It should probably be Special. It doesn't block, so there's no reason
> not to.

But the problem is that presence/absence of Special states unilaterally
for the call that it does/does not need to be given to the kernel.
And setrlimit is one place where that distinction is not black/white.

I think a better approach is to forget about Special. Let the
pre-routine run. If it turns out to have assigned a result
value already, stop at that point; else give it to the kernel.

The issue of whether the call may block if it is given to the kernel is
orthogonal -- that's what MayBlock is for.

J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 21:39:11
|
Julian Seward wrote:
>>Oh, I'd prefer it if it weren't thread state, but some direct return
>>value from PRE() - either via return, or a pointer argument (like flags).
>>
>>
>
>Fair enough -- what's the reason, though? Is there a potential
>race condition or something?
>
No, just to keep it relatively local. Stuff in ThreadState is
essentially a global variable. Nick did a good job of throwing a lot of
stuff out of ThreadState, and I've been doing that as much as possible too.
J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 21:37:13
|
Julian Seward wrote:
>>This code relies on the fact that SYSRES is an alias for SYSNO, so it
>>will always be >0 unless it has been set to something else. Also,
>>SYS_PRE_MEM_READ/WRITE will set SYSNO to EFAULT if a pointer argument is
>>pointing at bad memory.
>>
>>
>
>I guess so. It's not too clear to naive visitors, though.
>
>
No, but it works reasonably well in the context of PRE() functions. The
whole thing is a bit too neck-deep in macro-magic to be very straightforward.
>On amd64, both the syscall number and the syscall result live
>in RAX. And __NR_read == 0. So, we start with RAX = 0, call
>the pre-function, check RES (which is RAX) for <= 0, which it
>duly is, and conclude that the pre-function decided to error out
>the syscall, and so doesn't hand it off to the kernel. Result is
>that the client believes that the read got 0 bytes and is
>totally mystified.
>
>
Urk.
>Basically inspecting RES after the pre-function is not a reliable
>way to establish whether the pre-function assigned to it. The
>current code works only because (I think) on x86 no syscall number
>has the value 0, and the go-no-further test is RES <= 0.
>
>
This is essentially the same issue that the BSD/Mach/MacOS people have,
in that their syscall interface doesn't overload the return value with
the "error occurred" status. So we can probably fix both together.
And, yes, ia32 linux uses syscall 0 as a special magic call for
kernel-internal use only.
>
>But what about setrlimit: it intercepts and completes RLIMIT_NOFILE,
>RLIMIT_DATA and RLIMIT_STACK; but the rest are handed off to the kernel.
>So a static Special/non-Special division doesn't seem to make
>sense here, and setrlimit is not marked Special.
>
>
It should probably be Special. It doesn't block, so there's no reason
not to.
>>Well, the code is functionally correct on Linux because a result will
>>always be set to either ENOMEM or whatever PLATFORM_DO_MMAP sets it to.
>>
>>
>
>Functionally correct is good. Functionally _and_ manifestly correct
>to the casual reader is even better.
>
That would be nice. Fixing the portability would be nice too.
J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 21:18:41
|
The memcheck VALGRIND_MAKE_(READABLE|WRITABLE|NOACCESS) client calls
primarily change the V/A-bit state for a range of memory. As a side
effect, they also create an ID value which is returned, which identifies
that range of memory (a "general block").
I submit that these general blocks are not very useful, and even if they
are useful, they shouldn't be created by the MAKE_* requests.
The problems with general blocks are:
1. They are completely dissociated from the rest of the memory
management system. For example, if you create one inside a
malloced block, and then free that memory, the general block will
stay around, even if that memory later gets recycled.
2. They're not consistent with themselves. There's no attempt to
define what happens if they overlap; you can have multiple general
blocks at the same address and size.
3. Unless you remember the IDs and remember to remove them manually,
they keep building up, forming a permanent memory leak within
Valgrind.
4. (The implementation is pretty inefficient, using linear array
search for all operations. This is easy to fix.)
I generally use these requests to touch up the V/A bits on a piece of
memory, ignoring the return value. Until I actually looked at how they
worked, I didn't realize they actually create a piece of long-standing
state; I have quite a bit of code which will eventually explode under
the weight of general blocks.
Is anyone actually using the ID value returned by the MAKE_* requests?
Do we need general blocks at all? Can I remove them? If they are
useful, there should be a separate call, VALGRIND_CREATE_BLOCK(addr,
len, type?), to create them (with some better-defined semantics with
respect to each other and the heap management machinery).
(Note that these are distinct from the CustomAlloc heap chunks made by
VALGRIND_MALLOCLIKE/FREELIKE, which have their own problems.)
J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 21:18:41
|
(Note, last time I talked about these calls, I was making a bad mistake;
I assumed the size parameter of FREELIKE was the size of the block. It
is not; it is the size of the redzone, and this was causing me great
confusion.)
VALGRIND_MALLOCLIKE_BLOCK:
The main problem with MALLOCLIKE is that if you use it on a piece of
memory obtained from malloc(), it makes the leak-checker explode with an
assertion failure. This is because it creates an AllocCustom heap chunk
embedded within an AllocMalloc chunk, which breaks the invariant that
heap chunks can't overlap.
It seems to me that if we allow heap chunks to be nested, it would give
us a lot of extra power. I think it would subsume the mempool
functionality in a more general way, and allow us to define sub-chunks of
memory for all sorts of purposes.
The semantics would be:
1. A new heap chunk is either not enclosed by another chunk, or is
completely contained by an outer chunk.
* It follows that if a chunk has sub-chunks, they're
completely contained.
2. If you free a chunk, it and all its subchunks are freed.
1 means that chunks are always properly nested. 2 defines the lifespan
of a chunk (never longer than its containing chunks).
I'm not precisely sure what the semantics of leak-checking would be in
the presence of nested chunks. It would depend on how they're being used:
1. If you have a memory pool, where all memory in the pool is freed
together, then the nested chunks should be scanned and marked if
the outer chunks are.
2. If you have an allocator which is carving a superblock into
independent allocations, then scanning and marking should avoid
sub-chunks; sub-chunks need to be explicitly referenced to be
found not leaked.
VALGRIND_FREELIKE_BLOCK
The main problem with VALGRIND_FREELIKE_BLOCK is that it immediately
forgets about the freed block. This prevents use-after-free messages
from being reported. I think it should remember freed blocks until the
memory is recycled (ie, overlapped by a VALGRIND_MALLOCLIKE_BLOCK).
J
|
|
From: Julian S. <js...@ac...> - 2005-02-28 20:25:29
|
> This code relies on the fact that SYSRES is an alias for SYSNO, so it
> will always be >0 unless it has been set to something else. Also,
> SYS_PRE_MEM_READ/WRITE will set SYSNO to EFAULT if a pointer argument is
> pointing at bad memory.

I guess so. It's not too clear to naive visitors, though.

>>RES <= 0 after the pre-wrapper is the wrong thing to do and causes merry
>>hell with __NR_read on amd64.
>
> Why?

On amd64, both the syscall number and the syscall result live
in RAX. And __NR_read == 0. So, we start with RAX = 0, call
the pre-function, check RES (which is RAX) for <= 0, which it
duly is, and conclude that the pre-function decided to error out
the syscall, and so doesn't hand it off to the kernel. Result is
that the client believes that the read got 0 bytes and is
totally mystified.

I spotted this a couple of weeks back and so changed RES <= 0 to
RES < 0. That fixes __NR_read. But now setrlimit is screwed since
that intercepts some but not all of the time -- depends on what limit
you are changing. For the file-descriptor limits, it sets RES==0
but now the RES < 0 test still allows the call through to the kernel,
which means that VG_(safe_fd) is hosed after that.

Basically inspecting RES after the pre-function is not a reliable
way to establish whether the pre-function assigned to it. The
current code works only because (I think) on x86 no syscall number
has the value 0, and the go-no-further test is RES <= 0.

> In general, my policy has been that if the PRE() wants to set some
> positive result, then it should probably be Special. It's OK for
> non-Special PRE() to fail syscalls, but not complete them.

But what about setrlimit: it intercepts and completes RLIMIT_NOFILE,
RLIMIT_DATA and RLIMIT_STACK; but the rest are handed off to the kernel.
So a static Special/non-Special division doesn't seem to make
sense here, and setrlimit is not marked Special.

> Well, the code is functionally correct on Linux because a result will
> always be set to either ENOMEM or whatever PLATFORM_DO_MMAP sets it to.

Functionally correct is good. Functionally _and_ manifestly correct
to the casual reader is even better.

J
|
|
From: Julian S. <js...@ac...> - 2005-02-28 20:25:24
|
> Oh, I'd prefer it if it weren't thread state, but some direct return
> value from PRE() - either via return, or a pointer argument (like flags).

Fair enough -- what's the reason, though? Is there a potential
race condition or something?

J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 18:20:57
|
Julian Seward wrote:
>Background: I'm tracking down problems in syscall handling on amd64.
>As part of that, I've added a per-thread Bool used to indicate when
>a syscall's PRE wrapper has set RES, so that we don't have to
>figure this out by inspecting RES after the pre-wrapper.
>
Oh, I'd prefer it if it weren't thread state, but some direct return
value from PRE() - either via return, or a pointer argument (like flags).
J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 18:17:37
|
Julian Seward wrote:
>(in the cvs head)
>
>There seem to be paths through this which do not assign any
>value to SYSRES. I don't see how this can be right if this
>is a Special syscall (for which we provide all handling).
>
>If (a4 & VKI_MAP_FIXED) is taken
> and !VG_(valid_client_addr)(...) is not taken
>
>then we hit if (SYSRES != -VKI_ENOMEM) ...
>
>and read SYSRES without having set it first.
>
>What am I missing?
>
>
This code relies on the fact that SYSRES is an alias for SYSNO, so it
will always be >0 unless it has been set to something else. Also,
SYS_PRE_MEM_READ/WRITE will set SYSNO to EFAULT if a pointer argument is
pointing at bad memory.
>Background: I'm tracking down problems in syscall handling on amd64.
>As part of that, I've added a per-thread Bool used to indicate when
>a syscall's PRE wrapper has set RES, so that we don't have to
>figure this out by inspecting RES after the pre-wrapper. Checking
>RES <= 0 after the pre-wrapper is the wrong thing to do and causes merry
>hell with __NR_read on amd64.
>
Why?
> Inspecting RES after the pre-wrapper
>for <= 0 also makes it impossible for a pre-wrapper to return any
>result > 0, which isn't good.
>Generally you can't reliably conclude anything at all about whether
>the pre-wrapper assigned RES by inspecting it afterwards.
>
>
In general, my policy has been that if the PRE() wants to set some
positive result, then it should probably be Special. It's OK for
non-Special PRE() to fail syscalls, but not complete them.
>As a side effect of this, I added an assertion to check that RES has
>been set after every syscall marked Special (fair enough, right?)
>and immediately I have it barfing on mmap.
>
>
Well, the code is functionally correct on Linux because a result will
always be set to either ENOMEM or whatever PLATFORM_DO_MMAP sets it to.
J
|
|
From: Julian S. <js...@ac...> - 2005-02-28 17:25:08
|
(in the cvs head)

There seem to be paths through this which do not assign any
value to SYSRES. I don't see how this can be right if this
is a Special syscall (for which we provide all handling).

If (a4 & VKI_MAP_FIXED) is taken
  and !VG_(valid_client_addr)(...) is not taken

then we hit if (SYSRES != -VKI_ENOMEM) ...

and read SYSRES without having set it first.

What am I missing?

Same problem for sys_rt_sigaction and sys_rt_sigprocmask.

-----------------------------

Background: I'm tracking down problems in syscall handling on amd64.
As part of that, I've added a per-thread Bool used to indicate when
a syscall's PRE wrapper has set RES, so that we don't have to
figure this out by inspecting RES after the pre-wrapper. Checking
RES <= 0 after the pre-wrapper is the wrong thing to do and causes merry
hell with __NR_read on amd64. Inspecting RES after the pre-wrapper
for <= 0 also makes it impossible for a pre-wrapper to return any
result > 0, which isn't good.

Generally you can't reliably conclude anything at all about whether
the pre-wrapper assigned RES by inspecting it afterwards.

As a side effect of this, I added an assertion to check that RES has
been set after every syscall marked Special (fair enough, right?)
and immediately I have it barfing on mmap.

J
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 16:00:51
|
CVS commit by fitzhardinge:
Check the file-descriptor is reasonable in each of the socketcall
subfunctions.
M +86 -32 coregrind/vg_syscalls.c 1.256
M +5 -5 memcheck/tests/buflen_check.c 1.3
--- valgrind/coregrind/vg_syscalls.c #1.255:1.256
@@ -4877,4 +4877,8 @@ PRE(sys_socketcall, MayBlock)
int addrlen); */
SYS_PRE_MEM_READ( "socketcall.bind(args)", arg2, 3*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.bind", tid, False))
+ set_result(-VKI_EBADF);
+ else
pre_mem_read_sockaddr( tid, "socketcall.bind(my_addr.%s)",
(struct vki_sockaddr *) (((UWord*)arg2)[1]), ((UWord*)arg2)[2]);
@@ -4884,4 +4888,7 @@ PRE(sys_socketcall, MayBlock)
/* int listen(int s, int backlog); */
SYS_PRE_MEM_READ( "socketcall.listen(args)", arg2, 2*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.listen", tid, False))
+ set_result(-VKI_EBADF);
break;
@@ -4889,5 +4896,8 @@ PRE(sys_socketcall, MayBlock)
/* int accept(int s, struct sockaddr *addr, int *addrlen); */
SYS_PRE_MEM_READ( "socketcall.accept(args)", arg2, 3*sizeof(Addr) );
- {
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.accept", tid, False))
+ set_result(-VKI_EBADF);
+ else {
Addr addr_p = ((UWord*)arg2)[1];
Addr addrlen_p = ((UWord*)arg2)[2];
@@ -4908,5 +4918,8 @@ PRE(sys_socketcall, MayBlock)
((UWord*)arg2)[1], /* msg */
((UWord*)arg2)[2] /* len */ );
- pre_mem_read_sockaddr( tid, "socketcall.sendto(to.%s)",
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.sendto", tid, False))
+ set_result(-VKI_EBADF);
+ else pre_mem_read_sockaddr( tid, "socketcall.sendto(to.%s)",
(struct vki_sockaddr *) (((UWord*)arg2)[4]), ((UWord*)arg2)[5]);
break;
@@ -4915,4 +4928,8 @@ PRE(sys_socketcall, MayBlock)
/* int send(int s, const void *msg, size_t len, int flags); */
SYS_PRE_MEM_READ( "socketcall.send(args)", arg2, 4*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.send", tid, False))
+ set_result(-VKI_EBADF);
+ else
SYS_PRE_MEM_READ( "socketcall.send(msg)",
((UWord*)arg2)[1], /* msg */
@@ -4924,5 +4941,8 @@ PRE(sys_socketcall, MayBlock)
struct sockaddr *from, int *fromlen); */
SYS_PRE_MEM_READ( "socketcall.recvfrom(args)", arg2, 6*sizeof(Addr) );
- {
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.recvfrom", tid, False))
+ set_result(-VKI_EBADF);
+ else {
Addr buf_p = ((UWord*)arg2)[1];
Int len = ((UWord*)arg2)[2];
@@ -4946,4 +4966,8 @@ PRE(sys_socketcall, MayBlock)
*/
SYS_PRE_MEM_READ( "socketcall.recv(args)", arg2, 4*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.recv", tid, False))
+ set_result(-VKI_EBADF);
+ else
SYS_PRE_MEM_WRITE( "socketcall.recv(buf)",
((UWord*)arg2)[1], /* buf */
@@ -4955,4 +4979,8 @@ PRE(sys_socketcall, MayBlock)
struct sockaddr *serv_addr, int addrlen ); */
SYS_PRE_MEM_READ( "socketcall.connect(args)", arg2, 3*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.connect", tid, False))
+ set_result(-VKI_EBADF);
+ else {
SYS_PRE_MEM_READ( "socketcall.connect(serv_addr.sa_family)",
((UWord*)arg2)[1], /* serv_addr */
@@ -4961,4 +4989,5 @@ PRE(sys_socketcall, MayBlock)
"socketcall.connect(serv_addr.%s)",
(struct vki_sockaddr *) (((UWord*)arg2)[1]), ((UWord*)arg2)[2]);
+ }
break;
@@ -4967,4 +4996,8 @@ PRE(sys_socketcall, MayBlock)
const void *optval, int optlen); */
SYS_PRE_MEM_READ( "socketcall.setsockopt(args)", arg2, 5*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.setsockopt", tid, False))
+ set_result(-VKI_EBADF);
+ else
SYS_PRE_MEM_READ( "socketcall.setsockopt(optval)",
((UWord*)arg2)[3], /* optval */
@@ -4976,5 +5009,8 @@ PRE(sys_socketcall, MayBlock)
void *optval, socklen_t *optlen); */
SYS_PRE_MEM_READ( "socketcall.getsockopt(args)", arg2, 5*sizeof(Addr) );
- {
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.getsockopt", tid, False))
+ set_result(-VKI_EBADF);
+ else {
Addr optval_p = ((UWord*)arg2)[3];
Addr optlen_p = ((UWord*)arg2)[4];
@@ -4990,5 +5026,8 @@ PRE(sys_socketcall, MayBlock)
/* int getsockname(int s, struct sockaddr* name, int* namelen) */
SYS_PRE_MEM_READ( "socketcall.getsockname(args)", arg2, 3*sizeof(Addr) );
- {
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.getsockname", tid, False))
+ set_result(-VKI_EBADF);
+ else {
Addr name_p = ((UWord*)arg2)[1];
Addr namelen_p = ((UWord*)arg2)[2];
@@ -5004,5 +5043,8 @@ PRE(sys_socketcall, MayBlock)
/* int getpeername(int s, struct sockaddr* name, int* namelen) */
SYS_PRE_MEM_READ( "socketcall.getpeername(args)", arg2, 3*sizeof(Addr) );
- {
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.getpeername", tid, False))
+ set_result(-VKI_EBADF);
+ else {
Addr name_p = ((UWord*)arg2)[1];
Addr namelen_p = ((UWord*)arg2)[2];
@@ -5018,4 +5060,7 @@ PRE(sys_socketcall, MayBlock)
/* int shutdown(int s, int how); */
SYS_PRE_MEM_READ( "socketcall.shutdown(args)", arg2, 2*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.shutdown", tid, False))
+ set_result(-VKI_EBADF);
break;
@@ -5025,10 +5070,15 @@ PRE(sys_socketcall, MayBlock)
/* this causes warnings, and I don't get why. glibc bug?
* (after all it's glibc providing the arguments array)
- SYS_PRE_MEM_READ( "socketcall.sendmsg(args)", arg2, 3*sizeof(Addr) );
*/
+ SYS_PRE_MEM_READ( "socketcall.sendmsg(args)", arg2, 3*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.sendmsg", tid, False))
+ set_result(-VKI_EBADF);
+ else {
struct vki_msghdr *msg = (struct vki_msghdr *)((UWord*)arg2)[ 1 ];
- msghdr_foreachfield ( tid, msg, pre_mem_read_sendmsg );
+ msghdr_foreachfield ( tid, msg, pre_mem_read_sendmsg );
+ }
break;
}
@@ -5039,10 +5089,14 @@ PRE(sys_socketcall, MayBlock)
/* this causes warnings, and I don't get why. glibc bug?
* (after all it's glibc providing the arguments array)
- SYS_PRE_MEM_READ("socketcall.recvmsg(args)", arg2, 3*sizeof(Addr) );
*/
+ SYS_PRE_MEM_READ("socketcall.recvmsg(args)", arg2, 3*sizeof(Addr) );
+ if (!VG_(is_kerror)(SYSRES) &&
+ !VG_(fd_allowed)(((Int *)arg2)[0], "socketcall.recvmsg", tid, False))
+ set_result(-VKI_EBADF);
+ else {
struct vki_msghdr *msg = (struct vki_msghdr *)((UWord*)arg2)[ 1 ];
msghdr_foreachfield ( tid, msg, pre_mem_write_recvmsg );
-
+ }
break;
}
--- valgrind/memcheck/tests/buflen_check.c #1.2:1.3
@@ -6,5 +6,5 @@ int main(void)
{
struct sockaddr name;
- int res1, res2;
+ int res1, res2, res3;
int len = 10;
@@ -16,10 +16,10 @@ int main(void)
/* Valgrind 1.0.X doesn't report the second error */
- res1 = getsockname(-1, NULL, &len); /* NULL is bogus */
- res2 = getsockname(-1, &name, NULL); /* NULL is bogus */
- if (res1 == -1) {
+ res2 = getsockname(res1, NULL, &len); /* NULL is bogus */
+ res3 = getsockname(res1, &name, NULL); /* NULL is bogus */
+ if (res2 == -1) {
fprintf(stderr, "getsockname(1) failed\n");
}
- if (res2 == -1) {
+ if (res3 == -1) {
fprintf(stderr, "getsockname(2) failed\n");
}
|
|
From: Tom H. <th...@cy...> - 2005-02-28 16:00:46
|
CVS commit by thughes:
First stab at getting the new thread modelling code to report errors
properly through the main error reporting system.
M +7 -0 core.h 1.92
M +17 -12 vg_errcontext.c 1.68
M +1 -0 vg_pthreadmodel.c 1.2
M +160 -10 vg_threadmodel.c 1.3
--- valgrind/coregrind/core.h #1.91:1.92
@@ -1030,4 +1030,5 @@ typedef
enum {
ThreadErr = -1, // Thread error
+ MutexErr = -2, // Mutex error
}
CoreErrorKind;
@@ -1869,4 +1870,10 @@ extern void VG_(tm_mutex_unlock) (Thread
extern Bool VG_(tm_mutex_exists) (Addr mutexp);
+extern UInt VG_(tm_error_update_extra) (Error *err);
+extern Bool VG_(tm_error_equal) (VgRes res, Error *e1, Error *e2);
+extern void VG_(tm_error_print) (Error *err);
+
+extern void VG_(tm_init) ();
+
/* ----- pthreads ----- */
extern void VG_(pthread_init) ();
--- valgrind/coregrind/vg_errcontext.c #1.67:1.68
@@ -202,10 +202,7 @@ static Bool eq_Error ( VgRes res, Error*
switch (e1->ekind) {
case ThreadErr:
+ case MutexErr:
vg_assert(VG_(needs).core_errors);
- if (e1->string == e2->string)
- return True;
- if (0 == VG_(strcmp)(e1->string, e2->string))
- return True;
- return False;
+ return VG_(tm_error_equal)(res, e1, e2);
default:
if (VG_(needs).skin_errors)
@@ -229,7 +226,7 @@ static void pp_Error ( Error* err, Bool
switch (err->ekind) {
case ThreadErr:
+ case MutexErr:
vg_assert(VG_(needs).core_errors);
- VG_(message)(Vg_UserMsg, "%s", err->string );
- VG_(pp_ExeContext)(err->where);
+ VG_(tm_error_print)(err);
break;
default:
@@ -330,5 +327,5 @@ static void gen_suppression(Error* err)
VG_(printf)(" <insert a suppression name here>\n");
- if (ThreadErr == err->ekind) {
+ if (ThreadErr == err->ekind || MutexErr == err->ekind) {
VG_(printf)(" core:PThread\n");
@@ -516,7 +513,15 @@ void VG_(maybe_record_error) ( ThreadId
*p = err;
- /* update `extra', for non-core errors (core ones don't use 'extra') */
- if (VG_(needs).skin_errors && ThreadErr != ekind) {
+ /* update `extra' */
+ if (VG_(needs).skin_errors) {
+ switch (ekind) {
+ case ThreadErr:
+ case MutexErr:
+ extra_size = VG_(tm_error_update_extra)(p);
+ break;
+ default:
extra_size = SK_(update_extra)(p);
+ break;
+ }
/* copy block pointed to by `extra', if there is one */
@@ -934,5 +939,5 @@ Bool supp_matches_error(Supp* su, Error*
switch (su->skind) {
case PThreadSupp:
- return (err->ekind == ThreadErr);
+ return (err->ekind == ThreadErr || err->ekind == MutexErr);
default:
if (VG_(needs).skin_errors) {
--- valgrind/coregrind/vg_pthreadmodel.c #1.1:1.2
@@ -461,4 +461,5 @@ void VG_(pthread_init)()
VG_(add_wrapper)("soname:libpthread.so.0", wraps[i].name, &wraps[i].wrapper);
}
+ VG_(tm_init)();
VG_(tm_thread_create)(VG_INVALID_THREADID, VG_(master_tid), True);
}
--- valgrind/coregrind/vg_threadmodel.c #1.2:1.3
@@ -114,4 +114,11 @@ enum thread_error
};
+struct thread_error_data
+{
+ enum thread_error err;
+ struct thread *th;
+ const Char *action;
+};
+
static const Char *pp_threadstate(const struct thread *th)
{
@@ -248,6 +255,7 @@ static struct thread *thread_get(ThreadI
static void thread_report(ThreadId tid, enum thread_error err, const Char *action)
{
+ Char *errstr = "?";
struct thread *th = thread_get(tid);
- const Char *errstr = "?";
+ struct thread_error_data errdata;
switch(err) {
@@ -260,6 +268,24 @@ static void thread_report(ThreadId tid,
}
- VG_(printf)("*** thread problem: tid=%d(%s) err=\"%s\" action=\"%s\"\n",
- tid, pp_threadstate(th), errstr, action);
+ errdata.err = err;
+ errdata.th = th;
+ errdata.action = action;
+
+ VG_(maybe_record_error)(tid, ThreadErr, 0, errstr, &errdata);
+}
+
+static void pp_thread_error(Error *err)
+{
+ struct thread_error_data *errdata = VG_(get_error_extra)(err);
+ struct thread *th = errdata->th;
+ Char *errstr = VG_(get_error_string)(err);
+
+ VG_(message)(Vg_UserMsg, "Found %s thread in state %s while %s\n",
+ errstr, pp_threadstate(th), errdata->action);
+ VG_(pp_ExeContext)(VG_(get_error_where)(err));
+
+ VG_(message)(Vg_UserMsg, "Thread was %s",
+ th->state == TS_Dead ? "destroyed" : "created");
+ VG_(pp_ExeContext)(th->ec_created);
}
@@ -526,5 +552,5 @@ struct mutex
};
-enum mutex_err
+enum mutex_error
{
MXE_NotExist, /* never existed */
@@ -537,4 +563,11 @@ enum mutex_err
};
+struct mutex_error_data
+{
+ enum mutex_error err;
+ struct mutex *mx;
+ const Char *action;
+};
+
static struct mutex *mx_get(Addr mutexp);
@@ -585,8 +618,9 @@ static void mutex_setstate(ThreadId tid,
}
-static void mutex_report(ThreadId tid, Addr mutexp, enum mutex_err err, const Char *action)
+static void mutex_report(ThreadId tid, Addr mutexp, enum mutex_error err, const Char *action)
{
- const Char *errstr="?";
+ Char *errstr="?";
struct mutex *mx = mx_get(mutexp);
+ struct mutex_error_data errdata;
switch(err) {
@@ -597,10 +631,45 @@ static void mutex_report(ThreadId tid, A
case MXE_Locked: errstr="locked"; break;
case MXE_NotOwner: errstr="unowned"; break;
- case MXE_Deadlock: errstr="deadlock"; break;
+ case MXE_Deadlock: errstr="deadlock on"; break;
}
- /* report a mutex-related error */
- VG_(printf)("*** mutex error: thread=%d found %s mutex %p(state=%s) while %s\n",
- tid, errstr, mutexp, mx ? pp_mutexstate(mx) : (const Char *)"-", action);
+ errdata.err = err;
+ errdata.mx = mx;
+ errdata.action = action;
+
+ VG_(maybe_record_error)(tid, MutexErr, 0, errstr, &errdata);
+}
+
+static void pp_mutex_error(Error *err)
+{
+ struct mutex_error_data *errdata = VG_(get_error_extra)(err);
+ struct mutex *mx = errdata->mx;
+ Char *errstr = VG_(get_error_string)(err);
+
+ VG_(message)(Vg_UserMsg, "Found %s mutex %p while %s\n",
+ errstr, mx ? mx->mutex : 0, errdata->action);
+ VG_(pp_ExeContext)(VG_(get_error_where)(err));
+
+ switch (mx->state) {
+ case MX_Init:
+ case MX_Dead:
+ break;
+ case MX_Locked:
+ VG_(message)(Vg_UserMsg, "Mutex was locked by thread %d", mx->owner);
+ VG_(pp_ExeContext)(mx->ec_locked);
+ break;
+ case MX_Unlocking:
+ VG_(message)(Vg_UserMsg, "Mutex being unlocked");
+ VG_(pp_ExeContext)(mx->ec_locked);
+ break;
+ case MX_Free:
+ VG_(message)(Vg_UserMsg, "Mutex was unlocked");
+ VG_(pp_ExeContext)(mx->ec_locked);
+ break;
+ }
+
+ VG_(message)(Vg_UserMsg, "Mutex was %s",
+ mx->state == MX_Dead ? "destroyed" : "created");
+ VG_(pp_ExeContext)(mx->ec_create);
}
@@ -935,2 +1004,83 @@ void VG_(tm_cond_signal)(ThreadId tid, v
}
+/* --------------------------------------------------
+ Error handling
+ -------------------------------------------------- */
+
+UInt VG_(tm_error_update_extra)(Error *err)
+{
+ switch (VG_(get_error_kind)(err)) {
+ case ThreadErr: {
+ struct thread_error_data *errdata = VG_(get_error_extra)(err);
+ struct thread *new_th = VG_(malloc)(sizeof(struct thread));
+
+ VG_(memcpy)(new_th, errdata->th, sizeof(struct thread));
+
+ errdata->th = new_th;
+
+ return sizeof(struct thread_error_data);
+ }
+
+ case MutexErr: {
+ struct mutex_error_data *errdata = VG_(get_error_extra)(err);
+ struct mutex *new_mx = VG_(malloc)(sizeof(struct mutex));
+
+ VG_(memcpy)(new_mx, errdata->mx, sizeof(struct mutex));
+
+ errdata->mx = new_mx;
+
+ return sizeof(struct mutex_error_data);
+ }
+
+ default:
+ return 0;
+ }
+}
+
+Bool VG_(tm_error_equal)(VgRes res, Error *e1, Error *e2)
+{
+ /* Guaranteed by calling function */
+ vg_assert(VG_(get_error_kind)(e1) == VG_(get_error_kind)(e2));
+
+ switch (VG_(get_error_kind)(e1)) {
+ case ThreadErr: {
+ struct thread_error_data *errdata1 = VG_(get_error_extra)(e1);
+ struct thread_error_data *errdata2 = VG_(get_error_extra)(e2);
+
+ return errdata1->err == errdata2->err;
+ }
+
+ case MutexErr: {
+ struct mutex_error_data *errdata1 = VG_(get_error_extra)(e1);
+ struct mutex_error_data *errdata2 = VG_(get_error_extra)(e2);
+
+ return errdata1->err == errdata2->err;
+ }
+
+ default:
+ VG_(printf)("Error:\n unknown error code %d\n",
+ VG_(get_error_kind)(e1));
+ VG_(core_panic)("unknown error code in VG_(tm_error_equal)");
+ }
+}
+
+void VG_(tm_error_print)(Error *err)
+{
+ switch (VG_(get_error_kind)(err)) {
+ case ThreadErr:
+ pp_thread_error(err);
+ break;
+ case MutexErr:
+ pp_mutex_error(err);
+ break;
+ }
+}
+
+/* --------------------------------------------------
+ Initialisation
+ -------------------------------------------------- */
+
+void VG_(tm_init)()
+{
+ VG_(needs_core_errors)();
+}
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 16:00:16
|
CVS commit by fitzhardinge:
Using nanosleep in yield was making it very slow; sched_yield seems better.
M +5 -7 vg_scheduler.c 1.226
--- valgrind/coregrind/vg_scheduler.c #1.225:1.226
@@ -403,12 +403,10 @@ void VG_(vg_yield)(void)
/*
Tell the kernel we're yielding.
-
- Use a short sleep rather than an actual sched_yield, because we
- don't want the kernel to give up on us forever.
-
- (This should probably be a no-op if we haven't started more
- than one thread, but it probably doesn't make much difference.)
*/
+ if (1)
+ VG_(do_syscall)(__NR_sched_yield);
+ else
VG_(nanosleep)(&ts);
+
VG_(set_running)(tid);
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 15:59:43
|
CVS commit by fitzhardinge:
Fix leak of Valgrind-internal thread stacks. This was leaking 64k
each time a thread exited.
BUGS: 100036
M +1 -1 vg_scheduler.c 1.225
--- valgrind/coregrind/vg_scheduler.c #1.224:1.225
@@ -513,5 +513,5 @@ void mostly_clear_thread_record ( Thread
VG_(sigemptyset)(&VG_(threads)[tid].eff_sig_mask);
- VGA_(os_state_init)(&VG_(threads)[tid]);
+ VGA_(os_state_clear)(&VG_(threads)[tid]);
/* start with no altstack */
|
|
From: Julian S. <js...@ac...> - 2005-02-28 14:57:26
|
> >> Recommendation: make --leak-check=yes the default.
> >
> > What about a lightweight leak-check which is always enabled, but only
> > reports a summary of leaked memory, and a CLO which reports the full
> > details of leaked memory?
>
> I don't see much point. People are mostly using --leak-check=yes, let's
> give them what they're (indirectly) asking for.

I vote for permanently-enabled summary only, with --leak-check=yes doing the full thing. Advantages:

* it continually reminds users that leak checking is available
* having the full output enabled by default floods people with too much info they may not want
* it means the leak checker is continually exercised, exposing any bugs sooner

J |
|
From: Nicholas N. <nj...@cs...> - 2005-02-28 14:41:02
|
On Sun, 27 Feb 2005, Jeremy Fitzhardinge wrote:
>> Recommendation: make --leak-check=yes the default.
>
> What about a lightweight leak-check which is always enabled, but only
> reports a summary of leaked memory, and a CLO which reports the full
> details of leaked memory?

I don't see much point. People are mostly using --leak-check=yes, let's
give them what they're (indirectly) asking for.

N |
|
From: Jeremy F. <je...@go...> - 2005-02-28 07:22:53
|
CVS commit by fitzhardinge:
By popular demand, change the leak-check output for leaked blocks which
refer to other leaked blocks to (XX direct, YY indirect) rather than
the overly terse (XX+YY) notation.
M +1 -1 addrcheck/tests/leak-0.stderr.exp 1.2
M +5 -5 addrcheck/tests/leak-cycle.stderr.exp 1.2
M +1 -1 addrcheck/tests/leak-regroot.stderr.exp 1.2
M +5 -5 addrcheck/tests/leak-tree.stderr.exp 1.2
M +2 -2 memcheck/mac_leakcheck.c 1.21
M +1 -1 memcheck/tests/leak-0.stderr.exp 1.2
M +5 -5 memcheck/tests/leak-cycle.stderr.exp 1.2
M +1 -1 memcheck/tests/leak-regroot.stderr.exp 1.2
M +5 -5 memcheck/tests/leak-tree.stderr.exp 1.2
M +1 -1 memcheck/tests/mempool.stderr.exp 1.3
M +2 -2 memcheck/tests/pointer-trace.stderr.exp 1.4
--- valgrind/memcheck/mac_leakcheck.c #1.20:1.21
@@ -203,5 +203,5 @@ void MAC_(pp_LeakError)(void* vl, UInt n
if (l->indirect_bytes) {
VG_(message)(Vg_UserMsg,
- "%d (%d+%d) bytes in %d blocks are %s in loss record %d of %d",
+ "%d (%d direct, %d indirect) bytes in %d blocks are %s in loss record %d of %d",
l->total_bytes + l->indirect_bytes,
l->total_bytes, l->indirect_bytes, l->num_blocks,
--- valgrind/addrcheck/tests/leak-regroot.stderr.exp #1.1:1.2
@@ -6,5 +6,5 @@
definitely lost: 0 bytes in 0 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 10 bytes in 1 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/addrcheck/tests/leak-cycle.stderr.exp #1.1:1.2
@@ -3,5 +3,5 @@
checked ... bytes.
-24 (8+16) bytes in 1 blocks are definitely lost in loss record 15 of 18
+24 (8 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 15 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -10,5 +10,5 @@
-24 (8+16) bytes in 1 blocks are definitely lost in loss record 16 of 18
+24 (8 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 16 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -17,5 +17,5 @@
-48 (8+40) bytes in 1 blocks are definitely lost in loss record 17 of 18
+48 (8 direct, 40 indirect) bytes in 1 blocks are definitely lost in loss record 17 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -24,5 +24,5 @@
-48 (8+40) bytes in 1 blocks are definitely lost in loss record 18 of 18
+48 (8 direct, 40 indirect) bytes in 1 blocks are definitely lost in loss record 18 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -33,5 +33,5 @@
definitely lost: 32 bytes in 4 blocks.
indirectly lost: 112 bytes in 14 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/addrcheck/tests/leak-0.stderr.exp #1.1:1.2
@@ -6,5 +6,5 @@
definitely lost: 0 bytes in 0 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 1 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/addrcheck/tests/leak-tree.stderr.exp #1.1:1.2
@@ -3,5 +3,5 @@
checked ... bytes.
-72 (8+64) bytes in 1 blocks are definitely lost in loss record 11 of 11
+72 (8 direct, 64 indirect) bytes in 1 blocks are definitely lost in loss record 11 of 11
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -11,5 +11,5 @@
definitely lost: 8 bytes in 1 blocks.
indirectly lost: 64 bytes in 8 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 16 bytes in 2 blocks.
suppressed: 0 bytes in 0 blocks.
@@ -26,5 +26,5 @@
-88 (8+80) bytes in 1 blocks are definitely lost in loss record 13 of 14
+88 (8 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 13 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -32,5 +32,5 @@
-16 (8+8) bytes in 1 blocks are definitely lost in loss record 14 of 14
+16 (8 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 14 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -40,5 +40,5 @@
definitely lost: 24 bytes in 3 blocks.
indirectly lost: 88 bytes in 11 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/memcheck/tests/mempool.stderr.exp #1.2:1.3
@@ -32,5 +32,5 @@
-100028 (20+100008) bytes in 1 blocks are definitely lost in loss record 2 of 3
+100028 (20 direct, 100008 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 3
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: make_pool (mempool.c:37)
--- valgrind/memcheck/tests/pointer-trace.stderr.exp #1.3:1.4
@@ -6,5 +6,5 @@
definitely lost: 0 bytes in 0 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 1048576 bytes in 1 blocks.
suppressed: 0 bytes in 0 blocks.
@@ -26,5 +26,5 @@
definitely lost: 1048576 bytes in 1 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/memcheck/tests/leak-regroot.stderr.exp #1.1:1.2
@@ -6,5 +6,5 @@
definitely lost: 0 bytes in 0 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 10 bytes in 1 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/memcheck/tests/leak-tree.stderr.exp #1.1:1.2
@@ -3,5 +3,5 @@
checked ... bytes.
-72 (8+64) bytes in 1 blocks are definitely lost in loss record 11 of 11
+72 (8 direct, 64 indirect) bytes in 1 blocks are definitely lost in loss record 11 of 11
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -11,5 +11,5 @@
definitely lost: 8 bytes in 1 blocks.
indirectly lost: 64 bytes in 8 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 16 bytes in 2 blocks.
suppressed: 0 bytes in 0 blocks.
@@ -26,5 +26,5 @@
-88 (8+80) bytes in 1 blocks are definitely lost in loss record 13 of 14
+88 (8 direct, 80 indirect) bytes in 1 blocks are definitely lost in loss record 13 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -32,5 +32,5 @@
-16 (8+8) bytes in 1 blocks are definitely lost in loss record 14 of 14
+16 (8 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 14 of 14
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-tree.c:11)
@@ -40,5 +40,5 @@
definitely lost: 24 bytes in 3 blocks.
indirectly lost: 88 bytes in 11 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/memcheck/tests/leak-0.stderr.exp #1.1:1.2
@@ -6,5 +6,5 @@
definitely lost: 0 bytes in 0 blocks.
indirectly lost: 0 bytes in 0 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 1 blocks.
suppressed: 0 bytes in 0 blocks.
--- valgrind/memcheck/tests/leak-cycle.stderr.exp #1.1:1.2
@@ -3,5 +3,5 @@
checked ... bytes.
-24 (8+16) bytes in 1 blocks are definitely lost in loss record 15 of 18
+24 (8 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 15 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -10,5 +10,5 @@
-24 (8+16) bytes in 1 blocks are definitely lost in loss record 16 of 18
+24 (8 direct, 16 indirect) bytes in 1 blocks are definitely lost in loss record 16 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -17,5 +17,5 @@
-48 (8+40) bytes in 1 blocks are definitely lost in loss record 17 of 18
+48 (8 direct, 40 indirect) bytes in 1 blocks are definitely lost in loss record 17 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -24,5 +24,5 @@
-48 (8+40) bytes in 1 blocks are definitely lost in loss record 18 of 18
+48 (8 direct, 40 indirect) bytes in 1 blocks are definitely lost in loss record 18 of 18
at 0x........: malloc (vg_replace_malloc.c:...)
by 0x........: mk (leak-cycle.c:11)
@@ -33,5 +33,5 @@
definitely lost: 32 bytes in 4 blocks.
indirectly lost: 112 bytes in 14 blocks.
- possibly lost: 0 bytes in 0 blocks.
+ possibly lost: 0 bytes in 0 blocks.
still reachable: 0 bytes in 0 blocks.
suppressed: 0 bytes in 0 blocks.
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 06:57:41
|
CVS commit by fitzhardinge:
Fix --command-line-only; the code had the sense wrong.
M +3 -3 coregrind/vg_main.c 1.256
M +1 -1 none/tests/selfrun.vgtest 1.2
--- valgrind/coregrind/vg_main.c #1.255:1.256
@@ -646,5 +646,5 @@ static void get_command_line( int argc,
} else {
- Bool augment = True;
+ Bool noaugment = False;
/* Count the arguments on the command line. */
@@ -659,5 +659,5 @@ static void get_command_line( int argc,
break;
}
- VG_BOOL_CLO("--command-line-only", augment)
+ VG_BOOL_CLO("--command-line-only", noaugment)
}
cl_argv = &argv[vg_argc0];
@@ -667,5 +667,5 @@ static void get_command_line( int argc,
those extra args will already be present in VALGRINDCLO.
(We also don't do it when --command-line-only=yes.) */
- if (augment)
+ if (!noaugment)
augment_command_line(&vg_argc0, &vg_argv0);
}
--- valgrind/none/tests/selfrun.vgtest #1.1:1.2
@@ -1,3 +1,3 @@
-prog: ../../coregrind/valgrind --tool=none ./selfrun
+prog: ../../coregrind/valgrind --tool=none --command-line-only=yes ./selfrun
vgopts: --single-step=yes
prereq: grep '^#define HAVE_PIE 1' ../../config.h > /dev/null
|
|
From: Jeremy F. <je...@go...> - 2005-02-28 06:56:56
|
CVS commit by fitzhardinge:
Fix assertion; padfile can == 0
M +1 -1 vg_main.c 1.255
--- valgrind/coregrind/vg_main.c #1.254:1.255
@@ -1403,5 +1403,5 @@ void as_unpad(void *start, void *end, in
int res;
- vg_assert(padfile > 0);
+ vg_assert(padfile >= 0);
res = fstat(padfile, &padstat);
|
|
From: <js...@ac...> - 2005-02-28 04:01:05
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2005-02-28 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 207 tests, 11 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <to...@co...> - 2005-02-28 03:28:24
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2005-02-28 03:20:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 214 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-02-28 03:23:57
|
Nightly build on audi ( Red Hat 9 ) started at 2005-02-28 03:15:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 213 tests, 6 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
make: *** [regtest] Error 1 |