From: Jeremy F. <je...@go...> - 2004-11-19 23:45:39
On Fri, 2004-11-19 at 13:42 +0000, Nicholas Nethercote wrote:
> 2. V shares the dynamic linker and shared libraries with the client; V
>    must be very careful not to interfere with the client's use of these
>    (eg. have its own libc functions).
>
>    Failure. We haven't been able to use glibc in V really because we
>    can't use any function that might use malloc(), because we can't
>    separate V and P's use of brk() and the data segment. Having our own
>    libc still seems like a good idea.

Having our own libc is definitely not a good idea, but we haven't gone
to much effort to replace it yet. glibc makes it hard to intercept brk
directly, but it is possible to replace malloc/calloc/realloc/etc and be
reasonably sure of avoiding the use of brk (particularly if you get the
kernel to enforce it).

Having our own libc is going to be a portability liability. But using
system libraries wasn't the only reason to disentangle ourselves from
the dynamic linker. The dynamic linker itself 1) is very
GNU/glibc-specific, 2) has changed a lot over the last few years, and
doesn't seem like stopping, and so 3) depending on it in detail is going
to continue to be a maintenance and portability problem. We're stuck
with having to deal with it for the purposes of interception, but it
would be nice to be independent of it for the basic functioning of
Valgrind.

> 3. P can clobber V's memory.
>
>    Partial success. P can't clobber V; but I'm not sure how much of a
>    problem this was in the first place. (Are there any cases
>    where Memcheck wouldn't report the clobbering first?) Also, x86
>    segment selectors aren't portable, and no convincing alternative is
>    known for other architectures.

A bounds-limit test for each memory access isn't that expensive, and in
64-bit address spaces, you can make the client address space a power of
2 in size, which simplifies the test. You could also use a v. large
redzone to make hits much more unlikely.
The segment test is nice because it actually is free, but explicit
testing probably isn't that expensive, particularly if the codegen can
remove redundant tests, and schedule the tests it does generate
appropriately.

memcheck and addrcheck will report on out-of-bounds writes, but other
tools won't. In addition, if you get an out-of-bounds error which
doesn't crash the process, it immediately means that all other program
and valgrind output is unreliable, so you need to be very strict about
fixing the first problem before considering the rest. I think this
makes the Valgrind output unreliable.

Well, hm. If Valgrind is sharing ld.so with the client, then they're
not really separate programs at all. If the client screws up the
dynamic linker, Valgrind could get hit and crash without being able to
report on it at all.

> 4. No self-hosting possible.
>
>    Failure. Self-hosting is no closer to happening.

True. We need more virtualization to do it properly.

> 5. Intercepting library calls is difficult and fragile.
>
>    ??? It still seems difficult, but I don't understand that stuff at
>    all. Can someone elaborate?

The machinery in there now is pretty simple. You list a set of
addresses of functions you want to intercept, and when the codegen is
told to fetch from those addresses, it actually fetches from the
intercepting function instead. The addresses can be specified either
literally or symbolically; symbol names can be qualified by a particular
library name, or be unqualified. glibc makes it complex by being
complex itself, but the core machinery is pretty simple.

> 6. Statically linked programs are not supported.
>
>    Partial success. We can run statically linked binaries. However, we
>    can't intercept malloc() et al, which hobbles tools to various
>    degrees (eg. Cachegrind not at all, Memcheck somewhat, Addrcheck
>    quite a lot, Massif totally).

We can intercept malloc/free/etc if the program hasn't been stripped.
It's just that with dynamic linking, the programs are never (can't be)
completely stripped.

> Six problems, one clear success, two partial successes, two failures,
> and a (to me) unknown. All in all, not very convincing.
>
> There were probably other advantages as a result, that I can't think
> of now, that it would be worth discussing.
>
> -----------------------------------------------------------------------------
> And what were the costs?
>
> - Code size. FV added a lot of code. Especially keeping track of all
>   the mapped segments (and there are still several nasty bugs in
>   there).

You know, I'm really not sure that it did. I'll agree that the skiplist
code has been more subtly broken for longer than it should have been,
but as a generic data structure we should be able to get good use from
it. And really, the mapped segment code is there to replace the old
stuff which kept reading /proc/self/map; that was getting to be a pretty
significant bottleneck and was plain ugly (ie, the mapped segment stuff
would have been needed anyway, regardless of FV). The other large code
change is the syscall handling stuff, which is independent of FV.

I dunno. Valgrind is a lot more complex now, but it does do a lot more
stuff. I don't think we're going to return to the halcyon days of 1.0
simplicity and still manage to keep the functionality.

> - Robustness. FV is generally more fragile; there are more things to
>   get right, and the consequences are bad if they are not right. IMHO
>   we get more random seg fault problems now. A lot have been cleaned
>   up (it was really bad at first), but they still happen.

Yes, but I think that comes with the "doing more stuff". 1.0 would just
fail outright on a lot of programs. 2.x tries to run them, and
generally (but not always) succeeds. And again, I don't think this is
strictly an FV issue.

> - Also, the inflexibility of the memory layout has caused many
>   problems:
>   - difficulties for non-standard (ie. 3G:1G) kernels
>   - can't run with a virtual memory ulimit (bad esp. for embedded
>     developers)

Can you explain? What do you mean by "embedded developers"?

> - clients and tools run out of memory earlier than they used to, and
>   we have problems reading debug info for large files
> - big-bang shadow allocation causes problems with non-overcommitting
>   kernels
>
> V has dropped from #6 to #72 highest-rated project at Freshmeat.net
> over the last year or so; I think the reason for this is that V's "it
> just works" characteristic has been diminished, due to the robustness
> and inflexibility problems.

Um, do you have anything to support that? I think the ranking is
dropping because V is not new anymore, and people are taking it for
granted. Are we seeing an increase in bug reports disproportionate to
the number of users?

> Features I'm willing to give up:
> - Total separation of P and V memory. The inflexibility and
>   complexity are not worth it. --pointercheck isn't portable, anyway.
> - V's ability to use other libraries. It doesn't really work now
>   anyway, and just increases V's dependence on other things.
>   (Dependence could become a problem as we port to other archs/OSes,
>   as libraries may not work in exactly the same way in all cases.)

I'd still like to be able to use C++ internally.

> Features I'd really like:
> - Self-hosting.
> - Function wrapping [but that's kind of orthogonal to the rest of
>   this message]
>
> How it would work:
> - stage1 still starts things up. stage2 is maybe a .so again? Not
>   sure, but it wouldn't be put in a fixed location.
> - V and P mappings are totally intermingled. We just let the kernel
>   mmap things wherever it wants, without trying to enforce any layout
>   ourselves. (This precludes big-bang shadow allocation, and thus
>   precludes fixed-offset shadow memory addressing being used in the
>   future. This does not worry me.) This makes startup much easier,
>   since we don't have to be so careful about where things go.
> ...
I guess my OS/kernel background is really making me dislike this idea.
I agree that the partition makes things tight in a 32-bit address space,
and that shuffling things around is a good idea to make more things
work, but I really like having Valgrind and the client be as separate as
possible. And I really think that the direct-mapped shadow memory makes
the most sense for 64-bit systems, even if it doesn't for 32-bit.

> - Self-hosting might be possible now that Valgrind (ie. stage1) is a
>   normal executable again, rather than a .so as it was originally?
>   Not sure.

Don't see why. That's not the hard part of self-hosting. The tricky
part is making the system emulation match what Valgrind itself uses
(which in turn needs improvements to the VCPU's exception model).

> - We still use our own libraries, and in fact try to remove all our
>   use of glibc altogether, as relying on it feels like a bad idea.
>   (stage1 might be able to use glibc, though? Not sure.)

I think carrying around lots of private libc is just asking for more
work when porting to other systems. My view has been to do a little bit
of hard work, with the payoff that we can avoid lots of hard work by
using the system libraries where possible. I'm not keen on using lots
of 3rd party libraries, but I would like to be able to compile Valgrind
as a mostly normal executable using C and C++ code without having to
worry too much about what libraries it pulls in. We can't be completely
blithe, and we'll always need to link a bit strangely (though -fpie
helps), but I think that's better than vg_libc.c.

> - V and P would probably share the dynamic linker? So we'd have to be
>   careful about the symbol namespace, but we already do that most of
>   the time anyway so it's not a problem.

Well, that point of contact makes V very vulnerable to the correct
behaviour of the client.
> So the big wins are: reduced complexity (far less memory tracking, we
> let the kernel decide where things go), increased memory layout
> flexibility.

The increased layout flexibility is the only obvious win to me, and I
think it costs quite a bit. And now that I have a 64-bit machine, I can
easily say that this is all a lot of engineering effort to keep obsolete
systems happy ;-).

> Things I'm not sure about:
> - Valgrind's stack? Currently auto-extended, but it used to be a
>   fixed size, and that wasn't much of a problem. Could make the SEGV
>   handler much simpler?

Well, we still need to keep Valgrind and the client stack separate.
Valgrind's stack is fixed size, but the client's has to grow. If we use
the stack the system gave us as the client stack, then we don't need to
worry about it.

> - Debug info reading, and segments, and the interactions there.

That's an independent problem. We can massively reduce the
memory/address space use needed for debug info reading whether we use FV
or not.

> - Is the segment list used in other ways?

It's also used to keep track of what areas in memory have cached code.
We don't use that at the moment, but I was anticipating it being useful
for the self-modifying-code problem.

> - Library interceptions -- what's the effect?

Unchanged I think. Well, the old model was "we have a function called
X, so it replaces some other function called X", but that ignores the
real complexity of ld.so/glibc's symbol matching, versioned symbols,
etc. We could either do our own lookups, or get ld.so to do them - I'd
prefer to do it ourselves, on the grounds that we have a bit more
understanding of what's happening (and aren't subject to yet another
change in ld.so's lookup rules). I'm planning on putting some actual
work into dropping our private libpthread, which will change the
interception requirements.

> - The current SEGV handler for the client is nice, in that a
>   seg-faulting client dies with an informative message from V. Also,
>   the "INTERNAL ERROR" msg about V's own seg faults is good. It
>   would be good to preserve that.

That's independent of FV, though it might be a bit simpler with FV than
without.

> I think the end result would be simpler, have less code, be more
> robust, and cause fewer problems for users. Discuss.

I think it's more complicated than that.

> [On a related note, what do we gain from keeping the P and V file
> descriptors strictly partitioned? It's a lot of work enforcing the
> partition, and it makes self-hosting harder.]

Unix has well-known historical rules about how fds are allocated - the
kernel guarantees that the next fd allocated will be the lowest free
one. Programs can legitimately do something like:

	open("foo", O_RDONLY);	/* I know that fds 0-4 are allocated,
				   so this returns 5 */
	read(5, buf, sizeof(buf));

If we have one of our fds mixed in with the client's, they could easily
be stomped on (not to mention that programs have at least as many
use-after-free style bugs with fds as memory, so if we end up using an
fd which the program had once used, the client might decide to stomp on
it).

J
From: Nicholas N. <nj...@ca...> - 2004-11-19 16:10:31
CVS commit by nethercote:

No longer producing this file.

M +0 -1 scalar.vgtest 1.6

--- valgrind/memcheck/tests/scalar.vgtest #1.5:1.6
@@ -2,3 +2,2 @@
vgopts: -q --error-limit=no
args: < scalar.c
-cleanup: rm tmp_write_file_foo
From: Nicholas N. <nj...@ca...> - 2004-11-19 15:43:16
On Fri, 19 Nov 2004, Josef Weidendorfer wrote:
> in my tool, I use VKI_O_APPEND.
> Can you readd it to include/x86-linux/vki_arch.h ?
>
> pkg-config can also forward arbitrary variables.
> The proposed patch for valgrind.pc.in is attached.

Done and done.

N
From: Nicholas N. <nj...@ca...> - 2004-11-19 15:42:52
CVS commit by nethercote:
Minor tweaks for external tools, for Josef W.
M +5 -1 valgrind.pc.in 1.4
M +1 -0 include/x86-linux/vki_arch.h 1.7
--- valgrind/valgrind.pc.in #1.3:1.4
@@ -3,4 +3,7 @@
libdir=@libdir@
includedir=@includedir@/valgrind
+arch=@VG_ARCH@
+os=@VG_OS@
+platform=@VG_PLATFORM@
Name: Valgrind
@@ -9,3 +12,4 @@
Requires:
Libs:
-Cflags: -I${includedir} -I${includedir}/@VG_ARCH@ -I${includedir}/@VG_OS@ -I${includedir}/@VG_PLATFORM@ @ARCH_TOOL_AM_CFLAGS@
+Cflags: -I${includedir} -I${includedir}/${arch} -I${includedir}/${os} -I${includedir}/${platform} @ARCH_TOOL_AM_CFLAGS@
+
--- valgrind/include/x86-linux/vki_arch.h #1.6:1.7
@@ -272,4 +272,5 @@ struct vki_sigcontext {
#define VKI_O_EXCL 0200 /* not fcntl */
#define VKI_O_TRUNC 01000 /* not fcntl */
+#define VKI_O_APPEND 02000
#define VKI_O_NONBLOCK 04000
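To illustrate how an external tool's build system could consume the new
arch/os/platform variables, here is a sketch against a stand-in .pc file
(the paths and values are illustrative, not a real Valgrind install):

```shell
# Write a minimal .pc file mimicking the new valgrind.pc layout.
mkdir -p /tmp/pc-demo
cat > /tmp/pc-demo/valgrind.pc <<'EOF'
includedir=/usr/include/valgrind
arch=x86
os=linux
platform=x86-linux
Name: Valgrind
Description: demo stand-in for valgrind.pc
Version: 2.2
Cflags: -I${includedir} -I${includedir}/${arch} -I${includedir}/${os} -I${includedir}/${platform}
EOF

export PKG_CONFIG_PATH=/tmp/pc-demo

# An external tool can now pick up VG_ARCH etc. without re-running
# Valgrind's own configure-time arch detection:
pkg-config --variable=arch valgrind
pkg-config --variable=platform valgrind
pkg-config --cflags valgrind
```

This is what "pkg-config can also forward arbitrary variables" buys: the
tool's makefile reads the values with `--variable=` instead of copying
configure.in logic.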
From: Josef W. <Jos...@gm...> - 2004-11-19 14:32:41
Hi Nick,

in my tool, I use VKI_O_APPEND. Can you readd it to
include/x86-linux/vki_arch.h ?

If I want to introduce arch/platform specific things in the external
tool, there has to be a way to forward VG_ARCH from VG to the external
build system, or I would have to copy all the arch detection stuff from
Valgrind's configure.in. I would prefer the first.

pkg-config can also forward arbitrary variables. The proposed patch for
valgrind.pc.in is attached.

Cheers,
Josef
From: Nicholas N. <nj...@ca...> - 2004-11-19 13:43:12
Hi,
The thesis of this message is that "Full Virtualisation" (FV, better
described as "strict address space partitioning") is a bad idea.
(In what follows, 'V' is Valgrind, 'P' is the client program being run
under Valgrind's control.)
-----------------------------------------------------------------------------
Originally, V was a shared object, grafted onto the client via LD_PRELOAD.
V memory and P memory were all mixed up. This had various problems.
So we moved to FV... basically, V is a real executable, stage1 loads it
at a high address, stage2 loads the client at a low address, V maintains
a full segment list of all memory mappings, P cannot touch V due to the
use of x86 segment registers.
Here's some problems FV was meant to solve, and an evaluation of how well
it fared in practice.
1. P runs for some time before V gains control, which is dodgy, and
causes problems with threads.
Success! V has full control over P from startup.
2. V shares the dynamic linker and shared libraries with the client; V
must be very careful not to interfere with the client's use of these
(eg. have its own libc functions).
Failure. We haven't been able to use glibc in V really because we
can't use any function that might use malloc(), because we can't
separate V and P's use of brk() and the data segment. Having our own
libc still seems like a good idea.
3. P can clobber V's memory.
Partial success. P can't clobber V; but I'm not sure how much of a
problem this was in the first place. (Are there any cases
where Memcheck wouldn't report the clobbering first?) Also, x86
segment selectors aren't portable, and no convincing alternative is
known for other architectures.
4. No self-hosting possible.
Failure. Self-hosting is no closer to happening.
5. Intercepting library calls is difficult and fragile.
??? It still seems difficult, but I don't understand that stuff at
all. Can someone elaborate?
6. Statically linked programs are not supported.
Partial success. We can run statically linked binaries. However, we
can't intercept malloc() et al, which hobbles tools to various
degrees (eg. Cachegrind not at all, Memcheck somewhat, Addrcheck
quite a lot, Massif totally).
Six problems, one clear success, two partial successes, two failures, and a
(to me) unknown. All in all, not very convincing.
There were probably other advantages as a result, that I can't think of
now, that it would be worth discussing.
-----------------------------------------------------------------------------
And what were the costs?
- Code size. FV added a lot of code. Especially keeping track of all the
mapped segments (and there are still several nasty bugs in there).
- Robustness. FV is generally more fragile; there are more things to
get right, and the consequences are bad if they are not right. IMHO
we get more random seg fault problems now. A lot have been cleaned up
(it was really bad at first), but they still happen.
- Also, the inflexibility of the memory layout has caused many problems:
- difficulties for non-standard (ie. 3G:1G) kernels
- can't run with a virtual memory ulimit (bad esp. for embedded
developers)
- clients and tools run out of memory earlier than they used to, and
we have problems reading debug info for large files
- big-bang shadow allocation causes problems with non-overcommitting
kernels
V has dropped from #6 to #72 highest-rated project at Freshmeat.net over
the last year or so; I think the reason for this is that V's "it just
works" characteristic has been diminished, due to the robustness and
inflexibility problems.
-----------------------------------------------------------------------------
Here's a rough proposal that attempts to combine the best features of the
original scheme with FV.
First, the features I think are important to preserve:
- Valgrind having control from the very start
- Being able to run static binaries (even if the tools don't work fully)
Features I'm willing to give up:
- Total separation of P and V memory. The inflexibility and complexity
are not worth it. --pointercheck isn't portable, anyway.
- V's ability to use other libraries. It doesn't really work now
anyway, and just increases V's dependence on other things.
(Dependence could become a problem as we port to other archs/OSes, as
libraries may not work in exactly the same way in all cases.)
Features I'd really like:
- Self-hosting.
- Function wrapping [but that's kind of orthogonal to the rest of this
message]
How it would work:
- stage1 still starts things up. stage2 is maybe a .so again? Not
sure, but it wouldn't be put in a fixed location.
- V and P mappings are totally intermingled. We just let the kernel
mmap things wherever it wants, without trying to enforce any layout
ourselves. (This precludes big-bang shadow allocation, and thus
precludes fixed-offset shadow memory addressing being used in the
future. This does not worry me.) This makes startup much easier,
since we don't have to be so careful about where things go.
- We still need a segment mapping list of sorts, at least for exe segments
with debug info.
- Self-hosting might be possible now that Valgrind (ie. stage1) is a
normal executable again, rather than a .so as it was originally? Not
sure.
- We still use our own libraries, and in fact try to remove all our use
of glibc altogether, as relying on it feels like a bad idea. (stage1
might be able to use glibc, though? Not sure.)
- V and P would probably share the dynamic linker? So we'd have to be
careful about the symbol namespace, but we already do that most of the
time anyway so it's not a problem.
So the big wins are: reduced complexity (far less memory tracking, we
let the kernel decide where things go), increased memory layout
flexibility.
Things I'm not sure about:
- Valgrind's stack? Currently auto-extended, but it used to be a fixed
size, and that wasn't much of a problem. Could make the SEGV handler
much simpler?
- Debug info reading, and segments, and the interactions there.
- Is the segment list used in other ways?
- Library interceptions -- what's the effect?
- The current SEGV handler for the client is nice, in that a
seg-faulting client dies with an informative message from V. Also,
the "INTERNAL ERROR" msg about V's own seg faults is good. It would
be good to preserve that.
-----------------------------------------------------------------------------
I think the end result would be simpler, have less code, be more robust,
and cause fewer problems for users. Discuss.
[On a related note, what do we gain from keeping the P and V file
descriptors strictly partitioned? It's a lot of work enforcing the
partition, and it makes self-hosting harder.]
From: <js...@ac...> - 2004-11-19 03:56:51
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-11-19 03:50:00 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 186 tests, 5 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-11-19 03:14:20
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-11-19 03:10:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-11-19 03:09:51
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-11-19 03:05:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-11-19 03:04:37
Nightly build on standard ( Red Hat 7.2 ) started at 2004-11-19 03:00:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
From: Robert W. <rj...@du...> - 2004-11-19 02:12:04
CVS commit by rjwalsh:
Yipes - my last checkin borked the memcheck scalar test. This should
make it better.
M +14 -12 vg_syscalls.c 1.225
--- valgrind/coregrind/vg_syscalls.c #1.224:1.225
@@ -4404,6 +4404,5 @@ POST(sys_poll)
PRE(sys_readlink, Special)
{
- char name[25];
-
+ int saved = SYSNO;
PRINT("sys_readlink ( %p, %p, %llu )", arg1,arg2,(ULong)arg3);
PRE_REG_READ3(long, "readlink",
@@ -4413,7 +4412,12 @@ PRE(sys_readlink, Special)
/*
- * Handle the single case where readlink failed reading /proc/self/exe.
+ * Handle the case where readlink is looking at /proc/self/exe or
+ * /proc/<pid>/exe.
*/
+ set_result( VG_(do_syscall)(saved, arg1, arg2, arg3));
+ if ((Int)res == -2) {
+ char name[25];
+
VG_(sprintf)(name, "/proc/%d/exe", VG_(getpid)());
@@ -4421,11 +4425,9 @@ PRE(sys_readlink, Special)
VG_(strcmp)((Char *)arg1, "/proc/self/exe") == 0) {
VG_(sprintf)(name, "/proc/self/fd/%d", VG_(clexecfd));
- res = VG_(do_syscall)(SYSNO, name, arg2, arg3);
+ set_result( VG_(do_syscall)(saved, name, arg2, arg3));
}
- else {
- res = VG_(do_syscall)(SYSNO, arg1, arg2, arg3);
}
- if (res > 0)
+ if ((Int)res > 0)
POST_MEM_WRITE( arg2, res );
}