From: Eric P. <eri...@or...> - 2008-02-13 20:51:30
|
Tom Hughes wrote:

> In message <200...@ac...>
> Julian Seward <js...@ac...> wrote:
>
>> The business of tracking the bounds of each stack is a pain, and
>> makes for fragility -- for example, does it now work correctly with
>> user-defined stacks?
>
> Well it should do so long as they are registered with the
> appropriate client request (oddly wine seems to register the
> initial stack for each process but not those for any other
> threads it creates).

It's mainly because we (the wine project) create a specific stack for
the first thread in a process (so the first thread of a process gets
two stacks: the one allocated by the system, and the one wine
creates). Each other thread gets only one stack, and it's bound to
the thread at thread creation time. At the time I looked at this it
wasn't necessary to add any instrumentation for this case, as
valgrind happily handled the one thread / one stack case.

A+
--
Eric Pouech
"The problem with designing something completely foolproof is to
underestimate the ingenuity of a complete idiot." (Douglas Adams)
|
|
From: Julian S. <js...@ac...> - 2008-02-13 20:51:12
|
> As far as I understand the Valgrind core, you risk introducing all
> kinds of subtle issues when you try to link the Valgrind core with
> libc.

Yes. Don't use libc.

> In case you need a simple function from glibc, you can copy its
> implementation and link it with the Valgrind core.

Check first that the function does not already exist in
include/pub_tool_libc*.h and coregrind/pub_core_libc*.h.

> I'm afraid that this won't work for setjmp() and longjmp() however.
> But did you already have a look at the macro SCHEDSETJMP() in
> coregrind/m_scheduler/scheduler.c ?

Yes - just use the gcc builtins, __builtin_setjmp and
__builtin_longjmp. That works fine.

J
|
|
From: Bart V. A. <bar...@gm...> - 2008-02-13 18:46:12
|
On Feb 13, 2008 7:31 PM, Philippe Waroquiers <phi...@sk...> wrote:

> Currently, I am working on doing this because it is fun.
> I understood from the doc that it is not ok to call libc or any
> system call directly in the valgrind core, because this causes
> various subtle problems.
>
> Are these problems mild enough that a "quick and dirty" prototype
> could use libc?
> (so, typically, directly call strlen or directly call a "read" or
> "write" system call)

As far as I understand the Valgrind core, you risk introducing all
kinds of subtle issues when you try to link the Valgrind core with
libc. In case you need a simple function from glibc, you can copy its
implementation and link it with the Valgrind core. I'm afraid that
this won't work for setjmp() and longjmp() however. But did you
already have a look at the macro SCHEDSETJMP() in
coregrind/m_scheduler/scheduler.c ?

Bart.
|
|
From: Bart V. A. <bar...@gm...> - 2008-02-13 18:38:10
|
Hello Julian,

Can you please apply the attached patch? It fixes the assertion
failure triggered by the tc18_semabuse test on Fedora 8.

The fix consists of removing a tl_assert() statement. In this case
that is OK, because what the tl_assert() statement was checking is
that sem_post() did not report an error. The assert statement was
there because if sem_post() returns an error, in theory some real
data races can be suppressed. However, sem_post() only reports an
error (EINVAL) in case its argument is not a valid semaphore, so
there is no danger of suppressing a real data race.

Bart Van Assche.
|
|
From: Philippe W. <phi...@sk...> - 2008-02-13 18:31:11
|
(cfr my previous mails about the gdbserver protocol.)

Currently, I am working on doing this because it is fun. I understood
from the doc that it is not ok to call libc or any system call
directly in the valgrind core, because this causes various subtle
problems.

Are these problems mild enough that a "quick and dirty" prototype
could use libc? (So, typically: directly call strlen, or directly
call a "read" or "write" system call.)

(If afterwards the idea of a gdbserver in valgrind is seen as
interesting enough to integrate it properly into valgrind, then of
course all these calls will have to be replaced by the VG_(...)
equivalents.)

Thanks
|
|
From: Philippe W. <phi...@sk...> - 2008-02-13 18:24:32
|
Hello,

After my post "[Valgrind-developers] obtain a debuggable client by
having gdbremote protocol in valgrind core ?", I had very few
reactions, which I have interpreted as enthusiastic (but silent)
admiration of the idea :).

So, I have started to implement it, basing the development on the gdb
"gdbserver" code. This code uses setjmp/longjmp to handle some error
recovery (like a wrongly formed packet).

Is it ok to use setjmp/longjmp in the valgrind core? Or should I
change the structure of the code to avoid this completely?

Thanks
|
|
From: Julian S. <js...@ac...> - 2008-02-13 12:05:23
|
> Well it really needs to check a range of addresses - unwinding
> a stack frame on x86 using the frame pointer requires reading
> a group of 8 bytes, which can cross a page boundary.
Hmm, true. Still, checking for page boundary crossing is something
that can be pushed into the query function.
> Interesting - using an oset or something presumably?
OSets are problematic in m_aspacemgr because we can't use dynamic
memory allocation there. I was thinking of something along the
lines of a small fixed size array of known-safe segments, perhaps
arranged as a fully associative or 2 or 4 way set associative cache.
> Well the system call stuff could use VG_(am_is_valid_for_client) at
> the moment, though that has to binary search all the current segments
> every time.
>
> Note that stack unwinding needs to allow reading of V segments as
> well as C segments, as we are sometimes unwinding valgrind's stack
> rather than the client's. The system call check will only want to
> allow C segments.
Yes. So at least as a simple start, we need a function "Bool
VG_(am_dword_is_readable)(Addr)", which returns True if arg ..
arg+2*sizeof(Word)-1 is safe to read.
One way it could be done is to have a fixed-size (16-ish?) array
of pointers to NSegments. Each NSegment is in the array only if
it is safe to read from it. Array is searched from index 0 onwards
and a hit at index > 0 causes that entry to be moved forward one
place.
Assuming that most queries hit entry 0 -- as they should do, since
this is now a cache of segments, not pages -- then the fast case is
if cache[0] != NULL // entry present
&& a >= cache[0]->start
&& a+8 <= cache[0]->end return True
So that's 3 highly predictable conditionals, plus the call and return
branches. I'd say doable in 30 ish cycles in the common case, considering
the call/return overhead. So that's an extra 360 cycles for a common-case
12-frame stack unwind. Not bad really.
J
|
|
From: Tom H. <to...@co...> - 2008-02-13 11:02:16
|
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> The business of tracking the bounds of each stack is a pain, and
> makes for fragility -- for example, does it now work correctly with
> user-defined stacks?
Well it should do so long as they are registered with the
appropriate client request (oddly wine seems to register the
initial stack for each process but not those for any other
threads it creates).
> On consideration, your original solution seems preferable as it
> sidesteps all the stack bounds stuff. I wonder if the performance
> hit could be ameliorated by
>
> (1) adding a new function (in m_aspace's interface) to ask "is the
> page containing a given address mapped and readable, and
Well it really needs to check a range of addresses - unwinding
a stack frame on x86 using the frame pointer requires reading
a group of 8 bytes, which can cross a page boundary.
> (2) cacheing the results of such queries (inside m_aspacem)
Interesting - using an oset or something presumably?
As Nick said, we may as well enter the entire segment into the
cache rather than just the one page - it will only be one cache
entry anyway and it is likely to cover the entire stack with a
single entry that way.
> Clearly the cache would have to be flushed each time the mapping
> state changed, or at least partially flushed. However, unwind
> intensive code would likely do a lot of snapshots relative to
> map state changes, so the caching would be (temporally) effective.
Indeed.
> In addition, I'd bet that most stack frames are a lot smaller
> than a page, so the caching is also spatially effective: if we
> know that this frame is safe to poke around in, then it's likely
> that the next frame is in the same page and so we don't even need
> to ask aspacem about its safety.
If we cache the entire segment that we get ever better spatial
effectiveness from the cache.
> What do you reckon? Availability of a low-overhead page-safety
> check facility would be more generally useful too. For example in
> various syscall handlers we need to poke at the client supplied
> args and we're never 100% it won't fault, especially if the client
> is passing bogus pointers to the kernel.
Well the system call stuff could use VG_(am_is_valid_for_client) at
the moment, though that has to binary search all the current segments
every time.
Note that stack unwinding needs to allow reading of V segments as
well as C segments, as we are sometimes unwinding valgrind's stack
rather than the client's. The system call check will only want to
allow C segments.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: <sv...@va...> - 2008-02-13 05:05:57
|
Author: njn
Date: 2008-02-13 05:05:58 +0000 (Wed, 13 Feb 2008)
New Revision: 7405

Log:
Fix minor breakage in 7 tests.

Modified:
  trunk/memcheck/tests/addressable.stderr.exp
  trunk/memcheck/tests/badjump.stderr.exp
  trunk/memcheck/tests/describe-block.stderr.exp
  trunk/memcheck/tests/filter_stderr
  trunk/memcheck/tests/match-overrun.stderr.exp
  trunk/memcheck/tests/supp_unknown.stderr.exp
  trunk/none/tests/cmdline1.stdout.exp
  trunk/none/tests/cmdline2.stdout.exp

Modified: trunk/memcheck/tests/addressable.stderr.exp
===================================================================
--- trunk/memcheck/tests/addressable.stderr.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/addressable.stderr.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -22,7 +22,7 @@
 If you believe this happened as a result of a stack overflow in your program's main thread (unlikely but possible), you can try to increase the size of the main thread stack using the --main-stacksize= flag.
-The main thread stack size used in this run was 16777216.
+The main thread stack size used in this run was ....
 ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
 malloc/free: in use at exit: 0 bytes in 0 blocks.

Modified: trunk/memcheck/tests/badjump.stderr.exp
===================================================================
--- trunk/memcheck/tests/badjump.stderr.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/badjump.stderr.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -11,7 +11,7 @@
 If you believe this happened as a result of a stack overflow in your program's main thread (unlikely but possible), you can try to increase the size of the main thread stack using the --main-stacksize= flag.
-The main thread stack size used in this run was 16777216.
+The main thread stack size used in this run was ....
 ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
 malloc/free: in use at exit: 0 bytes in 0 blocks.

Modified: trunk/memcheck/tests/describe-block.stderr.exp
===================================================================
--- trunk/memcheck/tests/describe-block.stderr.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/describe-block.stderr.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -10,7 +10,7 @@
 If you believe this happened as a result of a stack overflow in your program's main thread (unlikely but possible), you can try to increase the size of the main thread stack using the --main-stacksize= flag.
-The main thread stack size used in this run was 16777216.
+The main thread stack size used in this run was ....
 ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
 malloc/free: in use at exit: 0 bytes in 0 blocks.

Modified: trunk/memcheck/tests/filter_stderr
===================================================================
--- trunk/memcheck/tests/filter_stderr 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/filter_stderr 2008-02-13 05:05:58 UTC (rev 7405)
@@ -13,4 +13,7 @@
 # Anonymise line numbers in mc_replace_strmem.c
 sed "s/mc_replace_strmem.c:[0-9]*/mc_replace_strmem.c:.../" |
+# Remove the size in "The main thread stack size..." message.
+sed "s/The main thread stack size used in this run was [0-9]*/The main thread stack size used in this run was .../" |
+
 $dir/../../tests/filter_test_paths

Modified: trunk/memcheck/tests/match-overrun.stderr.exp
===================================================================
--- trunk/memcheck/tests/match-overrun.stderr.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/match-overrun.stderr.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -7,7 +7,7 @@
 If you believe this happened as a result of a stack overflow in your program's main thread (unlikely but possible), you can try to increase the size of the main thread stack using the --main-stacksize= flag.
-The main thread stack size used in this run was 16777216.
+The main thread stack size used in this run was ....
 ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
 malloc/free: in use at exit: 0 bytes in 0 blocks.

Modified: trunk/memcheck/tests/supp_unknown.stderr.exp
===================================================================
--- trunk/memcheck/tests/supp_unknown.stderr.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/memcheck/tests/supp_unknown.stderr.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -6,4 +6,4 @@
 If you believe this happened as a result of a stack overflow in your program's main thread (unlikely but possible), you can try to increase the size of the main thread stack using the --main-stacksize= flag.
-The main thread stack size used in this run was 16777216.
+The main thread stack size used in this run was ....

Modified: trunk/none/tests/cmdline1.stdout.exp
===================================================================
--- trunk/none/tests/cmdline1.stdout.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/none/tests/cmdline1.stdout.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -48,7 +48,7 @@
 Extra options read from ~/.valgrindrc, $VALGRIND_OPTS, ./.valgrindrc
-Valgrind is Copyright (C) 2000-2007 Julian Seward et al.
+Valgrind is Copyright (C) 2000-2008 Julian Seward et al.
 and licensed under the GNU General Public License, version 2.
 Bug reports, feedback, admiration, abuse, etc, to: www.valgrind.org.

Modified: trunk/none/tests/cmdline2.stdout.exp
===================================================================
--- trunk/none/tests/cmdline2.stdout.exp 2008-02-12 21:55:15 UTC (rev 7404)
+++ trunk/none/tests/cmdline2.stdout.exp 2008-02-13 05:05:58 UTC (rev 7405)
@@ -92,7 +92,7 @@
 Extra options read from ~/.valgrindrc, $VALGRIND_OPTS, ./.valgrindrc
-Valgrind is Copyright (C) 2000-2007 Julian Seward et al.
+Valgrind is Copyright (C) 2000-2008 Julian Seward et al.
 and licensed under the GNU General Public License, version 2.
 Bug reports, feedback, admiration, abuse, etc, to: www.valgrind.org.
|
|
From: Tom H. <th...@cy...> - 2008-02-13 05:03:15
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-02-13 03:15:02 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 338 tests, 84 stderr failures, 3 stdout failures, 29 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/lsframe1 (stderr) memcheck/tests/lsframe2 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) 
massif/tests/long-names (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/blockfault (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc04_free_lock (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc10_rec_lock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc13_laog1 (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) exp-drd/tests/fp_race (stderr) exp-drd/tests/fp_race2 (stderr) exp-drd/tests/matinv (stderr) exp-drd/tests/pth_barrier (stderr) exp-drd/tests/pth_broadcast (stderr) 
exp-drd/tests/pth_cond_race (stderr) exp-drd/tests/pth_cond_race2 (stderr) exp-drd/tests/pth_create_chain (stderr) exp-drd/tests/pth_detached (stderr) exp-drd/tests/pth_detached2 (stderr) exp-drd/tests/sem_as_mutex (stderr) exp-drd/tests/sem_as_mutex2 (stderr) exp-drd/tests/sigalrm (stderr) exp-drd/tests/tc17_sembar (stderr) exp-drd/tests/tc18_semabuse (stderr) |
|
From: Tom H. <th...@cy...> - 2008-02-13 04:07:34
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-02-13 03:05:04 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 372 tests, 7 stderr failures, 4 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2008-02-13 03:49:26
|
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-02-13 03:20:04 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 378 tests, 13 stderr failures, 3 stdout failures, 0 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/blockfault (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2008-02-13 03:43:34
|
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-02-13 03:25:06 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 376 tests, 6 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
|
From: Tom H. <th...@cy...> - 2008-02-13 03:27:21
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-02-13 03:10:03 GMT Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 372 tests, 10 stderr failures, 4 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) exp-drd/tests/tc18_semabuse (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 372 tests, 9 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) memcheck/tests/xml1 (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_cvsimple (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) exp-drd/tests/tc18_semabuse (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Wed Feb 13 03:18:45 2008 --- new.short Wed Feb 13 03:27:25 2008 *************** *** 8,10 **** ! 
== 372 tests, 9 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) --- 8,10 ---- ! == 372 tests, 10 stderr failures, 4 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) *************** *** 18,20 **** none/tests/mremap2 (stdout) ! none/tests/pth_cvsimple (stdout) helgrind/tests/tc18_semabuse (stderr) --- 18,20 ---- none/tests/mremap2 (stdout) ! helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) |
|
From: Tom H. <th...@cy...> - 2008-02-13 03:14:42
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-02-13 03:00:02 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 378 tests, 34 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/stack_switch (stderr) memcheck/tests/supp_unknown (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) none/tests/amd64/insn_ssse3 (stdout) none/tests/amd64/insn_ssse3 (stderr) none/tests/amd64/ssse3_misaligned (stderr) none/tests/blockfault (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/fdleak_fcntl (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/x86/insn_ssse3 (stdout) none/tests/x86/insn_ssse3 (stderr) none/tests/x86/ssse3_misaligned (stderr) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) |
|
From: Julian S. <js...@ac...> - 2008-02-13 02:50:01
|
On Wednesday 13 February 2008 03:35, Nicholas Nethercote wrote:

> On Wed, 13 Feb 2008, Julian Seward wrote:
> > In addition, I'd bet that most stack frames are a lot smaller
> > than a page, so the caching is also spatially effective: if we
> > know that this frame is safe to poke around in, then it's likely
> > that the next frame is in the same page and so we don't even need
> > to ask aspacem about its safety.
>
> Perhaps you could ask for the extent of the contiguous addressable
> memory around the particular address?

Yes, that would I guess be a logical generalisation. It does however
put the administrative burden more on the caller, since then the
caller first has to ask how big the accessible area in which address
X lies is; then keep track of when it has gone outside the area, in
which case it needs to ask again, etc. Also, the caller(s) have no
obvious way to know when any information they have cached has gone
stale.

It would be simpler for the caller if all such trickery were pushed
into the is-this-page-safe function and we took care to ensure that
said function returned extremely quickly in the majority of cases.

J
|
|
From: Nicholas N. <nj...@cs...> - 2008-02-13 02:35:44
|
On Wed, 13 Feb 2008, Julian Seward wrote:

> In addition, I'd bet that most stack frames are a lot smaller
> than a page, so the caching is also spatially effective: if we
> know that this frame is safe to poke around in, then it's likely
> that the next frame is in the same page and so we don't even need
> to ask aspacem about its safety.

Perhaps you could ask for the extent of the contiguous addressable
memory around the particular address?

N
|
|
From: Julian S. <js...@ac...> - 2008-02-13 02:26:42
|
> > * This might give a bit of a performance hit in unwind-
> > intensive programs as the stacks list now has to be searched for
> > each snapshot. I guess we could mostly ameliorate this by the
> > usual trick of incrementally rearranging the list to diffuse
> > frequently-requested entries towards the front.
>
> It may be an issue, yes. In fact my original solution to the
> problem of stopping the unwinder crashing was to simply ask the
> address space manager whether the stack frame we were about to
> try and read existed and was readable. That should be completely
> bulletproof, but I was worried about the cost as that was done
> for every stack entry not just once per unwind.

The business of tracking the bounds of each stack is a pain, and
makes for fragility -- for example, does it now work correctly with
user-defined stacks?

On consideration, your original solution seems preferable as it
sidesteps all the stack bounds stuff. I wonder if the performance
hit could be ameliorated by

(1) adding a new function (in m_aspace's interface) to ask "is the
    page containing a given address mapped and readable", and

(2) cacheing the results of such queries (inside m_aspacem)

Clearly the cache would have to be flushed each time the mapping
state changed, or at least partially flushed. However, unwind
intensive code would likely do a lot of snapshots relative to map
state changes, so the caching would be (temporally) effective.

In addition, I'd bet that most stack frames are a lot smaller than a
page, so the caching is also spatially effective: if we know that
this frame is safe to poke around in, then it's likely that the next
frame is in the same page and so we don't even need to ask aspacem
about its safety.

What do you reckon? Availability of a low-overhead page-safety check
facility would be more generally useful too. For example in various
syscall handlers we need to poke at the client-supplied args and
we're never 100% sure it won't fault, especially if the client is
passing bogus pointers to the kernel.

J
|