From: Jeremy F. <je...@go...> - 2005-01-09 23:56:58
|
On Sun, 2005-01-09 at 19:14 +0000, Julian Seward wrote:
> > If people could try it out and see if it basically works for them, I'll
> > check it in the next few days. In particular, I'd like it if the people
> > considering porting Valgrind to other OS's could have a look and see how
> > these changes affect them.
>
> Before this goes in, I'd really like that
>
> (1) support for 2.4, and particularly 2.4 on older systems (eg, your
> 2.4 on RH73) gets more beaten on and tested, since in reality we will
> have to give good support for 2.4 for at least another two years.
I can try FC2 with 2.4 kernel, but I expect it would be the same as
using LD_ASSUME_KERNEL=2.4.0 or removing /lib/tls. I don't have any old
distros lying about. I figured we could fix the old stuff once the rest
is checked in (particularly since the RH7.3 problem seems to be in Tom's
bit, hint, hint).
2.4 support will be a matter of fixing little things, since there are no
major code-path differences between 2.4 and 2.6.
> Are there any performance consequences that you are aware of? I had
> some concern that programs which cycle rapidly between running threads
> would be slower now, because each thread switch involves a trip into
> the kernel where it didn't before.
Not noticeable. Besides, the client running native would be doing kernel
context switches anyway, so I don't see that it makes any real
difference.
In fact, I'd expect a bit of an improvement, since the old code used
many syscalls per client syscall, but now it's at most 4.
> > * I implemented the thread-serializing run_sema with both a futex
> > and a pipe-based token-passing scheme. The futex code is more
> > efficient, but the pipe scheme works everywhere, so that's what
> > is enabled by default. Switching is a compile-time option, but
> > it could be done at runtime.
>
> Do you have any numbers indicating the relative expenses of the two
> schemes?
Not measurable in single-threaded programs, which is where it would make
the most difference (2 syscalls/context switch vs. 0). I'll stick with
the pipe code.
J
|
|
From: Julian S. <js...@ac...> - 2005-01-09 19:14:22
|
> Over the hols I decided it would be fun to rewrite Valgrind's
> syscall/threading stuff again.
>
> The basic idea came from Eric Estievenart [...]
That's great! I'm very pleased to see this. Especially losing 6600
lines of code sounds good to me.
> If people could try it out and see if it basically works for them, I'll
> check it in the next few days. In particular, I'd like it if the people
> considering porting Valgrind to other OS's could have a look and see how
> these changes affect them.
Before this goes in, I'd really like that
(1) support for 2.4, and particularly 2.4 on older systems (eg, your
2.4 on RH73) gets more beaten on and tested, since in reality we will
have to give good support for 2.4 for at least another two years.
(2) more regression test cases are fixed.
Are there any performance consequences that you are aware of? I had
some concern that programs which cycle rapidly between running threads
would be slower now, because each thread switch involves a trip into
the kernel where it didn't before.
> * Try to recover the lost pthreads functionality. All that
> pthreads API usage checking was useful, and it uncovered a lot
> of bugs.
Yes. That would be nice but is not critical. Perhaps a useful thing
to do at this point, assuming no nice, easy solution is available,
is to come up with a proposal which can be considered at length.
> * I implemented the thread-serializing run_sema with both a futex
> and a pipe-based token-passing scheme. The futex code is more
> efficient, but the pipe scheme works everywhere, so that's what
> is enabled by default. Switching is a compile-time option, but
> it could be done at runtime.
Do you have any numbers indicating the relative expenses of the two
schemes?
J
|
|
From: Julian S. <js...@ac...> - 2005-01-09 19:01:17
|
That's really amazing. What changes beyond the threading one did you
have to make?

So, can you complete the trick by making the inner V do allocation via
malloc/free (or whatever) in such a way that the outer V can memcheck it
and find real bugs therein?

J

> I simulate, therefore I am. |
|
From: Chris J. <ch...@at...> - 2005-01-09 14:40:56
|
> On Sat, 2005-01-08 at 20:20 +0000, Chris January wrote:
> > > Yep, that would be nice to have. Do you have any more concrete
> > > thoughts? It seems like a difficult problem, because there's
> > > nothing generic about a Tool; they can get very special purpose.
> > > How does one represent that on the wire, and how can a generic
> > > debugger deal with it once it's there?
> >
> > Tools would publish their own API helper libraries. These would
> > interpret the data stored in the tool-specific area of the shared
> > memory region. A client could ask the API what information the tool
> > provided. A client would either request this information as a binary
> > structure, which it would then need to interpret, or could ask for
> > it in string form, in which case it would be displayed unprocessed
> > to the user.
> >
> > > > The only problem with the original shared memory implementation
> > > > was the exposure of internal data structures. To solve this
> > > > problem I propose adding a debugging API shared library to
> > > > Valgrind. This library would be released as part of the Valgrind
> > > > package and installed at the same time. The library could
> > > > therefore freely access internal data structures without any
> > > > problems. Valgrind's core would expose useful debugging
> > > > information, such as registers, and debugging control variables
> > > > such as single stepping through shared memory as before (either
> > > > SysV or mmap).
> > >
> > > (I don't think shared memory would be a good idea. It would
> > > preclude remote machine access, and adds a complex protocol
> > > overhead for synchronizing between the debugger and the Valgrind
> > > threads.)
> >
> > It's not difficult to synchronise between the debugger and Valgrind
> > threads. That is to say, I encountered no problems implementing
> > this.
>
> Well, I've just rearranged the internals again which affects the
> thread structure. Sure, the basic debugging pattern will be "stop the
> world; inspect modify state; resume the world", so there isn't much
> concurrency, but I don't understand what advantage using shared
> memory offers.

You can't inspect or modify the state of the inferior process easily if
it's running under Valgrind, because the register sets are not stored in
kernel space; they are stored at an unknown location in the inferior
process. Shared memory effectively gives you a known location to look
for the register sets and other information.

> Also, now that Valgrind can self-virtualize, you need to be able to
> deal with multiple Valgrind instances within one process, and be able
> to distinguish them.

I don't think this is really necessary, and GDB doesn't support this
kind of architecture anyway.

> > The advantage of the debugging API is you can implement the GDB
> > protocol atop of it (e.g. for remote debugging or to support
> > debuggers without a Valgrind target), but you are not limited to it.
> > It also means the GDB server can be a separate component, whereas
> > without the API it would be more closely coupled to the Valgrind
> > core.
>
> What about the other way around? Is the gdb protocol completely
> inextensible?

It's inextensible in the sense that any extension would require changes
to GDB, which is what we were trying to avoid in the first place.

> I think the internal interface to any debugging protocol driver will
> be pretty simple anyway, so there's no particular advantage to deep
> layering. All it needs to be able to do is to be able to drive some
> async IO and inspect/modify memory and the ThreadState structures.
> Shared memory would make it more complex by adding more state to
> manage.

Shared memory actually reduces the amount of state to manage because
you don't have to worry about the inferior process at all - all
interaction is with the shared memory region.

Chris |
|
From: Nicholas N. <nj...@ca...> - 2005-01-09 11:27:07
|
On Sat, 8 Jan 2005, Jeremy Fitzhardinge wrote:

> One other thing we could look at is reviving Nick's interactive mode
> stuff. I'm guessing it has aged rather a lot.

Yes. I don't think it was the right way to go, because it duplicated a
lot of the GDB stuff anyway. All I really wanted was to be able to call
a tool-specified C function from within GDB, but when I tried just doing
that I had various problems; sometimes it would work and sometimes it
would seg fault. I vaguely remember hearing that GDB's support for such
C calls was flaky, but maybe Valgrind's interactions were causing other
problems.

N |
|
From: <js...@ac...> - 2005-01-09 03:56:56
|
Nightly build on phoenix (SuSE 9.1) started at 2005-01-09 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 188 tests, 5 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <to...@co...> - 2005-01-09 03:24:22
|
Nightly build on dunsmere (Fedora Core 3) started at 2005-01-09 03:20:04 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
sh: line 1: 15359 Illegal instruction
VALGRINDLIB=/tmp/valgrind.31670/valgrind/.in_place /tmp/valgrind.31670/valgrind/./coregrind/valgrind --tool=none ./int >int.stdout.out 2>int.stderr.out
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 193 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-01-09 03:20:26
|
Nightly build on audi (Red Hat 9) started at 2005-01-09 03:15:08 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 193 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-01-09 03:14:14
|
Nightly build on ginetta (Red Hat 8.0) started at 2005-01-09 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 193 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-01-09 03:08:38
|
Nightly build on alvis (Red Hat 7.3) started at 2005-01-09 03:05:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 193 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-01-09 03:04:01
|
Nightly build on standard (Red Hat 7.2) started at 2005-01-09 03:00:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 193 tests, 3 stderr failures, 1 stdout failure =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
none/tests/susphello (stdout)
none/tests/susphello (stderr)
make: *** [regtest] Error 1 |
|
From: Jeremy F. <je...@go...> - 2005-01-09 00:15:27
|
On Sat, 2005-01-08 at 20:20 +0000, Chris January wrote:
> > Yep, that would be nice to have. Do you have any more concrete
> > thoughts? It seems like a difficult problem, because there's nothing
> > generic about a Tool; they can get very special purpose. How does
> > one represent that on the wire, and how can a generic debugger deal
> > with it once it's there?
>
> Tools would publish their own API helper libraries. These would
> interpret the data stored in the tool-specific area of the shared
> memory region. A client could ask the API what information the tool
> provided. A client would either request this information as a binary
> structure, which it would then need to interpret, or could ask for it
> in string form, in which case it would be displayed unprocessed to the
> user.
>
> > > The only problem with the original shared memory implementation
> > > was the exposure of internal data structures. To solve this
> > > problem I propose adding a debugging API shared library to
> > > Valgrind. This library would be released as part of the Valgrind
> > > package and installed at the same time. The library could
> > > therefore freely access internal data structures without any
> > > problems. Valgrind's core would expose useful debugging
> > > information, such as registers, and debugging control variables
> > > such as single stepping through shared memory as before (either
> > > SysV or mmap).
> >
> > (I don't think shared memory would be a good idea. It would preclude
> > remote machine access, and adds a complex protocol overhead for
> > synchronizing between the debugger and the Valgrind threads.)
>
> It's not difficult to synchronise between the debugger and Valgrind
> threads. That is to say, I encountered no problems implementing this.

Well, I've just rearranged the internals again which affects the thread
structure. Sure, the basic debugging pattern will be "stop the world;
inspect modify state; resume the world", so there isn't much
concurrency, but I don't understand what advantage using shared memory
offers.

Also, now that Valgrind can self-virtualize, you need to be able to deal
with multiple Valgrind instances within one process, and be able to
distinguish them.

> The advantage of the debugging API is you can implement the GDB
> protocol atop of it (e.g. for remote debugging or to support debuggers
> without a Valgrind target), but you are not limited to it. It also
> means the GDB server can be a separate component, whereas without the
> API it would be more closely coupled to the Valgrind core.

What about the other way around? Is the gdb protocol completely
inextensible?

I think the internal interface to any debugging protocol driver will be
pretty simple anyway, so there's no particular advantage to deep
layering. All it needs to be able to do is to be able to drive some
async IO and inspect/modify memory and the ThreadState structures.
Shared memory would make it more complex by adding more state to manage.

J |