From: Jeremy F. <je...@go...> - 2003-05-26 16:28:01
On Mon, 2003-05-26 at 02:42, Josef Weidendorfer wrote:
> currently I'm thinking a little bit of what would be needed to allow
> applications run under Valgrind to use processors in parallel. The main goal
> would be to speed up cache simulation for multithreaded applications, more
> specially first to let OpenMP apps (number crunching) run simultaneously.
> I'm not at all convinced if there will be any benefit/speedup at all on
> multiple processors because of a possible need for additional fine-grained
> communication among the threads.
I've been thinking about this too. I think the real difficulty is in
the skins rather than the core. The core only has occasional relatively
large chunks of work to do (ie, translate a basic block). It would be
fairly easy to make a translation hash miss do the right things (you'd
probably do it locklessly, so that the hash update is an atomic
operation; if two threads happen to want the same basic block at the
same time, they'd both translate it, but one would win and the other
would be thrown away).
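The lockless scheme described above can be sketched with a compare-and-swap publish. This is a minimal illustration, not Valgrind's actual translation-table layout; all names (TTEntry, tt_insert, etc.) are invented. Exactly one CAS wins per slot; the losing thread discards its translation and runs the winner's.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define TT_SIZE 4096

/* Hypothetical translation-table entry: guest address -> host code. */
typedef struct { uintptr_t guest_addr; unsigned char *host_code; } TTEntry;

static TTEntry *_Atomic tt[TT_SIZE];   /* NULL = slot empty */

static unsigned tt_hash(uintptr_t a) { return (a >> 2) % TT_SIZE; }

/* Returns the entry now installed for guest_addr: ours if our CAS won,
   otherwise the entry published by the thread that beat us. */
TTEntry *tt_insert(uintptr_t guest_addr, unsigned char *host_code)
{
    TTEntry *mine = malloc(sizeof *mine);
    mine->guest_addr = guest_addr;
    mine->host_code  = host_code;

    TTEntry *expected = NULL;
    unsigned i = tt_hash(guest_addr);
    if (atomic_compare_exchange_strong(&tt[i], &expected, mine))
        return mine;      /* we won: our translation is now published */

    free(mine);           /* lost the race: throw our copy away ...   */
    return expected;      /* ... and use the winner's translation     */
}
```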
The real problem is that the skins want to do data structure updates on
an instruction by instruction level. Nick mentioned memcheck; helgrind
is an even more extreme example, since it actually cares a lot about the
program's precise thread and lock behaviour, and what threads touch
which memory in what order.
The only reasonable way I can see to implement it would be to generate
inline atomic operations, rather than using mutexes. Unfortunately I
think that would still have an extreme amount of overhead; probably
enough to overwhelm any possible performance benefit of multiple CPUs.
There would also be some memory overhead, since you'd have to include
space for locks in the data, though you could choose the density as a
tradeoff between memory use and concurrency.
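That density tradeoff can be made concrete with striped locks: one lock per fixed-size granule of client memory, so the shadow write and the real write for the same granule happen under the same lock. A sketch, with invented names and a granule size picked arbitrarily at 64 bytes; a bigger granule table costs more memory but reduces contention.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define GRANULE_SHIFT 6          /* one lock per 64-byte granule       */
#define N_LOCKS (1u << 16)       /* lock-table size: the memory/       */
                                 /* concurrency tradeoff knob          */

static atomic_uint granule_locks[N_LOCKS];  /* zero-init = unlocked */

static unsigned lock_index(uintptr_t addr)
{
    return (addr >> GRANULE_SHIFT) % N_LOCKS;
}

static void granule_lock(uintptr_t addr)
{
    while (atomic_exchange_explicit(&granule_locks[lock_index(addr)],
                                    1u, memory_order_acquire))
        ;                        /* spin until we take the lock */
}

static void granule_unlock(uintptr_t addr)
{
    atomic_store_explicit(&granule_locks[lock_index(addr)],
                          0u, memory_order_release);
}

/* A shadow update and the real store, atomic with respect to any other
   thread touching the same granule.  Shadow first, as in Memcheck. */
void checked_store(uint8_t *real, uint8_t *shadow, uint8_t v, uint8_t meta)
{
    granule_lock((uintptr_t)real);
    *shadow = meta;
    *real   = v;
    granule_unlock((uintptr_t)real);
}
```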
A much more complex, but perhaps more efficient, way to do it would be to
make all skins which care about keeping per-byte memory metadata behave
more like helgrind. That is, have the skin classify heap memory as
being "per-thread" or "shared". Per-thread memory+metadata could be
handled without any locking. As soon as another thread touches it, you
would convert it to "shared", which requires locked access. Handling
this transition would be tricky, as would handling the codegen issues
(would you generate all memory accesses as if they could be shared, or
would you regenerate those memory accesses which turn out to be
shared?). The problem with this approach, like so many other possible
"optimisations" for Valgrind, is that the overhead of all the
bookkeeping could easily remove any benefit.
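The per-thread/shared transition could look something like the following. This is purely a sketch of the classification logic (all names hypothetical, and the tricky promote-while-racing and codegen questions are ignored): the first thread to touch a block becomes its owner and accesses it lock-free; the first access from any other thread promotes it to shared.

```c
#include <assert.h>
#include <stdatomic.h>

typedef enum { OWNED, SHARED } BlockState;

/* Hypothetical per-block metadata kept by the skin. */
typedef struct {
    _Atomic int        owner;   /* tid of first toucher; -1 = none yet */
    _Atomic BlockState state;
} BlockMeta;

/* Returns 1 if this access must take the locked (shared) path,
   0 if the block is still private to the calling thread. */
int classify_access(BlockMeta *b, int tid)
{
    if (atomic_load(&b->state) == SHARED)
        return 1;

    int expected = -1;
    if (atomic_compare_exchange_strong(&b->owner, &expected, tid))
        return 0;               /* we claimed the block: private */
    if (expected == tid)
        return 0;               /* still private to us           */

    atomic_store(&b->state, SHARED);  /* second thread: promote  */
    return 1;
}
```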
Of course, skins which don't keep per-byte metadata about memory
(cachegrind, vgprof, etc) can just keep everything per-thread and
reconcile at the end.
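That per-thread-then-reconcile scheme is simple to sketch: each thread updates only its own counter slab with no locking on the hot path, and the slabs are summed once at exit. Names and the counter layout are invented for illustration, not cachegrind's actual structures.

```c
#include <assert.h>

#define MAX_THREADS 8

typedef struct { unsigned long accesses, misses; } CacheStats;

static CacheStats per_thread[MAX_THREADS];  /* one private slab per tid */

/* Hot path: called from instrumented code; touches only tid's slab. */
void record_access(int tid, int was_miss)
{
    per_thread[tid].accesses++;
    per_thread[tid].misses += was_miss;
}

/* Cold path: run once, after all threads have finished. */
CacheStats reconcile(void)
{
    CacheStats total = { 0, 0 };
    for (int t = 0; t < MAX_THREADS; t++) {
        total.accesses += per_thread[t].accesses;
        total.misses   += per_thread[t].misses;
    }
    return total;
}
```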
The other killer is that it would make writing Valgrind itself and skins
a lot more complex. Valgrind is hard enough to get right as it is;
adding concurrency would simply make the tool itself a lot less
trustworthy.
> * Signal handling?
Signal handling+threads = <shudder>
> * What's with Valgrinds version of the pthread library? Do you think that it's
> a big task to make this reentrant-safe? Or perhaps we even could get rid of
> our own implementation?
I think if we do go down this path, then we can take a step back. Rather
than emulating threading at the pthread level, we can emulate it at the
clone system call level, and therefore allow any user-space pthreads
implementation. We may still want to intercept pthreads library calls
so that we have a better idea of what the program is actually trying to
achieve.
J
From: Nicholas N. <nj...@ca...> - 2003-05-26 12:06:53
On Mon, 26 May 2003, Josef Weidendorfer wrote:

> Sidenote (perhaps there's a misunderstanding): I thought about 2 threads
> running on 2 processors in parallel. There does not have to be any thread
> switching for races to occur, as without locking, interleaved memory
> accesses from the two threads/processors can happen in any order.

Oh, good point. That's even more difficult.

> As memory is a shared resource, we must use locking (atomic access to
> shadow and real memory). How fine should locking be done? One lock for
> all memory accesses kills performance. Perhaps one lock for each
> allocated area, or one for every 64 bytes of memory? This would need
> very fast user-mode mutexes (e.g. futexes from Linux 2.5).
>
> But all this is about problems in some specific skin. I currently wonder
> more about problems with the generic Valgrind core (binary translation
> engine and runtime environment).

Well, programs typically spend something like 80%+ of their time running
generated code. So I think you could be quite crude with locking in the
core and not suffer too much. But I could be wrong about that.

Hmm... I guess you'd want one copy of VG_(innerloop) per thread on a
multi-processor machine or performance would suck. Ach, this is getting a
bit complex for me...

> What are the most problematic Linux-centric things to get rid of?

Threads, syscalls, signals, and in particular the interactions of all of
them.

> I thought a bigger problem would be to support other processor
> architectures. This seems an interesting goal to me as for skins, the
> architecture is already hidden by using the RISC-like UCodes.

Firstly, UCode looks platform independent but it's really tied quite
closely to x86 -- consider the handling of condition codes, the fact that
it's two-address, the way FPU/MMX/SSE instructions are handled, etc.
Nonetheless, X-arch translation is a definite possibility. It's been done
before by others so the basic ideas are fairly well worked out.

Julian has had some thoughts about this; the difficult part is allowing
arbitrary instrumentation in a way that is independent of the
architecture. Most instructions are fine (adds, movs, things like that),
but every architecture has its own weirdo instructions; these are the
difficult parts to handle.

N
From: Josef W. <Jos...@gm...> - 2003-05-26 11:52:25
On Monday 26 May 2003 12:05, Nicholas Nethercote wrote:

> [...]
> Julian and I have discussed this, and AFAWCT the killer point is shadow
> memory for Memcheck -- each time memory is written, shadow memory is
> written a few instructions before. The danger is that if two threads are
> racing on a memory word, you could get a thread switch in between the
> shadow write and the real write, and then your shadow memory would not
> match your real memory.

Sidenote (perhaps there's a misunderstanding): I thought about 2 threads
running on 2 processors in parallel. There does not have to be any thread
switching for races to occur, as without locking, interleaved memory
accesses from the two threads/processors can happen in any order.

As memory is a shared resource, we must use locking (atomic access to
shadow and real memory). How fine should locking be done? One lock for
all memory accesses kills performance. Perhaps one lock for each
allocated area, or one for every 64 bytes of memory? This would need very
fast user-mode mutexes (e.g. futexes from Linux 2.5).

But all this is about problems in some specific skin. I currently wonder
more about problems with the generic Valgrind core (binary translation
engine and runtime environment).

> The problem would be avoided if we could guarantee that thread switches
> only occur between basic blocks. Actually, now that I think about it,
> that shouldn't be a problem with the current implementation since it does
> thread scheduling itself, and never does thread switches in the middle of
> a basic block. Hmm.
>
> As for getting rid of Valgrind's threads implementation, the best idea so
> far is to intercept the clone() syscall, and have Valgrind schedule
> threads itself but not do all the pthreads ops itself. This sounds

But with the scenario of kernel-level threads, Valgrind can't schedule
them?!

> plausible, but I think the details haven't been worked out. The big
> advantage would be getting rid of libpthread.so which is a source of much
> complexity. The disadvantage would be that the pthreads API error
> checking would disappear (I think).

OK. That's useful in general. And the full pthreads replacement can be
moved to a skin.

> All the other stuff you mention about making Valgrind thread-safe seems
> like it shouldn't be too hard to do (famous last words...)
>
> All this is very Linux-centric, and a (very) long-term goal would be to
> support other operating systems, but it's not at all clear how to do this.

What are the most problematic Linux-centric things to get rid of? I
thought a bigger problem would be to support other processor
architectures. This seems an interesting goal to me as for skins, the
architecture is already hidden by using the RISC-like UCodes.

> Does that answer some of your questions?

Thanks very much for your response. At least the dimension of this task
is somewhat clearer to me now.

Josef
From: Nicholas N. <nj...@ca...> - 2003-05-26 10:05:25
On Mon, 26 May 2003, Josef Weidendorfer wrote:

> currently I'm thinking a little bit of what would be needed to allow
> applications run under Valgrind to use processors in parallel. The main
> goal would be to speed up cache simulation for multithreaded
> applications, more specially first to let OpenMP apps (number crunching)
> run simultaneously. I'm not at all convinced if there will be any
> benefit/speedup at all on multiple processors because of a possible need
> for additional fine-grained communication among the threads.
>
> So perhaps it's simply not worth it. To come to this conclusion faster,
> I wanted to ask you for the problems you see in this for the Valgrind
> core framework.
>
> As I see it: all global data structures accessible by multiple threads
> either must be avoided or locked on access.
> * Could the instrumentation engine/translation table be separated for
>   each thread? This would duplicate translation for each thread, but
>   would avoid synchronisation on accessing the translation hash table.
> * V memory allocation functions have to be multithread-aware.
> * Signal handling? Is there anything special that I have overlooked?
> * What's with Valgrind's version of the pthread library? Do you think
>   that it's a big task to make this reentrant-safe? Or perhaps we even
>   could get rid of our own implementation?

Julian and I have discussed this, and AFAWCT the killer point is shadow
memory for Memcheck -- each time memory is written, shadow memory is
written a few instructions before. The danger is that if two threads are
racing on a memory word, you could get a thread switch in between the
shadow write and the real write, and then your shadow memory would not
match your real memory.

The problem would be avoided if we could guarantee that thread switches
only occur between basic blocks. Actually, now that I think about it,
that shouldn't be a problem with the current implementation since it does
thread scheduling itself, and never does thread switches in the middle of
a basic block. Hmm.

As for getting rid of Valgrind's threads implementation, the best idea so
far is to intercept the clone() syscall, and have Valgrind schedule
threads itself but not do all the pthreads ops itself. This sounds
plausible, but I think the details haven't been worked out. The big
advantage would be getting rid of libpthread.so, which is a source of
much complexity. The disadvantage would be that the pthreads API error
checking would disappear (I think).

All the other stuff you mention about making Valgrind thread-safe seems
like it shouldn't be too hard to do (famous last words...)

All this is very Linux-centric, and a (very) long-term goal would be to
support other operating systems, but it's not at all clear how to do this.

Does that answer some of your questions?

N
From: Josef W. <Jos...@gm...> - 2003-05-26 09:36:41
Hi,

currently I'm thinking a little bit of what would be needed to allow
applications run under Valgrind to use processors in parallel. The main
goal would be to speed up cache simulation for multithreaded
applications, more specially first to let OpenMP apps (number crunching)
run simultaneously. I'm not at all convinced if there will be any
benefit/speedup at all on multiple processors because of a possible need
for additional fine-grained communication among the threads.

So perhaps it's simply not worth it. To come to this conclusion faster, I
wanted to ask you for the problems you see in this for the Valgrind core
framework.

As I see it: all global data structures accessible by multiple threads
either must be avoided or locked on access.

* Could the instrumentation engine/translation table be separated for
  each thread? This would duplicate translation for each thread, but
  would avoid synchronisation on accessing the translation hash table.
* V memory allocation functions have to be multithread-aware.
* Signal handling? Is there anything special that I have overlooked?
* What's with Valgrind's version of the pthread library? Do you think
  that it's a big task to make this reentrant-safe? Or perhaps we even
  could get rid of our own implementation?

Thanks for any answers,
Josef