From: Josef W. <Jos...@gm...> - 2003-06-02 18:14:25

On Monday 02 June 2003 17:37, Adam Gundy wrote:
> At 17:15 02/06/2003 +0200, Josef Weidendorfer wrote:
> > [...]
> > Marshalling should almost be a NOP, as an event tag should be enough for
> > the handler to know the event argument types and how to extract them.
> > For communication, busy polling without any lock should be fine on a
> > 2-processor machine. Update the ring buffer write pointer on the sender
> > side after putting the event into the buffer, and let the receiver poll
> > on this pointer. For the 1-processor case, use a wait condition only to
> > signal buffer full.
> > The buffer should be of a size such that it won't be filled up in one
> > time slice (?).
>
> busy polling sounds like a bad idea... even a really short usleep() should
> help

Hmmm... OK. IMHO busy polling is the fastest way to clear the buffer again, but it only makes sense if you have a second dedicated CPU, and that will almost never be the case...

So what about a buffer split into two parts, with a wait condition for each part meaning "buffer part full"? If one half of the ring buffer fills up, the handler process is notified. And add a timeout in the handler so it checks regularly for data (for when little event data is being produced).

> > > sounds good for cache simulation, since the events are one way only -
> > > I doubt it would be useful for memcheck though - the number of error
> > > events (should be) tiny.
> >
> > Couldn't all the shadow memory stuff be done in the event handler
> > process? Perhaps this split is even worth it only to allow for easier
> > development/bug fixing/valgrinding of the memcheck event handler
> > functions themselves, and the normal case would be to run them in the V
> > process again?
> > (Perhaps this is nonsense, as I don't know enough about memcheck.)
>
> writes but not reads... memcheck is constantly checking the shadow memory
> state as well as updating it.

Yup. On reads, an error to be reported could happen. But why does the Valgrind core have to stop and wait to see whether a read will give you an error? The error should be printed by the handler. One problem might be that the handler has to track the stack frames for the backtrace in the error message. And OK, you can't attach a debugger when the error happens :-(

> > But still, IMHO this would be one step toward the goal of "allow
> > valgrinding Valgrind itself", even if you can only valgrind the event
> > handler side. Look at it as trying to put a skin into another process.
>
> yes, I agree that it would at least enable this. if you do write a generic
> 'proxy' skin, it should definitely be configurable.

For a general proxy, even instrumentation would have to be done on the handler side, and transferring UCode around for each basic block will be awfully slow (aside from the 2-processor case with busy polling?). OTOH, at skin development time this feature sure would be worth it.

Better to let the skin do the instrumentation and generate custom events that allow the handler process to set up structures depending on this instrumentation. Which means this can't be done transparently for skins...

> > I know, I should come up with a patch ;-)
> > (this would involve a stub skin, and a stub Valgrind core loading the
> > real skin, to forward the events.)
>
> it occurs to me there is no need to limit it to 2 CPUs: if you have a four
> CPU machine you can spawn three 'worker' processes (with a ring buffer
> each). If the events posted to the ring buffers (round robin unless one is
> full) have timestamps (just an incrementing integer), then the workers can
> sort things out later...

The problem is that most of the time, the data produced/updated in event handlers depends on earlier events: e.g. the state of the cache after one access decides whether the next access will be a miss, even if we had multiple caches to simulate, because of cache coherence protocols. That's the original problem of this thread.

A solution would be a pipeline of workers, splitting up the work for one event: e.g. let the first simulate the 1st-level cache, the next the 2nd-level cache, and the last do the structure updating.

Josef
From: Adam G. <ar...@cy...> - 2003-06-02 15:36:59

At 17:15 02/06/2003 +0200, Josef Weidendorfer wrote:
> On Monday 02 June 2003 16:28, Adam Gundy wrote:
> > At 16:12 02/06/2003 +0200, Josef Weidendorfer wrote:
> > > Fortunately, for cache simulation, no results have to be fed back from
> > > cache simulation to the valgrind runtime. For cachegrind, almost all
> > > skin actions (e.g. BBCC allocation, cache simulation, trace dumping)
> > > could be separated into the event handler process.
> > > I could imagine that even memcheck could be split this way for most
> > > events, as error reporting can be done asynchronously.
> > >
> > > What do you think about this idea?
> >
> > you would have to be _really_ efficient at marshalling/unmarshalling the
> > events otherwise the IPC overhead would use up all the performance gain
> > from 2 CPUs. maybe futexes are the answer?
>
> Marshalling should almost be a NOP, as an event tag should be enough for
> the handler to know the event argument types and how to extract them.
> For communication, busy polling without any lock should be fine on a
> 2-processor machine. Update the ring buffer write pointer on the sender
> side after putting the event into the buffer, and let the receiver poll
> on this pointer. For the 1-processor case, use a wait condition only to
> signal buffer full.
> The buffer should be of a size such that it won't be filled up in one
> time slice (?).

busy polling sounds like a bad idea... even a really short usleep() should help

> > sounds good for cache simulation, since the events are one way only - I
> > doubt it would be useful for memcheck though - the number of error
> > events (should be) tiny.
>
> Couldn't all the shadow memory stuff be done in the event handler
> process? Perhaps this split is even worth it only to allow for easier
> development/bug fixing/valgrinding of the memcheck event handler
> functions themselves, and the normal case would be to run them in the V
> process again?
> (Perhaps this is nonsense, as I don't know enough about memcheck.)

writes but not reads... memcheck is constantly checking the shadow memory state as well as updating it.

> But still, IMHO this would be one step toward the goal of "allow
> valgrinding Valgrind itself", even if you can only valgrind the event
> handler side.
> Look at it as trying to put a skin into another process.

yes, I agree that it would at least enable this. if you do write a generic 'proxy' skin, it should definitely be configurable.

> I know, I should come up with a patch ;-)
> (this would involve a stub skin, and a stub Valgrind core loading the
> real skin, to forward the events.)

it occurs to me there is no need to limit it to 2 CPUs: if you have a four CPU machine you can spawn three 'worker' processes (with a ring buffer each). If the events posted to the ring buffers (round robin unless one is full) have timestamps (just an incrementing integer), then the workers can sort things out later...

Seeya,
Adam

--
Real Programmers don't comment their code.
If it was hard to write, it should be hard to read, and even harder to modify.
These are all my own opinions.
From: Josef W. <Jos...@gm...> - 2003-06-02 15:09:51

On Monday 02 June 2003 16:28, Adam Gundy wrote:
> At 16:12 02/06/2003 +0200, Josef Weidendorfer wrote:
> > Fortunately, for cache simulation, no results have to be fed back from
> > cache simulation to the valgrind runtime. For cachegrind, almost all
> > skin actions (e.g. BBCC allocation, cache simulation, trace dumping)
> > could be separated into the event handler process.
> > I could imagine that even memcheck could be split this way for most
> > events, as error reporting can be done asynchronously.
> >
> > What do you think about this idea?
>
> you would have to be _really_ efficient at marshalling/unmarshalling the
> events otherwise the IPC overhead would use up all the performance gain
> from 2 CPUs. maybe futexes are the answer?

Marshalling should almost be a NOP, as an event tag should be enough for the handler to know the event argument types and how to extract them.

For communication, busy polling without any lock should be fine on a 2-processor machine: update the ring buffer write pointer on the sender side after putting the event into the buffer, and let the receiver poll on this pointer. For the 1-processor case, use a wait condition only to signal buffer full. The buffer should be of a size such that it won't be filled up in one time slice (?).

> sounds good for cache simulation, since the events are one way only - I
> doubt it would be useful for memcheck though - the number of error events
> (should be) tiny.

Couldn't all the shadow memory stuff be done in the event handler process? Perhaps this split is even worth it only to allow for easier development/bug fixing/valgrinding of the memcheck event handler functions themselves, and the normal case would be to run them in the V process again? (Perhaps this is nonsense, as I don't know enough about memcheck.)

But still, IMHO this would be one step toward the goal of "allow valgrinding Valgrind itself", even if you can only valgrind the event handler side. Look at it as trying to put a skin into another process.

I know, I should come up with a patch ;-)
(this would involve a stub skin, and a stub Valgrind core loading the real skin, to forward the events.)

Cheers,
Josef
From: Adam G. <ar...@cy...> - 2003-06-02 14:28:28

At 16:12 02/06/2003 +0200, Josef Weidendorfer wrote:
> On Monday 26 May 2003 12:05, Nicholas Nethercote wrote:
> > On Mon, 26 May 2003, Josef Weidendorfer wrote:
> > > currently I'm thinking a little bit about what would be needed to
> > > allow applications run under Valgrind to use processors in parallel.
> > > The main goal would be to speed up cache simulation for multithreaded
> > > applications, more specifically first to let OpenMP apps (number
> > > crunching) run simultaneously. I'm not at all convinced there will be
> > > any benefit/speedup at all on multiple processors because of a
> > > possible need for additional fine-grained communication among the
> > > threads.
> > > [...]
>
> Nick, Jeremy, Adam,
>
> thanks a lot for the responses.
> As the main goal of my thinking was (for now) about speeding up cache
> simulation, I trashed the idea of making V threads out of application
> threads, because of the synchronisation issues in the skins.

<snip>

> Fortunately, for cache simulation, no results have to be fed back from
> cache simulation to the valgrind runtime. For cachegrind, almost all skin
> actions (e.g. BBCC allocation, cache simulation, trace dumping) could be
> separated into the event handler process.
> I could imagine that even memcheck could be split this way for most
> events, as error reporting can be done asynchronously.
>
> What do you think about this idea?

you would have to be _really_ efficient at marshalling/unmarshalling the events otherwise the IPC overhead would use up all the performance gain from 2 CPUs. maybe futexes are the answer?

sounds good for cache simulation, since the events are one way only - I doubt it would be useful for memcheck though - the number of error events (should be) tiny.

Seeya,
Adam

--
Real Programmers don't comment their code.
If it was hard to write, it should be hard to read, and even harder to modify.
These are all my own opinions.
From: Josef W. <Jos...@gm...> - 2003-06-02 14:07:37

On Monday 26 May 2003 12:05, Nicholas Nethercote wrote:
> On Mon, 26 May 2003, Josef Weidendorfer wrote:
> > currently I'm thinking a little bit about what would be needed to allow
> > applications run under Valgrind to use processors in parallel. The main
> > goal would be to speed up cache simulation for multithreaded
> > applications, more specifically first to let OpenMP apps (number
> > crunching) run simultaneously. I'm not at all convinced there will be
> > any benefit/speedup at all on multiple processors because of a possible
> > need for additional fine-grained communication among the threads.
> > [...]

Nick, Jeremy, Adam,

thanks a lot for the responses.

As the main goal of my thinking was (for now) about speeding up cache simulation, I trashed the idea of making V threads out of application threads, because of the synchronisation issues in the skins.

A better idea would be to separate event handling (e.g. memory accesses, trackable Valgrind events, ...) into another process (or processes), forked off at Valgrind startup. By using a ring buffer shared with the event handling process, communication should be really fast, and the handler process can run in parallel on a 2-processor machine (or on a P4 with hyperthreading, where AFAIK busy polling would be a no-no, but there are workarounds for this problem?).

I think this could be a second, general "split" approach, almost orthogonal to the core/skin split: instead of calling an event handler of the skin, the event could be put into the ring buffer. We could even make the communication bidirectional by using a second ring buffer, and if an event handler has to run synchronously, block for an answer from the event handler process. The best would be to add the ring buffer communication in a transparent way to the event handler functions, much like RPC. This way, we could make it a runtime switch whether to use the event handler process or not.

Advantages:

* speedup on 2-processor machines / P4 with hyperthreading
* normal use of GDB for the event handler process, which eases development of handlers in contrast to skin development (the valgrind process will simply block once the ring buffer is full)
* the event handler process can itself be valgrinded (!)
* if the communication to the handler process is unidirectional, the events can be dumped directly to a file, and the event handler can run afterwards using the stored events. Of course, this often will not be practical because of the huge amount of event data, but perhaps some compression could be done here.
* the event handler can be a GUI application (allowing e.g. a real trace visualisation, not only a profile)

Fortunately, for cache simulation, no results have to be fed back from cache simulation to the valgrind runtime. For cachegrind, almost all skin actions (e.g. BBCC allocation, cache simulation, trace dumping) could be separated into the event handler process. I could imagine that even memcheck could be split this way for most events, as error reporting can be done asynchronously.

What do you think about this idea?

Josef