From: Josef W. <Jos...@gm...> - 2003-06-03 10:06:18
On Tuesday 03 June 2003 11:10, Adam Gundy wrote:
> At 20:19 02/06/2003 +0200, Josef Weidendorfer wrote:
> >So what about a buffer, splitted in two parts with a wait condition for
> > each part, meaning "buffer part full". So if half of the ring buffer gets
> > full, the handler process is notified. And make a timeout in the handler
> > to check regularily for data (when little event data will be produced).
>
> sounds like a serial port fifo.... (eg a 16550 where you have a 16 byte
> buffer, and you can set the level at which an interrupt is generated.
> depending on how quickly you can clear the buffer you set it to 8, 12 etc,
> so there is still some buffer space to be filled while the interrupt is
> being serviced).
>
> a direct analogy of this would be eg a 16Mb buffer, then valgrind sends a
> real-time 'empty the buffer' signal to the worker process when it is 8Mb
> full etc. the idea is that valgrind would automatically adjust to the speed
> of signal delivery etc by altering the point at which it decides a signal
> needs to be sent.

Good idea.

> >> >> sounds good for cache simulation, since the events are one way only -
> >> >> I doubt it would be useful for memcheck though - the number of error
> >> >> events (should be) tiny.
> >> >
> >> >Couldn't be all the shadow memory stuff be done in the event handler
> >> > process? Perhaps this split is even worth it only to allow for easier
> >> > development/bug fixing/valgrinding of the memcheck event handler
> >> > functions themself, and the normal case would be to run them in the V
> >> > process again?
> >> >(Perhaps this is nonsense, as I don't know enough about memcheck).
> >>
> >> writes but not reads... memcheck is constantly checking the shadow
> >> memory state as well as updating it.
> >
> >Yup.
> >On reads, an error to be reported could happen. But why has Valgrind core
> > to stop, and wait to see if a read will give you an error? The error
> > should be printed by the handler. Perhaps a problem would be that the
> > handler has to track the stack frames for the backtrace in the error
> > message.
> >Ok, you can't attach a debugger when the error happens :-(
>
> you would have to send a stack trace with every memory test - this would be
> horribly slow. hmmm. I guess delta compression of the stack trace would
> help, since typically only the top frame changes... you would also need
> some sort of optimized stack trace maintained for each thread as well -
> track calls and returns, so that most of the time you only need to look at
> EIP to get the current stack trace. there are some odd cases where RET is
> not used which would cause pain.

Luckily, this work is already done. In my calltree skin, I track the call
chain on my own for each thread, using CALL/RET/JUMP events with the content
of %esp. It handles situations like longjmp and exception handling quite fine.

Josef
From: Adam G. <ar...@cy...> - 2003-06-03 09:13:47
At 20:19 02/06/2003 +0200, Josef Weidendorfer wrote:
>On Monday 02 June 2003 17:37, Adam Gundy wrote:
>> At 17:15 02/06/2003 +0200, Josef Weidendorfer wrote:
>> > [...]
>> >Marshalling should almost be a NOP, as an event tag should be enough for
>> > the handler to know the event argument types and how to extract it.
>> >For communication, busy polling without any lock should be fine on a
>> >2-processor machine. update the ring buffer write pointer on the sender
>> > side after putting the event into the buffer, and the receiver polls on
>> > this pointer. For the 1-processor case, use a wait condition only to
>> > signal buffer full.
>> >The buffer should be of a size so that it won't be filled up in one time
>> > slice (?).
>>
>> busy polling sounds like a bad idea... even a really short usleep() should
>> help
>
>Hmmm... OK.
>IMHO busy polling is the fastest (to clear the buffer again), but it only
>makes sense if you have a 2nd dedicated CPU, and as this will be almost
>never the case...
>
>So what about a buffer, splitted in two parts with a wait condition for each
>part, meaning "buffer part full". So if half of the ring buffer gets full,
>the handler process is notified. And make a timeout in the handler to check
>regularily for data (when little event data will be produced).

sounds like a serial port fifo.... (eg a 16550 where you have a 16 byte
buffer, and you can set the level at which an interrupt is generated.
depending on how quickly you can clear the buffer you set it to 8, 12 etc,
so there is still some buffer space to be filled while the interrupt is
being serviced).

a direct analogy of this would be eg a 16Mb buffer, then valgrind sends a
real-time 'empty the buffer' signal to the worker process when it is 8Mb
full etc. the idea is that valgrind would automatically adjust to the speed
of signal delivery etc by altering the point at which it decides a signal
needs to be sent.

>> >> sounds good for cache simulation, since the events are one way only - I
>> >> doubt it would be useful for memcheck though - the number of error
>> >> events (should be) tiny.
>> >
>> >Couldn't be all the shadow memory stuff be done in the event handler
>> > process? Perhaps this split is even worth it only to allow for easier
>> > development/bug fixing/valgrinding of the memcheck event handler
>> > functions themself, and the normal case would be to run them in the V
>> > process again?
>> >(Perhaps this is nonsense, as I don't know enough about memcheck).
>>
>> writes but not reads... memcheck is constantly checking the shadow memory
>> state as well as updating it.
>
>Yup.
>On reads, an error to be reported could happen. But why has Valgrind core to
>stop, and wait to see if a read will give you an error? The error should be
>printed by the handler. Perhaps a problem would be that the handler has to
>track the stack frames for the backtrace in the error message.
>Ok, you can't attach a debugger when the error happens :-(

you would have to send a stack trace with every memory test - this would be
horribly slow. hmmm. I guess delta compression of the stack trace would
help, since typically only the top frame changes... you would also need
some sort of optimized stack trace maintained for each thread as well -
track calls and returns, so that most of the time you only need to look at
EIP to get the current stack trace. there are some odd cases where RET is
not used which would cause pain.

it might work...

>> >But still, IMHO this would be one step to the goal of "allow valgrinding
>> >Valgrind itself", even if you only can valgrind the event handler side.
>> >Look at it as trying to put a skin into another process.
>>
>> yes, I agree that it would at least enable this. if you do write a generic
>> 'proxy' skin, it should definitely be configurable.

>For a general proxy, even instrumentation would have to be done by the
>handler side, and transfering UCode around for each basic block will be
>awefully slow (aside from the 2-processor case with busy polling?). OTOH, at
>development time of the skin this feature sure would be worth it.
>
>Better let the skin do the instrumentation and generate custom events to
>allow the handler process to set up structures depending on this
>instrumentation. Which means this can't be done transparently for skins...

maybe you need to write two kinds of proxy ;-)

Seeya,
Adam

--
Real Programmers don't comment their code. If it was hard to write, it
should be hard to read, and even harder to modify.

These are all my own opinions.