From: Siddharth N. <si...@gm...> - 2012-03-16 16:09:21
|
I was supposed to reply to this earlier! This method of looking at
"CLG_(current_state).jmps_passed" worked fine, so we can accurately figure
out how we got to the current BB from the previous one.

On 6 March 2012 18:28, Siddharth Nilakantan <si...@gm...> wrote:
> haha, I knew there must be a way. great! So I'm guessing jmpkind at the
> end of setup_bbcc() for BB1 should be the same as
> CLG_(current_state)->jmp[CLG_(current_state).jmps_passed].jmpkind when
> setup_bbcc() is called for the next immediate BB.
>
> Thanks a lot for your help. I'll use this info and let you know what
> happens.
>
> On 6 March 2012 17:37, Josef Weidendorfer <Jos...@gm...> wrote:
>> On 06.03.2012 23:01, Siddharth Nilakantan wrote:
>>> Ohh, so by last, you meant the final exit (non-side exit) is the value
>>> of cjmp_count. So there really is no way of determining how we got to
>>> the current BB from the previous one, as it could have used any of the
>>> exits in the jmp[] array.
>>
>> There is a way ;-)
>>
>> That is correct. However, setup_bbcc() knows the side exit from the last
>> BB: Callgrind instruments a write to "CLG_(current_state).jmps_passed"
>> before every side exit. Thus, when setup_bbcc() is called at the
>> beginning of the next BB, "CLG_(current_state).jmps_passed" will contain
>> the side exit number.
>>
>> But you should not use "CLG_(current_state).jmps_passed" outside of
>> setup_bbcc() yourself, as it is reused to find out the side exit number
>> of the currently running BB.
>>
>> The jmp[] array (and the complete BB struct) only stores information
>> found at the last instrumentation, and thus is static.
>>
>>> Is there any point in the code where I can capture and store the index
>>> of the most recently used exit for a BB, or is setup_bbcc's jmpkind
>>> the closest I can get to that?
>>
>> Exactly. As mentioned in my last mail, you can store the "jmpkind" at
>> the end of setup_bbcc() into a new field of CLG_(current_state).
>> That always should be correct.
>>
>> Probably, this technique of overwriting a global before every side exit
>> to find out the side exit number seems a little dumb to you. But I do
>> not see another way, as Valgrind does not allow instrumentation at side
>> exits.
>>
>> Josef
>>
>>> On 6 March 2012 16:26, Josef Weidendorfer <Jos...@gm...> wrote:
>>>
>>> On 06.03.2012 21:56, Siddharth Nilakantan wrote:
>>> > However, for what should be a call or a return, I see that the
>>> > cjmp_count = 0 even after that BB has been exited.
>>>
>>> Ah, sorry, I was too fast. "cjmp_count" is the number of side exits,
>>> which of course can be 0. The length of the BB::jmp[] array is
>>> cjmp_count+1. Thus,
>>>
>>> I quickly migrated my code to valgrind 3.7. I noticed that
>>> CLG_(current_state)->bbcc->bb actually points to the BB that is
>>> executing currently. You mentioned that "The jumpkind is now stored
>>> for every side exit, and for the last, you have to look at
>>> "...bb->jmp[...bb->cjmp_count-1].jmpkind"".
>>>
>>> This must be "...bb->jmp[...bb->cjmp_count].jmpkind" instead.
>>>
>>> Josef
|
From: <sv...@va...> - 2012-03-16 15:03:15
|
philippe 2012-03-16 15:03:08 +0000 (Fri, 16 Mar 2012)
New Revision: 12446
Log:
Make a more precise reference to the g++ version.
Modified files:
trunk/NEWS
Modified: trunk/NEWS (+1 -1)
===================================================================
--- trunk/NEWS 2012-03-14 21:27:35 +00:00 (rev 12445)
+++ trunk/NEWS 2012-03-16 15:03:08 +00:00 (rev 12446)
@@ -29,7 +29,7 @@
* ==================== OTHER CHANGES ====================
* The C++ demangler has been updated so as to work well with C++
- compiled by even the most recent g++'s.
+ compiled by up to at least g++ 4.6.
* The new option --fair-sched allows to control the locking mechanism
used by Valgrind. The locking mechanism influences the performance
|
|
From: Philippe W. <phi...@sk...> - 2012-03-16 14:59:00
|
I have now run outer memcheck/drd/helgrind/sgcheck on inner regtest.
I have analysed the outer memcheck results in more detail.
Memory leaks have all been solved, except:
2 leaks suppressed in tests/outer_inner.supp, happening
for all or most inner tests
2 normal leaks in drd/tests/custom_alloc and
drd/tests/custom_alloc_fiw.
(I think it is not a good idea to write a
general suppression for these two tests,
as it would suppress all client allocation leaks).
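[Editor's note] For context, such a general suppression would have to match on the allocator frame alone, along these lines (entry name invented for illustration; the "..." frame wildcard matches any calling stack, which is exactly why it would hide genuine client leaks too):

```
{
   hide_all_malloc_leaks
   Memcheck:Leak
   fun:malloc
   ...
}
```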
annotate_hbefore can loop forever in an outer/inner setup,
but there is a solution (cfr my previous mail).
The outer memcheck reports errors other than leaks, but these are
difficult to analyse, as the outer also detects some of the errors that
the inner (has to) detect on the client test program. But for such
things, the outer error messages do not give a file/line nr, as these
errors are found in JIT-ed code emitted by the inner.
So, it is difficult to see if the outer memcheck detects an abnormal
or normal bug in the JIT-ed code.
There are a few tests which are failing with outer memcheck on inner:
Find the list below.
Philippe
drd/tests/pth_barrier_thr_cr (stderr)
out of memory (strange, because V reports only 1.1Gb used)
gdbserver_tests/mcinvokeWS (stdoutB)
gdbserver_tests/mcinvokeWS (stderrB)
SIGSEGV when inferior call executed in the inner
and the inner is not sleeping yet.
Probably/maybe the outer does not like a
'forced invocation' by vgdb ?
helgrind/tests/tc06_two_races_xml (stderr)
failing due to new inner <arg>--sim-hints=no-inner-prefix</arg>
memcheck/tests/err_disable4 (stderr)
cannot create 498 threads
(an mmap for stack failing at 199 ?)
memcheck/tests/linux/lsframe1 (stderr)
big stack not properly supported ?
memcheck/tests/linux/timerfd-syscall (stderr)
slow down by outer makes value of timer not respected ?
none/tests/map_unmap (stdout)
none/tests/map_unmap (stderr)
none/tests/sigstackgrowth (stdout)
none/tests/sigstackgrowth (stderr)
none/tests/stackgrowth (stdout)
none/tests/stackgrowth (stderr)
aspacemgr sync check failing (difference
between inner list of segments and /proc/maps
list of segments).
none/tests/shell (stderr)
some subtleties with the way wrong scripts
are executed.
|
|
From: Philippe W. <phi...@sk...> - 2012-03-16 14:47:38
|
annotate_hbefore quite systematically loops forever
in an outer helgrind or sgcheck on an inner helgrind,
and I suspect also in other setups.
Digging into that, it seems related to (un-)fair scheduling.
The inner vgtest specifies fair scheduling. Even when
I enable fair scheduling on the outer, I see that the
test often fails due to what looks like unfair scheduling.
The change below ensures this test works ok even without
fair scheduling (with or without outer).
I think the below change does not impact what is being
tested.
Comments/feedback ?
Philippe
Index: helgrind/tests/annotate_hbefore.c
===================================================================
--- helgrind/tests/annotate_hbefore.c (revision 12445)
+++ helgrind/tests/annotate_hbefore.c (working copy)
@@ -235,10 +235,18 @@
int shared_var = 0; // is not raced upon
-void delay500ms ( void )
+void delayXms ( int i )
{
- struct timespec ts = { 0, 500 * 1000 * 1000 };
- nanosleep(&ts, NULL);
+ struct timespec ts = { 0, 1 * 1000 * 1000 };
+ // We do the sleep in small pieces to have scheduling
+ // events ensuring a fair switch between threads, even
+ // without --fair-sched=yes. This is a.o. needed for
+ // running this test under an outer helgrind or an outer
+ // sgcheck.
+ while (i > 0) {
+ nanosleep(&ts, NULL);
+ i--;
+ }
}
void do_wait ( UWord* w )
@@ -246,7 +254,7 @@
UWord w0 = *w;
UWord volatile * wV = w;
while (*wV == w0)
- ;
+ delayXms(1); // small sleeps, ensuring context switches
ANNOTATE_HAPPENS_AFTER(w);
}
@@ -261,11 +269,11 @@
void* thread_fn1 ( void* arg )
{
UWord* w = (UWord*)arg;
- delay500ms(); // ensure t2 gets to its wait first
+ delayXms(500); // ensure t2 gets to its wait first
shared_var = 1; // first access
do_signal(w); // cause h-b edge to second thread
- delay500ms();
+ delayXms(500);
return NULL;
}
@@ -275,7 +283,7 @@
do_wait(w); // wait for h-b edge from first thread
shared_var = 2; // second access
- delay500ms();
+ delayXms(500);
return NULL;
}
|