From: Nicholas N. <nj...@cs...> - 2006-12-23 23:58:59
On Sat, 23 Dec 2006, Bart Van Assche wrote:
> Apparently it is not yet clear why I proposed to add the capability of
> associating names with threads to Valgrind's core, so I will try to
> explain this via the following example:
> [...]
> In the output above memcheck prints one error report. How is a user
> supposed to find out which thread the error report applies to ? My
> opinion is that Valgrind should print some information that allows to
> identify the thread unambiguously, such that thread ID's printed by
> the client can be correlated with thread ID's printed by Valgrind.
> This information could be one of the following:
> - Process ID. A process ID only identifies a thread when using
> linuxthreads, not when using NPTL or when using another OS than Linux.
> - POSIX thread ID. POSIX thread ID's are currently not known by
> Valgrinds core however.
> - lwpid. It is not very convenient however to obtain this ID from
> within a client -- a client either would have to call
> readlink("/proc/self") or would have to call gettid() via inline
> assembly (gettid() is not in glibc).
Some error messages already print a thread ID, eg. "address X is on thread
1's stack" -- which of these three does it correspond to?
> - Thread name, where thread names are stored in Valgrind's core and
> provided by the client via a client request.
>
> What is your opinion about this ?
I think allowing thread naming is a good idea.
Nick

From: &lt;sv...@va...&gt; - 2006-12-23 23:11:25

Author: weidendo
Date: 2006-12-23 23:11:20 +0000 (Sat, 23 Dec 2006)
New Revision: 6414
Log:
Callgrind: Throttle calls CLG_(run_thread) after r6413
After the change in r6413, CLG_(run_thread) is called a
lot more often, increasing the polling overhead to check
for a callgrind command file (created by callgrind_control
for controlling a callgrind run in an interactive way).
This reduces the calls to only be done every 5000 BBs,
which gives a similar polling frequency as before.
Modified:
   trunk/callgrind/main.c

Modified: trunk/callgrind/main.c
===================================================================
--- trunk/callgrind/main.c	2006-12-23 01:21:12 UTC (rev 6413)
+++ trunk/callgrind/main.c	2006-12-23 23:11:20 UTC (rev 6414)
@@ -1026,13 +1026,19 @@
                                            Bool is_running,
                                            ULong blocks_done )
 {
+   static ULong last_blocks_done = 0;
+
    if (0)
       VG_(printf)("%d %c %llu\n",
                   (Int)tid, is_running ? 'R' : 's', blocks_done);
-   /* Simply call onwards to CLG_(run_thread). Maybe this can be
-      simplified later? */
-   if (is_running)
-      CLG_(run_thread)( tid );
+
+   if (!is_running) return;
+
+   /* throttle calls to CLG_(run_thread) by number of BBs executed */
+   if (blocks_done - last_blocks_done < 5000) return;
+   last_blocks_done = blocks_done;
+
+   CLG_(run_thread)( tid );
 }
 
 static

From: Josef W. &lt;Jos...@gm...&gt; - 2006-12-23 23:06:34

On Saturday 23 December 2006 02:35, Julian Seward wrote:
> Josef, there may be some cleaning up possible w.r.t
> clg_thread_runstate_callback
> that I created. I did not want to modify CLG_(run_thread) because
> it is called from various places, not just as a callback, so I added
> this 'impedance matching' function instead.
Yes, thanks.
Regarding the frequency, I just checked how often the polling of
callgrind's command file was done before and after this change.
This is for startup and directly quitting xclock.
Before r6413:
:~> time strace -e open valgrind --tool=callgrind xclock &> z
real 0m8.028s
user 0m4.132s
sys 0m0.156s
:~> grep callgrind.cmd z | wc
709 7090 69130
After r6413:
:~> time strace -e open valgrind --tool=callgrind xclock &> z2
real 0m10.266s
user 0m4.840s
sys 0m1.784s
:~> grep callgrind.cmd z2 | wc
39003 390030 3822297
There were around 10 million BBs executed.
So yes, this needs some fine tuning. Polling around 200 times per user
second was already a lot, but 10000 per second is _way_ too much.
After calling CLG_(run_thread) only every 5000 BBs executed, I get
back to the old behavior:
:~> time strace -e open valgrind -v --tool=callgrind xclock &> z3
real 0m6.452s
user 0m4.132s
sys 0m0.180s
weidendo@linux:~/tmp/clg> grep callgrind.cmd z3 | wc
805 8050 78893
A lower polling frequency has the problem that the "interactivity"
gets lost if a client program is sleeping most of the time; and
that is the reason I should get rid of this polling method :-(
Josef

From: Bart V. A. &lt;bar...@gm...&gt; - 2006-12-23 12:50:20

On 12/23/06, Julian Seward <js...@ac...> wrote:
> Bart, Josef, I have committed in r6413, a change which I think
> should give Bart the sequencing you need and Josef the ability to
> count blocks that you need. The commit message for r6413 explains
> the details. Pls yell if this isn't what you wanted.

This patch works fine for me.

Bart.

From: Bart V. A. &lt;bar...@gm...&gt; - 2006-12-23 12:48:27

Apparently it is not yet clear why I proposed to add the capability of
associating names with threads to Valgrind's core, so I will try to
explain this via the following example:
$ VALGRIND_LIB=.in_place coregrind/valgrind --tool=memcheck
memcheck/tests/stack_switch
==7764== Memcheck, a memory error detector.
==7764== Copyright (C) 2002-2006, and GNU GPL'd, by Julian Seward et al.
==7764== Using LibVEX rev 1680, a library for dynamic binary translation.
==7764== Copyright (C) 2004-2006, and GNU GPL'd, by OpenWorks LLP.
==7764== Using valgrind-3.3.0.SVN, a dynamic binary instrumentation framework.
==7764== Copyright (C) 2000-2006, and GNU GPL'd, by Julian Seward et al.
==7764== For more details, rerun with: -v
==7764==
==7764== Syscall param clone(child_tidptr) contains uninitialised byte(s)
==7764== at 0x4110648: clone (in /lib/libc-2.4.so)
==7764== by 0x406887B: (below main) (in /lib/libc-2.4.so)
==7764==
==7764== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 3 from 1)
==7764== malloc/free: in use at exit: 0 bytes in 0 blocks.
==7764== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==7764== For counts of detected errors, rerun with: -v
==7764== All heap blocks were freed -- no leaks are possible.
In the output above memcheck prints one error report. How is a user
supposed to find out which thread the error report applies to ? My
opinion is that Valgrind should print some information that allows to
identify the thread unambiguously, such that thread ID's printed by
the client can be correlated with thread ID's printed by Valgrind.
This information could be one of the following:
- Process ID. A process ID only identifies a thread when using
linuxthreads, not when using NPTL or when using another OS than Linux.
- POSIX thread ID. POSIX thread ID's are currently not known by
Valgrinds core however.
- lwpid. It is not very convenient however to obtain this ID from
within a client -- a client either would have to call
readlink("/proc/self") or would have to call gettid() via inline
assembly (gettid() is not in glibc).
- Thread name, where thread names are stored in Valgrind's core and
provided by the client via a client request.
Thread ID's can e.g. be printed from within m_errormgr.c:
--- m_errormgr.c	(revision 6413)
+++ m_errormgr.c	(working copy)
@@ -628,6 +628,11 @@
       n_errs_found++;
       if (!is_first_shown_context)
          VG_(message)(Vg_UserMsg, "");
+      else
+      {
+         VG_(message)(Vg_UserMsg, "thread context: %s",
+                      VG_(get_thread_name)(VG_(get_running_tid())));
+      }
       pp_Error(p);
       is_first_shown_context = False;
       n_errs_shown++;
What is your opinion about this ?
Bart.

From: Nicholas N. &lt;nj...@cs...&gt; - 2006-12-23 06:00:33

On Sat, 23 Dec 2006 sv...@va... wrote:
> Log:
> Change the core-tool interface 'thread_run' event to be more useful:
>
> - Rename the event to 'thread_runstate'.
>
> - Add arguments: pass also a boolean indicating whether the thread
> is running or stopping, and a 64-bit int showing how many blocks
> overall have run, so tools can make a rough estimate of workload.
>
> The boolean allows tools to see threads starting and stopping.
> Prior to this, de-schedule events were invisible to tools.
Running and stopping are quite different. I imagine most tools are always
going to have event handlers like this:
if (is_running)
do_stuff_1
else
do_stuff_2
I think it would be better to split this into two separate events, "run" and
"stop".
This would better match how the events are currently structured -- for
example, Memcheck calls a common handler function for some closely-related
events, eg. some of the new_mem_*/die_mem_* functions.
Nick

From: &lt;js...@ac...&gt; - 2006-12-23 05:31:03

Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2006-12-23 09:00:01 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 215 tests, 10 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)

From: &lt;js...@ac...&gt; - 2006-12-23 05:02:58

Nightly build on phoenix ( SuSE 10.0 ) started at 2006-12-23 04:30:01 GMT
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 250 tests, 6 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: Tom H. &lt;to...@co...&gt; - 2006-12-23 03:54:05

Nightly build on dunsmere ( athlon, Fedora Core 6 ) started at 2006-12-23 03:30:06 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 252 tests, 5 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_detached (stdout)

From: Tom H. &lt;th...@cy...&gt; - 2006-12-23 03:24:43

Nightly build on dellow ( x86_64, Fedora Core 6 ) started at 2006-12-23 03:10:03 GMT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 280 tests, 4 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/xml1 (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)

From: Julian S. &lt;js...@ac...&gt; - 2006-12-23 01:25:50

Bart, Josef,

I have committed in r6413, a change which I think should give Bart the
sequencing you need and Josef the ability to count blocks that you need.
The commit message for r6413 explains the details. Pls yell if this isn't
what you wanted.

Josef, there may be some cleaning up possible w.r.t
clg_thread_runstate_callback that I created. I did not want to modify
CLG_(run_thread) because it is called from various places, not just as a
callback, so I added this 'impedance matching' function instead.

J

On Tuesday 19 December 2006 10:24, Josef Weidendorfer wrote:
> On Tuesday 19 December 2006 03:39, Julian Seward wrote:
> > > I wonder if I can use a constant factor to keep my polling for a
> > > command file happening at regular intervals, yet not producing too much
> > > overhead. I suppose I misused this VG_TRACK(thread_run) event ...
> >
> > One simple solution is for VG_TRACK(thread_run) to take a second
> > argument, which is a 64-bit number, the total number of blocks executed
> > by V so far. By comparing this number against the number in the previous
> > call you can find out how much progress the system made in between. Then
> > you could base your polling decisions on that. How does that sound?
>
> Oh, that sounds very good.
>
> But do we need to change the signature of VG_TRACK(thread_run) for this?
> I still think that I am only misusing this event for my polling.
> Especially, the event does not trigger when the valgrind process sleeps. So
> a solution which waits for commands e.g. on a socket would be far better.
>
> It would be nice to have a function VG_(blocks_done)() or similar to get
> the total number of blocks executed so far.
>
> I have something like this in callgrind myself; but of course it is not
> incremented when callgrind is in "no instrumentation" mode; still, the
> command polling has to work to allow to switch on instrumentation.
>
> Josef

From: &lt;sv...@va...&gt; - 2006-12-23 01:21:18

Author: sewardj
Date: 2006-12-23 01:21:12 +0000 (Sat, 23 Dec 2006)
New Revision: 6413
Log:
Change the core-tool interface 'thread_run' event to be more useful:
- Rename the event to 'thread_runstate'.
- Add arguments: pass also a boolean indicating whether the thread
is running or stopping, and a 64-bit int showing how many blocks
overall have run, so tools can make a rough estimate of workload.
The boolean allows tools to see threads starting and stopping.
Prior to this, de-schedule events were invisible to tools.
- Call the callback (hand the event to tools) just before client
code is run, and again immediately after it stops running. This
should give correct sequencing w.r.t posting of thread creation/
destruction events.
In order to make callgrind work without complex changes, I added a
simple impedance-matching function 'clg_thread_runstate_callback'
which hands thread-run events onwards to CLG_(thread_run).
Use this new 'thread_runstate' with care: it will be called before
and after every translation, which means it will be called ~500k
times in a startup of firefox. So the callback needs to be fast.
Modified:
trunk/callgrind/main.c
trunk/coregrind/m_scheduler/scheduler.c
trunk/coregrind/m_tooliface.c
trunk/coregrind/pub_core_tooliface.h
trunk/include/pub_tool_tooliface.h
Modified: trunk/callgrind/main.c
===================================================================
--- trunk/callgrind/main.c	2006-12-18 17:53:13 UTC (rev 6412)
+++ trunk/callgrind/main.c	2006-12-23 01:21:12 UTC (rev 6413)
@@ -1022,6 +1022,19 @@
 /*--- Setup ---*/
 /*--------------------------------------------------------------------*/
 
+static void clg_thread_runstate_callback ( ThreadId tid,
+                                           Bool is_running,
+                                           ULong blocks_done )
+{
+   if (0)
+      VG_(printf)("%d %c %llu\n",
+                  (Int)tid, is_running ? 'R' : 's', blocks_done);
+   /* Simply call onwards to CLG_(run_thread). Maybe this can be
+      simplified later? */
+   if (is_running)
+      CLG_(run_thread)( tid );
+}
+
 static
 void CLG_(post_clo_init)(void)
 {
@@ -1088,7 +1101,7 @@
    VG_(needs_syscall_wrapper)(CLG_(pre_syscalltime),
                               CLG_(post_syscalltime));
 
-   VG_(track_thread_run) ( & CLG_(run_thread) );
+   VG_(track_thread_runstate) ( & clg_thread_runstate_callback );
    VG_(track_pre_deliver_signal) ( & CLG_(pre_signal) );
    VG_(track_post_deliver_signal) ( & CLG_(post_signal) );
 
Modified: trunk/coregrind/m_scheduler/scheduler.c
===================================================================
--- trunk/coregrind/m_scheduler/scheduler.c	2006-12-18 17:53:13 UTC (rev 6412)
+++ trunk/coregrind/m_scheduler/scheduler.c	2006-12-23 01:21:12 UTC (rev 6413)
@@ -233,10 +233,6 @@
       VG_(sprintf)(buf, " acquired lock (%s)", who);
       print_sched_event(tid, buf);
    }
-
-   // While thre modeling is disable, issue thread_run events here
-   // VG_(tm_thread_switchto)(tid);
-   VG_TRACK( thread_run, tid );
 }
 
 /*
@@ -616,6 +612,9 @@
       VG_(printf)("\n");
    }
 
+   // Tell the tool this thread is about to run client code
+   VG_TRACK( thread_runstate, tid, True, bbs_done );
+
    vg_assert(VG_(in_generated_code) == False);
    VG_(in_generated_code) = True;
 
@@ -641,6 +640,9 @@
    vg_assert(done_this_time >= 0);
    bbs_done += (ULong)done_this_time;
 
+   // Tell the tool this thread has stopped running client code
+   VG_TRACK( thread_runstate, tid, False, bbs_done );
+
    return trc;
 }
 
@@ -652,6 +654,7 @@
    volatile Int jumped;
    volatile ThreadState* tst;
    volatile UWord argblock[4];
+   volatile UInt retval;
 
    /* Paranoia */
    vg_assert(VG_(is_valid_tid)(tid));
@@ -686,6 +689,9 @@
    argblock[2] = 0; /* next guest IP is written here */
    argblock[3] = 0; /* guest state ptr afterwards is written here */
 
+   // Tell the tool this thread is about to run client code
+   VG_TRACK( thread_runstate, tid, True, bbs_done );
+
    vg_assert(VG_(in_generated_code) == False);
    VG_(in_generated_code) = True;
 
@@ -703,16 +709,23 @@
       vg_assert(argblock[2] == 0); /* next guest IP was not written */
       vg_assert(argblock[3] == 0); /* trc was not written */
       block_signals(tid);
-      return VG_TRC_FAULT_SIGNAL;
+      retval = VG_TRC_FAULT_SIGNAL;
    } else {
      /* store away the guest program counter */
      VG_(set_IP)( tid, argblock[2] );
      if (argblock[3] == argblock[1])
         /* the guest state pointer afterwards was unchanged */
-        return VG_TRC_BORING;
+        retval = VG_TRC_BORING;
      else
-        return (UInt)argblock[3];
+        retval = (UInt)argblock[3];
   }
+
+   bbs_done++;
+
+   // Tell the tool this thread has stopped running client code
+   VG_TRACK( thread_runstate, tid, False, bbs_done );
+
+   return retval;
 }
 
 
Modified: trunk/coregrind/m_tooliface.c
===================================================================
--- trunk/coregrind/m_tooliface.c	2006-12-18 17:53:13 UTC (rev 6412)
+++ trunk/coregrind/m_tooliface.c	2006-12-23 01:21:12 UTC (rev 6413)
@@ -321,7 +321,7 @@
 
 DEF(track_post_reg_write_clientcall_return, ThreadId, OffT, SizeT, Addr)
 
-DEF(track_thread_run, ThreadId)
+DEF(track_thread_runstate, ThreadId, Bool, ULong)
 
 DEF(track_post_thread_create, ThreadId, ThreadId)
 DEF(track_post_thread_join, ThreadId, ThreadId)
 
Modified: trunk/coregrind/pub_core_tooliface.h
===================================================================
--- trunk/coregrind/pub_core_tooliface.h	2006-12-18 17:53:13 UTC (rev 6412)
+++ trunk/coregrind/pub_core_tooliface.h	2006-12-23 01:21:12 UTC (rev 6413)
@@ -200,7 +200,7 @@
    void (*track_post_reg_write)(CorePart, ThreadId, OffT, SizeT);
    void (*track_post_reg_write_clientcall_return)(ThreadId, OffT, SizeT, Addr);
 
-   void (*track_thread_run)(ThreadId);
+   void (*track_thread_runstate)(ThreadId, Bool, ULong);
 
    void (*track_post_thread_create)(ThreadId, ThreadId);
    void (*track_post_thread_join) (ThreadId, ThreadId);
 
Modified: trunk/include/pub_tool_tooliface.h
===================================================================
--- trunk/include/pub_tool_tooliface.h	2006-12-18 17:53:13 UTC (rev 6412)
+++ trunk/include/pub_tool_tooliface.h	2006-12-23 01:21:12 UTC (rev 6413)
@@ -537,9 +537,21 @@
 
 
 /* Scheduler events (not exhaustive) */
-void VG_(track_thread_run)(void(*f)(ThreadId tid));
 
+/* Called when 'tid' starts or stops running client code blocks.
+   Gives the total dispatched block count at that event. Note, this
+   is not the same as 'tid' holding the BigLock: a thread can hold the
+   lock for other purposes (making translations, etc) yet not be
+   running client blocks. Obviously though, a thread must hold the
+   lock in order to run client code blocks, so the times bracketed by
+   thread_runstate(tid, True, ..) .. thread_runstate(tid, False, ..)
+   are a subset of the times when 'tid' holds the cpu lock.
+*/
+void VG_(track_thread_runstate)(
+        void(*f)(ThreadId tid, Bool running, ULong blocks_dispatched)
+     );
 
+
 /* Thread events (not exhaustive)
 
    Called during thread create, before the new thread has run any

From: &lt;js...@ac...&gt; - 2006-12-23 01:16:07

Nightly build on g5 ( SuSE 10.1, ppc970 ) started at 2006-12-23 02:00:01 CET
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 221 tests, 6 stderr failures, 2 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_cmsg (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)