From: Ivo R. <ir...@so...> - 2017-09-01 15:31:00
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=82b3f16a18f8f6dd5888d1e0cde7bd6c0dcef3e2

commit 82b3f16a18f8f6dd5888d1e0cde7bd6c0dcef3e2
Author: Ivo Raisr <iv...@iv...>
Date:   Fri Sep 1 17:27:08 2017 +0200

    Small fixes to notes about Inner/Outer setup.

Diff:
---
 README_DEVELOPERS | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/README_DEVELOPERS b/README_DEVELOPERS
index ab0cf66..07a48c4 100644
--- a/README_DEVELOPERS
+++ b/README_DEVELOPERS
@@ -151,19 +151,18 @@ This section explains :
 (1) Check out 2 trees, "Inner" and "Outer". Inner runs the app
     directly. Outer runs Inner.

-(2) Configure inner with --enable-inner and build/install as usual.
+(2) Configure Inner with --enable-inner and build as usual.

-(3) Configure Outer normally and build/install as usual.
+(3) Configure Outer normally and build+install as usual.
+    Note: You must use a "make install"-ed valgrind.
+    Do *not* use vg-in-place for the Outer valgrind.

 (4) Choose a very simple program (date) and try

     outer/.../bin/valgrind --sim-hints=enable-outer --trace-children=yes \
       --smc-check=all-non-file \
       --run-libc-freeres=no --tool=cachegrind -v \
-      inner/.../bin/valgrind --vgdb-prefix=./inner --tool=none -v prog
-
-Note: You must use a "make install"-ed valgrind.
-Do *not* use vg-in-place for the outer valgrind.
+      inner/.../vg-in-place --vgdb-prefix=./inner --tool=none -v prog

 If you omit the --trace-children=yes, you'll only monitor Inner's launcher
 program, not its stage2. Outer needs --run-libc-freeres=no, as otherwise
@@ -191,12 +190,12 @@ setup, this prefix causes the reg test diff to fail.
 Give --sim-hints=no-inner-prefix to the Inner to disable the production
 of the prefix in the stdout/stderr output of Inner.

-The allocator (coregrind/m_mallocfree.c) is annotated with client requests
-so Memcheck can be used to find leaks and use after free in an Inner
-Valgrind.
+The allocators in coregrind/m_mallocfree.c and VEX/priv/main_util.h are
+annotated with client requests so Memcheck can be used to find leaks
+and use after free in an Inner Valgrind.

 The Valgrind "big lock" is annotated with helgrind client requests
-so helgrind and drd can be used to find race conditions in an Inner
+so Helgrind and DRD can be used to find race conditions in an Inner
 Valgrind.

 All this has not been tested much, so don't be surprised if you hit
 problems.
From: John R. <jr...@bi...> - 2017-09-01 14:03:56
>> If you are considering translating the entire program and caching it, I
>> think that would be much faster,
>
> Mhm, but then you have the problem of finding all the code that is part of
> the program, which is equivalent to solving the halting problem.

In practice it is not that hard; I've done it twice. Transitive closure of
lexical calls, using as roots the .e_entry and all the function symbols,
goes a long way. The rest is covered by C++ virtual function tables (or
equivalent), and recognizing the code for 'switch' statements. (Yeah,
that's ugly+heuristic+compiler-dependent, but it works well enough after
a few iterations.)

--
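[Editor's note: the transitive-closure discovery John describes is, at its core, a standard worklist reachability computation. A minimal sketch, using a toy call graph as a dict; a real tool would instead decode machine code at each root to extract lexical call targets, vtable entries, and switch jump tables:

```python
# Sketch of transitive-closure code discovery. The "call graph" here is
# a toy dict mapping a function to the set of functions it directly
# calls; real roots would be .e_entry plus all function symbols.

def discover_code(roots, call_targets):
    """Return every function reachable from the roots via direct calls."""
    found = set()
    worklist = list(roots)
    while worklist:
        fn = worklist.pop()
        if fn in found:
            continue
        found.add(fn)
        # Follow each lexical call edge out of this function.
        worklist.extend(call_targets.get(fn, ()))
    return found

# Toy example: entry calls main; main calls helper (and itself);
# "orphan" is unreachable dead code and is never visited.
graph = {"entry": {"main"}, "main": {"helper", "main"},
         "helper": set(), "orphan": set()}
print(sorted(discover_code({"entry"}, graph)))  # ['entry', 'helper', 'main']
```

The function-symbol roots make the closure robust to indirect calls whose targets are named symbols; only computed targets (vtables, switch tables) need the heuristics John mentions.]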
From: Diane M <dia...@gm...> - 2017-09-01 13:52:42
Forgot to mention that we might be able to look up code in a cache using
offsets from function symbols, rather than addresses... I still don't think
this will be faster though.

Diane

On Fri, Sep 1, 2017 at 9:50 AM, Diane M <dia...@gm...> wrote:

> Julian,
>
> Somehow this message wound up in my spam folder. This is a very
> interesting thread. Please see my comments below.
>
> On Thu, Aug 31, 2017 at 4:25 AM, Julian Seward <js...@ac...> wrote:
>
>> > If you are considering translating the entire program and caching it, I
>> > think that would be much faster,
>>
>> Mhm, but then you have the problem of finding all the code that is part of
>> the program, which is equivalent to solving the halting problem.
>
> It depends how you go about it. Oracle's Discover does this, but it
> depends upon annotations inserted by the Studio compilers to tell it where
> functions start and end, when things in the .text section are actually not
> code, etc. So Discover is function-based, and Valgrind is code
> block-based. Obviously the latter is much more robust, but unfortunately,
> slower.
>
>> -----
>>
>> For these reasons, my preference is to make the JIT faster, and ultimately
>> to move to having a "two speed" JIT. That is, where code initially is
>> instrumented using a fast and low quality JIT, to reduce latency and to
>> gather branch and block-use statistics. When we decide a particular path
>> is hot enough then those blocks are given to a slower, optimising JIT, so
>> we ultimately get both low latency for cold paths and high performance for
>> hot paths. This seems to be the "modern way".
>>
>> Also, the optimising JIT can run in a helper thread, so in effect we never
>> have to wait for it, because we can just use the unoptimised version of
>> a (super)block until the optimised version is ready.
>
> Those optimizing JITs can take a very long time, but I like the idea of
> using the slower code while waiting for the faster code to be optimized by
> a separate thread. Though there are issues with doing that also having to
> do with maintaining correct program state when switching between the two.
>
> Diane
>
>> J
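[Editor's note: Diane's idea of keying cached translations by symbol-relative offset rather than absolute address would let a cache survive the binary being loaded at a different base address. A hypothetical sketch of such a key function (names and data are illustrative, not from any real implementation):

```python
import bisect

# Hypothetical: map an absolute code address to a load-address-independent
# cache key of the form (nearest_preceding_symbol, offset_from_symbol).

def make_key(addr, symbols):
    """symbols is a sorted list of (start_address, name) pairs."""
    starts = [start for start, _ in symbols]
    # Find the last symbol starting at or before addr.
    i = bisect.bisect_right(starts, addr) - 1
    if i < 0:
        return None  # address precedes every known symbol
    start, name = symbols[i]
    return (name, addr - start)

syms = [(0x1000, "main"), (0x2000, "helper")]
print(make_key(0x2010, syms))  # ('helper', 16)
```

Whether this beats plain address-keyed lookup is exactly the performance question Diane raises; the extra symbol search per lookup is the obvious cost.]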
From: Diane M <dia...@gm...> - 2017-09-01 13:51:11
Julian,

Somehow this message wound up in my spam folder. This is a very interesting
thread. Please see my comments below.

On Thu, Aug 31, 2017 at 4:25 AM, Julian Seward <js...@ac...> wrote:

> > If you are considering translating the entire program and caching it, I
> > think that would be much faster,
>
> Mhm, but then you have the problem of finding all the code that is part of
> the program, which is equivalent to solving the halting problem.

It depends how you go about it. Oracle's Discover does this, but it depends
upon annotations inserted by the Studio compilers to tell it where functions
start and end, when things in the .text section are actually not code, etc.
So Discover is function-based, and Valgrind is code block-based. Obviously
the latter is much more robust, but unfortunately, slower.

> -----
>
> For these reasons, my preference is to make the JIT faster, and ultimately
> to move to having a "two speed" JIT. That is, where code initially is
> instrumented using a fast and low quality JIT, to reduce latency and to
> gather branch and block-use statistics. When we decide a particular path
> is hot enough then those blocks are given to a slower, optimising JIT, so
> we ultimately get both low latency for cold paths and high performance for
> hot paths. This seems to be the "modern way".
>
> Also, the optimising JIT can run in a helper thread, so in effect we never
> have to wait for it, because we can just use the unoptimised version of
> a (super)block until the optimised version is ready.

Those optimizing JITs can take a very long time, but I like the idea of
using the slower code while waiting for the faster code to be optimized by a
separate thread. Though there are issues with doing that also having to do
with maintaining correct program state when switching between the two.

Diane

> J
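[Editor's note: the "two speed" JIT Julian sketches reduces, in miniature, to a per-block execution counter with a hotness threshold. A toy sketch under that assumption (all names are hypothetical; a real tier-up would queue the block to an optimising backend on a helper thread and keep running the cheap translation meanwhile, as the thread discusses):

```python
# Toy sketch of tiered ("two speed") JIT dispatch: every block first
# gets a cheap translation; once its execution count crosses a
# threshold it is swapped for an "optimised" translation. The compile
# here is inline only for simplicity.

HOT_THRESHOLD = 3

class TieredJit:
    def __init__(self):
        self.counts = {}      # block address -> execution count
        self.optimised = {}   # block address -> optimised translation

    def translate_fast(self, block):
        return f"fast({block})"       # stand-in for the quick, low-quality JIT

    def translate_optimised(self, block):
        return f"optimised({block})"  # stand-in for the slow, optimising JIT

    def execute(self, block):
        self.counts[block] = self.counts.get(block, 0) + 1
        if block in self.optimised:
            return self.optimised[block]
        if self.counts[block] >= HOT_THRESHOLD:
            # Real system: hand off to a helper thread and keep using
            # the fast translation until the optimised one is ready.
            self.optimised[block] = self.translate_optimised(block)
            return self.optimised[block]
        return self.translate_fast(block)

jit = TieredJit()
print([jit.execute("loop_head") for _ in range(4)])
# ['fast(loop_head)', 'fast(loop_head)', 'optimised(loop_head)', 'optimised(loop_head)']
```

The state-transfer problem Diane raises shows up here as the moment the dispatcher switches translations: the optimised block must be entered only at a point where the cheap block's guest state is fully materialised.]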