From: Philippe W. <phi...@sk...> - 2015-03-29 22:19:34
|
On Sun, 2015-03-29 at 14:27 -0700, Yan wrote:
> Hi guys,
>
> We use VEX pretty heavily in research here at UC Santa Barbara
> (resulting in PyVEX [1] and some published [2] and in-submission
> academic papers), and for us, the ability to handle multiple
> architectures with the same libVEX is really crucial. As it is
> already, we have to patch VEX pretty heavily (see the patch in [3]) to
> get it to work statically, but the changes in option 1 sound like
> they'll make things even worse for us :-(
>
> Of course, no one's under any obligation to make our lives over here
> easier. In fact, I talked to Philippe and Julian at FOSDEM '14 about
> me sending in some patches to change up the VEX interface to be more
> flexible (basically, option 3) and Julian seemed receptive at the
> time. However, with PhD stuff, grant progress reports, and paper
> deadlines, I haven't had any time to actually do it. If it saves
> multi-arch VEX, though, we can make it a top priority over here and
> devote a few good guys to it to get it done...
>
> So, in summary, if option 3 could be made more attractive with some
> manpower, UCSB can provide that manpower, because we use VEX in a
> multi-arch, static way for our research.

Isn't option 2 good enough? I just finished implementing option 1.

In option 1, we have the following comment:

/* For each architecture <arch>, we define 2 macros:
   <arch>FN, which takes as argument a pointer to a function.
   <arch>ST, which takes as argument a statement.
   If VEX is compiled for <arch>, these macros just expand to their arg.
   Otherwise, they expand to NULL and vassert(0) respectively.
   These macros are used to avoid introducing dependencies on object
   files not needed for the (only) architecture we are compiling for.
   To still compile the below for all supported architectures, define
   VEXMULTIARCH. */
// #define VEXMULTIARCH 1

So, a very trivial patch is enough to have VEX compiled in multi-arch
with option 1.

Of course, option 2 now only means adding a configure option.

If needed, we could maybe have option 2.5: compile main_main.c twice,
once with VEXMULTIARCH not defined to produce an object main_main.o,
and once with VEXMULTIARCH defined to produce another object
multi_arch_main_main.o. Then, I suppose that when linking, if you put
multi_arch_main_main.o before the VEX library .a, the linker will
choose multi_arch_main_main.o to resolve the needed LibVEX functions
defined in main_main.c.

Philippe
|
|
From: Yan <ya...@ya...> - 2015-03-29 21:28:27
|
Hi guys,

We use VEX pretty heavily in research here at UC Santa Barbara
(resulting in PyVEX [1] and some published [2] and in-submission
academic papers), and for us, the ability to handle multiple
architectures with the same libVEX is really crucial. As it is already,
we have to patch VEX pretty heavily (see the patch in [3]) to get it to
work statically, but the changes in option 1 sound like they'll make
things even worse for us :-(

Of course, no one's under any obligation to make our lives over here
easier. In fact, I talked to Philippe and Julian at FOSDEM '14 about me
sending in some patches to change up the VEX interface to be more
flexible (basically, option 3) and Julian seemed receptive at the time.
However, with PhD stuff, grant progress reports, and paper deadlines, I
haven't had any time to actually do it. If it saves multi-arch VEX,
though, we can make it a top priority over here and devote a few good
guys to it to get it done...

So, in summary, if option 3 could be made more attractive with some
manpower, UCSB can provide that manpower, because we use VEX in a
multi-arch, static way for our research.

- Yan

[1] http://github.com/zardus/pyvex
[2] http://www.internetsociety.org/doc/firmalice-automatic-detection-authentication-bypass-vulnerabilities-binary-firmware
[3] https://github.com/zardus/pyvex/blob/master/patches/valgrind_static_3.9.0.patch

On Sun, Mar 29, 2015 at 1:16 PM, Philippe Waroquiers <phi...@sk...> wrote:
> On Sun, 2015-03-29 at 22:46 +0200, Florian Krohm wrote:
> > Julian can give the full history.
> > There is plenty of evidence that the original VEX design goal was to
> > support host != guest. But when I began looking at V code in 2006
> > the code wasn't clean in that respect. And it hasn't gotten any
> > better. Here is a quote from Julian from Dec 2014 clarifying:
> >
> >   This guest-vs-host stuff is still partly alive as a result of a
> >   hope I had that someone might want to do a cross-valgrind one day,
> >   eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
> >   never once heard any mention of such a thing. So perhaps it's
> >   time to give up on that one.
> >
> > I think that settles your question :)
> Yes :).
>
> > > If VEX host and guest are in any case supposed to be the same,
> > > then solution 1 is the easiest.
> >
> > Yes.
> >
> > It's funny.. I looked at this very same issue a few weeks back but
> > could not figure out the autotools stuff.
> > Note, though, that we still want to compile all VEX sources and not
> > just the ones pertaining to the current architecture.
> Yes, for sure, that is the idea.
> I am busy now doing the solution 1.
> I have already eliminated s390 :).
>
> Philippe
|
|
From: Philippe W. <phi...@sk...> - 2015-03-29 21:15:50
|
On Sun, 2015-03-29 at 22:46 +0200, Florian Krohm wrote:
> Julian can give the full history.
> There is plenty of evidence that the original VEX design goal was to
> support host != guest. But when I began looking at V code in 2006 the
> code wasn't clean in that respect. And it hasn't gotten any better.
> Here is a quote from Julian from Dec 2014 clarifying:
>
>   This guest-vs-host stuff is still partly alive as a result of a
>   hope I had that someone might want to do a cross-valgrind one day,
>   eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
>   never once heard any mention of such a thing. So perhaps it's
>   time to give up on that one.
>
> I think that settles your question :)
Yes :).

> > If VEX host and guest are in any case supposed to be the same,
> > then solution 1 is the easiest.
>
> Yes.
>
> It's funny.. I looked at this very same issue a few weeks back but
> could not figure out the autotools stuff.
> Note, though, that we still want to compile all VEX sources and not
> just the ones pertaining to the current architecture.
Yes, for sure, that is the idea.
I am busy now doing the solution 1.
I have already eliminated s390 :).

Philippe
|
|
From: Florian K. <fl...@ei...> - 2015-03-29 20:46:58
|
On 29.03.2015 18:03, Philippe Waroquiers wrote:
> So, it looks interesting to avoid dragging in all the other archs'
> VEX objects.
Yes, please. +10

> I see 3 ways to do the above:
>
> 1. Using a few conditional macros in main_main.c, ensure that only
> the functions needed for the compiled architecture are referenced.
> This is easy to do.
> However, this means the compiled VEX library can only be used with
> one single architecture: host and guest will be the same, and
> will be the one for which the VEX lib is compiled.
> I do not know whether the VEX lib is supposed to be usable with
> host and guest different.
Julian can give the full history.
There is plenty of evidence that the original VEX design goal was to
support host != guest. But when I began looking at V code in 2006 the
code wasn't clean in that respect. And it hasn't gotten any better.
Here is a quote from Julian from Dec 2014 clarifying:

  This guest-vs-host stuff is still partly alive as a result of a
  hope I had that someone might want to do a cross-valgrind one day,
  eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
  never once heard any mention of such a thing. So perhaps it's
  time to give up on that one.

I think that settles your question :)

> If VEX host and guest are in any case supposed to be the same,
> then solution 1 is the easiest.
Yes.

It's funny.. I looked at this very same issue a few weeks back but
could not figure out the autotools stuff.
Note, though, that we still want to compile all VEX sources and not
just the ones pertaining to the current architecture.

Florian
|
|
From: Philippe W. <phi...@sk...> - 2015-03-29 17:03:14
|
When linking a tool for a certain architecture (e.g. x86), the resulting
executable contains a significant proportion of the VEX library for
other architectures (amd64, arm, ppc, mips, s390).
After looking at the dependencies, the other architectures object
files are dragged in by various 'switch (arch)' in main_main.c.
For example,

   case VexArchPPC64:
      mode64      = True;
      rRegUniv    = getRRegUniverse_PPC(mode64);
      isMove      = (__typeof__(isMove)) isMove_PPCInstr;
      getRegUsage = (__typeof__(getRegUsage)) getRegUsage_PPCInstr;

will drag in various ppc64 objects.
I have quickly done a trial to see how much we can gain by having
main_main.c only dragging one architecture.
On x86, the text size of memcheck-x86-linux decreases
from about 3.8 MB to about 2MB.
The startup of a tool is also slightly faster: valgrind reads its own
debug info, and smaller executable means smaller debug info to read.
The mmap-ed size of the "dinfo" arena also decreases by about 9MB
(peak mmaped decreases by 14MB).
So, it looks interesting to avoid dragging in all the other archs' VEX
objects.
I see 3 ways to do the above:
1. Using a few conditional macros in main_main.c, ensure that only
the functions needed for the compiled architecture
are referenced.
This is easy to do.
However, this means the compiled VEX library can only be used with
one single architecture: host and guest will be the same, and
will be the one for which the VEX lib is compiled.
I do not know whether the VEX lib is supposed to be usable with
host and guest different.
If VEX host and guest are in any case supposed to be the same,
then solution 1 is the easiest.
2. Same as 1, but allow a configure option to still allow all
architectures to be compiled in.
A little bit more work than 1.
Advantage: if someone needs a multi-arch VEX lib, it can be
decided at configure time.
3. Decouple main_main.c from the 'backend VEX' by extending VexArchInfo
and/or adding a VexBackEndInfo structure, containing pointers to the
arch specific functions.
I have started doing this (half done patch attached, only works
for x86, as the decoupling infrastructure is incomplete).
All that is not very straightforward/is a lot more work.
And of course, the VEX user will have to call an 'arch-dependent'
initialise procedure (one of the things not yet done in the patch).
Personally, 1 (or maybe 2) looks good enough to me,
but that assumes there is no (significant) need for a multi-arch VEX
lib.
Feedback/comments/suggestions ?
Philippe
|
|
From: <sv...@va...> - 2015-03-29 05:21:22
|
Author: rhyskidd
Date: Sun Mar 29 06:21:15 2015
New Revision: 15048
Log:
Fix memcheck/tests/sendmsg on OS X
bz#345637
- Support the lowercase form of libsystem* in the filter_libc script
Before:
== 590 tests, 238 stderr failures, 22 stdout failures, 0 stderrB failures, 0 stdoutB failures, 31 post failures ==
After:
== 590 tests, 237 stderr failures, 22 stdout failures, 0 stderrB failures, 0 stdoutB failures, 31 post failures ==
Modified:
trunk/NEWS
trunk/tests/filter_libc
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Sun Mar 29 06:21:15 2015
@@ -145,6 +145,7 @@
344939 Fix memcheck/tests/xml1 on OS X 10.10
345016 helgrind/tests/locked_vs_unlocked2 is failing sometimes
345394 Fix memcheck/tests/strchr on OS X
+345637 Fix memcheck/tests/sendmsg on OS X
n-i-bz Provide implementations of certain compiler builtins to support
compilers who may not provide those
n-i-bz Old STABS code is still being compiled, but never used. Remove it.
Modified: trunk/tests/filter_libc
==============================================================================
--- trunk/tests/filter_libc (original)
+++ trunk/tests/filter_libc Sun Mar 29 06:21:15 2015
@@ -9,9 +9,11 @@
s/ __GI___/ __/;
s/ __([a-z]*)_nocancel / $1 /;
- # "libSystem*" occurs on Darwin.
+ # "lib[S|s]ystem*" occurs on Darwin.
s/\(in \/.*(libc|libSystem).*\)$/(in \/...libc...)/;
s/\(within \/.*(libc|libSystem).*\)$/(within \/...libc...)/;
+ s/\(in \/.*(libc|libsystem).*\)$/(in \/...libc...)/;
+ s/\(within \/.*(libc|libsystem).*\)$/(within \/...libc...)/;
# Filter out dynamic loader
s/ \(in \/.*ld-.*so\)$//;
|