From: Shujunjun <shu...@hu...> - 2015-03-30 08:10:43
|
> Also, using dlopen() with RTLD_NEXT enables the construction of a shared library
> that can "wrap" any call to a function that has dynamic linkage. Any call
> that goes through the PLT [Procedure Linkage Table] can be intercepted this way.

Hi John Reiser,
Thanks a lot for your suggestion.
You have given me a nice way to trace a function linked from a shared library.
But there are some problems with using your method in my case:
a. Tracing a function's input and output is only one part of my tool; the tool also does other work that depends on valgrind, so I would still like to find a way to implement this feature within valgrind if it can support it.
b. The functions I trace are usually not linked from a shared library. They are typically application-level functions, and as a tool developer I cannot force users to move them into a shared library.
So I would still like to know: how can my valgrind tool learn the input args and return value of a specific guest function?
shu...@hu...
Best Regards.
________________________________________
From: John Reiser [jr...@bi...]
Sent: 30 March 2015 13:01
To: val...@li...
Subject: Re: [Valgrind-developers] Ask for how to know a function's input args and return value
> I want to trace a special function used in the guest program.
> For example, a function defined like this:
>     msg_type * alloc_msg(char type, int num)
> When the guest program calls this special function "alloc_msg", my valgrind tool can know the input args "type" and "num", and when "alloc_msg" returns, my valgrind tool can know the return value.
Note that in some cases similar functionality is provided outside of valgrind
by tools such as ltrace.
Also, using dlopen() with RTLD_NEXT enables the construction of a shared library
that can "wrap" any call to a function that has dynamic linkage. Any call
that goes through the PLT [Procedure Linkage Table] can be intercepted this way.
Calls to a C static function cannot, nor calls to an inlined function.
Once built, the shared library can be activated for any process without
re-compilation or re-linking, by using the shell environment variable LD_PRELOAD.
Consult "man dlopen" and search the 'net for a complete example.
Both of these are not exactly what you asked for, but they are
substantially similar in significant ways, avoid using valgrind altogether,
and usually execute at least 10 times faster than valgrind.
------------------------------------------------------------------------------
Dive into the World of Parallel Programming The Go Parallel Website, sponsored
by Intel and developed in partnership with Slashdot Media, is your hub for all
things parallel software development, from weekly thought leadership blogs to
news, videos, case studies, tutorials and more. Take a look and join the
conversation now. http://goparallel.sourceforge.net/
_______________________________________________
Valgrind-developers mailing list
Val...@li...
https://lists.sourceforge.net/lists/listinfo/valgrind-developers |
|
From: John R. <jr...@bi...> - 2015-03-30 05:01:28
|
> I want to trace a special function used in the guest program.
> For example, a function defined like this:
>     msg_type * alloc_msg(char type, int num)
> When the guest program calls this special function "alloc_msg", my valgrind tool can know the input args "type" and "num", and when "alloc_msg" returns, my valgrind tool can know the return value.

Note that in some cases similar functionality is provided outside of valgrind by tools such as ltrace.

Also, using dlopen() with RTLD_NEXT enables the construction of a shared library that can "wrap" any call to a function that has dynamic linkage. Any call that goes through the PLT [Procedure Linkage Table] can be intercepted this way. Calls to a C static function cannot, nor calls to an inlined function. Once built, the shared library can be activated for any process without re-compilation or re-linking, by using the shell environment variable LD_PRELOAD. Consult "man dlopen" and search the 'net for a complete example.

Both of these are not exactly what you asked for, but they are substantially similar in significant ways, avoid using valgrind altogether, and usually execute at least 10 times faster than valgrind. |
|
From: Shujunjun <shu...@hu...> - 2015-03-30 03:30:16
|
Hi,

Sorry, this is my first time sending email to the valgrind list; if this is the wrong place, could you forward this email to the right people? Thanks a lot.

I have a problem while developing a valgrind tool (function_trace) for my business code. I want to trace a special function used in the guest program. For example, a function defined like this:

    msg_type * alloc_msg(char type, int num)

When the guest program calls this special function "alloc_msg", my valgrind tool should know the input args "type" and "num", and when "alloc_msg" returns, my valgrind tool should know the return value.

Is there any way to implement this? Or, which valgrind tools (such as memcheck or callgrind) implement such functionality, so I can learn from them? Or does valgrind have interfaces to get such information?

PS: I develop my valgrind tool based on "valgrind-3.10.1" downloaded from "http://valgrind.org/".

Thank you for your help.
shu...@hu...<mailto:shu...@hu...>
Best Regards. |
|
From: <sv...@va...> - 2015-03-30 00:06:03
|
Author: petarj
Date: Mon Mar 30 01:05:54 2015
New Revision: 15049
Log:
mips: update list of ignored files in auxprogs
Update the ignore list with:
- getoff-mips32-linux
- getoff-mips64-linux
Modified:
trunk/auxprogs/ (props changed)
|
|
From: Philippe W. <phi...@sk...> - 2015-03-29 22:19:34
|
On Sun, 2015-03-29 at 14:27 -0700, Yan wrote:
> Hi guys,
>
> We use VEX pretty heavily in research here at UC Santa Barbara
> (resulting in PyVEX [1] and some published [2] and in-submission
> academic papers), and for us, the ability to handle multiple
> architectures with the same libVEX is really crucial. As it is
> already, we have to patch VEX pretty heavily (see the patch in [3]) to
> get it to work statically, but the changes in option 1 sound like
> they'll make things even worse for us :-(
>
> Of course, no one's under any obligation to make our lives over here
> easier. In fact, I talked to Philippe and Julian at FOSDEM '14 about
> me sending in some patches to change up the VEX interface to be more
> flexible (basically, option 3) and Julian seemed receptive at the
> time. However, with PhD stuff, grant progress reports, and paper
> deadlines, I haven't had any time to actually do it. If it saves
> multi-arch VEX, though, we can make it a top priority over here and
> devote a few good guys to it to get it done...
>
> So, in summary, if option 3 could be made more attractive with some
> manpower, UCSB can provide that manpower, because we use VEX in a
> multi-arch, static way for our research.

Isn't option 2 good enough?

I just finished implementing option 1. In option 1, we have the following comment:

  /* For each architecture <arch>, we define 2 macros:
     <arch>FN that has as argument a pointer to a function.
     <arch>ST that has as argument a statement.
     If VEX is compiled for <arch>, then these macros just expand their arg.
     Otherwise, the macros expand to respectively NULL and vassert(0).
     These macros are used to avoid introducing dependencies to object
     files not needed for the (only) architecture we are compiling for.
     To still compile the below for all supported architectures, define
     VEXMULTIARCH. */
  // #define VEXMULTIARCH 1

So, a very trivial patch is enough to have VEX compiled in multiarch with option 1. Of course, option 2 now only means adding a configure option.

If needed, we could maybe have option 2.5: we compile main_main.c twice, once with VEXMULTIARCH not defined to produce an object main_main.o, and once with VEXMULTIARCH defined to produce another object multi_arch_main_main.o. Then, I suppose that when linking, if you put multi_arch_main_main.o before the VEX library .a, the linker will choose multi_arch_main_main.o to resolve the needed LibVEX functions defined in main_main.c.

Philippe |
|
From: Yan <ya...@ya...> - 2015-03-29 21:28:27
|
Hi guys,

We use VEX pretty heavily in research here at UC Santa Barbara (resulting in PyVEX [1] and some published [2] and in-submission academic papers), and for us, the ability to handle multiple architectures with the same libVEX is really crucial. As it is already, we have to patch VEX pretty heavily (see the patch in [3]) to get it to work statically, but the changes in option 1 sound like they'll make things even worse for us :-(

Of course, no one's under any obligation to make our lives over here easier. In fact, I talked to Philippe and Julian at FOSDEM '14 about me sending in some patches to change up the VEX interface to be more flexible (basically, option 3) and Julian seemed receptive at the time. However, with PhD stuff, grant progress reports, and paper deadlines, I haven't had any time to actually do it. If it saves multi-arch VEX, though, we can make it a top priority over here and devote a few good guys to it to get it done...

So, in summary, if option 3 could be made more attractive with some manpower, UCSB can provide that manpower, because we use VEX in a multi-arch, static way for our research.

- Yan

[1] http://github.com/zardus/pyvex
[2] http://www.internetsociety.org/doc/firmalice-automatic-detection-authentication-bypass-vulnerabilities-binary-firmware
[3] https://github.com/zardus/pyvex/blob/master/patches/valgrind_static_3.9.0.patch

On Sun, Mar 29, 2015 at 1:16 PM, Philippe Waroquiers <phi...@sk...> wrote:
> On Sun, 2015-03-29 at 22:46 +0200, Florian Krohm wrote:
> > Julian can give the full history.
> > There is plenty of evidence that the original VEX design goal was to
> > support host != guest. But when I began looking at V code in 2006 the
> > code wasn't clean in that respect. And it hasn't gone any better.
> > Here is a quote from Julian from Dec 2014 clarifying:
> >
> >   This guest-vs-host stuff is still partly alive as a result of a
> >   hope I had that someone might want to do a cross-valgrind one day,
> >   eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
> >   never once heard any mention of such a thing. So perhaps it's
> >   time to give up on that one.
> >
> > I think that settles your question :)
> Yes :).
>
> > > If VEX host and guest are in any case supposed to be the same,
> > > then solution 1 is the easiest.
> >
> > Yes.
> >
> > It's funny.. I looked at this very same issue a few weeks back but could
> > not figure out the autotools stuff.
> > Note, though, that we still want to compile all VEX sources and not just
> > the ones pertaining to the current architecture.
> Yes, for sure, that is the idea.
> I am busy now doing the solution 1.
> I have already eliminated s390 :).
>
> Philippe
|
|
From: Philippe W. <phi...@sk...> - 2015-03-29 21:15:50
|
On Sun, 2015-03-29 at 22:46 +0200, Florian Krohm wrote:
> Julian can give the full history.
> There is plenty of evidence that the original VEX design goal was to
> support host != guest. But when I began looking at V code in 2006 the
> code wasn't clean in that respect. And it hasn't gone any better.
> Here is a quote from Julian from Dec 2014 clarifying:
>
>   This guest-vs-host stuff is still partly alive as a result of a
>   hope I had that someone might want to do a cross-valgrind one day,
>   eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
>   never once heard any mention of such a thing. So perhaps it's
>   time to give up on that one.
>
> I think that settles your question :)

Yes :).

> > If VEX host and guest are in any case supposed to be the same,
> > then solution 1 is the easiest.
>
> Yes.
>
> It's funny.. I looked at this very same issue a few weeks back but could
> not figure out the autotools stuff.
> Note, though, that we still want to compile all VEX sources and not just
> the ones pertaining to the current architecture.

Yes, for sure, that is the idea.
I am busy now doing the solution 1.
I have already eliminated s390 :).

Philippe |
|
From: Florian K. <fl...@ei...> - 2015-03-29 20:46:58
|
On 29.03.2015 18:03, Philippe Waroquiers wrote:
> So, it looks interesting to avoid dragging all the other archs VEX
> objects.

Yes, please. +10

> I see 3 ways to do the above:
>
> 1. Using a few conditional macros in main_main.c, ensure only
>    the functions needed for the compiled architecture
>    are referenced.
>    This is easy to do.
>    However, this means the compiled VEX library can only be used with
>    one single architecture: host and guest will be the same, and
>    will be the one for which the VEX lib is compiled.
>    I do not know if the VEX lib is supposed to be usable with
>    host and guest different.

Julian can give the full history. There is plenty of evidence that the original VEX design goal was to support host != guest. But when I began looking at V code in 2006 the code wasn't clean in that respect. And it hasn't gone any better. Here is a quote from Julian from Dec 2014 clarifying:

  This guest-vs-host stuff is still partly alive as a result of a
  hope I had that someone might want to do a cross-valgrind one day,
  eg ARM32 guest on AMD64 host. But it's been 12+ years and I've
  never once heard any mention of such a thing. So perhaps it's
  time to give up on that one.

I think that settles your question :)

> If VEX host and guest are in any case supposed to be the same,
> then solution 1 is the easiest.

Yes.

It's funny.. I looked at this very same issue a few weeks back but could not figure out the autotools stuff. Note, though, that we still want to compile all VEX sources and not just the ones pertaining to the current architecture.

Florian |
|
From: Philippe W. <phi...@sk...> - 2015-03-29 17:03:14
|
When linking a tool for a certain architecture (e.g. x86), the resulting
executable contains a significant proportion of the VEX library for
other architectures (amd64, arm, ppc, mips, s390).
After looking at the dependencies, the other architectures object
files are dragged in by various 'switch (arch)' in main_main.c.
For example,
case VexArchPPC64:
mode64 = True;
rRegUniv = getRRegUniverse_PPC(mode64);
isMove = (__typeof__(isMove)) isMove_PPCInstr;
getRegUsage = (__typeof__(getRegUsage)) getRegUsage_PPCInstr;
will drag various ppc64 objects.
I have quickly done a trial to see how much we can gain by having
main_main.c only dragging one architecture.
On x86, the text size of memcheck-x86-linux decreases
from about 3.8 MB to about 2 MB.
The startup of a tool is also slightly faster: valgrind reads its own
debug info, and a smaller executable means smaller debug info to read.
The mmap-ed size of the "dinfo" arena also decreases by about 9 MB
(peak mmap-ed size decreases by 14 MB).
So, it looks interesting to avoid dragging all the other archs VEX
objects.
I see 3 ways to do the above:
1. Using a few conditional macros in main_main.c, ensure only
the functions needed for the compiled architecture
are referenced
This is easy to do.
However, this means the compiled VEX library can only be used with
one single architecture: host and guest will be the same, and
will be the one for which the VEX lib is compiled.
I do not know if the VEX lib is supposed to be usable with
host and guest being different.
If VEX host and guest are in any case supposed to be the same,
then solution 1 is the easiest.
2. Same as 1, but allow a configure option to still allow all
architectures to be compiled in.
A little bit more work than 1.
Advantage: if someone needs a multi-arch VEX lib, it can be
decided at configure time.
3. Decouple main_main.c from the 'backend VEX' by extending VexArchInfo
and/or adding a VexBackEndInfo structure, containing pointers to the
arch specific functions.
I have started doing this (half done patch attached, only works
for x86, as the decoupling infrastructure is incomplete).
All of that is not very straightforward, and it is a lot more work.
And of course, the VEX user will have to call a 'arch dependent'
initialise procedure (one of the things not yet done in the patch).
Personally, 1 (or maybe 2) looks good enough to me,
but that assumes there is no (significant) need for a multi-arch VEX
lib.
Feedback/comments/suggestions ?
Philippe
|
|
From: <sv...@va...> - 2015-03-29 05:21:22
|
Author: rhyskidd
Date: Sun Mar 29 06:21:15 2015
New Revision: 15048
Log:
Fix memcheck/tests/sendmsg on OS X
bz#345637
- Support the lowercase form of libsystem* in the filter_libc script
Before:
== 590 tests, 238 stderr failures, 22 stdout failures, 0 stderrB failures, 0 stdoutB failures, 31 post failures ==
After:
== 590 tests, 237 stderr failures, 22 stdout failures, 0 stderrB failures, 0 stdoutB failures, 31 post failures ==
Modified:
trunk/NEWS
trunk/tests/filter_libc
Modified: trunk/NEWS
==============================================================================
--- trunk/NEWS (original)
+++ trunk/NEWS Sun Mar 29 06:21:15 2015
@@ -145,6 +145,7 @@
344939 Fix memcheck/tests/xml1 on OS X 10.10
345016 helgrind/tests/locked_vs_unlocked2 is failing sometimes
345394 Fix memcheck/tests/strchr on OS X
+345637 Fix memcheck/tests/sendmsg on OS X
n-i-bz Provide implementations of certain compiler builtins to support
compilers who may not provide those
n-i-bz Old STABS code is still being compiled, but never used. Remove it.
Modified: trunk/tests/filter_libc
==============================================================================
--- trunk/tests/filter_libc (original)
+++ trunk/tests/filter_libc Sun Mar 29 06:21:15 2015
@@ -9,9 +9,11 @@
s/ __GI___/ __/;
s/ __([a-z]*)_nocancel / $1 /;
- # "libSystem*" occurs on Darwin.
+ # "lib[S|s]ystem*" occurs on Darwin.
s/\(in \/.*(libc|libSystem).*\)$/(in \/...libc...)/;
s/\(within \/.*(libc|libSystem).*\)$/(within \/...libc...)/;
+ s/\(in \/.*(libc|libsystem).*\)$/(in \/...libc...)/;
+ s/\(within \/.*(libc|libsystem).*\)$/(within \/...libc...)/;
# Filter out dynamic loader
s/ \(in \/.*ld-.*so\)$//;
|
|
From: Matthias S. <zz...@ge...> - 2015-03-28 20:04:28
|
On 28.03.2015 19:46, Florian Krohm wrote:
>
> C++'s static_assert cannot be used on file scope either.
C++'s static_assert and C's _Static_assert can be used at file scope.
As far as I could find out, they are declarations and so are allowed
inside functions, structs, classes and blocks (including file scope in
and out of namespaces).
I tried it and it works quite nicely:
#define STATIC_ASSERT(expr) _Static_assert(expr, "failed")
I added a bogus check, and the error message looks nicer:
In file included from pub_core_basics.h:40:0,
from m_cache.c:32:
../include/pub_tool_basics.h:380:1: error: static assertion failed: "failed"
STATIC_ASSERT(1==2);
^
The only missing piece is how to detect availability of this keyword.
Maybe a check for the gcc version is enough here.
Matthias
|
|
From: <sv...@va...> - 2015-03-28 18:48:28
|
Author: florian
Date: Sat Mar 28 18:48:20 2015
New Revision: 3109
Log:
Add STATIC_ASSERT. Remove VG__STRING.
Modified:
trunk/priv/main_util.h
Modified: trunk/priv/main_util.h
==============================================================================
--- trunk/priv/main_util.h (original)
+++ trunk/priv/main_util.h Sat Mar 28 18:48:20 2015
@@ -50,13 +50,14 @@
# define offsetof(type,memb) ((SizeT)(HWord)&((type*)0)->memb)
#endif
-/* Stuff for panicking and assertion. */
+// Poor man's static assert
+#define STATIC_ASSERT(x) extern int vex__unused_array[(x) ? 1 : -1]
-#define VG__STRING(__str) #__str
+/* Stuff for panicking and assertion. */
#define vassert(expr) \
((void) (LIKELY(expr) ? 0 : \
- (vex_assert_fail (VG__STRING(expr), \
+ (vex_assert_fail (#expr, \
__FILE__, __LINE__, \
__PRETTY_FUNCTION__), 0)))
|
|
From: Florian K. <fl...@ei...> - 2015-03-28 18:46:47
|
On 28.03.2015 10:19, Matthias Schwarzott wrote:
> On 27.03.2015 23:18, Florian Krohm wrote:
> >
> > I like the implementation of VKI_STATIC_ASSERT (without the C++
> > mumbo-jumbo) better than what I came up with, as it avoids adding a
> > symbol. So I would use that approach for VEX/VG_STATIC_ASSERT as well.

I spoke too fast. VKI_STATIC_ASSERT cannot be used at file scope. So it's out.

> Maybe it is possible to use static_assert (C++11) or _Static_assert
> (should be in C11) if the compiler supports it.

C++'s static_assert cannot be used at file scope either. So I ended up using

  +// Poor man's static assert
  +#define STATIC_ASSERT(x) extern int VG_(VG_(VG_(unused)))[(x) ? 1 : -1]

which works in all scopes (except function parameter scope).

Florian |
|
From: <sv...@va...> - 2015-03-28 18:36:09
|
Author: florian
Date: Sat Mar 28 18:36:01 2015
New Revision: 15047
Log:
Add STATIC_ASSERT and use it.
Modified:
trunk/include/pub_tool_basics.h
trunk/memcheck/tests/vbit-test/irops.c
Modified: trunk/include/pub_tool_basics.h
==============================================================================
--- trunk/include/pub_tool_basics.h (original)
+++ trunk/include/pub_tool_basics.h Sat Mar 28 18:36:01 2015
@@ -367,6 +367,9 @@
} var = { .in = x }; var.out; \
})
+// Poor man's static assert
+#define STATIC_ASSERT(x) extern int VG_(VG_(VG_(unused)))[(x) ? 1 : -1]
+
#endif /* __PUB_TOOL_BASICS_H */
/*--------------------------------------------------------------------*/
Modified: trunk/memcheck/tests/vbit-test/irops.c
==============================================================================
--- trunk/memcheck/tests/vbit-test/irops.c (original)
+++ trunk/memcheck/tests/vbit-test/irops.c Sat Mar 28 18:36:01 2015
@@ -2,6 +2,7 @@
#include <stdio.h> // fprintf
#include <stdlib.h> // exit
+#include "pub_tool_basics.h" // STATIC_ASSERT
#include "vtest.h"
#define DEFOP(op,ukind) op, #op, ukind
@@ -1050,9 +1051,8 @@
/* Force compile time failure in case libvex_ir.h::IROp was updated
and the irops array is out of synch */
-extern int ensure_complete[
- (sizeof irops / sizeof *irops == Iop_LAST - Iop_INVALID - 1) ? 1 : -1
- ];
+STATIC_ASSERT \
+ (sizeof irops / sizeof *irops == Iop_LAST - Iop_INVALID - 1);
/* Return a descriptor for OP, iff it exists and it is implemented
for the current architecture. */
|
Author: philippe
Date: Sat Mar 28 12:52:23 2015
New Revision: 15046
Log:
Extensible main thread stack is tricky :(.
Revision 14976 causes a regression: a stacktrace produced when the
stack has not yet been extended to cover SP will only contain one
element, as the stack limits are considered to be the limits of
the resvn segment.
This patch fixes that, by taking Resvn/SmUpper segment into
account to properly compute the limits.
It also contains a new regtest that fails with the trunk
(only one function in the stacktrace)
and succeeds with this patch (the 2 expected functions).
Added:
trunk/memcheck/tests/resvn_stack.c
trunk/memcheck/tests/resvn_stack.stderr.exp
trunk/memcheck/tests/resvn_stack.vgtest
Modified:
trunk/coregrind/m_stacks.c
trunk/memcheck/tests/Makefile.am
Modified: trunk/coregrind/m_stacks.c
==============================================================================
--- trunk/coregrind/m_stacks.c (original)
+++ trunk/coregrind/m_stacks.c Sat Mar 28 12:52:23 2015
@@ -277,49 +277,72 @@
*end = stack->end;
}
- /* SP is assumed to be in a RW segment.
+ /* SP is assumed to be in a RW segment or in the SkResvn segment of an
+ extensible stack (normally, only the main thread has an extensible
+ stack segment).
If no such segment is found, assume we have no valid
stack for SP, and set *start and *end to 0.
- Otherwise, possibly reduce the stack limits to the boundaries of the
- RW segment containing SP. */
+ Otherwise, possibly reduce the stack limits using the boundaries of
+ the RW segment/SkResvn segments containing SP. */
if (stackseg == NULL) {
VG_(debugLog)(2, "stacks",
"no addressable segment for SP %p\n",
(void*)SP);
*start = 0;
*end = 0;
- } else if (!stackseg->hasR || !stackseg->hasW) {
+ return;
+ }
+
+ if ((!stackseg->hasR || !stackseg->hasW)
+ && (stackseg->kind != SkResvn || stackseg->smode != SmUpper)) {
VG_(debugLog)(2, "stacks",
- "segment for SP %p is not Readable and/or not Writable\n",
+ "segment for SP %p is not RW or not a SmUpper Resvn\n",
(void*)SP);
*start = 0;
*end = 0;
- } else {
- if (*start < stackseg->start) {
- VG_(debugLog)(2, "stacks",
- "segment for SP %p changed stack start limit"
- " from %p to %p\n",
- (void*)SP, (void*)*start, (void*)stackseg->start);
- *start = stackseg->start;
- }
- if (*end > stackseg->end) {
- VG_(debugLog)(2, "stacks",
- "segment for SP %p changed stack end limit"
- " from %p to %p\n",
- (void*)SP, (void*)*end, (void*)stackseg->end);
- *end = stackseg->end;
- }
+ return;
+ }
+
+ // SP is in a RW segment, or in the SkResvn of an extensible stack.
+ if (*start < stackseg->start) {
+ VG_(debugLog)(2, "stacks",
+ "segment for SP %p changed stack start limit"
+ " from %p to %p\n",
+ (void*)SP, (void*)*start, (void*)stackseg->start);
+ *start = stackseg->start;
+ }
- /* If reducing start and/or end to the SP segment gives an
- empty range, return 'empty' limits */
- if (*start > *end) {
+ if (stackseg->kind == SkResvn) {
+ stackseg = VG_(am_next_nsegment)(stackseg, /*forward*/ True);
+ if (!stackseg || !stackseg->hasR || !stackseg->hasW
+ || stackseg->kind != SkAnonC) {
VG_(debugLog)(2, "stacks",
- "stack for SP %p start %p after end %p\n",
- (void*)SP, (void*)*start, (void*)end);
+ "Next forward segment for SP %p Resvn segment"
+ " is not RW or not AnonC\n",
+ (void*)SP);
*start = 0;
*end = 0;
+ return;
}
}
+
+ if (*end > stackseg->end) {
+ VG_(debugLog)(2, "stacks",
+ "segment for SP %p changed stack end limit"
+ " from %p to %p\n",
+ (void*)SP, (void*)*end, (void*)stackseg->end);
+ *end = stackseg->end;
+ }
+
+ /* If reducing start and/or end to the SP segment gives an
+ empty range, return 'empty' limits */
+ if (*start > *end) {
+ VG_(debugLog)(2, "stacks",
+ "stack for SP %p start %p after end %p\n",
+ (void*)SP, (void*)*start, (void*)end);
+ *start = 0;
+ *end = 0;
+ }
}
/* complaints_stack_switch reports that SP has changed by more than some
Modified: trunk/memcheck/tests/Makefile.am
==============================================================================
--- trunk/memcheck/tests/Makefile.am (original)
+++ trunk/memcheck/tests/Makefile.am Sat Mar 28 12:52:23 2015
@@ -214,6 +214,7 @@
realloc2.stderr.exp realloc2.vgtest \
realloc3.stderr.exp realloc3.vgtest \
recursive-merge.stderr.exp recursive-merge.vgtest \
+ resvn_stack.stderr.exp resvn_stack.vgtest \
sbfragment.stdout.exp sbfragment.stderr.exp sbfragment.vgtest \
sem.stderr.exp sem.vgtest \
sendmsg.stderr.exp sendmsg.vgtest \
@@ -340,6 +341,7 @@
post-syscall \
realloc1 realloc2 realloc3 \
recursive-merge \
+ resvn_stack \
sbfragment \
sendmsg \
sh-mem sh-mem-random \
Added: trunk/memcheck/tests/resvn_stack.c
==============================================================================
--- trunk/memcheck/tests/resvn_stack.c (added)
+++ trunk/memcheck/tests/resvn_stack.c Sat Mar 28 12:52:23 2015
@@ -0,0 +1,23 @@
+#include <stdio.h>
+
+__attribute__((noinline)) void big(void)
+{
+ /* The below ensures the stack grows a lot. However, we hope the stack
+ extension is not done yet, as no memory has been read/written. */
+ volatile char c[200000];
+
+ /* Access only the higher part of the stack, to avoid mapping SP */
+ /* The below 2 printfs should produce deterministic output, whatever
+ the random value of c[]. */
+ if (c[200000 - 1])
+ fprintf(stderr, "Accessing fresh %s\n", "stack");
+ else
+ fprintf(stderr, "Accessing %s stack\n", "fresh");
+
+}
+
+int main(void )
+{
+ big();
+ return 0;
+}
Added: trunk/memcheck/tests/resvn_stack.stderr.exp
==============================================================================
--- trunk/memcheck/tests/resvn_stack.stderr.exp (added)
+++ trunk/memcheck/tests/resvn_stack.stderr.exp Sat Mar 28 12:52:23 2015
@@ -0,0 +1,5 @@
+Conditional jump or move depends on uninitialised value(s)
+ at 0x........: big (resvn_stack.c:12)
+ by 0x........: main (resvn_stack.c:21)
+
+Accessing fresh stack
Added: trunk/memcheck/tests/resvn_stack.vgtest
==============================================================================
--- trunk/memcheck/tests/resvn_stack.vgtest (added)
+++ trunk/memcheck/tests/resvn_stack.vgtest Sat Mar 28 12:52:23 2015
@@ -0,0 +1,2 @@
+prog: resvn_stack
+vgopts: -q
From: <sv...@va...> - 2015-03-28 12:23:15
Author: philippe
Date: Sat Mar 28 12:23:07 2015
New Revision: 15045
Log:
The hint given by the Valgrind gdbserver when enabling host
visibility was wrongly giving the file load address
instead of the text segment start address.
This meant that GDB showed wrong symbols for an address
(typically, symbols slightly before the address being printed).
This patch ensures the hint uses the text start address.
Modified:
trunk/coregrind/m_gdbserver/server.c
Modified: trunk/coregrind/m_gdbserver/server.c
==============================================================================
--- trunk/coregrind/m_gdbserver/server.c (original)
+++ trunk/coregrind/m_gdbserver/server.c Sat Mar 28 12:23:07 2015
@@ -331,7 +331,12 @@
if (hostvisibility) {
const DebugInfo *tooldi
= VG_(find_DebugInfo) ((Addr)handle_gdb_valgrind_command);
- const NSegment *toolseg
+ /* Normally, we should always find the tooldi. In case we
+ do not, suggest a 'likely somewhat working' address: */
+ const Addr tool_text_start
+ = tooldi ?
+ VG_(DebugInfo_get_text_avma) (tooldi) : 0x38000000;
+ const NSegment *toolseg
= tooldi ?
VG_(am_find_nsegment) (VG_(DebugInfo_get_text_avma) (tooldi))
: NULL;
@@ -342,7 +347,7 @@
"add-symbol-file %s %p\n",
toolseg ? VG_(am_get_filename)(toolseg)
: "<toolfile> <address> e.g.",
- toolseg ? (void*)toolseg->start : (void*)0x38000000);
+ (void*)tool_text_start);
} else
VG_(gdb_printf)
("Disabled access to Valgrind memory/status by GDB\n");
From: <sv...@va...> - 2015-03-28 12:02:06
Author: philippe
Date: Sat Mar 28 12:01:58 2015
New Revision: 15044
Log:
Helgrind optimisation:
* do VTS pruning only if new threads were declared
very dead since the last pruning round.
* when pruning, use the new list of very dead threads:
this decreases the cost of the dichotomic search
in VTS__subtract.
Modified:
trunk/helgrind/libhb_core.c
Modified: trunk/helgrind/libhb_core.c
==============================================================================
--- trunk/helgrind/libhb_core.c (original)
+++ trunk/helgrind/libhb_core.c Sat Mar 28 12:01:58 2015
@@ -1818,16 +1818,18 @@
}
-/* The dead thread (ThrID, actually) table. A thread may only be
+/* The dead thread (ThrID, actually) tables. A thread may only be
listed here if we have been notified thereof by libhb_async_exit.
New entries are added at the end. The order isn't important, but
- the ThrID values must be unique. This table lists the identity of
- all threads that have ever died -- none are ever removed. We keep
- this table so as to be able to prune entries from VTSs. We don't
- actually need to keep the set of threads that have ever died --
+ the ThrID values must be unique.
+ verydead_thread_table_not_pruned lists the identity of the threads
+ that died since the previous round of pruning.
+ Once pruning is done, these ThrID are added in verydead_thread_table.
+ We don't actually need to keep the set of threads that have ever died --
only the threads that have died since the previous round of
pruning. But it's useful for sanity check purposes to keep the
entire set, so we do. */
+static XArray* /* of ThrID */ verydead_thread_table_not_pruned = NULL;
static XArray* /* of ThrID */ verydead_thread_table = NULL;
/* Arbitrary total ordering on ThrIDs. */
@@ -1839,16 +1841,40 @@
return 0;
}
-static void verydead_thread_table_init ( void )
+static void verydead_thread_tables_init ( void )
{
tl_assert(!verydead_thread_table);
+ tl_assert(!verydead_thread_table_not_pruned);
verydead_thread_table
= VG_(newXA)( HG_(zalloc),
"libhb.verydead_thread_table_init.1",
HG_(free), sizeof(ThrID) );
VG_(setCmpFnXA)(verydead_thread_table, cmp__ThrID);
+ verydead_thread_table_not_pruned
+ = VG_(newXA)( HG_(zalloc),
+ "libhb.verydead_thread_table_init.2",
+ HG_(free), sizeof(ThrID) );
+ VG_(setCmpFnXA)(verydead_thread_table_not_pruned, cmp__ThrID);
}
+static void verydead_thread_table_sort_and_check (XArray* thrids)
+{
+ UWord i;
+
+ VG_(sortXA)( thrids );
+ /* Sanity check: check for unique .sts.thr values. */
+ UWord nBT = VG_(sizeXA)( thrids );
+ if (nBT > 0) {
+ ThrID thrid1, thrid2;
+ thrid2 = *(ThrID*)VG_(indexXA)( thrids, 0 );
+ for (i = 1; i < nBT; i++) {
+ thrid1 = thrid2;
+ thrid2 = *(ThrID*)VG_(indexXA)( thrids, i );
+ tl_assert(thrid1 < thrid2);
+ }
+ }
+ /* Ok, so the dead thread table thrids has unique and in-order keys. */
+}
/* A VTS contains .ts, its vector clock, and also .id, a field to hold
a backlink for the caller's convenience. Since we have no idea
@@ -2424,7 +2450,7 @@
ThrID nyu;
nyu = Thr__to_ThrID(thr);
- VG_(addToXA)( verydead_thread_table, &nyu );
+ VG_(addToXA)( verydead_thread_table_not_pruned, &nyu );
/* We can only get here if we're assured that we'll never again
need to look at this thread's ::viR or ::viW. Set them to
@@ -2819,26 +2845,16 @@
quit at this point if it is not to be done. */
if (!do_pruning)
return;
+ /* No need to do pruning if no thread died since the last pruning as
+ no VtsTE can be pruned. */
+ if (VG_(sizeXA)( verydead_thread_table_not_pruned) == 0)
+ return;
/* ---------- BEGIN VTS PRUNING ---------- */
- /* We begin by sorting the backing table on its .thr values, so as
- to (1) check they are unique [else something has gone wrong,
- since it means we must have seen some Thr* exiting more than
- once, which can't happen], and (2) so that we can quickly look
+ /* Sort and check the very dead threads that died since the last pruning.
+ Sorting is used for the check and so that we can quickly look
up the dead-thread entries as we work through the VTSs. */
- VG_(sortXA)( verydead_thread_table );
- /* Sanity check: check for unique .sts.thr values. */
- UWord nBT = VG_(sizeXA)( verydead_thread_table );
- if (nBT > 0) {
- ThrID thrid1, thrid2;
- thrid2 = *(ThrID*)VG_(indexXA)( verydead_thread_table, 0 );
- for (i = 1; i < nBT; i++) {
- thrid1 = thrid2;
- thrid2 = *(ThrID*)VG_(indexXA)( verydead_thread_table, i );
- tl_assert(thrid1 < thrid2);
- }
- }
- /* Ok, so the dead thread table has unique and in-order keys. */
+ verydead_thread_table_sort_and_check (verydead_thread_table_not_pruned);
/* We will run through the old table, and create a new table and
set, at the same time setting the .remap entries in the old
@@ -2897,7 +2913,7 @@
nBeforePruning++;
nSTSsBefore += old_vts->usedTS;
VTS* new_vts = VTS__subtract("libhb.vts_tab__do_GC.new_vts",
- old_vts, verydead_thread_table);
+ old_vts, verydead_thread_table_not_pruned);
tl_assert(new_vts->sizeTS == new_vts->usedTS);
tl_assert(*(ULong*)(&new_vts->ts[new_vts->usedTS])
== 0x0ddC0ffeeBadF00dULL);
@@ -2957,6 +2973,21 @@
} /* for (i = 0; i < nTab; i++) */
+ /* Move very dead thread from verydead_thread_table_not_pruned to
+ verydead_thread_table. Sort and check verydead_thread_table
+ to verify a thread was reported very dead only once. */
+ {
+ UWord nBT = VG_(sizeXA)( verydead_thread_table_not_pruned);
+
+ for (i = 0; i < nBT; i++) {
+ ThrID thrid =
+ *(ThrID*)VG_(indexXA)( verydead_thread_table_not_pruned, i );
+ VG_(addToXA)( verydead_thread_table, &thrid );
+ }
+ verydead_thread_table_sort_and_check (verydead_thread_table);
+ VG_(dropHeadXA) (verydead_thread_table_not_pruned, nBT);
+ }
+
/* At this point, we have:
* the old VTS table, with its .remap entries set,
and with all .vts == NULL.
@@ -3109,11 +3140,6 @@
nAfterPruning, nSTSsAfter / (nAfterPruning ? nAfterPruning : 1)
);
}
- if (0)
- VG_(printf)("VTQ: before pruning %lu (avg sz %lu), "
- "after pruning %lu (avg sz %lu)\n",
- nBeforePruning, nSTSsBefore / nBeforePruning,
- nAfterPruning, nSTSsAfter / nAfterPruning);
/* ---------- END VTS PRUNING ---------- */
}
@@ -6080,7 +6106,7 @@
VTS singleton, tick and join operations. */
temp_max_sized_VTS = VTS__new( "libhb.libhb_init.1", ThrID_MAX_VALID );
temp_max_sized_VTS->id = VtsID_INVALID;
- verydead_thread_table_init();
+ verydead_thread_tables_init();
vts_set_init();
vts_tab_init();
event_map_init();
@@ -6257,6 +6283,13 @@
" exit %d joinedwith %d\n",
live, llexit_and_joinedwith_done,
llexit_done, joinedwith_done);
+ VG_(printf)(" libhb: %d verydead_threads, "
+ "%d verydead_threads_not_pruned\n",
+ (int) VG_(sizeXA)( verydead_thread_table),
+ (int) VG_(sizeXA)( verydead_thread_table_not_pruned));
+ tl_assert (VG_(sizeXA)( verydead_thread_table)
+ + VG_(sizeXA)( verydead_thread_table_not_pruned)
+ == llexit_and_joinedwith_done);
}
VG_(printf)("%s","\n");
From: Matthias S. <zz...@ge...> - 2015-03-28 09:19:15
On 27.03.2015 23:18, Florian Krohm wrote: > > I like the implementation of VKI_STATIC_ASSERT (without the C++ > mumbo-jumbo) better than what I came up with, as it avoids adding a > symbol. So I would use that approach for VEX/VG_STATIC_ASSERT as well. Maybe it is possible to use static_assert (C++11) or _Static_assert (which should be in C11) if the compiler supports it. Only when neither is available would the code fall back to a separate implementation. Regards Matthias
From: <sv...@va...> - 2015-03-28 01:20:10
Author: petarj
Date: Sat Mar 28 01:20:02 2015
New Revision: 3108
Log:
mips64: extract correct immediate value for Cavium SEQI and SNEI
Extract the immediate value from bit field [15:6] instead of [15:0].
This fixes the issue reported in BZ #341997.
Related Valgrind commit - r15043.
Patch by Maran Pakkirisamy.
Modified:
trunk/priv/guest_mips_toIR.c
Modified: trunk/priv/guest_mips_toIR.c
==============================================================================
--- trunk/priv/guest_mips_toIR.c (original)
+++ trunk/priv/guest_mips_toIR.c Sat Mar 28 01:20:02 2015
@@ -2239,7 +2239,10 @@
UChar regRs = get_rs(theInstr);
UChar regRt = get_rt(theInstr);
UChar regRd = get_rd(theInstr);
- UInt imm = get_imm(theInstr);
+ /* MIPS trap instructions extract code from theInstr[15:6].
+ Cavium OCTEON instructions SNEI, SEQI extract immediate operands
+ from the same bit field [15:6]. */
+ UInt imm = get_code(theInstr);
UChar lenM1 = get_msb(theInstr);
UChar p = get_lsb(theInstr);
IRType ty = mode64? Ity_I64 : Ity_I32;
From: <sv...@va...> - 2015-03-28 00:59:40
Author: petarj
Date: Sat Mar 28 00:59:32 2015
New Revision: 15043
Log:
mips64: extend the test with new cases for Cavium SEQI and SNEI
Extend the test to introduce cases for SEQI and SNEI when the immediate
is equal to the content of GPR rs. Minor code style changes added.
Patch by Maran Pakkirisamy.
Related issue - BZ #341997.
Modified:
trunk/none/tests/mips64/cvm_ins.c
trunk/none/tests/mips64/cvm_ins.stdout.exp
Modified: trunk/none/tests/mips64/cvm_ins.c
==============================================================================
--- trunk/none/tests/mips64/cvm_ins.c (original)
+++ trunk/none/tests/mips64/cvm_ins.c Sat Mar 28 00:59:32 2015
@@ -98,7 +98,7 @@
printf("%s :: rd 0x%lx rs 0x%x, rt 0x%x\n", \
instruction, out, RSVal, RTval); \
}
-#define TESTINST3(instruction, RSVal, RT, RS,imm) \
+#define TESTINST3(instruction, RSVal, RT, RS, imm) \
{ \
unsigned long out; \
__asm__ volatile( \
@@ -127,35 +127,35 @@
switch(op) {
case EXTS: { /* To extract and sign-extend a bit field that starts
from the lower 32 bits of a register. */
- for(i = 0; i <= 255; i+=4)
+ for (i = 0; i <= 255; i+=4)
TESTINST1("exts $t1, $t2, 1, 7", reg_val[i], t1, t2, 1, 7);
break;
}
case EXTS32: { /* To extract and sign-extend a bit field that starts
from the upper 32 bits of a register. */
- for(i = 0; i <= 255; i+=4)
+ for (i = 0; i <= 255; i+=4)
TESTINST1("exts32 $t1, $t2, 1 , 7", reg_val[i], t1, t2, 1, 7);
break;
}
case CINS:{ /* To insert a bit field that starts in the lower 32 bits
of a register. */
- for(i = 0; i <= 255; i+=4)
+ for (i = 0; i <= 255; i+=4)
TESTINST1("cins $t1, $t2, 2 , 9", reg_val[i], t1, t2, 2, 9);
break;
}
case CINS32: { /* To insert a bit field that starts in the upper
32 bits of a register. */
- for(i =0; i <= 255; i+=4)
+ for (i =0; i <= 255; i+=4)
TESTINST1("cins32 $t1, $t2, 2 , 9", reg_val[i], t1, t2, 2, 9);
break;
}
case SEQ: { /* To record the result of an equals comparison. */
- for(i = 0; i <= 255; i+=4)
- for(j = 0; j <= 255; j+=4)
+ for (i = 0; i <= 255; i+=4)
+ for (j = 0; j <= 255; j+=4)
TESTINST2("seq $t1, $t2 ,$t3 ", reg_val[i], reg_val[j],
t1, t2, t3);
break;
@@ -163,14 +163,18 @@
case SEQI: { /* To record the result of an equals comparison
with a constant. */
- for(i = 0; i <= 255; i+=4)
+ /* First, make sure at least one testcase has source value (rs)
+ that equals the immediate value to validate the true case. */
+ const int immvalue = 9;
+ TESTINST3("seqi $t1, $t2 ,9 ", immvalue, t1, t2, immvalue);
+ for (i = 0; i <= 255; i+=4)
TESTINST3("seqi $t1, $t2 ,9 ", reg_val[i], t1, t2, 9);
break;
}
case SNE: { /* To record the result of a not equals comparison. */
- for(i = 0; i <= 255; i+=4)
- for(j = 0; j<= 255; j+=4)
+ for (i = 0; i <= 255; i+=4)
+ for (j = 0; j<= 255; j+=4)
TESTINST2("sne $t1, $t2 ,$t3 ", reg_val[i], reg_val[j],
t1, t2, t3);
break;
@@ -178,15 +182,19 @@
case SNEI: { /* To record the result of a not equals comparison
with a constant. */
- for(i = 0; i <= 255; i+=1)
+ /* First, make sure at least one testcase has source value (rs)
+ that equals the immediate value to validate the false case. */
+ const int immvalue = 9;
+ TESTINST3("snei $t1, $t2 ,9 ", immvalue, t1, t2, immvalue);
+ for (i = 0; i <= 255; i+=1)
TESTINST3("snei $t1, $t2 ,9 ", reg_val[i], t1, t2, 9);
break;
}
case DMUL: { /* To multiply 64-bit signed integers and
write the result to a GPR. */
- for(i = 0; i <= 255; i+=4)
- for(j = 0; j <= 255; j+=8)
+ for (i = 0; i <= 255; i+=4)
+ for (j = 0; j <= 255; j+=8)
TESTINST2("dmul $t1, $t2 ,$t3 ", reg_val[i], reg_val[j],
t1, t2, t3);
break;
Modified: trunk/none/tests/mips64/cvm_ins.stdout.exp
==============================================================================
--- trunk/none/tests/mips64/cvm_ins.stdout.exp (original)
+++ trunk/none/tests/mips64/cvm_ins.stdout.exp Sat Mar 28 00:59:32 2015
@@ -254,6 +254,7 @@
cins32 $t1, $t2, 2 , 9 :: rt 0xf5400000000 rs 0x9abc8bd5, p 0x00000002, lenm1 0x00000009
cins32 $t1, $t2, 2 , 9 :: rt 0x2c400000000 rs 0xafb010b1, p 0x00000002, lenm1 0x00000009
cins32 $t1, $t2, 2 , 9 :: rt 0x9b400000000 rs 0xbcb4666d, p 0x00000002, lenm1 0x00000009
+snei $t1, $t2 ,9 :: rt 0x0 rs 0x9,imm 0x00000009
snei $t1, $t2 ,9 :: rt 0x1 rs 0x0,imm 0x00000009
snei $t1, $t2 ,9 :: rt 0x1 rs 0x4c11db7,imm 0x00000009
snei $t1, $t2 ,9 :: rt 0x1 rs 0x9823b6e,imm 0x00000009
@@ -4606,6 +4607,7 @@
sne $t1, $t2 ,$t3 :: rd 0x1 rs 0xbcb4666d, rt 0x9abc8bd5
sne $t1, $t2 ,$t3 :: rd 0x1 rs 0xbcb4666d, rt 0xafb010b1
sne $t1, $t2 ,$t3 :: rd 0x0 rs 0xbcb4666d, rt 0xbcb4666d
+seqi $t1, $t2 ,9 :: rt 0x1 rs 0x9,imm 0x00000009
seqi $t1, $t2 ,9 :: rt 0x0 rs 0x0,imm 0x00000009
seqi $t1, $t2 ,9 :: rt 0x0 rs 0x130476dc,imm 0x00000009
seqi $t1, $t2 ,9 :: rt 0x0 rs 0x2608edb8,imm 0x00000009
From: Florian K. <fl...@ei...> - 2015-03-27 22:19:05
On 27.03.2015 18:11, Julian Seward wrote: > > /me thinks .. Uh huh .. Florian has been doing home-grown static > asserts :-) > Just this one :) > Except, there are dozens of places where I would like to have static asserts > generally available. Insanely enough, there is, in include/vki/vki-linux.h, > a definition of VKI_STATIC_ASSERT. Maybe we should pull it out, rename it > VG_STATIC_ASSERT, and make it generally available. What do you think? I didn't know about VKI_STATIC_ASSERT. include/vki wasn't a likely place to look for that sort of thing. VKI_STATIC_ASSERT is used inside vki-linux.h, so pulling it out is probably not ideal. It would just create a dependency, as include/vki/*.h does not depend on valgrind headers right now. It's best to just leave it alone. Now, we want static asserts in VEX and valgrind. So we add VEX_STATIC_ASSERT to main_util.h and VG_STATIC_ASSERT to pub_tool_basics.h, like we do with LIKELY and friends. I like the implementation of VKI_STATIC_ASSERT (without the C++ mumbo-jumbo) better than what I came up with, as it avoids adding a symbol. So I would use that approach for VEX/VG_STATIC_ASSERT as well. Florian
From: Julian S. <js...@ac...> - 2015-03-27 17:11:22
Making all in gdbserver_tests
Making all in memcheck/tests/vbit-test
irops.c:1053:12: error: size of array 'ensure_complete' is negative
extern int ensure_complete[
^
Makefile:720: recipe for target 'vbit_test-irops.o' failed
make[2]: *** [vbit_test-irops.o] Error 1
/me thinks .. Uh huh .. Florian has been doing home-grown static
asserts :-)
This is good.
Except, there are dozens of places where I would like to have static asserts
generally available. Insanely enough, there is, in include/vki/vki-linux.h,
a definition of VKI_STATIC_ASSERT. Maybe we should pull it out, rename it
VG_STATIC_ASSERT, and make it generally available. What do you think?
J
From: Crestez D. L. <cdl...@gm...> - 2015-03-27 13:08:22
On 03/27/2015 06:57 AM, Maran Pakkirisamy wrote: > > > On 03/26/2015 06:58 PM, Crestez Dan Leonard wrote: >> If you think that this sort of patch is acceptable the next step would be to post a bug in the tracker, right? > There is already a patch posted at > https://bugs.kde.org/show_bug.cgi?id=328670 to fix this issue. Please > check if the patch helps you. > Hello, I tested that patch and it works too. The patch is probably faster because it only assigns to k0 at specific points instead of on every emulated syscall. Overwriting after every syscall is closer to what the kernel actually does, but it should only matter if userspace plays with k0 on its own. I see that that patch was retracted and the bug closed as "wontfix". It's not clear why it's OK to include support for Cavium proprietary instructions but not proprietary kernel features, especially since it's such a very small patch. I believe the patch should be applied. Having valgrind work (almost) out of the box on Octeon seems like a good thing. Regards, Leonard
From: <sv...@va...> - 2015-03-27 08:47:30
Author: florian
Date: Fri Mar 27 08:47:22 2015
New Revision: 15042
Log:
Change the minimum allowable value of aspacem_minAddr to
be VKI_PAGE_SIZE. That follows from the requirement that
the address ought to be page aligned and > 0.
Modified:
trunk/coregrind/m_aspacemgr/aspacemgr-linux.c
Modified: trunk/coregrind/m_aspacemgr/aspacemgr-linux.c
==============================================================================
--- trunk/coregrind/m_aspacemgr/aspacemgr-linux.c (original)
+++ trunk/coregrind/m_aspacemgr/aspacemgr-linux.c Fri Mar 27 08:47:22 2015
@@ -1606,7 +1606,7 @@
Bool
VG_(am_is_valid_for_aspacem_minAddr)( Addr addr, const HChar **errmsg )
{
- const Addr min = 0x1000; // 1 page FIXME: VKI_PAGE_SIZE ?
+ const Addr min = VKI_PAGE_SIZE;
#if VG_WORDSIZE == 4
const Addr max = 0x40000000; // 1Gb
#else
From: Maran P. <mpa...@ca...> - 2015-03-27 05:12:06
On 03/26/2015 06:58 PM, Crestez Dan Leonard wrote: > If you think that this sort of patch is acceptable the next step would be to post a bug in the tracker, right? There is already a patch posted at https://bugs.kde.org/show_bug.cgi?id=328670 to fix this issue. Please check if the patch helps you. -- Maran Pakkirisamy |