From: Tom H. <to...@co...> - 2015-08-28 15:37:52
|
On 28/08/15 15:06, John OSullivan wrote:

> I have a problem running valgrind on my embedded system; you can see the
> detail below, but essentially the problem is that Valgrind fails with:
>
>     FATAL: aspacem assertion failed
>
> The problem is caused by Valgrind detecting an inode number of zero for libc:
>
>     b6dae000-b6eea000 r-xp 00000000 00:00 8937 /lib/libc-2.13.so
>                                     ^^^^^
>                                     dev & ino are always zero
>
> My system boots from NAND and copies the file system to RAM, so the file
> system runs from RAM. As far as I can determine, when running from RAM the
> device and inode numbers are always going to be zero.
>
> I tried a similar exercise with the Raspberry Pi: if the Pi's file system
> resides in RAM (volatile), then the device and inode numbers are always
> zero; if I put the Pi's file system on the SD card (non-volatile), then I
> get non-zero device and inode numbers for the relevant sections.
>
> My question is: how am I going to use Valgrind on a RAM-based file system
> when the device numbers are going to be zero for libc? Is there a
> configuration or setting that I am missing?

Surely the filesystem is still on a device, even if that device is a ramfs
device? And that device should have major and minor numbers?

Unfortunately, Unix semantics mean that you always have to have unique device
numbers; otherwise there is no way to tell whether two identical inodes refer
to the same file or not.

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
|
|
From: John O. <joh...@cl...> - 2015-08-28 14:25:10
|
Hi,
I have a problem running valgrind on my embedded system; you can see the
detail below, but essentially the problem is that Valgrind fails with:
FATAL: aspacem assertion failed
The problem is caused by Valgrind detecting an inode number of zero for libc:
b6dae000-b6eea000 r-xp 00000000 00:00 8937 /lib/libc-2.13.so
^^^^^
dev & ino are always zero
My system boots from NAND and copies the file system to RAM, so the file
system runs from RAM. As far as I can determine, when running from RAM the
device and inode numbers are always going to be zero.
I tried a similar exercise with the Raspberry Pi: if the Pi's file system
resides in RAM (volatile), then the device and inode numbers are always zero;
if I put the Pi's file system on the SD card (non-volatile), then I get
non-zero device and inode numbers for the relevant sections.
My question is: how am I going to use Valgrind on a RAM-based file system
when the device numbers are going to be zero for libc? Is there a
configuration or setting that I am missing?
Regards
John O'Sullivan
device - If the region was mapped from a file, this is the major and minor
device number (in hex) where the file lives.
inode - If the region was mapped from a file, this is the file number
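As a quick cross-check of those two fields, here is a minimal sketch (plain
stat(2); the libc path is only an example taken from the maps output above
and may differ on your target) that prints what the kernel reports for a file:

#include <sys/stat.h>
#include <cstdio>

int main(int argc, char** argv) {
    // Print the same device/inode pair that shows up in /proc/<pid>/maps.
    const char* path = (argc > 1) ? argv[1] : "/lib/libc-2.13.so";
    struct stat st;
    if (stat(path, &st) == 0)
        std::printf("%s: dev=0x%llx ino=%llu\n", path,
                    (unsigned long long)st.st_dev,
                    (unsigned long long)st.st_ino);
    return 0;
}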
-----Original Message-----
From: John OSullivan [mailto:joh...@cl...]
Sent: 15 April 2015 15:31
To: js...@ac...; 'Florian Krohm'; val...@li...
Subject: Re: [Valgrind-users] Valgrind: FATAL: aspacem assertion failed:
Hi Guys,
Thanks for the feedback, I will investigate further regarding the file
system, the system is built using Buildroot, so I will poke around there too
see if I can get to the bottom of it.
Regards
-----Original Message-----
From: Julian Seward [mailto:js...@ac...]
Sent: 15 April 2015 15:13
To: Florian Krohm; John OSullivan; val...@li...
Subject: Re: [Valgrind-users] Valgrind: FATAL: aspacem assertion failed:
On 15/04/15 16:03, Florian Krohm wrote:
> This isn't sane, because for an ANON segment we should have d=0 and
> i=0 and o=0.
> Clearly, this is not an ANON segment but a file segment.
>
> I suggest to change the condition on line 3248 in aspacemgr-linux.c
> (refering to 3.10.1 sources) to if (1) and rerun. That way we can see
> the contents of /proc/self/maps and can deduce why d == 0 (it should
> be != 0).
Ah, good point.
So, d is the device number, right? If that's so, then the problem is likely
because memcheck-arm-linux is on some unusual, hacky, etc, filesystem, and
the device numbers are zero, when they shouldn't be.
And in fact, you can see that in the /proc/self/maps output that John showed
in his first message:
00008000-00106000 r-xp 00000000 00:00 8773 /bin/busybox
0010e000-0010f000 rw-p 000fe000 00:00 8773 /bin/busybox
0010f000-00111000 rw-p 00000000 00:00 0 [heap]
b6dae000-b6eea000 r-xp 00000000 00:00 8937 /lib/libc-2.13.so
^^^^^
dev & ino are always zero
So John, what's with the filesystem that you installed Valgrind on?
J
|
|
From: Josef W. <Jos...@gm...> - 2015-08-28 07:23:13
|
On 27.08.2015 22:53, Philippe Waroquiers wrote:

> E.g. I think that stop instrumentation implies to flush the translation
> cache. But the currently executed block will not be really flushed till
> the block is exited.

To my understanding, every client request forces a block to end. It is fine
to flush the translation cache in a client request handler, as we are not
running within the context of a translated block anymore.

> So, what is measured is not very clear, maybe/probably measuring
> a part of the TOGGLE call. At least, when using --dump-instr=yes
> and then using kcachegrind, the executed instructions are associated
> with the TOGGLE macro.

Good to know. Should the same happen with the START/STOP_INSTRUMENTATION
macros?

Josef
|
|
From: Josef W. <Jos...@gm...> - 2015-08-28 07:03:14
|
Hi Geoff,
On 27.08.2015 08:42, gal...@nc... wrote:
> I would like to count instructions for a specific part of my code.
> ...
This suggests we should document the difference between
START/STOP_INSTRUMENTATION and TOGGLE_COLLECT better. Anyway.
> for (int i = 1; i <= 1000; ++i) {
> CALLGRIND_START_INSTRUMENTATION;
> n += i;
> CALLGRIND_STOP_INSTRUMENTATION;
> }
With macros in C, the compiler can reorder stuff, so you may end up with
something like
> for (int i = 1; i <= 1000; ++i) {
> n += i;
> CALLGRIND_START_INSTRUMENTATION;
> CALLGRIND_STOP_INSTRUMENTATION;
> }
or even worse, gcc may have calculated the end result of n at compile
time, and
optimized the "n += i" away.
Further, the use of the macros may prohibit some optimizations such as loop
unrolling, change register spilling, and so on.
These comments are also true when using TOGGLE_COLLECT.
If you are curious, you can check resulting machine code with "objdump
-S ...".
In general, I would suggest never switching measuring on/off at this fine a
granularity (i.e. here, in the inner loop), but at a higher level.
For your code, just do
> CALLGRIND_START_INSTRUMENTATION;
> for (int i = 1; i <= 1000; ++i) {
> n += i;
> }
> CALLGRIND_STOP_INSTRUMENTATION;
It should not really matter much whether you use these macros or
TOGGLE_COLLECT.
As the use of macros may change the resulting code subtly, you could
rearrange your code such
that you are crossing function borders at the time you want to switch
measuring on/off.
And then, use "--toggle-collect=<func>".
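For what it is worth, here is a complete sketch of that arrangement (assuming
g++ and the client requests from <valgrind/callgrind.h>; the volatile
qualifier is only there to keep the optimiser from folding the loop into a
constant, as discussed above):

#include <iostream>
#include <valgrind/callgrind.h>

int main() {
    volatile int n = 0;               // volatile: stops g++ folding the loop into a constant
    CALLGRIND_START_INSTRUMENTATION;  // instrument the whole region, not each iteration
    for (int i = 1; i <= 1000; ++i)
        n += i;
    CALLGRIND_STOP_INSTRUMENTATION;
    std::cout << "n: " << n << std::endl;
    CALLGRIND_DUMP_STATS;
    return 0;
}

Built with "g++ -O3" and run with "valgrind --tool=callgrind
--instr-atstart=no", this measures the whole loop inside a single
instrumented region.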
Your performance issues with START/STOP_INSTRUMENTATION surprised me. It
should always be faster to run in "un-instrumented" mode.
If you do not use any macros at all, what is the runtime of your program
with vs. without "--instr-atstart=no"?
Josef
|
|
From: Philippe W. <phi...@sk...> - 2015-08-27 20:53:53
|
I tried the below (the toggle version, translated to C) on x86, with gcc 4.9.2.
Looking at the printf call, I see:

   0x080483e2 <+178>:   push   $0x7a314
   0x080483e7 <+183>:   push   $0x80485d0
   0x080483ec <+188>:   call   0x80482f0 <printf@plt>

So you see that 0x7a314 (500500 in decimal) is pushed. In other words, the gcc
optimiser has detected what you are doing and has computed the result at
compilation time :). The loop is then only executed to do the CALLGRIND
start/stop. This is somewhat confirmed by gdb:

   (gdb) list 10,10
   10              n += i;
   (gdb) info line 10
   Line 10 of "toggle.c" is at address 0x8048391 <main+97> but contains no code.
   (gdb)

I guess then that the difference between the toggle and start/stop versions
depends on the very fine details of what/when exactly the start/stop and/or
the toggle client requests are taking effect.

E.g. I think that stop instrumentation implies to flush the translation cache.
But the currently executed block will not be really flushed till the block is
exited. I think that at least START/STOP is not designed/not usable to measure
ultra-small pieces of code, as the code is only re-instrumented when 'exiting
from the current executing block'. (For more info, see the long comment in
coregrind/m_transtab.c around line 224.)

Toggle collect, on the other hand, might work better, as this does not have to
flush the translation cache: the JITted code is instrumented, and the callgrind
'helpers' will be called by the JITted code but will just do nothing if collect
is off. Then why is the number of instructions 'big'? Well, as you have seen,
the addition itself is 0 instructions :). So, what is measured is not very
clear, maybe/probably measuring a part of the TOGGLE call. At least, when using
--dump-instr=yes and then using kcachegrind, the executed instructions are
associated with the TOGGLE macro.

In summary: do not use STOP/START for very small pieces of code such as loop
bodies; it is very unlikely to work precisely/correctly. TOGGLE collect around
very small pieces of code in a loop is likely to measure (partially) the TOGGLE
call itself (in particular if the gcc optimiser has just removed the 'real'
loop code).

Philippe

On Thu, 2015-08-27 at 06:42 +0000, gal...@nc... wrote:
> I would like to count instructions for a specific part of my code.
>
> I found a valgrind-users thread, "A great trick for counting cycles using
> callgrind - but what about massif?"
> (http://sourceforge.net/p/valgrind/mailman/message/33476105/), that gives a
> way to use Callgrind to do this using the CALLGRIND_START_INSTRUMENTATION
> and CALLGRIND_STOP_INSTRUMENTATION macros. This seemed to work but was
> taking way too long on "real world tests" to be of practical use.
>
> So I did some more searching and found a Stack Overflow question,
> "Callgrind: Profile a specific part of my code"
> (http://stackoverflow.com/questions/13688185/callgrind-profile-a-specific-part-of-my-code),
> that addresses the performance issue. It suggests using the
> CALLGRIND_TOGGLE_COLLECT macro instead of the CALLGRIND_START_INSTRUMENTATION
> / CALLGRIND_STOP_INSTRUMENTATION macros. This worked in that performance was
> much better. However, the two approaches give significantly different
> instruction counts, which leads me to wonder if either is reporting a
> "correct" instruction count.
>
> I created two simple test programs illustrating the difference. Here's a
> simple program using the CALLGRIND_START_INSTRUMENTATION /
> CALLGRIND_STOP_INSTRUMENTATION approach:
>
>     #include <cstdlib>
>     #include <iostream>
>     #include <string>
>     #include <valgrind/callgrind.h>
>
>     int main(int argc, char* argv[]) {
>         int n = 0;
>         for (int i = 1; i <= 1000; ++i) {
>             CALLGRIND_START_INSTRUMENTATION;
>             n += i;
>             CALLGRIND_STOP_INSTRUMENTATION;
>         }
>         std::cout << "n: " << n << std::endl;
>         CALLGRIND_DUMP_STATS;
>         exit(0);
>     }
>
> I compiled this using g++ 4.8.2 on openSUSE 13.2 x86-64 as follows:
>
>     g++ -O3 -o Test1 Test1.cpp
>
> I then ran Callgrind as follows:
>
>     valgrind --tool=callgrind --instr-atstart=no Test1
>
> which gave
>
>     ==7371== Callgrind, a call-graph generating cache profiler
>     ==7371== Copyright (C) 2002-2013, and GNU GPL'd, by Josef Weidendorfer et al.
>     ==7371== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
>     ==7371== Command: Test1
>     ==7371==
>     ==7371== For interactive control, run 'callgrind_control -h'.
>     n: 500500
>     ==7371==
>     ==7371== Events    : Ir
>     ==7371== Collected : 0
>     ==7371==
>     ==7371== I refs:      0
>
> The question is why is the instruction count 0? (I do get a nonzero
> instruction count when profiling my real code using the
> CALLGRIND_START_INSTRUMENTATION / CALLGRIND_STOP_INSTRUMENTATION approach,
> but the reported instruction count is significantly less than the
> instruction count reported using the CALLGRIND_TOGGLE_COLLECT approach.)
>
> Here's the program using the CALLGRIND_TOGGLE_COLLECT approach:
>
>     #include <cstdlib>
>     #include <iostream>
>     #include <string>
>     #include <valgrind/callgrind.h>
>
>     int main(int argc, char* argv[]) {
>         int n = 0;
>         for (int i = 1; i <= 10000; ++i) {
>             CALLGRIND_TOGGLE_COLLECT;
>             n += i;
>             CALLGRIND_TOGGLE_COLLECT;
>         }
>         std::cout << "n: " << n << std::endl;
>         CALLGRIND_DUMP_STATS;
>         exit(0);
>     }
>
> I compiled the program as follows:
>
>     g++ -O3 -o Test2 Test2.cpp
>
> and ran Callgrind as follows:
>
>     valgrind --tool=callgrind --collect-atstart=no Test2
>
> which gave
>
>     ==7378== Callgrind, a call-graph generating cache profiler
>     ==7378== Copyright (C) 2002-2013, and GNU GPL'd, by Josef Weidendorfer et al.
>     ==7378== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
>     ==7378== Command: Test2
>     ==7378==
>     ==7378== For interactive control, run 'callgrind_control -h'.
>     n: 50005000
>     ==7378==
>     ==7378== Events    : Ir
>     ==7378== Collected : 90008
>     ==7378==
>     ==7378== I refs:      90,008
>
> The reported instruction count is 90008, which amounts to an amortized
> instruction count of approximately nine instructions for each "n += i;".
> Nine instructions seems high for a single integer addition even on an
> x86-64 CPU. Is the reported instruction count really a "correct" measure of
> the "n += i;" statement?
>
> Is there a better way to profile instruction count for a specific part of
> code?
>
> Thanks,
> Geoff Alexander
|
|
From: <gal...@nc...> - 2015-08-27 06:42:20
|
I would like to count instructions for a specific part of my code.

I found a valgrind-users thread, "A great trick for counting cycles using
callgrind - but what about massif?"
(http://sourceforge.net/p/valgrind/mailman/message/33476105/), that gives a
way to use Callgrind to do this using the CALLGRIND_START_INSTRUMENTATION and
CALLGRIND_STOP_INSTRUMENTATION macros. This seemed to work but was taking way
too long on "real world tests" to be of practical use.

So I did some more searching and found a Stack Overflow question, "Callgrind:
Profile a specific part of my code"
(http://stackoverflow.com/questions/13688185/callgrind-profile-a-specific-part-of-my-code),
that addresses the performance issue. It suggests using the
CALLGRIND_TOGGLE_COLLECT macro instead of the CALLGRIND_START_INSTRUMENTATION /
CALLGRIND_STOP_INSTRUMENTATION macros. This worked in that performance was much
better. However, the two approaches give significantly different instruction
counts, which leads me to wonder if either is reporting a "correct" instruction
count.

I created two simple test programs illustrating the difference. Here's a
simple program using the CALLGRIND_START_INSTRUMENTATION /
CALLGRIND_STOP_INSTRUMENTATION approach:

#include <cstdlib>
#include <iostream>
#include <string>
#include <valgrind/callgrind.h>

int main(int argc, char* argv[]) {
    int n = 0;
    for (int i = 1; i <= 1000; ++i) {
        CALLGRIND_START_INSTRUMENTATION;
        n += i;
        CALLGRIND_STOP_INSTRUMENTATION;
    }
    std::cout << "n: " << n << std::endl;
    CALLGRIND_DUMP_STATS;
    exit(0);
}

I compiled this using g++ 4.8.2 on openSUSE 13.2 x86-64 as follows:

g++ -O3 -o Test1 Test1.cpp

I then ran Callgrind as follows:

valgrind --tool=callgrind --instr-atstart=no Test1

which gave

==7371== Callgrind, a call-graph generating cache profiler
==7371== Copyright (C) 2002-2013, and GNU GPL'd, by Josef Weidendorfer et al.
==7371== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
==7371== Command: Test1
==7371==
==7371== For interactive control, run 'callgrind_control -h'.
n: 500500
==7371==
==7371== Events    : Ir
==7371== Collected : 0
==7371==
==7371== I refs:      0

The question is why is the instruction count 0? (I do get a nonzero
instruction count when profiling my real code using the
CALLGRIND_START_INSTRUMENTATION / CALLGRIND_STOP_INSTRUMENTATION approach, but
the reported instruction count is significantly less than the instruction
count reported using the CALLGRIND_TOGGLE_COLLECT approach.)

Here's the program using the CALLGRIND_TOGGLE_COLLECT approach:

#include <cstdlib>
#include <iostream>
#include <string>
#include <valgrind/callgrind.h>

int main(int argc, char* argv[]) {
    int n = 0;
    for (int i = 1; i <= 10000; ++i) {
        CALLGRIND_TOGGLE_COLLECT;
        n += i;
        CALLGRIND_TOGGLE_COLLECT;
    }
    std::cout << "n: " << n << std::endl;
    CALLGRIND_DUMP_STATS;
    exit(0);
}

I compiled the program as follows:

g++ -O3 -o Test2 Test2.cpp

and ran Callgrind as follows:

valgrind --tool=callgrind --collect-atstart=no Test2

which gave

==7378== Callgrind, a call-graph generating cache profiler
==7378== Copyright (C) 2002-2013, and GNU GPL'd, by Josef Weidendorfer et al.
==7378== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
==7378== Command: Test2
==7378==
==7378== For interactive control, run 'callgrind_control -h'.
n: 50005000
==7378==
==7378== Events    : Ir
==7378== Collected : 90008
==7378==
==7378== I refs:      90,008

The reported instruction count is 90008, which amounts to an amortized
instruction count of approximately nine instructions for each "n += i;". Nine
instructions seems high for a single integer addition even on an x86-64 CPU.
Is the reported instruction count really a "correct" measure of the "n += i;"
statement?

Is there a better way to profile instruction count for a specific part of
code?

Thanks,
Geoff Alexander
|
|
From: Milian W. <ma...@mi...> - 2015-08-26 16:18:56
|
Hey all,

What do you think about handling VALGRIND_MEMPOOL_ALLOC/FREE like normal
allocations in massif?

This works nicely for pool allocators that use mmap directly, instead of
using malloc. For pools that use malloc, it will break, as it will then in
the worst case report duplicate memory consumption (once for the block
allocation, then again for all the individual allocations within that block).
Still, I think this would be a good idea to have, at least optionally behind
an off-by-default CLI switch.

I have tested it locally; it is trivial to patch Valgrind's massif to do
that, and I'd be happy to spend more time on making it configurable if
there's interest.

What do you think?

--
Milian Wolff
ma...@mi...
http://milianw.de
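For context, a rough sketch of the kind of mmap-backed pool this would apply
to, using the existing client requests from <valgrind/valgrind.h> (the pool
itself is made up for illustration; error handling omitted):

#include <sys/mman.h>
#include <valgrind/valgrind.h>

// Illustrative bump allocator whose backing block comes from mmap, not malloc.
struct Pool {
    char*  base;
    size_t used;
    size_t capacity;
};

static Pool pool_create(size_t capacity) {
    Pool p;
    p.base = static_cast<char*>(mmap(nullptr, capacity, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    p.used = 0;
    p.capacity = capacity;
    VALGRIND_CREATE_MEMPOOL(p.base, /*rzB=*/0, /*is_zeroed=*/0);
    return p;
}

static void* pool_alloc(Pool& p, size_t n) {
    void* chunk = p.base + p.used;
    p.used += n;
    VALGRIND_MEMPOOL_ALLOC(p.base, chunk, n);  // describe the chunk to the tool
    return chunk;
}

static void pool_free(Pool& p, void* chunk) {
    // The bump allocator does not reuse space; only the tool-side
    // bookkeeping for this chunk is updated here.
    VALGRIND_MEMPOOL_FREE(p.base, chunk);
}

With the proposed change, the individual chunks reported via MEMPOOL_ALLOC
would show up in the massif profile for a pool like this, rather than nothing
at all.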
|
From: kinshuk <kin...@gm...> - 2015-08-26 04:40:13
|
Hi,

1. Is it possible to execute Valgrind on a forked process and not on the
   parent process that created it?

2. When I run it on the parent process with the option --trace-children=yes,
   I get output like this:

==27734== Memcheck, a memory error detector
==27734== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==27734== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==27734== Command: ./itcp
==27734== Parent PID: 18155
==27734==
==27734== Warning: invalid file descriptor -1 in syscall close()
==27734== Warning: invalid file descriptor -1 in syscall close()

Thanks,
Kinshuk
|
|
From: Michael K. <mtk...@gm...> - 2015-08-24 14:09:11
|
Is anyone running the following combination?

Mac OS X 10.9.4 + Android NDK r6 (on a Snapdragon 800) + Valgrind 3.10.1

When I try to ./configure, I get:

checking for the kernel version... unsupported (13.3.0)
configure: error: Valgrind works on kernels 2.4, 2.6

I am following the instructions on:
http://valgrind.org/docs/manual/dist.readme-android.html

I have also tried Frederick Germain's build on git, with the same results.

Any advice would be welcomed.

-Michael
|
|
From: Ivan S. J. <isj...@i1...> - 2015-08-24 09:37:58
|
(x86, gcc, valgrind v3.10.0)

I have a C program that infrequently coredumps due to a jump to an invalid
address. I have been running it with valgrind for the past 3 months trying to
track down this error, but the results have not been helpful in the 10
coredumps so far, because there are no stack backtraces when that jump occurs:

==20897== Jump to the invalid address stated on the next line
==20897==    at 0x810CFFFF: ???
==20897==  Address 0x810cffff is not stack'd, malloc'd or (recently) free'd

==25919== Jump to the invalid address stated on the next line
==25919==    at 0xBA0CFFFF: ???
==25919==  Address 0xba0cffff is not stack'd, malloc'd or (recently) free'd

There are fine stack backtraces for the other errors (conditional jumps
mostly). The program is compiled with gcc -m32 -O0 -ggdb, and I have verified
that the compiler correctly maintains the EBP register.

The program uses a large 3rd-party library that uses callbacks in a lot of
places, so my hunch is that a memory overwrite destroys a function pointer and
that causes the invalid jump. But I need the stack backtrace to at least
narrow it down.

Any suggestions?
|
|
From: xiaofeng <wa...@gm...> - 2015-08-23 14:54:41
|
Hi there,

When I write C++ code like below:

char *tmp = new char[10];
strcpy(tmp, "foo");
delete[] tmp;

Valgrind complains with "Conditional jump or move depends on uninitialised
value(s)" errors. But the below is OK:

char *tmp = new char[10]();

Is there anything wrong? Should that be an error?

valgrind --track-origins=yes --leak-check=full --show-reachable=yes
--log-file=log ./a.out

valgrind-3.10.0
gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC)

--
xiaofeng
--
gpg key fingerprint: 2048R/5E63005B
C84F 671F 70B7 7330 4726 5EC8 02BC CBA2 5E63 005B
--
trans-zh_cn mailing list
tra...@li...
https://admin.fedoraproject.org/mailman/listinfo/trans-zh_cn
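An aside that may help interpret this: a small sketch using the
VALGRIND_CHECK_MEM_IS_DEFINED client request from <valgrind/memcheck.h> to
make the definedness of each buffer explicit. Memcheck itself normally only
complains at the point where the bytes that strcpy did not write are actually
read:

#include <cstring>
#include <valgrind/memcheck.h>

int main() {
    char *tmp = new char[10];                   // all 10 bytes start out undefined
    std::strcpy(tmp, "foo");                    // bytes 0..3 ("foo\0") become defined
    VALGRIND_CHECK_MEM_IS_DEFINED(tmp, 10);     // Memcheck reports the 6 still-undefined bytes
    delete[] tmp;

    char *zeroed = new char[10]();              // value-initialisation writes all 10 bytes
    VALGRIND_CHECK_MEM_IS_DEFINED(zeroed, 10);  // no report
    delete[] zeroed;
    return 0;
}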
|
From: Florian K. <fl...@ei...> - 2015-08-20 21:33:54
|
On 20.08.2015 19:12, Mike McLaughlin wrote:

> I run configure like so:
>
>     ../../valgrind-3.10.1/configure --prefix /usr/local
>
> And get the following error message:
>
>     checking the GLIBC_VERSION version... unsupported version 2.21
>     configure: error: Valgrind requires glibc version 2.2 - 2.19.
>
> Is there any way to build with my existing glibc 2.21? It seems like I'm so
> close.

This was fixed a while ago in the development version of valgrind and will be
in the next release. If you cannot wait that long, check out the code
repository. How to do that and how to build valgrind is described here:

http://valgrind.org/downloads/repository.html

Florian
|
|
From: John R. <jr...@bi...> - 2015-08-20 20:07:06
|
> ../../valgrind-3.10.1/configure --prefix /usr/local
>
> And get the following error message:
>
> checking the GLIBC_VERSION version... unsupported version 2.21
> configure: error: Valgrind requires glibc version 2.2 - 2.19.
>
> Is there any way to build with my existing glibc 2.21? It seems like I'm so
> close.
>
> I'm running Ubuntu 15.01.

In the top-level file 'configure', find the string "2.19)" [or even "2.20)"],
then just above that line add a line "2.21)". That will be a quick-and-dirty
workaround. Obviously, if you have more energy, you can extend the script and
its clients by following the example for 2.19 or 2.20. For the truly
motivated, work with configure.ac.
|
|
From: Mike M. <mi...@mi...> - 2015-08-20 17:27:46
|
I run configure like so:

../../valgrind-3.10.1/configure --prefix /usr/local

And get the following error message:

checking the GLIBC_VERSION version... unsupported version 2.21
configure: error: Valgrind requires glibc version 2.2 - 2.19.

Is there any way to build with my existing glibc 2.21? It seems like I'm so
close.

I'm running Ubuntu 15.01.

mikem
|
|
From: John R. <jr...@bi...> - 2015-08-18 01:24:10
|
> Valgrind is producing a finding under Fedora 22 and Debian 8. The
> finding is shown below. The issue seems to appear in the latest GCC
> compilers, like 4.9 and 5.1. Code produced by earlier compilers does not
> produce a finding.
[[snip]]
> I'm building with -Og and -g3. Can I get Valgrind to print its name
> for me? Or, how can I get more information on the offending variable?

Please share your recipe to reproduce the problem that you see, in particular
including the fix hinted at by your earlier remark "The CPUID instruction's
ASM block was missing volatile." Here's my attempt. The environment is:

=====
$ grep VERSION /etc/os-release   ## Fedora
VERSION="22 (Twenty Two)"
$ gcc --version
gcc (GCC) 5.1.1 20150618 (Red Hat 5.1.1-4)
$ valgrind --version
valgrind-3.10.1
=====
$ svn checkout svn://svn.code.sf.net/p/cryptopp/code/trunk cryptopp-code
$ cd cryptopp-code
$ make -f GNUmakefile SYMBOLS=-g3 OPTIMIZE=-Og
$ valgrind --track-origins=yes ./cryptest.exe tv salsa
[[snip]]
==8330== Conditional jump or move depends on uninitialised value(s)
==8330==    at 0x5EDAB9: CryptoPP::DetectX86Features() (cpu.cpp:158)
==8330==    by 0x5B2075: HasAESNI (cpu.h:156)
==8330==    by 0x5B2075: CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&) (rijndael.cpp:209)
==8330==    by 0x51FFDA: CryptoPP::SimpleKeyingInterface::SetKey(unsigned char const*, unsigned long, CryptoPP::NameValuePairs const&) (cryptlib.cpp:68)
==8330==    by 0x45A4D8: CryptoPP::ModePolicyCommonTemplate<CryptoPP::AdditiveCipherAbstractPolicy>::CipherSetKey(CryptoPP::NameValuePairs const&, unsigned char const*, unsigned long) (modes.h:89)
==8330==    by 0x586BD9: CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy, CryptoPP::OFB_ModePolicy> >::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&) (strciphr.cpp:15)
==8330==    by 0x51FFDA: CryptoPP::SimpleKeyingInterface::SetKey(unsigned char const*, unsigned long, CryptoPP::NameValuePairs const&) (cryptlib.cpp:68)
==8330==    by 0x521282: CryptoPP::SimpleKeyingInterface::SetKeyWithIV(unsigned char const*, unsigned long, unsigned char const*, unsigned long) (cryptlib.cpp:78)
==8330==    by 0x45821A: SetKeyWithIV (cryptlib.h:399)
==8330==    by 0x45821A: main (test.cpp:129)
==8330==  Uninitialised value was created by a stack allocation
==8330==    at 0x5926C90: sigaction (in /usr/lib64/libc-2.21.so)
[[snip]]

Testing SymmetricCipher algorithm Salsa20.
......==8330== Conditional jump or move depends on uninitialised value(s)
==8330==    at 0x4C2E8C2: __memcmp_sse4_1 (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==8330==    by 0x4A764E: compare (char_traits.h:259)
==8330==    by 0x4A764E: __gnu_cxx::__enable_if<std::__is_char<char>::__value, bool>::__type std::operator==<char>(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (basic_string.h:4912)
==8330==    by 0x4A235E: operator!=<char, std::char_traits<char>, std::allocator<char> > (basic_string.h:4950)
==8330==    by 0x4A235E: TestSymmetricCipher(std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, CryptoPP::NameValuePairs const&) (datatest.cpp:425)
==8330==    by 0x4A63A2: TestDataFile(std::string const&, CryptoPP::NameValuePairs const&, unsigned int&, unsigned int&) (datatest.cpp:707)
==8330==    by 0x4A67E3: RunTestDataFile(char const*, CryptoPP::NameValuePairs const&, bool) (datatest.cpp:752)
==8330==    by 0x458E61: main (test.cpp:261)
==8330==  Uninitialised value was created by a stack allocation
==8330==    at 0x54B53A: CryptoPP::Salsa20_Policy::OperateKeystream(CryptoPP::KeystreamOperation, unsigned char*, unsigned char const*, unsigned long) (salsa.cpp:469)
=====
|
|
From: Jeffrey W. <nol...@gm...> - 2015-08-17 04:21:02
|
Valgrind is producing a finding under Fedora 22 and Debian 8. The
finding is shown below. The issue seems to appear in the latest GCC
compilers, like 4.9 and 5.1. Code produced by earlier compilers does not
produce a finding.
I see the "Uninitialised value was created by a stack allocation" and
the file and line number. The issue I am having is it points to the
last line of a [rather large] inline assembly block. To further
complicate matters, the routine makes use of the MMX coprocessor.
I tried using addr2line, but it's not offering anything more than
Valgrind is reporting:
$ addr2line -e cryptest.exe -a 0x51E41A
0x000000000051e41a
/home/jwalton/.../cryptopp-svn-5.6.3/salsa.cpp:474
I'm building with -Og and -g3. Can I get Valgrind to print its name
for me? Or, how can I get more information on the offending variable?
Thanks in advance.
**********
Line 474 reported below is actually line 468 of
http://www.cryptopp.com/docs/ref/salsa_8cpp_source.html (I'm tweaking
things trying to isolate the offender).
**********
$ valgrind --track-origins=yes ./cryptest.exe tv salsa
...
Testing SymmetricCipher algorithm Salsa20.
......==3890== Conditional jump or move depends on uninitialised value(s)
==3890== at 0x4C2CC7C: strcmp (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==3890== by 0x48744F: EqualStrings (datatest.cpp:23)
==3890== by 0x48744F: TestSymmetricCipher(std::map<std::string,
std::string, std::less<std::string>,
std::allocator<std::pair<std::string const, std::string> > >&,
CryptoPP::NameValuePairs const&) (datatest.cpp:432)
==3890== by 0x48B5F5: TestDataFile(std::string const&,
CryptoPP::NameValuePairs const&, unsigned int&, unsigned int&)
(datatest.cpp:714)
==3890== by 0x48BCF7: RunTestDataFile(char const*,
CryptoPP::NameValuePairs const&, bool) (datatest.cpp:759)
==3890== by 0x404FB0: main (test.cpp:266)
==3890== Uninitialised value was created by a stack allocation
==3890== at 0x51E3D3:
CryptoPP::Salsa20_Policy::OperateKeystream(CryptoPP::KeystreamOperation,
unsigned char*, unsigned char const*, unsigned long) (salsa.cpp:474)
==3890==
==3890== Conditional jump or move depends on uninitialised value(s)
==3890== at 0x4C2CC6A: strcmp (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==3890== by 0x48744F: EqualStrings (datatest.cpp:23)
==3890== by 0x48744F: TestSymmetricCipher(std::map<std::string,
std::string, std::less<std::string>,
std::allocator<std::pair<std::string const, std::string> > >&,
CryptoPP::NameValuePairs const&) (datatest.cpp:432)
==3890== by 0x48B5F5: TestDataFile(std::string const&,
CryptoPP::NameValuePairs const&, unsigned int&, unsigned int&)
(datatest.cpp:714)
==3890== by 0x48BCF7: RunTestDataFile(char const*,
CryptoPP::NameValuePairs const&, bool) (datatest.cpp:759)
==3890== by 0x404FB0: main (test.cpp:266)
==3890== Uninitialised value was created by a stack allocation
==3890== at 0x51E3D3:
CryptoPP::Salsa20_Policy::OperateKeystream(CryptoPP::KeystreamOperation,
unsigned char*, unsigned char const*, unsigned long) (salsa.cpp:474)
==3890==
==3890== Conditional jump or move depends on uninitialised value(s)
==3890== at 0x487472: TestSymmetricCipher(std::map<std::string,
std::string, std::less<std::string>,
std::allocator<std::pair<std::string const, std::string> > >&,
CryptoPP::NameValuePairs const&) (datatest.cpp:432)
==3890== by 0x48B5F5: TestDataFile(std::string const&,
CryptoPP::NameValuePairs const&, unsigned int&, unsigned int&)
(datatest.cpp:714)
==3890== by 0x48BCF7: RunTestDataFile(char const*,
CryptoPP::NameValuePairs const&, bool) (datatest.cpp:759)
==3890== by 0x404FB0: main (test.cpp:266)
==3890== Uninitialised value was created by a stack allocation
==3890== at 0x51E41A:
CryptoPP::Salsa20_Policy::OperateKeystream(CryptoPP::KeystreamOperation,
unsigned char*, unsigned char const*, unsigned long) (salsa.cpp:474)
==3890==
==3890== Use of uninitialised value of size 8
==3890== at 0x53F635: CryptoPP::BaseN_Encoder::Put2(unsigned char
const*, unsigned long, int, bool) (basecode.cpp:85)
==3890== by 0x571900: NextPutMaybeModifiable (filters.h:204)
==3890== by 0x571900:
CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*,
unsigned long, int, bool, bool) (filters.cpp:376)
==3890== by 0x4F8CDB:
CryptoPP::BufferedTransformation::ChannelPut2(std::string const&,
unsigned char const*, unsigned long, int, bool) (cryptlib.cpp:432)
==3890== by 0x56E9DD:
CryptoPP::StringStore::CopyRangeTo2(CryptoPP::BufferedTransformation&,
unsigned long long&, unsigned long long, std::string const&, bool)
const (filters.cpp:1091)
==3890== by 0x56E96C:
CryptoPP::StringStore::TransferTo2(CryptoPP::BufferedTransformation&,
unsigned long long&, std::string const&, bool) (filters.cpp:1081)
==3890== by 0x487578: Pump (filters.h:738)
==3890== by 0x487578: TestSymmetricCipher(std::map<std::string,
std::string, std::less<std::string>,
std::allocator<std::pair<std::string const, std::string> > >&,
CryptoPP::NameValuePairs const&) (datatest.cpp:436)
==3890== by 0x48B5F5: TestDataFile(std::string const&,
CryptoPP::NameValuePairs const&, unsigned int&, unsigned int&)
(datatest.cpp:714)
==3890== by 0x48BCF7: RunTestDataFile(char const*,
CryptoPP::NameValuePairs const&, bool) (datatest.cpp:759)
==3890== by 0x404FB0: main (test.cpp:266)
==3890== Uninitialised value was created by a stack allocation
==3890== at 0x51E3D3:
CryptoPP::Salsa20_Policy::OperateKeystream(CryptoPP::KeystreamOperation,
unsigned char*, unsigned char const*, unsigned long) (salsa.cpp:474)
|
|
From: Tom H. <to...@co...> - 2015-08-14 15:04:14
|
On 14/08/15 15:48, John Reiser wrote:

>> Can I integrate my program's reimplemented execve with Valgrind?
>
> Call it 'execve', put it in a shared library, then change the filename
> of the shared library where valgrind expects to find execve from "glibc"
> to the name of your .so.

Except that valgrind traps the system call, not the library function ;-)

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
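A tiny illustration of that point (hypothetical code, not from the emulator
in question): even a raw execve system call that never goes through glibc is
still seen by Valgrind under --trace-children=yes, because the interception
happens at the system-call boundary:

#include <sys/syscall.h>
#include <unistd.h>

int main() {
    const char* argv[] = { "/bin/true", nullptr };
    const char* envp[] = { nullptr };
    // Bypass the glibc execve() wrapper entirely and issue the raw system call.
    syscall(SYS_execve, argv[0], argv, envp);
    return 1;  // reached only if the execve system call failed
}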
|
From: John R. <jr...@bi...> - 2015-08-14 14:48:53
|
> Can I integrate my program's reimplemented execve with Valgrind?

Call it 'execve', put it in a shared library, then change the filename of the
shared library where valgrind expects to find execve from "glibc" to the name
of your .so.
|
|
From: Steven Stewart-G. <sst...@my...> - 2015-08-14 13:30:54
|
Can I integrate my program's reimplemented execve with Valgrind?

Hello,

Is it possible to integrate a reimplemented execve with Valgrind? I am
implementing a Linux + glibc emulator for Windows programs, and so I have to
reimplement execve myself (see
https://gitlab.com/sstewartgallus/Sandbox-Libc/blob/master/kernel/src/main.c#L330).
Unfortunately, this means I also overwrite Valgrind's code itself, and so
Valgrind does not work for those processes.

Is it possible to integrate with Valgrind itself and avoid this problem?
Shouldn't qemu-user and user-mode Linux also have run into these problems?

Thank you,
Steven Stewart-Gallus
|
|
From: Dan K. <da...@ke...> - 2015-08-13 22:40:16
|
On Thu, Aug 13, 2015 at 3:30 PM, Jeffrey Walton <nol...@gm...> wrote:

> I tracked it down to a one-liner ASM statement. The CPUID
> instruction's ASM block was missing volatile. I guess the optimizer
> removed it, which confused the machinery. (The other values were
> initialized, like the global flags and the word array used to
> represent %EAX - %EDX).

Sweet, thanks for letting us know what the culprit was! Now that you've
tracked it down, can you publish a slimmed down case? If you think the
optimizer's at fault, maybe we can file a bug for that.

- Dan
|
|
From: Jeffrey W. <nol...@gm...> - 2015-08-13 22:30:23
|
>> ...why would Valgrind
>> complain a boolean value is not initialized, even though it's
>> initialized to false in the source code and backed via BSS?
>
> The most common reason is because some uninitialized value
> was assigned to (or copied into) the value in question. Of course you probably
> claim that this is impossible, but an unknown bug enables almost anything.
> Be sure to run with --track-origins=yes.
>
> As others have noted, if you cannot post a concrete example
> that illustrates the problem then you are not worth our time anymore.

Thanks. I tracked it down to a one-liner ASM statement. The CPUID
instruction's ASM block was missing volatile. I guess the optimizer removed
it, which confused the machinery. (The other values were initialized, like
the global flags and the word array used to represent %EAX - %EDX).

Thanks again.

Jeff
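For readers who hit something similar, here is a generic sketch of that kind
of fix (GCC inline assembly; illustrative only, not Crypto++'s actual code).
The volatile qualifier keeps the optimizer from dropping or moving the block,
and listing all four outputs means none of the results are left uninitialised:

#include <cstdint>

static void cpuid(uint32_t leaf, uint32_t out[4])
{
    __asm__ __volatile__("cpuid"
                         : "=a"(out[0]), "=b"(out[1]), "=c"(out[2]), "=d"(out[3])
                         : "a"(leaf), "c"(0));
}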
|
From: John R. <jr...@bi...> - 2015-08-13 20:56:30
|
On 08/13/2015 12:47 AM, Jeffrey Walton wrote:

> ...why would Valgrind complain a boolean value is not initialized, even
> though it's initialized to false in the source code and backed via BSS?

The most common reason is because some uninitialized value was assigned to
(or copied into) the value in question. Of course you probably claim that
this is impossible, but an unknown bug enables almost anything. Be sure to
run with --track-origins=yes.

As others have noted, if you cannot post a concrete example that illustrates
the problem, then you are not worth our time anymore.
|
|
From: David C. <dcc...@ac...> - 2015-08-13 18:05:45
|
On 8/13/2015 10:28 AM, Jeffrey Walton wrote:
> On Thu, Aug 13, 2015 at 12:06 PM, Dan Kegel <da...@ke...> wrote:
>> On Thu, Aug 13, 2015 at 6:11 AM, Jeffrey Walton <nol...@gm...> wrote:
>>> All we have managed to do since C++98 (maybe earlier) is move the
>>> problem around because the C++ language has not given us the tools we
>>> need to address the problem.
>> What part of "stop doing that" does C++ get in the way of?
> The objects simply exist, and telling someone "don't do that" is
> wishful thinking at best. That's the implicit bikeshedding I was
> talking about.
>
> And converting the file-scoped globals into locals with accessors
> suffers the same problem on the destructor side. So the problem was
> not fixed; rather it was just moved around.
>
"Bikeshedding" is not the analogy I would apply; it's more like "camel's
nose in the tent". The problem sounds tractable as long as it is kept
small, but people will naturally want more and more. Even the GCC
init_priority extension is limited to 65435 (65535 - 101 + 1) priority
levels. What if someone has thousands of global variables to order
(e.g. an embedded system with no memory allocator)? What if the
libraries don't play nice and all of them ask for priority 101? What if
priorities 101 to 10000 are all taken, and one more variable needs to be
added in the middle?
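For concreteness, the extension mentioned here looks roughly like this
(GNU-specific; not standard C++, and the priority numbers are arbitrary
examples):

struct Logger   { Logger()   {} };
struct Database { Database() {} };

// Lower numbers are constructed earlier; the valid range is 101-65535.
Logger   g_log __attribute__((init_priority(101)));  // constructed before g_db
Database g_db  __attribute__((init_priority(102)));  // constructed after g_log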
My code "simply existed" and it had initialization order problems. I
rewrote it. That cost money, but I have never regretted it.
--
David Chapman dcc...@ac...
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com
|
|
From: Patrick J. L. <lop...@gm...> - 2015-08-13 17:39:06
|
On Thu, Aug 13, 2015 at 10:28 AM, Jeffrey Walton <nol...@gm...> wrote:
>
> And converting the file-scoped globals into locals with accessors
> suffers the same problem on the destructor side.

Not true. The C++ spec guarantees destructors will run in the opposite order
of constructors in most cases. The Meyers Singleton is such a case. (So are
global objects, incidentally. Global construction order is unspecified, but
whatever the order is, global destruction order is guaranteed to be the
opposite.)

- Pat
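For reference, the pattern referred to here, in minimal form (the names are
invented for illustration):

#include <vector>

class Registry {
public:
    static Registry& instance() {
        static Registry r;   // constructed on first use...
        return r;            // ...and destroyed in reverse order of construction
    }
    void add(int v) { values_.push_back(v); }
private:
    Registry() = default;
    std::vector<int> values_;
};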
|
From: Jeffrey W. <nol...@gm...> - 2015-08-13 17:28:42
|
On Thu, Aug 13, 2015 at 12:06 PM, Dan Kegel <da...@ke...> wrote:
> On Thu, Aug 13, 2015 at 6:11 AM, Jeffrey Walton <nol...@gm...> wrote:
>> All we have managed to do since C++98 (maybe earlier) is move the
>> problem around, because the C++ language has not given us the tools we
>> need to address the problem.
>
> What part of "stop doing that" does C++ get in the way of?

The objects simply exist, and telling someone "don't do that" is wishful
thinking at best. That's the implicit bikeshedding I was talking about.

And converting the file-scoped globals into locals with accessors suffers the
same problem on the destructor side. So the problem was not fixed; rather, it
was just moved around.

Jeff
|