From: Josef W. <Jos...@gm...> - 2011-07-08 17:54:17
On Friday 08 July 2011, John Reiser wrote:
> What I see is an instance of EPIC FAIL for Usability.
Thanks for the detailed reply. I really appreciate it.
The first thing I must note here is that "callgrind_annotate" is meant
as a fallback solution for people who cannot or do not want to use a GUI.
KCachegrind (the GUI for callgrind output)
- shows "Instruction Fetch" as explanation for "Ir",
- shows a call count column in the function list.
> The legend paragraph
> at the beginning of the output does not define "Ir", the output displayed
> by the command-line parameter "--help" does not define "Ir",
Ok. The same is true for cachegrind/cg_annotate. We should do something
about that.
> and the default
> output of a tool whose name begins with "call" has nothing to do with
> the number of calls to each subroutine.
Point taken.
> The legend paragraph contains these instances of "Ir":
> Events recorded: Ir
> Events shown: Ir
> Event sort order: Ir
> The output of "callgrind_annotate --help" does not contain "Ir".
> The MINIMUM I expect is for BOTH places to contain an additional annotation
> such as "Possible events: Ir (Instruction Read)".
>
> The only explanation of "Ir" is in the callgrind manual
> share/doc/valgrind/html/cl-manual.html
> and the cachegrind manual
> share/doc/valgrind/html/cg-manual.html
> Neither manual explains why "Ir" (Instruction Read) is used instead of
> something like "Ix" (Instruction eXecuted). Until some years ago most
> x86 chips read [and decode] many more instructions than they execute:
> in particular, some [effectively] non-executed instructions which reside
> after every taken branch. Some newer x86 chips read-decode-translate
> instruction bytes into microcode, often blurring the boundaries between
> architectural instructions, and execute only the translated microcode.
> The translations are cached; a loop of up to several dozen instructions
> can be executed many times after the first translation. So the
> relationship between Instruction Read and Instruction eXecute is murky.
Indeed. The "Ir" naming is part of cachegrind history.
We could equally use "Instructions Executed", and yes, this should be
explained in the manual. It would be even better to talk about
"Instructions Retired" (which also works as an abbreviation for "Ir").
> > Actually, callgrind_annotate only prints the number of calls for call arcs,
> > which are displayed e.g. with "--tree=caller":
> >
> >> callgrind_annotate --tree=caller callgrind.out*
> > ...
> > 7,060 < ???:0x0000000000401d60 (1x) [/bin/date]
> > 73 * /build/buildd/eglibc-2.13/stdlib/exit.c:exit [/lib/x86_64-linux-gnu/libc-2.13.so]
> > ...
> >
> > This excerpt means that a function at "0x0000000000401d60" called
> > "exit" one time. It probably would be good to add a total call count number
> > for every function.
>
> Yes. Total incoming calls by function is the zero-level statistic which is
> useful and expected by both novice and experienced users.
Good.
> >> For another instance, using the exact suggestion:
> >> $ valgrind --tool=callgrind --ct-verbose=1 /bin/date
> >> gives output such as:
> >> -----
> >> . . . > check_match.10789(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a08a20]
> >> . . . .> strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
> >> . . . . > strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
> >> -----
> >> where the indicated parameters are nonsense,
> >
> > Sure, it would be a nice feature!
> > But "--ct-verbose=..." is actually only for internal debugging, and
> > the output is not documented.
>
> If something such as "strcmp(0x3d, 0x5b, ...)" is _not_ an indication
> of parameters, then this is another poor choice,
This is undocumented debug output. We can change it any time. I do not see
a problem here.
> especially as a
> recommendation to a new user.
The original poster asked for a trace of called functions. You suggested
a rather elaborate wrapper method, which did not seem easy to implement.
I just commented that callgrind can print out exactly that list of
called functions using a hidden "feature", to show him a shortcut.
The only thing I perhaps should have added: "Warning: includes a lot of totally misleading crap".
> >> and a function whose body
> >> is a loop displays as a recursion.
> >
> > Please tell me a way how to distinguish loops from tail recursion optimization
> > at machine code level, when the loop jumps back to the first instruction
> > of a function.
>
> After entry to a function and before the corresponding return, if there has
> been no write to the register or memory location which is designated to hold
> the return address, and if the target is within the same function (as determined
> by the available symbol table and perhaps by implementation details such as
> choice of opcode and instruction encoding), then it's a loop and not
> a recursion.
That is exactly what callgrind is doing, unless ...
> Sure, at the _lowest_ level a branch back to the beginning [or even after the
> prologue] cannot be distinguished between a loop and a recursion.
Right. Unfortunately, callgrind can only see the lowest level. Do you
know of any debug information that could help here?
> However,
> programmers who use explicit tail recursion to the same function _know_
> that it is equivalent to a loop, and won't be surprised by such a display;
> whereas the coder of a loop might be confused by seeing a recursion.
That is arguable.
Callgrind only translates a jump into a recursion if it goes back to the first
instruction of a function. This mostly works fine, as loop bodies do not
include function prologues. The "wrongly detected" strcmp really is a rare
exception: with the x86_64 calling convention, this function actually has no prologue:
0x7ffff7df36f0: mov (%rdi),%al
0x7ffff7df36f2: cmp (%rsi),%al
0x7ffff7df36f4: jne 0x7ffff7df3703
0x7ffff7df36f6: inc %rdi
0x7ffff7df36f9: inc %rsi
0x7ffff7df36fc: test %al,%al
0x7ffff7df36fe: jne 0x7ffff7df36f0
0x7ffff7df3700: xor %eax,%eax
0x7ffff7df3702: retq
0x7ffff7df3703: mov $0x1,%eax
0x7ffff7df3708: mov $0xffffffff,%ecx
0x7ffff7df370d: cmovb %ecx,%eax
0x7ffff7df3710: retq
And you can try it yourself:
int mysc1(char* a, char* b)
{
if (*a == *b) {
if (*a != 0) return mysc1(a+1,b+1);
return 0;
}
return (*a < *b) ? -1:1;
}
int mysc2(char* a, char* b)
{
while(1) {
if (*a != *b) break;
if (*a == 0) return 0;
a++, b++;
}
return (*a < *b) ? -1:1;
}
$> gcc -v
...
gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)
$> gcc -O3 -Os -c mysc{1,2}.c; objdump -S mysc{1,2}.o
0000000000000000 <mysc>:
0: 8a 07 mov (%rdi),%al
2: 3a 06 cmp (%rsi),%al
4: 75 0c jne 12 <mysc+0x12>
6: 84 c0 test %al,%al
8: 74 13 je 1d <mysc+0x1d>
a: 48 ff c6 inc %rsi
d: 48 ff c7 inc %rdi
10: eb ee jmp 0 <mysc>
12: 0f 9d c0 setge %al
15: 0f b6 c0 movzbl %al,%eax
18: 8d 44 00 ff lea -0x1(%rax,%rax,1),%eax
1c: c3 retq
1d: 31 c0 xor %eax,%eax
1f: c3 retq
Both give exactly the same machine code with GCC 4.5.2!
> Also,
> callgrind should take advantage of having an event horizon that is larger
> than one instruction.
Please explain.
> Adopting the "subroutine outlook" also favors a loop over recursion.
> The "subroutine outlook" knows only what is [scheduled] to be done
> in the future; what has happened in the past is not relevant and
> cannot be known. Thus the current recursion level always equals
> the number of pending return addresses. The disadvantage is that
> traceback "through" a tail recursion elides some names, in much the same
> way that visible state at a debugger breakpoint shows only the _next_
> instruction to be executed, and not the previous instruction pointer.
Sorry, I don't get your point here.
Do you suggest that callgrind should always play dumb, and not even
try to reconstruct a potential tail recursion into a real recursion?
There is a similar scenario: machine code can have jumps between
functions (e.g. as a result of tail recursion optimization, or hand-crafted
assembler). Callgrind output is about call relationships. There is no way
to reason about jumps between functions. Thus, callgrind has to map a
jump between functions either to a "call" (with an additional return when
returning from the function jumped to) or to a "return/call" pair.
> > Please tell me the reason why you started with the assumption that the
> > discovered output must be buggy.
>
> I started with the assumption that a tool with a name such as "callgrind"
> would tell me how many times each subroutine was called. The output was
> not so. The output legend had no explanation.
I am really sorry about that experience. I obviously need to take better
care of CLI users.
I will open a bug report for that.
Josef
> The --help output had
> no explanation. The output used notation "strcmp(0x3d, 0x5b, ...)"
> for something other than displaying [correctly] a function call with parameters.
> After all that experience, then I decided that I should be wary of callgrind.
From: John R. <jr...@bi...> - 2011-07-08 15:18:44
On 07/08/2011 05:45 AM, Josef Weidendorfer wrote:
> On Thursday 07 July 2011, John Reiser wrote:
>>> valgrind --tool=callgrind --ct-verbose=1 ...
>>
>> When I run callgrind and callgrind_annotate, then I don't understand
>> the output. For instance, the connection between "73" and the number
>> of actual dynamic calls to exit() is mysterious to me:
>> =====
>> $ valgrind --tool=callgrind /bin/date
>> ==3790== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
>> $ callgrind_annotate callgrind.out*
>> Ir file:function
>> 73 /usr/src/debug/glibc-2.13/stdlib/exit.c:exit [/lib64/libc-2.13.so]
>
> The default for callgrind_annotate is to show self cost of the functions
> for the given event types collected.
> Thus, the "73" in the "Ir" column is the number of executed client instruction
> inside of exit. It has nothing to do with the number of calls to exit.

What I see is an instance of EPIC FAIL for Usability. The legend paragraph
at the beginning of the output does not define "Ir", the output displayed
by the command-line parameter "--help" does not define "Ir", and the default
output of a tool whose name begins with "call" has nothing to do with
the number of calls to each subroutine.

The legend paragraph contains these instances of "Ir":
    Events recorded: Ir
    Events shown: Ir
    Event sort order: Ir
The output of "callgrind_annotate --help" does not contain "Ir".
The MINIMUM I expect is for BOTH places to contain an additional annotation
such as "Possible events: Ir (Instruction Read)".

The only explanation of "Ir" is in the callgrind manual
    share/doc/valgrind/html/cl-manual.html
and the cachegrind manual
    share/doc/valgrind/html/cg-manual.html
Neither manual explains why "Ir" (Instruction Read) is used instead of
something like "Ix" (Instruction eXecuted). Until some years ago most
x86 chips read [and decode] many more instructions than they execute:
in particular, some [effectively] non-executed instructions which reside
after every taken branch. Some newer x86 chips read-decode-translate
instruction bytes into microcode, often blurring the boundaries between
architectural instructions, and execute only the translated microcode.
The translations are cached; a loop of up to several dozen instructions
can be executed many times after the first translation. So the
relationship between Instruction Read and Instruction eXecute is murky.

> Actually, callgrind_annotate only prints the number of calls for call arcs,
> which are displayed e.g. with "--tree=caller":
>
>> callgrind_annotate --tree=caller callgrind.out*
> ...
> 7,060 < ???:0x0000000000401d60 (1x) [/bin/date]
> 73 * /build/buildd/eglibc-2.13/stdlib/exit.c:exit [/lib/x86_64-linux-gnu/libc-2.13.so]
> ...
>
> This excerpt means that a function at "0x0000000000401d60" called
> "exit" one time. It probably would be good to add a total call count number
> for every function.

Yes. Total incoming calls by function is the zero-level statistic which is
useful and expected by both novice and experienced users.

>> For another instance, using the exact suggestion:
>> $ valgrind --tool=callgrind --ct-verbose=1 /bin/date
>> gives output such as:
>> -----
>> . . . > check_match.10789(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a08a20]
>> . . . .> strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
>> . . . . > strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
>> -----
>> where the indicated parameters are nonsense,
>
> Sure, it would be a nice feature!
> But "--ct-verbose=..." is actually only for internal debugging, and
> the output is not documented.

If something such as "strcmp(0x3d, 0x5b, ...)" is _not_ an indication
of parameters, then this is another poor choice, especially as a
recommendation to a new user.

> The values show the stack content
> (interpreted as unsigned ints, as far as I remember) when entering the
> function - which was enough at the time I needed the debug output.

On x86_64, the first 6 [integer and pointer] parameters are passed in
registers, and not on the stack.

>> and a function whose body
>> is a loop displays as a recursion.
>
> Please tell me a way how to distinguish loops from tail recursion optimization
> at machine code level, when the loop jumps back to the first instruction
> of a function.

After entry to a function and before the corresponding return, if there has
been no write to the register or memory location which is designated to hold
the return address, and if the target is within the same function (as determined
by the available symbol table and perhaps by implementation details such as
choice of opcode and instruction encoding), then it's a loop and not
a recursion.

Sure, at the _lowest_ level a branch back to the beginning [or even after the
prologue] cannot be distinguished between a loop and a recursion. However,
programmers who use explicit tail recursion to the same function _know_
that it is equivalent to a loop, and won't be surprised by such a display;
whereas the coder of a loop might be confused by seeing a recursion. Also,
callgrind should take advantage of having an event horizon that is larger
than one instruction.

Adopting the "subroutine outlook" also favors a loop over recursion.
The "subroutine outlook" knows only what is [scheduled] to be done
in the future; what has happened in the past is not relevant and
cannot be known. Thus the current recursion level always equals
the number of pending return addresses. The disadvantage is that
traceback "through" a tail recursion elides some names, in much the same
way that visible state at a debugger breakpoint shows only the _next_
instruction to be executed, and not the previous instruction pointer.

> Often quite some meta information is thrown away, and there
> is no way to exactly reconstruct what has happened at the source level.
> So callgrind must use heuristics which sometimes can go wrong. For the
> high-optimized code of strcmp above, this heuristic obviously goes wrong.
> But without source code, your claim of existance of a loop in strcmp
> could actually be wrong. It could have been coded as recursive function,
> resulting in the same machine code.
>
>> It is hard for me to trust such output.
>
> Please tell me the reason why you started with the assumption that the
> discovered output must be buggy.

I started with the assumption that a tool with a name such as "callgrind"
would tell me how many times each subroutine was called. The output was
not so. The output legend had no explanation. The --help output had
no explanation. The output used notation "strcmp(0x3d, 0x5b, ...)"
for something other than displaying [correctly] a function call with parameters.
After all that experience, then I decided that I should be wary of callgrind.

--
From: Josef W. <Jos...@gm...> - 2011-07-08 13:51:40
On Friday 08 July 2011, pankaj pawan wrote:
> I know the arguments and their types.
> I can get the stackpointer during
> runtime but how do I read the stack after that.
> Can I read memory just by dereferencing the stack pointer?

VEX of course can read from memory, see IRExpr_Load. Or if you instrument
a callback function, this function can just access memory on the client
stack (e.g. if you pass the stack pointer as a parameter). Note that this
is platform dependent.

Another option is to use wrapper functions, see section 3.2 at
http://valgrind.org/docs/manual/manual-core-adv.html

Josef

> Could you point to some functions which will help me in doing so.
>
> Regards,
> pankaj
From: pankaj p. <pan...@gm...> - 2011-07-08 13:19:23
Hi Josef,

Thanks for the reply.

> If you know that a given function uses the calling conventions of a given
> ABI, and you know the number of arguments and types, you can directly
> access the stack to get at parameter values. Otherwise, you need to parse
> debug information.
> I suppose you need to extend the debug info reader to be able to forward
> such information to tools.

I know the arguments and their types. I can get the stack pointer during
runtime, but how do I read the stack after that? Can I read memory just by
dereferencing the stack pointer? Could you point to some functions which
will help me in doing so.

Regards,
pankaj
From: Josef W. <Jos...@gm...> - 2011-07-08 13:06:05
On Thursday 07 July 2011, pankaj pawan wrote:
> Hi Josef,
>
> Thanks for your reply. I did run valgrind with
> guest_chase_thresh = 0 and was able to capture the calls.
>
> But my doubt was that I can't see the jump statement (is it that
> unconditional jumps are not displayed in IR?)
> Sorry, I am new, but for an unconditional branch we should just set the IP
> to the called location? Am I right?
> How is this being taken care of in the IR?

See the IRSB structure definition in "libvex_ir.h". The final jump is
specified there by jumpkind/next.

> I also had another question:
>
> Can we read the values written on the stack? For example, if I want to get
> the arguments being passed to a certain function?
>
> I have been able to intercept the calls to that particular function and
> get the stack pointer. How do I read the stack values?

If you know that a given function uses the calling conventions of a given
ABI, and you know the number of arguments and types, you can directly access
the stack to get at parameter values. Otherwise, you need to parse debug
information. I suppose you need to extend the debug info reader to be able
to forward such information to tools.

Josef
From: Josef W. <Jos...@gm...> - 2011-07-08 12:46:06
|
On Thursday 07 July 2011, John Reiser wrote:
> > valgrind --tool=callgrind --ct-verbose=1 ...
>
> When I run callgrind and callgrind_annotate, then I don't understand
> the output. For instance, the connection between "73" and the number
> of actual dynamic calls to exit() is mysterious to me:
> =====
> $ valgrind --tool=callgrind /bin/date
> ==3790== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
> $ callgrind_annotate callgrind.out*
> Ir file:function
> 73 /usr/src/debug/glibc-2.13/stdlib/exit.c:exit [/lib64/libc-2.13.so]
The default for callgrind_annotate is to show self cost of the functions
for the given event types collected.
Thus, the "73" in the "Ir" column is the number of executed client instructions
inside exit. It has nothing to do with the number of calls to exit.
Actually, callgrind_annotate only prints the number of calls for call arcs,
which are displayed e.g. with "--tree=caller":
> callgrind_annotate --tree=caller callgrind.out*
...
7,060 < ???:0x0000000000401d60 (1x) [/bin/date]
73 * /build/buildd/eglibc-2.13/stdlib/exit.c:exit [/lib/x86_64-linux-gnu/libc-2.13.so]
...
This excerpt means that a function at "0x0000000000401d60" called
"exit" one time. It probably would be good to add a total call count number
for every function.
> For another instance, using the exact suggestion:
> $ valgrind --tool=callgrind --ct-verbose=1 /bin/date
> gives output such as:
> -----
> . . . > check_match.10789(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a08a20]
> . . . .> strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
> . . . . > strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
> -----
> where the indicated parameters are nonsense,
Sure, it would be a nice feature!
But "--ct-verbose=..." is actually only for internal debugging, and
the output is not documented. The values show the stack content
(interpreted as unsigned ints, as far as I remember) when entering the
function - which was enough at the time I needed the debug output.
> and a function whose body
> is a loop displays as a recursion.
Please tell me a way how to distinguish loops from tail recursion optimization
at machine code level, when the loop jumps back to the first instruction
of a function.
Often quite some meta information is thrown away, and there
is no way to exactly reconstruct what has happened at the source level.
So callgrind must use heuristics which can sometimes go wrong. For the
highly optimized code of strcmp above, this heuristic obviously goes wrong.
But without source code, your claim of the existence of a loop in strcmp
could actually be wrong. It could have been coded as a recursive function,
resulting in the same machine code.
> It is hard for me to trust such output.
Please tell me the reason why you started with the assumption that the
discovered output must be buggy.
Actually, to improve the trustworthiness, I added machine code annotation
to callgrind.
Josef
From: pankaj p. <pan...@gm...> - 2011-07-08 09:42:07
Dear all,

Is it possible to read the local variables which are being written on the
stack? I need this so as to extract the arguments of a certain specific
function.

Regards,
pankaj
From: Greg C. <gre...@ya...> - 2011-07-08 02:25:09
> Did you change any #defines in the source code? Is this a clean 3.6.1 build?

#define VG_N_SEGMENTS 50000

Plus the failed attempt at extending valgrind to use more than 32GB:

# if VG_WORDSIZE == 8
  ///gczajkow aspacem_maxAddr = (Addr)0x800000000 - 1; // 32G
  ///http://thread.gmane.org/gmane.comp.debugging.valgrind/7584/focus=7602
  aspacem_maxAddr = (Addr)(0x800000000ULL << 2) - 1; // 128GB

# define N_PRIMARY_BITS 21

Otherwise valgrind errors out with:

==7560== Valgrind's memory management: out of memory:
==7560== newSuperblock's request for 4194304 bytes failed.
==7560== 33731608576 bytes have already been allocated.
==7560== Valgrind cannot continue. Sorry.

Our processes under valgrind consume more than 32GB of memory; how can it be
expanded to 128GB?

Thanks,
Greg
From: Julian S. <js...@ac...> - 2011-07-07 22:47:12
> mc_main.c:5972 (mc_pre_clo_init): Assertion 'MAX_PRIMARY_ADDRESS ==
> 0x7FFFFFFFFULL' failed.

Did you change any #defines in the source code? Is this a clean 3.6.1 build?

J
From: Greg C. <gre...@ya...> - 2011-07-07 22:11:15
Hi all,

Using memcheck in 3.6.1 causes the following assertion on SUSE10:

Memcheck: mc_main.c:5972 (mc_pre_clo_init): Assertion 'MAX_PRIMARY_ADDRESS == 0x7FFFFFFFFULL' failed.
==30960== at 0x3802B487: ??? (in /somepath/valgrind.3.6.1/lib/valgrind/memcheck-amd64-linux)

The only argument is --error-limit=no. This assertion doesn't show up on 3.4
or 3.5, but unfortunately both of those hang on the application. Any help is
much appreciated.
From: Naveen K. <g_n...@ya...> - 2011-07-07 20:53:20
> Run "strace valgrind" and look near the end of the output
> to see if some system call (that valgrind expects the OS to have)
> is not implemented. File a bug report; see:
> http://valgrind.org/support/bug_reports.html

John,

I did some digging and it looks like it is aborting in memcheck:

(gdb) bt
#0 0x38028448 in vgPlain_exit () at m_libcassert.c:157
#1 0x3802a853 in vgPlain_err_missing_prog () at m_libcprint.c:584
#2 0x3806082a in vgPlain_ii_create_image () at m_initimg/initimg-linux.c:860
#3 0x380309cb in valgrind_main (argc=1, argv=0xbffff904, envp=0xbffff90c) at m_main.c:1747
#4 0x380314f5 in _start_in_C_linux (pArgc=0xbffff900) at m_main.c:2839

It looks like the syscall __NR_exit_group is not working, so the code is
falling through. I changed it to __NR_exit and now valgrind is at least able
to exit properly without any segmentation fault.

Now when I do "valgrind ls" I get the following error:

--18797-- WARNING: Serious error when reading debug info
--18797-- When reading debug info from /lib/ld-2.2.4.so:
--18797-- Can't make sense of .sbss section mapping
--18797-- WARNING: Serious error when reading debug info
--18797-- When reading debug info from /bin/ls:
--18797-- Can't make sense of .sbss section mapping

valgrind: m_scheduler/sema.c:96 (vgModuleLocal_sema_down): Assertion 'sema->owner_lwpid != lwpid' failed.
==18797== at 0x38028595: report_and_quit (m_libcassert.c:194)
==18797== by 0x380286A7: vgPlain_assert_fail (m_libcassert.c:268)
==18797== by 0x3806363C: vgModuleLocal_sema_down (m_scheduler/sema.c:118)
==18797== by 0x38060FEC: vgPlain_acquire_BigLock (m_scheduler/scheduler.c:220)
==18797== by 0x38064875: vgPlain_client_syscall (m_syswrap/syswrap-main.c:1557)
==18797== by 0x380621D6: handle_syscall (m_scheduler/scheduler.c:901)
==18797== by 0x38062696: vgPlain_scheduler (m_scheduler/scheduler.c:1091)
==18797== by 0x38070C26: thread_wrapper (m_syswrap/syswrap-linux.c:94)
==18797== by 0x38070D1C: run_a_thread_NORETURN (m_syswrap/syswrap-linux.c:127)

sched status:
  running_tid=0

Thread 1: status = VgTs_WaitSys
==18797== at 0x4011364: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x4007272: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x4003832: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x400F485: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x4002375: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x400215D: ??? (in /lib/ld-2.2.4.so)
==18797== by 0x4001E25: ??? (in /lib/ld-2.2.4.so)
From: John R. <jr...@bi...> - 2011-07-07 16:16:23
> valgrind --tool=callgrind --ct-verbose=1 ...
When I run callgrind and callgrind_annotate, then I don't understand
the output. For instance, the connection between "73" and the number
of actual dynamic calls to exit() is mysterious to me:
=====
$ valgrind --tool=callgrind /bin/date
==3790== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
$ callgrind_annotate callgrind.out*
Ir file:function
73 /usr/src/debug/glibc-2.13/stdlib/exit.c:exit [/lib64/libc-2.13.so]
-----/usr/src/debug/glibc-2.13/stdlib/exit.c
void
exit (int status)
{
__run_exit_handlers (status, &__exit_funcs, true);
}
-----
$ gdb /bin/date
(gdb) b exit
(gdb) run
Thu Jul 7 08:52:35 PDT 2011
Breakpoint 1, exit (status=0x0) at exit.c:99
(gdb) list ## we are at the right place
97 void
98 exit (int status)
99 {
100 __run_exit_handlers (status, &__exit_funcs, true);
101 }
102 libc_hidden_def (exit)
(gdb) continue
$ ## was called only once, not 73 times.
=====
For another instance, using the exact suggestion:
$ valgrind --tool=callgrind --ct-verbose=1 /bin/date
gives output such as:
-----
. . . > check_match.10789(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a08a20]
. . . .> strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
. . . . > strcmp(0x3d, 0x5b, ...) [ld-2.13.so / 0x3d25a16ac0]
-----
where the indicated parameters are nonsense, and a function whose body
is a loop displays as a recursion. It is hard for me to trust such output.
--
From: pankaj p. <pan...@gm...> - 2011-07-07 15:35:48
Hi Josef,

Thanks for your reply. I did run valgrind with guest_chase_thresh = 0 and
was able to capture the calls.

But my doubt was that I can't see the jump statement (is it that
unconditional jumps are not displayed in IR?). Sorry, I am new, but for an
unconditional branch we should just set the IP to the called location? Am I
right? How is this being taken care of in the IR?

I also had another question:

Can we read the values written on the stack? For example, if I want to get
the arguments being passed to a certain function?

I have been able to intercept the calls to that particular function and get
the stack pointer. How do I read the stack values?

Regards,
pankaj

On Thu, Jul 7, 2011 at 5:07 PM, Josef Weidendorfer <Jos...@gm...> wrote:
> On Thursday 07 July 2011, pankaj pawan wrote:
> > Hi all,
> >
> > I had doubt regarding the flattened IR for a call instruction. When I try
> > and print the IR statements for call instructions, i can see the return
> > instruction being written on the stack but i am unable to see how the
> > branching is being done.
> > I can't capture it in Ist_Exit.
>
> A call is nothing more than an unconditional jump with putting a return
> address on the stack. As such, the call will disappear in the middle of a
> superblock. You could detect that there is a jump in the addresses of
> subsequent guest instructions, but AFAIK, there is no way to detect whether
> it just was a jump or a call (*).
>
> Instead, you can prohibit the building of superblocks by setting VEX
> attributes
>
>     VG_(clo_vex_control).iropt_unroll_thresh = 0;
>     VG_(clo_vex_control).guest_chase_thresh = 0;
>
> in your tool initialization (as callgrind does).
> Then, a call should end a BB, and IRSB attribute jumpkind should be
> Ijk_Call if the BB ends in a guest call instruction.
>
> Josef
>
> (*) It can make sense to add a VEX noop IR hint about that there was a
> given call/jump in the middle of a superblock translation. But only if a
> tool really would need it...
>
> > Can someone explain me what am I missing.
> >
> > Thanks,
> > pankaj
From: Josef W. <Jos...@gm...> - 2011-07-07 15:07:42
|
On Thursday 07 July 2011, pankaj pawan wrote:
> Hi all,
>
> I had doubt regarding the flattened IR for a call instruction. When I try
> and print the IR statements for call instructions, i can see the return
> instruction being written on the stack but i am unable to see how the
> branching is being done.
> I can't capture it in Ist_Exit .

A call is nothing more than an unconditional jump that also puts a return address on the stack. As such, the call will disappear in the middle of a superblock. You could detect that there is a jump in the addresses of subsequent guest instructions, but AFAIK there is no way to detect whether it was just a jump or a call (*).

Instead, you can prohibit the building of superblocks by setting the VEX attributes

    VG_(clo_vex_control).iropt_unroll_thresh = 0;
    VG_(clo_vex_control).guest_chase_thresh = 0;

in your tool initialization (as callgrind does). Then a call should end a BB, and the IRSB attribute jumpkind should be Ijk_Call if the BB ends in a guest call instruction.

Josef

(*) It can make sense to add a VEX no-op IR hint that there was a given call/jump in the middle of a superblock translation. But only if a tool really would need it...

> Can someone explain me what am I missing.
>
> Thanks,
> pankaj
>
|
|
From: Josef W. <Jos...@gm...> - 2011-07-07 14:52:11
|
On Thursday 07 July 2011, John Reiser wrote:
> > I have a binary file what i compile it with -g.So i need to perform a
> > set of action in my computer and see behavior of my file, This mean , i
> > need to see functions of run when i perform those set of actions,
>
> This sounds like some kind of profiling. Re-compile and re-link with
> "gcc -p" or "gcc -pg". If you cannot re-compile, then perhaps try
> "strace -i" or "ltrace -i" which will give you partial information.
> You'll need to do some work to process the instruction addresses,
> and you might have to perform a short backtrace (either dynamic or
> static) to get interesting information.
>
> More generally, write a utility program which reads your original binary
> program, then writes a new binary program having each static call:
>     call subr1
> replaced with an indirection:
>     call *indir1
>     .section indir_section
>     indir1: .addr subr1
> Then change the contents of location indir1 dynamically at run time:
> point it to a logging subroutine for a while, etc.
> [This won't track existing indirect calls, but perhaps those can
> be ignored for a while.]

valgrind --tool=callgrind --ct-verbose=1 ...

Hmm?
|
|
From: John R. <jr...@bi...> - 2011-07-07 13:50:37
|
> I have a binary file what i compile it with -g.So i need to perform a
> set of action in my computer and see behavior of my file, This mean , i
> need to see functions of run when i perform those set of actions,

This sounds like some kind of profiling. Re-compile and re-link with "gcc -p" or "gcc -pg". If you cannot re-compile, then perhaps try "strace -i" or "ltrace -i", which will give you partial information. You'll need to do some work to process the instruction addresses, and you might have to perform a short backtrace (either dynamic or static) to get interesting information.

More generally, write a utility program which reads your original binary program, then writes a new binary program having each static call:

    call subr1

replaced with an indirection:

    call *indir1
    .section indir_section
    indir1: .addr subr1

Then change the contents of location indir1 dynamically at run time: point it to a logging subroutine for a while, etc. [This won't track existing indirect calls, but perhaps those can be ignored for a while.]
--
|
|
From: pankaj p. <pan...@gm...> - 2011-07-07 13:28:08
|
Hi all,

I had a doubt regarding the flattened IR for a call instruction. When I try to print the IR statements for call instructions, I can see the return address being written on the stack, but I am unable to see how the branching is being done. I can't capture it in Ist_Exit.

Can someone explain what I am missing?

Thanks,
pankaj
|
|
From: Edward R. <edd...@go...> - 2011-07-07 11:22:18
|
Hi,

I'd like to use the VEX IR for static analysis (I hope this isn't taboo), since it seems like an easy route to disassemble into SSA for multiple architectures without having to write a large amount of parsing code. I've spent a bit of time looking at libvex.h and libvex_ir.h, and it seems as though this should be quite straightforward: as far as I understand it, I can include libvex.h, set up the structs according to the architecture I am using, load an ELF into memory, and then call LibVEX_Translate with the appropriate VexTranslateArgs to translate a chunk of code. However, this does seem somewhat too good to be true, so I am wondering if I am missing something. I'm also wondering where to put things like string tables when I am not doing a dynamic analysis.

I realise I could probably do all this using valgrind instrumentation, but a few people have mentioned that valgrind is slow, and I don't want to add unnecessary overhead when my work somewhat relies on proving the efficiency of my algorithms. I presume that the speed issues are due to the interpreted nature of the dynamic analysis in valgrind (please correct me if I'm wrong) and that VEX itself is probably quite efficient.

Thanks!

Best,
Ed.
|
|
From: WAROQUIERS P. <phi...@eu...> - 2011-07-07 10:16:13
|
>Those are parts of the address space (of both Valgrind and
>your program)
>You can refer to coregrind/m_aspacemgr/aspacemgr-linux.c for details.

>>> Now what could cause such an error? My program can create/destroy
>>> threads very quickly. I assume there's something strange with the stack?
>>> Any hints why valgrind would complain in such a way?
>>>
>>> Greets,
>>> Luka

A segment is a "big piece of memory" given to Valgrind by the kernel. Valgrind uses mmap to request such a segment. Such segments are used either to implement the malloc replacement (for memcheck) or for memory needed by Valgrind itself or because the program itself calls mmap.

It might also be that a segment is needed for each thread stack. But if the thread is destroyed, the thread stack segment should be re-usable for the next thread creation. If you have a lot of threads running simultaneously, this might explain a big list of segments. Are the threads always created with the same stack size? Are these stack sizes big? Big stack sizes and/or varying stack sizes could also trigger a bug such as http://bugs.kde.org/show_bug.cgi?id=250101

Philippe

____
This message and any files transmitted with it are legally privileged and intended for the sole use of the individual(s) or entity to whom they are addressed. If you are not the intended recipient, please notify the sender by reply and delete the message and any attachments from your system. Any unauthorised use or disclosure of the content of this message is strictly prohibited and may be unlawful. Nothing in this e-mail message amounts to a contractual or legal commitment on the part of EUROCONTROL, unless it is confirmed by appropriately signed hard copy. Any views expressed in this message are those of the sender.
|
|
From: Julian S. <js...@ac...> - 2011-07-07 09:46:19
|
On Thursday, July 07, 2011, WAROQUIERS Philippe wrote: > As part of Valgrind 3.7.0 SVN (not yet released so), a gdbserver has > been integrated > in Valgrind. > With this gdbserver, the process running under Valgrind can be "fully" > debugged > (e.g. it is possible to use break/next/info thread/....). > > This functionality has been tested on multiple platforms (including RHEL > 5, > similar to CentOs) with versions of gdb >= 7.0. > It should also work with versions of gdb >= 6.5 (but this is not > validated). > > So, you might try the SVN version and the gdbserver. To get started with this, run Valgrind with --vgdb-error=0 and follow the instructions that it prints. J |
|
From: WAROQUIERS P. <phi...@eu...> - 2011-07-07 09:38:49
|
As part of Valgrind 3.7.0 SVN (not yet released), a gdbserver has been integrated into Valgrind. With this gdbserver, the process running under Valgrind can be "fully" debugged (e.g. it is possible to use break/next/info thread/...).

This functionality has been tested on multiple platforms (including RHEL 5, similar to CentOS) with versions of gdb >= 7.0. It should also work with versions of gdb >= 6.5 (but this is not validated).

So, you might try the SVN version and the gdbserver.

Philippe

>-----Original Message-----
>From: Agile Aspect [mailto:agi...@gm...]
>Sent: Thursday 7 July 2011 03:18
>To: val...@li...
>Subject: [Valgrind-users] gdb and valgrind - the elf-x86-64 error
>
>Hi - we were using older versions of gdb and valgrind on CentOS 4
>built from source without any problem on 64 bit systems.
>
>Now, if we attempt to use gdb 7.2 and valgrind 3.6.1 built from
>source on CentOS 5 on 64 bit machines, it appears gdb is complaining:
>
>    I'm sorry, Dave, I can't do that. Symbol format
>`elf64-x86-64' unknown.
>
>Any ideas on how to fix this? Or does anyone know of versions of gdb
>and valgrind which work on CentOS 5? The versions which ship with
>CentOS 5.5 have problems. We're dead in the water.
>
>Any help would be greatly appreciated.
>
>-- Agile
|
|
From: Alexander P. <gl...@go...> - 2011-07-07 08:12:50
|
Those are parts of the address space (of both Valgrind and your program). You can refer to coregrind/m_aspacemgr/aspacemgr-linux.c for details.

On Thu, Jul 7, 2011 at 12:01 PM, Luka Napotnik <luk...@gm...> wrote:
> Oh ok so this is a valgrind issue. I thought my program does something
> funny. Btw. what does VG_N_SEGMENTS mean?
>
> On Thu, 2011-07-07 at 09:55 +0200, pa...@fr... wrote:
>> Hi
>>
>> To fix this you will have to build your own Valgrind (if you haven't
>> done so already). Grep for VG_N_SEGMENTS in the source, change it to
>> something bigger and rebuild/reinstall.
>>
>> A+
>> Paul
>>
>> ----- Original Message -----
>> From: "Luka Napotnik" <luk...@gm...>
>> To: val...@li...
>> Sent: Thursday, 7 July, 2011 09:28:53 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
>> Subject: [Valgrind-users] VG_N_SEGMENTS is too low error
>>
>> Hello.
>>
>> I'm running my program under valgrind (3.6.1) and after some time I get
>> the following message and valgrind aborts:
>>
>> --30322:0:aspacem Valgrind: FATAL: VG_N_SEGMENTS is too low.
>> --30322:0:aspacem Increase it and rebuild. Exiting now.
>>
>> Now what could cause such an error? My program can create/destroy
>> threads very quickly. I assume there's something strange with the stack?
>> Any hints why valgrind would complain in such a way?
>>
>> Greets,
>> Luka

--
Alexander Potapenko
Software Engineer
Google Moscow
|
|
From: Luka N. <luk...@gm...> - 2011-07-07 08:02:03
|
Oh ok, so this is a valgrind issue. I thought my program does something funny. Btw, what does VG_N_SEGMENTS mean?

On Thu, 2011-07-07 at 09:55 +0200, pa...@fr... wrote:
> Hi
>
> To fix this you will have to build your own Valgrind (if you haven't
> done so already). Grep for VG_N_SEGMENTS in the source, change it to
> something bigger and rebuild/reinstall.
>
> A+
> Paul
>
> ----- Original Message -----
> From: "Luka Napotnik" <luk...@gm...>
> To: val...@li...
> Sent: Thursday, 7 July, 2011 09:28:53 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
> Subject: [Valgrind-users] VG_N_SEGMENTS is too low error
>
> Hello.
>
> I'm running my program under valgrind (3.6.1) and after some time I get
> the following message and valgrind aborts:
>
> --30322:0:aspacem Valgrind: FATAL: VG_N_SEGMENTS is too low.
> --30322:0:aspacem Increase it and rebuild. Exiting now.
>
> Now what could cause such an error? My program can create/destroy
> threads very quickly. I assume there's something strange with the stack?
> Any hints why valgrind would complain in such a way?
>
> Greets,
> Luka
|
|
From: David C. <dcc...@ac...> - 2011-07-07 07:58:08
|
On 7/6/2011 10:53 PM, Mohsen Pahlevanzadeh wrote:
> Dear all,
>
> I have a binary file what i compile it with -g.So i need to perform a
> set of action in my computer and see behavior of my file, This mean , i
> need to see functions of run when i perform those set of actions, So i
> need to tell to valgrind : Please print source of which peace of program
> that
> running.(my program is big,for this reason i can't debug and so just see
> name of those function which running,and then i put a hook in those.)
> How i do it?
>
> Yours,
> Mohsen
>
By default valgrind analyzes all code that executes when the program is
run. There is no need to tell it which pieces of code to test; it looks
at all of them. This of course makes the program run slower.
--
David Chapman dcc...@ac...
Chapman Consulting -- San Jose, CA
|
|
From: <pa...@fr...> - 2011-07-07 07:55:48
|
Hi

To fix this you will have to build your own Valgrind (if you haven't done so already). Grep for VG_N_SEGMENTS in the source, change it to something bigger and rebuild/reinstall.

A+
Paul

----- Original Message -----
From: "Luka Napotnik" <luk...@gm...>
To: val...@li...
Sent: Thursday, 7 July, 2011 09:28:53 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: [Valgrind-users] VG_N_SEGMENTS is too low error

Hello.

I'm running my program under valgrind (3.6.1) and after some time I get the following message and valgrind aborts:

--30322:0:aspacem Valgrind: FATAL: VG_N_SEGMENTS is too low.
--30322:0:aspacem Increase it and rebuild. Exiting now.

Now what could cause such an error? My program can create/destroy threads very quickly. I assume there's something strange with the stack? Any hints why valgrind would complain in such a way?

Greets,
Luka
|