From: Josef W. <Jos...@gm...> - 2015-05-29 17:14:46
|
On 29.05.2015 at 18:50, Rocky Bernstein wrote:
> I'm looking into adding into the GNU make fork remake
> (http://bashdb.sf.net/remake) profiling using a callgrind file format.

Not sure I get the idea. Do you want to generate a callgrind file for dependencies in Makefiles?

> Is there something that describes the format?

Yes, see http://valgrind.org/docs/manual/cl-format.html
If something in the docs is unclear, I am happy to clarify (also the docs).

Josef

> I'm interested in the semantics of fi, fn, cfn, cfi. How calls and the
> + and - numbers work. The flexibility in the ordering and so on.
>
> Thanks. |
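The directives under discussion (fl/fn for positions, cfi/cfn/calls for calls, and +/- for subposition compression) can be illustrated with a small hand-written profile. This is an editor's sketch with invented file names, function names, and costs; the cl-format manual linked above is the authoritative description:

```
# events header: one cost column, instruction reads
version: 1
creator: sketch
events: Ir

fl=(1) build.c        # (1) defines a name-compression id for this file
fn=(1) main
10 50                 # line 10 costs 50 Ir
+2 10                 # '+2' = previous line + 2, i.e. line 12
cfi=(2) util.c        # file containing the function about to be called
cfn=(2) helper
calls=1 30            # one call, target line 30
+1 700                # call site at line 13; 700 Ir spent inside the call

fl=(2)                # ids can now be used without repeating the names
fn=(2)
30 700                # helper's own cost at its line 30
```

fi/fe switch the source file within a function body the same way fl sets it at the start; positions may be absolute or relative (+/-) to the previous position of the same kind.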
|
From: Rocky B. <rb...@du...> - 2015-05-29 16:50:33
|
I'm looking into adding into the GNU make fork remake ( http://bashdb.sf.net/remake) profiling using a callgrind file format. Is there something that describes the format? I'm interested in the semantics of fi, fn, cfn, cfi. How calls and the + and - numbers work. The flexibility in the ordering and so on. Thanks. |
|
From: Fred S. <fs...@co...> - 2015-05-27 19:03:27
|
Using svn head (as of yesterday), Valgrind 3.11, I'm getting a bunch (many, many) of these messages from valgrind:

30064-- warning: evaluate_Dwarf3_Expr: unhandled DW_OP_ 0xf2

Googling hasn't helped much, so I don't know if it's telling me something important (to me) or not. Environment: CentOS 7, x86-64.

Fred Smith
Senior Applications Programmer/Analyst
Computrition, Inc.
175 Middlesex Turnpike
Bedford, MA 01730
ph: 781-275-4488 x5013
fax: 781-357-4100 |
|
From: Philippe W. <phi...@sk...> - 2015-05-27 18:39:08
|
What this message says is: thread 13 is freeing a piece of memory (0x09440060, of size 15). This free operation can cause a race condition with some operation done on the same memory by thread 12. Drd then gives an approximate idea of where the conflicting operation in thread 12 was: the conflicting operation was between the two 'segment start/end' stack traces given. The allocation context is where the memory was allocated.

You might double-check the reported race condition by using helgrind (using --free-is-write=yes might be needed to report the same race). If helgrind also detects the race, then it should give the exact stack trace of the conflicting operation in thread 12 (or else you might need to increase --conflict-cache-size).

Philippe

On Wed, 2015-05-27 at 17:42 +0000, Fred Smith wrote:
> Not sure I understand this diagnostic, or if I do, I don't see how to
> solve it:
>
> ==00:00:00:16.486 30064== Conflicting store by thread 13 at 0x09440060 size 15
> ==00:00:00:16.486 30064==    at 0x4C2E147: free (vg_replace_malloc.c:476)
> ==00:00:00:16.486 30064==    by 0x72EC938: tzset_internal (tzset.c:440)
> ==00:00:00:16.486 30064==    by 0x72ED302: __tz_convert (tzset.c:629)  <==
> ==00:00:00:16.486 30064==    by 0x68FCE26: swill_serve_one (in /home/interface/interface/lib/libswill.so)
> ==00:00:00:16.486 30064==    by 0x68FD384: swill_serve (in /home/interface/interface/lib/libswill.so)
> ==00:00:00:16.486 30064==    by 0x41130E: HS_thr_ui (HS_ui.c:2336)
> ==00:00:00:16.486 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
> ==00:00:00:16.486 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
> ==00:00:00:16.486 30064==    by 0x73341AC: clone (clone.S:113)
> ==00:00:00:16.486 30064== Address 0x9440060 is at offset 0 from 0x9440060. Allocation context:
> ==00:00:00:16.487 30064==    at 0x4C2D02D: malloc (vg_replace_malloc.c:299)
> ==00:00:00:16.487 30064==    by 0x72C4529: strdup (strdup.c:42)
> ==00:00:00:16.487 30064==    by 0x72EC940: tzset_internal (tzset.c:441)  <==
> ==00:00:00:16.487 30064==    by 0x72ED24F: tzset (tzset.c:597)  <==
> ==00:00:00:16.487 30064==    by 0x72EBD68: mktime (mktime.c:588)
> ==00:00:00:16.487 30064==    by 0x40B571: wait_til_time (HS_gc.c:105)
> ==00:00:00:16.487 30064==    by 0x40B727: HS_thr_gc (HS_gc.c:204)
> ==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
> ==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
> ==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)
> ==00:00:00:16.487 30064== Other segment start (thread 12)
> ==00:00:00:16.487 30064==    at 0x4C33AA3: pthread_mutex_unlock_intercept (drd_pthread_intercepts.c:692)
> ==00:00:00:16.487 30064==    by 0x4C33AA3: pthread_mutex_unlock (drd_pthread_intercepts.c:700)
> ==00:00:00:16.487 30064==    by 0x448635: HS_procmgr_mutex_unlock (HS_procwatch.c:56)
> ==00:00:00:16.487 30064==    by 0x4487C6: HS_log_pid (HS_procwatch.c:134)
> ==00:00:00:16.487 30064==    by 0x40B6ED: HS_thr_gc (HS_gc.c:177)
> ==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
> ==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
> ==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)
> ==00:00:00:16.487 30064== Other segment end (thread 12)
> ==00:00:00:16.487 30064==    at 0x4C32DB3: pthread_mutex_lock_intercept (drd_pthread_intercepts.c:642)
> ==00:00:00:16.487 30064==    by 0x4C32DB3: pthread_mutex_lock (drd_pthread_intercepts.c:647)
> ==00:00:00:16.487 30064==    by 0x40B58F: wait_til_time (HS_gc.c:111)
> ==00:00:00:16.487 30064==    by 0x40B727: HS_thr_gc (HS_gc.c:204)
> ==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
> ==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
> ==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)
>
> The "allocation context" is a different thread than where the conflict
> is reported. The only commonality I can see is that they both are
> calling tz* routines.
>
> The mktime man page indicates it stores a value in the tzname
> variable. All I can figure out from this report is that DRD thinks
> tzname belongs to the "allocation context" and so whines when it's used
> elsewhere.
>
> Am I overlooking something here? Is there some other function I should
> be using (in multithreaded programs) instead of mktime()?
>
> Thanks in advance!
>
> Fred Smith
> Senior Applications Programmer/Analyst
> Computrition, Inc.
> 175 Middlesex Turnpike
> Bedford, MA 01730
> ph: 781-275-4488 x5013
> fax: 781-357-4100 |
|
From: Fred S. <fs...@co...> - 2015-05-27 18:14:34
|
Not sure I understand this diagnostic, or if I do, I don't see how to solve it:

==00:00:00:16.486 30064== Conflicting store by thread 13 at 0x09440060 size 15
==00:00:00:16.486 30064==    at 0x4C2E147: free (vg_replace_malloc.c:476)
==00:00:00:16.486 30064==    by 0x72EC938: tzset_internal (tzset.c:440)
==00:00:00:16.486 30064==    by 0x72ED302: __tz_convert (tzset.c:629)  <==
==00:00:00:16.486 30064==    by 0x68FCE26: swill_serve_one (in /home/interface/interface/lib/libswill.so)
==00:00:00:16.486 30064==    by 0x68FD384: swill_serve (in /home/interface/interface/lib/libswill.so)
==00:00:00:16.486 30064==    by 0x41130E: HS_thr_ui (HS_ui.c:2336)
==00:00:00:16.486 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
==00:00:00:16.486 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
==00:00:00:16.486 30064==    by 0x73341AC: clone (clone.S:113)
==00:00:00:16.486 30064== Address 0x9440060 is at offset 0 from 0x9440060. Allocation context:
==00:00:00:16.487 30064==    at 0x4C2D02D: malloc (vg_replace_malloc.c:299)
==00:00:00:16.487 30064==    by 0x72C4529: strdup (strdup.c:42)
==00:00:00:16.487 30064==    by 0x72EC940: tzset_internal (tzset.c:441)  <==
==00:00:00:16.487 30064==    by 0x72ED24F: tzset (tzset.c:597)  <==
==00:00:00:16.487 30064==    by 0x72EBD68: mktime (mktime.c:588)
==00:00:00:16.487 30064==    by 0x40B571: wait_til_time (HS_gc.c:105)
==00:00:00:16.487 30064==    by 0x40B727: HS_thr_gc (HS_gc.c:204)
==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)
==00:00:00:16.487 30064== Other segment start (thread 12)
==00:00:00:16.487 30064==    at 0x4C33AA3: pthread_mutex_unlock_intercept (drd_pthread_intercepts.c:692)
==00:00:00:16.487 30064==    by 0x4C33AA3: pthread_mutex_unlock (drd_pthread_intercepts.c:700)
==00:00:00:16.487 30064==    by 0x448635: HS_procmgr_mutex_unlock (HS_procwatch.c:56)
==00:00:00:16.487 30064==    by 0x4487C6: HS_log_pid (HS_procwatch.c:134)
==00:00:00:16.487 30064==    by 0x40B6ED: HS_thr_gc (HS_gc.c:177)
==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)
==00:00:00:16.487 30064== Other segment end (thread 12)
==00:00:00:16.487 30064==    at 0x4C32DB3: pthread_mutex_lock_intercept (drd_pthread_intercepts.c:642)
==00:00:00:16.487 30064==    by 0x4C32DB3: pthread_mutex_lock (drd_pthread_intercepts.c:647)
==00:00:00:16.487 30064==    by 0x40B58F: wait_til_time (HS_gc.c:111)
==00:00:00:16.487 30064==    by 0x40B727: HS_thr_gc (HS_gc.c:204)
==00:00:00:16.487 30064==    by 0x4C3024B: vgDrd_thread_wrapper (drd_pthread_intercepts.c:367)
==00:00:00:16.487 30064==    by 0x7029DF4: start_thread (pthread_create.c:308)
==00:00:00:16.487 30064==    by 0x73341AC: clone (clone.S:113)

The "allocation context" is a different thread than where the conflict is reported. The only commonality I can see is that they both are calling tz* routines.

The mktime man page indicates it stores a value in the tzname variable. All I can figure out from this report is that DRD thinks tzname belongs to the "allocation context" and so whines when it's used elsewhere.

Am I overlooking something here? Is there some other function I should be using (in multithreaded programs) instead of mktime()?

Thanks in advance!

Fred Smith
Senior Applications Programmer/Analyst
Computrition, Inc.
175 Middlesex Turnpike
Bedford, MA 01730
ph: 781-275-4488 x5013
fax: 781-357-4100 |
|
From: Jack <vm...@gm...> - 2015-05-26 04:38:21
|
Dear all,

Thank you, Philippe and Tom — it's a good direction and I will try it. Actually, I had read the valgrind FAQ before, but I only read the memcheck part and skipped the others... haha. I have now read them again.

Thanks again,
Jack

> On 2015-05-25, at 8:38 PM, Philippe Waroquiers <phi...@sk...> wrote:
>
> On Mon, 2015-05-25 at 12:41 +0100, Tom Hughes wrote:
>> On 25/05/15 12:26, Jack wrote:
>>
>>> so i wrote an test program on pc to test valgrind ability of program internal memory breaking
>>> but somehow it seems can't fetch program internal over-writing and under-writing global or local valuable??
>>
>> http://valgrind.org/docs/manual/faq.html#faq.overruns
>>
>> Tom
>
> As explained in the reference given by Tom, memcheck does not find
> over/under-run bugs in local or global variables.
>
> For that, you might try the experimental tool --tool=exp-sgcheck
> (sgcheck = s-tack or g-lobal check)
>
> Philippe |
|
From: Philippe W. <phi...@sk...> - 2015-05-25 12:38:43
|
On Mon, 2015-05-25 at 12:41 +0100, Tom Hughes wrote:
> On 25/05/15 12:26, Jack wrote:
>
>> so i wrote an test program on pc to test valgrind ability of program internal memory breaking
>> but somehow it seems can't fetch program internal over-writing and under-writing global or local valuable??
>
> http://valgrind.org/docs/manual/faq.html#faq.overruns
>
> Tom

As explained in the reference given by Tom, memcheck does not find over/under-run bugs in local or global variables.

For that, you might try the experimental tool --tool=exp-sgcheck (sgcheck = s-tack or g-lobal check).

Philippe |
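As concrete command lines (an editor's sketch, using the valtest binary from Jack's message; exp-sgcheck shipped with valgrind releases of this era):

```shell
# memcheck (the default tool) tracks heap blocks only, so stack/global
# overruns are missed unless they reach unmapped memory:
valgrind ./valtest swerr

# the experimental stack/global checker Philippe mentions:
valgrind --tool=exp-sgcheck ./valtest swerr
```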
|
From: Tom H. <to...@co...> - 2015-05-25 11:41:22
|
On 25/05/15 12:26, Jack wrote:

> so i wrote an test program on pc to test valgrind ability of program internal memory breaking
> but somehow it seems can't fetch program internal over-writing and under-writing global or local valuable??

http://valgrind.org/docs/manual/faq.html#faq.overruns

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/ |
|
From: Jack <vm...@gm...> - 2015-05-25 11:26:47
|
Dear All,

I am a rookie with valgrind, so I wrote a test program on a PC to test valgrind's ability to detect a program's internal memory corruption. But somehow it seems valgrind can't catch internal over-writing and under-writing of global or local variables.

The following is my test program valtest.c, which is compiled as valtest.

If I run "valgrind valtest swerr" with OVERWSTACK=1000, valgrind catches the internal memory corruption with the message
    Addr 0x0 is not stack'd, malloc'd or free'd
but if I reduce OVERWSTACK to 100, nothing happens and valgrind ends in peace.

If I set UNDERWSTACK=7, valgrind catches the corruption with the message
    Address 0xbe9c0097 is on thread 1's stack, 1 bytes below stack pointer
but if I set UNDERWSTACK to 6 or less, again nothing happens and valgrind ends in peace.

It is the same when I run "valgrind valtest gwerr":
with OVERWGLOBAL=10000, valgrind reports "Addr 0x0 is not stack'd, malloc'd or free'd", but with OVERWGLOBAL=1000 it ends in peace;
with UNDERWGLOBAL=1000, valgrind reports "jump to the invalid address stated on the next line", but with UNDERWGLOBAL=10 it ends in peace.

So could anyone kindly direct me how to configure valgrind so that it can catch this internal memory corruption?

Thanks~~
Jack
#include <stdio.h>
#include <string.h>

void test(void);

char gArray[10];

void test(void)
{
    char cmd[10];
    int i;
#define OVERWSTACK 1000
#define UNDERWSTACK 7
    for (i = 0; i < OVERWSTACK; i++)
        cmd[i] = '\0';
    for (i = 1; i <= UNDERWSTACK; i++)
        *((char *)cmd - i) = '\0';
}

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;
    if (strcmp(argv[1], "swerr") == 0) {
        printf("valtest: cmd swerr\n");
        test();
    }
    else if (strcmp(argv[1], "gwerr") == 0) {
        int i;
#define OVERWGLOBAL 10000
#define UNDERWGLOBAL 1000
        printf("valtest: cmd gwerr\n");
        for (i = 0; i < OVERWGLOBAL; i++)
            gArray[i] = '\0';
        for (i = 0; i < UNDERWGLOBAL; i++)
            *((char *)gArray - i) = '\0';
    }
    return 0;
}
|
|
From: David A. <Dav...@ne...> - 2015-05-21 12:02:52
|
Hi Dave,

> Also, although your main application clearly has debug information, your
> target file system's libraries may not. Make sure that you have debuginfo
> files in /usr/lib/.debug (default) or /usr/lib/debug. I'm not sure how
> valgrind intercepts the malloc/free calls, but I assume that valgrind will
> have a terrible time backtracing without dwarf info, at least on some
> flavors of arm.

Thanks for the reply and for bringing me onto the right path! I still don't understand much, but the "solution" for me was actually to install the debug symbols for valgrind itself, just for the vgpreload_memcheck-arm-linux.so lib. Installing the debug symbols for the C lib didn't change anything.

Cheers,
David |
|
From: Tom H. <to...@co...> - 2015-05-19 23:34:46
|
On 19/05/15 17:26, Carl Ponder wrote:
> So it shuts off the warning message
>
>     ==21898== Warning: noted but unhandled ioctl 0x30000001 with no
>     size/direction hints.
>     ==21898== This could cause spurious value errors to appear.
>     ==21898== See README_MISSING_SYSCALL_OR_IOCTL for guidance on
>     writing a proper wrapper.
>
> but doesn't do anything to manage the memory state.
> The output I'm getting from --trace-syscalls=yes is
>
>     SYSCALL[20867,1](16) sys_ioctl ( 38, 0x30000001, 0x0 ) --> [async]
>     ...
>     SYSCALL[20867,1](16) ... [async] --> Success(0x0:0x0)
>
> so it doesn't look like any memory-range is being passed in.
> I'm checking with our engineers to see if there is any special
> processing that needs to happen here, and we'll also have to watch and
> see if any other NVIDIA-specific calls are happening.

Yes, you will need to find out if it reads/writes any memory, though it looks unlikely in this case.

> One question to you: does it make sense to be hard-coding the
> 0x30000001 case into the general ioctl handler, or should there be an
> NVIDIA-specific file in the coregrind/m_syswrap directory?

Well, I'm not sure there's much precedent, as we don't normally take system calls that aren't in the upstream kernel. There may be a few ARM ones as precedent.

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/ |
|
From: Carl P. <cp...@nv...> - 2015-05-19 16:26:45
|
On 05/18/2015 09:12 AM, Carl Ponder wrote:
>
> I tried inserting print-statements into this function in the
> filecoregrind/m_syswrap/syswrap-linux.c
>
> 5406 PRE(sys_ioctl)
> 5407 {
> 5408 *flags |= SfMayBlock;
> 5409
> 5410 ARG2 = (UInt)ARG2;
> 5411
> 5412 PRINT("ioctl ARG1=0x%lx\n", (unsigned long) ARG1 );
> 5413 PRINT("ioctl ARG2=0x%lx\n", (unsigned long) ARG2 );
> 5414 PRINT("ioctl ARG3=0x%lx\n", (unsigned long) ARG3 );
>
> but didn't see any output, it looks like this function doesn't get
> called on ioctl 0x30000001.
>
On 05/18/2015 09:17 AM, Tom Hughes wrote:
>
> You know PRINT only prints when --trace-syscalls=yes is used?
>
Ok, got it... here is what worked, in the file coregrind/m_syswrap/syswrap-linux.c:
5406 PRE(sys_ioctl)
5407 {
5408 *flags |= SfMayBlock;
5409
5410 ARG2 = (UInt)ARG2;
5411
5412 // We first handle the ones that don't use ARG3 (even as a
5413 // scalar/non-pointer argument).
5414 switch (ARG2 /* request */) {
.....
 5467 /* NVIDIA UvmInitialize */
5468 case 0x30000001:
5469 PRINT("sys_ioctl ( %ld, 0x%lx, 0x%lx )",ARG1,ARG2,ARG3);
5470 PRE_REG_READ3(long, "ioctl",
5471 unsigned int, fd, unsigned int, request,
unsigned long, arg);
 5472 return;
5473
5474 default:
So it shuts off the warning message
==21898== Warning: noted but unhandled ioctl 0x30000001 with no
size/direction hints.
==21898== This could cause spurious value errors to appear.
==21898== See README_MISSING_SYSCALL_OR_IOCTL for guidance on
writing a proper wrapper.
but doesn't do anything to manage the memory state.
The output I'm getting from --trace-syscalls=yes is
SYSCALL[20867,1](16) sys_ioctl ( 38, 0x30000001, 0x0 ) --> [async]
...
SYSCALL[20867,1](16) ... [async] --> Success(0x0:0x0)
so it doesn't look like any memory-range is being passed in.
I'm checking with our engineers to see if there is any special
processing that needs to happen here, and we'll also have to watch and
see if any other NVIDIA-specific calls are happening.
One question to you: does it make sense to be hard-coding the
0x30000001 case into the general ioctl handler, or should there be an
NVIDIA-specific file in the coregrind/m_syswrap directory?
Thanks,
Carl
-----------------------------------------------------------------------------------
This email message is for the sole use of the intended recipient(s) and may contain
confidential information. Any unauthorized review, use, disclosure or distribution
is prohibited. If you are not the intended recipient, please contact the sender by
reply email and destroy all copies of the original message.
-----------------------------------------------------------------------------------
|
|
From: Jack <vm...@gm...> - 2015-05-19 03:54:51
|
Hi there,

I am encountering a "CPU 0 Unable to handle kernel paging request" problem. The environment is as follows:

#export VALGRIND_LIB=/media/mmcblk0p1/valgrind/lib/valgrind
#/media/mmcblk0p1/valgrind/bin/valgrind ls

I use valgrind-3.10.1, running on Linux version 2.6.36 (ro...@lo...) (gcc version 3.4.2) #1 Wed Jul 9 00:15:49 CST 2014.
# cat /proc/cpuinfo
system type : MT7620
processor : 0
cpu model : MIPS 24Kc V5.0
BogoMIPS : 386.04
wait instruction : yes
microsecond timers : yes
tlb_entries : 32
extra interrupt vector : yes
hardware watchpoint : yes, count: 4, address/irw mask: [0x0ff8, 0x0ff8, 0x0ee3, 0x0ff8]
ASEs implemented : mips16 dsp
shadow register sets : 1
core : 0
VCED exceptions : not available
VCEI exceptions : not available
I checked the page size with getpagesize(); it equals 4 KB. I configured valgrind with:
./configure --prefix=/media/mmcblk0p1/valgrind --host=mipsel-linux-gnu --with-pagesize=4 \
CC=/opt/buildroot-gcc463/usr/bin/mipsel-linux-gcc \
CFLAGS=" -I/home/MTK_Ralink_4.2.1/RT288x_SDK/source/lib/include -I/home/MTK_Ralink_4.2.1/RT288x_SDK/source -I/home/MTK_Ralink_4.2.1/RT288x_SDK/source/lib/pcre-8.01 -L/home/MTK_Ralink_4.2.1/RT288x_SDK/source/lib/lib -ldl " \
AR=/opt/buildroot-gcc463/usr/bin/mipsel-linux-ar \
CPP=/opt/buildroot-gcc463/usr/bin/mipsel-linux-cpp \
CPPFLAGS="-I/home/MTK_Ralink_4.2.1/RT288x_SDK/source/lib/include -I/home/MTK_Ralink_4.2.1/RT288x_SDK/source -I/home/MTK_Ralink_4.2.1/RT288x_SDK/source/lib/pcre-8.01"
But sadly I got the following messages. Can anyone kindly help tell me how to solve this?
Thank you very much.

Jack
==10993== Memcheck, a memory error detector
==10993== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==10993== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==10993== Command: ls
==10993==
CPU 0 Unable to handle kernel paging request at virtual address 00000000, epc == 800790cc, ra == 8007b354
Index: 0 pgmask=4kb va=38512000 asid=74
[pa=062a2000 c=3 d=1 v=1 g=0] [pa=00708000 c=3 d=1 v=1 g=0]
Index: 1 pgmask=4kb va=4549a000 asid=74
[pa=05085000 c=3 d=1 v=1 g=0] [pa=05086000 c=3 d=1 v=1 g=0]
Index: 2 pgmask=4kb va=45470000 asid=74
[pa=0505b000 c=3 d=1 v=1 g=0] [pa=0505c000 c=3 d=1 v=1 g=0]
Index: 3 pgmask=4kb va=454bc000 asid=74
[pa=050a7000 c=3 d=1 v=1 g=0] [pa=050a8000 c=3 d=1 v=1 g=0]
Index: 4 pgmask=4kb va=454c0000 asid=74
[pa=050ab000 c=3 d=1 v=1 g=0] [pa=00000000 c=0 d=0 v=0 g=0]
Index: 5 pgmask=4kb va=45484000 asid=74
[pa=0506f000 c=3 d=1 v=1 g=0] [pa=05070000 c=3 d=1 v=1 g=0]
Index: 6 pgmask=4kb va=4548a000 asid=74
[pa=05075000 c=3 d=1 v=1 g=0] [pa=05076000 c=3 d=1 v=1 g=0]
Index: 7 pgmask=4kb va=45498000 asid=74
[pa=05083000 c=3 d=1 v=1 g=0] [pa=05084000 c=3 d=1 v=1 g=0]
Index: 8 pgmask=4kb va=454ac000 asid=74
[pa=05097000 c=3 d=1 v=1 g=0] [pa=05098000 c=3 d=1 v=1 g=0]
Index: 9 pgmask=4kb va=4538c000 asid=74
[pa=05777000 c=3 d=1 v=1 g=0] [pa=05778000 c=3 d=1 v=1 g=0]
Index: 10 pgmask=4kb va=45420000 asid=74
[pa=0500b000 c=3 d=1 v=1 g=0] [pa=0500c000 c=3 d=1 v=1 g=0]
Index: 11 pgmask=4kb va=454a8000 asid=74
[pa=05093000 c=3 d=1 v=1 g=0] [pa=05094000 c=3 d=1 v=1 g=0]
Index: 12 pgmask=4kb va=454b2000 asid=74
[pa=0509d000 c=3 d=1 v=1 g=0] [pa=0509e000 c=3 d=1 v=1 g=0]
Index: 13 pgmask=4kb va=45478000 asid=74
[pa=05063000 c=3 d=1 v=1 g=0] [pa=05064000 c=3 d=1 v=1 g=0]
Index: 14 pgmask=4kb va=4548c000 asid=74
[pa=05077000 c=3 d=1 v=1 g=0] [pa=05078000 c=3 d=1 v=1 g=0]
Index: 15 pgmask=4kb va=45412000 asid=74
[pa=057fd000 c=3 d=1 v=1 g=0] [pa=057fe000 c=3 d=1 v=1 g=0]
Index: 16 pgmask=4kb va=454be000 asid=74
[pa=050a9000 c=3 d=1 v=1 g=0] [pa=050aa000 c=3 d=1 v=1 g=0]
Index: 17 pgmask=4kb va=45488000 asid=74
[pa=05073000 c=3 d=1 v=1 g=0] [pa=05074000 c=3 d=1 v=1 g=0]
Index: 18 pgmask=4kb va=00000000 asid=74
[pa=00000000 c=0 d=0 v=0 g=0] [pa=00000000 c=0 d=0 v=0 g=0]
Index: 19 pgmask=4kb va=454ae000 asid=74
[pa=05099000 c=3 d=1 v=1 g=0] [pa=0509a000 c=3 d=1 v=1 g=0]
Index: 20 pgmask=4kb va=45492000 asid=74
[pa=0507d000 c=3 d=1 v=1 g=0] [pa=0507e000 c=3 d=1 v=1 g=0]
Index: 21 pgmask=4kb va=45438000 asid=74
[pa=05023000 c=3 d=1 v=1 g=0] [pa=05024000 c=3 d=1 v=1 g=0]
Index: 22 pgmask=4kb va=45494000 asid=74
[pa=0507f000 c=3 d=1 v=1 g=0] [pa=05080000 c=3 d=1 v=1 g=0]
Index: 23 pgmask=4kb va=454b6000 asid=74
[pa=050a1000 c=3 d=1 v=1 g=0] [pa=050a2000 c=3 d=1 v=1 g=0]
Index: 24 pgmask=4kb va=454a6000 asid=74
[pa=05091000 c=3 d=1 v=1 g=0] [pa=05092000 c=3 d=1 v=1 g=0]
Index: 25 pgmask=4kb va=454b8000 asid=74
[pa=050a3000 c=3 d=1 v=1 g=0] [pa=050a4000 c=3 d=1 v=1 g=0]
Index: 26 pgmask=4kb va=453ea000 asid=74
[pa=057d5000 c=3 d=1 v=1 g=0] [pa=057d6000 c=3 d=1 v=1 g=0]
Index: 27 pgmask=4kb va=454ba000 asid=74
[pa=050a5000 c=3 d=1 v=1 g=0] [pa=050a6000 c=3 d=1 v=1 g=0]
Index: 28 pgmask=4kb va=4544e000 asid=74
[pa=05039000 c=3 d=1 v=1 g=0] [pa=0503a000 c=3 d=1 v=1 g=0]
Index: 29 pgmask=4kb va=4549e000 asid=74
[pa=05089000 c=3 d=1 v=1 g=0] [pa=0508a000 c=3 d=1 v=1 g=0]
Index: 30 pgmask=4kb va=380c8000 asid=74
[pa=00000000 c=0 d=0 v=0 g=0] [pa=01188000 c=3 d=0 v=1 g=0]
Index: 31 pgmask=4kb va=454aa000 asid=74
[pa=05095000 c=3 d=1 v=1 g=0] [pa=05096000 c=3 d=1 v=1 g=0]
Oops[#1]:
Cpu 0
$ 0 : 00000000 42b77ee0 00000000 00000000
$ 4 : 00000004 810a1618 00000002 80553ce0
$ 8 : 00000004 810a1600 00000010 00080000
$12 : 00000000 00000000 000000f0 00000780
$16 : 810a15f8 87c03f1c 00000004 80553c10
$20 : 87c03f00 00000000 00000002 00000001
$24 : 00000001 380cefcc
$28 : 86fcc000 86fcdd10 00000000 8007b354
Hi : 00043e42
Lo : 715ae3a8
epc : 800790cc 0x800790cc
Not tainted
ra : 8007b354 0x8007b354
Status: 1100ff02 KERNEL EXL
Cause : 4080000c
BadVA : 00000000
PrId : 00019650 (MIPS 24Kc)
Modules linked in:
Process memcheck-mips32 (pid: 10993, threadinfo=86fcc000, task=86ac0158, tls=00000000)
Stack : 00000101 8053a070 80041a24 80041a04 8053c3e0 00000001 810a15f8 87c03f1c
00000004 80553c10 87c03f00 00000001 00000007 00000001 00000000 8007b354
80553c10 00000001 00000010 00000000 00000041 00000001 80553c10 805541cc
00000000 00000001 00000010 00000000 ffffffff 00000001 86aa4304 000200da
454c103c 00000001 86f81f3c 86f5aae0 805541c8 86fcdf30 00000000 8007b6a8
...
Call Trace:[<80041a24>] 0x80041a24
[<80041a04>] 0x80041a04
[<8007b354>] 0x8007b354
[<8007b6a8>] 0x8007b6a8
[<8007f754>] 0x8007f754
[<8008fa90>] 0x8008fa90
[<8001fdd4>] 0x8001fdd4
[<80041b20>] 0x80041b20
[<80092b00>] 0x80092b00
[<8000840c>] 0x8000840c
[<800420f0>] 0x800420f0
[<800156d0>] 0x800156d0
[<80008400>] 0x80008400
[<8000af84>] 0x8000af84
Code: 8ca30004 8ca20000 24a9ffe8 <ac620000> ac430004 3c020010 8ca4ffe8 34420100 aca20000
Disabling lock debugging due to kernel taint
BUG: Bad page state in process sh pfn:00582
page:8100b040 count:-1 mapcount:0 mapping:(null) index:0x0
page flags: 0x0()
Call Trace:[<8001b2cc>] 0x8001b2cc
[<8001b2cc>] 0x8001b2cc
[<80079bc0>] 0x80079bc0
[<800b6a8c>] 0x800b6a8c
[<8007b428>] 0x8007b428
[<80175144>] 0x80175144
[<8007b6a8>] 0x8007b6a8
[<8007ac0c>] 0x8007ac0c
[<80025db8>] 0x80025db8
[<8008e27c>] 0x8008e27c
[<8008fa5c>] 0x8008fa5c
[<8003d9e0>] 0x8003d9e0
[<80034aec>] 0x80034aec
[<8001fdd4>] 0x8001fdd4
[<8003f3b8>] 0x8003f3b8
[<80011cac>] 0x80011cac
[<8003fc60>] 0x8003fc60
[<8003fd1c>] 0x8003fd1c
[<8003a880>] 0x8003a880
[<8003a778>] 0x8003a778
[<8004002c>] 0x8004002c
[<80040020>] 0x80040020
[<800a0774>] 0x800a0774
[<800a0750>] 0x800a0750
[<80008400>] 0x80008400
[<8000a124>] 0x8000a124
BUG: Bad page state in process sh pfn:005ce
page:8100b9c0 count:-1 mapcount:0 mapping:(null) index:0x0
page flags: 0x0()
Call Trace:[<8001b2cc>] 0x8001b2cc
[<8001b2cc>] 0x8001b2cc
[<80079bc0>] 0x80079bc0
[<800b6a8c>] 0x800b6a8c
[<8007b428>] 0x8007b428
[<80175144>] 0x80175144
[<8007b6a8>] 0x8007b6a8
[<8007ac0c>] 0x8007ac0c
[<80025db8>] 0x80025db8
[<8008e27c>] 0x8008e27c
[<8008fa5c>] 0x8008fa5c
[<8003d9e0>] 0x8003d9e0
[<80034aec>] 0x80034aec
[<8001fdd4>] 0x8001fdd4
[<8003f3b8>] 0x8003f3b8
[<80011cac>] 0x80011cac
[<80034aec>] 0x80034aec
[<8003510c>] 0x8003510c
[<800351ac>] 0x800351ac
[<800a0774>] 0x800a0774
[<800a0750>] 0x800a0750
[<80011cac>] 0x80011cac
[<80008400>] 0x80008400
[<80008538>] 0x80008538
Segmentation fault |
|
From: Tom H. <to...@co...> - 2015-05-18 14:17:33
|
On 18/05/15 15:12, Carl Ponder wrote:
> On 01/28/2015 09:27 PM, Carl Ponder wrote:
>> I ran into this output from valgrind
>>
>> ==21898== Warning: noted but unhandled ioctl 0x30000001 with no
>> size/direction hints.
>> ==21898== This could cause spurious value errors to appear.
>> ==21898== See README_MISSING_SYSCALL_OR_IOCTL for guidance on
>> writing a proper wrapper.
>>
> On 03/17/2015 03:44 AM, Julian Seward wrote:
>>
>> A more fundamental question is, are you running a kernel with NVidia-specific hacks? What is this ioctl 0x30000001 ? What does it do? Is it in the mainline linux kernel sources?
>>
> I had to postpone this thread while I was working on other things, but
> am back looking at valgrind output again.
> We're running standard Centos 6.6 but loading-in custom kernel extensions.
> The ioctl call is to UvmInitialize, which maps the GPU memory into the
> process address-space.
But this ioctl is implemented by that extra kernel module I assume?
>> The right place to add it is PRE(sys_ioctl) and POST(sys_ioctl).
>>
> I tried inserting print-statements into this function in the
> filecoregrind/m_syswrap/syswrap-linux.c
>
> 5406 PRE(sys_ioctl)
> 5407 {
> 5408 *flags |= SfMayBlock;
> 5409
> 5410 ARG2 = (UInt)ARG2;
> 5411
> 5412 PRINT("ioctl ARG1=0x%lx\n", (unsigned long) ARG1 );
> 5413 PRINT("ioctl ARG2=0x%lx\n", (unsigned long) ARG2 );
> 5414 PRINT("ioctl ARG3=0x%lx\n", (unsigned long) ARG3 );
>
> but didn't see any output, it looks like this function doesn't get
> called on ioctl 0x30000001.
You know PRINT only prints when --trace-syscalls=yes is used?
> The Warning at the top looks like it came from the
> So I still don't understand how to get it to handle 0x30000001, given
> that it never gets to the PRE(sys_ioctl).
> It must be getting intercepted somewhere higher in the call-chain and
> passed down to PRE_unknown_ioctl instead.
Well you have clearly done something wrong, as unknown_ioctl is invoked
from the default clause in sys_ioctl, so it must be getting called.
Tom
--
Tom Hughes (to...@co...)
http://compton.nu/
|
|
From: Philippe W. <phi...@sk...> - 2015-05-12 19:05:45
|
On Tue, 2015-05-12 at 17:01 +0530, Austin Einter wrote:
> On debugging I found that epoll out events do not come for client
> sockets.
>
> Is it a known valgrind issue?

To my knowledge, there is no problem with epoll and Valgrind.
What version are you using ? On which platform ?

To understand what is going on, you might:
1. start Valgrind with --trace-syscalls=yes
and/or
2. debug your program under Valgrind, using option --vgdb-error=0

Philippe |
|
From: Austin E. <aus...@gm...> - 2015-05-12 11:31:14
|
Hi
I have a typical issue while using valgrind with epoll stuff. I have an
ubuntu 14.04 system. I have implemented epoll client server code.
When I execute the epoll client and servers without valgrind, I see
proper communication between the epoll client and server.
When I execute the epoll client with valgrind and the epoll server
(without valgrind), I see there is no communication between the epoll
client and server. On debugging I found that epoll out events do not
come for the client sockets.
Is it a known valgrind issue? Or am I missing something?
Thanks
Austin |
|
From: John R. <jr...@bi...> - 2015-05-12 01:56:12
|
Sorry. |
|
From: John R. <jr...@bi...> - 2015-05-12 01:45:21
|
http://www.katu.com/news/local/Rare-twin-horses-born-in-Newberg-303308271.html |
|
From: David L. <dav...@sb...> - 2015-05-08 00:21:25
|
Also, although your main application clearly has debug information, your
target file system's libraries may not. Make sure that you have debuginfo
files in /usr/lib/.debug (default) or /usr/lib/debug. I'm not sure how
valgrind intercepts the malloc/free calls, but I assume that valgrind
will have a terrible time backtracing without dwarf info, at least on
some flavors of arm.

Dave

--------------------------------------------
On Thu, 5/7/15, David Lerner <dav...@sb...> wrote:

Subject: Re: Valgrind, Yocto tunning on arm flags
To: Dav...@ne...
Cc: val...@li...
Date: Thursday, May 7, 2015, 5:42 PM

Hi David,

The patch that you cite should not be related to the root cause of your
backtrace failure. The patch cannot impact the run-time binaries since it
only removes building some valgrind tests that wouldn't compile in the
yocto cross-compilation environment, as well as some test-specific CFLAGs.
These tests are built when the yocto PTEST framework is enabled, but
typically not installed into even a developer's file system image.

Dave Lerner

> -----Original Message-----
> From: David Andrey [mailto:Dav...@ne...]
> Sent: Wednesday, May 06, 2015 3:24 AM
> To: val...@li...
> Cc: Lerner, Dave
> Subject: Valgrind, Yocto tunning on arm flags
>
> Hi everybody,
>
> I'm new to Valgrind and have some questions about architecture flags.
> Running the Valgrind memchecker provided by Yocto on an ARMv7 CortexA9
> (iMX6) I see the stack trace, except for memory leaks. So I decided to
> build Valgrind on my own, and everything was right.
>
> Actually, I don't have enough knowledge to understand all the tricks
> about compiler flags for ARM, but I suppose something is conflicting
> with this additional Yocto patch
> http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-devtools/valgrind/valgrind/remove-arm-variant-specific.patch
>
> Can anyone check if this patch is ok for Valgrind or give me some hints ?
>
> Here is my test, just for the log
>
> Source code
>
> #include <stdlib.h>
>
> void f(void)
> {
>     int* x = malloc(10 * sizeof(int));
>     x[10] = 0; // problem 1: heap block overrun
> }               // problem 2: memory leak -- x not freed
>
> int main(void)
> {
>     f();
>     return 0;
> }
>
> Running Yocto provided Valgrind
>
> valgrind --leak-check=full --keep-stacktraces=alloc-and-free --num-callers=20 --track-origins=yes --leak-resolution=high ./memcheck-example
>
> leads to
>
> ==3510== Memcheck, a memory error detector
> ==3510== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==3510== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> ==3510== Command: ./memcheck-example
> ==3510==
> ==3510== Invalid write of size 4
> ==3510==    at 0x8454: f (memcheck-example.c:6)
> ==3510==    by 0x846B: main (memcheck-example.c:11)
> ==3510==  Address 0x497b050 is 0 bytes after a block of size 40 alloc'd
> ==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
> ==3510==
> ==3510==
> ==3510== HEAP SUMMARY:
> ==3510==     in use at exit: 40 bytes in 1 blocks
> ==3510==   total heap usage: 1 allocs, 0 frees, 40 bytes allocated
> ==3510==
> ==3510== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
> ==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
> ==3510==
> ==3510== LEAK SUMMARY:
> ==3510==    definitely lost: 40 bytes in 1 blocks
> ==3510==    indirectly lost: 0 bytes in 0 blocks
> ==3510==      possibly lost: 0 bytes in 0 blocks
> ==3510==    still reachable: 0 bytes in 0 blocks
> ==3510==         suppressed: 0 bytes in 0 blocks
>
> so without stack trace for mem leak
>
> Regards
> David |
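A quick way to check whether a binary or library actually carries the DWARF info Dave mentions is to look for a .debug_info section; a sketch, assuming gcc and readelf are available on the build host and using throwaway file names:

```shell
# Build the same trivial program with and without debug info, then
# check each for a .debug_info section. The same readelf check can be
# pointed at the target's libraries, or at split debuginfo files under
# /usr/lib/debug.
cat > dbg-example.c <<'EOF'
int main(void) { return 0; }
EOF
gcc -g  dbg-example.c -o with-debug
gcc -g0 dbg-example.c -o without-debug
for bin in with-debug without-debug; do
  if readelf -S "$bin" | grep -q '\.debug_info'; then
    echo "$bin: DWARF present"
  else
    echo "$bin: no DWARF (expect poor backtraces)"
  fi
done
```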
|
From: David L. <dav...@sb...> - 2015-05-07 22:45:42
|
Hi David,

The patch that you cite should not be related to the root cause of your
backtrace failure. The patch cannot impact the run-time binaries since it
only removes building some valgrind tests that wouldn't compile in the
yocto cross-compilation environment, as well as some test-specific CFLAGs.
These tests are built when the yocto PTEST framework is enabled, but
typically not installed into even a developer's file system image.

Dave Lerner

> -----Original Message-----
> From: David Andrey [mailto:Dav...@ne...]
> Sent: Wednesday, May 06, 2015 3:24 AM
> To: val...@li...
> Cc: Lerner, Dave
> Subject: Valgrind, Yocto tunning on arm flags
>
> Hi everybody,
>
> I'm new to Valgrind and have some questions about architecture flags.
> Running the Valgrind memchecker provided by Yocto on an ARMv7 CortexA9
> (iMX6) I see the stack trace, except for memory leaks. So I decided to
> build Valgrind on my own, and everything was right.
>
> Actually, I don't have enough knowledge to understand all the tricks
> about compiler flags for ARM, but I suppose something is conflicting
> with this additional Yocto patch
> http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-devtools/valgrind/valgrind/remove-arm-variant-specific.patch
>
> Can anyone check if this patch is ok for Valgrind or give me some hints ?
>
> Here is my test, just for the log
>
> Source code
>
> #include <stdlib.h>
>
> void f(void)
> {
>     int* x = malloc(10 * sizeof(int));
>     x[10] = 0; // problem 1: heap block overrun
> }               // problem 2: memory leak -- x not freed
>
> int main(void)
> {
>     f();
>     return 0;
> }
>
> Running Yocto provided Valgrind
>
> valgrind --leak-check=full --keep-stacktraces=alloc-and-free --num-callers=20 --track-origins=yes --leak-resolution=high ./memcheck-example
>
> leads to
>
> ==3510== Memcheck, a memory error detector
> ==3510== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==3510== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> ==3510== Command: ./memcheck-example
> ==3510==
> ==3510== Invalid write of size 4
> ==3510==    at 0x8454: f (memcheck-example.c:6)
> ==3510==    by 0x846B: main (memcheck-example.c:11)
> ==3510==  Address 0x497b050 is 0 bytes after a block of size 40 alloc'd
> ==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
> ==3510==
> ==3510==
> ==3510== HEAP SUMMARY:
> ==3510==     in use at exit: 40 bytes in 1 blocks
> ==3510==   total heap usage: 1 allocs, 0 frees, 40 bytes allocated
> ==3510==
> ==3510== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
> ==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
> ==3510==
> ==3510== LEAK SUMMARY:
> ==3510==    definitely lost: 40 bytes in 1 blocks
> ==3510==    indirectly lost: 0 bytes in 0 blocks
> ==3510==      possibly lost: 0 bytes in 0 blocks
> ==3510==    still reachable: 0 bytes in 0 blocks
> ==3510==         suppressed: 0 bytes in 0 blocks
>
> so without stack trace for mem leak
>
> Regards
> David |
|
From: David A. <Dav...@ne...> - 2015-05-06 08:24:11
|
Hi everybody,

I'm new to Valgrind and have some questions about architecture flags.
Running the Valgrind memchecker provided by Yocto on an ARMv7 CortexA9
(iMX6) I see the stack trace, except for memory leaks. So I decided to
build Valgrind on my own, and everything was right.

Actually, I don't have enough knowledge to understand all the tricks
about compiler flags for ARM, but I suppose something is conflicting
with this additional Yocto patch
http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/recipes-devtools/valgrind/valgrind/remove-arm-variant-specific.patch

Can anyone check if this patch is ok for Valgrind or give me some hints ?

Here is my test, just for the log

Source code

#include <stdlib.h>

void f(void)
{
    int* x = malloc(10 * sizeof(int));
    x[10] = 0; // problem 1: heap block overrun
}               // problem 2: memory leak -- x not freed

int main(void)
{
    f();
    return 0;
}

Running Yocto provided Valgrind

valgrind --leak-check=full --keep-stacktraces=alloc-and-free --num-callers=20 --track-origins=yes --leak-resolution=high ./memcheck-example

leads to

==3510== Memcheck, a memory error detector
==3510== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==3510== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==3510== Command: ./memcheck-example
==3510==
==3510== Invalid write of size 4
==3510==    at 0x8454: f (memcheck-example.c:6)
==3510==    by 0x846B: main (memcheck-example.c:11)
==3510==  Address 0x497b050 is 0 bytes after a block of size 40 alloc'd
==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
==3510==
==3510==
==3510== HEAP SUMMARY:
==3510==     in use at exit: 40 bytes in 1 blocks
==3510==   total heap usage: 1 allocs, 0 frees, 40 bytes allocated
==3510==
==3510== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
==3510==    at 0x48377C4: malloc (in /usr/lib/valgrind/vgpreload_memcheck-arm-linux.so)
==3510==
==3510== LEAK SUMMARY:
==3510==    definitely lost: 40 bytes in 1 blocks
==3510==    indirectly lost: 0 bytes in 0 blocks
==3510==      possibly lost: 0 bytes in 0 blocks
==3510==    still reachable: 0 bytes in 0 blocks
==3510==         suppressed: 0 bytes in 0 blocks

so without stack trace for mem leak

Regards
David |
|
From: Julian S. <js...@ac...> - 2015-05-02 07:06:25
|
On 02/05/15 00:07, Philippe Waroquiers wrote:
> This is all normal. Unless you link statically, the program will be
> linked with the dynamic loader and other .so, which will be called
> at startup

Indeed, a mere 100000 instructions to get to main() is cheap. More
typically I have seen it costing around a million instructions to get
to main().

J |
|
From: John R. <jr...@bi...> - 2015-05-01 22:17:56
|
> Since none of these functions were called by the program above,
Those functions are the dynamic linker adding libc.so to your process address space.
'main' is moderately far removed from the beginning of execution.
$ readelf --headers test | grep Entry
Entry point address: 0x400400
$ gdb test
(gdb) x/12i 0x400400
0x400400 <_start>: xor %ebp,%ebp
0x400402 <_start+2>: mov %rdx,%r9
0x400405 <_start+5>: pop %rsi
0x400406 <_start+6>: mov %rsp,%rdx
0x400409 <_start+9>: and $0xfffffffffffffff0,%rsp
0x40040d <_start+13>: push %rax
0x40040e <_start+14>: push %rsp
0x40040f <_start+15>: mov $0x400570,%r8
0x400416 <_start+22>: mov $0x400500,%rcx
0x40041d <_start+29>: mov $0x4004f0,%rdi
0x400424 <_start+36>: callq 0x4003e0 <__libc_start_main@plt>
0x400429 <_start+41>: hlt
(gdb) quit
$ LD_DEBUG=ALL ./test
5360:
5360: file=libc.so.6 [0]; needed by ./test [0]
5360: find library=libc.so.6 [0]; searching
5360: search cache=/etc/ld.so.cache
5360: trying file=/lib64/libc.so.6
5360:
5360: file=libc.so.6 [0]; generating link map
5360: dynamic: 0x0000003ef37b8b80 base: 0x0000000000000000 size: 0x00000000003bf260
5360: entry: 0x0000003ef3421c50 phdr: 0x0000003ef3400040 phnum: 10
5360:
5360: checking for version `GLIBC_2.2.5' in file /lib64/libc.so.6 [0] required by file ./test [0]
5360: checking for version `GLIBC_2.3' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /lib64/libc.so.6 [0]
5360: checking for version `GLIBC_PRIVATE' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /lib64/libc.so.6 [0]
5360:
5360: Initial object scopes
5360: object=./test [0]
5360: scope 0: ./a.out /lib64/libc.so.6 /lib64/ld-linux-x86-64.so.2
[[snip]]
|
|
From: Philippe W. <phi...@sk...> - 2015-05-01 22:07:54
|
On Fri, 2015-05-01 at 23:45 +0200, Jon wrote:
> Since none of these functions were called by the program above, I wonder
> if this is how the output is supposed to look. Where are these function
> calls coming from then? Or could there be something wrong with the way I
> installed Valgrind?

This is all normal. Unless you link statically, the program will be
linked with the dynamic loader and other .so, which will be called
at startup.
Use ldd ./test to see these.
Note btw that having a dynamically linked executable is critical to have
Valgrind tools such as memcheck or helgrind working properly.

Philippe |
|
From: Jon <jo...@po...> - 2015-05-01 21:45:31
|
Hello,
I am entirely new to Valgrind and have just installed it on a Linux Mint
system by running "apt-get install valgrind". To get started, I ran
cachegrind on a very simple C program, which did nothing more than
add two integer variables. This is the result I got:
==12404== I refs: 101,339
==12404== I1 misses: 711
==12404== LLi misses: 704
==12404== I1 miss rate: 0.70%
==12404== LLi miss rate: 0.69%
==12404==
==12404== D refs: 38,424 (25,653 rd + 12,771 wr)
==12404== D1 misses: 1,696 ( 1,210 rd + 486 wr)
==12404== LLd misses: 1,502 ( 1,043 rd + 459 wr)
==12404== D1 miss rate: 4.4% ( 4.7% + 3.8% )
==12404== LLd miss rate: 3.9% ( 4.0% + 3.5% )
I was surprised by the relatively large amount of instructions executed,
so I did the following:
mint ~ $ cat test.c
int main() {
return 0;
}
mint ~ $ gcc test.c -o test
mint ~ $ valgrind --tool=callgrind ./test
==12430== Callgrind, a call-graph generating cache profiler
==12430== Copyright (C) 2002-2013, and GNU GPL'd, by Josef Weidendorfer
et al.
==12430== Using Valgrind-3.10.0.SVN and LibVEX; rerun with -h for
copyright info
==12430== Command: ./test
==12430==
==12430== For interactive control, run 'callgrind_control -h'.
==12430==
==12430== Events : Ir
==12430== Collected : 101336
==12430==
==12430== I refs: 101,336
mint ~ $ callgrind_annotate callgrind.out.12430
--------------------------------------------------------------------------------
Profile data file 'callgrind.out.12430' (creator: callgrind-3.10.0.SVN)
--------------------------------------------------------------------------------
I1 cache:
D1 cache:
LL cache:
Timerange: Basic block 0 - 22401
Trigger: Program termination
Profiled target: ./test (PID 12430, part 1)
Events recorded: Ir
Events shown: Ir
Event sort order: Ir
Thresholds: 99
Include dirs:
User annotated:
Auto-annotation: off
--------------------------------------------------------------------------------
Ir
--------------------------------------------------------------------------------
101,336 PROGRAM TOTALS
--------------------------------------------------------------------------------
Ir file:function
--------------------------------------------------------------------------------
21,478 /build/buildd/eglibc-2.19/elf/dl-lookup.c:do_lookup_x
[/lib/x86_64-linux-gnu/ld-2.19.so]
17,844 /build/buildd/eglibc-2.19/elf/dl-lookup.c:_dl_lookup_symbol_x
[/lib/x86_64-linux-gnu/ld-2.19.so]
16,283
/build/buildd/eglibc-2.19/elf/../sysdeps/x86_64/dl-machine.h:_dl_relocate_object
8,247 /build/buildd/eglibc-2.19/elf/do-rel.h:_dl_relocate_object
8,231
/build/buildd/eglibc-2.19/string/../sysdeps/x86_64/multiarch/../strcmp.S:strcmp'2
[/lib/x86_64-linux-gnu/ld-2.19.so]
4,224 /build/buildd/eglibc-2.19/elf/dl-lookup.c:check_match.9458
[/lib/x86_64-linux-gnu/ld-2.19.so]
2,226
/build/buildd/eglibc-2.19/string/../sysdeps/x86_64/rtld-memset.S:memset
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,179
/build/buildd/eglibc-2.19/elf/dl-version.c:_dl_check_map_versions
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,169
/build/buildd/eglibc-2.19/string/../sysdeps/x86_64/multiarch/../strcmp.S:strcmp
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,147 /build/buildd/eglibc-2.19/elf/dl-load.c:_dl_map_object_from_fd
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,058 /build/buildd/eglibc-2.19/elf/dl-deps.c:_dl_map_object_deps
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,018 /build/buildd/eglibc-2.19/string/../string/memcmp.c:bcmp
[/lib/x86_64-linux-gnu/ld-2.19.so]
1,014 /build/buildd/eglibc-2.19/elf/dl-minimal.c:strsep
[/lib/x86_64-linux-gnu/ld-2.19.so]
[...]
Since none of these functions were called by the program above, I wonder
if this is how the output is supposed to look. Where are these function
calls coming from then? Or could there be something wrong with the way I
installed Valgrind?
Best regards,
Jon
|