|
From: Christian P. <tr...@ge...> - 2005-07-20 11:09:24
|
Hi all,
below is an extract of $(strace valgrind --help); it took me about two
minutes or so to see the output -- at any rate, a very long time. Also, a
$(ps ax) on another terminal in parallel seems to freeze while traversing
the process list. $(top) shows valgrind to be using 98% of my CPU.

/* below follows the above-mentioned output (stripped) */
open("./.valgrindrc", O_RDONLY) =3D -1 ENOENT (No such file or
directory)
brk(0) =3D 0x5ca000
brk(0x5eb000) =3D 0x5ca000
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) =3D 0x7ffff147d000
open("/opt/valgrind/lib/valgrind/vgtool_memcheck.so", O_RDONLY) =3D 5
read(5, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360Z\0\0"..., 640=
)
=3D 640
fstat(5, {st_mode=3DS_IFREG|0755, st_size=3D229323, ...}) =3D 0
mmap(NULL, 3796432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 5, 0)
=3D 0x7ffff157d000
mprotect(0x7ffff1597000, 3689936, PROT_NONE) =3D 0
mmap(0x7ffff1697000, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x1a000) =3D 0x7ffff1697000
mmap(0x7ffff1698000, 2637264, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) =3D 0x7ffff1698000
close(5) =3D 0
access("/opt/valgrind/lib/valgrind/vgpreload_memcheck.so", R_OK) =3D 0
mmap(0x3c3c34a00000, 1048576, PROT_NONE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) =3D 0x3c3c34a00=
000
munmap(0, 66229278605312
/* here it freeyes for about 20 seconds */
) =3D 0
mmap(0x3c3c34b00000, 74507940265984, PROT_NONE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0
/* it freezes again */
) =3D 0x3c3c34b00000
fstat(3, {st_mode=3DS_IFREG, st_size=3D0, ...}) =3D 0
open("/proc/self/maps", O_RDONLY) =3D 5
read(5, "3c3c34a00000-7ffff0000000 ---p 3"..., 10240) =3D 2079
close(5) =3D 0
munmap(0x7ffff0135000, 1048576) =3D 0
munmap(0x7ffff0741000, 9170944) =3D 0
close(3) =3D 0
mmap(0x3c3c349fe000, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) =3D 0x3c3c349fe000
getrlimit(RLIMIT_NOFILE, {rlim_cur=3D1024, rlim_max=3D1024}) =3D 0
setrlimit(RLIMIT_NOFILE, {rlim_cur=3D1024, rlim_max=3D1024}) =3D 0
fcntl(4, F_DUPFD, 1014) =3D 1014
close(4) =3D 0
fcntl(1014, F_SETFD, FD_CLOEXEC) =3D 0
open("/proc/self/maps", O_RDONLY) =3D 3
read(3, "3c3c349fe000-3c3c34a00000 rwxp 3"..., 50000) =3D 1930
read(3, "", 48070) =3D 0
close(3) =3D 0
mmap(0x7ffff0741000, 1048576, PROT_READ|PROT_WRITE|PROT_EXEC,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) =3D 0x7ffff0741000
write(1, "usage: valgrind --tool=3D<toolname"..., 90usage: valgrind
--tool=3D<toolname> [options] prog-and-args
/* ... */
exit_group(0) =3D ?
/* again, it freezes, and then returns to command prompt */
Now, I'd be happy if someone told me that this is a known issue and is
going to be fixed right away (before 3.0.0 ships).
I just fetched the sources from svn trunk.
Thanks in advance,
Christian Parpart.
|
|
From: Nicholas N. <nj...@cs...> - 2005-07-20 13:39:27
|
On Wed, 20 Jul 2005, Christian Parpart wrote:

> below is an extract of $(strace valgrind --help); it took me about two
> minutes or so to see the output -- at any rate, a very long time. Also, a
> $(ps ax) on another terminal in parallel seems to freeze while traversing
> the process list. $(top) shows valgrind to be using 98% of my CPU.
>
> /* below follows the above-mentioned output (stripped) */
>
> munmap(0, 66229278605312
> /* here it freezes for about 20 seconds */
> ) = 0
> mmap(0x3c3c34b00000, 74507940265984, PROT_NONE,
> MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0
> /* it freezes again */
> ) = 0x3c3c34b00000
> ...
> write(1, "usage: valgrind --tool=<toolname"..., 90usage: valgrind
> --tool=<toolname> [options] prog-and-args
> /* ... */
> exit_group(0) = ?
> /* again, it freezes, and then returns to command prompt */
>
> Now, I'd be happy if someone told me that this is a known issue and is
> going to be fixed right away (before 3.0.0 ships)

I haven't seen this one.

The mmap and munmap calls are very big -- 61680GB and 69390GB
respectively. The fact that it happens before the usage message shows
that this is happening very early. I guess these mmap/munmap calls come
from the start-up padding, but then the question is why this isn't
causing problems on other AMD64 systems.

Can you re-run with "valgrind -d" and post the output? Thanks.

N
|
From: Christian P. <tr...@ge...> - 2005-07-20 14:47:38
|
On Wednesday 20 July 2005 15:39, Nicholas Nethercote wrote:

> The mmap and munmap calls are very big -- 61680GB and 69390GB
> respectively. The fact that it happens before the usage message shows
> that this is happening very early. I guess these mmap/munmap calls come
> from the start-up padding, but then the question is why this isn't
> causing problems on other AMD64 systems.

I have 2GB RAM and 1GB swap; maybe this matters?
Besides, why on earth is valgrind m[un]mapping such a huge space? (I
don't even have that much RAM.)

> Can you re-run with "valgrind -d" and post the output? Thanks.

A simple "valgrind -d" (without any other args)? Hmm... hold on...

Output of $(valgrind -d):

--22000:1:debuglog DebugLog system started by Stage 1, level 1 logging requested
--22000:1:stage1 main(): running main2() on new stack
--22000:1:stage1 main2(): starting stage2
--22000:1:debuglog DebugLog system started by Stage 2 (main), level 1 logging requested
--22000:1:main Doing scan_auxv()
--22000:1:main Preprocess command line opts
--22000:1:main Loading tool
--22000:1:main Laying out remaining space
/* big big sleep */
--22000:1:main Loading client
valgrind: no program specified
valgrind: Use --help for more information.
/* big big sleep */
/* return to terminal */

Hope this helps,
Christian Parpart.
|
From: Tom H. <to...@co...> - 2005-07-20 14:54:15
|
In message <200...@ge...>
Christian Parpart <tr...@ge...> wrote:
> On Wednesday 20 July 2005 15:39, Nicholas Nethercote wrote:
>
>> The mmap and munmap calls are very big -- 61680GB and 69390GB
>> respectively. The fact that it happens before the usage message shows
>> this is happening very early. I guess these mmap/munmap calls are from
>> the start-up padding, but then the question is why isn't this causing
>> problems on other AMD64 systems?
>
> I'm having 2GB RAM, and 1GB swap. maybe this matters?
> Besides, why the hell is valgrind m[un]mapping such a huge space (I even
> haven't that much RAM et al)
It's mapping it to stop anything else being given that part of the
address space - it is how valgrind controls where the OS puts stuff.
Note that the memory will never be touched, so it is an entirely
virtual concept, and how much memory or swap you have should not matter
in the slightest. That is why we pass the NORESERVE flag: to indicate
that the kernel doesn't need to reserve any swap to back the mapping.
It has been known for kernels to get upset by big mappings, however,
which is one reason why we plan to reorganise things so that these
large mappings are not required.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Christian P. <tr...@ge...> - 2005-07-20 15:11:22
|
On Wednesday 20 July 2005 16:54, Tom Hughes wrote:

> It's mapping it to stop anything else being given that part of the
> address space - it is how valgrind controls where the OS puts stuff.
>
> Note that the memory will never be touched, so it is an entirely
> virtual concept, and how much memory or swap you have should not matter
> in the slightest. That is why we pass the NORESERVE flag: to indicate
> that the kernel doesn't need to reserve any swap to back the mapping.

Ah, I think I understand; however, I now wonder why the kernel takes
so long there. Might this be a kernel bug (regarding performance)?

Although - I'm just curious - how are you thinking of working around such
(mis)behaviors when not using m[un]map?

And why am I (obviously) the only one having this problem?

Regards,
Christian Parpart.
|
From: Tom H. <to...@co...> - 2005-07-20 15:27:08
|
In message <200...@ge...>
Christian Parpart <tr...@ge...> wrote:
> On Wednesday 20 July 2005 16:54, Tom Hughes wrote:
>
>> It's mapping it to stop anything else being given that part of the
>> address space - it is how valgrind controls where the OS puts stuff.
>>
>> Note that the memory will never be touched so it is an entirely
>> virtual concept and how much memory or swap you should not matter
>> in the slightest. That is why we pass the NORESERVE flag to indicate
>> that the kernel doesn't need to reserve any swap to back the mapping.
>
> Ah, I think I understand; however, I now wonder why the kernel takes
> so long there. Might this be a kernel bug (regarding performance)?
Probably something to do with setting up the page tables. I can't say
that I've noticed it on my amd64 box.
> Although - I'm just curious - how are you thinking of working around such
> (mis)behaviors when not using m[un]map?
There was a thread the other day where Julian and I explained our plans
to rework the address space manager so that there is no need to
maintain a rigid split of the address space.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Christian P. <tr...@ge...> - 2005-07-22 18:32:31
|
On Wednesday 20 July 2005 16:54, Tom Hughes wrote:

> It's mapping it to stop anything else being given that part of the
> address space - it is how valgrind controls where the OS puts stuff.

Having now used it a few more times, I must unfortunately say that it is
becoming more and more unusable for me. The idle times I have to wait
while valgrind starts up and exits hurt my productivity :(

Is there any way to work around this?

Regards,
Christian Parpart.

--
20:31:10 up 121 days, 9:38, 2 users, load average: 1.34, 2.22, 2.62
|
From: Nicholas N. <nj...@cs...> - 2005-07-22 18:47:48
|
On Fri, 22 Jul 2005, Christian Parpart wrote:

>> It's mapping it to stop anything else being given that part of the
>> address space - it is how valgrind controls where the OS puts stuff.
>
> Having now used it a few more times, I must unfortunately say that it
> is becoming more and more unusable for me. The idle times I have to
> wait while valgrind starts up and exits hurt my productivity :(
>
> Is there any way to work around this?

Nobody else seems to be having this problem. What kernel are you using?

N
|
From: Christian P. <tr...@ge...> - 2005-07-22 18:53:23
|
On Friday 22 July 2005 20:47, Nicholas Nethercote wrote:

> Nobody else seems to be having this problem. What kernel are you using?

Oh well..... poor me:

Linux battousai 2.6.12-gentoo-r2 #1 Thu Jun 30 15:45:25 CEST 2005 x86_64
AMD Athlon(tm) 64 Processor 3500+ AuthenticAMD GNU/Linux

I could upgrade to gentoo-r6, but I would first have to close lots of
windows in my X session (which takes ages - I stopped counting them).
But I guess I shall give it a try anyway :(

I hate rebooting,
Christian Parpart.

--
20:51:00 up 121 days, 9:58, 2 users, load average: 4.29, 3.39, 3.08