From: Mitchell K. <Mit...@mo...> - 2003-06-16 20:37:32
The setting is a Redhat AS 2.1 server with 4 CPUs, 8 GB of RAM and 8 GB of swap. The application is compiled with gcc 2.96-112.

When I ran the following command, after changing the stack size to 1 << 28 because of the same problem:

#define VG_PTHREAD_STACK_SIZE (1 << 28)

valgrind --alignment=8 --skin=addrcheck --error-limit=no --leak-check=yes --num-callers=1 binary

==23496== Warning: set address range perms: large range 268435424, a 1
==23496== valgrind's libpthread.so: KLUDGED call to: pthread_cond_destroy
==23496== valgrind's libpthread.so: IGNORED call to: pthread_attr_setinheritsched
==23496== Warning: set address range perms: large range 268435424, a 1
==23496== Warning: set address range perms: large range 268435424, a 1
==23496== Warning: set address range perms: large range 268435424, a 1
==23496== Warning: set address range perms: large range 268435424, a 1
==23496== Warning: set address range perms: large range 268435424, a 1

VG_(get_memory_from_mmap): request for 8192 bytes failed.
VG_(get_memory_from_mmap): 2119042790 bytes already allocated.

This may mean that you have run out of swap space,
since running programs on valgrind increases their memory
usage at least 3 times. You might want to use 'top'
to determine whether you really have run out of swap.
If so, you may be able to work around it by adding a
temporary swap file -- this is easier than finding a
new swap partition. Go ask your sysadmin(s) [politely!]

VG_(get_memory_from_mmap): out of memory! Fatal! Bye!

I believe I have enough swap space. Is there anything I can try?

Thanks for your help.

Mitch

--
NOTICE: If received in error, please destroy and notify sender. Sender does not waive confidentiality or privilege, and use is prohibited.
From: Julian S. <js...@ac...> - 2003-06-16 21:07:19
On Monday 16 June 2003 21:37, Mitchell Kang wrote:
> The setting is Redhat AS 2.1 server with 4 CPUs, 8 GB of RAM and 8 GB of
> swap.
>
> The application is compiled with gcc 2.96-112.
>
> When I ran the following command after changing the stack_size to 28
> because of the same problem
>
> #define VG_PTHREAD_STACK_SIZE (1 << 28)

Are you sure this is a good idea? It means each thread stack occupies 256MB, and no sane piece of code is going to require that much. It would be far too fragile and unportable. This is probably why you've run out of 2GB of address space, by the look of it.

J

[snip]
From: Mitchell K. <Mit...@mo...> - 2003-06-16 21:12:27
I don't want to. But when I used the default, 1 << 20 for 1 MB, I got the same error, except the warning is for a large range of 134217696, followed by "request for 134217712 bytes failed" and "1992652422 bytes already allocated". Any suggestions?

Thanks.

Mitch

Julian Seward wrote:
> Are you sure this is a good idea? It means each thread stack occupies
> 256MB, and no sane piece of code is going to require that much. It would
> be far too fragile and unportable. This is probably why you've run
> out of 2GB of address space, by the look of it.
[snip]
From: John R. <jr...@Bi...> - 2003-06-16 21:36:52
Mitchell Kang wrote:
> The setting is Redhat AS 2.1 server with 4 CPUs, 8 GB of RAM and 8 GB of swap.
[snip]
> When I ran the following command after changing the stack_size to 28 because of the same problem
>
> #define VG_PTHREAD_STACK_SIZE (1 << 28)
>
> valgrind --alignment=8 --skin=addrcheck --error-limit=no --leak-check=yes --num-callers=1 binary
[snip]
> ==23496== Warning: set address range perms: large range 268435424, a 1
>
> VG_(get_memory_from_mmap): request for 8192 bytes failed.
> VG_(get_memory_from_mmap): 2119042790 bytes already allocated.
[snip]
> I believe I have enough swap space. Is there anything I can try?

It's possible that your address space is fragmented by having TASK_UNMAPPED_BASE at 0x40000000, and a pre-linked glibc at 0x42000000. TASK_UNMAPPED_BASE is the default address where dynamic loading takes place, as well as the starting address for the search to satisfy any mmap(0, ...) system call that does not specify MAP_FIXED. While the process is running, do "cat /proc/<pid>/maps" to see the layout of its address space.

Search Google Groups for Message-ID: <bc5igc$fu9$1...@rz...> (Re: 64-bit memory management) by Ulrich Weigand, 6/10/2003 02:27PM, for a claim that RHAS has a patch that lets you set the map base by writing an ASCII hex numeral to /proc/<pid>/map_base. The referenced patch is
http://kernelnewbies.org/kernels/rh21as/SOURCES/linux-2.4.9-task-map-base.patch
With some investigating on your part ("ldd ./my_app"), you can pick a good place for map_base.

Your next problem may be a pre-linked libc.so.6 at 0x42000000. The runtime loader asks the kernel for mmap(0x42000000, ...), and most likely the kernel grants that address, which means fragmentation. Try to find a non-prelinked glibc of the appropriate version. If you are willing to try some tricks, then see http://www.bitwagon.com/tub.html for experimental code which lets you move even a pre-linked glibc.

This doesn't give you any more total space; however, it may reduce the fragmentation by enough to squeak by. Else you will have to use Insure++ or Purify on a 64-bit RISC machine, or help get valgrind running on AMD x86_64.

Regards,
From: John R. <jr...@Bi...> - 2003-06-17 01:57:05
Mitchell Kang wrote:
[snip]
> VG_(get_memory_from_mmap): request for 8192 bytes failed.
> VG_(get_memory_from_mmap): 2119042790 bytes already allocated.
>
> This may mean that you have run out of swap space,
> since running programs on valgrind increases their memory
> usage at least 3 times.
[snip]

Insure++ ( http://www.parasoft.com/jsp/products/home.jsp?product=Insure ) for Linux/x86 has a "no preparation required" checking mode which for large processes has a memory overhead factor close to 1/4 [total usage while monitoring is about (1 + 1/4) times the non-monitored usage]. This may be small enough to succeed in checking your process even on x86. A license for Insure++ does cost money, but the indicated domain of the original posting assures me that such a cost cannot be a significant burden.

Regards,
From: Olly B. <ol...@su...> - 2003-06-17 10:23:30
On Mon, Jun 16, 2003 at 06:57:07PM -0700, John Reiser wrote:
> Insure++ for Linux/x86 has a "no preparation required" checking mode
> which for large processes has a memory overhead factor close to 1/4
> [total usage while monitoring is a factor of about (1 + 1/4) times the
> non-monitored usage].

Is this mode similar to valgrind's --skin=addrcheck? According to the documentation, that has an overhead of about 1/8 (compared to 9/8 for memcheck).

Or does 1/4 mean that it's also tracking validity at the byte level (rather than at the bit level as memcheck does)?

Cheers,
Olly
From: John R. <jr...@Bi...> - 2003-06-17 13:27:43
Olly Betts wrote:
> Is this mode similar to valgrind's --skin=addrcheck? According to the
> documentation, that has an overhead of about 1/8 (compared to 9/8 for
> memcheck).
>
> Or does 1/4 mean that it's also tracking validity at the byte level
> (rather than at the bit level as memcheck does)?

It tracks addressability, and contents validity, at the byte level. So some uses of bitfields trigger "false positive" complaints the first time. [This can help identify performance bottlenecks: compilers may generate slow code for consecutive accesses to adjacent bitfields.]

Also, the factor of 1/4 does not count the usual "red zone" padding around allocated blocks. The red zone can be adjusted, trading off the depth of traceback that is recorded at malloc and free.

If the original complaint
-----
VG_(get_memory_from_mmap): request for 8192 bytes failed.
VG_(get_memory_from_mmap): 2119042790 bytes already allocated.
-----
arose when many large blocks (say, several megabytes each) had been free()d recently, then the delayed-free list may be hoarding the space. Reduce the length of the delayed-free list, and try again. This gives you less protection against "referenced after being free()d", of course.