|
From: Adishesh <adi...@re...> - 2012-08-13 17:09:44
|
Hi,

I am using RHEL6.2 OS. 'uname -a' output is
Linux mercury05 2.6.32-220.17.1.el6.x86_64 #1 SMP Thu Apr 26 13:37:13 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Using valgrind version 3.8.0, shmat is failing with errno 22.

Thanks and regards,
Adishesh

On Mon, 13 Aug 2012 21:46:34 +0530 wrote
>> But when the same program is executed under valgrind, the shmat parameters
>> are modified and shmat fails with errno 22.
>
>> without valgrind (taken using strace)
>> ----------------------------------------------
>> shmat(281542719, 0, 0) = ?
>>
>> with valgrind (taken using strace)
>> ----------------------------------------------
>> shmat(281542719, 0xf1c0000, 0) = ?
>
> Please tell us which hardware and operating system, and which version of
> valgrind. It matters. In particular, "errno 22" might not be EINVAL on
> all systems.
>
> The second parameter to shmat() is the preferred address, where 0 means
> "I don't care which address, just give me a good one." In order to manage
> and track the address space, valgrind picks what it considers to be a good
> address, and asks for shmat() at that address.
>
> In order to get beyond "errno 22", we must determine _why_ your system
> believes that the address is not valid. The particular case of 0x0f1c0000
> looks pretty good to me: on a 256 KB boundary, not too large, etc. Why
> does your system complain?
|
From: John R. <jr...@bi...> - 2012-08-13 18:39:15
|
> I am using RHEL6.2 OS.
>
> 'uname -a' output is
> Linux mercury05 2.6.32-220.17.1.el6.x86_64 #1 SMP Thu Apr 26 13:37:13 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> using valgrind version 3.8.0 also shmat is failing with errno 22.

Thank you for that information. Is the process running in "native" 64-bit mode,
or is this a 32-bit [only] process whose environment is being "emulated"
by the x86_64 system?

Now, please try to construct a small test case which fails in the same way:
the test app calls shmat() and works without valgrind, but gets EINVAL
when run under valgrind-3.8.0. Post the actual code to this mailing list;
the code should be no more than a few dozen lines.

Here are some situations which might affect the results:

How many shared memory segments are attached by the app, and how many
other shared memory segments are active at the same time in the whole system?

What is the mix of access permissions (ReadOnly, ReadWrite, ...)?

Were the shared memory segments created by the same process, or did the
segments exist already before the process began?

Is each shared memory segment being attached only once by any particular
process, or are there multiple mappings?

Is there any case which succeeds when (0 != shmaddr)? Attach using
shmat(shmid, 0, 0); remember the address; detach the segment; try to
re-attach using shmat(shmid, old_addr, 0), where old_addr is the address
which was returned for the first (successful) attach.

--
|
From: Adishesh <adi...@re...> - 2012-08-14 10:20:29
|
Hi,
I used the program below for testing. These are the steps I followed.

Compile: gcc -g3 test_shm.c -o test_shm
Create shared memory: ./test_shm -c
Get shared memory without valgrind: ./test_shm -g (this works fine)
Get with valgrind: /usr/bin/valgrind --tool=memcheck --leak-check=full --track-origins=yes --log-file=/tmp/val_log /home/rtp99/test_shm -g

With valgrind, the shmat call fails.

There are 320 shared memory segments active on my system in total.
My process attaches to 40 shared memory segments; shmat also fails when tested with only 14 attaches.
Total RAM on my system is 124GB. I am using HP C7000 blade hardware with RHEL6.2 OS.

The binary is 64-bit. 'file test_shm' output is
test_shm: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
struct shm_details
{
key_t key;
size_t size;
int shmid;
void *shm;
};
int main(int argc, char **argv) {
int i;
struct shm_details test_shm[] =
{
{0xe300626d,2147456,-1,NULL },
{0xe300626c ,144271608 ,-1,NULL},
{0xe300626e,115604 ,-1,NULL},
{0xe300626f ,118376704 ,-1,NULL},
{0xe300653d ,574592 ,-1,NULL},
{0xe300653c ,22578504 ,-1,NULL},
{0xe300653e ,18092 ,-1,NULL},
{0xe300653f ,208416960 ,-1,NULL},
{0xe30066b9 ,2147456 ,-1,NULL},
{0xe30066b8 ,120870984 ,-1,NULL},
{0xe30066ba ,484260 ,-1,NULL},
{0xe30066bb ,9793648960 ,-1,NULL},
{0xe30068ad ,8438912,-1,NULL },
{0xe30068ac,553973472,-1,NULL}
};
int list = (sizeof(test_shm)/sizeof(test_shm[0]));
int shmflg;
int delete=0;
struct shmid_ds buf;
if(argc != 2)
{
printf("Usage: ./test_shm -c|-g|-d\n -c for create shared memory\n -g for get shared memory\n -d delete the shared memory\n");
return(0);
}
if(strcmp(argv[1],"-c") == 0)
{
shmflg = IPC_CREAT | 0666;
}else if(strcmp(argv[1],"-g") == 0)
{
shmflg = 0666;
}else if(strcmp(argv[1],"-d") == 0)
{
shmflg = 0666;
delete = 1;
}else{
printf("Usage: ./test_shm -c|-g|-d\n -c for create\n -g for get shared memory\n -d delete the shared memory\n");
return(0);
}
for (i=0;i<list;i++)
{
if ((test_shm[i].shmid = shmget(test_shm[i].key, test_shm[i].size, shmflg)) < 0)
{
perror("shmget");
printf("ERROR: shmget failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
/*delete*/
if(delete == 1)
{
for (i=0;i<list;i++)
{
if(shmctl(test_shm[i].shmid, IPC_RMID, &buf) != 0){
perror("shmctl");
printf("ERROR: shmctl failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
printf("INFO: All shared memories are deleted\n");
return(0);
}
/* Now we attach the segment to our data space. */
for (i=0;i<list;i++)
{
if((test_shm[i].shm = shmat(test_shm[i].shmid, 0, 0)) == (char *) -1){
perror("shmat");
printf("ERROR: shmat failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
for (i=0;i<list;i++)
{
if(shmdt(test_shm[i].shm) != 0) {
perror("shmdt");
printf("ERROR: shmdt failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
printf("INFO: exit success\n");
return(0);
}
|
|
From: John R. <jr...@bi...> - 2012-08-14 16:45:41
Attachments:
shmtest.c
shmtest.out
|
The code was mangled by posting in HTML. Instead, use plain text (or an
attachment, if the mailing list allows them). I [attempt to] attach the
code I used:

-rw-rw-r--. 1 jreiser jreiser 3125 Aug 14 09:30 shmtest.c

On my system, /usr/include/linux/shm.h says:
-----
#define SHMMAX 0x2000000 /* max shared seg size (bytes) */
#define SHMMIN 1 /* min shared seg size (bytes) */
#define SHMMNI 4096 /* max num of segs system wide */
#define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16))
#define SHMSEG SHMMNI /* max shared segs per process */
-----

In particular, SHMMAX is only 32MiB, so several of the attempted shared
segments are too big. My current actual parameters in

$ cat /proc/sys/kernel/shmall
$ cat /proc/sys/kernel/shmmax
$ cat /proc/sys/kernel/shmmni

agree with the defaults that are in shm.h.

My test program works for me under valgrind-3.8.0 on Fedora 17 x86_64 in 64-bit mode.
Linux host.domain 3.5.0-2.fc17.x86_64 #1 SMP Mon Jul 30 14:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
gcc (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5)

I [attempt to] attach the output that I get:

-rw-rw-r--. 1 jreiser jreiser 9321 Aug 14 09:31 shmtest.out

To see the system calls that valgrind sees:
valgrind --trace-syscalls=yes ...

Of course you can run that under strace, too, in order to verify that
valgrind actually does what it says it is doing:
strace valgrind --trace-syscalls=yes ...
but it is tedious to read the output.

--
|
From: Philippe W. <phi...@sk...> - 2012-08-14 23:09:12
Attachments:
patch_shmat.txt
|
On Tue, 2012-08-14 at 10:20 +0000, Adishesh wrote:
> Hi,
>
> I have used below program for testing. Below are the steps I have followed.
> Compile: gcc -g3 test_shm.c -o test_shm
> Create shared memory: ./test_shm -c
> Get shared memory without valgrind: ./test_shm -g (this works fine)
> Get with valgrind: /usr/bin/valgrind --tool=memcheck --leak-check=full
> --track-origins=yes --log-file=/tmp/val_log /home/rtp99/test_shm -g
> With valgrind shmat command fails.

Can you try again after applying the attached patch?

Thanks

Philippe
|
From: Adishesh M <adi...@gm...> - 2012-08-15 17:48:03
|
Hi,
I will test with the patch and post the results soon.
Since the previous mail was mangled, I am resending it.
I used the program below for testing. These are the steps I followed.
Compile: gcc -g3 test_shm.c -o test_shm
Create shared memory: ./test_shm -c
Get shared memory without valgrind: ./test_shm -g (this works fine)
Get with valgrind: /usr/bin/valgrind --tool=memcheck --leak-check=full
--track-origins=yes --log-file=/tmp/val_log /home/rtp99/test_shm -g
With valgrind, the shmat call fails.
There are 320 shared memory segments active on my system in total.
My process attaches to 40 shared memory segments; shmat also fails
when tested with only 14 attaches.
Total RAM on my system is 124GB. I am using HP C7000 blade
hardware with RHEL6.2 OS.
The binary is 64-bit. 'file test_shm' output is
test_shm: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not
stripped
Thanks and regards,
Adishesh
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
struct shm_details
{
key_t key;
size_t size;
int shmid;
void *shm;
};
int main(int argc, char **argv)
{
int i;
struct shm_details test_shm[] =
{
{0xe300626d,2147456,-1,NULL },
{0xe300626c ,144271608 ,-1,NULL},
{0xe300626e,115604 ,-1,NULL},
{0xe300626f ,118376704 ,-1,NULL},
{0xe300653d ,574592 ,-1,NULL},
{0xe300653c ,22578504 ,-1,NULL},
{0xe300653e ,18092 ,-1,NULL},
{0xe300653f ,208416960 ,-1,NULL},
{0xe30066b9 ,2147456 ,-1,NULL},
{0xe30066b8 ,120870984 ,-1,NULL},
{0xe30066ba ,484260 ,-1,NULL},
{0xe30066bb ,9793648960 ,-1,NULL},
{0xe30068ad ,8438912,-1,NULL },
{0xe30068ac,553973472,-1,NULL}
};
int list = (sizeof(test_shm)/sizeof(test_shm[0]));
int shmflg;
int delete=0;
struct shmid_ds buf;
if(argc != 2)
{
printf("Usage: ./test_shm -c|-g|-d\n -c for create shared memory\n -g for get shared memory\n -d delete the shared memory\n");
return(0);
}
if(strcmp(argv[1],"-c") == 0)
{
shmflg = IPC_CREAT | 0666;
}else if(strcmp(argv[1],"-g") == 0)
{
shmflg = 0666;
}else if(strcmp(argv[1],"-d") == 0)
{
shmflg = 0666;
delete = 1;
}else{
printf("Usage: ./test_shm -c|-g|-d\n -c for create\n -g for get shared memory\n -d delete the shared memory\n");
return(0);
}
for (i=0;i<list;i++)
{
if ((test_shm[i].shmid = shmget(test_shm[i].key,
test_shm[i].size,shmflg)) < 0)
{
perror("shmget");
printf("ERROR: shmget failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
/*delete*/
if(delete == 1)
{
for (i=0;i<list;i++)
{
if(shmctl(test_shm[i].shmid, IPC_RMID, &buf) != 0){
perror("shmctl");
printf("ERROR: shmctl failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
printf("INFO: All shared memories are deleted\n");
return(0);
}
/* Now we attach the segment to our data space. */
for (i=0;i<list;i++)
{
if((test_shm[i].shm = shmat(test_shm[i].shmid, 0, 0)) == (char *) -1){
perror("shmat");
printf("ERROR: shmat failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
for (i=0;i<list;i++)
{
if(shmdt(test_shm[i].shm) !=0 ) {
perror("shmdt");
printf("ERROR: shmdt failed for shmkey=0x%x\n", test_shm[i].key);
return(1);
}
}
printf("INFO: exit success\n");
return(0);
}
> Can you try again after applying the attached patch?
> Thanks
> Philippe
|
|
From: Adishesh M <adi...@gm...> - 2012-08-16 07:22:05
|
Hi Philippe,

After applying the patch, shmat works fine.

Will this patch be included in the next valgrind release?

Thanks and regards,
Adishesh
|
From: Philippe W. <phi...@sk...> - 2012-08-16 19:42:11
|
On Thu, 2012-08-16 at 12:51 +0530, Adishesh M wrote:
> Hi Philippe,
>
> After applying patch shmat is working fine.
>
> Does this patch will be included in the next valgrind release?

The patch has been committed (revision 12874), so it will be in the next release.

Philippe