From: Yuri P. <yu...@sw...> - 2000-03-22 13:52:58
It looks like I have found the cause and a solution, although the way a read access from gdb interferes with the child's read access still seems strange to me. I wrote a small mmap test program which just does create_file_vm(), mmap(), and memcpy(). The segfault occurs when two conditions are in play:

 - the mmap uses PROT_WRITE protection only (no PROT_READ);
 - the particular memcpy() implementation does a READ access to the destination area.

Unfortunately this is the implementation from my libc.a, so it comes into play when linking statically.

Summary:

PROT_WRITE works fine with:
 - gcc 2.95.2 __builtin_memcpy();
 - glibc 2.1.3 memcpy(), which comes from sysdeps/generic/memcpy.c;
 - the glibc-2.1.2-11 supplied with Red Hat 6.1.

PROT_WRITE is known not to work with:
 - some ugly memcpy from Mandrake 7.0, which comes from glibc-2.1.2-9mdk. This implementation reads the destination to touch its cache lines before writing, as a possible speedup.

Adding PROT_READ permission surely fixes the problem in this case. What still seems strange to me is why a read access from gdb enables read access to the page instead of raising a protection fault. So I suggest you add the PROT_READ flag to cope with such non-trivial memcpy() implementations, and forget about the problem.

FYI, attached is the disassembly dump of the affected memcpy().

memcpy.o:     file format elf32-i386

Disassembly of section .text:

00000000 <memcpy>:
   0:	57                	push   %edi
   1:	56                	push   %esi
   2:	8b 7c 24 0c       	mov    0xc(%esp,1),%edi
   6:	8b 74 24 10       	mov    0x10(%esp,1),%esi
   a:	8b 4c 24 14       	mov    0x14(%esp,1),%ecx
   e:	89 f8             	mov    %edi,%eax
  10:	fc                	cld
  11:	83 f9 20          	cmp    $0x20,%ecx
  14:	76 56             	jbe    6c <memcpy+0x6c>
  16:	f7 d8             	neg    %eax
  18:	83 e0 03          	and    $0x3,%eax
  1b:	29 c1             	sub    %eax,%ecx
  1d:	91                	xchg   %eax,%ecx
  1e:	f3 a4             	repz movsb %ds:(%esi),%es:(%edi)
  20:	89 c1             	mov    %eax,%ecx
  22:	83 e9 20          	sub    $0x20,%ecx
  25:	78 3e             	js     65 <memcpy+0x65>
  27:	8b 07             	mov    (%edi),%eax
        ^^^^ here it segfaults on a READ access to the destination
  29:	8b 57 1c          	mov    0x1c(%edi),%edx
        ^^^^ here is another read access to the destination (the second cache line)
  2c:	83 e9 20          	sub    $0x20,%ecx
  2f:	8b 06             	mov    (%esi),%eax
  31:	8b 56 04          	mov    0x4(%esi),%edx
  34:	89 07             	mov    %eax,(%edi)
  36:	89 57 04          	mov    %edx,0x4(%edi)
  39:	8b 46 08          	mov    0x8(%esi),%eax
  3c:	8b 56 0c          	mov    0xc(%esi),%edx
  3f:	89 47 08          	mov    %eax,0x8(%edi)
  42:	89 57 0c          	mov    %edx,0xc(%edi)
  45:	8b 46 10          	mov    0x10(%esi),%eax
  48:	8b 56 14          	mov    0x14(%esi),%edx
  4b:	89 47 10          	mov    %eax,0x10(%edi)
  4e:	89 57 14          	mov    %edx,0x14(%edi)
  51:	8b 46 18          	mov    0x18(%esi),%eax
  54:	8b 56 1c          	mov    0x1c(%esi),%edx
  57:	89 47 18          	mov    %eax,0x18(%edi)
  5a:	89 57 1c          	mov    %edx,0x1c(%edi)
  5d:	8d 76 20          	lea    0x20(%esi),%esi
  60:	8d 7f 20          	lea    0x20(%edi),%edi
  63:	79 c4             	jns    29 <memcpy+0x29>
  65:	83 c1 20          	add    $0x20,%ecx
  68:	8b 44 24 0c       	mov    0xc(%esp,1),%eax
  6c:	f3 a4             	repz movsb %ds:(%esi),%es:(%edi)
  6e:	5e                	pop    %esi
  6f:	5f                	pop    %edi
  70:	c3                	ret