From: kalyantej k. <kal...@ya...> - 2007-03-30 12:31:21
Hi everyone,

I am trying to port uClinux to the SH7615 processor. The kernel boots properly. Before executing the shell, I tried to run a simple hello-world application, which also works properly, but when I try to fork a new process the system crashes.

I found some code in entry.S that never gets executed: in the macros SAVE_ALL and RESTORE_ALL there is a conditional check before proceeding to kernel/user mode, where the current->tss.sr value is checked. But when is this value of current->tss.sr changed? I suspect there is no separate stack for user mode and kernel mode, which is why the fork fails.

Is this a known problem in the linux-2.0.x kernel of the uClinux distribution? Please let me know if there is support for SH2.

Thanks in advance,
Regards,
Kalyan
From: Paul M. <le...@us...> - 2006-09-07 06:55:05
Update of /cvsroot/linuxsh/linux/arch/sh/mm
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv13078/arch/sh/mm

Modified Files:
	cache-sh4.c
Log Message:
Disable IRQs in flush_cache_4096() for cache purge. Under certain workloads we would get an IRQ in the middle of a purge operation, and the cachelines would remain in an inconsistent state, leading to occasional stack corruption, debugged by Takeo Takahashi <tak...@re...>.

Index: cache-sh4.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/cache-sh4.c,v
retrieving revision 1.39
retrieving revision 1.40
diff -u -d -r1.39 -r1.40
--- cache-sh4.c	21 Aug 2006 02:20:15 -0000	1.39
+++ cache-sh4.c	7 Sep 2006 06:55:02 -0000	1.40
@@ -221,22 +221,20 @@
 static inline void flush_cache_4096(unsigned long start,
 				    unsigned long phys)
 {
+	unsigned long flags, exec_offset = 0;
+
 	/*
 	 * All types of SH-4 require PC to be in P2 to operate on the I-cache.
 	 * Some types of SH-4 require PC to be in P2 to operate on the D-cache.
 	 */
 	if ((cpu_data->flags & CPU_HAS_P2_FLUSH_BUG) ||
-	    (start < CACHE_OC_ADDRESS_ARRAY)) {
-		unsigned long flags;
+	    (start < CACHE_OC_ADDRESS_ARRAY))
+		exec_offset = 0x20000000;
 
-		local_irq_save(flags);
-		__flush_cache_4096(start | SH_CACHE_ASSOC,
-				   P1SEGADDR(phys), 0x20000000);
-		local_irq_restore(flags);
-	} else {
-		__flush_cache_4096(start | SH_CACHE_ASSOC,
-				   P1SEGADDR(phys), 0);
-	}
+	local_irq_save(flags);
+	__flush_cache_4096(start | SH_CACHE_ASSOC,
+			   P1SEGADDR(phys), exec_offset);
+	local_irq_restore(flags);
 }
 
 /*
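The shape of this fix generalizes: compute the varying argument up front, then run the critical operation inside a single, unconditional IRQ-off section. A user-space sketch of just the offset selection (the function and parameter names below are made up for illustration; local_irq_save()/restore() appear only as comments since they are kernel-only):

```c
#include <assert.h>

/* Mirrors the condition in the diff above: the P2 exec offset is
 * needed either when the CPU has the P2 flush bug or when we are
 * below the operand-cache address array. */
static unsigned long pick_exec_offset(int has_p2_flush_bug,
                                      int start_below_oc_array)
{
    unsigned long exec_offset = 0;

    if (has_p2_flush_bug || start_below_oc_array)
        exec_offset = 0x20000000;

    /*
     * local_irq_save(flags);
     * __flush_cache_4096(start | SH_CACHE_ASSOC,
     *                    P1SEGADDR(phys), exec_offset);
     * local_irq_restore(flags);
     *
     * The IRQ-off section is now unconditional, so an interrupt can
     * no longer land in the middle of an unprotected purge.
     */
    return exec_offset;
}
```

The point of the refactor is that the two former branches differed only in one argument, so hoisting that argument lets the locking become uniform.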
From: Paul M. <le...@us...> - 2006-09-06 16:51:20
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv6764/include/asm-sh

Modified Files:
	cacheflush.h pgtable.h
Log Message:
Move the HAVE_ARCH_UNMAPPED_AREA define.

Index: cacheflush.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/cacheflush.h,v
retrieving revision 1.7
retrieving revision 1.8
diff -u -d -r1.7 -r1.8
--- cacheflush.h	31 Dec 2005 11:30:47 -0000	1.7
+++ cacheflush.h	6 Sep 2006 16:51:16 -0000	1.8
@@ -28,5 +28,7 @@
 		memcpy(dst, src, len);		\
 	} while (0)
 
+#define HAVE_ARCH_UNMAPPED_AREA
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASM_SH_CACHEFLUSH_H */

Index: pgtable.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/pgtable.h,v
retrieving revision 1.34
retrieving revision 1.35
diff -u -d -r1.34 -r1.35
--- pgtable.h	30 Aug 2006 09:56:50 -0000	1.34
+++ pgtable.h	6 Sep 2006 16:51:17 -0000	1.35
@@ -338,11 +338,7 @@
 extern pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
 #endif
 
-#define HAVE_ARCH_GET_UNMAPPED_AREA
-
 #include <asm-generic/pgtable.h>
 
 #endif /* !__ASSEMBLY__ */
-
 #endif /* __ASM_SH_PAGE_H */
-
From: Paul M. <le...@us...> - 2006-09-01 06:12:00
Update of /cvsroot/linuxsh/linux/arch/sh/kernel/vsyscall
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv19786/arch/sh/kernel/vsyscall

Modified Files:
	vsyscall-syscall.S
Log Message:
vsyscall compile fix from Iwamatsu-san.

Index: vsyscall-syscall.S
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/vsyscall/vsyscall-syscall.S,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- vsyscall-syscall.S	28 Aug 2006 03:58:06 -0000	1.1
+++ vsyscall-syscall.S	1 Sep 2006 06:11:54 -0000	1.2
@@ -4,7 +4,7 @@
 
 	.globl vsyscall_trapa_start, vsyscall_trapa_end
 vsyscall_trapa_start:
-	.incbin "arch/sh/kernel/vsyscall-trapa.so"
+	.incbin "arch/sh/kernel/vsyscall/vsyscall-trapa.so"
 vsyscall_trapa_end:
 
 	__FINIT
From: Paul M. <le...@us...> - 2006-08-30 09:56:53
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv21293/include/asm-sh

Modified Files:
	page.h pgtable.h
Log Message:
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings.

Index: page.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/page.h,v
retrieving revision 1.20
retrieving revision 1.21
diff -u -d -r1.20 -r1.21
--- page.h	28 Aug 2006 03:58:07 -0000	1.20
+++ page.h	30 Aug 2006 09:56:50 -0000	1.21
@@ -45,6 +45,8 @@
 extern void (*clear_page)(void *to);
 extern void (*copy_page)(void *to, void *from);
 
+extern unsigned long shm_align_mask;
+
 #ifdef CONFIG_MMU
 extern void clear_page_slow(void *to);
 extern void copy_page_slow(void *to, void *from);

Index: pgtable.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/pgtable.h,v
retrieving revision 1.33
retrieving revision 1.34
diff -u -d -r1.33 -r1.34
--- pgtable.h	22 Jan 2006 17:26:20 -0000	1.33
+++ pgtable.h	30 Aug 2006 09:56:50 -0000	1.34
@@ -338,6 +338,8 @@
 extern pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
 #endif
 
+#define HAVE_ARCH_GET_UNMAPPED_AREA
+
 #include <asm-generic/pgtable.h>
 
 #endif /* !__ASSEMBLY__ */
From: Paul M. <le...@us...> - 2006-08-30 09:56:53
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh3
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv21293/include/asm-sh/cpu-sh3

Modified Files:
	cacheflush.h
Log Message:
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings.

Index: cacheflush.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh3/cacheflush.h,v
retrieving revision 1.8
retrieving revision 1.9
diff -u -d -r1.8 -r1.9
--- cacheflush.h	3 Jan 2006 23:01:25 -0000	1.8
+++ cacheflush.h	30 Aug 2006 09:56:49 -0000	1.9
@@ -64,12 +64,4 @@
 
 #define p3_cache_init()		do { } while (0)
 
-/*
- * We provide our own get_unmapped_area to avoid cache aliasing issues
- * on SH7705 with a 32KB cache, and to page align addresses in the
- * non-aliasing case.
- */
-#define HAVE_ARCH_UNMAPPED_AREA
-
 #endif /* __ASM_CPU_SH3_CACHEFLUSH_H */
-
From: Paul M. <le...@us...> - 2006-08-30 09:56:53
Update of /cvsroot/linuxsh/linux/arch/sh/kernel
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv21293/arch/sh/kernel

Modified Files:
	sys_sh.c
Log Message:
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings.

Index: sys_sh.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/sys_sh.c,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- sys_sh.c	4 Aug 2006 07:44:08 -0000	1.12
+++ sys_sh.c	30 Aug 2006 09:56:49 -0000	1.13
@@ -21,6 +21,7 @@
 #include <linux/mman.h>
 #include <linux/file.h>
 #include <linux/utsname.h>
+#include <linux/module.h>
 #include <asm/cacheflush.h>
 #include <asm/uaccess.h>
 #include <asm/ipc.h>
@@ -44,11 +45,16 @@
 	return error;
 }
 
-#if defined(HAVE_ARCH_UNMAPPED_AREA) && defined(CONFIG_MMU)
+unsigned long shm_align_mask = PAGE_SIZE - 1;	/* Sane caches */
+
+EXPORT_SYMBOL(shm_align_mask);
+
 /*
- * To avoid cache alias, we map the shard page with same color.
+ * To avoid cache aliases, we map the shared page with same color.
 */
-#define COLOUR_ALIGN(addr)	(((addr)+SHMLBA-1)&~(SHMLBA-1))
+#define COLOUR_ALIGN(addr, pgoff)				\
+	((((addr) + shm_align_mask) & ~shm_align_mask) +	\
+	 (((pgoff) << PAGE_SHIFT) & shm_align_mask))
 
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	unsigned long len, unsigned long pgoff, unsigned long flags)
@@ -56,43 +62,52 @@
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	unsigned long start_addr;
+	int do_colour_align;
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
 		 * cache aliasing constraints.
 		 */
-		if ((flags & MAP_SHARED) && (addr & (SHMLBA - 1)))
+		if ((flags & MAP_SHARED) && (addr & shm_align_mask))
 			return -EINVAL;
 		return addr;
 	}
 
-	if (len > TASK_SIZE)
+	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
 
+	do_colour_align = 0;
+	if (filp || (flags & MAP_SHARED))
+		do_colour_align = 1;
+
 	if (addr) {
-		if (flags & MAP_PRIVATE)
-			addr = PAGE_ALIGN(addr);
+		if (do_colour_align)
+			addr = COLOUR_ALIGN(addr, pgoff);
 		else
-			addr = COLOUR_ALIGN(addr);
+			addr = PAGE_ALIGN(addr);
+
 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
 		    (!vma || addr + len <= vma->vm_start))
 			return addr;
 	}
-	if (len <= mm->cached_hole_size) {
+
+	if (len > mm->cached_hole_size) {
+		start_addr = addr = mm->free_area_cache;
+	} else {
 		mm->cached_hole_size = 0;
-		mm->free_area_cache = TASK_UNMAPPED_BASE;
+		start_addr = addr = TASK_UNMAPPED_BASE;
 	}
-	if (flags & MAP_PRIVATE)
-		addr = PAGE_ALIGN(mm->free_area_cache);
-	else
-		addr = COLOUR_ALIGN(mm->free_area_cache);
-	start_addr = addr;
 
 full_search:
+	if (do_colour_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(mm->free_area_cache);
+
 	for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
 		/* At this point:  (!vma || addr < vma->vm_end). */
-		if (TASK_SIZE - len < addr) {
+		if (unlikely(TASK_SIZE - len < addr)) {
 			/*
 			 * Start a new search - just in case we missed
 			 * some holes.
@@ -104,7 +119,7 @@
 			}
 			return -ENOMEM;
 		}
-		if (!vma || addr + len <= vma->vm_start) {
+		if (likely(!vma || addr + len <= vma->vm_start)) {
 			/*
 			 * Remember the place where we stopped the search:
 			 */
@@ -115,11 +130,10 @@
 			mm->cached_hole_size = vma->vm_start - addr;
 
 		addr = vma->vm_end;
-		if (!(flags & MAP_PRIVATE))
-			addr = COLOUR_ALIGN(addr);
+		if (do_colour_align)
+			addr = COLOUR_ALIGN(addr, pgoff);
 	}
 }
-#endif
 
 static inline long do_mmap2(unsigned long addr, unsigned long len,
 			unsigned long prot,
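The COLOUR_ALIGN() arithmetic in this commit can be exercised in user space: round the address up to the colour boundary, then add the page offset's colour bits so the mapping lands on the same cache colour as pgoff. A minimal sketch, assuming a hypothetical 16 KB alignment mask (shm_align_mask = 0x3fff) and the usual PAGE_SHIFT of 12; in the kernel these are runtime/config values, not constants:

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's runtime values */
#define PAGE_SHIFT     12
#define SHM_ALIGN_MASK 0x3fffUL   /* e.g. 16KB D-cache way size - 1 */

/* Mirror of the kernel's COLOUR_ALIGN(addr, pgoff) macro: round addr
 * up to the alignment boundary, then add the colour bits derived from
 * the file page offset. */
static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
    return ((addr + SHM_ALIGN_MASK) & ~SHM_ALIGN_MASK) +
           ((pgoff << PAGE_SHIFT) & SHM_ALIGN_MASK);
}
```

Note that an already-aligned address with pgoff 0 is returned unchanged, which is why the kernel can apply the macro unconditionally on the colour-align path.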
From: Paul M. <le...@us...> - 2006-08-30 09:56:53
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv21293/include/asm-sh/cpu-sh4

Modified Files:
	cacheflush.h
Log Message:
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings.

Index: cacheflush.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4/cacheflush.h,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -d -r1.6 -r1.7
--- cacheflush.h	31 Dec 2005 11:30:49 -0000	1.6
+++ cacheflush.h	30 Aug 2006 09:56:50 -0000	1.7
@@ -39,9 +39,6 @@
 
 #define PG_mapped	PG_arch_1
 
-/* We provide our own get_unmapped_area to avoid cache alias issue */
-#define HAVE_ARCH_UNMAPPED_AREA
-
 #ifdef CONFIG_MMU
 extern int remap_area_pages(unsigned long addr, unsigned long phys_addr,
 			    unsigned long size, unsigned long flags);
From: Paul M. <le...@us...> - 2006-08-30 09:56:53
Update of /cvsroot/linuxsh/linux/arch/sh/kernel/cpu
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv21293/arch/sh/kernel/cpu

Modified Files:
	init.c
Log Message:
Set the SHM alignment at runtime, based off of probed cache desc. Optimize get_unmapped_area() to only colour align shared mappings.

Index: init.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/cpu/init.c,v
retrieving revision 1.11
retrieving revision 1.12
diff -u -d -r1.11 -r1.12
--- init.c	5 Jul 2006 08:46:48 -0000	1.11
+++ init.c	30 Aug 2006 09:56:49 -0000	1.12
@@ -14,6 +14,7 @@
 #include <linux/kernel.h>
 #include <asm/processor.h>
 #include <asm/uaccess.h>
+#include <asm/page.h>
 #include <asm/system.h>
 #include <asm/cacheflush.h>
 #include <asm/cache.h>
@@ -198,6 +199,10 @@
 	/* Init the cache */
 	cache_init();
 
+	shm_align_mask = max_t(unsigned long,
+			       cpu_data->dcache.way_size - 1,
+			       PAGE_SIZE - 1);
+
 	/* Disable the FPU */
 	if (fpu_disabled) {
 		printk("FPU Disabled\n");
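The mask set at boot here is simply the larger of (D-cache way size - 1) and (PAGE_SIZE - 1), so machines whose way size does not exceed a page fall back to plain page alignment. A user-space sketch of the max_t() computation (the way-size values used in the tests are made-up examples, not probed hardware values):

```c
#include <assert.h>

/* Pick the SHM alignment mask at "boot" time: whichever is larger of
 * the probed D-cache way size and the page size, minus one.  Mirrors
 * max_t(unsigned long, way_size - 1, PAGE_SIZE - 1). */
static unsigned long compute_shm_align_mask(unsigned long dcache_way_size,
                                            unsigned long page_size)
{
    unsigned long a = dcache_way_size - 1;
    unsigned long b = page_size - 1;

    return a > b ? a : b;
}
```

With a 16 KB way and 4 KB pages the mask is 0x3fff; with a "sane" cache whose way size is at most a page, it degenerates to PAGE_SIZE - 1, matching the static initializer in sys_sh.c.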
From: Paul M. <le...@us...> - 2006-08-30 09:54:51
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv20861/include/asm-sh

Modified Files:
	dma-mapping.h
Log Message:
DMA-mapping API bogosity fixup, take 2.

Index: dma-mapping.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/dma-mapping.h,v
retrieving revision 1.13
retrieving revision 1.14
diff -u -d -r1.13 -r1.14
--- dma-mapping.h	28 Aug 2006 08:22:08 -0000	1.13
+++ dma-mapping.h	30 Aug 2006 09:54:42 -0000	1.14
@@ -142,10 +142,35 @@
 	}
 }
 
-#define dma_sync_single_for_cpu		dma_sync_single
-#define dma_sync_single_for_device	dma_sync_single
-#define dma_sync_sg_for_cpu		dma_sync_sg
-#define dma_sync_sg_for_device		dma_sync_sg
+static inline void dma_sync_single_for_cpu(struct device *dev,
+					   dma_addr_t dma_handle, size_t size,
+					   enum dma_data_direction dir)
+{
+	dma_sync_single(dev, dma_handle, size, dir);
+}
+
+static inline void dma_sync_single_for_device(struct device *dev,
+					      dma_addr_t dma_handle,
+					      size_t size,
+					      enum dma_data_direction dir)
+{
+	dma_sync_single(dev, dma_handle, size, dir);
+}
+
+static inline void dma_sync_sg_for_cpu(struct device *dev,
+				       struct scatterlist *sg, int nelems,
+				       enum dma_data_direction dir)
+{
+	dma_sync_sg(dev, sg, nelems, dir);
+}
+
+static inline void dma_sync_sg_for_device(struct device *dev,
+					  struct scatterlist *sg, int nelems,
+					  enum dma_data_direction dir)
+{
+	dma_sync_sg(dev, sg, nelems, dir);
+}
+
 static inline int dma_get_cache_alignment(void)
 {
From: Paul M. <le...@us...> - 2006-08-28 08:22:12
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv7395/include/asm-sh

Modified Files:
	dma-mapping.h
Log Message:
Fixup some dma-mapping API bogosity.

Index: dma-mapping.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/dma-mapping.h,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -d -r1.12 -r1.13
--- dma-mapping.h	4 Jan 2006 13:24:06 -0000	1.12
+++ dma-mapping.h	28 Aug 2006 08:22:08 -0000	1.13
@@ -142,25 +142,10 @@
 	}
 }
 
-static void dma_sync_single_for_cpu(struct device *dev,
-				    dma_addr_t dma_handle, size_t size,
-				    enum dma_data_direction dir)
-	__attribute__ ((alias("dma_sync_single")));
-
-static void dma_sync_single_for_device(struct device *dev,
-				       dma_addr_t dma_handle, size_t size,
-				       enum dma_data_direction dir)
-	__attribute__ ((alias("dma_sync_single")));
-
-static void dma_sync_sg_for_cpu(struct device *dev,
-				struct scatterlist *sg, int nelems,
-				enum dma_data_direction dir)
-	__attribute__ ((alias("dma_sync_sg")));
-
-static void dma_sync_sg_for_device(struct device *dev,
-				   struct scatterlist *sg, int nelems,
-				   enum dma_data_direction dir)
-	__attribute__ ((alias("dma_sync_sg")));
+#define dma_sync_single_for_cpu		dma_sync_single
+#define dma_sync_single_for_device	dma_sync_single
+#define dma_sync_sg_for_cpu		dma_sync_sg
+#define dma_sync_sg_for_device		dma_sync_sg
 
 static inline int dma_get_cache_alignment(void)
 {
@@ -175,6 +160,4 @@
 {
 	return dma_addr == 0;
 }
-
 #endif /* __ASM_SH_DMA_MAPPING_H */
-
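The bogosity being removed here is declaring static functions in a header with __attribute__((alias(...))): every translation unit that includes the header gets its own static declaration aliasing a symbol that may not exist in that unit. This commit swaps them for plain #defines, and the "take 2" commit above replaces those with static inline wrappers, which keep distinct, type-checked entry points while still forwarding to the common implementation. The wrapper pattern itself, sketched with hypothetical names (sync_single stands in for dma_sync_single):

```c
#include <stddef.h>

/* The shared implementation, analogous to dma_sync_single(). */
static int sync_count;

static void sync_single(void *handle, size_t size)
{
    (void)handle;
    (void)size;
    sync_count++;   /* record that the common path ran */
}

/* Thin static inline wrappers, analogous to
 * dma_sync_single_for_cpu()/dma_sync_single_for_device(): separate,
 * type-checked names that forward to the one implementation. */
static inline void sync_single_for_cpu(void *handle, size_t size)
{
    sync_single(handle, size);
}

static inline void sync_single_for_device(void *handle, size_t size)
{
    sync_single(handle, size);
}
```

Unlike a #define, the inline wrapper gives each API name its own prototype, so a caller passing the wrong argument types gets a diagnostic against the right function name.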
From: Paul M. <le...@us...> - 2006-08-28 08:19:32
Update of /cvsroot/linuxsh/linux/drivers/serial
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv6303/drivers/serial

Modified Files:
	sh-sci.c
Log Message:
sh64 uses current_cpu_data, fixup the build.

Index: sh-sci.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/drivers/serial/sh-sci.c,v
retrieving revision 1.50
retrieving revision 1.51
diff -u -d -r1.50 -r1.51
--- sh-sci.c	7 Aug 2006 10:10:36 -0000	1.50
+++ sh-sci.c	28 Aug 2006 08:19:28 -0000	1.51
@@ -1124,7 +1124,7 @@
 #endif
 		sci_ports[i].port.uartclk = CONFIG_CPU_CLOCK;
 #elif defined(CONFIG_SUPERH64)
-		sci_ports[i].port.uartclk = current_cpu_info.module_clock * 16;
+		sci_ports[i].port.uartclk = current_cpu_data.module_clock * 16;
 #else
 		/*
 		 * XXX: We should use a proper SCI/SCIF clock
From: Paul M. <le...@us...> - 2006-08-28 03:58:10
Update of /cvsroot/linuxsh/linux/include/asm-sh In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv25403/include/asm-sh Modified Files: elf.h mmu.h mmu_context.h page.h processor.h Added Files: auxvec.h Log Message: Initial configurable vsyscall page support, only used for the signal trampoline return code at the moment.. --- NEW FILE: auxvec.h --- #ifndef __ASM_SH_AUXVEC_H #define __ASM_SH_AUXVEC_H /* * Architecture-neutral AT_ values in 0-17, leave some room * for more of them. */ #ifdef CONFIG_VSYSCALL /* * Only define this in the vsyscall case, the entry point to * the vsyscall page gets placed here. The kernel will attempt * to build a gate VMA we don't care about otherwise.. */ #define AT_SYSINFO_EHDR 33 #endif #endif /* __ASM_SH_AUXVEC_H */ Index: elf.h =================================================================== RCS file: /cvsroot/linuxsh/linux/include/asm-sh/elf.h,v retrieving revision 1.6 retrieving revision 1.7 diff -u -d -r1.6 -r1.7 --- elf.h 9 Aug 2006 03:53:04 -0000 1.6 +++ elf.h 28 Aug 2006 03:58:07 -0000 1.7 @@ -121,4 +121,24 @@ #define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) dump_task_fpu(tsk, elf_fpregs) #endif +#ifdef CONFIG_VSYSCALL +/* vDSO has arch_setup_additional_pages */ +#define ARCH_HAS_SETUP_ADDITIONAL_PAGES +struct linux_binprm; +extern int arch_setup_additional_pages(struct linux_binprm *bprm, + int executable_stack); + +extern unsigned int vdso_enabled; +extern void __kernel_vsyscall; + +#define VDSO_BASE ((unsigned long)current->mm->context.vdso) +#define VDSO_SYM(x) (VDSO_BASE + (unsigned long)(x)) + +#define ARCH_DLINFO \ +do { \ + if (vdso_enabled) \ + NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_BASE); \ +} while (0) +#endif /* CONFIG_VSYSCALL */ + #endif /* __ASM_SH_ELF_H */ Index: mmu.h =================================================================== RCS file: /cvsroot/linuxsh/linux/include/asm-sh/mmu.h,v retrieving revision 1.5 retrieving revision 1.6 diff -u -d -r1.5 -r1.6 --- mmu.h 19 Jul 2006 14:50:21 -0000 1.5 +++ mmu.h 28 
Aug 2006 03:58:07 -0000 1.6 @@ -11,7 +11,12 @@ #else /* Default "unsigned long" context */ -typedef unsigned long mm_context_t; +typedef unsigned long mm_context_id_t; + +typedef struct { + mm_context_id_t id; + void *vdso; +} mm_context_t; #endif /* CONFIG_MMU */ Index: mmu_context.h =================================================================== RCS file: /cvsroot/linuxsh/linux/include/asm-sh/mmu_context.h,v retrieving revision 1.12 retrieving revision 1.13 diff -u -d -r1.12 -r1.13 --- mmu_context.h 20 Oct 2005 22:48:05 -0000 1.12 +++ mmu_context.h 28 Aug 2006 03:58:07 -0000 1.13 @@ -49,7 +49,7 @@ unsigned long mc = mmu_context_cache; /* Check if we have old version of context. */ - if (((mm->context ^ mc) & MMU_CONTEXT_VERSION_MASK) == 0) + if (((mm->context.id ^ mc) & MMU_CONTEXT_VERSION_MASK) == 0) /* It's up to date, do nothing */ return; @@ -68,7 +68,7 @@ if (!mc) mmu_context_cache = mc = MMU_CONTEXT_FIRST_VERSION; } - mm->context = mc; + mm->context.id = mc; } /* @@ -78,7 +78,7 @@ static __inline__ int init_new_context(struct task_struct *tsk, struct mm_struct *mm) { - mm->context = NO_CONTEXT; + mm->context.id = NO_CONTEXT; return 0; } @@ -123,7 +123,7 @@ static __inline__ void activate_context(struct mm_struct *mm) { get_mmu_context(mm); - set_asid(mm->context & MMU_CONTEXT_ASID_MASK); + set_asid(mm->context.id & MMU_CONTEXT_ASID_MASK); } /* MMU_TTB can be used for optimizing the fault handling. 
Index: page.h =================================================================== RCS file: /cvsroot/linuxsh/linux/include/asm-sh/page.h,v retrieving revision 1.19 retrieving revision 1.20 diff -u -d -r1.19 -r1.20 --- page.h 23 Aug 2006 05:13:54 -0000 1.19 +++ page.h 28 Aug 2006 03:58:07 -0000 1.20 @@ -120,4 +120,9 @@ #include <asm-generic/memory_model.h> #include <asm-generic/page.h> +/* vDSO support */ +#ifdef CONFIG_VSYSCALL +#define __HAVE_ARCH_GATE_AREA +#endif + #endif /* __ASM_SH_PAGE_H */ Index: processor.h =================================================================== RCS file: /cvsroot/linuxsh/linux/include/asm-sh/processor.h,v retrieving revision 1.42 retrieving revision 1.43 diff -u -d -r1.42 -r1.43 --- processor.h 10 Aug 2006 09:58:50 -0000 1.42 +++ processor.h 28 Aug 2006 03:58:07 -0000 1.43 @@ -276,5 +276,11 @@ #define prefetchw(x) prefetch(x) #endif +#ifdef CONFIG_VSYSCALL +extern int vsyscall_init(void); +#else +#define vsyscall_init() do { } while (0) +#endif + #endif /* __KERNEL__ */ #endif /* __ASM_SH_PROCESSOR_H */ |
From: Paul M. <le...@us...> - 2006-08-28 03:58:09
Update of /cvsroot/linuxsh/linux/arch/sh/mm In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv25403/arch/sh/mm Modified Files: Kconfig init.c tlb-flush.c Log Message: Initial configurable vsyscall page support, only used for the signal trampoline return code at the moment.. Index: Kconfig =================================================================== RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/Kconfig,v retrieving revision 1.17 retrieving revision 1.18 diff -u -d -r1.17 -r1.18 --- Kconfig 9 Aug 2006 07:08:47 -0000 1.17 +++ Kconfig 28 Aug 2006 03:58:06 -0000 1.18 @@ -223,6 +223,19 @@ 32-bits through the SH-4A PMB. If this is not set, legacy 29-bit physical addressing will be used. +config VSYSCALL + bool "Support vsyscall page" + depends on MMU + default y + help + This will enable support for the kernel mapping a vDSO page + in process space, and subsequently handing down the entry point + to the libc through the ELF auxiliary vector. + + From the kernel side this is used for the signal trampoline. + For systems with an MMU that can afford to give up a page, + (the default value) say Y. 
+ choice prompt "HugeTLB page size" depends on HUGETLB_PAGE && CPU_SH4 && MMU Index: init.c =================================================================== RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/init.c,v retrieving revision 1.30 retrieving revision 1.31 diff -u -d -r1.30 -r1.31 --- init.c 8 Aug 2006 06:40:50 -0000 1.30 +++ init.c 28 Aug 2006 03:58:06 -0000 1.31 @@ -287,6 +287,9 @@ initsize >> 10); p3_cache_init(); + + /* Initialize the vDSO */ + vsyscall_init(); } void free_initmem(void) Index: tlb-flush.c =================================================================== RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/tlb-flush.c,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- tlb-flush.c 31 Dec 2005 11:30:47 -0000 1.1 +++ tlb-flush.c 28 Aug 2006 03:58:06 -0000 1.2 @@ -14,12 +14,12 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page) { - if (vma->vm_mm && vma->vm_mm->context != NO_CONTEXT) { + if (vma->vm_mm && vma->vm_mm->context.id != NO_CONTEXT) { unsigned long flags; unsigned long asid; unsigned long saved_asid = MMU_NO_ASID; - asid = vma->vm_mm->context & MMU_CONTEXT_ASID_MASK; + asid = vma->vm_mm->context.id & MMU_CONTEXT_ASID_MASK; page &= PAGE_MASK; local_irq_save(flags); @@ -39,20 +39,21 @@ { struct mm_struct *mm = vma->vm_mm; - if (mm->context != NO_CONTEXT) { + if (mm->context.id != NO_CONTEXT) { unsigned long flags; int size; local_irq_save(flags); size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT; if (size > (MMU_NTLB_ENTRIES/4)) { /* Too many TLB to flush */ - mm->context = NO_CONTEXT; + mm->context.id = NO_CONTEXT; if (mm == current->mm) activate_context(mm); } else { - unsigned long asid = mm->context&MMU_CONTEXT_ASID_MASK; + unsigned long asid; unsigned long saved_asid = MMU_NO_ASID; + asid = mm->context.id & MMU_CONTEXT_ASID_MASK; start &= PAGE_MASK; end += (PAGE_SIZE - 1); end &= PAGE_MASK; @@ -81,9 +82,10 @@ if (size > (MMU_NTLB_ENTRIES/4)) { /* Too many TLB to flush */ flush_tlb_all(); } 
else { - unsigned long asid = init_mm.context&MMU_CONTEXT_ASID_MASK; + unsigned long asid; unsigned long saved_asid = get_asid(); + asid = init_mm.context.id & MMU_CONTEXT_ASID_MASK; start &= PAGE_MASK; end += (PAGE_SIZE - 1); end &= PAGE_MASK; @@ -101,11 +103,11 @@ { /* Invalidate all TLB of this process. */ /* Instead of invalidating each TLB, we get new MMU context. */ - if (mm->context != NO_CONTEXT) { + if (mm->context.id != NO_CONTEXT) { unsigned long flags; local_irq_save(flags); - mm->context = NO_CONTEXT; + mm->context.id = NO_CONTEXT; if (mm == current->mm) activate_context(mm); local_irq_restore(flags); |
Update of /cvsroot/linuxsh/linux/arch/sh/kernel/vsyscall In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv25403/arch/sh/kernel/vsyscall Added Files: Makefile vsyscall-note.S vsyscall-sigreturn.S vsyscall-syscall.S vsyscall-trapa.S vsyscall.c vsyscall.lds.S Log Message: Initial configurable vsyscall page support, only used for the signal trampoline return code at the moment.. --- NEW FILE: Makefile --- obj-y += vsyscall.o vsyscall-syscall.o $(obj)/vsyscall-syscall.o: \ $(foreach F,trapa,$(obj)/vsyscall-$F.so) # Teach kbuild about targets targets += $(foreach F,trapa,vsyscall-$F.o vsyscall-$F.so) targets += vsyscall-note.o vsyscall.lds # The DSO images are built using a special linker script quiet_cmd_syscall = SYSCALL $@ cmd_syscall = $(CC) -nostdlib $(SYSCFLAGS_$(@F)) \ -Wl,-T,$(filter-out FORCE,$^) -o $@ export CPPFLAGS_vsyscall.lds += -P -C -Ush vsyscall-flags = -shared -s -Wl,-soname=linux-gate.so.1 \ $(call ld-option, -Wl$(comma)--hash-style=sysv) SYSCFLAGS_vsyscall-trapa.so = $(vsyscall-flags) $(obj)/vsyscall-trapa.so: \ $(obj)/vsyscall-%.so: $(src)/vsyscall.lds $(obj)/vsyscall-%.o FORCE $(call if_changed,syscall) # We also create a special relocatable object that should mirror the symbol # table and layout of the linked DSO. With ld -R we can then refer to # these symbols in the kernel code rather than hand-coded addresses. extra-y += vsyscall-syms.o $(obj)/built-in.o: $(obj)/vsyscall-syms.o $(obj)/built-in.o: ld_flags += -R $(obj)/vsyscall-syms.o SYSCFLAGS_vsyscall-syms.o = -r $(obj)/vsyscall-syms.o: $(src)/vsyscall.lds \ $(obj)/vsyscall-trapa.o $(obj)/vsyscall-note.o FORCE $(call if_changed,syscall) --- NEW FILE: vsyscall-note.S --- /* * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text. * Here we can supply some information useful to userland. 
*/ #include <linux/uts.h> #include <linux/version.h> #define ASM_ELF_NOTE_BEGIN(name, flags, vendor, type) \ .section name, flags; \ .balign 4; \ .long 1f - 0f; /* name length */ \ .long 3f - 2f; /* data length */ \ .long type; /* note type */ \ 0: .asciz vendor; /* vendor name */ \ 1: .balign 4; \ 2: #define ASM_ELF_NOTE_END \ 3: .balign 4; /* pad out section */ \ .previous ASM_ELF_NOTE_BEGIN(".note.kernel-version", "a", UTS_SYSNAME, 0) .long LINUX_VERSION_CODE ASM_ELF_NOTE_END --- NEW FILE: vsyscall-sigreturn.S --- #include <asm/unistd.h> .text .balign 32 .globl __kernel_sigreturn .type __kernel_sigreturn,@function __kernel_sigreturn: .LSTART_sigreturn: mov.w 1f, r3 trapa #0x10 or r0, r0 or r0, r0 or r0, r0 or r0, r0 or r0, r0 1: .short __NR_sigreturn .LEND_sigreturn: .size __kernel_sigreturn,.-.LSTART_sigreturn .balign 32 .globl __kernel_rt_sigreturn .type __kernel_rt_sigreturn,@function __kernel_rt_sigreturn: .LSTART_rt_sigreturn: mov.w 1f, r3 trapa #0x10 or r0, r0 or r0, r0 or r0, r0 or r0, r0 or r0, r0 1: .short __NR_rt_sigreturn .LEND_rt_sigreturn: .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn .section .eh_frame,"a",@progbits .previous --- NEW FILE: vsyscall-syscall.S --- #include <linux/init.h> __INITDATA .globl vsyscall_trapa_start, vsyscall_trapa_end vsyscall_trapa_start: .incbin "arch/sh/kernel/vsyscall-trapa.so" vsyscall_trapa_end: __FINIT --- NEW FILE: vsyscall-trapa.S --- .text .globl __kernel_vsyscall .type __kernel_vsyscall,@function __kernel_vsyscall: .LSTART_vsyscall: /* XXX: We'll have to do something here once we opt to use the vDSO * page for something other than the signal trampoline.. as well as * fill out .eh_frame -- PFM. 
 */
.LEND_vsyscall:
	.size __kernel_vsyscall,.-.LSTART_vsyscall
	.previous

	.section .eh_frame,"a",@progbits
.LCIE:
	.ualong	.LCIE_end - .LCIE_start
.LCIE_start:
	.ualong	0		/* CIE ID */
	.byte	0x1		/* Version number */
	.string	"zRS"		/* NUL-terminated augmentation string */
	.uleb128	0x1	/* Code alignment factor */
	.sleb128	-4	/* Data alignment factor */
	.byte	0x11		/* Return address register column */
				/* Augmentation length and data (none) */
	.byte	0xc		/* DW_CFA_def_cfa */
	.uleb128	0xf	/* r15 */
	.uleb128	0x0	/* offset 0 */
	.align 2
.LCIE_end:

	.ualong	.LFDE_end-.LFDE_start	/* Length FDE */
.LFDE_start:
	.ualong	.LCIE			/* CIE pointer */
	.ualong	.LSTART_vsyscall-.	/* start address */
	.ualong	.LEND_vsyscall-.LSTART_vsyscall
	.uleb128	0
	.align	2
.LFDE_end:
	.previous

/* Get the common code for the sigreturn entry points */
#include "vsyscall-sigreturn.S"

--- NEW FILE: vsyscall.c ---
/*
 * arch/sh/kernel/vsyscall.c
 *
 *  Copyright (C) 2006 Paul Mundt
 *
 * vDSO randomization
 * Copyright(C) 2005-2006, Red Hat, Inc., Ingo Molnar
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 */
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/module.h>
#include <linux/elf.h>

/*
 * Should the kernel map a VDSO page into processes and pass its
 * address down to glibc upon exec()?
 */
unsigned int __read_mostly vdso_enabled = 1;
EXPORT_SYMBOL_GPL(vdso_enabled);

static int __init vdso_setup(char *s)
{
	vdso_enabled = simple_strtoul(s, NULL, 0);
	return 1;
}
__setup("vdso=", vdso_setup);

/*
 * These symbols are defined by vsyscall.o to mark the bounds
 * of the ELF DSO images included therein.
 */
extern const char vsyscall_trapa_start, vsyscall_trapa_end;
static void *syscall_page;

int __init vsyscall_init(void)
{
	syscall_page = (void *)get_zeroed_page(GFP_ATOMIC);

	/*
	 * XXX: Map this page to a fixmap entry if we get around
	 * to adding the page to ELF core dumps
	 */

	memcpy(syscall_page, &vsyscall_trapa_start,
	       &vsyscall_trapa_end - &vsyscall_trapa_start);

	return 0;
}

static struct page *syscall_vma_nopage(struct vm_area_struct *vma,
				       unsigned long address, int *type)
{
	unsigned long offset = address - vma->vm_start;
	struct page *page;

	if (address < vma->vm_start || address > vma->vm_end)
		return NOPAGE_SIGBUS;

	page = virt_to_page(syscall_page + offset);
	get_page(page);

	return page;
}

/* Prevent VMA merging */
static void syscall_vma_close(struct vm_area_struct *vma)
{
}

static struct vm_operations_struct syscall_vm_ops = {
	.nopage	= syscall_vma_nopage,
	.close	= syscall_vma_close,
};

/* Setup a VMA at program startup for the vsyscall page */
int arch_setup_additional_pages(struct linux_binprm *bprm,
				int executable_stack)
{
	struct vm_area_struct *vma;
	struct mm_struct *mm = current->mm;
	unsigned long addr;
	int ret;

	down_write(&mm->mmap_sem);
	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
	if (IS_ERR_VALUE(addr)) {
		ret = addr;
		goto up_fail;
	}

	vma = kmem_cache_zalloc(vm_area_cachep, SLAB_KERNEL);
	if (!vma) {
		ret = -ENOMEM;
		goto up_fail;
	}

	vma->vm_start = addr;
	vma->vm_end = addr + PAGE_SIZE;
	/* MAYWRITE to allow gdb to COW and set breakpoints */
	vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE;
	vma->vm_flags |= mm->def_flags;
	vma->vm_page_prot = protection_map[vma->vm_flags & 7];
	vma->vm_ops = &syscall_vm_ops;
	vma->vm_mm = mm;

	ret = insert_vm_struct(mm, vma);
	if (unlikely(ret)) {
		kmem_cache_free(vm_area_cachep, vma);
		goto up_fail;
	}

	current->mm->context.vdso = (void *)addr;

	mm->total_vm++;
up_fail:
	up_write(&mm->mmap_sem);
	return ret;
}

const char *arch_vma_name(struct vm_area_struct *vma)
{
	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
		return "[vdso]";

	return NULL;
}

struct vm_area_struct *get_gate_vma(struct task_struct *task)
{
	return NULL;
}

int in_gate_area(struct task_struct *task, unsigned long address)
{
	return 0;
}

int in_gate_area_no_task(unsigned long address)
{
	return 0;
}

--- NEW FILE: vsyscall.lds.S ---
/*
 * Linker script for vsyscall DSO.  The vsyscall page is an ELF shared
 * object prelinked to its virtual address, and with only one read-only
 * segment (that fits in one page).  This script controls its layout.
 */
#include <asm/asm-offsets.h>

#ifdef CONFIG_CPU_LITTLE_ENDIAN
OUTPUT_FORMAT("elf32-sh-linux", "elf32-sh-linux", "elf32-sh-linux")
#else
OUTPUT_FORMAT("elf32-shbig-linux", "elf32-shbig-linux", "elf32-shbig-linux")
#endif
OUTPUT_ARCH(sh)

/* The ELF entry point can be used to set the AT_SYSINFO value.  */
ENTRY(__kernel_vsyscall);

SECTIONS
{
	. = SIZEOF_HEADERS;

	.hash		: { *(.hash) }			:text
	.gnu.hash	: { *(.gnu.hash) }
	.dynsym		: { *(.dynsym) }
	.dynstr		: { *(.dynstr) }
	.gnu.version	: { *(.gnu.version) }
	.gnu.version_d	: { *(.gnu.version_d) }
	.gnu.version_r	: { *(.gnu.version_r) }

	/*
	 * This linker script is used both with -r and with -shared.
	 * For the layouts to match, we need to skip more than enough
	 * space for the dynamic symbol table et al.  If this amount
	 * is insufficient, ld -shared will barf.  Just increase it here.
	 */
	. = 0x400;

	.text		: { *(.text) }			:text	=0x90909090
	.note		: { *(.note.*) }		:text	:note
	.eh_frame_hdr	: { *(.eh_frame_hdr) }		:text	:eh_frame_hdr
	.eh_frame	: { KEEP (*(.eh_frame)) }	:text
	.dynamic	: { *(.dynamic) }		:text	:dynamic
	.useless	: {
		*(.got.plt) *(.got)
		*(.data .data.* .gnu.linkonce.d.*)
		*(.dynbss)
		*(.bss .bss.* .gnu.linkonce.b.*)
	}						:text
}

/*
 * We must supply the ELF program headers explicitly to get just one
 * PT_LOAD segment, and set the flags explicitly to make segments read-only.
 */
PHDRS
{
	text		PT_LOAD FILEHDR PHDRS FLAGS(5);	/* PF_R|PF_X */
	dynamic		PT_DYNAMIC FLAGS(4);		/* PF_R */
	note		PT_NOTE FLAGS(4);		/* PF_R */
	eh_frame_hdr	0x6474e550;	/* PT_GNU_EH_FRAME, but ld doesn't match the name */
}

/*
 * This controls what symbols we export from the DSO.
 */
VERSION
{
	LINUX_2.6 {
	global:
		__kernel_vsyscall;
		__kernel_sigreturn;
		__kernel_rt_sigreturn;
	local: *;
	};
}
From: Paul M. <le...@us...> - 2006-08-28 03:58:09
Update of /cvsroot/linuxsh/linux/arch/sh/kernel
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv25403/arch/sh/kernel

Modified Files:
	Makefile process.c signal.c
Log Message:
Initial configurable vsyscall page support, only used for the signal
trampoline return code at the moment.

Index: Makefile
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/Makefile,v
retrieving revision 1.19
retrieving revision 1.20
diff -u -d -r1.19 -r1.20
--- Makefile	29 Jan 2006 17:46:23 -0000	1.19
+++ Makefile	28 Aug 2006 03:58:06 -0000	1.20
@@ -9,6 +9,7 @@
 	   io.o io_generic.o sh_ksyms.o syscalls.o
 
 obj-y				+= cpu/ timers/
+obj-$(CONFIG_VSYSCALL)		+= vsyscall/
 
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_CF_ENABLER)	+= cf-enabler.o

Index: process.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/process.c,v
retrieving revision 1.45
retrieving revision 1.46
diff -u -d -r1.45 -r1.46
--- process.c	9 Aug 2006 07:43:22 -0000	1.45
+++ process.c	28 Aug 2006 03:58:06 -0000	1.46
@@ -354,7 +354,7 @@
 	else if (next->thread.ubc_pc && next->mm) {
 		int asid = 0;
 #ifdef CONFIG_MMU
-		asid |= next->mm->context & MMU_CONTEXT_ASID_MASK;
+		asid |= next->mm->context.id & MMU_CONTEXT_ASID_MASK;
 #endif
 		ubc_set_tracing(asid, next->thread.ubc_pc);
 	} else {

Index: signal.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/kernel/signal.c,v
retrieving revision 1.32
retrieving revision 1.33
diff -u -d -r1.32 -r1.33
--- signal.c	31 Jul 2006 01:21:08 -0000	1.32
+++ signal.c	28 Aug 2006 03:58:06 -0000	1.33
@@ -8,7 +8,6 @@
  * SuperH version:  Copyright (C) 1999, 2000  Niibe Yutaka & Kaz Kojima
  *
  */
-
 #include <linux/sched.h>
 #include <linux/mm.h>
 #include <linux/smp.h>
@@ -21,6 +20,7 @@
 #include <linux/unistd.h>
 #include <linux/stddef.h>
 #include <linux/tty.h>
+#include <linux/elf.h>
 #include <linux/personality.h>
 #include <linux/binfmts.h>
 
@@ -29,8 +29,6 @@
 #include <asm/pgtable.h>
 #include <asm/cacheflush.h>
 
-#undef DEBUG
-
 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
 
 /*
@@ -312,6 +310,11 @@
 	return (void __user *)((sp - frame_size) & -8ul);
 }
 
+/* These symbols are defined with the addresses in the vsyscall page.
+   See vsyscall-trapa.S. */
+extern void __user __kernel_sigreturn;
+extern void __user __kernel_rt_sigreturn;
+
 static int setup_frame(int sig, struct k_sigaction *ka,
 		       sigset_t *set, struct pt_regs *regs)
 {
@@ -340,6 +343,10 @@
 	   already in userspace.  */
 	if (ka->sa.sa_flags & SA_RESTORER) {
 		regs->pr = (unsigned long) ka->sa.sa_restorer;
+#ifdef CONFIG_VSYSCALL
+	} else if (likely(current->mm->context.vdso)) {
+		regs->pr = VDSO_SYM(&__kernel_sigreturn);
+#endif
 	} else {
 		/* Generate return code (system call to sigreturn) */
 		err |= __put_user(MOVW(7), &frame->retcode[0]);
@@ -416,6 +423,10 @@
 	   already in userspace.  */
 	if (ka->sa.sa_flags & SA_RESTORER) {
 		regs->pr = (unsigned long) ka->sa.sa_restorer;
+#ifdef CONFIG_VSYSCALL
+	} else if (likely(current->mm->context.vdso)) {
+		regs->pr = VDSO_SYM(&__kernel_rt_sigreturn);
+#endif
 	} else {
 		/* Generate return code (system call to rt_sigreturn) */
 		err |= __put_user(MOVW(7), &frame->retcode[0]);
From: Paul M. <le...@us...> - 2006-08-28 03:54:01
Update of /cvsroot/linuxsh/linux/arch/sh/kernel/vsyscall
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv23857/arch/sh/kernel/vsyscall

Log Message:
Directory /cvsroot/linuxsh/linux/arch/sh/kernel/vsyscall added to the repository
From: Paul M. <le...@us...> - 2006-08-28 03:52:12
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv23058/include/asm-sh/cpu-sh4

Removed Files:
	shmparam.h
Log Message:
Consolidate the SHMLBA definitions; we'll sort out the specifics
from the cache desc.

--- shmparam.h DELETED ---
From: Paul M. <le...@us...> - 2006-08-28 03:52:12
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh2
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv23058/include/asm-sh/cpu-sh2

Removed Files:
	shmparam.h
Log Message:
Consolidate the SHMLBA definitions; we'll sort out the specifics
from the cache desc.

--- shmparam.h DELETED ---
From: Paul M. <le...@us...> - 2006-08-28 03:52:12
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv23058/include/asm-sh

Modified Files:
	shmparam.h
Log Message:
Consolidate the SHMLBA definitions; we'll sort out the specifics
from the cache desc.

Index: shmparam.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/shmparam.h,v
retrieving revision 1.3
retrieving revision 1.4
diff -u -d -r1.3 -r1.4
--- shmparam.h	24 Sep 2004 14:58:00 -0000	1.3
+++ shmparam.h	28 Aug 2006 03:52:09 -0000	1.4
@@ -1,8 +1,22 @@
+/*
+ * include/asm-sh/shmparam.h
+ *
+ * Copyright (C) 1999 Niibe Yutaka
+ * Copyright (C) 2006 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
 #ifndef __ASM_SH_SHMPARAM_H
 #define __ASM_SH_SHMPARAM_H
-#ifdef __KERNEL__
 
-#include <asm/cpu/shmparam.h>
+/*
+ * SH-4 and SH-3 7705 have an aliasing dcache. Bump this up to a sensible value
+ * for everyone, and work out the specifics from the probed cache descriptor.
+ */
+#define SHMLBA	0x4000		/* attach addr a multiple of this */
+
+#define __ARCH_FORCE_SHMLBA
 
-#endif /* __KERNEL__ */
 #endif /* __ASM_SH_SHMPARAM_H */
From: Paul M. <le...@us...> - 2006-08-28 03:52:12
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh3
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv23058/include/asm-sh/cpu-sh3

Removed Files:
	shmparam.h
Log Message:
Consolidate the SHMLBA definitions; we'll sort out the specifics
from the cache desc.

--- shmparam.h DELETED ---
From: Paul M. <le...@us...> - 2006-08-23 05:13:59
Update of /cvsroot/linuxsh/linux/include/asm-sh
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv19796/include/asm-sh

Modified Files:
	page.h
Log Message:
Fixup PAGE_SIZE to shut up libc warnings.

Index: page.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/page.h,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- page.h	8 Aug 2006 03:07:13 -0000	1.18
+++ page.h	23 Aug 2006 05:13:54 -0000	1.19
@@ -17,7 +17,13 @@
 
 /* PAGE_SHIFT determines the page size */
 #define PAGE_SHIFT	12
+
+#ifdef __ASSEMBLY__
 #define PAGE_SIZE	(1 << PAGE_SHIFT)
+#else
+#define PAGE_SIZE	(1UL << PAGE_SHIFT)
+#endif
+
 #define PAGE_MASK	(~(PAGE_SIZE-1))
 #define PTE_MASK	PAGE_MASK
From: Paul M. <le...@us...> - 2006-08-21 02:20:20
Update of /cvsroot/linuxsh/linux/arch/sh/mm
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv3714/arch/sh/mm

Modified Files:
	cache-sh4.c
Log Message:
flush_cache_mm() wraps in to flush_cache_all(), which is rather
excessive given that the number of PTEs within the specified context
is generally quite low. Optimize for walking the mm's VMA list and
selectively flushing the VMA ranges from the dcache. Invalidate the
icache only if a VMA sets VM_EXEC.

Index: cache-sh4.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/cache-sh4.c,v
retrieving revision 1.38
retrieving revision 1.39
diff -u -d -r1.38 -r1.39
--- cache-sh4.c	20 Oct 2005 22:48:04 -0000	1.38
+++ cache-sh4.c	21 Aug 2006 02:20:15 -0000	1.39
@@ -2,30 +2,31 @@
  * arch/sh/mm/cache-sh4.c
  *
  * Copyright (C) 1999, 2000, 2002  Niibe Yutaka
- * Copyright (C) 2001, 2002, 2003, 2004, 2005  Paul Mundt
+ * Copyright (C) 2001 - 2006  Paul Mundt
  * Copyright (C) 2003  Richard Curnow
  *
  * This file is subject to the terms and conditions of the GNU General Public
  * License.  See the file "COPYING" in the main directory of this archive
  * for more details.
  */
-
-#include <linux/config.h>
 #include <linux/init.h>
-#include <linux/mman.h>
 #include <linux/mm.h>
-#include <linux/threads.h>
 #include <asm/addrspace.h>
-#include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/processor.h>
 #include <asm/cache.h>
 #include <asm/io.h>
-#include <asm/uaccess.h>
 #include <asm/pgalloc.h>
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
+/*
+ * The maximum number of pages we support up to when doing ranged dcache
+ * flushing. Anything exceeding this will simply flush the dcache in its
+ * entirety.
+ */
+#define MAX_DCACHE_PAGES	64	/* XXX: Tune for ways */
+
 static void __flush_dcache_segment_1way(unsigned long start,
 					unsigned long extent);
 static void __flush_dcache_segment_2way(unsigned long start,
@@ -220,14 +221,14 @@
 static inline void flush_cache_4096(unsigned long start,
 				    unsigned long phys)
 {
-	unsigned long flags;
-
 	/*
 	 * All types of SH-4 require PC to be in P2 to operate on the I-cache.
 	 * Some types of SH-4 require PC to be in P2 to operate on the D-cache.
 	 */
-	if ((cpu_data->flags & CPU_HAS_P2_FLUSH_BUG)
-	    || start < CACHE_OC_ADDRESS_ARRAY) {
+	if ((cpu_data->flags & CPU_HAS_P2_FLUSH_BUG) ||
+	    (start < CACHE_OC_ADDRESS_ARRAY)) {
+		unsigned long flags;
+
 		local_irq_save(flags);
 		__flush_cache_4096(start | SH_CACHE_ASSOC,
 				   P1SEGADDR(phys), 0x20000000);
@@ -258,6 +259,7 @@
 	wmb();
 }
 
+/* TODO: Selective icache invalidation through IC address array.. */
 static inline void flush_icache_all(void)
 {
 	unsigned long flags, ccr;
@@ -291,19 +293,121 @@
 	flush_icache_all();
 }
 
+static void __flush_cache_mm(struct mm_struct *mm, unsigned long start,
+			     unsigned long end)
+{
+	unsigned long d = 0, p = start & PAGE_MASK;
+	unsigned long alias_mask = cpu_data->dcache.alias_mask;
+	unsigned long n_aliases = cpu_data->dcache.n_aliases;
+	unsigned long select_bit;
+	unsigned long all_aliases_mask;
+	unsigned long addr_offset;
+	pgd_t *dir;
+	pmd_t *pmd;
+	pud_t *pud;
+	pte_t *pte;
+	int i;
+
+	dir = pgd_offset(mm, p);
+	pud = pud_offset(dir, p);
+	pmd = pmd_offset(pud, p);
+	end = PAGE_ALIGN(end);
+
+	all_aliases_mask = (1 << n_aliases) - 1;
+
+	do {
+		if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) {
+			p &= PMD_MASK;
+			p += PMD_SIZE;
+			pmd++;
+
+			continue;
+		}
+
+		pte = pte_offset_kernel(pmd, p);
+
+		do {
+			unsigned long phys;
+			pte_t entry = *pte;
+
+			if (!(pte_val(entry) & _PAGE_PRESENT)) {
+				pte++;
+				p += PAGE_SIZE;
+				continue;
+			}
+
+			phys = pte_val(entry) & PTE_PHYS_MASK;
+
+			if ((p ^ phys) & alias_mask) {
+				d |= 1 << ((p & alias_mask) >> PAGE_SHIFT);
+				d |= 1 << ((phys & alias_mask) >> PAGE_SHIFT);
+
+				if (d == all_aliases_mask)
+					goto loop_exit;
+			}
+
+			pte++;
+			p += PAGE_SIZE;
+		} while (p < end && ((unsigned long)pte & ~PAGE_MASK));
+		pmd++;
+	} while (p < end);
+
+loop_exit:
+	addr_offset = 0;
+	select_bit = 1;
+
+	for (i = 0; i < n_aliases; i++) {
+		if (d & select_bit) {
+			(*__flush_dcache_segment_fn)(addr_offset, PAGE_SIZE);
+			wmb();
+		}
+
+		select_bit <<= 1;
+		addr_offset += PAGE_SIZE;
+	}
+}
+
+/*
+ * Note : (RPC) since the caches are physically tagged, the only point
+ * of flush_cache_mm for SH-4 is to get rid of aliases from the
+ * D-cache.  The assumption elsewhere, e.g. flush_cache_range, is that
+ * lines can stay resident so long as the virtual address they were
+ * accessed with (hence cache set) is in accord with the physical
+ * address (i.e. tag).  It's no different here.  So I reckon we don't
+ * need to flush the I-cache, since aliases don't matter for that.  We
+ * should try that.
+ *
+ * Caller takes mm->mmap_sem.
+ */
 void flush_cache_mm(struct mm_struct *mm)
 {
 	/*
-	 * Note : (RPC) since the caches are physically tagged, the only point
-	 * of flush_cache_mm for SH-4 is to get rid of aliases from the
-	 * D-cache.  The assumption elsewhere, e.g. flush_cache_range, is that
-	 * lines can stay resident so long as the virtual address they were
-	 * accessed with (hence cache set) is in accord with the physical
-	 * address (i.e. tag).  It's no different here.  So I reckon we don't
-	 * need to flush the I-cache, since aliases don't matter for that.  We
-	 * should try that.
+	 * If cache is only 4k-per-way, there are never any 'aliases'.  Since
+	 * the cache is physically tagged, the data can just be left in there.
 	 */
-	flush_cache_all();
+	if (cpu_data->dcache.n_aliases == 0)
+		return;
+
+	/*
+	 * Don't bother groveling around the dcache for the VMA ranges
+	 * if there are too many PTEs to make it worthwhile.
+	 */
+	if (mm->nr_ptes >= MAX_DCACHE_PAGES)
+		flush_dcache_all();
+	else {
+		struct vm_area_struct *vma;
+
+		/*
+		 * In this case there are reasonably sized ranges to flush,
+		 * iterate through the VMA list and take care of any aliases.
+		 */
+		for (vma = mm->mmap; vma; vma = vma->vm_next)
+			__flush_cache_mm(mm, vma->vm_start, vma->vm_end);
+	}
+
+	/* Only touch the icache if one of the VMAs has VM_EXEC set. */
+	if (mm->exec_vm)
+		flush_icache_all();
 }
 
 /*
@@ -312,7 +416,8 @@
  * ADDR: Virtual Address (U0 address)
  * PFN: Physical page number
  */
-void flush_cache_page(struct vm_area_struct *vma, unsigned long address, unsigned long pfn)
+void flush_cache_page(struct vm_area_struct *vma, unsigned long address,
+		      unsigned long pfn)
 {
 	unsigned long phys = pfn << PAGE_SHIFT;
 	unsigned int alias_mask;
@@ -359,87 +464,22 @@
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end)
 {
-	unsigned long d = 0, p = start & PAGE_MASK;
-	unsigned long alias_mask = cpu_data->dcache.alias_mask;
-	unsigned long n_aliases = cpu_data->dcache.n_aliases;
-	unsigned long select_bit;
-	unsigned long all_aliases_mask;
-	unsigned long addr_offset;
-	unsigned long phys;
-	pgd_t *dir;
-	pmd_t *pmd;
-	pud_t *pud;
-	pte_t *pte;
-	pte_t entry;
-	int i;
-
 	/*
 	 * If cache is only 4k-per-way, there are never any 'aliases'.  Since
 	 * the cache is physically tagged, the data can just be left in there.
 	 */
-	if (n_aliases == 0)
+	if (cpu_data->dcache.n_aliases == 0)
 		return;
 
-	all_aliases_mask = (1 << n_aliases) - 1;
-
 	/*
 	 * Don't bother with the lookup and alias check if we have a
 	 * wide range to cover, just blow away the dcache in its
 	 * entirety instead. -- PFM.
 	 */
-	if (((end - start) >> PAGE_SHIFT) >= 64) {
+	if (((end - start) >> PAGE_SHIFT) >= MAX_DCACHE_PAGES)
 		flush_dcache_all();
-
-		if (vma->vm_flags & VM_EXEC)
-			flush_icache_all();
-
-		return;
-	}
-
-	dir = pgd_offset(vma->vm_mm, p);
-	pud = pud_offset(dir, p);
-	pmd = pmd_offset(pud, p);
-	end = PAGE_ALIGN(end);
-
-	do {
-		if (pmd_none(*pmd) || pmd_bad(*pmd)) {
-			p &= ~((1 << PMD_SHIFT) - 1);
-			p += (1 << PMD_SHIFT);
-			pmd++;
-
-			continue;
-		}
-
-		pte = pte_offset_kernel(pmd, p);
-
-		do {
-			entry = *pte;
-
-			if ((pte_val(entry) & _PAGE_PRESENT)) {
-				phys = pte_val(entry) & PTE_PHYS_MASK;
-
-				if ((p ^ phys) & alias_mask) {
-					d |= 1 << ((p & alias_mask) >> PAGE_SHIFT);
-					d |= 1 << ((phys & alias_mask) >> PAGE_SHIFT);
-
-					if (d == all_aliases_mask)
-						goto loop_exit;
-				}
-			}
-
-			pte++;
-			p += PAGE_SIZE;
-		} while (p < end && ((unsigned long)pte & ~PAGE_MASK));
-		pmd++;
-	} while (p < end);
-
-loop_exit:
-	for (i = 0, select_bit = 0x1, addr_offset = 0x0; i < n_aliases;
-	     i++, select_bit <<= 1, addr_offset += PAGE_SIZE)
-		if (d & select_bit) {
-			(*__flush_dcache_segment_fn)(addr_offset, PAGE_SIZE);
-			wmb();
-		}
+	else
+		__flush_cache_mm(vma->vm_mm, start, end);
 
 	if (vma->vm_flags & VM_EXEC) {
 		/*
@@ -732,4 +772,3 @@
 		a3 += linesz;
 	} while (a0 < a0e);
 }
-
From: Paul M. <le...@us...> - 2006-08-19 18:33:58
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv14063/include/asm-sh/cpu-sh4

Modified Files:
	shmparam.h
Log Message:
Whitespace damage.

Index: shmparam.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4/shmparam.h,v
retrieving revision 1.3
retrieving revision 1.4
diff -u -d -r1.3 -r1.4
--- shmparam.h	19 Aug 2006 18:31:45 -0000	1.3
+++ shmparam.h	19 Aug 2006 18:33:54 -0000	1.4
@@ -15,7 +15,7 @@
  * SH-4 has D-cache alias issue
  */
 #ifdef CONFIG_CPU_SH4A
-#define SHMLBA (PAGE_SIZE*8)	/* 32k dcache */
+#define SHMLBA (PAGE_SIZE*8)	/* 32k dcache */
 #else
 #define SHMLBA (PAGE_SIZE*4)	/* 16k dcache */
 #endif
From: Paul M. <le...@us...> - 2006-08-19 18:31:50
Update of /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4
In directory sc8-pr-cvs6.sourceforge.net:/tmp/cvs-serv13247/include/asm-sh/cpu-sh4

Modified Files:
	shmparam.h
Log Message:
Double up SHMLBA for SH-4A, and enforce its usage for shmat().

Index: shmparam.h
===================================================================
RCS file: /cvsroot/linuxsh/linux/include/asm-sh/cpu-sh4/shmparam.h,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- shmparam.h	4 May 2003 19:30:13 -0000	1.2
+++ shmparam.h	19 Aug 2006 18:31:45 -0000	1.3
@@ -2,6 +2,7 @@
  * include/asm-sh/cpu-sh4/shmparam.h
  *
  * Copyright (C) 1999 Niibe Yutaka
+ * Copyright (C) 2006 Paul Mundt
  *
  * This file is subject to the terms and conditions of the GNU General Public
  * License.  See the file "COPYING" in the main directory of this archive
@@ -13,7 +14,13 @@
 /*
  * SH-4 has D-cache alias issue
  */
-#define SHMLBA (PAGE_SIZE*4)	/* attach addr a multiple of this */
+#ifdef CONFIG_CPU_SH4A
+#define SHMLBA (PAGE_SIZE*8)	/* 32k dcache */
+#else
+#define SHMLBA (PAGE_SIZE*4)	/* 16k dcache */
+#endif
+
+#define __ARCH_FORCE_SHMLBA
 
 #endif /* __ASM_CPU_SH4_SHMPARAM_H */