From: NIIBE Y. <gn...@m1...> - 2002-03-26 05:49:02
OK, I've fixed cache-sh4.c.  The problem was confusion between virtual
addresses and physical addresses.

Note that the new flush_cache_range is quite slow for exec: it's two to
four times slower than the simple flush_cache_all implementation.  The
new flush_cache_range would be good if we wanted the cache to keep valid
data (avoiding flushes), but it takes SuperH a long time to handle the
cache.

2002-03-26  NIIBE Yutaka  <gn...@m1...>

	* arch/sh/mm/cache-sh4.c (flush_cache_mm): Don't check
	mm->context, it's for TLB handling.
	(flush_cache_range): Likewise.
	(flush_cache_mm): Fix the comment.  The alias issue is there
	for write-through cache too.
	(flush_cache_range): Don't handle in P2.

2002-03-26  NIIBE Yutaka  <gn...@m1...>

	* arch/sh/mm/cache-sh4.c (flush_cache_range): Bug fix.  Handle
	the case where PMD is none or bad.  The argument to
	__flush_icache_page/__flush_dcache_page is a physical address
	(was: virtual address).

Index: arch/sh/mm/cache-sh4.c
===================================================================
RCS file: /cvsroot/linuxsh/linux/arch/sh/mm/cache-sh4.c,v
retrieving revision 1.6
diff -u -3 -p -r1.6 cache-sh4.c
--- arch/sh/mm/cache-sh4.c	22 Mar 2002 12:57:10 -0000	1.6
+++ arch/sh/mm/cache-sh4.c	26 Mar 2002 05:43:03 -0000
@@ -298,15 +298,13 @@ void flush_cache_mm(struct mm_struct *mm
  * FIXME: Really, the optimal solution here would be able to flush out
  * individual lines created by the specified context, but this isn't
  * feasible for a number of architectures (such as MIPS, and some
- * SPARC) .. is this possible for SuperH? (This is a non-issue if the
- * SH4 cache is configured in write-through mode).
+ * SPARC) .. is this possible for SuperH?
  *
- * In the meantime, we'll just flush all of the caches if we have a
- * valid mm context.. this seems to be the simplest way to avoid at
- * least a few wasted cache flushes. -Lethal
+ * In the meantime, we'll just flush all of the caches.. this
+ * seems to be the simplest way to avoid at least a few wasted
+ * cache flushes. -Lethal
  */
-	if (mm->context != 0)
-		flush_cache_all();
+	flush_cache_all();
 }
 
 /*
@@ -324,35 +322,28 @@ void flush_cache_range(struct vm_area_st
 	unsigned long flags;
 	struct mm_struct *mm = vma->vm_mm;
 
-	if (mm->context == 0)
-		return;
-
 	start &= PAGE_MASK;
-	if (mm->context != current->active_mm->context) {
-		flush_cache_all();
-	} else {
-		pgd_t *pgd;
-		pmd_t *pmd;
+	save_and_cli(flags);
+	for (; start < end; start += PAGE_SIZE) {
+		pgd_t *pgd = pgd_offset(mm, start);
+		pmd_t *pmd = pmd_offset(pgd, start);
 		pte_t *pte;
+		unsigned long phys;
 
-		save_and_cli(flags);
-		jump_to_P2();
-
-		for (start; start < end; start += PAGE_SIZE) {
-			pgd = pgd_offset(mm, start);
-			pmd = pmd_offset(pgd, start);
-			pte = pte_offset_kernel(pmd, start);
-
-			if (pte_val(*pte) & _PAGE_PRESENT) {
-				__flush_icache_page(start);
-				__flush_dcache_page(start);
-			}
+		if (pmd_none(*pmd) || pmd_bad(*pmd)) {
+			start &= ~((1 << PMD_SHIFT) -1);
+			start += (1 << PMD_SHIFT);
+			continue;
+		}
+		pte = pte_offset_kernel(pmd, start);
+		phys = pte_val(*pte)&PTE_PHYS_MASK;
+		if (pte_val(*pte) & _PAGE_PRESENT) {
+			__flush_icache_page(phys);
+			__flush_dcache_page(phys);
 		}
-
-		back_to_P1();
-		restore_flags(flags);
 	}
+	restore_flags(flags);
 }
 
 /*
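
P.S.  The PMD-skip logic in the new loop is the subtle part: when no
page table is mapped under the current PMD, we round start down to the
PMD boundary and jump over the whole span instead of touching each
page.  Below is a small userspace sketch of that walk, illustrative
only and not the kernel code: the constants, the fake one-entry table,
and lookup_pmd are all made up for the example.

#include <stdio.h>

#define PAGE_SHIFT   12
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define PAGE_MASK    (~(PAGE_SIZE - 1))
#define PMD_SHIFT    22                  /* one PMD covers 4MB here */
#define PMD_SPAN     (1UL << PMD_SHIFT)
#define PTES_PER_PMD (1UL << (PMD_SHIFT - PAGE_SHIFT))

/* A "PTE" here is just the physical page address; 0 = not present. */
static unsigned long fake_pmd0[PTES_PER_PMD];  /* covers va 0-0x3fffff */

static unsigned long *lookup_pmd(unsigned long va)
{
	/* Only the first PMD has a page table in this example. */
	return (va >> PMD_SHIFT) == 0 ? fake_pmd0 : NULL;
}

int main(void)
{
	unsigned long start = 0x3fe000, end = 0x401000; /* crosses a PMD */

	fake_pmd0[0x3fe000 >> PAGE_SHIFT] = 0x1234000; /* map one page */

	for (start &= PAGE_MASK; start < end; start += PAGE_SIZE) {
		unsigned long *pmd = lookup_pmd(start);
		unsigned long phys;

		if (pmd == NULL) {
			/* No page table: skip to the next PMD boundary.
			 * As in the patch, the loop increment then
			 * advances one further page. */
			start &= ~(PMD_SPAN - 1);
			start += PMD_SPAN;
			continue;
		}
		phys = pmd[(start >> PAGE_SHIFT) & (PTES_PER_PMD - 1)];
		if (phys != 0)  /* present: flush by physical address */
			printf("flush va 0x%06lx -> pa 0x%lx\n", start, phys);
	}
	return 0;
}

Compiled with gcc, it prints one flush line for the page mapped at
0x3fe000 and steps over the empty PMD starting at 0x400000 in a single
iteration instead of looping over its 1024 pages.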