From: NIIBE Y. <gn...@ch...> - 2000-08-17 04:17:33
I've looked through the code of linux/mm and improved the changes I made
yesterday.  Here's the patch.

The important point is that when a page is allocated, it may have stale
data/instructions in the cache lines of user space, so we need to flush
the cache.  I did "flushing at 'end of I/O'" yesterday, but that's more
than needed.  I've found that when a swap entry is read asynchronously,
the stale cache lines remain.

2000-08-17  NIIBE Yutaka  <gn...@m1...>

	* mm/memory.c (do_anonymous_page): We need to flush the I-cache
	and D-cache here, as it's a newly allocated page.
	(do_no_page): We need to flush the D-cache.
	(do_swap_page): Flush the D-cache and I-cache here.  There are
	cases where read_swap_cache is called asynchronously and the
	pages are cached.

	* Revert the change for fs/buffer.c (end_buffer_io_async).  It's
	more than needed.  We only need to flush when the kernel WRITES
	to the page (from I/O), not when it READS (to I/O).

	* Revert the change for mm/memory.c (do_wp_page: case 1): We have
	a valid PTE here (it's read-only, but it works).

Index: fs/buffer.c
===================================================================
RCS file: /cvsroot/linuxsh/kernel/fs/buffer.c,v
retrieving revision 1.7
diff -u -r1.7 buffer.c
--- fs/buffer.c	2000/08/16 08:34:05	1.7
+++ fs/buffer.c	2000/08/17 04:08:10
@@ -770,7 +770,6 @@
 	/* OK, the async IO on this page is complete. */
 	spin_unlock_irqrestore(&page_uptodate_lock, flags);
-	flush_dcache_page(page);
 	/*
 	 * if none of the buffers had errors then we can set the
 	 * page uptodate:
Index: mm/memory.c
===================================================================
RCS file: /cvsroot/linuxsh/kernel/mm/memory.c,v
retrieving revision 1.6
diff -u -r1.6 memory.c
--- mm/memory.c	2000/08/16 08:34:09	1.6
+++ mm/memory.c	2000/08/17 04:08:21
@@ -790,8 +790,8 @@
 		pte_t *page_table)
 {
 	copy_cow_page(old_page,new_page,address);
-	flush_icache_page(vma, new_page);
 	flush_dcache_page(new_page);
+	flush_icache_page(vma, new_page);
 	establish_pte(vma, address, page_table, pte_mkwrite(pte_mkdirty(mk_pte(new_page, vma->vm_page_prot))));
 }
 
@@ -849,7 +849,7 @@
 		UnlockPage(old_page);
 		/* FallThrough */
 	case 1:
-		flush_dcache_page(old_page);
+		flush_cache_page(vma, address);
 		establish_pte(vma, address, page_table, pte_mkyoung(pte_mkdirty(pte_mkwrite(pte))));
 		spin_unlock(&mm->page_table_lock);
 		return 1;	/* Minor fault */
@@ -1059,9 +1059,6 @@
 		unlock_kernel();
 		if (!page)
 			return -1;
-
-		flush_dcache_page(page);
-		flush_icache_page(vma, page);
 	}
 
 	mm->rss++;
@@ -1084,6 +1081,8 @@
 	} else
 		UnlockPage(page);
 
+	flush_dcache_page(page);
+	flush_icache_page(vma, page);
 	set_pte(page_table, pte);
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(vma, address, pte);
@@ -1107,7 +1106,8 @@
 		clear_user_highpage(page, addr);
 		entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
 		mm->rss++;
-		flush_page_to_ram(page);
+		flush_dcache_page(page);
+		flush_icache_page(vma, page);
 	}
 	set_pte(page_table, entry);
 	/* No need to invalidate - it was non-present before */
@@ -1156,7 +1156,7 @@
 	 * so we can make it writable and dirty to avoid having to
 	 * handle that later.
 	 */
-	flush_page_to_ram(new_page);
+	flush_dcache_page(new_page);
 	flush_icache_page(vma, new_page);
 	entry = mk_pte(new_page, vma->vm_page_prot);
 	if (write_access) {
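
For reference, a minimal sketch of the ordering the patch establishes when
a fault handler installs a freshly written page.  The wrapper function and
its arguments are hypothetical placeholders, not kernel code; only
flush_dcache_page(), flush_icache_page(), set_pte() and update_mmu_cache()
are the real hooks used in the patch:

	/*
	 * Sketch only: hypothetical helper illustrating the flush ordering
	 * used by the patch above.  Not part of the patch itself.
	 */
	static void sketch_install_page(struct vm_area_struct *vma,
					unsigned long address, pte_t *page_table,
					struct page *page, pgprot_t prot)
	{
		/*
		 * The kernel has just written the page (COW copy,
		 * clear_user_highpage, or an asynchronous swap read), so the
		 * D-cache may hold lines the user mapping will not see, and
		 * the I-cache may hold stale instructions.
		 */
		flush_dcache_page(page);	/* make kernel writes visible   */
		flush_icache_page(vma, page);	/* keep I-cache coherent        */

		/* Only then is the mapping made visible to user space. */
		set_pte(page_table, mk_pte(page, prot));
		update_mmu_cache(vma, address, *page_table);
	}

The point of the ordering is that both flushes happen before set_pte(), so
the user mapping never becomes visible while stale cache lines are still
present.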