From: NIIBE Y. <gn...@ch...> - 2000-06-24 02:00:20
Stuart Menefy wrote:
> You're right, it has the same synonym problem, due to cache size vs.
> page size issues. However there is one important difference, the
> hardware detects the synonym, and raises an exception when it would be
> created.
Interesting point.  Yes, we can see the similarity between MIPS and
SuperH, but there are differences we should watch out for.
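Just to make the numbers explicit, this is how I understand the window
in which the synonym can occur on SH-4 (the 16KB operand cache figure is
my assumption for the SH7750-class parts, and the names below are only
for illustration, not taken from your patch):

	/* The SH-4 operand cache is virtually indexed.  With a 16KB cache
	 * and 4KB pages, address bits 12 and 13 index the cache but are
	 * not part of the page offset, so two virtual mappings of the
	 * same physical page can land on different cache lines when those
	 * bits differ.  That is the 0x3000 mask in the check in your
	 * patch below. */
	#define SH4_OC_SIZE	0x4000	/* 16KB operand cache (assumption) */
	#define SH4_PAGE_SIZE	0x1000	/* 4KB page */
	#define SYNONYM_MASK	(SH4_OC_SIZE - SH4_PAGE_SIZE)	/* == 0x3000 */

	static inline int causes_synonym(unsigned long vaddr1, unsigned long vaddr2)
	{
		return ((vaddr1 ^ vaddr2) & SYNONYM_MASK) != 0;
	}
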
> This has advantages and disadvantages:
> - the advantage is that, although there is a performance penalty,
> there should be no risk of errors caused by synonyms
> - however it does prevent legitimate synonyms, for example when all
> the mappings are read only.
>
> SH4 doesn't have this detection capability, so we have to make sure
> we catch any problems early.
Yes.
> The other point at which synonyms can occur is when a page is explicitly
> mapped at multiple addresses in user space. So I had a go at writing a
> simple test case for this, and then fixing the problem. Attached is my
> test code, and a potential fix. This is taken almost unchanged from
> the Sparc sun4c code, which has a completely virtual cache. It's horrible,
> but I don't see any alternative, unless we can restrict the addresses
> at which shared pages can be mapped.
OK.  The point is that: when there's a shared mapping which causes
an alias (synonym), we fix up the pte(s) so that the data does not
stay in the cache.  Specifically, on a TLB fault for such a page
(WRITE&&SHARED), we fix up all the relevant pte-s to mark the pages
uncachable.
> void update_mmu_cache(struct vm_area_struct * vma,
> unsigned long address, pte_t pte)
> {
> @@ -276,6 +352,11 @@
> unsigned long pteaddr;
>
> save_and_cli(flags);
> +
> +#ifdef __SH4__
> + if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED))
> + synonym_fixup(vma, address, pte);
> +#endif
>
> /* Set PTEH register */
> if (vma) {
This part, agreed.
There's a point which I don't understand, though.
> diff -u -r1.11 fault.c
> --- arch/sh/mm/fault.c 2000/05/18 09:34:18 1.11
> +++ arch/sh/mm/fault.c 2000/06/23 17:58:48
> @@ -268,6 +268,82 @@
> goto no_context;
> }
>
> +#ifdef __SH4__
> +/* There are really two cases of aliases to watch out for, and these
> + * are:
> + *
> + * 1) A user's page which can be aliased with the kernels virtual
> + * mapping of the physical page.
> + *
> + * 2) Multiple user mappings of the same inode/anonymous object
> + * such that two copies of the same data for the same phys page
> + * can live (writable) in the cache at the same time.
> + *
> + * We handle number 1 by flushing the kernel copy of the page always
> + * after COW page operations.
> + */
> +static void synonym_fixup(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +{
> + pgd_t *pgdp;
> + pte_t *ptep;
> +
> + if (vma->vm_file) {
> + struct address_space *mapping;
> + unsigned long offset = (address & PAGE_MASK) - vma->vm_start;
> + struct vm_area_struct *vmaring;
> + int alias_found = 0;
> +
> + mapping = vma->vm_file->f_dentry->d_inode->i_mapping;
> + spin_lock(&mapping->i_shared_lock);
> + vmaring = mapping->i_mmap;
Here, we loop over the mapping ring.
> + do {
> + unsigned long vaddr = vmaring->vm_start + offset;
> + unsigned long start;
> +
> + /* Do not mistake ourselves as another mapping. */
> + if (vmaring == vma)
> + continue;
> +
> + printk("Found a shared mapping\n");
Yes, we found the shared mapping.
> + if ((vaddr ^ address) & 0x3000) {
> + printk("Found a synonym problem\n");
> + alias_found++;
Yup, we found the synonym (in SuperH terms; in general, an "alias").
> + start = vmaring->vm_start;
> + /* If this is already in the cache flush it,
> + * and remove any existing TLB entries. */
> + while (start < vmaring->vm_end) {
But why do we loop from the start of the area to the end to fix up the
pte-s?  Isn't fixing up the single relevant page enough?  Even if we fix
up the pte-s of the counterpart area here, another TLB fault on a
different page will bring us here again anyway.  (A rough sketch of what
I mean is at the end of this mail.)
> + pgdp = pgd_offset(vmaring->vm_mm, start);
> + if (!pgdp)
> + goto next;
> + ptep = pte_offset((pmd_t *) pgdp, start);
> + if (!ptep)
> + goto next;
> +
> + if (pte_val(*ptep) & _PAGE_PRESENT) {
> + printk("..and the page is present\n");
> + flush_cache_page(vmaring, start);
> + *ptep = __pte(pte_val(*ptep) &
> + (~ _PAGE_CACHABLE));
> + flush_tlb_page(vmaring, start);
> + }
> + next:
> + start += PAGE_SIZE;
> + }
In my understanding, this loop is questionable.
> + }
> + } while ((vmaring = vmaring->vm_next_share) != NULL);
> + spin_unlock(&mapping->i_shared_lock);
> +
> + if (alias_found && (pte_val(pte) & _PAGE_CACHABLE)) {
> + printk("updating PTE\n");
> + pgdp = pgd_offset(vma->vm_mm, address);
> + ptep = pte_offset((pmd_t *) pgdp, address);
> + *ptep = __pte(pte_val(*ptep) & (~ _PAGE_CACHABLE));
> + pte = *ptep;
> + }
> + }
> +}
> +#endif
> +
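To show what I mean by fixing up only the single relevant page, here is
a rough sketch (untested, only for discussion); it touches just the one
counterpart address vmaring->vm_start + offset instead of walking the
whole vma:

	/* Rough sketch, not tested: make only the one page of the other
	 * mapping which actually aliases with 'address' uncachable. */
	static void fixup_one_page(struct vm_area_struct *vmaring, unsigned long vaddr)
	{
		pgd_t *pgdp;
		pte_t *ptep;

		pgdp = pgd_offset(vmaring->vm_mm, vaddr);
		if (!pgdp)
			return;
		ptep = pte_offset((pmd_t *) pgdp, vaddr);
		if (!ptep)
			return;

		if (pte_val(*ptep) & _PAGE_PRESENT) {
			/* Flush any cached data, make the page uncachable,
			 * and drop the stale TLB entry. */
			flush_cache_page(vmaring, vaddr);
			*ptep = __pte(pte_val(*ptep) & ~_PAGE_CACHABLE);
			flush_tlb_page(vmaring, vaddr);
		}
	}

In the loop over the mapping ring we would then just call
fixup_one_page(vmaring, vmaring->vm_start + offset) when the synonym
condition is met, and of course also fix up the pte of the faulting
address itself, as your code already does at the end.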
--