From: Thomas S. <tho...@we...> - 2009-10-13 21:29:20
Jan Beulich wrote:
> >>> Thomas Schlichter <tho...@we...> 12.10.09 20:32 >>>
> >+	if (!pat_enabled) {
> >+		void __iomem *ret = ioremap_nocache(phys_addr, size);
> >+		if (ret)
> >+			mtrr_add(phys_addr, size, MTRR_TYPE_WRCOMB, false);
>
> This won't work if phys_addr or size aren't page aligned, or if size isn't
> a power of two.

No, at least the comments in mtrr_add and mtrr_check state that it is only
required that phys_addr and size be multiples of PAGE_SIZE. And I'm not sure
it is always safe to round these up/down to the next page boundary. If it is
not, it may be better to fail...

> >+	if (!pat_enabled) {
> >+		ret = set_memory_uc(addr, numpages);
> >+		if (!ret)
> >+			mtrr_add(__pa(addr), numpages * PAGE_SIZE,
> >+				 MTRR_TYPE_WRCOMB, false);
>
> Similarly, this requires numpages to be a power of two.

Same as above.

> I think user mode code handles this by splitting the request and trying to
> establish as many entries as possible (and those as big as possible).

That does not seem to be required here...

> Also, in all cases you pass 'false' for the 'increment' parameter, in order
> to avoid having to tear down the established entries. While this may be
> reasonable for kernel initiated mappings, I don't think that's acceptable
> for such originating from user mode.

Well, to do that I would have to remember which mmap successfully set up
which MTRR entries, and it must be ensured that every mmap is always paired
with a munmap. If that is indeed always guaranteed, and if it's worth
maintaining the required bookkeeping data, I'd be happy to add a
corresponding mtrr_del there.

Kind regards,
Thomas