From: Avi K. <av...@qu...> - 2008-04-18 15:51:38
be...@il... wrote:
> From: Ben-Ami Yassour <be...@il...>
>
> Signed-off-by: Ben-Ami Yassour <be...@il...>
> Signed-off-by: Muli Ben-Yehuda <mu...@il...>
> ---
>  arch/x86/kvm/mmu.c         |   59 +++++++++++++++++++++++++++++--------------
>  arch/x86/kvm/paging_tmpl.h |   19 +++++++++----
>  include/linux/kvm_host.h   |    2 +-
>  virt/kvm/kvm_main.c        |   17 +++++++++++-
>  4 files changed, 69 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 078a7f1..c89029d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -112,6 +112,8 @@ static int dbg = 1;
>  #define PT_FIRST_AVAIL_BITS_SHIFT 9
>  #define PT64_SECOND_AVAIL_BITS_SHIFT 52
>
> +#define PT_SHADOW_IO_MARK (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
> +

Please rename this to PT_SHADOW_MMIO_MASK.

>  #define VALID_PAGE(x) ((x) != INVALID_PAGE)
>
>  #define PT64_LEVEL_BITS 9
> @@ -237,6 +239,9 @@ static int is_dirty_pte(unsigned long pte)
>
>  static int is_rmap_pte(u64 pte)
>  {
> +	if (pte & PT_SHADOW_IO_MARK)
> +		return false;
> +
>  	return is_shadow_present_pte(pte);
>  }
>

Why avoid rmap on mmio pages?  Sure, it's unnecessary work, but having
fewer cases improves overall reliability.  You can use pfn_valid() in
gfn_to_pfn() and kvm_release_pfn_*() to conditionally update the page
refcounts, along the lines of the sketch below.

-- 
Any sufficiently difficult bug is indistinguishable from a feature.
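For illustration, here is a minimal sketch of the approach Avi describes,
keying the refcount decisions off pfn_valid().  This is an assumed shape,
not the patch that eventually went in: gfn_to_pfn(), kvm_release_pfn_clean(),
kvm_release_pfn_dirty(), pfn_t, bad_page, gfn_to_hva() and kvm_is_error_hva()
are real kvm names of this period, but the bodies below are approximations,
and the VM_PFNMAP lookup is just one plausible way to resolve an mmio pfn.

/* Sketch for virt/kvm/kvm_main.c; callers hold current->mm->mmap_sem. */

pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
{
	struct page *page[1];
	struct vm_area_struct *vma;
	unsigned long addr;
	pfn_t pfn;

	addr = gfn_to_hva(kvm, gfn);
	if (kvm_is_error_hva(addr)) {
		get_page(bad_page);
		return page_to_pfn(bad_page);
	}

	/* RAM-backed gfn: get_user_pages() takes the page reference. */
	if (get_user_pages(current, current->mm, addr, 1, 1, 0,
			   page, NULL) == 1)
		return page_to_pfn(page[0]);

	/* mmio gfn: no struct page behind it, so no refcount to take. */
	vma = find_vma(current->mm, addr);
	if (!vma || addr < vma->vm_start || !(vma->vm_flags & VM_PFNMAP)) {
		get_page(bad_page);
		return page_to_pfn(bad_page);
	}
	pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
	BUG_ON(pfn_valid(pfn));		/* mmio pfns carry no struct page */
	return pfn;
}

void kvm_release_pfn_clean(pfn_t pfn)
{
	if (pfn_valid(pfn))		/* mmio pfns were never refcounted */
		put_page(pfn_to_page(pfn));
}

void kvm_release_pfn_dirty(pfn_t pfn)
{
	if (pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		SetPageDirty(page);
		put_page(page);
	}
}

With the refcounting centralized like this, only pfns backed by a struct
page ever have their counts touched, so the mmu (rmap included) can treat
mmio sptes exactly like any other spte, which is the point of the review
comment above.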