From: Marcelo T. <mto...@re...> - 2008-04-28 18:12:57
On Fri, Apr 25, 2008 at 11:33:18AM -0600, David S. Ahern wrote:
> Most of the cycles (~80% of that 54k+) are spent in paging64_prefetch_page():
>
>         for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
>                 gpa_t pte_gpa = gfn_to_gpa(sp->gfn);
>                 pte_gpa += (i+offset) * sizeof(pt_element_t);
>
>                 r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &pt,
>                                           sizeof(pt_element_t));
>                 if (r || is_present_pte(pt))
>                         sp->spt[i] = shadow_trap_nonpresent_pte;
>                 else
>                         sp->spt[i] = shadow_notrap_nonpresent_pte;
>         }
>
> This loop is run 512 times and takes a total of ~45k cycles, or ~88 cycles per
> loop.
>
> This function gets run >20,000/sec during some of the kscand loops.

Hi David,

Do you see the mmu_recycled counter increase?
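For reference, a minimal sketch of one way to watch that counter from
userspace. It assumes a kernel with KVM's debugfs stats enabled and debugfs
mounted at /sys/kernel/debug, so the per-stat file path below is an
assumption on my part rather than something confirmed in this thread:

        /* Sketch: poll KVM's mmu_recycled stat via debugfs.
         * Assumes debugfs is mounted at /sys/kernel/debug and that
         * KVM exposes per-stat files under /sys/kernel/debug/kvm/.
         */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                const char *path = "/sys/kernel/debug/kvm/mmu_recycled";
                long prev = -1;

                for (;;) {
                        FILE *f = fopen(path, "r");
                        long val;

                        if (!f) {
                                perror(path);
                                return 1;
                        }
                        if (fscanf(f, "%ld", &val) != 1)
                                val = -1;
                        fclose(f);

                        /* Print only when the counter moves. */
                        if (val != prev)
                                printf("mmu_recycled = %ld\n", val);
                        prev = val;
                        sleep(1);
                }
        }

If the counter climbs in step with the kscand activity, shadow pages are
being recycled and rebuilt, which would explain why prefetch_page runs so
often.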