* [PATCH 0/2] KVM: x86: MMU: Clean up handle_mmio_page_fault() handling in kvm_mmu_page_fault()
  2016-02-22  8:23  Takuya Yoshikawa

From: Takuya Yoshikawa
To: pbonzini
Cc: kvm, linux-kernel, Takuya Yoshikawa

The end result is very similar to handle_ept_misconfig()'s corresponding
code.  It may also be possible to change handle_ept_misconfig() not to
call handle_mmio_page_fault() separately from kvm_mmu_page_fault(): the
only difference seems to be whether it checks for PFERR_RSVD_MASK.

Takuya Yoshikawa (2):
  KVM: MMU: Consolidate quickly_check_mmio_pf() and is_mmio_page_fault()
  KVM: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()

 arch/x86/kvm/mmu.c         | 54 +++++++++++++++++-----------------------------
 arch/x86/kvm/paging_tmpl.h | 19 ++++++----------
 2 files changed, 26 insertions(+), 47 deletions(-)

-- 
2.1.0
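[Editorial aside, not part of the original thread.] The unified fault path the series arrives at can be sketched outside the kernel with stand-in types; the stub function, the parameter names, and the enum values below are assumptions for illustration only, not the kernel's real definitions:

```c
#include <assert.h>

/* Stand-ins for the kernel's constants; values here are assumptions. */
enum { RET_MMIO_PF_RETRY = 0, RET_MMIO_PF_EMULATE = 1, RET_MMIO_PF_INVALID = 2 };
#define PFERR_RSVD_MASK (1u << 3)

/* Stubbed MMIO handler: pretend a cache hit means "emulate the access"
 * and a miss means "retry in the guest". */
static int handle_mmio_page_fault_stub(int cached)
{
	return cached ? RET_MMIO_PF_EMULATE : RET_MMIO_PF_RETRY;
}

/* Skeleton of the flow after the series: the PFERR_RSVD_MASK check lives
 * in one place instead of being repeated in every
 * vcpu->arch.mmu.page_fault() handler.  Returns 2 for "emulate",
 * 1 for "re-enter the guest", negative on error. */
static int kvm_mmu_page_fault_sketch(unsigned int error_code, int cached)
{
	if (error_code & PFERR_RSVD_MASK) {
		int r = handle_mmio_page_fault_stub(cached);

		if (r == RET_MMIO_PF_EMULATE)
			return 2;	/* fall through to instruction emulation */
		if (r == RET_MMIO_PF_RETRY)
			return 1;	/* re-enter the guest */
		return r;		/* propagate error */
	}
	return 1;	/* ordinary fault path, handled elsewhere */
}
```

The point of the shape is that the RSVD-bit special case is hoisted above the per-mode handlers, which is what both patches combined accomplish.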
* [PATCH 1/2] KVM: x86: MMU: Consolidate quickly_check_mmio_pf() and is_mmio_page_fault()
  2016-02-22  8:23  Takuya Yoshikawa

From: Takuya Yoshikawa
To: pbonzini
Cc: kvm, linux-kernel, Takuya Yoshikawa

These two have only slight differences:
  - whether 'addr' is of type u64 or of type gva_t
  - whether they have a 'direct' parameter or not

Concerning the former, quickly_check_mmio_pf()'s u64 is better because
'addr' needs to be able to hold both a guest physical address and a
guest virtual address.

The latter is just a stylistic issue as we can always calculate the mode
from the 'vcpu' as is_mmio_page_fault() does.  This patch keeps the
parameter to make the following patch cleaner.

In addition, the patch renames the function to mmio_info_in_cache() to
make it clear what it actually checks for.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955d..a28b734 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3273,7 +3273,7 @@ static bool is_shadow_zero_bits_set(struct kvm_mmu *mmu, u64 spte, int level)
 	return __is_rsvd_bits_set(&mmu->shadow_zero_check, spte, level);
 }
 
-static bool quickly_check_mmio_pf(struct kvm_vcpu *vcpu, u64 addr, bool direct)
+static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 {
 	if (direct)
 		return vcpu_match_mmio_gpa(vcpu, addr);
@@ -3332,7 +3332,7 @@ int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	u64 spte;
 	bool reserved;
 
-	if (quickly_check_mmio_pf(vcpu, addr, direct))
+	if (mmio_info_in_cache(vcpu, addr, direct))
 		return RET_MMIO_PF_EMULATE;
 
 	reserved = walk_shadow_page_get_mmio_spte(vcpu, addr, &spte);
@@ -4354,19 +4354,12 @@ static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }
 
-static bool is_mmio_page_fault(struct kvm_vcpu *vcpu, gva_t addr)
-{
-	if (vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu))
-		return vcpu_match_mmio_gpa(vcpu, addr);
-
-	return vcpu_match_mmio_gva(vcpu, addr);
-}
-
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 		       void *insn, int insn_len)
 {
 	int r, emulation_type = EMULTYPE_RETRY;
 	enum emulation_result er;
+	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
 	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
 	if (r < 0)
@@ -4377,7 +4370,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 		goto out;
 	}
 
-	if (is_mmio_page_fault(vcpu, cr2))
+	if (mmio_info_in_cache(vcpu, cr2, direct))
 		emulation_type = 0;
 
 	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
-- 
2.1.0
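[Editorial aside, not part of the original thread.] To see what patch 1's consolidation buys, here is a stand-alone model of the merged check. The struct and its cache fields are invented for this sketch; the real code takes a struct kvm_vcpu and matches via vcpu_match_mmio_gpa()/vcpu_match_mmio_gva():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for struct kvm_vcpu: only the cached MMIO info matters
 * here.  The field names are invented for illustration. */
struct vcpu_model {
	uint64_t mmio_gpa;	/* cached MMIO guest physical address */
	uint64_t mmio_gva;	/* cached MMIO guest virtual address */
};

/*
 * One function replaces both quickly_check_mmio_pf() and
 * is_mmio_page_fault(): 'addr' is u64 so it can hold either a gpa or a
 * gva, and 'direct' selects which cache to match against.
 */
static bool mmio_info_in_cache(struct vcpu_model *vcpu, uint64_t addr,
			       bool direct)
{
	if (direct)
		return vcpu->mmio_gpa == addr;	/* vcpu_match_mmio_gpa() stand-in */
	return vcpu->mmio_gva == addr;		/* vcpu_match_mmio_gva() stand-in */
}
```

Keeping 'direct' as an explicit parameter, rather than deriving it from the vcpu inside the function, is what lets patch 2 compute it once in kvm_mmu_page_fault() and reuse it.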
* [PATCH 2/2] KVM: x86: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()
  2016-02-22  8:23  Takuya Yoshikawa

From: Takuya Yoshikawa
To: pbonzini
Cc: kvm, linux-kernel, Takuya Yoshikawa

Rather than placing a handle_mmio_page_fault() call in each
vcpu->arch.mmu.page_fault() handler, moving it up to
kvm_mmu_page_fault() makes the code better:

 - avoids code duplication
 - for kvm_arch_async_page_ready(), which is the other caller of
   vcpu->arch.mmu.page_fault(), removes an extra error_code check
 - avoids returning both RET_MMIO_PF_* values and raw integer values
   from vcpu->arch.mmu.page_fault()

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c         | 39 ++++++++++++++++-----------------------
 arch/x86/kvm/paging_tmpl.h | 19 ++++++------------
 2 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a28b734..2ce3892 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3370,13 +3370,6 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
 
 	pgprintk("%s: gva %lx error %x\n", __func__, gva, error_code);
 
-	if (unlikely(error_code & PFERR_RSVD_MASK)) {
-		r = handle_mmio_page_fault(vcpu, gva, true);
-
-		if (likely(r != RET_MMIO_PF_INVALID))
-			return r;
-	}
-
 	r = mmu_topup_memory_caches(vcpu);
 	if (r)
 		return r;
@@ -3460,13 +3453,6 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 
 	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
 
-	if (unlikely(error_code & PFERR_RSVD_MASK)) {
-		r = handle_mmio_page_fault(vcpu, gpa, true);
-
-		if (likely(r != RET_MMIO_PF_INVALID))
-			return r;
-	}
-
 	r = mmu_topup_memory_caches(vcpu);
 	if (r)
 		return r;
@@ -4361,18 +4347,27 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 	enum emulation_result er;
 	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
 
+	if (unlikely(error_code & PFERR_RSVD_MASK)) {
+		r = handle_mmio_page_fault(vcpu, cr2, direct);
+		if (r == RET_MMIO_PF_EMULATE) {
+			emulation_type = 0;
+			goto emulate;
+		}
+		if (r == RET_MMIO_PF_RETRY)
+			return 1;
+		if (r < 0)
+			return r;
+	}
+
 	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
 	if (r < 0)
-		goto out;
-
-	if (!r) {
-		r = 1;
-		goto out;
-	}
+		return r;
+	if (!r)
+		return 1;
 
 	if (mmio_info_in_cache(vcpu, cr2, direct))
 		emulation_type = 0;
-
+emulate:
 	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
 
 	switch (er) {
@@ -4386,8 +4381,6 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
 	default:
 		BUG();
 	}
-out:
-	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6c9fed9..05827ff 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -702,24 +702,17 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 
 	pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);
 
-	if (unlikely(error_code & PFERR_RSVD_MASK)) {
-		r = handle_mmio_page_fault(vcpu, addr, mmu_is_nested(vcpu));
-		if (likely(r != RET_MMIO_PF_INVALID))
-			return r;
-
-		/*
-		 * page fault with PFEC.RSVD = 1 is caused by shadow
-		 * page fault, should not be used to walk guest page
-		 * table.
-		 */
-		error_code &= ~PFERR_RSVD_MASK;
-	};
-
 	r = mmu_topup_memory_caches(vcpu);
 	if (r)
 		return r;
 
 	/*
+	 * If PFEC.RSVD is set, this is a shadow page fault.
+	 * The bit needs to be cleared before walking guest page tables.
+	 */
+	error_code &= ~PFERR_RSVD_MASK;
+
+	/*
 	 * Look up the guest pte for the faulting address.
 	 */
 	r = FNAME(walk_addr)(&walker, vcpu, addr, error_code);
-- 
2.1.0
* Re: [PATCH 2/2] KVM: x86: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()
  2016-02-22 12:24  Paolo Bonzini

From: Paolo Bonzini
To: Takuya Yoshikawa
Cc: kvm, linux-kernel

On 22/02/2016 09:23, Takuya Yoshikawa wrote:
> Rather than placing a handle_mmio_page_fault() call in each
> vcpu->arch.mmu.page_fault() handler, moving it up to
> kvm_mmu_page_fault() makes the code better:
> 
> - avoids code duplication
> - for kvm_arch_async_page_ready(), which is the other caller of
>   vcpu->arch.mmu.page_fault(), removes an extra error_code check
> - avoids returning both RET_MMIO_PF_* values and raw integer values
>   from vcpu->arch.mmu.page_fault()
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c         | 39 ++++++++++++++++-----------------------
>  arch/x86/kvm/paging_tmpl.h | 19 ++++++------------
>  2 files changed, 22 insertions(+), 36 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index a28b734..2ce3892 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3370,13 +3370,6 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
> 
>  	pgprintk("%s: gva %lx error %x\n", __func__, gva, error_code);
> 
> -	if (unlikely(error_code & PFERR_RSVD_MASK)) {
> -		r = handle_mmio_page_fault(vcpu, gva, true);
> -
> -		if (likely(r != RET_MMIO_PF_INVALID))
> -			return r;
> -	}
> -
>  	r = mmu_topup_memory_caches(vcpu);
>  	if (r)
>  		return r;
> @@ -3460,13 +3453,6 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
> 
>  	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
> 
> -	if (unlikely(error_code & PFERR_RSVD_MASK)) {
> -		r = handle_mmio_page_fault(vcpu, gpa, true);
> -
> -		if (likely(r != RET_MMIO_PF_INVALID))
> -			return r;
> -	}
> -
>  	r = mmu_topup_memory_caches(vcpu);
>  	if (r)
>  		return r;
> @@ -4361,18 +4347,27 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
>  	enum emulation_result er;
>  	bool direct = vcpu->arch.mmu.direct_map || mmu_is_nested(vcpu);
> 
> +	if (unlikely(error_code & PFERR_RSVD_MASK)) {
> +		r = handle_mmio_page_fault(vcpu, cr2, direct);
> +		if (r == RET_MMIO_PF_EMULATE) {
> +			emulation_type = 0;
> +			goto emulate;
> +		}
> +		if (r == RET_MMIO_PF_RETRY)
> +			return 1;
> +		if (r < 0)
> +			return r;

It's a bit weird how RET_MMIO_PF_RETRY is zero, but unifying all the
return values of page fault routines is best left for another day.

Applied to queue, thanks.

Paolo

> +	}
> +
>  	r = vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);
>  	if (r < 0)
> -		goto out;
> -
> -	if (!r) {
> -		r = 1;
> -		goto out;
> -	}
> +		return r;
> +	if (!r)
> +		return 1;
> 
>  	if (mmio_info_in_cache(vcpu, cr2, direct))
>  		emulation_type = 0;
> -
> +emulate:
>  	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
> 
>  	switch (er) {
> @@ -4386,8 +4381,6 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code,
>  	default:
>  		BUG();
>  	}
> -out:
> -	return r;
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
> 
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 6c9fed9..05827ff 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -702,24 +702,17 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
> 
>  	pgprintk("%s: addr %lx err %x\n", __func__, addr, error_code);
> 
> -	if (unlikely(error_code & PFERR_RSVD_MASK)) {
> -		r = handle_mmio_page_fault(vcpu, addr, mmu_is_nested(vcpu));
> -		if (likely(r != RET_MMIO_PF_INVALID))
> -			return r;
> -
> -		/*
> -		 * page fault with PFEC.RSVD = 1 is caused by shadow
> -		 * page fault, should not be used to walk guest page
> -		 * table.
> -		 */
> -		error_code &= ~PFERR_RSVD_MASK;
> -	};
> -
>  	r = mmu_topup_memory_caches(vcpu);
>  	if (r)
>  		return r;
> 
>  	/*
> +	 * If PFEC.RSVD is set, this is a shadow page fault.
> +	 * The bit needs to be cleared before walking guest page tables.
> +	 */
> +	error_code &= ~PFERR_RSVD_MASK;
> +
> +	/*
>  	 * Look up the guest pte for the faulting address.
>  	 */
>  	r = FNAME(walk_addr)(&walker, vcpu, addr, error_code);
> 