From: Gleb Natapov <gleb@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH 5/7] KVM: mmu: remove argument to kvm_init_shadow_mmu and kvm_init_shadow_ept_mmu
Date: Thu, 3 Oct 2013 15:11:27 +0300 [thread overview]
Message-ID: <20131003121126.GF17294@redhat.com> (raw)
In-Reply-To: <524D5A2D.6070001@redhat.com>
On Thu, Oct 03, 2013 at 01:51:09PM +0200, Paolo Bonzini wrote:
> Il 03/10/2013 13:25, Gleb Natapov ha scritto:
> > On Wed, Oct 02, 2013 at 04:56:14PM +0200, Paolo Bonzini wrote:
> >> The initialization function in mmu.c can always use walk_mmu, which
> >> is known to be vcpu->arch.mmu. Only init_kvm_nested_mmu is used to
> >> initialize vcpu->arch.nested_mmu.
> >>
> >> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >> ---
> >> arch/x86/kvm/mmu.c | 15 +++++++++------
> >> arch/x86/kvm/mmu.h | 5 ++---
> >> arch/x86/kvm/svm.c | 4 ++--
> >> arch/x86/kvm/vmx.c | 4 ++--
> >> 4 files changed, 15 insertions(+), 13 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> >> index 40772ef..ac598c8 100644
> >> --- a/arch/x86/kvm/mmu.c
> >> +++ b/arch/x86/kvm/mmu.c
> >> @@ -3742,11 +3742,13 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> >> update_last_pte_bitmap(vcpu, context);
> >> }
> >>
> >> -void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
> >> +void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
> >> {
> >> bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
> >> + struct kvm_mmu *context = vcpu->arch.walk_mmu;
> > I'd rather use &vcpu->arch.mmu here.
> >
> >> +
> >> ASSERT(vcpu);
> >> - ASSERT(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
> >> + ASSERT(!VALID_PAGE(context->root_hpa));
> >>
> >> if (!is_paging(vcpu))
> >> nonpaging_init_context(vcpu, context);
> >> @@ -3765,11 +3767,12 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
> >> }
> >> EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
> >>
> >> -void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
> >> - bool execonly)
> >> +void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly)
> >> {
> >> + struct kvm_mmu *context = vcpu->arch.walk_mmu;
> >> +
> >> ASSERT(vcpu);
> >> - ASSERT(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
> >> + ASSERT(!VALID_PAGE(context->root_hpa));
> >>
> >> context->shadow_root_level = kvm_x86_ops->get_tdp_level();
> >>
> >> @@ -3790,7 +3793,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
> >>
> >> static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
> >> {
> >> - kvm_init_shadow_mmu(vcpu, vcpu->arch.walk_mmu);
> >> + kvm_init_shadow_mmu(vcpu);
> >> vcpu->arch.walk_mmu->set_cr3 = kvm_x86_ops->set_cr3;
> >> vcpu->arch.walk_mmu->get_cr3 = get_cr3;
> >> vcpu->arch.walk_mmu->get_pdptr = kvm_pdptr_read;
> > And change walk_mmu to mmu here too for consistency with all other
> > places. Basically if you want to initialize use mmu or nested_mmu.
> > Use walk_mmu pointer only when you need to use mmu.
>
> Makes sense, especially considering how kvm_init_shadow_mmu initializes
> vcpu->arch.mmu.base_role directly.
>
> Something like this (large enough that I'll probably make it a separate
> patch in v2)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ac598c8..d1f53cf 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3702,7 +3704,7 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
>
> static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> {
> - struct kvm_mmu *context = vcpu->arch.walk_mmu;
> + struct kvm_mmu *context = &vcpu->arch.mmu;
>
> context->base_role.word = 0;
> context->page_fault = tdp_page_fault;
> @@ -3745,7 +3747,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
> void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
> {
> bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
> - struct kvm_mmu *context = vcpu->arch.walk_mmu;
> + struct kvm_mmu *context = &vcpu->arch.mmu;
>
> ASSERT(vcpu);
> ASSERT(!VALID_PAGE(context->root_hpa));
> @@ -3759,17 +3761,17 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
> else
> paging32_init_context(vcpu, context);
>
> - vcpu->arch.mmu.base_role.nxe = is_nx(vcpu);
> - vcpu->arch.mmu.base_role.cr4_pae = !!is_pae(vcpu);
> - vcpu->arch.mmu.base_role.cr0_wp = is_write_protection(vcpu);
> - vcpu->arch.mmu.base_role.smep_andnot_wp
> + context->base_role.nxe = is_nx(vcpu);
> + context->base_role.cr4_pae = !!is_pae(vcpu);
> + context->base_role.cr0_wp = is_write_protection(vcpu);
> + context->base_role.smep_andnot_wp
> = smep && !is_write_protection(vcpu);
> }
> EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
>
> void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly)
> {
> - struct kvm_mmu *context = vcpu->arch.walk_mmu;
> + struct kvm_mmu *context = &vcpu->arch.mmu;
>
> ASSERT(vcpu);
> ASSERT(!VALID_PAGE(context->root_hpa));
> @@ -3793,11 +3795,13 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
>
> static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
> {
> + struct kvm_mmu *context = &vcpu->arch.mmu;
> +
> kvm_init_shadow_mmu(vcpu);
> - vcpu->arch.walk_mmu->set_cr3 = kvm_x86_ops->set_cr3;
> - vcpu->arch.walk_mmu->get_cr3 = get_cr3;
> - vcpu->arch.walk_mmu->get_pdptr = kvm_pdptr_read;
> - vcpu->arch.walk_mmu->inject_page_fault = kvm_inject_page_fault;
> + context->set_cr3 = kvm_x86_ops->set_cr3;
> + context->get_cr3 = get_cr3;
> + context->get_pdptr = kvm_pdptr_read;
> + context->inject_page_fault = kvm_inject_page_fault;
> }
>
> static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
Yes.
>
> How far should I go? Should I also remove the context argument from
> nonpaging_init_context and friends, changing it to a local variable?
> (Doesn't seem like a big improvement in clarity).
>
If it does not, no need to do it. Hard for me to judge without trying.
--
Gleb.