From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Andrew Jones <drjones@redhat.com>,
Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
maciej.szmigiero@oracle.com,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.cs.columbia.edu>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<linux-mips@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<kvm@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)"
<kvm-riscv@lists.infradead.org>,
Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v4 05/20] KVM: x86/mmu: Consolidate shadow page allocation and initialization
Date: Thu, 5 May 2022 22:10:28 +0000
Message-ID: <YnRLVB+t0bLBeu+/@google.com>
In-Reply-To: <20220422210546.458943-6-dmatlack@google.com>
On Fri, Apr 22, 2022, David Matlack wrote:
> Consolidate kvm_mmu_alloc_page() and kvm_mmu_alloc_shadow_page() under
> the latter so that all shadow page allocation and initialization happens
> in one place.
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++----------------------
> 1 file changed, 17 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 5582badf4947..7d03320f6e08 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1703,27 +1703,6 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
> mmu_spte_clear_no_track(parent_pte);
> }
>
> -static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
> -{
> - struct kvm_mmu_page *sp;
> -
> - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> - if (!direct)
> - sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
> - set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> -
> - /*
> - * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> - * depends on valid pages being added to the head of the list. See
> - * comments in kvm_zap_obsolete_pages().
> - */
> - sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
> - list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
> - kvm_mod_used_mmu_pages(vcpu->kvm, +1);
> - return sp;
> -}
> -
> static void mark_unsync(u64 *spte);
> static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
> {
> @@ -2100,7 +2079,23 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
> struct hlist_head *sp_list,
> union kvm_mmu_page_role role)
> {
> - struct kvm_mmu_page *sp = kvm_mmu_alloc_page(vcpu, role.direct);
> + struct kvm_mmu_page *sp;
> +
> + sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> + sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> + if (!role.direct)
> + sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
> +
> + set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> +
> + /*
> + * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
> + * depends on valid pages being added to the head of the list. See
> + * comments in kvm_zap_obsolete_pages().
> + */
> + sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
> + list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
> + kvm_mod_used_mmu_pages(vcpu->kvm, +1);
To reduce churn later on, what about opportunistically grabbing vcpu->kvm in a
local variable in this patch?
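E.g. a sketch of what I have in mind, in the context of this patch (untested,
and "kvm" for the local variable name is just my suggestion):

	struct kvm *kvm = vcpu->kvm;
	struct kvm_mmu_page *sp;

	/* The memory caches are per-vCPU, so those still go through @vcpu. */
	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
	if (!role.direct)
		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);

	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

	/*
	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
	 * depends on valid pages being added to the head of the list.  See
	 * comments in kvm_zap_obsolete_pages().
	 */
	sp->mmu_valid_gen = kvm->arch.mmu_valid_gen;
	list_add(&sp->link, &kvm->arch.active_mmu_pages);
	kvm_mod_used_mmu_pages(kvm, +1);

Then everything that's keyed off the VM already reads "kvm" by the time the
vcpu/kvm swap happens later in the series.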
Actually, at that point, it's a super trivial change, so you can probably just
drop

  KVM: x86/mmu: Replace vcpu with kvm in kvm_mmu_alloc_shadow_page()

and do the vcpu/kvm swap as part of

  KVM: x86/mmu: Pass memory caches to allocate SPs separately
> sp->gfn = gfn;
> sp->role = role;
> --
> 2.36.0.rc2.479.g8af0fa9b8e-goog
>