From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Andrew Jones <drjones@redhat.com>,
Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
maciej.szmigiero@oracle.com,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.cs.columbia.edu>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<linux-mips@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<kvm@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)"
<kvm-riscv@lists.infradead.org>,
Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v4 20/20] KVM: x86/mmu: Extend Eager Page Splitting to nested MMUs
Date: Mon, 9 May 2022 16:48:12 +0000
Message-ID: <YnlFzMpJZNfFuFic@google.com>
In-Reply-To: <20220422210546.458943-21-dmatlack@google.com>
On Fri, Apr 22, 2022, David Matlack wrote:
> +static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> +{
> + if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
> + return true;
> +
> + /*
> + * In the worst case, SPLIT_DESC_CACHE_CAPACITY descriptors are needed
> + * to split a single huge page. Calculating how many are actually needed
> + * is possible but not worth the complexity.
> + */
> + return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_CAPACITY) ||
> + need_topup(&kvm->arch.split_page_header_cache, 1) ||
> + need_topup(&kvm->arch.split_shadow_page_cache, 1);
Uber nit that Paolo will make fun of me for... please align the indentation:
	return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_CAPACITY) ||
	       need_topup(&kvm->arch.split_page_header_cache, 1) ||
	       need_topup(&kvm->arch.split_shadow_page_cache, 1);
> +static void nested_mmu_split_huge_page(struct kvm *kvm,
> + const struct kvm_memory_slot *slot,
> + u64 *huge_sptep)
> +
> +{
> + struct kvm_mmu_memory_cache *cache = &kvm->arch.split_desc_cache;
> + u64 huge_spte = READ_ONCE(*huge_sptep);
> + struct kvm_mmu_page *sp;
> + bool flush = false;
> + u64 *sptep, spte;
> + gfn_t gfn;
> + int index;
> +
> + sp = nested_mmu_get_sp_for_split(kvm, huge_sptep);
> +
> + for (index = 0; index < PT64_ENT_PER_PAGE; index++) {
> + sptep = &sp->spt[index];
> + gfn = kvm_mmu_page_get_gfn(sp, index);
> +
> + /*
> + * The SP may already have populated SPTEs, e.g. if this huge
> + * page is aliased by multiple sptes with the same access
> + * permissions. These entries are guaranteed to map the same
> + * gfn-to-pfn translation since the SP is direct, so no need to
> + * modify them.
> + *
> + * However, if a given SPTE points to a lower level page table,
> + * that lower level page table may only be partially populated.
> + * Installing such SPTEs would effectively unmap a portion of the
> + * huge page, which requires a TLB flush.
Maybe explain why a TLB flush is required? E.g. "which requires a TLB flush as
a subsequent mmu_notifier event on the unmapped region would fail to detect the
need to flush".
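I.e. fold it into the existing comment, something along the lines of (just splicing
your text and the suggestion together):

	 * However, if a given SPTE points to a lower level page table,
	 * that lower level page table may only be partially populated.
	 * Installing such SPTEs would effectively unmap a portion of the
	 * huge page, which requires a TLB flush as a subsequent mmu_notifier
	 * event on the unmapped region would fail to detect the need to flush.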
> +static bool nested_mmu_skip_split_huge_page(u64 *huge_sptep)
"skip" is kinda odd terminology. It reads like a command, but it's actually
querying state _and_ it's returning a boolean, which I've learned to hate :-)
I don't see any reason for a helper; there's one caller and it can just do
"continue" directly.
> +static void kvm_nested_mmu_try_split_huge_pages(struct kvm *kvm,
> + const struct kvm_memory_slot *slot,
> + gfn_t start, gfn_t end,
> + int target_level)
> +{
> + int level;
> +
> + /*
> + * Split huge pages starting with KVM_MAX_HUGEPAGE_LEVEL and working
> + * down to the target level. This ensures pages are recursively split
> + * all the way to the target level. There's no need to split pages
> + * already at the target level.
> + */
> + for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) {
Unnecessary braces.
> + slot_handle_level_range(kvm, slot,
> + nested_mmu_try_split_huge_pages,
> + level, level, start, end - 1,
> + true, false);
IMO it's worth running over by 4 chars to drop 2 lines:
	for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--)
		slot_handle_level_range(kvm, slot, nested_mmu_try_split_huge_pages,
					level, level, start, end - 1, true, false);
> + }
> +}
> +
> /* Must be called with the mmu_lock held in write-mode. */
Add a lockdep assertion, not a comment.
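E.g. something like this (sketch, assuming holding mmu_lock for write really is
a hard requirement here):

	lockdep_assert_held_write(&kvm->mmu_lock);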
> void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
> const struct kvm_memory_slot *memslot,
> u64 start, u64 end,
> int target_level)
> {
> - if (is_tdp_mmu_enabled(kvm))
> - kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end,
> - target_level, false);
> + if (!is_tdp_mmu_enabled(kvm))
> + return;
> +
> + kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level,
> + false);
> +
> + if (kvm_memslots_have_rmaps(kvm))
> + kvm_nested_mmu_try_split_huge_pages(kvm, memslot, start, end,
> + target_level);
>
> /*
> * A TLB flush is unnecessary at this point for the same reasons as in
> @@ -6051,10 +6304,19 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
> u64 start = memslot->base_gfn;
> u64 end = start + memslot->npages;
>
> - if (is_tdp_mmu_enabled(kvm)) {
> - read_lock(&kvm->mmu_lock);
> - kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);
> - read_unlock(&kvm->mmu_lock);
> + if (!is_tdp_mmu_enabled(kvm))
> + return;
> +
> + read_lock(&kvm->mmu_lock);
> + kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level,
> + true);
Eh, let this poke out.
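I.e. just:

	kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);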
> + read_unlock(&kvm->mmu_lock);
> +
> + if (kvm_memslots_have_rmaps(kvm)) {
> + write_lock(&kvm->mmu_lock);
> + kvm_nested_mmu_try_split_huge_pages(kvm, memslot, start, end,
> + target_level);
> + write_unlock(&kvm->mmu_lock);
Super duper nit: all other flows do rmaps first, then the TDP MMU.  Might as well keep
that ordering here, otherwise it suggests there's a reason to be different.
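E.g. a rough sketch of the reordering (same calls as above, just swapped, with the
TDP MMU call poking out past 80 chars per the earlier comment):

	if (kvm_memslots_have_rmaps(kvm)) {
		write_lock(&kvm->mmu_lock);
		kvm_nested_mmu_try_split_huge_pages(kvm, memslot, start, end,
						    target_level);
		write_unlock(&kvm->mmu_lock);
	}

	read_lock(&kvm->mmu_lock);
	kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);
	read_unlock(&kvm->mmu_lock);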
> }
>
> /*
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ab336f7c82e4..e123e24a130f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12161,6 +12161,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> * page faults will create the large-page sptes.
> */
> kvm_mmu_zap_collapsible_sptes(kvm, new);
> +
> + /*
> + * Free any memory left behind by eager page splitting. Ignore
> + * the module parameter since userspace might have changed it.
> + */
> + free_split_caches(kvm);
> } else {
> /*
> * Initially-all-set does not require write protecting any page,
> --
> 2.36.0.rc2.479.g8af0fa9b8e-goog
>