From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Peter Xu <peterx@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
kvm@vger.kernel.org, Ben Gardon <bgardon@google.com>,
Joerg Roedel <joro@8bytes.org>, Jim Mattson <jmattson@google.com>,
Wanpeng Li <wanpengli@tencent.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Janis Schoetterl-Glausch <scgl@linux.vnet.ibm.com>,
Junaid Shahid <junaids@google.com>,
Oliver Upton <oupton@google.com>,
Harish Barathvajasankar <hbarath@google.com>,
Peter Shier <pshier@google.com>
Subject: Re: [RFC PATCH 12/15] KVM: x86/mmu: Split large pages when dirty logging is enabled
Date: Wed, 1 Dec 2021 01:00:57 +0000 [thread overview]
Message-ID: <YabJSdRklj3T6FWJ@google.com> (raw)
In-Reply-To: <CALzav=cXgCSP3RLh+gss65==B6eYXC82V3zNjv2KCNehUMQewA@mail.gmail.com>

On Tue, Nov 30, 2021, David Matlack wrote:
> On Fri, Nov 26, 2021 at 4:01 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > Hi, David,
> >
> > On Fri, Nov 19, 2021 at 11:57:56PM +0000, David Matlack wrote:
> > > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > > index 2a7564703ea6..432a4df817ec 100644
> > > --- a/arch/x86/include/asm/kvm_host.h
> > > +++ b/arch/x86/include/asm/kvm_host.h
> > > @@ -1232,6 +1232,9 @@ struct kvm_arch {
> > > hpa_t hv_root_tdp;
> > > spinlock_t hv_root_tdp_lock;
> > > #endif
> > > +
> > > + /* MMU caches used when splitting large pages during VM-ioctls. */
> > > + struct kvm_mmu_memory_caches split_caches;
> >
> > Are mmu_gfn_array_cache and mmu_pte_list_desc_cache wasted here? I saw that
> > "struct kvm_mmu_memory_cache" still takes up quite a few hundred bytes; I
> > just want to make sure we won't waste them in vain.
>
> Yes, they are wasted right now. But there are a couple of things to keep in mind:
>
> 1. They are also wasted in every vCPU (in the per-vCPU caches) that
> does not use the shadow MMU.
> 2. They will (I think) be used eventually when I add Eager Page
> Splitting support to the shadow MMU.
> 3. split_caches is per-VM so it's only a few hundred bytes per VM.
>
> If we really want to save the memory, the right way forward might be to
> make each kvm_mmu_memory_cache a pointer instead of an embedded
> struct. Then we could allocate each one dynamically, only as needed. I
> can add that to my TODO list, but I don't think it'd be worth blocking
> this series on it given the points above.
>
> >
> > [...]
> >
> > > +int mmu_topup_split_caches(struct kvm *kvm)
> > > +{
> > > + struct kvm_mmu_memory_caches *split_caches = &kvm->arch.split_caches;
> > > + int r;
> > > +
> > > + assert_split_caches_invariants(kvm);
> > > +
> > > + r = kvm_mmu_topup_memory_cache(&split_caches->page_header_cache, 1);
> > > + if (r)
> > > + goto out;
> > > +
> > > + r = kvm_mmu_topup_memory_cache(&split_caches->shadow_page_cache, 1);
> > > + if (r)
> > > + goto out;
> >
> > Is it intended to only top up with one cache object? IIUC this means we'll try
> > to proactively yield the cpu for each huge page split right after the
> > object is consumed.
> >
> > Wondering whether it'd be more efficient to use a slightly larger number, so
> > we don't overload the memory but also make the loop a bit more efficient.
>
> IIUC, 1 here is just the min needed for kvm_mmu_topup_memory_cache to
> return success. I chose 1 for each because it's the minimum necessary
> to make forward progress (split one large page).

The @min parameter is the minimum number of pages that _must_ be available in the
cache, i.e. it's the maximum number of pages that can theoretically be used by
whatever upcoming operation is going to be consuming pages from the cache.

So '1' is technically correct, but I think it's the wrong choice given the behavior
of this code. E.g. if there's 1 object in the cache, the initial top-up will do
nothing, and then tdp_mmu_split_large_pages_root() will almost immediately drop
mmu_lock to top up the cache. Since the in-loop usage explicitly checks for an
empty cache, i.e. any non-zero @min will have identical behavior, I think it makes
sense to use KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE _and_ add a comment explaining why.
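
E.g. something like this (untested sketch; the helper and field names are taken
from this series, and the comment wording is purely illustrative):

	/*
	 * Fill the caches to capacity.  Any non-zero @min behaves identically
	 * because the split loop checks for an empty cache before consuming
	 * objects, so grab as many objects as possible per mmu_lock
	 * acquisition instead of dropping the lock after every split.
	 */
	r = kvm_mmu_topup_memory_cache(&split_caches->page_header_cache,
				       KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE);
	if (r)
		goto out;

	r = kvm_mmu_topup_memory_cache(&split_caches->shadow_page_cache,
				       KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE);
	if (r)
		goto out;
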
> No matter what you pass for min, kvm_mmu_topup_memory_cache() will
> still always try to allocate KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
> objects.

No, it will try to fill the cache up to KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE objects
if and only if there are fewer than @min objects in the cache.
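
For reference, the topup logic at the time of this series is roughly the below
(paraphrased from virt/kvm/kvm_main.c, allocation details trimmed):

int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
{
	void *obj;

	/* Nothing to do if the cache already holds at least @min objects. */
	if (mc->nobjs >= min)
		return 0;

	/* Otherwise fill the cache to its full capacity. */
	while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
		if (!obj)
			return mc->nobjs >= min ? 0 : -ENOMEM;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}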