From: Marc Zyngier <maz@kernel.org>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Peter Xu <peterx@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Peter Feiner <pfeiner@google.com>,
Andrew Jones <drjones@redhat.com>,
"Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>,
kvm list <kvm@vger.kernel.org>
Subject: Re: [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
Date: Sat, 05 Mar 2022 16:55:30 +0000 [thread overview]
Message-ID: <878rtotk3h.wl-maz@kernel.org> (raw)
In-Reply-To: <CALzav=ccRmvCB+FsN64JujOVpb7-ocdzkiBrYLFGFRQUa7DbWQ@mail.gmail.com>
On Fri, 04 Mar 2022 21:59:12 +0000,
David Matlack <dmatlack@google.com> wrote:
>
> On Thu, Feb 24, 2022 at 11:20 AM David Matlack <dmatlack@google.com> wrote:
> >
> > On Thu, Feb 24, 2022 at 3:29 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > On Thu, 03 Feb 2022 01:00:47 +0000,
> > > David Matlack <dmatlack@google.com> wrote:
> > > >
>
> [...]
>
> > > >
> > > > /* Cache some mmu pages needed inside spinlock regions */
> > > > - struct kvm_mmu_memory_cache mmu_page_cache;
> > > > + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);
> > >
> > > I must say I'm really not a fan of the anonymous structure trick. I
> > > can see why you are doing it that way, but it feels pretty brittle.
> >
> > Yeah I don't love it. It's really optimizing for minimizing the patch diff.
> >
> > The alternative I considered was to dynamically allocate the
> > kvm_mmu_memory_cache structs. This would get rid of the anonymous
> > struct and the objects array, and also eliminate the rather gross
> > capacity hack in kvm_mmu_topup_memory_cache().
> >
> > The downsides of this approach are more code and more failure paths
> > if the allocation fails.
>
> I tried changing all kvm_mmu_memory_cache structs to be dynamically
> allocated, but it added a lot of complexity to the setup/teardown
> code paths in x86, arm64, mips, and riscv (the arches that use the
> caches). I don't think this route is worth it, especially since these
> structs don't *need* to be dynamically allocated.
>
> When you said the anonymous struct feels brittle, what did you have in
> mind specifically?
I can perfectly see someone using a bare kvm_mmu_memory_cache and
searching for a while why they end up with memory corruption. Yes, this
would be a rookie mistake, but there is an expectation all over the
kernel that DEFINE_* and the corresponding structure are the same object.
[...]
> I see two alternatives to make this cleaner:
>
> 1. Dynamically allocate just this cache. The caches defined in
> vcpu_arch will continue to use DEFINE_KVM_MMU_MEMORY_CACHE(). This
> would get rid of the outer struct but require an extra memory
> allocation.
> 2. Move this cache to struct kvm_arch using
> DEFINE_KVM_MMU_MEMORY_CACHE(). Then we don't need to stack allocate it
> or dynamically allocate it.
>
> Do either of these approaches appeal to you more than the current one?
Certainly, #2 feels more solid. Dynamic allocations (and the resulting
pointer chasing) are usually costly in terms of performance, so I'd
avoid it if at all possible.
That being said, if it turns out that #2 isn't practical, I won't get
in the way of your current approach. Moving kvm_mmu_memory_cache to
core code was definitely a good cleanup, and I'm not overly excited
by the prospect of *more* arch-specific code.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.