kvmarm.lists.cs.columbia.edu archive mirror
From: David Matlack <dmatlack@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Marc Zyngier <maz@kernel.org>, Albert Ou <aou@eecs.berkeley.edu>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS \(KVM/mips\)"
	<kvm@vger.kernel.org>, Huacai Chen <chenhuacai@kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS \(KVM/mips\)"
	<linux-mips@vger.kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	"open list:KERNEL VIRTUAL MACHINE FOR RISC-V \(KVM/riscv\)"
	<kvm-riscv@lists.infradead.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Ben Gardon <bgardon@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 \(KVM/arm64\)"
	<kvmarm@lists.cs.columbia.edu>, Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v3 00/23] KVM: Extend Eager Page Splitting to the shadow MMU
Date: Wed, 13 Apr 2022 17:57:53 +0000	[thread overview]
Message-ID: <YlcPIYJ0CB2qnfpT@google.com> (raw)
In-Reply-To: <YlYhO7GvjKY1cwHr@google.com>

On Wed, Apr 13, 2022 at 01:02:51AM +0000, Sean Christopherson wrote:
> On Tue, Apr 12, 2022, David Matlack wrote:
> > On Mon, Apr 11, 2022 at 5:39 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Mon, Apr 11, 2022, David Matlack wrote:
> > > >
> > > > One thing that would be helpful is if you can explain in a bit more
> > > > specifically what you'd like to see. Part of the reason why I prefer
> > > > to sequence your proposal after eager page splitting is that I do not
> > > > fully understand what you're proposing, and how complex it would be.
> > > > e.g. Forking FNAME(fetch), FNAME(page_fault), and kvm_mmu_get_page()
> > > > for nested MMUs does not sound like less churn.
> > >
> > > Oh, it's most definitely not less code, and probably more churn.  But, it's churn
> > > that pushes us in a more favorable direction and that is desirable long term.  I
> > > don't mind churning code, but I want the churn to make future life easier, not
> > > harder.  Details below.
> > 
> > Of course. Let's make sure we're on the same page about which churn
> > introduced by this series would make future life harder, so that we
> > can avoid it. If I understand you correctly, it's the following 2 changes:
> > 
> >  (a.) Using separate functions to allocate SPs and initialize SPs.
> >  (b.) Separating kvm_mmu_find_shadow_page() from __kvm_mmu_find_shadow_page().
> > 
> > (a.) stems from the fact that SP allocation during eager page
> > splitting is made directly rather than through kvm_mmu_memory_caches,
> > which was what you pushed for in the TDP MMU implementation. We could
> > instead use kvm_mmu_memory_caches for the shadow MMU eager page
> 
> ...
> 
> > So even if we did everything you proposed (which seems like an awful
> > lot just to avoid __kvm_mmu_find_shadow_page()), there's a chance we
> > would still end up with the exact same code. i.e.
> > kvm_mmu_nested_tdp_find_sp() would be implemented by calling
> > __kvm_mmu_find_shadow_page(), because it would be a waste to
> > re-implement an almost identical function?
> 
> I went far enough down this path to know that my idea isn't completely awful,
> and wouldn't actually need to fork FNAME(page_fault) at this time, but sadly I
> still dislike the end result.

Thanks for looking into it so quickly so we could figure out a path
forward.

> 
> Your assessment that we'd still end up with very similar (if not quite exact)
> code is spot on.  Ditto for your other assertion in (a) about using the caches.
> 
> My vote for this series is to go the cache route, e.g. wrap kvm_mmu_memory_caches
> in a struct and pass that into kvm_mmu_get_page().  I still think it was the right
> call to ignore the caches for the TDP MMU, it gives the TDP MMU more flexibility
> and it was trivial to bypass the caches since the TDP MMU was doing its own thing
> anyways.
> 
> But for the shadow MMU, IMO the cons outweigh the pros.  E.g. in addition to
> ending up with two similar but subtly different "get page" flows, passing around
> "struct kvm_mmu_page **spp" is a bit unpleasant.  Ditto for having a partially
> initialized kvm_mmu_page.  The split code also ends up in a weird state where it
> uses the caches for the pte_list, but not the other allocations.

Sounds good. I will rework the series to use kvm_mmu_memory_cache
structs for the SP allocation during eager page splitting. That will
eliminate the separate allocation and initialization steps, which will
be a nice cleanup. And it will be great to get rid of the spp crud.

And per your earlier feedback, I will also limit eager page splitting to
nested MMUs.

> 
> There will be one wart due to unsync pages needing @vcpu, but we can pass in NULL
> for the split case and assert that @vcpu is non-null since all of the children
> should be direct.

The NULL vcpu check will be a little gross, but it should never trigger
in practice since eager page splitting always requests direct SPs. My
preference has been to enforce that in code by splitting out
__kvm_mmu_find_shadow_page(), but I can see that the advantage of your
proposal is that eager page splitting and faults will go through the
exact same code path to get a kvm_mmu_page.

> 
> 		if (sp->unsync) {
> 			if (WARN_ON_ONCE(!vcpu)) {
> 				kvm_mmu_prepare_zap_page(kvm, sp,
> 							 &invalid_list);
> 				continue;
> 			}
> 
> 			/*
> 			 * The page is good, but is stale.  kvm_sync_page does
> 			 * get the latest guest state, but (unlike mmu_unsync_children)
> 			 * it doesn't write-protect the page or mark it synchronized!
> 			 * This way the validity of the mapping is ensured, but the
> 			 * overhead of write protection is not incurred until the
> 			 * guest invalidates the TLB mapping.  This allows multiple
> 			 * SPs for a single gfn to be unsync.
> 			 *
> 			 * If the sync fails, the page is zapped.  If so, break
> 			 * in order to rebuild it.
> 			 */
> 			if (!kvm_sync_page(vcpu, sp, &invalid_list))
> 				break;
> 
> 			WARN_ON(!list_empty(&invalid_list));
> 			kvm_flush_remote_tlbs(kvm);
> 		}
