From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	 Oliver Upton <oliver.upton@linux.dev>,
	Tianrui Zhao <zhaotianrui@loongson.cn>,
	 Bibo Mao <maobibo@loongson.cn>,
	Huacai Chen <chenhuacai@kernel.org>,
	 Michael Ellerman <mpe@ellerman.id.au>,
	Anup Patel <anup@brainfault.org>,
	 Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	 Albert Ou <aou@eecs.berkeley.edu>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	 Janosch Frank <frankja@linux.ibm.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	 Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Yan Zhao" <yan.y.zhao@intel.com>,
	"David Matlack" <dmatlack@google.com>,
	"David Stevens" <stevensd@chromium.org>,
	"Andrew Jones" <ajones@ventanamicro.com>
Subject: [PATCH v13 08/85] KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying
Date: Thu, 10 Oct 2024 11:23:10 -0700	[thread overview]
Message-ID: <20241010182427.1434605-9-seanjc@google.com> (raw)
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>

Mark pages/folios dirty when creating SPTEs to map PFNs into the guest,
not when zapping or modifying SPTEs, as marking folios dirty when zapping
or modifying SPTEs can be extremely inefficient.  E.g. when KVM is zapping
collapsible SPTEs to reconstitute a hugepage after disabling dirty logging,
KVM will mark every 4KiB pfn as dirty, even though _at least_ 512 pfns are
guaranteed to be in a single folio (the SPTE couldn't potentially be huge
if that weren't the case).  The problem only becomes worse for 1GiB
HugeTLB pages, as KVM can mark a single folio dirty 512*512 times.

Marking a folio dirty when mapping is functionally safe as KVM drops all
relevant SPTEs in response to an mmu_notifier invalidation, i.e. ensures
that the guest can't dirty a folio after access has been removed.

And because KVM already marks folios dirty when zapping/modifying SPTEs
for KVM reasons, i.e. not in response to an mmu_notifier invalidation,
there is no danger of "prematurely" marking a folio dirty.  E.g. if a
filesystem cleans a folio without first removing write access, then there
already exist races where KVM could mark a folio dirty before remote TLBs
are flushed, i.e. before guest writes are guaranteed to stop.  Furthermore,
x86 is literally the only architecture that marks folios dirty on the
backend; every other KVM architecture marks folios dirty at map time.

x86's unique behavior likely stems from the fact that x86's MMU predates
mmu_notifiers.  Long, long ago, before mmu_notifiers were added, marking
pages dirty when zapping SPTEs was logical, and perhaps even necessary, as
KVM held references to pages, i.e. kept a page's refcount elevated while
the page was mapped into the guest.  At the time, KVM's rmap_remove()
simply did:

        if (is_writeble_pte(*spte))
                kvm_release_pfn_dirty(pfn);
        else
                kvm_release_pfn_clean(pfn);

i.e. dropped the refcount and marked the page dirty at the same time.
After mmu_notifiers were introduced, commit acb66dd051d0 ("KVM: MMU:
don't hold pagecount reference for mapped sptes pages") removed the
refcount logic, but kept the dirty logic, i.e. converted the above to:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);

And for KVM x86, that's essentially how things have stayed over the last
~15 years, without anyone revisiting *why* KVM marks pages/folios dirty at
zap/modification time, e.g. the behavior was blindly carried forward to
the TDP MMU.

Practically speaking, the only downside to marking a folio dirty during
mapping is that KVM could trigger writeback of memory that was never
actually written.  Except that can't actually happen if KVM marks folios
dirty if and only if a writable SPTE is created (as done here), because
KVM always marks writable SPTEs as dirty during make_spte().  See commit
9b51a63024bd ("KVM: MMU: Explicitly set D-bit for writable spte."), circa
2015.

Note, KVM's access tracking logic for prefetched SPTEs is a bit odd.  If a
guest PTE is dirty and writable, KVM will create a writable SPTE, but then
mark the SPTE for access tracking.  Which isn't wrong, just a bit odd, as
it results in _more_ precise dirty tracking for MMUs _without_ A/D bits.

To keep things simple, mark the folio dirty before access tracking comes
into play, as an access-tracked SPTE can be restored in the fast page
fault path, i.e. without holding mmu_lock.  While writing SPTEs and
accessing memslots outside of mmu_lock is safe, marking a folio dirty is
not.  E.g. if the fast path gets interrupted _just_ after setting a SPTE,
the primary MMU could theoretically invalidate and free a folio before KVM
marks it dirty.  Unlike the shadow MMU, which waits for CPUs to respond to
an IPI, the TDP MMU only guarantees the page tables themselves won't be
freed (via RCU).

Opportunistically update a few stale comments.

Cc: David Matlack <dmatlack@google.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 30 ++++--------------------------
 arch/x86/kvm/mmu/paging_tmpl.h |  6 +++---
 arch/x86/kvm/mmu/spte.c        | 20 ++++++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c     | 12 ------------
 4 files changed, 25 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0f21d6f76cab..1ae823ebd12b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -547,10 +547,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 	}
 
-	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
+	if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
 		flush = true;
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-	}
 
 	return flush;
 }
@@ -593,9 +591,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
 
-	if (is_dirty_spte(old_spte))
-		kvm_set_pfn_dirty(pfn);
-
 	return old_spte;
 }
 
@@ -1250,16 +1245,6 @@ static bool spte_clear_dirty(u64 *sptep)
 	return mmu_spte_update(sptep, spte);
 }
 
-static bool spte_wrprot_for_clear_dirty(u64 *sptep)
-{
-	bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
-					       (unsigned long *)sptep);
-	if (was_writable && !spte_ad_enabled(*sptep))
-		kvm_set_pfn_dirty(spte_to_pfn(*sptep));
-
-	return was_writable;
-}
-
 /*
  * Gets the GFN ready for another round of dirty logging by clearing the
  *	- D bit on ad-enabled SPTEs, and
@@ -1275,7 +1260,8 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 	for_each_rmap_spte(rmap_head, &iter, sptep)
 		if (spte_ad_need_write_protect(*sptep))
-			flush |= spte_wrprot_for_clear_dirty(sptep);
+			flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
+						    (unsigned long *)sptep);
 		else
 			flush |= spte_clear_dirty(sptep);
 
@@ -1628,14 +1614,6 @@ static bool kvm_rmap_age_gfn_range(struct kvm *kvm,
 				clear_bit((ffs(shadow_accessed_mask) - 1),
 					(unsigned long *)sptep);
 			} else {
-				/*
-				 * Capture the dirty status of the page, so that
-				 * it doesn't get lost when the SPTE is marked
-				 * for access tracking.
-				 */
-				if (is_writable_pte(spte))
-					kvm_set_pfn_dirty(spte_to_pfn(spte));
-
 				spte = mark_spte_for_access_track(spte);
 				mmu_spte_update_no_track(sptep, spte);
 			}
@@ -3415,7 +3393,7 @@ static bool fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu,
 	 * harm. This also avoids the TLB flush needed after setting dirty bit
 	 * so non-PML cases won't be impacted.
 	 *
-	 * Compare with set_spte where instead shadow_dirty_mask is set.
+	 * Compare with make_spte() where instead shadow_dirty_mask is set.
 	 */
 	if (!try_cmpxchg64(sptep, &old_spte, new_spte))
 		return false;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 6e7bd8921c6f..fbaae040218b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -892,9 +892,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 /*
  * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is
- * safe because:
- * - The spte has a reference to the struct page, so the pfn for a given gfn
- *   can't change unless all sptes pointing to it are nuked first.
+ * safe because SPTEs are protected by mmu_notifiers and memslot generations, so
+ * the pfn for a given gfn can't change unless all SPTEs pointing to the gfn are
+ * nuked first.
  *
  * Returns
  * < 0: failed to sync spte
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 618059b30b8b..8e8d6ee79c8b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -232,8 +232,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		 * unnecessary (and expensive).
 		 *
 		 * The same reasoning applies to dirty page/folio accounting;
-		 * KVM will mark the folio dirty using the old SPTE, thus
-		 * there's no need to immediately mark the new SPTE as dirty.
+		 * KVM marked the folio dirty when the old SPTE was created,
+		 * thus there's no need to mark the folio dirty again.
 		 *
 		 * Note, both cases rely on KVM not changing PFNs without first
 		 * zapping the old SPTE, which is guaranteed by both the shadow
@@ -266,12 +266,28 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
 		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
 
+	/*
+	 * Mark the memslot dirty *after* modifying it for access tracking.
+	 * Unlike folios, memslots can be safely marked dirty out of mmu_lock,
+	 * i.e. in the fast page fault handler.
+	 */
 	if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
 		/* Enforced by kvm_mmu_hugepage_adjust. */
 		WARN_ON_ONCE(level > PG_LEVEL_4K);
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
+	/*
+	 * If the page that KVM got from the primary MMU is writable, i.e. if
+	 * it's host-writable, mark the page/folio dirty.  As alluded to above,
+	 * folios can't be safely marked dirty in the fast page fault handler,
+	 * and so KVM must (somewhat) speculatively mark the folio dirty even
+	 * though it isn't guaranteed to be written as KVM won't mark the folio
+	 * dirty if/when the SPTE is made writable.
+	 */
+	if (host_writable)
+		kvm_set_pfn_dirty(pfn);
+
 	*new_spte = spte;
 	return wrprot;
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 76bca7a726c1..517b384473c1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,10 +511,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (is_leaf != was_leaf)
 		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
 
-	if (was_leaf && is_dirty_spte(old_spte) &&
-	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
 	 * the paging structure.  Note the WARN on the PFN changing without the
@@ -1249,13 +1245,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 							 iter->level);
 		new_spte = iter->old_spte & ~shadow_accessed_mask;
 	} else {
-		/*
-		 * Capture the dirty status of the page, so that it doesn't get
-		 * lost when the SPTE is marked for access tracking.
-		 */
-		if (is_writable_pte(iter->old_spte))
-			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
 		new_spte = mark_spte_for_access_track(iter->old_spte);
 		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
 							iter->old_spte, new_spte,
@@ -1596,7 +1585,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
 					       iter.old_spte,
 					       iter.old_spte & ~dbit);
-		kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
 	}
 
 	rcu_read_unlock();
-- 
2.47.0.rc1.288.g06298d1525-goog
