From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Tianrui Zhao <zhaotianrui@loongson.cn>,
Bibo Mao <maobibo@loongson.cn>,
Huacai Chen <chenhuacai@kernel.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Janosch Frank <frankja@linux.ibm.com>,
Claudio Imbrenda <imbrenda@linux.ibm.com>,
Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
linux-kernel@vger.kernel.org,
David Matlack <dmatlack@google.com>,
David Stevens <stevensd@chromium.org>
Subject: [PATCH v12 07/84] KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying
Date: Fri, 26 Jul 2024 16:51:16 -0700
Message-ID: <20240726235234.228822-8-seanjc@google.com>
In-Reply-To: <20240726235234.228822-1-seanjc@google.com>

Mark pages/folios dirty when creating SPTEs to map PFNs into the guest,
not when zapping or modifying SPTEs, as marking folios dirty when zapping
or modifying SPTEs can be extremely inefficient. E.g. when KVM is zapping
collapsible SPTEs to reconstitute a hugepage after disabling dirty logging,
KVM will mark every 4KiB pfn as dirty, even though _at least_ 512 pfns are
guaranteed to be in a single folio (the SPTE couldn't possibly be huge
if that weren't the case). The problem only becomes worse for 1GiB
HugeTLB pages, as KVM can mark a single folio dirty 512*512 times.
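
To put a number on the overhead (simple arithmetic, not data from this
patch): a fully-mapped 1GiB HugeTLB folio that is zapped one 4KiB SPTE
at a time redundantly dirties the same folio 262,144 times, i.e. the
pre-patch behavior is conceptually:

	/*
	 * Hypothetical illustration of the pre-patch zap behavior for
	 * a 1GiB folio mapped with 4KiB SPTEs; not actual kernel code.
	 */
	for (i = 0; i < 512 * 512; i++)		/* 262,144 iterations */
		kvm_set_pfn_dirty(pfn + i);	/* same folio every time */
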
Marking a folio dirty when mapping is functionally safe as KVM drops all
relevant SPTEs in response to an mmu_notifier invalidation, i.e. ensures
that the guest can't dirty a folio after access has been removed.
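
For reference, the invalidation contract that makes this safe is roughly
the following call chain (a simplified sketch of the existing flow, not
new code):

	mmu_notifier_invalidate_range_start()
	  kvm_mmu_notifier_invalidate_range_start()
	    kvm_mmu_unmap_gfn_range() -> kvm_unmap_gfn_range()
	      /* SPTEs zapped, guest writes stop */
	/* only then may the primary MMU clean/reclaim the folio */
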
And because KVM already marks folios dirty when zapping/modifying SPTEs
for KVM reasons, i.e. not in response to an mmu_notifier invalidation,
there is no danger of "prematurely" marking a folio dirty. E.g. if a
filesystem cleans a folio without first removing write access, then there
already exists races where KVM could mark a folio dirty before remote TLBs
are flushed, i.e. before guest writes are guaranteed to stop. Furthermore,
x86 is literally the only architecture that marks folios dirty on the
backend; every other KVM architecture marks folios dirty at map time.
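
E.g. arm64's user_mem_abort() does, roughly (excerpt of the existing
code, trimmed for brevity):

	/* Mark the page dirty only if the fault is handled successfully */
	if (writable && !ret) {
		kvm_set_pfn_dirty(pfn);
		mark_page_dirty_in_slot(kvm, memslot, gfn);
	}
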
x86's unique behavior likely stems from the fact that x86's MMU predates
mmu_notifiers. Long, long ago, before mmu_notifiers were added, marking
pages dirty when zapping SPTEs was logical, and perhaps even necessary, as
KVM held references to pages, i.e. kept a page's refcount elevated while
the page was mapped into the guest. At the time, KVM's rmap_remove()
simply did:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);
	else
		kvm_release_pfn_clean(pfn);

i.e. dropped the refcount and marked the page dirty at the same time.

After mmu_notifiers were introduced, commit acb66dd051d0 ("KVM: MMU:
don't hold pagecount reference for mapped sptes pages") removed the
refcount logic, but kept the dirty logic, i.e. converted the above to:

	if (is_writeble_pte(*spte))
		kvm_release_pfn_dirty(pfn);

And for KVM x86, that's essentially how things have stayed over the last
~15 years, without anyone revisiting *why* KVM marks pages/folios dirty at
zap/modification time, e.g. the behavior was blindly carried forward to
the TDP MMU.

Practically speaking, the only downside to marking a folio dirty during
mapping is that KVM could trigger writeback of memory that was never
actually written. Except that can't actually happen if KVM marks folios
dirty if and only if a writable SPTE is created (as done here), because
KVM always marks writable SPTEs as dirty during make_spte(). See commit
9b51a63024bd ("KVM: MMU: Explicitly set D-bit for writable spte."), circa
2015.
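
Concretely, the relevant make_spte() logic is roughly (simplified sketch
of the existing code, with unrelated details elided):

	if (pte_access & ACC_WRITE_MASK) {
		spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
		...
		/* D-bit is set up front for writable SPTEs */
		spte |= spte_shadow_dirty_mask(spte);
	}
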
Note, KVM's access tracking logic for prefetched SPTEs is a bit odd. If a
guest PTE is dirty and writable, KVM will create a writable SPTE, but then
mark the SPTE for access tracking. Which isn't wrong, just a bit odd, as
it results in _more_ precise dirty tracking for MMUs _without_ A/D bits.
To keep things simple, mark the folio dirty before access tracking comes
into play, as an access-tracked SPTE can be restored in the fast page
fault path, i.e. without holding mmu_lock. While writing SPTEs and
accessing memslots outside of mmu_lock is safe, marking a folio dirty is
not. E.g. if the fast path gets interrupted _just_ after setting a SPTE,
the primary MMU could theoretically invalidate and free a folio before KVM
marks it dirty. Unlike the shadow MMU, which waits for CPUs to respond to
an IPI, the TDP MMU only guarantees the page tables themselves won't be
freed (via RCU).
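
To make the hazard concrete, a hypothetical race if the fast path *did*
mark folios dirty (marking the folio dirty at map time, under mmu_lock,
avoids this by construction):

	vCPU (fast page fault, no mmu_lock)    primary MMU
	-----------------------------------    ---------------------------
	try_cmpxchg64() installs SPTE
	    <task interrupted>                 mmu_notifier invalidation
	                                       TDP MMU zaps SPTE (RCU only
	                                       protects the page tables)
	                                       folio freed/reused
	kvm_set_pfn_dirty(pfn)  <-- would touch a freed folio
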
Opportunistically update a few stale comments.

Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 29 ++++-------------------------
arch/x86/kvm/mmu/paging_tmpl.h | 6 +++---
arch/x86/kvm/mmu/spte.c | 20 ++++++++++++++++++--
arch/x86/kvm/mmu/tdp_mmu.c | 12 ------------
4 files changed, 25 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 901be9e420a4..2e6daa6d1cc0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -547,10 +547,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
kvm_set_pfn_accessed(spte_to_pfn(old_spte));
}
- if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
+ if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
flush = true;
- kvm_set_pfn_dirty(spte_to_pfn(old_spte));
- }
return flush;
}
@@ -593,9 +591,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
if (is_accessed_spte(old_spte))
kvm_set_pfn_accessed(pfn);
- if (is_dirty_spte(old_spte))
- kvm_set_pfn_dirty(pfn);
-
return old_spte;
}
@@ -626,13 +621,6 @@ static bool mmu_spte_age(u64 *sptep)
clear_bit((ffs(shadow_accessed_mask) - 1),
(unsigned long *)sptep);
} else {
- /*
- * Capture the dirty status of the page, so that it doesn't get
- * lost when the SPTE is marked for access tracking.
- */
- if (is_writable_pte(spte))
- kvm_set_pfn_dirty(spte_to_pfn(spte));
-
spte = mark_spte_for_access_track(spte);
mmu_spte_update_no_track(sptep, spte);
}
@@ -1275,16 +1263,6 @@ static bool spte_clear_dirty(u64 *sptep)
return mmu_spte_update(sptep, spte);
}
-static bool spte_wrprot_for_clear_dirty(u64 *sptep)
-{
- bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
- (unsigned long *)sptep);
- if (was_writable && !spte_ad_enabled(*sptep))
- kvm_set_pfn_dirty(spte_to_pfn(*sptep));
-
- return was_writable;
-}
-
/*
* Gets the GFN ready for another round of dirty logging by clearing the
* - D bit on ad-enabled SPTEs, and
@@ -1300,7 +1278,8 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
for_each_rmap_spte(rmap_head, &iter, sptep)
if (spte_ad_need_write_protect(*sptep))
- flush |= spte_wrprot_for_clear_dirty(sptep);
+ flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
+ (unsigned long *)sptep);
else
flush |= spte_clear_dirty(sptep);
@@ -3381,7 +3360,7 @@ static bool fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu,
* harm. This also avoids the TLB flush needed after setting dirty bit
* so non-PML cases won't be impacted.
*
- * Compare with set_spte where instead shadow_dirty_mask is set.
+ * Compare with make_spte() where instead shadow_dirty_mask is set.
*/
if (!try_cmpxchg64(sptep, &old_spte, new_spte))
return false;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 69941cebb3a8..ef0b3b213e5b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -891,9 +891,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
/*
* Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is
- * safe because:
- * - The spte has a reference to the struct page, so the pfn for a given gfn
- * can't change unless all sptes pointing to it are nuked first.
+ * safe because SPTEs are protected by mmu_notifiers and memslot generations, so
+ * the pfn for a given gfn can't change unless all SPTEs pointing to the gfn are
+ * nuked first.
*
* Returns
* < 0: failed to sync spte
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index a3baf0cadbee..9b8795bd2f04 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -232,8 +232,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
* unnecessary (and expensive).
*
* The same reasoning applies to dirty page/folio accounting;
- * KVM will mark the folio dirty using the old SPTE, thus
- * there's no need to immediately mark the new SPTE as dirty.
+ * KVM marked the folio dirty when the old SPTE was created,
+ * thus there's no need to mark the folio dirty again.
*
* Note, both cases rely on KVM not changing PFNs without first
* zapping the old SPTE, which is guaranteed by both the shadow
@@ -266,12 +266,28 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
"spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
+ /*
+ * Mark the memslot dirty *after* modifying it for access tracking.
+ * Unlike folios, memslots can be safely marked dirty out of mmu_lock,
+ * i.e. in the fast page fault handler.
+ */
if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
/* Enforced by kvm_mmu_hugepage_adjust. */
WARN_ON_ONCE(level > PG_LEVEL_4K);
mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
}
+ /*
+ * If the page that KVM got from the primary MMU is writable, i.e. if
+ * it's host-writable, mark the page/folio dirty. As alluded to above,
+ * folios can't be safely marked dirty in the fast page fault handler,
+ * and so KVM must (somewhat) speculatively mark the folio dirty even
+ * though it isn't guaranteed to be written as KVM won't mark the folio
+ * dirty if/when the SPTE is made writable.
+ */
+ if (host_writable)
+ kvm_set_pfn_dirty(pfn);
+
*new_spte = spte;
return wrprot;
}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c7dc49ee7388..7ac43d1ce918 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,10 +511,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
if (is_leaf != was_leaf)
kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
- if (was_leaf && is_dirty_spte(old_spte) &&
- (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
- kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-
/*
* Recursively handle child PTs if the change removed a subtree from
* the paging structure. Note the WARN on the PFN changing without the
@@ -1248,13 +1244,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
iter->level);
new_spte = iter->old_spte & ~shadow_accessed_mask;
} else {
- /*
- * Capture the dirty status of the page, so that it doesn't get
- * lost when the SPTE is marked for access tracking.
- */
- if (is_writable_pte(iter->old_spte))
- kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
new_spte = mark_spte_for_access_track(iter->old_spte);
iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
iter->old_spte, new_spte,
@@ -1595,7 +1584,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
iter.old_spte,
iter.old_spte & ~dbit);
- kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
}
rcu_read_unlock();
--
2.46.0.rc1.232.g9752f9e123-goog