* [PATCH 0/3] KVM: x86/mmu: Drop async zapping of TDP MMU roots
From: Sean Christopherson @ 2023-09-16 0:39 UTC
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
Yank out the asynchronous zapping of TDP MMU roots. In some setups, using
unbound workqueues can consume all CPUs for extended durations and create
significant jitter in the system.
Specifically, the behavior causes audio glitches in ChromeOS VMs with
virtio-gpu when running games in the guest. Gory details in patch 3.
I tagged all of this for stable so that this gets back to v6.1 (I already
did the backport to verify it's not awful). This bug is bad enough that
the workaround for the ChromeOS use case is to simply disable the TDP MMU,
which I really do not want to do for the v6.1 kernel (or the v6.6 kernel).
Sean Christopherson (3):
KVM: x86/mmu: Open code walking TDP MMU roots for mmu_notifier's zap
SPTEs
KVM: x86/mmu: Take "shared" instead of "as_id" TDP MMU's yield-safe
iterator
KVM: x86/mmu: Stop zapping invalidated TDP MMU roots asynchronously
arch/x86/include/asm/kvm_host.h | 3 +-
arch/x86/kvm/mmu/mmu.c | 21 ++---
arch/x86/kvm/mmu/mmu_internal.h | 13 ++-
arch/x86/kvm/mmu/tdp_mmu.c | 147 ++++++++++++++------------------
arch/x86/kvm/mmu/tdp_mmu.h | 5 +-
arch/x86/kvm/x86.c | 5 +-
6 files changed, 80 insertions(+), 114 deletions(-)
base-commit: 0bb80ecc33a8fb5a682236443c1e740d5c917d1d
--
2.42.0.459.ge4e396fd5e-goog
* [PATCH 1/3] KVM: x86/mmu: Open code walking TDP MMU roots for mmu_notifier's zap SPTEs
From: Sean Christopherson @ 2023-09-16 0:39 UTC
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
Use the "inner" TDP MMU root walker when zapping SPTEs in response to an
mmu_notifier invalidation instead of invoking kvm_tdp_mmu_zap_leafs().
This will allow reworking for_each_tdp_mmu_root_yield_safe() to do more
work, and to also make it usable in more places, without increasing the
number of params to the point where it adds no value.
The mmu_notifier path is a bit of a special snowflake, e.g. it zaps only a
single address space (because it's per-slot), and can't always yield.
Drop the @can_yield param from tdp_mmu_zap_leafs() as its sole remaining
caller unconditionally passes "true".
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/tdp_mmu.c | 13 +++++++++----
arch/x86/kvm/mmu/tdp_mmu.h | 4 ++--
3 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e1d011c67cc6..59f5e40b8f55 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6260,7 +6260,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
if (tdp_mmu_enabled) {
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
- gfn_end, true, flush);
+ gfn_end, flush);
}
if (flush)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6c63f2d1675f..89aaa2463373 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -878,12 +878,12 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
* more SPTEs were zapped since the MMU lock was last acquired.
*/
bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
- bool can_yield, bool flush)
+ bool flush)
{
struct kvm_mmu_page *root;
for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
- flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
+ flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
return flush;
}
@@ -1146,8 +1146,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
bool flush)
{
- return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
- range->end, range->may_block, flush);
+ struct kvm_mmu_page *root;
+
+ __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false)
+ flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
+ range->may_block, flush);
+
+ return flush;
}
typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 0a63b1afabd3..eb4fa345d3a4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -20,8 +20,8 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
bool shared);
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
- gfn_t end, bool can_yield, bool flush);
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
+ bool flush);
bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
void kvm_tdp_mmu_zap_all(struct kvm *kvm);
void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
--
2.42.0.459.ge4e396fd5e-goog
* [PATCH 2/3] KVM: x86/mmu: Take "shared" instead of "as_id" TDP MMU's yield-safe iterator
From: Sean Christopherson @ 2023-09-16 0:39 UTC
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
Replace the address space ID in for_each_tdp_mmu_root_yield_safe() with a
shared (vs. exclusive) param, and have the walker iterate over all address
spaces as all callers want to process all address spaces. Drop the @as_id
param as well as the manual address space iteration in callers.
Add the @shared param even though the two current callers pass "false"
unconditionally, as the main reason for refactoring the walker is to
simplify using it to zap invalid TDP MMU roots, which is done with
mmu_lock held for read.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 8 ++------
arch/x86/kvm/mmu/tdp_mmu.c | 20 ++++++++++----------
arch/x86/kvm/mmu/tdp_mmu.h | 3 +--
3 files changed, 13 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 59f5e40b8f55..54f94f644b42 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6246,7 +6246,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
{
bool flush;
- int i;
if (WARN_ON_ONCE(gfn_end <= gfn_start))
return;
@@ -6257,11 +6256,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
- if (tdp_mmu_enabled) {
- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
- flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
- gfn_end, flush);
- }
+ if (tdp_mmu_enabled)
+ flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush);
if (flush)
kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 89aaa2463373..7cb1902ae032 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -211,8 +211,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \
- __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false)
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \
+ for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false); \
+ _root; \
+ _root = tdp_mmu_next_root(_kvm, _root, _shared, false)) \
+ if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) { \
+ } else
/*
* Iterate over all TDP MMU roots. Requires that mmu_lock be held for write,
@@ -877,12 +881,11 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
* true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
* more SPTEs were zapped since the MMU lock was last acquired.
*/
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
- bool flush)
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
{
struct kvm_mmu_page *root;
- for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
+ for_each_tdp_mmu_root_yield_safe(kvm, root, false)
flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
return flush;
@@ -891,7 +894,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
void kvm_tdp_mmu_zap_all(struct kvm *kvm)
{
struct kvm_mmu_page *root;
- int i;
/*
* Zap all roots, including invalid roots, as all SPTEs must be dropped
@@ -905,10 +907,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
* is being destroyed or the userspace VMM has exited. In both cases,
* KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
*/
- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
- for_each_tdp_mmu_root_yield_safe(kvm, root, i)
- tdp_mmu_zap_root(kvm, root, false);
- }
+ for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+ tdp_mmu_zap_root(kvm, root, false);
}
/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index eb4fa345d3a4..bc088953f929 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -20,8 +20,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
bool shared);
-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
- bool flush);
+bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
void kvm_tdp_mmu_zap_all(struct kvm *kvm);
void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
--
2.42.0.459.ge4e396fd5e-goog
* [PATCH 3/3] KVM: x86/mmu: Stop zapping invalidated TDP MMU roots asynchronously
From: Sean Christopherson @ 2023-09-16 0:39 UTC
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
Stop zapping invalidated TDP MMU roots via work queue now that KVM
preserves TDP MMU roots until they are explicitly invalidated. Zapping
roots asynchronously was effectively a workaround to avoid stalling a vCPU
for an extended duration if a vCPU unloaded a root, which at the time
happened whenever the guest toggled CR0.WP (a frequent operation for some
guest kernels).
While a clever hack, zapping roots via an unbound worker had subtle,
unintended consequences on host scheduling, especially when zapping
multiple roots, e.g. as part of a memslot deletion. Because the work of zapping a
root is no longer bound to the task that initiated the zap, things like
the CPU affinity and priority of the original task get lost. Losing the
affinity and priority can be especially problematic if unbound workqueues
aren't affined to a small number of CPUs, as zapping multiple roots can
cause KVM to heavily utilize the majority of CPUs in the system, *beyond*
the CPUs KVM is already using to run vCPUs.
When deleting a memslot via KVM_SET_USER_MEMORY_REGION, the async root
zap can result in KVM occupying all logical CPUs for ~8ms, and in
high-priority tasks not being scheduled in a timely manner. In v5.15,
which doesn't preserve unloaded roots, the issues were even more noticeable
as KVM would zap roots more frequently and could occupy all CPUs for 50ms+.
Consuming all CPUs for an extended duration can lead to significant jitter
throughout the system, e.g. on ChromeOS with virtio-gpu, deleting memslots
is a semi-frequent operation as memslots are deleted and recreated with
different host virtual addresses to react to host GPU drivers allocating
and freeing GPU blobs. On ChromeOS, the jitter manifests as audio blips
during games due to the audio server's tasks not getting scheduled in
promptly, despite the tasks having a high realtime priority.
Deleting memslots isn't exactly a fast path and should be avoided when
possible, and ChromeOS is working towards utilizing MAP_FIXED to avoid the
memslot shenanigans, but KVM is squarely in the wrong. Not to mention
that removing the async zapping eliminates a non-trivial amount of
complexity.
Note, one of the subtle behaviors hidden behind the async zapping is that
KVM would zap invalidated roots only once (ignoring partial zaps from
things like mmu_notifier events). Preserve this behavior by adding a flag
to identify roots that are scheduled to be zapped versus roots that have
already been zapped but not yet freed.
Add a comment calling out why kvm_tdp_mmu_invalidate_all_roots() can
encounter invalid roots, as it's not at all obvious why zapping
invalidated roots shouldn't simply zap all invalid roots.
Reported-by: Pattara Teerapong <pteerapong@google.com>
Cc: David Stevens <stevensd@google.com>
Cc: Yiwei Zhang <zzyiwei@google.com>
Cc: Paul Hsia <paulhsia@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm_host.h | 3 +-
arch/x86/kvm/mmu/mmu.c | 13 +---
arch/x86/kvm/mmu/mmu_internal.h | 13 ++--
arch/x86/kvm/mmu/tdp_mmu.c | 116 +++++++++++++-------------------
arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
arch/x86/kvm/x86.c | 5 +-
6 files changed, 59 insertions(+), 93 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a4def36d5bb..17715cb8731d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1419,7 +1419,6 @@ struct kvm_arch {
* the thread holds the MMU lock in write mode.
*/
spinlock_t tdp_mmu_pages_lock;
- struct workqueue_struct *tdp_mmu_zap_wq;
#endif /* CONFIG_X86_64 */
/*
@@ -1835,7 +1834,7 @@ void kvm_mmu_vendor_module_exit(void);
void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
int kvm_mmu_create(struct kvm_vcpu *vcpu);
-int kvm_mmu_init_vm(struct kvm *kvm);
+void kvm_mmu_init_vm(struct kvm *kvm);
void kvm_mmu_uninit_vm(struct kvm *kvm);
void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 54f94f644b42..f7901cb4d2fa 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6167,20 +6167,15 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
}
-int kvm_mmu_init_vm(struct kvm *kvm)
+void kvm_mmu_init_vm(struct kvm *kvm)
{
- int r;
-
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
- if (tdp_mmu_enabled) {
- r = kvm_mmu_init_tdp_mmu(kvm);
- if (r < 0)
- return r;
- }
+ if (tdp_mmu_enabled)
+ kvm_mmu_init_tdp_mmu(kvm);
kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
@@ -6189,8 +6184,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
-
- return 0;
}
static void mmu_free_vm_memory_caches(struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index b102014e2c60..93b9d50c24ad 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -58,7 +58,10 @@ struct kvm_mmu_page {
bool tdp_mmu_page;
bool unsync;
- u8 mmu_valid_gen;
+ union {
+ u8 mmu_valid_gen;
+ bool tdp_mmu_scheduled_root_to_zap;
+ };
/*
* The shadow page can't be replaced by an equivalent huge page
@@ -100,13 +103,7 @@ struct kvm_mmu_page {
struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
tdp_ptep_t ptep;
};
- union {
- DECLARE_BITMAP(unsync_child_bitmap, 512);
- struct {
- struct work_struct tdp_mmu_async_work;
- void *tdp_mmu_async_data;
- };
- };
+ DECLARE_BITMAP(unsync_child_bitmap, 512);
/*
* Tracks shadow pages that, if zapped, would allow KVM to create an NX
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7cb1902ae032..ca3304c2c00c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -12,18 +12,10 @@
#include <trace/events/kvm.h>
/* Initializes the TDP MMU for the VM, if enabled. */
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
{
- struct workqueue_struct *wq;
-
- wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
- if (!wq)
- return -ENOMEM;
-
INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
- kvm->arch.tdp_mmu_zap_wq = wq;
- return 1;
}
/* Arbitrarily returns true so that this may be used in if statements. */
@@ -46,20 +38,15 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
* ultimately frees all roots.
*/
kvm_tdp_mmu_invalidate_all_roots(kvm);
-
- /*
- * Destroying a workqueue also first flushes the workqueue, i.e. no
- * need to invoke kvm_tdp_mmu_zap_invalidated_roots().
- */
- destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ kvm_tdp_mmu_zap_invalidated_roots(kvm);
WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages));
WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
/*
* Ensure that all the outstanding RCU callbacks to free shadow pages
- * can run before the VM is torn down. Work items on tdp_mmu_zap_wq
- * can call kvm_tdp_mmu_put_root and create new callbacks.
+ * can run before the VM is torn down. Putting the last reference to
+ * zapped roots will create new callbacks.
*/
rcu_barrier();
}
@@ -89,43 +76,6 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)
static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
bool shared);
-static void tdp_mmu_zap_root_work(struct work_struct *work)
-{
- struct kvm_mmu_page *root = container_of(work, struct kvm_mmu_page,
- tdp_mmu_async_work);
- struct kvm *kvm = root->tdp_mmu_async_data;
-
- read_lock(&kvm->mmu_lock);
-
- /*
- * A TLB flush is not necessary as KVM performs a local TLB flush when
- * allocating a new root (see kvm_mmu_load()), and when migrating vCPU
- * to a different pCPU. Note, the local TLB flush on reuse also
- * invalidates any paging-structure-cache entries, i.e. TLB entries for
- * intermediate paging structures, that may be zapped, as such entries
- * are associated with the ASID on both VMX and SVM.
- */
- tdp_mmu_zap_root(kvm, root, true);
-
- /*
- * Drop the refcount using kvm_tdp_mmu_put_root() to test its logic for
- * avoiding an infinite loop. By design, the root is reachable while
- * it's being asynchronously zapped, thus a different task can put its
- * last reference, i.e. flowing through kvm_tdp_mmu_put_root() for an
- * asynchronously zapped root is unavoidable.
- */
- kvm_tdp_mmu_put_root(kvm, root, true);
-
- read_unlock(&kvm->mmu_lock);
-}
-
-static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
-{
- root->tdp_mmu_async_data = kvm;
- INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
- queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
-}
-
void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
bool shared)
{
@@ -917,18 +867,47 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
*/
void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
{
- flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
+ struct kvm_mmu_page *root;
+
+ read_lock(&kvm->mmu_lock);
+
+ for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
+ if (!root->tdp_mmu_scheduled_root_to_zap)
+ continue;
+
+ root->tdp_mmu_scheduled_root_to_zap = false;
+ KVM_BUG_ON(!root->role.invalid, kvm);
+
+ /*
+ * A TLB flush is not necessary as KVM performs a local TLB
+ * flush when allocating a new root (see kvm_mmu_load()), and
+ * when migrating a vCPU to a different pCPU. Note, the local
+ * TLB flush on reuse also invalidates paging-structure-cache
+ * entries, i.e. TLB entries for intermediate paging structures,
+ * that may be zapped, as such entries are associated with the
+ * ASID on both VMX and SVM.
+ */
+ tdp_mmu_zap_root(kvm, root, true);
+
+ /*
+ * The reference needs to be put *after* zapping the root, as
+ * the root must be reachable by mmu_notifiers while it's being
+ * zapped.
+ */
+ kvm_tdp_mmu_put_root(kvm, root, true);
+ }
+
+ read_unlock(&kvm->mmu_lock);
}
/*
* Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
* is about to be zapped, e.g. in response to a memslots update. The actual
- * zapping is performed asynchronously. Using a separate workqueue makes it
- * easy to ensure that the destruction is performed before the "fast zap"
- * completes, without keeping a separate list of invalidated roots; the list is
- * effectively the list of work items in the workqueue.
+ * zapping is done separately so that it happens with mmu_lock held for read,
+ * whereas invalidating roots must be done with mmu_lock held for write (unless
+ * the VM is being destroyed).
*
- * Note, the asynchronous worker is gifted the TDP MMU's reference.
+ * Note, kvm_tdp_mmu_zap_invalidated_roots() is gifted the TDP MMU's reference.
* See kvm_tdp_mmu_get_vcpu_root_hpa().
*/
void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
@@ -953,19 +932,20 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
/*
* As above, mmu_lock isn't held when destroying the VM! There can't
* be other references to @kvm, i.e. nothing else can invalidate roots
- * or be consuming roots, but walking the list of roots does need to be
- * guarded against roots being deleted by the asynchronous zap worker.
+ * or get/put references to roots.
*/
- rcu_read_lock();
-
- list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
+ list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+ /*
+ * Note, invalid roots can outlive a memslot update! Invalid
+ * roots must be *zapped* before the memslot update completes,
+ * but a different task can acquire a reference and keep the
+ * root alive after it's been zapped.
+ */
if (!root->role.invalid) {
+ root->tdp_mmu_scheduled_root_to_zap = true;
root->role.invalid = true;
- tdp_mmu_schedule_zap_root(kvm, root);
}
}
-
- rcu_read_unlock();
}
/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index bc088953f929..733a3aef3a96 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -7,7 +7,7 @@
#include "spte.h"
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6c9c81e82e65..9f18b06bbda6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12308,9 +12308,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
if (ret)
goto out;
- ret = kvm_mmu_init_vm(kvm);
- if (ret)
- goto out_page_track;
+ kvm_mmu_init_vm(kvm);
ret = static_call(kvm_x86_vm_init)(kvm);
if (ret)
@@ -12355,7 +12353,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
out_uninit_mmu:
kvm_mmu_uninit_vm(kvm);
-out_page_track:
kvm_page_track_cleanup(kvm);
out:
return ret;
--
2.42.0.459.ge4e396fd5e-goog
* Re: [PATCH 3/3] KVM: x86/mmu: Stop zapping invalidated TDP MMU roots asynchronously
From: Paolo Bonzini @ 2023-09-21 10:02 UTC
To: Sean Christopherson
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
On 9/16/23 02:39, Sean Christopherson wrote:
> + if (!root->tdp_mmu_scheduled_root_to_zap)
> + continue;
> +
> + root->tdp_mmu_scheduled_root_to_zap = false;
This is protected by slots_lock... tricky.
Worth squashing in a comment and also a small update to another comment:
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 93b9d50c24ad..decc1f153669 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -60,6 +60,8 @@ struct kvm_mmu_page {
bool unsync;
union {
u8 mmu_valid_gen;
+
+ /* Only accessed under slots_lock. */
bool tdp_mmu_scheduled_root_to_zap;
};
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ca3304c2c00c..070ee5b2c271 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -246,7 +246,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
* by a memslot update or by the destruction of the VM. Initialize the
* refcount to two; one reference for the vCPU, and one reference for
* the TDP MMU itself, which is held until the root is invalidated and
- * is ultimately put by tdp_mmu_zap_root_work().
+ * is ultimately put by kvm_tdp_mmu_zap_invalidated_roots().
*/
refcount_set(&root->tdp_mmu_root_count, 2);
Paolo
* Re: [PATCH 1/3] KVM: x86/mmu: Open code walking TDP MMU roots for mmu_notifier's zap SPTEs
From: Paolo Bonzini @ 2023-09-21 10:05 UTC
To: Sean Christopherson
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
On 9/16/23 02:39, Sean Christopherson wrote:
> Use the "inner" TDP MMU root walker when zapping SPTEs in response to an
> mmu_notifier invalidation instead of invoking kvm_tdp_mmu_zap_leafs().
> This will allow reworking for_each_tdp_mmu_root_yield_safe() to do more
> work, and to also make it usable in more places, without increasing the
> number of params to the point where it adds no value.
>
> The mmu_notifier path is a bit of a special snowflake, e.g. it zaps only a
> single address space (because it's per-slot), and can't always yield.
>
> Drop the @can_yield param from tdp_mmu_zap_leafs() as its sole remaining
> caller unconditionally passes "true".
Slightly rewritten commit log:
---
The mmu_notifier path is a bit of a special snowflake, e.g. it zaps only a
single address space (because it's per-slot), and can't always yield.
Because of this, it calls kvm_tdp_mmu_zap_leafs() in ways that no one
else does.
Iterate manually over the leafs in response to an mmu_notifier
invalidation, instead of invoking kvm_tdp_mmu_zap_leafs(). Drop the
@can_yield param from kvm_tdp_mmu_zap_leafs() as its sole remaining
caller unconditionally passes "true".
---
and the use of the "__" macro can be moved to the second patch.
Paolo
* Re: [PATCH 2/3] KVM: x86/mmu: Take "shared" instead of "as_id" TDP MMU's yield-safe iterator
From: Paolo Bonzini @ 2023-09-21 10:52 UTC
To: Sean Christopherson
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
On 9/16/23 02:39, Sean Christopherson wrote:
> Replace the address space ID in for_each_tdp_mmu_root_yield_safe() with a
> shared (vs. exclusive) param, and have the walker iterate over all address
> spaces as all callers want to process all address spaces. Drop the @as_id
> param as well as the manual address space iteration in callers.
>
> Add the @shared param even though the two current callers pass "false"
> unconditionally, as the main reason for refactoring the walker is to
> simplify using it to zap invalid TDP MMU roots, which is done with
> mmu_lock held for read.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <seanjc@google.com>
You know what, I don't really like the "bool shared" arguments anymore.
For example, neither tdp_mmu_next_root nor kvm_tdp_mmu_put_root need to
know if the lock is taken for read or write; protection is achieved via
RCU and tdp_mmu_pages_lock. It's more self-documenting to remove the
argument and assert that the lock is taken.
Likewise, the argument is more or less unnecessary in the
for_each_*_tdp_mmu_root_yield_safe() macros. Many users check for the
lock before calling it; and all of them either call small functions that
do the check, or end up calling tdp_mmu_set_spte_atomic() and
tdp_mmu_iter_set_spte(), so the per-iteration checks are also overkill.
It may be useful to add a few assertions to make up for the lost check
before the first execution of the body of
for_each_*_tdp_mmu_root_yield_safe(), but even this is more for
documentation reasons than to catch actual bugs.
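Roughly, I'm thinking of something along these lines (untested sketch only,
assuming tdp_mmu_next_root() also loses its "shared" argument; the helper
name is made up):

static inline bool kvm_lockdep_assert_mmu_lock_held_any(struct kvm *kvm)
{
	/*
	 * Sketch: only assert that mmu_lock is held in *some* mode; the
	 * walker itself doesn't care whether that is read or write.
	 */
	lockdep_assert_held(&kvm->mmu_lock);
	return true;
}

#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)			\
	for (_root = tdp_mmu_next_root(_kvm, NULL, false);		\
	     _root;							\
	     _root = tdp_mmu_next_root(_kvm, _root, false))		\
		if (!kvm_lockdep_assert_mmu_lock_held_any(_kvm)) {	\
		} else

The assertion could just as well be hoisted out of the loop entirely; it's
kept here only to preserve the existing if/else shape of the macro.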
I'll send a v2.
Paolo
> ---
> arch/x86/kvm/mmu/mmu.c | 8 ++------
> arch/x86/kvm/mmu/tdp_mmu.c | 20 ++++++++++----------
> arch/x86/kvm/mmu/tdp_mmu.h | 3 +--
> 3 files changed, 13 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 59f5e40b8f55..54f94f644b42 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6246,7 +6246,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
> void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
> {
> bool flush;
> - int i;
>
> if (WARN_ON_ONCE(gfn_end <= gfn_start))
> return;
> @@ -6257,11 +6256,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>
> flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
>
> - if (tdp_mmu_enabled) {
> - for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
> - flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
> - gfn_end, flush);
> - }
> + if (tdp_mmu_enabled)
> + flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush);
>
> if (flush)
> kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 89aaa2463373..7cb1902ae032 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -211,8 +211,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
> #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
> __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
>
> -#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \
> - __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false)
> +#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \
> + for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false); \
> + _root; \
> + _root = tdp_mmu_next_root(_kvm, _root, _shared, false)) \
> + if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) { \
> + } else
>
> /*
> * Iterate over all TDP MMU roots. Requires that mmu_lock be held for write,
> @@ -877,12 +881,11 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
> * more SPTEs were zapped since the MMU lock was last acquired.
> */
> -bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> - bool flush)
> +bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
> {
> struct kvm_mmu_page *root;
>
> - for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
> + for_each_tdp_mmu_root_yield_safe(kvm, root, false)
> flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
>
> return flush;
> @@ -891,7 +894,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> void kvm_tdp_mmu_zap_all(struct kvm *kvm)
> {
> struct kvm_mmu_page *root;
> - int i;
>
> /*
> * Zap all roots, including invalid roots, as all SPTEs must be dropped
> @@ -905,10 +907,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
> * is being destroyed or the userspace VMM has exited. In both cases,
> * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
> */
> - for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> - for_each_tdp_mmu_root_yield_safe(kvm, root, i)
> - tdp_mmu_zap_root(kvm, root, false);
> - }
> + for_each_tdp_mmu_root_yield_safe(kvm, root, false)
> + tdp_mmu_zap_root(kvm, root, false);
> }
>
> /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index eb4fa345d3a4..bc088953f929 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -20,8 +20,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
> void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
> bool shared);
>
> -bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
> - bool flush);
> +bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
> bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
> void kvm_tdp_mmu_zap_all(struct kvm *kvm);
> void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
* Re: [PATCH 2/3] KVM: x86/mmu: Take "shared" instead of "as_id" TDP MMU's yield-safe iterator
From: Sean Christopherson @ 2023-09-21 14:32 UTC
To: Paolo Bonzini
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
On Thu, Sep 21, 2023, Paolo Bonzini wrote:
> On 9/16/23 02:39, Sean Christopherson wrote:
> > Replace the address space ID in for_each_tdp_mmu_root_yield_safe() with a
> > shared (vs. exclusive) param, and have the walker iterate over all address
> > spaces as all callers want to process all address spaces. Drop the @as_id
> > param as well as the manual address space iteration in callers.
> >
> > Add the @shared param even though the two current callers pass "false"
> > unconditionally, as the main reason for refactoring the walker is to
> > simplify using it to zap invalid TDP MMU roots, which is done with
> > mmu_lock held for read.
> >
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
>
> You know what, I don't really like the "bool shared" arguments anymore.
Yeah, I don't like the "shared" arguments either. Never did, but they are necessary
for some paths, and I don't see an obviously better solution. :-/
> For example, neither tdp_mmu_next_root nor kvm_tdp_mmu_put_root need to know
> if the lock is taken for read or write; protection is achieved via RCU and
> tdp_mmu_pages_lock. It's more self-documenting to remove the argument and
> assert that the lock is taken.
>
> Likewise, the argument is more or less unnecessary in the
> for_each_*_tdp_mmu_root_yield_safe() macros. Many users check for the lock
> before calling it; and all of them either call small functions that do the
> check, or end up calling tdp_mmu_set_spte_atomic() and
> tdp_mmu_iter_set_spte(), so the per-iteration checks are also overkill.
Agreed.
> It may be useful to a few assertions to make up for the lost check before
> the first execution of the body of for_each_*_tdp_mmu_root_yield_safe(), but
> even this is more for documentation reasons than to catch actual bugs.
I think it's more than sufficient, arguably even better, to document which paths
*require* mmu_lock be held for read vs. write, and which paths work with either.
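E.g. something like this (purely illustrative, the function names and bodies
are made up):

/* Paths that require mmu_lock be held for read. */
void example_zap_invalidated_roots(struct kvm *kvm)
{
	lockdep_assert_held_read(&kvm->mmu_lock);
	/* ... walk the list of roots and zap the invalid ones ... */
}

/* Paths that work with mmu_lock held in either mode. */
void example_zap_leafs(struct kvm *kvm)
{
	lockdep_assert_held(&kvm->mmu_lock);
	/* ... zap leaf SPTEs in the target range ... */
}

That documents the requirement at the top of each path without threading a
"shared" flag through the iterators.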
> I'll send a v2.
Can we do a cleanup of the @shared arguments on top? I would like to keep the
diff reasonably small to minimize the v6.1 backport.
* Re: [PATCH 0/3] KVM: x86/mmu: Drop async zapping of TDP MMU roots
From: Paolo Bonzini @ 2023-09-22 20:40 UTC
To: Sean Christopherson
Cc: kvm, linux-kernel, Pattara Teerapong, David Stevens, Yiwei Zhang,
Paul Hsia
Queued, thanks.
I changed the splitting of the patches a bit, to avoid mixing the removal
of one argument with the addition of another (splitting into four patches
wasn't particularly enlightening for a couple lines' worth of change),
but I checked that the final result is the same.
Paolo