* [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: David Matlack @ 2022-10-27 20:03 UTC
To: Paolo Bonzini; +Cc: Sean Christopherson, kvm, David Matlack
This series turns off the NX Huge Page recovery worker when any memslot
has dirty logging enabled. This avoids theoretical performance problems
and reduces the CPU usage of NX Huge Pages when a VM is in the pre-copy
phase of a Live Migration.
Tested manually and ran all selftests.
David Matlack (2):
KVM: Keep track of the number of memslots with dirty logging enabled
KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is
enabled
arch/x86/kvm/mmu/mmu.c | 8 ++++++++
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 10 ++++++++++
3 files changed, 20 insertions(+)
base-commit: e18d6152ff0f41b7f01f9817372022df04e0d354
--
2.38.1.273.g43a17bfeac-goog
* [PATCH 1/2] KVM: Keep track of the number of memslots with dirty logging enabled
From: David Matlack @ 2022-10-27 20:03 UTC
To: Paolo Bonzini; +Cc: Sean Christopherson, kvm, David Matlack
Add a new field to struct kvm that keeps track of the number of memslots
with dirty logging enabled. This will be used in a future commit to
cheaply check if any memslot is doing dirty logging.
Signed-off-by: David Matlack <dmatlack@google.com>
---
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 10 ++++++++++
2 files changed, 12 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 32f259fa5801..25ed8c1725ff 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -709,6 +709,8 @@ struct kvm {
struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
/* The current active memslot set for each address space */
struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
+ /* The number of memslots with dirty logging enabled. */
+ int nr_memslots_dirty_logging;
struct xarray vcpu_array;
/* Used to wait for completion of MMU notifiers. */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e30f1b4ecfa5..57e4406005cd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1641,6 +1641,9 @@ static void kvm_commit_memory_region(struct kvm *kvm,
const struct kvm_memory_slot *new,
enum kvm_mr_change change)
{
+ int old_flags = old ? old->flags : 0;
+ int new_flags = new ? new->flags : 0;
+
/*
* Update the total number of memslot pages before calling the arch
* hook so that architectures can consume the result directly.
@@ -1650,6 +1653,13 @@ static void kvm_commit_memory_region(struct kvm *kvm,
else if (change == KVM_MR_CREATE)
kvm->nr_memslot_pages += new->npages;
+ if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
+ if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
+ kvm->nr_memslots_dirty_logging++;
+ else
+ kvm->nr_memslots_dirty_logging--;
+ }
+
kvm_arch_commit_memory_region(kvm, old, new, change);
switch (change) {
--
2.38.1.273.g43a17bfeac-goog
* [PATCH 2/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: David Matlack @ 2022-10-27 20:03 UTC
To: Paolo Bonzini; +Cc: Sean Christopherson, kvm, David Matlack
Do not recover NX Huge Pages if dirty logging is enabled on any memslot.
Zapping a region that is being dirty tracked is a waste of CPU cycles
(both by the recovery worker, and subsequent vCPU faults) since the
memory will just be faulted back in at the same 4KiB granularity.
Use kvm->nr_memslots_dirty_logging as a cheap way to check if NX Huge
Pages are being dirty tracked. This has the additional benefit of
ensuring that the NX recovery worker uses little-to-no CPU during the
precopy phase of a live migration.
Note, kvm->nr_memslots_dirty_logging can result in false positives and
false negatives, e.g. if dirty logging is only enabled on a subset of
memslots or the recovery worker races with a memslot update. However
there are no correctness issues either way, and eventually NX Huge Pages
will be recovered once dirty logging is disabled on all memslots.
An alternative approach would be to lookup the memslot of each NX Huge
Page and check if it is being dirty tracked. However, this would
increase the CPU usage of the recovery worker and MMU lock hold time
in write mode, especially in VMs with a large number of memslots.
Signed-off-by: David Matlack <dmatlack@google.com>
---
arch/x86/kvm/mmu/mmu.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f81539061d6..b499d3757173 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6806,6 +6806,14 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
bool flush = false;
ulong to_zap;
+ /*
+ * Do not attempt to recover NX Huge Pages while dirty logging is
+ * enabled since any subsequent accesses by vCPUs will just fault the
+ * memory back in at the same 4KiB granularity.
+ */
+ if (READ_ONCE(kvm->nr_memslots_dirty_logging))
+ return;
+
rcu_idx = srcu_read_lock(&kvm->srcu);
write_lock(&kvm->mmu_lock);
--
2.38.1.273.g43a17bfeac-goog
* Re: [PATCH 1/2] KVM: Keep track of the number of memslots with dirty logging enabled
From: Sean Christopherson @ 2022-10-27 20:34 UTC
To: David Matlack; +Cc: Paolo Bonzini, kvm
On Thu, Oct 27, 2022, David Matlack wrote:
> Add a new field to struct kvm that keeps track of the number of memslots
> with dirty logging enabled. This will be used in a future commit to
> cheaply check if any memslot is doing dirty logging.
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
> include/linux/kvm_host.h | 2 ++
> virt/kvm/kvm_main.c | 10 ++++++++++
Why put this in common code? I'm having a hard time coming up with a second use
case since the count isn't stable, i.e. it can't be used for anything except
scenarios like x86's NX huge page mitigation where a false negative/positive is benign.
> 2 files changed, 12 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 32f259fa5801..25ed8c1725ff 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -709,6 +709,8 @@ struct kvm {
> struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
> /* The current active memslot set for each address space */
> struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
> + /* The number of memslots with dirty logging enabled. */
> + int nr_memslots_dirty_logging;
I believe this can technically be a u16, as even with SMM KVM ensures the total
number of memslots fits in a u16. A BUILD_BUG_ON() sanity check is probably a
good idea regardless.
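Something like the below is what I have in mind -- just a sketch, the exact
limit macros and the placement (e.g. somewhere in kvm_commit_memory_region())
are assumptions:

  /*
   * Sketch: the counter can't overflow a u16 so long as the total number
   * of memslots across all address spaces fits in a u16.
   */
  BUILD_BUG_ON(KVM_ADDRESS_SPACE_NUM * KVM_MEM_SLOTS_NUM > U16_MAX);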
> struct xarray vcpu_array;
>
> /* Used to wait for completion of MMU notifiers. */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index e30f1b4ecfa5..57e4406005cd 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1641,6 +1641,9 @@ static void kvm_commit_memory_region(struct kvm *kvm,
> const struct kvm_memory_slot *new,
> enum kvm_mr_change change)
> {
> + int old_flags = old ? old->flags : 0;
> + int new_flags = new ? new->flags : 0;
Not that it really matters, but kvm_memory_slot.flags is a u32.
> /*
> * Update the total number of memslot pages before calling the arch
> * hook so that architectures can consume the result directly.
> @@ -1650,6 +1653,13 @@ static void kvm_commit_memory_region(struct kvm *kvm,
> else if (change == KVM_MR_CREATE)
> kvm->nr_memslot_pages += new->npages;
>
> + if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
> + if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
> + kvm->nr_memslots_dirty_logging++;
> + else
> + kvm->nr_memslots_dirty_logging--;
A sanity check that KVM hasn't botched the count is probably a good idea. E.g.
__kvm_set_memory_region() has a WARN_ON_ONCE() sanity check that KVM won't end up
underflowing nr_memslot_pages.
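For the new count, something along these lines is what I'm imagining
(untested sketch, just to show guarding the decrement):

  if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
          if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
                  kvm->nr_memslots_dirty_logging++;
          else if (!WARN_ON_ONCE(!kvm->nr_memslots_dirty_logging))
                  kvm->nr_memslots_dirty_logging--;
  }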
> + }
> +
> kvm_arch_commit_memory_region(kvm, old, new, change);
>
> switch (change) {
> --
> 2.38.1.273.g43a17bfeac-goog
>
* Re: [PATCH 1/2] KVM: Keep track of the number of memslots with dirty logging enabled
From: David Matlack @ 2022-10-27 22:15 UTC
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm
On Thu, Oct 27, 2022 at 1:35 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Oct 27, 2022, David Matlack wrote:
> > Add a new field to struct kvm that keeps track of the number of memslots
> > with dirty logging enabled. This will be used in a future commit to
> > cheaply check if any memslot is doing dirty logging.
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> > include/linux/kvm_host.h | 2 ++
> > virt/kvm/kvm_main.c | 10 ++++++++++
>
> Why put this in common code? I'm having a hard time coming up with a second use
> case since the count isn't stable, i.e. it can't be used for anything except
> scenarios like x86's NX huge page mitigation where a false negative/positive is benign.
I agree, but what is the downside of putting it in common code? The
downside of putting it in architecture-specific code is if any other
architecture needs it (or something similar) in the future they are
unlikely to look through x86 code to see if it already exists. i.e.
we're more likely to end up with duplicate code.
And while the count is not stable outside of slots_lock, it could
still theoretically be used under slots_lock to coordinate something
that depends on dirty logging being enabled in any slot. In our
internal kernel, for example, we use it to decide when to
create/destroy the KVM dirty log worker threads (although I doubt that
specific usecase will ever see the light of day upstream :).
>
> > 2 files changed, 12 insertions(+)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 32f259fa5801..25ed8c1725ff 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -709,6 +709,8 @@ struct kvm {
> > struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
> > /* The current active memslot set for each address space */
> > struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
> > + /* The number of memslots with dirty logging enabled. */
> > + int nr_memslots_dirty_logging;
>
> I believe this can technically be a u16, as even with SMM KVM ensures the total
> number of memslots fits in a u16. A BUILD_BUG_ON() sanity check is probably a
> good idea regardless.
Will do.
>
> > struct xarray vcpu_array;
> >
> > /* Used to wait for completion of MMU notifiers. */
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index e30f1b4ecfa5..57e4406005cd 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1641,6 +1641,9 @@ static void kvm_commit_memory_region(struct kvm *kvm,
> > const struct kvm_memory_slot *new,
> > enum kvm_mr_change change)
> > {
> > + int old_flags = old ? old->flags : 0;
> > + int new_flags = new ? new->flags : 0;
>
> Not that it really matters, but kvm_memory_slot.flags is a u32.
Oops, thanks.
>
> > /*
> > * Update the total number of memslot pages before calling the arch
> > * hook so that architectures can consume the result directly.
> > @@ -1650,6 +1653,13 @@ static void kvm_commit_memory_region(struct kvm *kvm,
> > else if (change == KVM_MR_CREATE)
> > kvm->nr_memslot_pages += new->npages;
> >
> > + if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
> > + if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
> > + kvm->nr_memslots_dirty_logging++;
> > + else
> > + kvm->nr_memslots_dirty_logging--;
>
> A sanity check that KVM hasn't botched the count is probably a good idea. E.g.
> __kvm_set_memory_region() has a WARN_ON_ONCE() sanity check that KVM won't end up
> underflowing nr_memslot_pages.
Good idea, will do.
* Re: [PATCH 1/2] KVM: Keep track of the number of memslots with dirty logging enabled
From: Sean Christopherson @ 2022-10-27 23:04 UTC
To: David Matlack; +Cc: Paolo Bonzini, kvm
On Thu, Oct 27, 2022, David Matlack wrote:
> On Thu, Oct 27, 2022 at 1:35 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Thu, Oct 27, 2022, David Matlack wrote:
> > > Add a new field to struct kvm that keeps track of the number of memslots
> > > with dirty logging enabled. This will be used in a future commit to
> > > cheaply check if any memslot is doing dirty logging.
> > >
> > > Signed-off-by: David Matlack <dmatlack@google.com>
> > > ---
> > > include/linux/kvm_host.h | 2 ++
> > > virt/kvm/kvm_main.c | 10 ++++++++++
> >
> > Why put this in common code? I'm having a hard time coming up with a second use
> > case since the count isn't stable, i.e. it can't be used for anything except
> > scenarios like x86's NX huge page mitigation where a false negative/positive is benign.
>
> I agree, but what is the downside of putting it in common code?
The potential for misuse, e.g. outside of slots_lock.
> The downside of putting it in architecture-specific code is if any other
> architecture needs it (or something similar) in the future they are unlikely
> to look through x86 code to see if it already exists. i.e. we're more likely
> to end up with duplicate code.
>
> And while the count is not stable outside of slots_lock, it could
> still theoretically be used under slots_lock to coordinate something
> that depends on dirty logging being enabled in any slot. In our
> internal kernel, for example, we use it to decide when to
> create/destroy the KVM dirty log worker threads (although I doubt that
> specific usecase will ever see the light of day upstream :).
Yeah, I'm definitely not dead set against putting it in common code. I suspect
I'm a little overly sensitive at the moment to x86 pushing a bunch of x86-centric
logic into kvm_main.c and making life miserable for everyone. Been spending far
too much time unwinding the mess that is kvm_init()...
* Re: [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: Paolo Bonzini @ 2022-10-28 10:58 UTC
To: David Matlack; +Cc: Sean Christopherson, kvm
On 10/27/22 22:03, David Matlack wrote:
> This series turns off the NX Huge Page recovery worker when any memslot
> has dirty logging enabled. This avoids theoretical performance problems
> and reduces the CPU usage of NX Huge Pages when a VM is in the pre-copy
> phase of a Live Migration.
>
> Tested manually and ran all selftests.
>
> David Matlack (2):
> KVM: Keep track of the number of memslots with dirty logging enabled
> KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is
> enabled
>
> arch/x86/kvm/mmu/mmu.c | 8 ++++++++
> include/linux/kvm_host.h | 2 ++
> virt/kvm/kvm_main.c | 10 ++++++++++
> 3 files changed, 20 insertions(+)
>
>
> base-commit: e18d6152ff0f41b7f01f9817372022df04e0d354
This can be a bit problematic because for example you could have dirty
logging enabled only for a framebuffer or similar. In this case the
memory being logged will not be the same as the one that is NX-split.
Perhaps we can take advantage of eager page splitting, that is you can
add a bool to kvm_mmu_page that is set by shadow_mmu_get_sp_for_split
and tdp_mmu_alloc_sp_for_split (or a similar place)?
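I.e. something like this (sketch only, the field name is made up):

  /* In struct kvm_mmu_page (mmu_internal.h): */
  bool allocated_for_eager_split;

  /* In shadow_mmu_get_sp_for_split() and tdp_mmu_alloc_sp_for_split(): */
  sp->allocated_for_eager_split = true;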
Paolo
* Re: [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: David Matlack @ 2022-10-28 20:05 UTC
To: Paolo Bonzini; +Cc: Sean Christopherson, kvm
On Fri, Oct 28, 2022 at 3:58 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 10/27/22 22:03, David Matlack wrote:
> > This series turns off the NX Huge Page recovery worker when any memslot
> > has dirty logging enabled. This avoids theoretical performance problems
> > and reduces the CPU usage of NX Huge Pages when a VM is in the pre-copy
> > phase of a Live Migration.
> >
> > Tested manually and ran all selftests.
> >
> > David Matlack (2):
> > KVM: Keep track of the number of memslots with dirty logging enabled
> > KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is
> > enabled
> >
> > arch/x86/kvm/mmu/mmu.c | 8 ++++++++
> > include/linux/kvm_host.h | 2 ++
> > virt/kvm/kvm_main.c | 10 ++++++++++
> > 3 files changed, 20 insertions(+)
> >
> >
> > base-commit: e18d6152ff0f41b7f01f9817372022df04e0d354
>
> This can be a bit problematic because for example you could have dirty
> logging enabled only for a framebuffer or similar. In this case the
> memory being logged will not be the same as the one that is NX-split.
Ah, thanks for pointing that out. It's good to know there are other
use-cases for dirty logging outside of live migration. At Google we
tend to hyper focus on LM.
>
> Perhaps we can take advantage of eager page splitting, that is you can
> add a bool to kvm_mmu_page that is set by shadow_mmu_get_sp_for_split
> and tdp_mmu_alloc_sp_for_split (or a similar place)?
I don't think that would help since eagerly-split pages aren't on the
list of possible NX Huge Pages. i.e. the recovery worker won't try to
zap them. The main thing I want to avoid is the recovery worker
zapping ranges that were actually split for NX Huge Pages while they
are being dirty tracked.
I'll experiment with a more accurate solution. i.e. have the recovery
worker lookup the memslot for each SP and check if it has dirty
logging enabled. Maybe the increase in CPU usage won't be as bad as I
thought.
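Roughly, the per-SP check would look something like this (untested sketch):

  struct kvm_memslots *slots;
  struct kvm_memory_slot *slot;

  slots = kvm_memslots_for_spte_role(kvm, sp->role);
  slot = __gfn_to_memslot(slots, sp->gfn);

  /* Leave the SP in place while its memslot is being dirty tracked. */
  if (slot && kvm_slot_dirty_track_enabled(slot))
          continue;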
>
> Paolo
>
* Re: [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: Sean Christopherson @ 2022-10-28 21:07 UTC
To: David Matlack; +Cc: Paolo Bonzini, kvm
On Fri, Oct 28, 2022, David Matlack wrote:
> I'll experiment with a more accurate solution. i.e. have the recovery
> worker lookup the memslot for each SP and check if it has dirty
> logging enabled. Maybe the increase in CPU usage won't be as bad as I
> thought.
If you end up grabbing the memslot, use kvm_mmu_max_mapping_level() instead of
checking only dirty logging. That way KVM will avoid zapping shadow pages that
could have been NX huge pages when they were created, but can no longer be NX huge
pages due to something other than dirty logging, e.g. because the gfn is being
shadowed for nested TDP.
* Re: [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: David Matlack @ 2022-10-28 21:24 UTC
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm
On Fri, Oct 28, 2022 at 2:07 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Oct 28, 2022, David Matlack wrote:
> > I'll experiment with a more accurate solution. i.e. have the recovery
> > worker lookup the memslot for each SP and check if it has dirty
> > logging enabled. Maybe the increase in CPU usage won't be as bad as I
> > thought.
>
> If you end up grabbing the memslot, use kvm_mmu_max_mapping_level() instead of
> checking only dirty logging. That way KVM will avoid zapping shadow pages that
> could have been NX huge pages when they were created, but can no longer be NX huge
> pages due to something other than dirty logging, e.g. because the gfn is being
> shadowed for nested TDP.
kvm_mmu_max_mapping_level() doesn't check if dirty logging is enabled
and does the unnecessary work of checking the host mapping level
(which requires knowing the pfn). I could refactor
kvm_mmu_hugepage_adjust() and kvm_mmu_max_mapping_level() though to
achieve what you suggest. Specifically, when recovering NX Huge Pages,
check if dirty logging is enabled and if a huge page is disallowed
(lpage_info_slot), and share that code with the fault handler.
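I.e. end up with something like this shared between the fault handler and the
recovery worker (sketch only, the helper name is made up):

  static bool hugepage_allowed(const struct kvm_memory_slot *slot, gfn_t gfn,
                               int level)
  {
          if (kvm_slot_dirty_track_enabled(slot))
                  return false;

          return !lpage_info_slot(gfn, slot, level)->disallow_lpage;
  }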
* Re: [PATCH 0/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: Sean Christopherson @ 2022-10-28 21:42 UTC
To: David Matlack; +Cc: Paolo Bonzini, kvm
On Fri, Oct 28, 2022, David Matlack wrote:
> On Fri, Oct 28, 2022 at 2:07 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Fri, Oct 28, 2022, David Matlack wrote:
> > > I'll experiment with a more accurate solution. i.e. have the recovery
> > > worker lookup the memslot for each SP and check if it has dirty
> > > logging enabled. Maybe the increase in CPU usage won't be as bad as I
> > > thought.
> >
> > If you end up grabbing the memslot, use kvm_mmu_max_mapping_level() instead of
> > checking only dirty logging. That way KVM will avoid zapping shadow pages that
> > could have been NX huge pages when they were created, but can no longer be NX huge
> > pages due to something other than dirty logging, e.g. because the gfn is being
> > shadowed for nested TDP.
>
> kvm_mmu_max_mapping_level() doesn't check if dirty logging is enabled
Gah, I forgot that kvm_mmu_hugepage_adjust() does a one-off check for dirty logging
instead of the info being fed into slot->arch.lpage_info. Never mind.