public inbox for kvm@vger.kernel.org
* [RFC PATCH 0/4] KVM: x86/mmu: Rework marking folios dirty/accessed
@ 2024-03-20  0:50 Sean Christopherson
  2024-03-20  0:50 ` [RFC PATCH 1/4] KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE Sean Christopherson
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Sean Christopherson @ 2024-03-20  0:50 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson
  Cc: kvm, linux-kernel, David Hildenbrand, David Matlack,
	David Stevens, Matthew Wilcox

Rework KVM to mark folios dirty when creating shadow/secondary PTEs (SPTEs),
i.e. when creating mappings for KVM guests, instead of when zapping or
modifying SPTEs, e.g. when dropping mappings.

The motivation is twofold:

  1. Marking folios dirty and accessed when zapping can be extremely
     expensive and wasteful, e.g. if KVM shattered a 1GiB hugepage into
     512*512 4KiB SPTEs for dirty logging, then KVM marks the huge folio
     dirty and accessed for all 512*512 SPTEs.

  2. x86 diverges from literally every other architecture, which updates
     folios when mappings are created.  AFAIK, x86 is unique in that it's
     the only KVM arch that prefetches PTEs, so it's not quite an apples-
     to-apples comparison, but I don't see any reason for the dirty logic
     in particular to be different.

I tagged this RFC as it is barely tested, and because I'm not 100% positive
there isn't some weird edge case I'm missing, which is why I Cc'd David H.
and Matthew.

Note, I'm going to be offline from ~now until April 1st.  I rushed this out
as it could impact David S.'s kvm_follow_pfn series[*], which is imminent.
E.g. if KVM stops marking pages dirty and accessed everywhere, adding
SPTE_MMU_PAGE_REFCOUNTED just to sanity check that the refcount is elevated
seems like a poor tradeoff (medium complexity and annoying to maintain, for
not much benefit).

Regarding David S.'s series, I wouldn't be at all opposed to going even
further and having x86 follow all other architectures by marking pages
accessed _only_ at map time, at which point I think KVM could simply pass in
FOLL_TOUCH as appropriate, and thus dedup a fair bit of arch code.

Lastly, regarding bullet #1 above, we might be able to eke out better
performance by batching calls to folio_mark_{accessed,dirty}() on the backend,
e.g. when zapping SPTEs that KVM knows are covered by a single hugepage.  But
I think in practice any benefit would be marginal, as it would be quite odd
for KVM to fault-in a 1GiB hugepage at 4KiB granularity.

And _if_ we wanted to optimize that case, I suspect we'd be better off
pre-mapping all SPTEs for a pfn that is mapped at a larger granularity in
the primary MMU.  E.g. if KVM is dirty logging a 1GiB HugeTLB page, install
MMU-writable 4KiB SPTEs for the entire 1GiB region when any pfn is accessed.

P.S. Matthew ruined the "nothing but Davids!" Cc list.

[*] https://lore.kernel.org/all/20240229025759.1187910-1-stevensd@google.com

Sean Christopherson (4):
  KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf
    SPTE
  KVM: x86/mmu: Mark folio dirty when creating SPTE, not when
    zapping/modifying
  KVM: x86/mmu: Mark page/folio accessed only when zapping leaf SPTEs
  KVM: x86/mmu: Don't force flush if SPTE update clears Accessed bit

 Documentation/virt/kvm/locking.rst | 76 +++++++++++++++---------------
 arch/x86/kvm/mmu/mmu.c             | 60 +++++------------------
 arch/x86/kvm/mmu/paging_tmpl.h     |  7 ++-
 arch/x86/kvm/mmu/spte.c            | 27 ++++++++---
 arch/x86/kvm/mmu/tdp_mmu.c         | 19 ++------
 5 files changed, 78 insertions(+), 111 deletions(-)


base-commit: 964d0c614c7f71917305a5afdca9178fe8231434
-- 
2.44.0.291.gc1ea87d7ee-goog



Thread overview: 20+ messages
2024-03-20  0:50 [RFC PATCH 0/4] KVM: x86/mmu: Rework marking folios dirty/accessed Sean Christopherson
2024-03-20  0:50 ` [RFC PATCH 1/4] KVM: x86/mmu: Skip the "try unsync" path iff the old SPTE was a leaf SPTE Sean Christopherson
2024-03-20  0:50 ` [RFC PATCH 2/4] KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying Sean Christopherson
2024-03-20  0:50 ` [RFC PATCH 3/4] KVM: x86/mmu: Mark page/folio accessed only when zapping leaf SPTEs Sean Christopherson
2024-03-20  0:50 ` [RFC PATCH 4/4] KVM: x86/mmu: Don't force flush if SPTE update clears Accessed bit Sean Christopherson
2024-03-20 12:56 ` [RFC PATCH 0/4] KVM: x86/mmu: Rework marking folios dirty/accessed David Hildenbrand
2024-04-02 17:38   ` David Matlack
2024-04-02 18:31     ` David Hildenbrand
2024-04-03  0:17       ` Sean Christopherson
2024-04-03 21:43         ` David Hildenbrand
2024-04-03 22:19           ` Sean Christopherson
2024-04-04 15:44             ` David Hildenbrand
2024-04-04 17:31               ` Sean Christopherson
2024-04-04 18:23                 ` David Hildenbrand
2024-04-04 22:02                   ` Sean Christopherson
2024-04-05  6:53                     ` David Hildenbrand
2024-04-05  9:37                       ` Paolo Bonzini
2024-04-05 10:14                         ` David Hildenbrand
2024-04-05 13:59                           ` Sean Christopherson
2024-04-05 14:06                             ` Paolo Bonzini
