* [PATCH v8 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 2/8] KVM: mmu: Introduce __kvm_follow_pfn function David Stevens
` (6 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel
From: Sean Christopherson <seanjc@google.com>
Assert that a page's refcount is elevated, i.e. that _something_ holds a
reference to the page, when KVM marks a page as accessed and/or dirty.
KVM typically doesn't hold a reference to pages that are mapped into the
guest, e.g. to allow page migration, compaction, swap, etc., and instead
relies on mmu_notifiers to react to changes in the primary MMU.
Incorrect handling of mmu_notifier events (or similar mechanisms) can
result in KVM keeping a mapping beyond the lifetime of the backing page,
i.e. can (and often does) result in use-after-free. Yelling if KVM marks
a freed page as accessed/dirty doesn't prevent badness as KVM usually
only does A/D updates when unmapping memory from the guest, i.e. the
assertion fires well after an underlying bug has occurred, but yelling
does help detect, triage, and debug use-after-free bugs.
Note, the assertion must use page_count(), NOT page_ref_count()! For
hugepages, the returned struct page may be a tail page and thus not have
its own refcount.
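
As an aside for illustration (not part of the patch): page_count() takes
the refcount of the compound head, while page_ref_count() reads the given
page's own _refcount, which is zero for the tail pages of a compound page.
A hedged sketch of the difference, with a hypothetical 'page' that may be
a hugepage tail:

	/* Wrong for tail pages: reads page->_refcount directly, often 0. */
	WARN_ON_ONCE(!page_ref_count(page));

	/* Correct: effectively page_ref_count(compound_head(page)). */
	WARN_ON_ONCE(!page_count(page));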
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
virt/kvm/kvm_main.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5bbb5612b207..1e4586aaa6cb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2888,6 +2888,19 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
static bool kvm_is_ad_tracked_page(struct page *page)
{
+ /*
+ * Assert that KVM isn't attempting to mark a freed page as Accessed or
+ * Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug. KVM
+ * (typically) doesn't pin pages that are mapped in KVM's MMU, and
+ * instead relies on mmu_notifiers to know when a mapping needs to be
+ * zapped/invalidated. Unmapping from KVM's MMU must happen _before_
+ * KVM returns from its mmu_notifier, i.e. the page should have an
+ * elevated refcount at this point even though KVM doesn't hold a
+ * reference of its own.
+ */
+ if (WARN_ON_ONCE(!page_count(page)))
+ return false;
+
/*
* Per page-flags.h, pages tagged PG_reserved "should in general not be
* touched (e.g. set dirty) except by its owner".
--
2.42.0.rc1.204.g551eb34607-goog
* [PATCH v8 2/8] KVM: mmu: Introduce __kvm_follow_pfn function
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
2023-08-24 8:04 ` [PATCH v8 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 3/8] KVM: mmu: Make __kvm_follow_pfn not imply FOLL_GET David Stevens
` (5 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Introduce __kvm_follow_pfn, which will replace __gfn_to_pfn_memslot.
__kvm_follow_pfn refactors the old API's arguments into a struct and,
where possible, combines the boolean arguments into a single flags
argument.
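
To make the shape of the conversion concrete, an abbreviated before/after
sketch of a typical caller (condensed from the conversions in the diff
below; error handling omitted):

	/* Before: positional arguments. */
	pfn = __gfn_to_pfn_memslot(slot, gfn, false, false, NULL,
				   write_fault, &writable, &hva);

	/* After: named fields of struct kvm_follow_pfn. */
	struct kvm_follow_pfn foll = {
		.slot = slot,
		.gfn = gfn,
		.flags = write_fault ? FOLL_WRITE : 0,
		.try_map_writable = true,
	};
	pfn = __kvm_follow_pfn(&foll);
	writable = foll.writable;
	hva = foll.hva;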
Signed-off-by: David Stevens <stevensd@chromium.org>
---
include/linux/kvm_host.h | 16 ++++
virt/kvm/kvm_main.c | 171 ++++++++++++++++++++++-----------------
virt/kvm/kvm_mm.h | 3 +-
virt/kvm/pfncache.c | 10 ++-
4 files changed, 123 insertions(+), 77 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9d3ac7720da9..59d9b5e5db33 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -97,6 +97,7 @@
#define KVM_PFN_ERR_HWPOISON (KVM_PFN_ERR_MASK + 1)
#define KVM_PFN_ERR_RO_FAULT (KVM_PFN_ERR_MASK + 2)
#define KVM_PFN_ERR_SIGPENDING (KVM_PFN_ERR_MASK + 3)
+#define KVM_PFN_ERR_NEEDS_IO (KVM_PFN_ERR_MASK + 4)
/*
* error pfns indicate that the gfn is in slot but faild to
@@ -1156,6 +1157,21 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
void kvm_release_page_clean(struct page *page);
void kvm_release_page_dirty(struct page *page);
+struct kvm_follow_pfn {
+ const struct kvm_memory_slot *slot;
+ gfn_t gfn;
+ unsigned int flags;
+ bool atomic;
+ /* Try to create a writable mapping even for a read fault */
+ bool try_map_writable;
+
+ /* Outputs of __kvm_follow_pfn */
+ hva_t hva;
+ bool writable;
+};
+
+kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll);
+
kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
bool *writable);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1e4586aaa6cb..5fde46f05117 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2486,8 +2486,7 @@ static inline int check_user_page_hwpoison(unsigned long addr)
* true indicates success, otherwise false is returned. It's also the
* only part that runs if we can in atomic context.
*/
-static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
- bool *writable, kvm_pfn_t *pfn)
+static bool hva_to_pfn_fast(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
{
struct page *page[1];
@@ -2496,14 +2495,12 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
* or the caller allows to map a writable pfn for a read fault
* request.
*/
- if (!(write_fault || writable))
+ if (!((foll->flags & FOLL_WRITE) || foll->try_map_writable))
return false;
- if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
+ if (get_user_page_fast_only(foll->hva, FOLL_WRITE, page)) {
*pfn = page_to_pfn(page[0]);
-
- if (writable)
- *writable = true;
+ foll->writable = true;
return true;
}
@@ -2514,35 +2511,26 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
* The slow path to get the pfn of the specified host virtual address,
* 1 indicates success, -errno is returned if error is detected.
*/
-static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
- bool interruptible, bool *writable, kvm_pfn_t *pfn)
+static int hva_to_pfn_slow(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
{
- unsigned int flags = FOLL_HWPOISON;
+ unsigned int flags = FOLL_HWPOISON | foll->flags;
struct page *page;
int npages;
might_sleep();
- if (writable)
- *writable = write_fault;
-
- if (write_fault)
- flags |= FOLL_WRITE;
- if (async)
- flags |= FOLL_NOWAIT;
- if (interruptible)
- flags |= FOLL_INTERRUPTIBLE;
-
- npages = get_user_pages_unlocked(addr, 1, &page, flags);
+ npages = get_user_pages_unlocked(foll->hva, 1, &page, flags);
if (npages != 1)
return npages;
- /* map read fault as writable if possible */
- if (unlikely(!write_fault) && writable) {
+ if (foll->flags & FOLL_WRITE) {
+ foll->writable = true;
+ } else if (foll->try_map_writable) {
struct page *wpage;
- if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
- *writable = true;
+ /* map read fault as writable if possible */
+ if (get_user_page_fast_only(foll->hva, FOLL_WRITE, &wpage)) {
+ foll->writable = true;
put_page(page);
page = wpage;
}
@@ -2573,23 +2561,23 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn)
}
static int hva_to_pfn_remapped(struct vm_area_struct *vma,
- unsigned long addr, bool write_fault,
- bool *writable, kvm_pfn_t *p_pfn)
+ struct kvm_follow_pfn *foll, kvm_pfn_t *p_pfn)
{
kvm_pfn_t pfn;
pte_t *ptep;
pte_t pte;
spinlock_t *ptl;
+ bool write_fault = foll->flags & FOLL_WRITE;
int r;
- r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+ r = follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl);
if (r) {
/*
* get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
* not call the fault handler, so do it here.
*/
bool unlocked = false;
- r = fixup_user_fault(current->mm, addr,
+ r = fixup_user_fault(current->mm, foll->hva,
(write_fault ? FAULT_FLAG_WRITE : 0),
&unlocked);
if (unlocked)
@@ -2597,7 +2585,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
if (r)
return r;
- r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+ r = follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl);
if (r)
return r;
}
@@ -2609,8 +2597,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
goto out;
}
- if (writable)
- *writable = pte_write(pte);
+ foll->writable = pte_write(*ptep);
pfn = pte_pfn(pte);
/*
@@ -2655,24 +2642,22 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
* 2): @write_fault = false && @writable, @writable will tell the caller
* whether the mapping is writable.
*/
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
- bool *async, bool write_fault, bool *writable)
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll)
{
struct vm_area_struct *vma;
kvm_pfn_t pfn;
int npages, r;
/* we can do it either atomically or asynchronously, not both */
- BUG_ON(atomic && async);
+ BUG_ON(foll->atomic && (foll->flags & FOLL_NOWAIT));
- if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+ if (hva_to_pfn_fast(foll, &pfn))
return pfn;
- if (atomic)
+ if (foll->atomic)
return KVM_PFN_ERR_FAULT;
- npages = hva_to_pfn_slow(addr, async, write_fault, interruptible,
- writable, &pfn);
+ npages = hva_to_pfn_slow(foll, &pfn);
if (npages == 1)
return pfn;
if (npages == -EINTR)
@@ -2680,83 +2665,123 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
mmap_read_lock(current->mm);
if (npages == -EHWPOISON ||
- (!async && check_user_page_hwpoison(addr))) {
+ (!(foll->flags & FOLL_NOWAIT) && check_user_page_hwpoison(foll->hva))) {
pfn = KVM_PFN_ERR_HWPOISON;
goto exit;
}
retry:
- vma = vma_lookup(current->mm, addr);
+ vma = vma_lookup(current->mm, foll->hva);
if (vma == NULL)
pfn = KVM_PFN_ERR_FAULT;
else if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
- r = hva_to_pfn_remapped(vma, addr, write_fault, writable, &pfn);
+ r = hva_to_pfn_remapped(vma, foll, &pfn);
if (r == -EAGAIN)
goto retry;
if (r < 0)
pfn = KVM_PFN_ERR_FAULT;
} else {
- if (async && vma_is_valid(vma, write_fault))
- *async = true;
- pfn = KVM_PFN_ERR_FAULT;
+ if ((foll->flags & FOLL_NOWAIT) &&
+ vma_is_valid(vma, foll->flags & FOLL_WRITE))
+ pfn = KVM_PFN_ERR_NEEDS_IO;
+ else
+ pfn = KVM_PFN_ERR_FAULT;
}
exit:
mmap_read_unlock(current->mm);
return pfn;
}
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
- bool atomic, bool interruptible, bool *async,
- bool write_fault, bool *writable, hva_t *hva)
+kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll)
{
- unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
+ foll->writable = false;
+ foll->hva = __gfn_to_hva_many(foll->slot, foll->gfn, NULL,
+ foll->flags & FOLL_WRITE);
- if (hva)
- *hva = addr;
-
- if (addr == KVM_HVA_ERR_RO_BAD) {
- if (writable)
- *writable = false;
+ if (foll->hva == KVM_HVA_ERR_RO_BAD)
return KVM_PFN_ERR_RO_FAULT;
- }
- if (kvm_is_error_hva(addr)) {
- if (writable)
- *writable = false;
+ if (kvm_is_error_hva(foll->hva))
return KVM_PFN_NOSLOT;
- }
- /* Do not map writable pfn in the readonly memslot. */
- if (writable && memslot_is_readonly(slot)) {
- *writable = false;
- writable = NULL;
- }
+ if (memslot_is_readonly(foll->slot))
+ foll->try_map_writable = false;
- return hva_to_pfn(addr, atomic, interruptible, async, write_fault,
- writable);
+ return hva_to_pfn(foll);
+}
+EXPORT_SYMBOL_GPL(__kvm_follow_pfn);
+
+kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
+ bool atomic, bool interruptible, bool *async,
+ bool write_fault, bool *writable, hva_t *hva)
+{
+ kvm_pfn_t pfn;
+ struct kvm_follow_pfn foll = {
+ .slot = slot,
+ .gfn = gfn,
+ .flags = 0,
+ .atomic = atomic,
+ .try_map_writable = !!writable,
+ };
+
+ if (write_fault)
+ foll.flags |= FOLL_WRITE;
+ if (async)
+ foll.flags |= FOLL_NOWAIT;
+ if (interruptible)
+ foll.flags |= FOLL_INTERRUPTIBLE;
+
+ pfn = __kvm_follow_pfn(&foll);
+ if (pfn == KVM_PFN_ERR_NEEDS_IO) {
+ *async = true;
+ pfn = KVM_PFN_ERR_FAULT;
+ }
+ if (hva)
+ *hva = foll.hva;
+ if (writable)
+ *writable = foll.writable;
+ return pfn;
}
EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
bool *writable)
{
- return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false,
- NULL, write_fault, writable, NULL);
+ kvm_pfn_t pfn;
+ struct kvm_follow_pfn foll = {
+ .slot = gfn_to_memslot(kvm, gfn),
+ .gfn = gfn,
+ .flags = write_fault ? FOLL_WRITE : 0,
+ .try_map_writable = !!writable,
+ };
+ pfn = __kvm_follow_pfn(&foll);
+ if (writable)
+ *writable = foll.writable;
+ return pfn;
}
EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
{
- return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true,
- NULL, NULL);
+ struct kvm_follow_pfn foll = {
+ .slot = slot,
+ .gfn = gfn,
+ .flags = FOLL_WRITE,
+ };
+ return __kvm_follow_pfn(&foll);
}
EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
{
- return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true,
- NULL, NULL);
+ struct kvm_follow_pfn foll = {
+ .slot = slot,
+ .gfn = gfn,
+ .flags = FOLL_WRITE,
+ .atomic = true,
+ };
+ return __kvm_follow_pfn(&foll);
}
EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 180f1a09e6ba..ed896aee5396 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -20,8 +20,7 @@
#define KVM_MMU_UNLOCK(kvm) spin_unlock(&(kvm)->mmu_lock)
#endif /* KVM_HAVE_MMU_RWLOCK */
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
- bool *async, bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll);
#ifdef CONFIG_HAVE_KVM_PFNCACHE
void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 2d6aba677830..86cd40acad11 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -144,6 +144,12 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
void *new_khva = NULL;
unsigned long mmu_seq;
+ struct kvm_follow_pfn foll = {
+ .slot = gpc->memslot,
+ .gfn = gpa_to_gfn(gpc->gpa),
+ .flags = FOLL_WRITE,
+ .hva = gpc->uhva,
+ };
lockdep_assert_held(&gpc->refresh_lock);
@@ -182,8 +188,8 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
cond_resched();
}
- /* We always request a writeable mapping */
- new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL);
+ /* We always request a writable mapping */
+ new_pfn = hva_to_pfn(&foll);
if (is_error_noslot_pfn(new_pfn))
goto out_error;
--
2.42.0.rc1.204.g551eb34607-goog
* [PATCH v8 3/8] KVM: mmu: Make __kvm_follow_pfn not imply FOLL_GET
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
2023-08-24 8:04 ` [PATCH v8 1/8] KVM: Assert that a page's refcount is elevated when marking accessed/dirty David Stevens
2023-08-24 8:04 ` [PATCH v8 2/8] KVM: mmu: Introduce __kvm_follow_pfn function David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn David Stevens
` (4 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Make it so that __kvm_follow_pfn does not imply FOLL_GET. This allows
callers to resolve a gfn when the associated pfn has a valid struct page
that isn't being actively refcounted (e.g. tail pages of non-compound
higher order pages). For a caller to safely omit FOLL_GET, all usages of
the returned pfn must be guarded by an mmu notifier.
This also adds an is_refcounted_page out parameter to struct kvm_follow_pfn that
is set when the returned pfn has an associated struct page with a valid
refcount. Callers that don't pass FOLL_GET should remember this value
and use it to avoid places like kvm_is_ad_tracked_page that assume a
non-zero refcount.
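
A minimal sketch of such a caller, assuming an mmu-notifier-protected
fault path (hypothetical here; the actual conversion of the x86 TDP MMU
comes later in this series, which is also where kvm_set_page_accessed()
gets exported):

	struct kvm_follow_pfn foll = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_WRITE,			/* note: no FOLL_GET */
		.guarded_by_mmu_notifier = true,	/* mandatory without FOLL_GET */
	};
	kvm_pfn_t pfn = __kvm_follow_pfn(&foll);

	/* Only touch struct page A/D state for refcounted pages. */
	if (!is_error_noslot_pfn(pfn) && foll.is_refcounted_page)
		kvm_set_page_accessed(pfn_to_page(pfn));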
Signed-off-by: David Stevens <stevensd@chromium.org>
---
include/linux/kvm_host.h | 7 ++++
virt/kvm/kvm_main.c | 84 ++++++++++++++++++++++++----------------
virt/kvm/pfncache.c | 2 +-
3 files changed, 58 insertions(+), 35 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 59d9b5e5db33..713fc2d91f95 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1164,10 +1164,17 @@ struct kvm_follow_pfn {
bool atomic;
/* Try to create a writable mapping even for a read fault */
bool try_map_writable;
+ /*
+ * Usage of the returned pfn will be guarded by an mmu notifier. Must
+ * be true if FOLL_GET is not set.
+ */
+ bool guarded_by_mmu_notifier;
/* Outputs of __kvm_follow_pfn */
hva_t hva;
bool writable;
+ /* True if the returned pfn is for a page with a valid refcount. */
+ bool is_refcounted_page;
};
kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5fde46f05117..963b96cd8ff9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2481,6 +2481,25 @@ static inline int check_user_page_hwpoison(unsigned long addr)
return rc == -EHWPOISON;
}
+static kvm_pfn_t kvm_follow_refcounted_pfn(struct kvm_follow_pfn *foll,
+ struct page *page)
+{
+ kvm_pfn_t pfn = page_to_pfn(page);
+
+ foll->is_refcounted_page = true;
+
+ /*
+ * FIXME: Ideally, KVM wouldn't pass FOLL_GET to gup() when the caller
+ * doesn't want to grab a reference, but gup() doesn't support getting
+ * just the pfn, i.e. FOLL_GET is effectively mandatory. If that ever
+ * changes, drop this and simply don't pass FOLL_GET to gup().
+ */
+ if (!(foll->flags & FOLL_GET))
+ put_page(page);
+
+ return pfn;
+}
+
/*
* The fast path to get the writable pfn which will be stored in @pfn,
* true indicates success, otherwise false is returned. It's also the
@@ -2499,8 +2518,8 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
return false;
if (get_user_page_fast_only(foll->hva, FOLL_WRITE, page)) {
- *pfn = page_to_pfn(page[0]);
foll->writable = true;
+ *pfn = kvm_follow_refcounted_pfn(foll, page[0]);
return true;
}
@@ -2513,7 +2532,7 @@ static bool hva_to_pfn_fast(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
*/
static int hva_to_pfn_slow(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
{
- unsigned int flags = FOLL_HWPOISON | foll->flags;
+ unsigned int flags = FOLL_HWPOISON | FOLL_GET | foll->flags;
struct page *page;
int npages;
@@ -2535,7 +2554,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *foll, kvm_pfn_t *pfn)
page = wpage;
}
}
- *pfn = page_to_pfn(page);
+ *pfn = kvm_follow_refcounted_pfn(foll, page);
return npages;
}
@@ -2550,16 +2569,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
return true;
}
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
- struct page *page = kvm_pfn_to_refcounted_page(pfn);
-
- if (!page)
- return 1;
-
- return get_page_unless_zero(page);
-}
-
static int hva_to_pfn_remapped(struct vm_area_struct *vma,
struct kvm_follow_pfn *foll, kvm_pfn_t *p_pfn)
{
@@ -2568,6 +2577,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
pte_t pte;
spinlock_t *ptl;
bool write_fault = foll->flags & FOLL_WRITE;
+ struct page *page;
int r;
r = follow_pte(vma->vm_mm, foll->hva, &ptep, &ptl);
@@ -2601,28 +2611,29 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
pfn = pte_pfn(pte);
/*
- * Get a reference here because callers of *hva_to_pfn* and
- * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
- * returned pfn. This is only needed if the VMA has VM_MIXEDMAP
- * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will
- * simply do nothing for reserved pfns.
- *
- * Whoever called remap_pfn_range is also going to call e.g.
- * unmap_mapping_range before the underlying pages are freed,
- * causing a call to our MMU notifier.
+ * Now deal with reference counting. If kvm_pfn_to_refcounted_page
+ * returns NULL, then there's no refcount to worry about.
*
- * Certain IO or PFNMAP mappings can be backed with valid
- * struct pages, but be allocated without refcounting e.g.,
- * tail pages of non-compound higher order allocations, which
- * would then underflow the refcount when the caller does the
- * required put_page. Don't allow those pages here.
+ * Otherwise, certain IO or PFNMAP mappings can be backed with valid
+ * struct pages but be allocated without refcounting e.g., tail pages of
+ * non-compound higher order allocations. If FOLL_GET is set and we
+ * increment such a refcount, then when that pfn is eventually passed to
+ * kvm_release_pfn_clean, its refcount would hit zero and be incorrectly
+ * freed. Therefore don't allow those pages here when FOLL_GET is set.
*/
- if (!kvm_try_get_pfn(pfn))
- r = -EFAULT;
+ page = kvm_pfn_to_refcounted_page(pfn);
+ if (!page)
+ goto out;
+
+ if (get_page_unless_zero(page))
+ WARN_ON_ONCE(kvm_follow_refcounted_pfn(foll, page) != pfn);
out:
pte_unmap_unlock(ptep, ptl);
- *p_pfn = pfn;
+ if (!foll->is_refcounted_page && !foll->guarded_by_mmu_notifier)
+ r = -EFAULT;
+ else
+ *p_pfn = pfn;
return r;
}
@@ -2696,6 +2707,11 @@ kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *foll)
kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll)
{
foll->writable = false;
+ foll->is_refcounted_page = false;
+
+ if (WARN_ON_ONCE(!(foll->flags & FOLL_GET) && !foll->guarded_by_mmu_notifier))
+ return KVM_PFN_ERR_FAULT;
+
foll->hva = __gfn_to_hva_many(foll->slot, foll->gfn, NULL,
foll->flags & FOLL_WRITE);
@@ -2720,7 +2736,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
struct kvm_follow_pfn foll = {
.slot = slot,
.gfn = gfn,
- .flags = 0,
+ .flags = FOLL_GET,
.atomic = atomic,
.try_map_writable = !!writable,
};
@@ -2752,7 +2768,7 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
struct kvm_follow_pfn foll = {
.slot = gfn_to_memslot(kvm, gfn),
.gfn = gfn,
- .flags = write_fault ? FOLL_WRITE : 0,
+ .flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
.try_map_writable = !!writable,
};
pfn = __kvm_follow_pfn(&foll);
@@ -2767,7 +2783,7 @@ kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
struct kvm_follow_pfn foll = {
.slot = slot,
.gfn = gfn,
- .flags = FOLL_WRITE,
+ .flags = FOLL_GET | FOLL_WRITE,
};
return __kvm_follow_pfn(&foll);
}
@@ -2778,7 +2794,7 @@ kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gf
struct kvm_follow_pfn foll = {
.slot = slot,
.gfn = gfn,
- .flags = FOLL_WRITE,
+ .flags = FOLL_GET | FOLL_WRITE,
.atomic = true,
};
return __kvm_follow_pfn(&foll);
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 86cd40acad11..c558f510ab51 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -147,7 +147,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
struct kvm_follow_pfn foll = {
.slot = gpc->memslot,
.gfn = gpa_to_gfn(gpc->gpa),
- .flags = FOLL_WRITE,
+ .flags = FOLL_WRITE | FOLL_GET,
.hva = gpc->uhva,
};
--
2.42.0.rc1.204.g551eb34607-goog
* [PATCH v8 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
` (2 preceding siblings ...)
2023-08-24 8:04 ` [PATCH v8 3/8] KVM: mmu: Make __kvm_follow_pfn not imply FOLL_GET David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 5/8] KVM: x86/mmu: Don't pass FOLL_GET " David Stevens
` (3 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn. Most arguments
directly map to the new API. The largest change is replacing the async
in/out parameter with the FOLL_NOWAIT flag and the KVM_PFN_ERR_NEEDS_IO
return value.
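
In outline, the resulting control flow in __kvm_faultin_pfn() is
(abbreviated from the diff below):

	foll.flags |= FOLL_NOWAIT;
	fault->pfn = __kvm_follow_pfn(&foll);
	if (!is_error_noslot_pfn(fault->pfn))
		goto success;

	/* Only KVM_PFN_ERR_NEEDS_IO is worth an async #PF or a retry. */
	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
		return RET_PF_CONTINUE;

	/* ... try an async #PF; if that isn't possible, wait for I/O: */
	foll.flags |= FOLL_INTERRUPTIBLE;
	foll.flags &= ~FOLL_NOWAIT;
	fault->pfn = __kvm_follow_pfn(&foll);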
Signed-off-by: David Stevens <stevensd@chromium.org>
---
arch/x86/kvm/mmu/mmu.c | 41 +++++++++++++++++++++++++++++++----------
1 file changed, 31 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..dabae67f198b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
struct kvm_memory_slot *slot = fault->slot;
- bool async;
+ struct kvm_follow_pfn foll = {
+ .slot = slot,
+ .gfn = fault->gfn,
+ .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+ .try_map_writable = true,
+ };
/*
* Retry the page fault if the gfn hit a memslot that is being deleted
@@ -4325,12 +4330,20 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
return RET_PF_EMULATE;
}
- async = false;
- fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
- fault->write, &fault->map_writable,
- &fault->hva);
- if (!async)
- return RET_PF_CONTINUE; /* *pfn has correct page already */
+ foll.flags |= FOLL_NOWAIT;
+ fault->pfn = __kvm_follow_pfn(&foll);
+
+ if (!is_error_noslot_pfn(fault->pfn))
+ goto success;
+
+ /*
+ * If __kvm_follow_pfn() failed because I/O is needed to fault in the
+ * page, then either set up an asynchronous #PF to do the I/O, or if
+ * doing an async #PF isn't possible, retry __kvm_follow_pfn() with
+ * I/O allowed. All other failures are fatal, i.e. retrying won't help.
+ */
+ if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
+ return RET_PF_CONTINUE;
if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4348,9 +4361,17 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
* to wait for IO. Note, gup always bails if it is unable to quickly
* get a page and a fatal signal, i.e. SIGKILL, is pending.
*/
- fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL,
- fault->write, &fault->map_writable,
- &fault->hva);
+ foll.flags |= FOLL_INTERRUPTIBLE;
+ foll.flags &= ~FOLL_NOWAIT;
+ fault->pfn = __kvm_follow_pfn(&foll);
+
+ if (!is_error_noslot_pfn(fault->pfn))
+ goto success;
+
+ return RET_PF_CONTINUE;
+success:
+ fault->hva = foll.hva;
+ fault->map_writable = foll.writable;
return RET_PF_CONTINUE;
}
--
2.42.0.rc1.204.g551eb34607-goog
* [PATCH v8 5/8] KVM: x86/mmu: Don't pass FOLL_GET to __kvm_follow_pfn
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
` (3 preceding siblings ...)
2023-08-24 8:04 ` [PATCH v8 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 9:13 ` Mika Penttilä
2023-08-24 8:04 ` [PATCH v8 6/8] KVM: arm64: Migrate " David Stevens
` (2 subsequent siblings)
7 siblings, 1 reply; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Stop passing FOLL_GET to __kvm_follow_pfn. This allows the host to map
memory into the guest that is backed by un-refcounted struct pages - for
example, the tail pages of higher order non-compound pages allocated by
the amdgpu driver via ttm_pool_alloc_page.
The bulk of this change is tracking the is_refcounted_page flag so that
non-refcounted pages don't trigger page_count() == 0 warnings. This is
done by storing the flag in an unused bit in the SPTEs. That bit is not
available in PAE SPTEs, so FOLL_GET is only omitted for TDP on x86-64.
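
Condensed from the diff below, the tracking amounts to one new SPTE bit
plus a guard in every accessed/dirty update path:

	/* Indicates that the SPTE refers to a page with a valid refcount. */
	#define SPTE_MMU_PAGE_REFCOUNTED BIT_ULL(59)

	static inline bool is_refcounted_page_pte(u64 spte)
	{
		return spte & SPTE_MMU_PAGE_REFCOUNTED;
	}

	/* e.g. when a dirty SPTE is zapped: */
	if (is_dirty_spte(old_spte) && is_refcounted_page_pte(old_spte))
		kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));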
Signed-off-by: David Stevens <stevensd@chromium.org>
---
arch/x86/kvm/mmu/mmu.c | 55 +++++++++++++++++++++++----------
arch/x86/kvm/mmu/mmu_internal.h | 1 +
arch/x86/kvm/mmu/paging_tmpl.h | 8 +++--
arch/x86/kvm/mmu/spte.c | 4 ++-
arch/x86/kvm/mmu/spte.h | 12 ++++++-
arch/x86/kvm/mmu/tdp_mmu.c | 22 +++++++------
include/linux/kvm_host.h | 3 ++
virt/kvm/kvm_main.c | 6 ++--
8 files changed, 79 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dabae67f198b..4f5d33e95c6e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -553,12 +553,14 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
flush = true;
- kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+ if (is_refcounted_page_pte(old_spte))
+ kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
}
if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
flush = true;
- kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+ if (is_refcounted_page_pte(old_spte))
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
}
return flush;
@@ -596,14 +598,18 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
* before they are reclaimed. Sanity check that, if the pfn is backed
* by a refcounted page, the refcount is elevated.
*/
- page = kvm_pfn_to_refcounted_page(pfn);
- WARN_ON(page && !page_count(page));
+ if (is_refcounted_page_pte(old_spte)) {
+ page = kvm_pfn_to_refcounted_page(pfn);
+ WARN_ON(!page || !page_count(page));
+ }
- if (is_accessed_spte(old_spte))
- kvm_set_pfn_accessed(pfn);
+ if (is_refcounted_page_pte(old_spte)) {
+ if (is_accessed_spte(old_spte))
+ kvm_set_page_accessed(pfn_to_page(pfn));
- if (is_dirty_spte(old_spte))
- kvm_set_pfn_dirty(pfn);
+ if (is_dirty_spte(old_spte))
+ kvm_set_page_dirty(pfn_to_page(pfn));
+ }
return old_spte;
}
@@ -639,8 +645,8 @@ static bool mmu_spte_age(u64 *sptep)
* Capture the dirty status of the page, so that it doesn't get
* lost when the SPTE is marked for access tracking.
*/
- if (is_writable_pte(spte))
- kvm_set_pfn_dirty(spte_to_pfn(spte));
+ if (is_writable_pte(spte) && is_refcounted_page_pte(spte))
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte)));
spte = mark_spte_for_access_track(spte);
mmu_spte_update_no_track(sptep, spte);
@@ -1278,8 +1284,8 @@ static bool spte_wrprot_for_clear_dirty(u64 *sptep)
{
bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
(unsigned long *)sptep);
- if (was_writable && !spte_ad_enabled(*sptep))
- kvm_set_pfn_dirty(spte_to_pfn(*sptep));
+ if (was_writable && !spte_ad_enabled(*sptep) && is_refcounted_page_pte(*sptep))
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(*sptep)));
return was_writable;
}
@@ -2937,6 +2943,11 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
bool host_writable = !fault || fault->map_writable;
bool prefetch = !fault || fault->prefetch;
bool write_fault = fault && fault->write;
+ /*
+ * Prefetching uses gfn_to_page_many_atomic, which never gets
+ * non-refcounted pages.
+ */
+ bool is_refcounted = !fault || fault->is_refcounted_page;
pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
*sptep, write_fault, gfn);
@@ -2969,7 +2980,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
}
wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
- true, host_writable, &spte);
+ true, host_writable, is_refcounted, &spte);
if (*sptep == spte) {
ret = RET_PF_SPURIOUS;
@@ -4296,11 +4307,19 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
struct kvm_memory_slot *slot = fault->slot;
+ /*
+ * We only allow non-refcounted pages if we can track whether or not
+ * pages are refcounted via an SPTE bit. This bit is not available
+ * in PAE SPTEs, so pass FOLL_GET if we may have to deal with those.
+ */
+ unsigned int get_flag = (tdp_enabled && IS_ENABLED(CONFIG_X86_64)) ?
+ 0 : FOLL_GET;
struct kvm_follow_pfn foll = {
.slot = slot,
.gfn = fault->gfn,
- .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
+ .flags = (fault->write ? FOLL_WRITE : 0) | get_flag,
.try_map_writable = true,
+ .guarded_by_mmu_notifier = true,
};
/*
@@ -4317,6 +4336,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
fault->slot = NULL;
fault->pfn = KVM_PFN_NOSLOT;
fault->map_writable = false;
+ fault->is_refcounted_page = false;
return RET_PF_CONTINUE;
}
/*
@@ -4372,6 +4392,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
success:
fault->hva = foll.hva;
fault->map_writable = foll.writable;
+ fault->is_refcounted_page = foll.is_refcounted_page;
return RET_PF_CONTINUE;
}
@@ -4456,8 +4477,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
r = direct_map(vcpu, fault);
out_unlock:
+ if (fault->is_refcounted_page)
+ kvm_set_page_accessed(pfn_to_page(fault->pfn));
write_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(fault->pfn);
return r;
}
@@ -4534,8 +4556,9 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
r = kvm_tdp_mmu_map(vcpu, fault);
out_unlock:
+ if (fault->is_refcounted_page)
+ kvm_set_page_accessed(pfn_to_page(fault->pfn));
read_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(fault->pfn);
return r;
}
#endif
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce9..55790085884f 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -240,6 +240,7 @@ struct kvm_page_fault {
kvm_pfn_t pfn;
hva_t hva;
bool map_writable;
+ bool is_refcounted_page;
/*
* Indicates the guest is trying to write a gfn that contains one or
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0662e0278e70..4ffcebc0c3ce 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -828,8 +828,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
r = FNAME(fetch)(vcpu, fault, &walker);
out_unlock:
+ if (fault->is_refcounted_page)
+ kvm_set_page_accessed(pfn_to_page(fault->pfn));
write_unlock(&vcpu->kvm->mmu_lock);
- kvm_release_pfn_clean(fault->pfn);
return r;
}
@@ -883,7 +884,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
*/
static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
{
- bool host_writable;
+ bool host_writable, is_refcounted;
gpa_t first_pte_gpa;
u64 *sptep, spte;
struct kvm_memory_slot *slot;
@@ -940,10 +941,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
sptep = &sp->spt[i];
spte = *sptep;
host_writable = spte & shadow_host_writable_mask;
+ is_refcounted = spte & SPTE_MMU_PAGE_REFCOUNTED;
slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
make_spte(vcpu, sp, slot, pte_access, gfn,
spte_to_pfn(spte), spte, true, false,
- host_writable, &spte);
+ host_writable, is_refcounted, &spte);
return mmu_spte_update(sptep, spte);
}
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cf2c6426a6fc..5b13d9143d56 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -138,7 +138,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
const struct kvm_memory_slot *slot,
unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
u64 old_spte, bool prefetch, bool can_unsync,
- bool host_writable, u64 *new_spte)
+ bool host_writable, bool is_refcounted, u64 *new_spte)
{
int level = sp->role.level;
u64 spte = SPTE_MMU_PRESENT_MASK;
@@ -188,6 +188,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
if (level > PG_LEVEL_4K)
spte |= PT_PAGE_SIZE_MASK;
+ if (is_refcounted)
+ spte |= SPTE_MMU_PAGE_REFCOUNTED;
if (shadow_memtype_mask)
spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1279db2eab44..be93dd061ae3 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -95,6 +95,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
/* Defined only to keep the above static asserts readable. */
#undef SHADOW_ACC_TRACK_SAVED_MASK
+/*
+ * Indicates that the SPTE refers to a page with a valid refcount.
+ */
+#define SPTE_MMU_PAGE_REFCOUNTED BIT_ULL(59)
+
/*
* Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
* the memslots generation and is derived as follows:
@@ -332,6 +337,11 @@ static inline bool is_dirty_spte(u64 spte)
return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
}
+static inline bool is_refcounted_page_pte(u64 spte)
+{
+ return spte & SPTE_MMU_PAGE_REFCOUNTED;
+}
+
static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte,
int level)
{
@@ -462,7 +472,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
const struct kvm_memory_slot *slot,
unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
u64 old_spte, bool prefetch, bool can_unsync,
- bool host_writable, u64 *new_spte);
+ bool host_writable, bool is_refcounted, u64 *new_spte);
u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
union kvm_mmu_page_role role, int index);
u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 512163d52194..a9b1b14d2e26 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -474,6 +474,7 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
bool was_leaf = was_present && is_last_spte(old_spte, level);
bool is_leaf = is_present && is_last_spte(new_spte, level);
bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+ bool is_refcounted = is_refcounted_page_pte(old_spte);
WARN_ON(level > PT64_ROOT_MAX_LEVEL);
WARN_ON(level < PG_LEVEL_4K);
@@ -538,9 +539,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
if (is_leaf != was_leaf)
kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
- if (was_leaf && is_dirty_spte(old_spte) &&
+ if (was_leaf && is_dirty_spte(old_spte) && is_refcounted &&
(!is_present || !is_dirty_spte(new_spte) || pfn_changed))
- kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
/*
* Recursively handle child PTs if the change removed a subtree from
@@ -552,9 +553,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
(is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
- if (was_leaf && is_accessed_spte(old_spte) &&
+ if (was_leaf && is_accessed_spte(old_spte) && is_refcounted &&
(!is_present || !is_accessed_spte(new_spte) || pfn_changed))
- kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+ kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
}
/*
@@ -988,8 +989,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
else
wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
- fault->pfn, iter->old_spte, fault->prefetch, true,
- fault->map_writable, &new_spte);
+ fault->pfn, iter->old_spte, fault->prefetch, true,
+ fault->map_writable, fault->is_refcounted_page,
+ &new_spte);
if (new_spte == iter->old_spte)
ret = RET_PF_SPURIOUS;
@@ -1205,8 +1207,9 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
* Capture the dirty status of the page, so that it doesn't get
* lost when the SPTE is marked for access tracking.
*/
- if (is_writable_pte(iter->old_spte))
- kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
+ if (is_writable_pte(iter->old_spte) &&
+ is_refcounted_page_pte(iter->old_spte))
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter->old_spte)));
new_spte = mark_spte_for_access_track(iter->old_spte);
iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
@@ -1626,7 +1629,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
iter.old_spte,
iter.old_spte & ~dbit);
- kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
+ if (is_refcounted_page_pte(iter.old_spte))
+ kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter.old_spte)));
}
rcu_read_unlock();
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 713fc2d91f95..292701339198 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1157,6 +1157,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
void kvm_release_page_clean(struct page *page);
void kvm_release_page_dirty(struct page *page);
+void kvm_set_page_accessed(struct page *page);
+void kvm_set_page_dirty(struct page *page);
+
struct kvm_follow_pfn {
const struct kvm_memory_slot *slot;
gfn_t gfn;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 963b96cd8ff9..fa1848c6c84f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2949,17 +2949,19 @@ static bool kvm_is_ad_tracked_page(struct page *page)
return !PageReserved(page);
}
-static void kvm_set_page_dirty(struct page *page)
+void kvm_set_page_dirty(struct page *page)
{
if (kvm_is_ad_tracked_page(page))
SetPageDirty(page);
}
+EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
-static void kvm_set_page_accessed(struct page *page)
+void kvm_set_page_accessed(struct page *page)
{
if (kvm_is_ad_tracked_page(page))
mark_page_accessed(page);
}
+EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
void kvm_release_page_clean(struct page *page)
{
--
2.42.0.rc1.204.g551eb34607-goog
* Re: [PATCH v8 5/8] KVM: x86/mmu: Don't pass FOLL_GET to __kvm_follow_pfn
2023-08-24 8:04 ` [PATCH v8 5/8] KVM: x86/mmu: Don't pass FOLL_GET " David Stevens
@ 2023-08-24 9:13 ` Mika Penttilä
0 siblings, 0 replies; 10+ messages in thread
From: Mika Penttilä @ 2023-08-24 9:13 UTC (permalink / raw)
To: David Stevens, Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel
On 8/24/23 11:04, David Stevens wrote:
> From: David Stevens <stevensd@chromium.org>
>
> Stop passing FOLL_GET to __kvm_follow_pfn. This allows the host to map
> memory into the guest that is backed by un-refcounted struct pages - for
> example, the tail pages of higher order non-compound pages allocated by
> the amdgpu driver via ttm_pool_alloc_page.
>
> The bulk of this change is tracking the is_refcounted_page flag so that
> non-refcounted pages don't trigger page_count() == 0 warnings. This is
> done by storing the flag in an unused bit in the sptes. This only bit is
> not available in PAE SPTEs, so FOLL_GET is only omitted for TDP on
> x86-64.
>
> Signed-off-by: David Stevens <stevensd@chromium.org>
> ---
> arch/x86/kvm/mmu/mmu.c | 55 +++++++++++++++++++++++----------
> arch/x86/kvm/mmu/mmu_internal.h | 1 +
> arch/x86/kvm/mmu/paging_tmpl.h | 8 +++--
> arch/x86/kvm/mmu/spte.c | 4 ++-
> arch/x86/kvm/mmu/spte.h | 12 ++++++-
> arch/x86/kvm/mmu/tdp_mmu.c | 22 +++++++------
> include/linux/kvm_host.h | 3 ++
> virt/kvm/kvm_main.c | 6 ++--
> 8 files changed, 79 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index dabae67f198b..4f5d33e95c6e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -553,12 +553,14 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
>
> if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
> flush = true;
> - kvm_set_pfn_accessed(spte_to_pfn(old_spte));
> + if (is_refcounted_page_pte(old_spte))
> + kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
> }
>
> if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
> flush = true;
> - kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> + if (is_refcounted_page_pte(old_spte))
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
> }
>
> return flush;
> @@ -596,14 +598,18 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
> * before they are reclaimed. Sanity check that, if the pfn is backed
> * by a refcounted page, the refcount is elevated.
> */
> - page = kvm_pfn_to_refcounted_page(pfn);
> - WARN_ON(page && !page_count(page));
> + if (is_refcounted_page_pte(old_spte)) {
> + page = kvm_pfn_to_refcounted_page(pfn);
> + WARN_ON(!page || !page_count(page));
> + }
>
> - if (is_accessed_spte(old_spte))
> - kvm_set_pfn_accessed(pfn);
> + if (is_refcounted_page_pte(old_spte)) {
> + if (is_accessed_spte(old_spte))
> + kvm_set_page_accessed(pfn_to_page(pfn));
>
> - if (is_dirty_spte(old_spte))
> - kvm_set_pfn_dirty(pfn);
> + if (is_dirty_spte(old_spte))
> + kvm_set_page_dirty(pfn_to_page(pfn));
> + }
>
> return old_spte;
> }
> @@ -639,8 +645,8 @@ static bool mmu_spte_age(u64 *sptep)
> * Capture the dirty status of the page, so that it doesn't get
> * lost when the SPTE is marked for access tracking.
> */
> - if (is_writable_pte(spte))
> - kvm_set_pfn_dirty(spte_to_pfn(spte));
> + if (is_writable_pte(spte) && is_refcounted_page_pte(spte))
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(spte)));
>
> spte = mark_spte_for_access_track(spte);
> mmu_spte_update_no_track(sptep, spte);
> @@ -1278,8 +1284,8 @@ static bool spte_wrprot_for_clear_dirty(u64 *sptep)
> {
> bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
> (unsigned long *)sptep);
> - if (was_writable && !spte_ad_enabled(*sptep))
> - kvm_set_pfn_dirty(spte_to_pfn(*sptep));
> + if (was_writable && !spte_ad_enabled(*sptep) && is_refcounted_page_pte(*sptep))
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(*sptep)));
>
> return was_writable;
> }
> @@ -2937,6 +2943,11 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
> bool host_writable = !fault || fault->map_writable;
> bool prefetch = !fault || fault->prefetch;
> bool write_fault = fault && fault->write;
> + /*
> + * Prefetching uses gfn_to_page_many_atomic, which never gets
> + * non-refcounted pages.
> + */
> + bool is_refcounted = !fault || fault->is_refcounted_page;
>
> pgprintk("%s: spte %llx write_fault %d gfn %llx\n", __func__,
> *sptep, write_fault, gfn);
> @@ -2969,7 +2980,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
> }
>
> wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
> - true, host_writable, &spte);
> + true, host_writable, is_refcounted, &spte);
>
> if (*sptep == spte) {
> ret = RET_PF_SPURIOUS;
> @@ -4296,11 +4307,19 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> {
> struct kvm_memory_slot *slot = fault->slot;
> + /*
> + * We only allow non-refcounted pages if we can track whether or not
> + * pages are refcounted via an SPTE bit. This bit is not available
> + * in PAE SPTEs, so pass FOLL_GET if we may have to deal with those.
> + */
> + unsigned int get_flag = (tdp_enabled && IS_ENABLED(CONFIG_X86_64)) ?
> + 0 : FOLL_GET;
> struct kvm_follow_pfn foll = {
> .slot = slot,
> .gfn = fault->gfn,
> - .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
> + .flags = (fault->write ? FOLL_WRITE : 0) | get_flag,
> .try_map_writable = true,
> + .guarded_by_mmu_notifier = true,
> };
>
> /*
> @@ -4317,6 +4336,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> fault->slot = NULL;
> fault->pfn = KVM_PFN_NOSLOT;
> fault->map_writable = false;
> + fault->is_refcounted_page = false;
> return RET_PF_CONTINUE;
> }
> /*
> @@ -4372,6 +4392,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> success:
> fault->hva = foll.hva;
> fault->map_writable = foll.writable;
> + fault->is_refcounted_page = foll.is_refcounted_page;
> return RET_PF_CONTINUE;
> }
>
> @@ -4456,8 +4477,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> r = direct_map(vcpu, fault);
>
> out_unlock:
> + if (fault->is_refcounted_page)
> + kvm_set_page_accessed(pfn_to_page(fault->pfn));
> write_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(fault->pfn);
Should we check FOLL_GET is actually omitted to skip the kvm_release_pfn_clean() ?
> return r;
> }
>
> @@ -4534,8 +4556,9 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
> r = kvm_tdp_mmu_map(vcpu, fault);
>
> out_unlock:
> + if (fault->is_refcounted_page)
> + kvm_set_page_accessed(pfn_to_page(fault->pfn));
> read_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(fault->pfn);
> return r;
> }
> #endif
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index d39af5639ce9..55790085884f 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -240,6 +240,7 @@ struct kvm_page_fault {
> kvm_pfn_t pfn;
> hva_t hva;
> bool map_writable;
> + bool is_refcounted_page;
>
> /*
> * Indicates the guest is trying to write a gfn that contains one or
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 0662e0278e70..4ffcebc0c3ce 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -828,8 +828,9 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> r = FNAME(fetch)(vcpu, fault, &walker);
>
> out_unlock:
> + if (fault->is_refcounted_page)
> + kvm_set_page_accessed(pfn_to_page(fault->pfn));
> write_unlock(&vcpu->kvm->mmu_lock);
> - kvm_release_pfn_clean(fault->pfn);
> return r;
> }
>
> @@ -883,7 +884,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> */
> static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
> {
> - bool host_writable;
> + bool host_writable, is_refcounted;
> gpa_t first_pte_gpa;
> u64 *sptep, spte;
> struct kvm_memory_slot *slot;
> @@ -940,10 +941,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
> sptep = &sp->spt[i];
> spte = *sptep;
> host_writable = spte & shadow_host_writable_mask;
> + is_refcounted = spte & SPTE_MMU_PAGE_REFCOUNTED;
> slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
> make_spte(vcpu, sp, slot, pte_access, gfn,
> spte_to_pfn(spte), spte, true, false,
> - host_writable, &spte);
> + host_writable, is_refcounted, &spte);
>
> return mmu_spte_update(sptep, spte);
> }
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index cf2c6426a6fc..5b13d9143d56 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -138,7 +138,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> const struct kvm_memory_slot *slot,
> unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
> u64 old_spte, bool prefetch, bool can_unsync,
> - bool host_writable, u64 *new_spte)
> + bool host_writable, bool is_refcounted, u64 *new_spte)
> {
> int level = sp->role.level;
> u64 spte = SPTE_MMU_PRESENT_MASK;
> @@ -188,6 +188,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>
> if (level > PG_LEVEL_4K)
> spte |= PT_PAGE_SIZE_MASK;
> + if (is_refcounted)
> + spte |= SPTE_MMU_PAGE_REFCOUNTED;
>
> if (shadow_memtype_mask)
> spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 1279db2eab44..be93dd061ae3 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -95,6 +95,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
> /* Defined only to keep the above static asserts readable. */
> #undef SHADOW_ACC_TRACK_SAVED_MASK
>
> +/*
> + * Indicates that the SPTE refers to a page with a valid refcount.
> + */
> +#define SPTE_MMU_PAGE_REFCOUNTED BIT_ULL(59)
> +
> /*
> * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
> * the memslots generation and is derived as follows:
> @@ -332,6 +337,11 @@ static inline bool is_dirty_spte(u64 spte)
> return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
> }
>
> +static inline bool is_refcounted_page_pte(u64 spte)
> +{
> + return spte & SPTE_MMU_PAGE_REFCOUNTED;
> +}
> +
> static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte,
> int level)
> {
> @@ -462,7 +472,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> const struct kvm_memory_slot *slot,
> unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
> u64 old_spte, bool prefetch, bool can_unsync,
> - bool host_writable, u64 *new_spte);
> + bool host_writable, bool is_refcounted, u64 *new_spte);
> u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
> union kvm_mmu_page_role role, int index);
> u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 512163d52194..a9b1b14d2e26 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -474,6 +474,7 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> bool was_leaf = was_present && is_last_spte(old_spte, level);
> bool is_leaf = is_present && is_last_spte(new_spte, level);
> bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
> + bool is_refcounted = is_refcounted_page_pte(old_spte);
>
> WARN_ON(level > PT64_ROOT_MAX_LEVEL);
> WARN_ON(level < PG_LEVEL_4K);
> @@ -538,9 +539,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> if (is_leaf != was_leaf)
> kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
>
> - if (was_leaf && is_dirty_spte(old_spte) &&
> + if (was_leaf && is_dirty_spte(old_spte) && is_refcounted &&
> (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
> - kvm_set_pfn_dirty(spte_to_pfn(old_spte));
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(old_spte)));
>
> /*
> * Recursively handle child PTs if the change removed a subtree from
> @@ -552,9 +553,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
> handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
>
> - if (was_leaf && is_accessed_spte(old_spte) &&
> + if (was_leaf && is_accessed_spte(old_spte) && is_refcounted &&
> (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
> - kvm_set_pfn_accessed(spte_to_pfn(old_spte));
> + kvm_set_page_accessed(pfn_to_page(spte_to_pfn(old_spte)));
> }
>
> /*
> @@ -988,8 +989,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
> else
> wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
> - fault->pfn, iter->old_spte, fault->prefetch, true,
> - fault->map_writable, &new_spte);
> + fault->pfn, iter->old_spte, fault->prefetch, true,
> + fault->map_writable, fault->is_refcounted_page,
> + &new_spte);
>
> if (new_spte == iter->old_spte)
> ret = RET_PF_SPURIOUS;
> @@ -1205,8 +1207,9 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
> * Capture the dirty status of the page, so that it doesn't get
> * lost when the SPTE is marked for access tracking.
> */
> - if (is_writable_pte(iter->old_spte))
> - kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
> + if (is_writable_pte(iter->old_spte) &&
> + is_refcounted_page_pte(iter->old_spte))
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter->old_spte)));
>
> new_spte = mark_spte_for_access_track(iter->old_spte);
> iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
> @@ -1626,7 +1629,8 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
> trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
> iter.old_spte,
> iter.old_spte & ~dbit);
> - kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
> + if (is_refcounted_page_pte(iter.old_spte))
> + kvm_set_page_dirty(pfn_to_page(spte_to_pfn(iter.old_spte)));
> }
>
> rcu_read_unlock();
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 713fc2d91f95..292701339198 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1157,6 +1157,9 @@ unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
> void kvm_release_page_clean(struct page *page);
> void kvm_release_page_dirty(struct page *page);
>
> +void kvm_set_page_accessed(struct page *page);
> +void kvm_set_page_dirty(struct page *page);
> +
> struct kvm_follow_pfn {
> const struct kvm_memory_slot *slot;
> gfn_t gfn;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 963b96cd8ff9..fa1848c6c84f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2949,17 +2949,19 @@ static bool kvm_is_ad_tracked_page(struct page *page)
> return !PageReserved(page);
> }
>
> -static void kvm_set_page_dirty(struct page *page)
> +void kvm_set_page_dirty(struct page *page)
> {
> if (kvm_is_ad_tracked_page(page))
> SetPageDirty(page);
> }
> +EXPORT_SYMBOL_GPL(kvm_set_page_dirty);
>
> -static void kvm_set_page_accessed(struct page *page)
> +void kvm_set_page_accessed(struct page *page)
> {
> if (kvm_is_ad_tracked_page(page))
> mark_page_accessed(page);
> }
> +EXPORT_SYMBOL_GPL(kvm_set_page_accessed);
>
> void kvm_release_page_clean(struct page *page)
> {
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v8 6/8] KVM: arm64: Migrate to __kvm_follow_pfn
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
` (4 preceding siblings ...)
2023-08-24 8:04 ` [PATCH v8 5/8] KVM: x86/mmu: Don't pass FOLL_GET " David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 7/8] KVM: PPC: " David Stevens
2023-08-24 8:04 ` [PATCH v8 8/8] KVM: mmu: remove __gfn_to_pfn_memslot David Stevens
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.
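For readers skimming the diff, the new call-site shape in user_mem_abort() is roughly the following (field and flag names are taken from the hunks below; this is only an illustrative sketch, not a literal hunk):

	struct kvm_follow_pfn foll = {
		.slot = memslot,
		.flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
		.try_map_writable = true,
	};

	foll.gfn = fault_ipa >> PAGE_SHIFT;
	pfn = __kvm_follow_pfn(&foll);
	/* foll.writable and foll.gfn replace the old 'writable' and 'gfn' locals. */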
Signed-off-by: David Stevens <stevensd@chromium.org>
---
arch/arm64/kvm/mmu.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d3b4feed460c..e4abc2a57d7d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1334,7 +1334,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
unsigned long fault_status)
{
int ret = 0;
- bool write_fault, writable, force_pte = false;
+ bool write_fault = kvm_is_write_fault(vcpu);
+ bool force_pte = false;
bool exec_fault, mte_allowed;
bool device = false;
unsigned long mmu_seq;
@@ -1342,16 +1343,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct vm_area_struct *vma;
short vma_shift;
- gfn_t gfn;
kvm_pfn_t pfn;
bool logging_active = memslot_is_logging(memslot);
unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
long vma_pagesize, fault_granule;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt;
+ struct kvm_follow_pfn foll = {
+ .slot = memslot,
+ .flags = FOLL_GET | (write_fault ? FOLL_WRITE : 0),
+ .try_map_writable = true,
+ };
fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
- write_fault = kvm_is_write_fault(vcpu);
exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
VM_BUG_ON(write_fault && exec_fault);
@@ -1425,7 +1429,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
fault_ipa &= ~(vma_pagesize - 1);
- gfn = fault_ipa >> PAGE_SHIFT;
+ foll.gfn = fault_ipa >> PAGE_SHIFT;
mte_allowed = kvm_vma_mte_allowed(vma);
/* Don't use the VMA after the unlock -- it may have vanished */
@@ -1433,7 +1437,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
/*
* Read mmu_invalidate_seq so that KVM can detect if the results of
- * vma_lookup() or __gfn_to_pfn_memslot() become stale prior to
+ * vma_lookup() or __kvm_follow_pfn() become stale prior to
* acquiring kvm->mmu_lock.
*
* Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
@@ -1442,8 +1446,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
mmu_seq = vcpu->kvm->mmu_invalidate_seq;
mmap_read_unlock(current->mm);
- pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
- write_fault, &writable, NULL);
+ pfn = __kvm_follow_pfn(&foll);
if (pfn == KVM_PFN_ERR_HWPOISON) {
kvm_send_hwpoison_signal(hva, vma_shift);
return 0;
@@ -1468,7 +1471,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* Only actually map the page as writable if this was a write
* fault.
*/
- writable = false;
+ foll.writable = false;
}
if (exec_fault && device)
@@ -1508,7 +1511,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
}
- if (writable)
+ if (foll.writable)
prot |= KVM_PGTABLE_PROT_W;
if (exec_fault)
@@ -1534,9 +1537,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
KVM_PGTABLE_WALK_SHARED);
/* Mark the page dirty only if the fault is handled successfully */
- if (writable && !ret) {
+ if (foll.writable && !ret) {
kvm_set_pfn_dirty(pfn);
- mark_page_dirty_in_slot(kvm, memslot, gfn);
+ mark_page_dirty_in_slot(kvm, memslot, foll.gfn);
}
out_unlock:
--
2.42.0.rc1.204.g551eb34607-goog
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v8 7/8] KVM: PPC: Migrate to __kvm_follow_pfn
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
` (5 preceding siblings ...)
2023-08-24 8:04 ` [PATCH v8 6/8] KVM: arm64: Migrate " David Stevens
@ 2023-08-24 8:04 ` David Stevens
2023-08-24 8:04 ` [PATCH v8 8/8] KVM: mmu: remove __gfn_to_pfn_memslot David Stevens
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn. As part of the
refactoring, remove the redundant calls to get_user_page_fast_only,
since the check for !async && !atomic was removed from the KVM generic
code in commit b9b33da2aa74. Also, remove the kvm_ro parameter because the KVM
generic code handles RO memslots.
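Concretely, both the HPT and radix fault paths now build a single request and let the generic code handle the write-permission probing that the open-coded get_user_page_fast_only() fast path used to do. A rough sketch of the replacement (mirroring the hunks below; local variable names are illustrative):

	struct kvm_follow_pfn foll = {
		.slot = memslot,
		.gfn = gfn,
		.flags = FOLL_GET | (writing ? FOLL_WRITE : 0),
		.try_map_writable = true,
	};

	pfn = __kvm_follow_pfn(&foll);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;
	write_ok = foll.writable;	/* write access upgraded by the generic code */
	hva = foll.hva;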
Signed-off-by: David Stevens <stevensd@chromium.org>
---
arch/powerpc/include/asm/kvm_book3s.h | 2 +-
arch/powerpc/kvm/book3s_64_mmu_hv.c | 38 +++++++++-----------
arch/powerpc/kvm/book3s_64_mmu_radix.c | 50 +++++++++++---------------
arch/powerpc/kvm/book3s_hv_nested.c | 4 +--
4 files changed, 38 insertions(+), 56 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index bbf5e2c5fe09..bf48c511e700 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -202,7 +202,7 @@ extern bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested,
extern int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
unsigned long gpa,
struct kvm_memory_slot *memslot,
- bool writing, bool kvm_ro,
+ bool writing,
pte_t *inserted_pte, unsigned int *levelp);
extern int kvmppc_init_vm_radix(struct kvm *kvm);
extern void kvmppc_free_radix(struct kvm *kvm);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 7f765d5ad436..4688046626af 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -523,6 +523,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
unsigned long rcbits;
long mmio_update;
pte_t pte, *ptep;
+ struct kvm_follow_pfn foll = {
+ .try_map_writable = true,
+ };
if (kvm_is_radix(kvm))
return kvmppc_book3s_radix_page_fault(vcpu, ea, dsisr);
@@ -599,29 +602,20 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
page = NULL;
writing = (dsisr & DSISR_ISSTORE) != 0;
/* If writing != 0, then the HPTE must allow writing, if we get here */
- write_ok = writing;
- hva = gfn_to_hva_memslot(memslot, gfn);
- /*
- * Do a fast check first, since __gfn_to_pfn_memslot doesn't
- * do it with !atomic && !async, which is how we call it.
- * We always ask for write permission since the common case
- * is that the page is writable.
- */
- if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
- write_ok = true;
- } else {
- /* Call KVM generic code to do the slow-path check */
- pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
- writing, &write_ok, NULL);
- if (is_error_noslot_pfn(pfn))
- return -EFAULT;
- page = NULL;
- if (pfn_valid(pfn)) {
- page = pfn_to_page(pfn);
- if (PageReserved(page))
- page = NULL;
- }
+ foll.slot = memslot;
+ foll.gfn = gfn;
+ foll.flags = FOLL_GET | (writing ? FOLL_WRITE : 0);
+ pfn = __kvm_follow_pfn(&foll);
+ if (is_error_noslot_pfn(pfn))
+ return -EFAULT;
+ page = NULL;
+ write_ok = foll.writable;
+ hva = foll.hva;
+ if (pfn_valid(pfn)) {
+ page = pfn_to_page(pfn);
+ if (PageReserved(page))
+ page = NULL;
}
/*
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 572707858d65..498f89128c3a 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -815,47 +815,39 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, bool nested, bool writing,
int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
unsigned long gpa,
struct kvm_memory_slot *memslot,
- bool writing, bool kvm_ro,
+ bool writing,
pte_t *inserted_pte, unsigned int *levelp)
{
struct kvm *kvm = vcpu->kvm;
struct page *page = NULL;
unsigned long mmu_seq;
- unsigned long hva, gfn = gpa >> PAGE_SHIFT;
- bool upgrade_write = false;
- bool *upgrade_p = &upgrade_write;
+ unsigned long hva, pfn, gfn = gpa >> PAGE_SHIFT;
+ bool upgrade_write;
pte_t pte, *ptep;
unsigned int shift, level;
int ret;
bool large_enable;
+ struct kvm_follow_pfn foll = {
+ .slot = memslot,
+ .gfn = gfn,
+ .flags = FOLL_GET | (writing ? FOLL_WRITE : 0),
+ .try_map_writable = true,
+ };
/* used to check for invalidations in progress */
mmu_seq = kvm->mmu_invalidate_seq;
smp_rmb();
- /*
- * Do a fast check first, since __gfn_to_pfn_memslot doesn't
- * do it with !atomic && !async, which is how we call it.
- * We always ask for write permission since the common case
- * is that the page is writable.
- */
- hva = gfn_to_hva_memslot(memslot, gfn);
- if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
- upgrade_write = true;
- } else {
- unsigned long pfn;
-
- /* Call KVM generic code to do the slow-path check */
- pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL,
- writing, upgrade_p, NULL);
- if (is_error_noslot_pfn(pfn))
- return -EFAULT;
- page = NULL;
- if (pfn_valid(pfn)) {
- page = pfn_to_page(pfn);
- if (PageReserved(page))
- page = NULL;
- }
+ pfn = __kvm_follow_pfn(&foll);
+ if (is_error_noslot_pfn(pfn))
+ return -EFAULT;
+ page = NULL;
+ hva = foll.hva;
+ upgrade_write = foll.writable;
+ if (pfn_valid(pfn)) {
+ page = pfn_to_page(pfn);
+ if (PageReserved(page))
+ page = NULL;
}
/*
@@ -944,7 +936,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
struct kvm_memory_slot *memslot;
long ret;
bool writing = !!(dsisr & DSISR_ISSTORE);
- bool kvm_ro = false;
/* Check for unusual errors */
if (dsisr & DSISR_UNSUPP_MMU) {
@@ -997,7 +988,6 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
ea, DSISR_ISSTORE | DSISR_PROTFAULT);
return RESUME_GUEST;
}
- kvm_ro = true;
}
/* Failed to set the reference/change bits */
@@ -1015,7 +1005,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
/* Try to insert a pte */
ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot, writing,
- kvm_ro, NULL, NULL);
+ NULL, NULL);
if (ret == 0 || ret == -EAGAIN)
ret = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 377d0b4a05ee..6d531051df04 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1497,7 +1497,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
unsigned long n_gpa, gpa, gfn, perm = 0UL;
unsigned int shift, l1_shift, level;
bool writing = !!(dsisr & DSISR_ISSTORE);
- bool kvm_ro = false;
long int ret;
if (!gp->l1_gr_to_hr) {
@@ -1577,7 +1576,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
ea, DSISR_ISSTORE | DSISR_PROTFAULT);
return RESUME_GUEST;
}
- kvm_ro = true;
}
/* 2. Find the host pte for this L1 guest real address */
@@ -1599,7 +1597,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
/* No suitable pte found -> try to insert a mapping */
ret = kvmppc_book3s_instantiate_page(vcpu, gpa, memslot,
- writing, kvm_ro, &pte, &level);
+ writing, &pte, &level);
if (ret == -EAGAIN)
return RESUME_GUEST;
else if (ret)
--
2.42.0.rc1.204.g551eb34607-goog
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v8 8/8] KVM: mmu: remove __gfn_to_pfn_memslot
2023-08-24 8:04 [PATCH v8 0/8] KVM: allow mapping non-refcounted pages David Stevens
` (6 preceding siblings ...)
2023-08-24 8:04 ` [PATCH v8 7/8] KVM: PPC: " David Stevens
@ 2023-08-24 8:04 ` David Stevens
7 siblings, 0 replies; 10+ messages in thread
From: David Stevens @ 2023-08-24 8:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Marc Zyngier, linux-kernel, Peter Xu, Yu Zhang,
Isaku Yamahata, kvmarm, linuxppc-dev, linux-arm-kernel,
David Stevens
From: David Stevens <stevensd@chromium.org>
All callers have been migrated to __kvm_follow_pfn.
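For any remaining out-of-tree callers, the old arguments map onto struct kvm_follow_pfn as sketched below (this simply mirrors the wrapper being deleted; the 'no_wait' and 'interruptible' locals are illustrative stand-ins for the old parameters):

	kvm_pfn_t pfn;
	struct kvm_follow_pfn foll = {
		.slot = slot,
		.gfn = gfn,
		.flags = FOLL_GET,
		.atomic = atomic,
		.try_map_writable = true,	/* was: passing a non-NULL 'writable' */
	};

	if (write_fault)
		foll.flags |= FOLL_WRITE;
	if (no_wait)				/* was: passing a non-NULL 'async' */
		foll.flags |= FOLL_NOWAIT;
	if (interruptible)
		foll.flags |= FOLL_INTERRUPTIBLE;

	pfn = __kvm_follow_pfn(&foll);
	if (pfn == KVM_PFN_ERR_NEEDS_IO)	/* was: '*async = true' */
		pfn = KVM_PFN_ERR_FAULT;
	/* foll.hva and foll.writable replace the 'hva' and 'writable' out-params. */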
Signed-off-by: David Stevens <stevensd@chromium.org>
---
virt/kvm/kvm_main.c | 33 ---------------------------------
1 file changed, 33 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fa1848c6c84f..aebaf4a7340e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2728,39 +2728,6 @@ kvm_pfn_t __kvm_follow_pfn(struct kvm_follow_pfn *foll)
}
EXPORT_SYMBOL_GPL(__kvm_follow_pfn);
-kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
- bool atomic, bool interruptible, bool *async,
- bool write_fault, bool *writable, hva_t *hva)
-{
- kvm_pfn_t pfn;
- struct kvm_follow_pfn foll = {
- .slot = slot,
- .gfn = gfn,
- .flags = FOLL_GET,
- .atomic = atomic,
- .try_map_writable = !!writable,
- };
-
- if (write_fault)
- foll.flags |= FOLL_WRITE;
- if (async)
- foll.flags |= FOLL_NOWAIT;
- if (interruptible)
- foll.flags |= FOLL_INTERRUPTIBLE;
-
- pfn = __kvm_follow_pfn(&foll);
- if (pfn == KVM_PFN_ERR_NEEDS_IO) {
- *async = true;
- pfn = KVM_PFN_ERR_FAULT;
- }
- if (hva)
- *hva = foll.hva;
- if (writable)
- *writable = foll.writable;
- return pfn;
-}
-EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
-
kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
bool *writable)
{
--
2.42.0.rc1.204.g551eb34607-goog
^ permalink raw reply related [flat|nested] 10+ messages in thread