From: Nikunj A Dadhania <nikunj@amd.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Brijesh Singh <brijesh.singh@amd.com>,
Tom Lendacky <thomas.lendacky@amd.com>,
Peter Gonda <pgonda@google.com>, <kvm@vger.kernel.org>,
<linux-kernel@vger.kernel.org>,
Nikunj A Dadhania <nikunj@amd.com>
Subject: [RFC PATCH 6/6] KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data()
Date: Tue, 18 Jan 2022 16:36:21 +0530 [thread overview]
Message-ID: <20220118110621.62462-7-nikunj@amd.com> (raw)
In-Reply-To: <20220118110621.62462-1-nikunj@amd.com>
From: Sean Christopherson <sean.j.christopherson@intel.com>
Pin the memory for the data being passed to launch_update_data()
because it gets encrypted before the guest is first run; the pages
must not be moved afterwards, as moving them would corrupt the
encrypted contents.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
[ * Changed hva_to_gpa() to take an extra argument and return gpa_t.
* Updated sev_pin_memory_in_mmu() error handling.
* As pinning/unpinning pages is handled within the MMU, removed
{get,put}_user(). ]
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
---
arch/x86/kvm/svm/sev.c | 122 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 119 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 14aeccfc500b..1ae714e83a3c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -22,6 +22,7 @@
#include <asm/trapnr.h>
#include <asm/fpu/xcr.h>
+#include "mmu.h"
#include "x86.h"
#include "svm.h"
#include "svm_ops.h"
@@ -490,6 +491,110 @@ static unsigned long get_num_contig_pages(unsigned long idx,
return pages;
}
+#define SEV_PFERR_RO (PFERR_USER_MASK)
+#define SEV_PFERR_RW (PFERR_WRITE_MASK | PFERR_USER_MASK)
+
+static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
+ unsigned long hva)
+{
+ struct kvm_memslots *slots = kvm_memslots(kvm);
+ struct kvm_memory_slot *memslot;
+ int bkt;
+
+ kvm_for_each_memslot(memslot, bkt, slots) {
+ if (hva >= memslot->userspace_addr &&
+ hva < memslot->userspace_addr +
+ (memslot->npages << PAGE_SHIFT))
+ return memslot;
+ }
+
+ return NULL;
+}
+
+static gpa_t hva_to_gpa(struct kvm *kvm, unsigned long hva, bool *ro)
+{
+ struct kvm_memory_slot *memslot;
+ gpa_t gpa_offset;
+
+ memslot = hva_to_memslot(kvm, hva);
+ if (!memslot)
+ return UNMAPPED_GVA;
+
+ *ro = !!(memslot->flags & KVM_MEM_READONLY);
+ gpa_offset = hva - memslot->userspace_addr;
+ return ((memslot->base_gfn << PAGE_SHIFT) + gpa_offset);
+}
+
+static struct page **sev_pin_memory_in_mmu(struct kvm *kvm, unsigned long addr,
+ unsigned long size,
+ unsigned long *npages)
+{
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ struct kvm_vcpu *vcpu;
+ struct page **pages;
+ unsigned long i;
+ u32 error_code;
+ kvm_pfn_t pfn;
+ int idx, ret = 0;
+ gpa_t gpa;
+ bool ro;
+
+ pages = sev_alloc_pages(sev, addr, size, npages);
+ if (IS_ERR(pages))
+ return pages;
+
+ vcpu = kvm_get_vcpu(kvm, 0);
+ if (mutex_lock_killable(&vcpu->mutex)) {
+ kvfree(pages);
+ return ERR_PTR(-EINTR);
+ }
+
+ vcpu_load(vcpu);
+ idx = srcu_read_lock(&kvm->srcu);
+
+ kvm_mmu_load(vcpu);
+
+ for (i = 0; i < *npages; i++, addr += PAGE_SIZE) {
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+
+ if (need_resched())
+ cond_resched();
+
+ gpa = hva_to_gpa(kvm, addr, &ro);
+ if (gpa == UNMAPPED_GVA) {
+ ret = -EFAULT;
+ break;
+ }
+
+ error_code = ro ? SEV_PFERR_RO : SEV_PFERR_RW;
+
+ /*
+ * Fault the page in; sev_pin_page() will handle the
+ * pinning.
+ */
+ pfn = kvm_mmu_map_tdp_page(vcpu, gpa, error_code, PG_LEVEL_4K);
+ if (is_error_noslot_pfn(pfn)) {
+ ret = -EFAULT;
+ break;
+ }
+ pages[i] = pfn_to_page(pfn);
+ }
+
+ kvm_mmu_unload(vcpu);
+ srcu_read_unlock(&kvm->srcu, idx);
+ vcpu_put(vcpu);
+ mutex_unlock(&vcpu->mutex);
+
+ if (!ret)
+ return pages;
+
+ kvfree(pages);
+ return ERR_PTR(ret);
+}
+
static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
@@ -510,15 +615,21 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
vaddr_end = vaddr + size;
/* Lock the user memory. */
- inpages = sev_pin_memory(kvm, vaddr, size, &npages, 1);
+ if (atomic_read(&kvm->online_vcpus))
+ inpages = sev_pin_memory_in_mmu(kvm, vaddr, size, &npages);
+ else
+ inpages = sev_pin_memory(kvm, vaddr, size, &npages, 1);
if (IS_ERR(inpages))
return PTR_ERR(inpages);
/*
* Flush (on non-coherent CPUs) before LAUNCH_UPDATE encrypts pages in
* place; the cache may contain the data that was written unencrypted.
+ * Flushing is automatically handled if the pages can be pinned in the
+ * MMU.
*/
- sev_clflush_pages(inpages, npages);
+ if (!atomic_read(&kvm->online_vcpus))
+ sev_clflush_pages(inpages, npages);
data.reserved = 0;
data.handle = sev->handle;
@@ -553,8 +664,13 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
set_page_dirty_lock(inpages[i]);
mark_page_accessed(inpages[i]);
}
+
/* unlock the user pages */
- sev_unpin_memory(kvm, inpages, npages);
+ if (atomic_read(&kvm->online_vcpus))
+ kvfree(inpages);
+ else
+ sev_unpin_memory(kvm, inpages, npages);
+
return ret;
}
--
2.32.0
Thread overview: 33+ messages
2022-01-18 11:06 [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 1/6] KVM: x86/mmu: Add hook to pin PFNs on demand in MMU Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 2/6] KVM: SVM: Add pinning metadata in the arch memslot Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 3/6] KVM: SVM: Implement demand page pinning Nikunj A Dadhania
2022-01-25 16:47 ` Peter Gonda
2022-01-25 17:49 ` Nikunj A. Dadhania
2022-01-25 17:59 ` Peter Gonda
2022-01-27 16:29 ` Nikunj A. Dadhania
2022-01-26 10:46 ` David Hildenbrand
2022-01-28 6:57 ` Nikunj A. Dadhania
2022-01-28 8:27 ` David Hildenbrand
2022-01-28 11:04 ` Nikunj A. Dadhania
2022-01-28 11:08 ` David Hildenbrand
2022-01-31 11:56 ` David Hildenbrand
2022-01-31 12:18 ` Nikunj A. Dadhania
2022-01-31 12:41 ` David Hildenbrand
2022-03-06 19:48 ` Mingwei Zhang
2022-03-07 7:08 ` Nikunj A. Dadhania
2022-01-18 11:06 ` [RFC PATCH 4/6] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by SEV/TDX Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 5/6] KVM: SEV: Carve out routine for allocation of pages Nikunj A Dadhania
2022-01-18 11:06 ` Nikunj A Dadhania [this message]
2022-01-18 15:00 ` [RFC PATCH 6/6] KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data() Maciej S. Szmigiero
2022-01-18 17:29 ` Maciej S. Szmigiero
2022-01-19 11:35 ` Nikunj A. Dadhania
2022-01-19 6:33 ` Nikunj A. Dadhania
2022-01-19 18:52 ` Maciej S. Szmigiero
2022-01-20 4:24 ` Nikunj A. Dadhania
2022-01-20 16:17 ` Peter Gonda
2022-01-21 4:08 ` Nikunj A. Dadhania
2022-01-21 16:00 ` Peter Gonda
2022-01-21 17:14 ` Nikunj A. Dadhania
2022-03-06 20:07 ` [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests Mingwei Zhang
2022-03-07 13:02 ` Nikunj A. Dadhania