public inbox for kvm@vger.kernel.org
From: Mingwei Zhang <mizhang@google.com>
To: Nikunj A Dadhania <nikunj@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Brijesh Singh <brijesh.singh@amd.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Peter Gonda <pgonda@google.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests
Date: Sun, 6 Mar 2022 20:07:14 +0000	[thread overview]
Message-ID: <YiUUcuEuWbQrPs2E@google.com> (raw)
In-Reply-To: <20220118110621.62462-1-nikunj@amd.com>

On Tue, Jan 18, 2022, Nikunj A Dadhania wrote:
> SEV guests require the guest's pages to be pinned in host physical
> memory because migration of encrypted pages is not supported. The
> memory encryption scheme uses the physical address of the memory
> being encrypted, so if guest pages are moved by the host, content
> decrypted in the guest would be incorrect, corrupting the guest's
> memory.
> 
> For SEV/SEV-ES guests, the hypervisor doesn't know which pages are
> encrypted or when the guest is done using those pages. The hypervisor
> should treat all guest pages as encrypted until the guest is
> destroyed.
"Hypervisor should treat all the guest pages as encrypted until they are
deallocated or the guest is destroyed".

Note: in general, the guest VM could ask the user-level VMM to free a
page either by freeing the memslot or by freeing the pages (munmap(2)).

> 
> Actual pinning management is handled by vendor code via new
> kvm_x86_ops hooks. The MMU calls into vendor code to pin pages on
> demand. Pinning metadata is stored in the architecture-specific
> memslot area, and guest pages are unpinned during the memslot
> freeing path.

"During the memslot freeing path and deallocation path"

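To make the demand-pinning flow in the quoted text concrete, here is a minimal toy model: pin metadata lives with the memslot, pages are pinned on first fault, and anything still pinned is released when the memslot is freed. All names here (slot_pin_info, demand_pin_pfn, free_slot) are illustrative, not the actual KVM identifiers, and the "pinning" is just a flag standing in for the real page-refcount operations.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified model of per-memslot demand pinning.
 * Not real KVM code; real code would pin/unpin actual pfns. */
struct slot_pin_info {
	size_t npages;
	unsigned char *pinned;	/* one flag per guest page in the slot */
	size_t pin_count;	/* how many pages are currently pinned */
};

static struct slot_pin_info *alloc_slot(size_t npages)
{
	struct slot_pin_info *s = malloc(sizeof(*s));
	s->npages = npages;
	s->pinned = calloc(npages, 1);
	s->pin_count = 0;
	return s;
}

/* Fault path: pin on demand, but only once per page. */
static void demand_pin_pfn(struct slot_pin_info *s, size_t idx)
{
	if (!s->pinned[idx]) {
		s->pinned[idx] = 1;	/* real code would pin the pfn here */
		s->pin_count++;
	}
}

/* Memslot-free path: unpin whatever is still pinned. */
static void free_slot(struct slot_pin_info *s)
{
	for (size_t i = 0; i < s->npages; i++)
		if (s->pinned[i])
			s->pin_count--;	/* real code would unpin here */
	free(s->pinned);
	free(s);
}
```

The key property of this scheme is that repeated faults on the same gfn do not double-pin, and the memslot teardown path can find every pinned page without consulting the page tables.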
> 
> This initially started with [1], where the idea was to track pinned
> pages by storing the pinning information in a software bit in the
> SPTE. That is not feasible for the following reason:
> 
> The pinned-SPTE information gets stored in the shadow pages (SP). The
> way the current MMU is designed, the full MMU context gets dropped
> multiple times, even when the CR0.WP bit gets flipped. Due to the
> dropping of the MMU context (aka roots), there is a huge amount of SP
> alloc/remove churn. Pinned information stored in the SP gets lost
> when the root, and subsequently the SPs at the child levels, are
> dropped. Without this information, making decisions about re-pinning
> or unpinning pages during guest shutdown will not be possible.
> 
> [1] https://patchwork.kernel.org/project/kvm/cover/20200731212323.21746-1-sean.j.christopherson@intel.com/ 
> 

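The failure mode described in the quoted text can be illustrated with a toy model: state stored in SPTEs vanishes when the roots are zapped and the page tables are rebuilt, while state stored with the memslot survives. SPTE_PINNED_BIT and the structures below are hypothetical stand-ins, not the bit assignments or data structures used by KVM.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative model (not real KVM code) of why pin state kept in
 * SPTEs is lost across a root zap, while per-memslot state is not. */
#define NPAGES 4
#define SPTE_PINNED_BIT (1ULL << 61)	/* a software-available bit */

static uint64_t sptes[NPAGES];		/* page-table-side state */
static unsigned char slot_pinned[NPAGES];	/* memslot-side state */

static void pin_page(size_t idx)
{
	sptes[idx] |= SPTE_PINNED_BIT;	/* approach [1]: mark the SPTE */
	slot_pinned[idx] = 1;		/* this series: mark the memslot */
}

/* e.g. a CR0.WP flip: roots dropped, all SPTEs rebuilt from scratch */
static void zap_root(void)
{
	memset(sptes, 0, sizeof(sptes));
}
```

After a zap, the SPTE software bit is gone and would have to be reconstructed (impossible without another source of truth), whereas the memslot-side flag still records which pages were pinned.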
Some general feedback: I really like this patch set, and I think
pinning memory in the kernel at fault time and storing the metadata in
the memslot is the right thing to do.

This basically solves all the problems triggered by the KVM-based API
that trusts the user-level VMM to do the memory pinning.

Thanks.
> Nikunj A Dadhania (4):
>   KVM: x86/mmu: Add hook to pin PFNs on demand in MMU
>   KVM: SVM: Add pinning metadata in the arch memslot
>   KVM: SVM: Implement demand page pinning
>   KVM: SEV: Carve out routine for allocation of pages
> 
> Sean Christopherson (2):
>   KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by SEV/TDX
>   KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data()
> 
>  arch/x86/include/asm/kvm-x86-ops.h |   3 +
>  arch/x86/include/asm/kvm_host.h    |   9 +
>  arch/x86/kvm/mmu.h                 |   3 +
>  arch/x86/kvm/mmu/mmu.c             |  41 +++
>  arch/x86/kvm/mmu/tdp_mmu.c         |   7 +
>  arch/x86/kvm/svm/sev.c             | 423 +++++++++++++++++++----------
>  arch/x86/kvm/svm/svm.c             |   4 +
>  arch/x86/kvm/svm/svm.h             |   9 +-
>  arch/x86/kvm/x86.c                 |  11 +-
>  9 files changed, 359 insertions(+), 151 deletions(-)
> 
> -- 
> 2.32.0
> 


Thread overview: 33+ messages
2022-01-18 11:06 [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 1/6] KVM: x86/mmu: Add hook to pin PFNs on demand in MMU Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 2/6] KVM: SVM: Add pinning metadata in the arch memslot Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 3/6] KVM: SVM: Implement demand page pinning Nikunj A Dadhania
2022-01-25 16:47   ` Peter Gonda
2022-01-25 17:49     ` Nikunj A. Dadhania
2022-01-25 17:59       ` Peter Gonda
2022-01-27 16:29         ` Nikunj A. Dadhania
2022-01-26 10:46   ` David Hildenbrand
2022-01-28  6:57     ` Nikunj A. Dadhania
2022-01-28  8:27       ` David Hildenbrand
2022-01-28 11:04         ` Nikunj A. Dadhania
2022-01-28 11:08           ` David Hildenbrand
2022-01-31 11:56             ` David Hildenbrand
2022-01-31 12:18               ` Nikunj A. Dadhania
2022-01-31 12:41                 ` David Hildenbrand
2022-03-06 19:48   ` Mingwei Zhang
2022-03-07  7:08     ` Nikunj A. Dadhania
2022-01-18 11:06 ` [RFC PATCH 4/6] KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by SEV/TDX Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 5/6] KVM: SEV: Carve out routine for allocation of pages Nikunj A Dadhania
2022-01-18 11:06 ` [RFC PATCH 6/6] KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data() Nikunj A Dadhania
2022-01-18 15:00   ` Maciej S. Szmigiero
2022-01-18 17:29     ` Maciej S. Szmigiero
2022-01-19 11:35       ` Nikunj A. Dadhania
2022-01-19  6:33     ` Nikunj A. Dadhania
2022-01-19 18:52       ` Maciej S. Szmigiero
2022-01-20  4:24         ` Nikunj A. Dadhania
2022-01-20 16:17   ` Peter Gonda
2022-01-21  4:08     ` Nikunj A. Dadhania
2022-01-21 16:00       ` Peter Gonda
2022-01-21 17:14         ` Nikunj A. Dadhania
2022-03-06 20:07 ` Mingwei Zhang [this message]
2022-03-07 13:02   ` [RFC PATCH 0/6] KVM: SVM: Defer page pinning for SEV guests Nikunj A. Dadhania
