From: Mike Day <michael.day@amd.com>
To: Elliot Berman <quic_eberman@quicinc.com>,
Andrew Morton <akpm@linux-foundation.org>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
Fuad Tabba <tabba@google.com>,
David Hildenbrand <david@redhat.com>,
Patrick Roy <roypat@amazon.co.uk>,
qperret@google.com, Ackerley Tng <ackerleytng@google.com>,
Mike Rapoport <rppt@kernel.org>,
x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kvm@vger.kernel.org, linux-coco@lists.linux.dev,
linux-arm-msm@vger.kernel.org
Subject: Re: [PATCH RFC v2 3/5] kvm: Convert to use guest_memfd library
Date: Tue, 22 Oct 2024 20:18:17 -0500
Message-ID: <4eae43fb-28f8-4e84-afe1-812b71f890d4@amd.com>
In-Reply-To: <20240829-guest-memfd-lib-v2-3-b9afc1ff3656@quicinc.com>
On 8/29/24 17:24, Elliot Berman wrote:
> Use the recently created mm/guest_memfd implementation. No functional
> change intended.
>
> Note: I've only compile-tested this. Appreciate some help from SEV folks
> to be able to test this.
Is there an updated patchset?
>
> Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
> ---
> arch/x86/kvm/svm/sev.c | 3 +-
> virt/kvm/Kconfig | 1 +
> virt/kvm/guest_memfd.c | 371 ++++++++++---------------------------------------
> virt/kvm/kvm_main.c | 2 -
> virt/kvm/kvm_mm.h | 6 -
> 5 files changed, 77 insertions(+), 306 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 714c517dd4b72..f3a6857270943 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2297,8 +2297,7 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
> kunmap_local(vaddr);
> }
>
> - ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
> - sev_get_asid(kvm), true);
Need to keep rmp_make_private(): it updates the firmware's Reverse Map table (RMP) to assign the folio
to the guest. It would be used in combination with guest_memfd_make_inaccessible(), but that call cannot
be made from here and needs to move elsewhere.
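To make the combination concrete, a sketch only (I'm assuming guest_memfd_make_inaccessible()
takes a folio, as I read this series; this is not a claim about the right call site):

        /*
         * Sketch, not a proposed call site: the guest_memfd library call
         * removes host access to the folio, and the SEV-SNP side still has
         * to assign the page to the guest in the RMP. Keeping
         * rmp_make_private() is not optional; only where the
         * guest_memfd_make_inaccessible() call lives is open.
         */
        ret = guest_memfd_make_inaccessible(pfn_folio(pfn + i));
        if (ret)
                return ret;

        ret = rmp_make_private(pfn + i, gfn << PAGE_SHIFT, PG_LEVEL_4K,
                               sev_get_asid(kvm), true);
        if (ret)
                return ret;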
> +static inline struct kvm_gmem *inode_to_kvm_gmem(struct inode *inode)
> +{
> + struct list_head *gmem_list = &inode->i_mapping->i_private_list;
> +
> + return list_first_entry_or_null(gmem_list, struct kvm_gmem, entry);
SEV-SNP guests end up creating multiple struct kvm_gmem objects per guest, each bound to different
memory slots, so taking the first list entry will not always return the correct gmem object for an
SEV-SNP guest (sketch after the quoted hunk below).
> +}
> +
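Something along these lines may be needed instead of taking the first entry. Sketch only:
index_to_kvm_gmem() is a name I made up, and I'm assuming the bindings xarray that kvm's
guest_memfd.c keeps per kvm_gmem survives the conversion:

        /*
         * Sketch: walk the inode's gmem list and pick the instance that
         * actually has a binding covering this index, rather than the
         * first list entry. index_to_kvm_gmem() is hypothetical.
         */
        static struct kvm_gmem *index_to_kvm_gmem(struct inode *inode, pgoff_t index)
        {
                struct list_head *gmem_list = &inode->i_mapping->i_private_list;
                struct kvm_gmem *gmem;

                list_for_each_entry(gmem, gmem_list, entry) {
                        if (xa_load(&gmem->bindings, index))
                                return gmem;
                }

                return NULL;
        }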
> -static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> - pgoff_t index, struct folio *folio)
> +static int kvm_gmem_prepare_inaccessible(struct inode *inode, struct folio *folio)
> {
> #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
> - kvm_pfn_t pfn = folio_file_pfn(folio, index);
> - gfn_t gfn = slot->base_gfn + index - slot->gmem.pgoff;
> + kvm_pfn_t pfn = folio_file_pfn(folio, 0);
> + gfn_t gfn = slot->base_gfn + folio_index(folio) - slot->gmem.pgoff;
There is no longer a struct kvm_memory_slot * (nor a struct kvm *) in the prototype, so this won't compile.
It creates an impedance mismatch with the way KVM gmem calls prepare_folio() for SEV-SNP; a sketch of one
way to recover the slot follows the quoted hunk.
> int rc = kvm_arch_gmem_prepare(kvm, gfn, pfn, folio_order(folio));
> if (rc) {
> pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx GFN %llx PFN %llx error %d.\n",
> @@ -42,67 +46,7 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo
> return 0;
> }
>
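For illustration, one way the new prototype could recover the slot and kvm, reusing the hypothetical
index_to_kvm_gmem() helper sketched above. This is only a sketch of the shape, not a proposed patch:

        static int kvm_gmem_prepare_inaccessible(struct inode *inode,
                                                 struct folio *folio)
        {
        #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
                pgoff_t index = folio_index(folio);
                struct kvm_gmem *gmem = index_to_kvm_gmem(inode, index); /* hypothetical */
                struct kvm_memory_slot *slot;
                kvm_pfn_t pfn = folio_file_pfn(folio, index);
                gfn_t gfn;
                int rc;

                if (!gmem)
                        return -ENOENT;

                /* Find the memslot bound at this index, then derive the gfn. */
                slot = xa_load(&gmem->bindings, index);
                if (!slot)
                        return -ENOENT;

                gfn = slot->base_gfn + index - slot->gmem.pgoff;
                rc = kvm_arch_gmem_prepare(gmem->kvm, gfn, pfn, folio_order(folio));
                if (rc)
                        pr_warn_ratelimited("gmem: Failed to prepare folio for index %lx GFN %llx PFN %llx error %d.\n",
                                            index, gfn, pfn, rc);
                return rc;
        #else
                return 0;
        #endif
        }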
> -static inline void kvm_gmem_mark_prepared(struct folio *folio)
> -{
> - folio_mark_uptodate(folio);
> -}
mark_prepared takes on additional meaning with SEV-SNP beyond uptodate, although this
could be separated into a different state. "Preparation" includes setting the Reverse Map table (RMP)
assigned bit: it eventually ends up in the SEV code making an RMP assignment and
clearing the folio (from arch/x86/kvm/svm/sev.c):

        if (!folio_test_uptodate(folio)) {
                unsigned long nr_pages = level == PG_LEVEL_4K ? 1 : 512;
                int i;

                pr_debug("%s: folio not up-to-date, clearing folio pages.\n", __func__);
                for (i = 0; i < nr_pages; i++)
                        clear_highpage(pfn_to_page(pfn_aligned + i));
        }

{mark|test}_uptodate is still intertwined with the architectural code and probably should be
disentangled in favor of "prepare."
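For instance (sketch only, names illustrative, ignoring where the helpers would live): keep a
gmem-owned "prepared" helper pair so arch code stops testing the page-cache uptodate bit directly,
even while the two remain the same bit underneath today:

        /*
         * Sketch: gmem-owned "prepared" helpers. Today this is the uptodate
         * bit, but arch code (e.g. the SEV-SNP snippet above) would call
         * these rather than folio_{test,mark}_uptodate() directly.
         */
        static inline void kvm_gmem_mark_prepared(struct folio *folio)
        {
                folio_mark_uptodate(folio);
        }

        static inline bool kvm_gmem_folio_is_prepared(struct folio *folio)
        {
                return folio_test_uptodate(folio);
        }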
> -
> -/*
> - * Process @folio, which contains @gfn, so that the guest can use it.
> - * The folio must be locked and the gfn must be contained in @slot.
> - * On successful return the guest sees a zero page so as to avoid
> - * leaking host data and the up-to-date flag is set.
> - */
> -static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> - gfn_t gfn, struct folio *folio)
>
Is it correct that gmem->prepare_inaccessible() is the direct analogue to
kvm_gmem_prepare_folio?
> -#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
> -static void kvm_gmem_free_folio(struct folio *folio)
> -{
> - struct page *page = folio_page(folio, 0);
> - kvm_pfn_t pfn = page_to_pfn(page);
> - int order = folio_order(folio);
> -
> - kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
kvm_arch_gmem_invalidate() is necessary for gmem SEV-SNP: it calls sev_gmem_invalidate(), which
performs RMP modifications and flushes caches. These operations must occur whenever a guest page
is split or released.
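In other words, whatever address_space_operations the guest_memfd library ends up installing still
needs a free_folio hook that reaches the arch invalidate, roughly like the removed code above
(sketch, mirroring the deleted hunk):

        #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
        /* Sketch: preserve the RMP/cache cleanup when a gmem folio is freed. */
        static void kvm_gmem_free_folio(struct folio *folio)
        {
                kvm_pfn_t pfn = page_to_pfn(folio_page(folio, 0));

                kvm_arch_gmem_invalidate(pfn, pfn + (1ul << folio_order(folio)));
        }
        #endif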
> @@ -656,19 +444,12 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
> break;
> }
>
> - folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &is_prepared, &max_order);
> + folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, true, &max_order);
probably need to retain the is_prepared check here instead of always passing true and declaring the
folio prepared; a sketch follows.
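That is, roughly the current kvm flow (sketch; how the flag gets plumbed through the library API is
the open part):

                folio = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &is_prepared,
                                           &max_order);
                if (IS_ERR(folio)) {
                        ret = PTR_ERR(folio);
                        break;
                }

                /* Don't re-populate (and re-clear) a folio that was already prepared. */
                if (is_prepared) {
                        folio_unlock(folio);
                        folio_put(folio);
                        ret = -EEXIST;
                        break;
                }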
thanks,
Mike