linux-coco.lists.linux.dev archive mirror
From: Binbin Wu <binbin.wu@linux.intel.com>
To: Michael Roth <michael.roth@amd.com>, kvm@vger.kernel.org
Cc: linux-coco@lists.linux.dev, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Isaku Yamahata <isaku.yamahata@linux.intel.com>,
	Xu Yilun <yilun.xu@linux.intel.com>,
	Xiaoyao Li <xiaoyao.li@intel.com>,
	Isaku Yamahata <isaku.yamahata@intel.com>
Subject: Re: [PATCH gmem 2/6] KVM: guest_memfd: Only call kvm_arch_gmem_prepare hook if necessary
Date: Mon, 1 Apr 2024 13:06:07 +0800	[thread overview]
Message-ID: <297fd9b8-9321-40e3-816b-2de92cb1a3ae@linux.intel.com> (raw)
In-Reply-To: <20240329212444.395559-3-michael.roth@amd.com>
On 3/30/2024 5:24 AM, Michael Roth wrote:
> It has been reported that the internal workings of
> kvm_gmem_prepare_folio() incurs noticeable overhead for large guests
> even for platforms where kvm_arch_gmem_prepare() is a no-op.
>
> Provide a new kvm_arch_gmem_prepare_needed() hook so that architectures
> that set CONFIG_HAVE_KVM_GMEM_PREPARE can still opt-out of issuing the
> kvm_arch_gmem_prepare() callback

Just wondering which part has the big impact on performance:
issuing the kvm_arch_gmem_prepare() callback itself, or the preparation
code that precedes it in kvm_gmem_prepare_folio()?


> if the particular KVM instance doesn't
> require any sort of special preparation of its gmem pages prior to use.
>
> Link: https://lore.kernel.org/lkml/20240228202906.GB10568@ls.amr.corp.intel.com/
> Suggested-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---
>   include/linux/kvm_host.h |  1 +
>   virt/kvm/guest_memfd.c   | 10 ++++++++++
>   2 files changed, 11 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 2f5074eff958..5b8308b5e4af 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2466,6 +2466,7 @@ static inline int kvm_gmem_undo_get_pfn(struct kvm *kvm,
>   
>   #ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
>   int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
> +bool kvm_arch_gmem_prepare_needed(struct kvm *kvm);
>   #endif
>   
>   #ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 74e19170af8a..4ce0056d1149 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -13,6 +13,13 @@ struct kvm_gmem {
>   	struct list_head entry;
>   };
>   
> +#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> +bool __weak kvm_arch_gmem_prepare_needed(struct kvm *kvm)
> +{
> +	return false;
> +}
> +#endif
> +
>   static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
>   {
>   #ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
> @@ -27,6 +34,9 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
>   		gfn_t gfn;
>   		int rc;
>   
> +		if (!kvm_arch_gmem_prepare_needed(kvm))
> +			continue;

Can multiple gmems (if any) bound to the same inode's address space
belong to different KVM instances?
If not, could this just return early here instead of continuing?

> +
>   		slot = xa_load(&gmem->bindings, index);
>   		if (!slot)
>   			continue;


Thread overview: 12+ messages
2024-03-29 21:24 [PATCH gmem 0/6] gmem fix-ups and interfaces for populating gmem pages Michael Roth
2024-03-29 21:24 ` [PATCH gmem 1/6] KVM: guest_memfd: Fix stub for kvm_gmem_get_uninit_pfn() Michael Roth
2024-03-29 21:24 ` [PATCH gmem 2/6] KVM: guest_memfd: Only call kvm_arch_gmem_prepare hook if necessary Michael Roth
2024-04-01  5:06   ` Binbin Wu [this message]
2024-04-02 21:50     ` Isaku Yamahata
2024-03-29 21:24 ` [PATCH gmem 3/6] KVM: x86: Pass private/shared fault indicator to gmem_validate_fault Michael Roth
2024-03-29 21:24 ` [PATCH gmem 4/6] mm: Introduce AS_INACCESSIBLE for encrypted/confidential memory Michael Roth
2024-04-15 13:19   ` Vlastimil Babka
2024-03-29 21:24 ` [PATCH gmem 5/6] KVM: guest_memfd: Use AS_INACCESSIBLE when creating guest_memfd inode Michael Roth
2024-04-15 13:21   ` Vlastimil Babka
2024-03-29 21:24 ` [PATCH gmem 6/6] KVM: guest_memfd: Add interface for populating gmem pages with user data Michael Roth
2024-04-15 13:36   ` Vlastimil Babka