public inbox for kvm@vger.kernel.org
From: Nikunj A Dadhania <nikunj@amd.com>
To: Zheyun Shen <szy0127@sjtu.edu.cn>, <thomas.lendacky@amd.com>,
	<seanjc@google.com>, <pbonzini@redhat.com>, <tglx@linutronix.de>,
	<kevinloughlin@google.com>, <mingo@redhat.com>, <bp@alien8.de>
Cc: <kvm@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Zheyun Shen <szy0127@sjtu.edu.cn>
Subject: Re: [PATCH v5 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest
Date: Tue, 21 Jan 2025 05:01:35 +0000	[thread overview]
Message-ID: <85frlcvjyo.fsf@amd.com> (raw)
In-Reply-To: <20250120120503.470533-4-szy0127@sjtu.edu.cn>

Zheyun Shen <szy0127@sjtu.edu.cn> writes:

> On AMD CPUs that lack hardware-enforced cache coherency for encrypted
> guest memory, each memory page reclamation in an SEV guest triggers a
> call to wbinvd_on_all_cpus(), degrading the performance of other
> programs on the host.
>
> Typically, an AMD server may have 128 cores or more, while the SEV
> guest might only utilize 8 of those cores. Meanwhile, the host can use
> qemu-affinity to bind these 8 vCPUs to specific physical CPUs.
>
> Therefore, keeping a record of the physical core numbers each time a vCPU
> runs can help avoid flushing the cache for all CPUs every time.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Zheyun Shen <szy0127@sjtu.edu.cn>
> ---
>  arch/x86/kvm/svm/sev.c | 39 ++++++++++++++++++++++++++++++++++++---
>  arch/x86/kvm/svm/svm.c |  2 ++
>  arch/x86/kvm/svm/svm.h |  5 ++++-
>  3 files changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 1ce67de9d..91469edd1 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -252,6 +252,36 @@ static void sev_asid_free(struct kvm_sev_info *sev)
>  	sev->misc_cg = NULL;
>  }
>  
> +static struct cpumask *sev_get_wbinvd_dirty_mask(struct kvm *kvm)
> +{
> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

There is a helper to get sev_info: to_kvm_sev_info(). If you use that,
the sev_get_wbinvd_dirty_mask() helper will not be needed.

> +
> +	return sev->wbinvd_dirty_mask;
> +}
> +
> +void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> +{
> +	/*
> +	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
> +	 * track physical CPUs that enter the guest for SEV VMs and thus can
> +	 * have encrypted, dirty data in the cache, and flush caches only for
> +	 * CPUs that have entered the guest.
> +	 */
> +	cpumask_set_cpu(cpu, sev_get_wbinvd_dirty_mask(vcpu->kvm));
> +}
> +
> +static void sev_do_wbinvd(struct kvm *kvm)
> +{
> +	struct cpumask *dirty_mask = sev_get_wbinvd_dirty_mask(kvm);
> +
> +	/*
> +	 * TODO: Clear CPUs from the bitmap prior to flushing.  Doing so
> +	 * requires serializing multiple calls and having CPUs mark themselves
> +	 * "dirty" if they are currently running a vCPU for the VM.
> +	 */
> +	wbinvd_on_many_cpus(dirty_mask);
> +}

Something like the below

void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
        /* ... */
        cpumask_set_cpu(cpu, to_kvm_sev_info(vcpu->kvm)->wbinvd_dirty_mask);
}

static void sev_do_wbinvd(struct kvm *kvm)
{
        /* ... */
        wbinvd_on_many_cpus(to_kvm_sev_info(kvm)->wbinvd_dirty_mask);
}

Regards,
Nikunj


Thread overview: 6+ messages
2025-01-20 12:05 [PATCH v5 0/3] KVM: SVM: Flush cache only on CPUs running SEV guest Zheyun Shen
2025-01-20 12:05 ` [PATCH v5 1/3] KVM: x86: Add a wbinvd helper Zheyun Shen
2025-01-20 12:05 ` [PATCH v5 2/3] KVM: SVM: Remove wbinvd in sev_vm_destroy() Zheyun Shen
2025-01-20 12:05 ` [PATCH v5 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest Zheyun Shen
2025-01-21  5:01   ` Nikunj A Dadhania [this message]
2025-01-26 10:33     ` Zheyun Shen
