From: Sean Christopherson <seanjc@google.com>
To: Kevin Loughlin <kevinloughlin@google.com>
Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
x86@kernel.org, hpa@zytor.com, pbonzini@redhat.com,
kirill.shutemov@linux.intel.com, kai.huang@intel.com,
ubizjak@gmail.com, jgross@suse.com, kvm@vger.kernel.org,
thomas.lendacky@amd.com, pgonda@google.com,
sidtelang@google.com, mizhang@google.com, rientjes@google.com,
manalinandan@google.com, szy0127@sjtu.edu.cn
Subject: Re: [PATCH v5 2/2] KVM: SEV: Prefer WBNOINVD over WBINVD for cache maintenance efficiency
Date: Tue, 25 Feb 2025 17:30:09 -0800 [thread overview]
Message-ID: <Z75uoedJyZ18CZeh@google.com> (raw)
In-Reply-To: <20250123002422.1632517-3-kevinloughlin@google.com>
On Thu, Jan 23, 2025, Kevin Loughlin wrote:
> +static inline void sev_writeback_caches(void)
> +{
> + /*
> + * Ensure that all dirty guest tagged cache entries are written back
> + * before releasing the pages back to the system for use. CLFLUSH will
> + * not do this without SME_COHERENT, so issue a WBNOINVD.
This is somewhat misleading. A naive reading would interpret the above as saying
that KVM _should_ do CLFLUSH on SME_COHERENT CPUs, which raises the question of
why KVM _doesn't_ do that. I also think this is the right place to
call out that WBNOINVD support isn't guaranteed.
* Ensure that all dirty guest tagged cache entries are written back
* before releasing the pages back to the system for use. CLFLUSH will
* not do this without SME_COHERENT, and flushing many cache lines
* individually is slower than blasting WBINVD for large VMs, so issue
* WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported)
* on CPUs that have done VMRUN, i.e. may have dirtied data using the
* VM's ASID.
> + */
> + wbnoinvd_on_all_cpus();
> +}
> +
> static unsigned long get_num_contig_pages(unsigned long idx,
> struct page **inpages, unsigned long npages)
> {
> @@ -2773,12 +2784,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
> goto failed;
> }
>
> - /*
> - * Ensure that all guest tagged cache entries are flushed before
> - * releasing the pages back to the system for use. CLFLUSH will
> - * not do this, so issue a WBINVD.
> - */
> - wbinvd_on_all_cpus();
> + sev_writeback_caches();
>
> __unregister_enc_region_locked(kvm, region);
>
> @@ -2899,12 +2905,7 @@ void sev_vm_destroy(struct kvm *kvm)
> return;
> }
>
> - /*
> - * Ensure that all guest tagged cache entries are flushed before
> - * releasing the pages back to the system for use. CLFLUSH will
> - * not do this, so issue a WBINVD.
> - */
> - wbinvd_on_all_cpus();
> + sev_writeback_caches();
>
> /*
> * if userspace was terminated before unregistering the memory regions
> @@ -3126,16 +3127,16 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
>
> /*
> * VM Page Flush takes a host virtual address and a guest ASID. Fall
> - * back to WBINVD if this faults so as not to make any problems worse
> + * back to WBNOINVD if this faults so as not to make any problems worse
I don't like replacing WBINVD with WBNOINVD everywhere. It incorrectly implies
that WBNOINVD is guaranteed. Easiest thing is just to avoid mnemonics and describe
the behavior in conversational language (well, as conversational as cache flushing
gets :-D).
> @@ -3858,7 +3859,7 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
> * guest-mapped page rather than the initial one allocated
> * by KVM in svm->sev_es.vmsa. In theory, svm->sev_es.vmsa
> * could be free'd and cleaned up here, but that involves
> - * cleanups like wbinvd_on_all_cpus() which would ideally
> + * cleanups like sev_writeback_caches() which would ideally
Similarly, avoid function names in comments, because they too become stale.
> * be handled during teardown rather than guest boot.
> * Deferring that also allows the existing logic for SEV-ES
> * VMSAs to be re-used with minimal SNP-specific changes.