From: Zheyun Shen <szy0127@sjtu.edu.cn>
To: Sean Christopherson <seanjc@google.com>
Cc: pbonzini <pbonzini@redhat.com>, tglx <tglx@linutronix.de>,
thomas lendacky <thomas.lendacky@amd.com>,
kvm <kvm@vger.kernel.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] KVM:SVM: Flush cache only on CPUs running SEV guest
Date: Tue, 5 Mar 2024 22:57:03 +0800 (CST) [thread overview]
Message-ID: <722904540.5000784.1709650623262.JavaMail.zimbra@sjtu.edu.cn> (raw)
On Mon, Mar 04, 2024, Sean Christopherson wrote:
> Instead of copy+paste WBINVD+cpumask_clear() everywhere, add a prep patch to
> replace relevant open coded calls to wbinvd_on_all_cpus() with calls to
> sev_guest_memory_reclaimed(). Then only sev_guest_memory_reclaimed() needs to
> updated, and IMO it helps document why KVM is blasting WBINVD.
> I'm also pretty sure this should be a cpumask_var_t, and dynamically allocated
> as appropriate. And at that point, it should be allocated and filled if and only
> if the CPU doesn't have X86_FEATURE_SME_COHERENT
I notice that several callers of wbinvd_on_all_cpus() must use WBINVD to flush the cache,
rather than using CLFLUSH or doing nothing, even when the CPU has X86_FEATURE_SME_COHERENT,
according to https://github.com/AMDESE/linux/commit/2e2409afe5f0c284c7dfe5504058e8d115806a7d
Therefore, I think the flush operation should be split into two functions. One is the
optimized WBINVD path, which does not consider X86_FEATURE_SME_COHERENT; the other is
sev_guest_memory_reclaimed(), which should use CLFLUSH instead of WBINVD when
X86_FEATURE_SME_COHERENT is present. Thus the cpumask struct should exist whether or not
the CPU has X86_FEATURE_SME_COHERENT.
Besides, if we rely on X86_FEATURE_SME_COHERENT to avoid WBINVD in sev_guest_memory_reclaimed(),
we must ensure that CLFLUSH is called on the corresponding addresses, as mentioned in
https://github.com/AMDESE/linux/commit/d45829b351ee6ec5f54dd55e6aca1f44fe239fe6
However, callers of sev_guest_memory_reclaimed() (e.g., kvm_mmu_notifier_invalidate_range_start())
only get HVAs that belong to userspace (e.g., QEMU), so calling CLFLUSH on such an HVA may
lead to a page fault in the kernel. I was wondering whether notifying userspace applications
to do the CLFLUSH themselves is the only solution here. But for the sake of safety, perhaps
KVM cannot leave that work to untrusted userspace applications?
Or should I temporarily ignore the X86_FEATURE_SME_COHERENT scenario, which is hard to
implement, and refine the patch only for wbinvd_on_all_cpus()?
Best regards,
Zheyun Shen
Thread overview: 8+ messages
2024-03-05 14:57 Zheyun Shen [this message]
2024-03-05 16:49 ` [PATCH] KVM:SVM: Flush cache only on CPUs running SEV guest Sean Christopherson
-- strict thread matches above, loose matches on Subject: below --
2024-03-01 2:30 Zheyun Shen
2024-03-04 17:55 ` Sean Christopherson
2024-03-05 22:51 ` Tom Lendacky
2024-03-06 0:19 ` Sean Christopherson
2024-03-06 16:26 ` Tom Lendacky
2024-03-06 16:45 ` Sean Christopherson