* [PATCH 00/21] Fixes and lock cleanup+hardening
@ 2026-03-10 23:48 Sean Christopherson
2026-03-10 23:48 ` [PATCH 01/21] KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test Sean Christopherson
` (21 more replies)
0 siblings, 22 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Fix several fatal SEV bugs, then clean up the SEV+ APIs to either document
that they are safe to query outside of kvm->lock, or to use lockdep-protected
versions. The sev_mem_enc_register_region() goof is at least the second bug
we've had related to checking for an SEV guest outside of kvm->lock, and in
general it's nearly impossible to just "eyeball" the safety of KVM's usage.
I included Carlos' guard() cleanups here to avoid annoying conflicts (well,
to solve them now instead of when applying).
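For context, the guard() cleanups convert manual mutex_lock()/mutex_unlock()
pairs (and the error-label gotos they force) into scope-based release. A rough
userspace model of the mechanism behind the kernel's <linux/cleanup.h> helpers,
using the compiler's cleanup attribute (names here are illustrative, not the
kernel's API):

```c
#include <assert.h>
#include <pthread.h>

/* Run when the guarded variable goes out of scope: drop the lock. */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* Acquire the lock and arrange for it to be released on any scope exit. */
#define guard_mutex(m)							\
	pthread_mutex_t *guard_ptr __attribute__((cleanup(unlock_cleanup))) = \
		(pthread_mutex_lock(m), (m))

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static int bump(int abort_early)
{
	guard_mutex(&counter_lock);

	if (abort_early)
		return -1;	/* lock is still dropped automatically */
	return ++counter;
}
```

Every return path releases the lock, which is why the conversions can delete
explicit unlock calls and error labels without changing behavior.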
Carlos López (5):
KVM: SEV: use mutex guard in snp_launch_update()
KVM: SEV: use mutex guard in sev_mem_enc_ioctl()
KVM: SEV: use mutex guard in sev_mem_enc_unregister_region()
KVM: SEV: use mutex guard in snp_handle_guest_req()
KVM: SVM: Move lock-protected allocation of SEV ASID into a separate
helper
Sean Christopherson (16):
KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES
migrate test
KVM: SEV: Reject attempts to sync VMSA of an
already-launched/encrypted vCPU
KVM: SEV: Protect *all* of sev_mem_enc_register_region() with
kvm->lock
KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created
KVM: SEV: Lock all vCPUs when synchronizing VMSAs for SNP launch finish
KVM: SEV: Lock all vCPUs for the duration of SEV-ES VMSA
synchronization
KVM: SEV: Provide vCPU-scoped accessors for detecting SEV+ guests
KVM: SEV: Add quad-underscore version of VM-scoped APIs to detect SEV+
guests
KVM: SEV: Document the SEV-ES check when querying SMM support as
"safe"
KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to
sev.c
KVM: SEV: Move SEV-specific VM initialization to sev.c
KVM: SEV: WARN on unhandled VM type when initializing VM
KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y
KVM: SEV: Document that checking for SEV+ guests when reclaiming
memory is "safe"
KVM: SEV: Assert that kvm->lock is held when querying SEV+ support
KVM: SEV: Goto an existing error label if charging misc_cg for an ASID
fails
arch/x86/kvm/svm/sev.c | 315 +++++++++++-------
arch/x86/kvm/svm/svm.c | 106 +++---
arch/x86/kvm/svm/svm.h | 36 +-
include/linux/kvm_host.h | 7 +
.../selftests/kvm/x86/sev_migrate_tests.c | 2 -
5 files changed, 275 insertions(+), 191 deletions(-)
base-commit: 11439c4635edd669ae435eec308f4ab8a0804808
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 01/21] KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 02/21] KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU Sean Christopherson
` (20 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Drop the explicit KVM_SEV_LAUNCH_UPDATE_VMSA call when creating an SEV-ES
VM in the SEV migration test, as sev_vm_create() automatically updates the
VMSA pages for SEV-ES guests. The only reason the duplicate call doesn't
cause visible problems is that the test doesn't actually try to run the
vCPUs. That will change when KVM adds a check to prevent userspace from
re-launching a VMSA (which corrupts the VMSA page due to KVM writing
encrypted private memory).
Fixes: 69f8e15ab61f ("KVM: selftests: Use the SEV library APIs in the intra-host migration test")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/x86/sev_migrate_tests.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
index 0a6dfba3905b..6b0928e69051 100644
--- a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
+++ b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
@@ -36,8 +36,6 @@ static struct kvm_vm *sev_vm_create(bool es)
sev_vm_launch(vm, es ? SEV_POLICY_ES : 0);
- if (es)
- vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
return vm;
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 02/21] KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
2026-03-10 23:48 ` [PATCH 01/21] KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 03/21] KVM: SEV: Protect *all* of sev_mem_enc_register_region() with kvm->lock Sean Christopherson
` (19 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Reject synchronizing vCPU state to its associated VMSA if the vCPU has
already been launched, i.e. if the VMSA has already been encrypted. On a
host with SNP enabled, accessing guest-private memory generates an RMP #PF
and panics the host.
BUG: unable to handle page fault for address: ff1276cbfdf36000
#PF: supervisor write access in kernel mode
#PF: error_code(0x80000003) - RMP violation
PGD 5a31801067 P4D 5a31802067 PUD 40ccfb5063 PMD 40e5954063 PTE 80000040fdf36163
SEV-SNP: PFN 0x40fdf36, RMP entry: [0x6010fffffffff001 - 0x000000000000001f]
Oops: Oops: 0003 [#1] SMP NOPTI
CPU: 33 UID: 0 PID: 996180 Comm: qemu-system-x86 Tainted: G OE
Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: Dell Inc. PowerEdge R7625/0H1TJT, BIOS 1.5.8 07/21/2023
RIP: 0010:sev_es_sync_vmsa+0x54/0x4c0 [kvm_amd]
Call Trace:
<TASK>
snp_launch_update_vmsa+0x19d/0x290 [kvm_amd]
snp_launch_finish+0xb6/0x380 [kvm_amd]
sev_mem_enc_ioctl+0x14e/0x720 [kvm_amd]
kvm_arch_vm_ioctl+0x837/0xcf0 [kvm]
kvm_vm_ioctl+0x3fd/0xcc0 [kvm]
__x64_sys_ioctl+0xa3/0x100
x64_sys_call+0xfe0/0x2350
do_syscall_64+0x81/0x10f0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7ffff673287d
</TASK>
Note, the KVM flaw has been present since commit ad73109ae7ec ("KVM: SVM:
Provide support to launch and run an SEV-ES guest"), but has only been
actively dangerous for the host since SNP support was added. With SEV-ES,
KVM would "just" clobber guest state, which is totally fine from a host
kernel perspective since userspace can clobber guest state any time before
sev_launch_update_vmsa().
Fixes: ad27ce155566 ("KVM: SEV: Add KVM_SEV_SNP_LAUNCH_FINISH command")
Reported-by: Jethro Beekman <jethro@fortanix.com>
Closes: https://lore.kernel.org/all/d98692e2-d96b-4c36-8089-4bc1e5cc3d57@fortanix.com
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 3f9c1aa39a0a..fa319a66938c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -882,6 +882,9 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
u8 *d;
int i;
+ if (vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
/* Check some debug related fields before encrypting the VMSA */
if (svm->vcpu.guest_debug || (svm->vmcb->save.dr7 & ~DR7_FIXED_1))
return -EINVAL;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 03/21] KVM: SEV: Protect *all* of sev_mem_enc_register_region() with kvm->lock
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
2026-03-10 23:48 ` [PATCH 01/21] KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test Sean Christopherson
2026-03-10 23:48 ` [PATCH 02/21] KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 04/21] KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created Sean Christopherson
` (18 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Take and hold kvm->lock before checking sev_guest() in
sev_mem_enc_register_region(), as sev_guest() isn't stable unless kvm->lock
is held (or KVM can guarantee KVM_SEV_INIT{2} has completed and can't
roll back state). If KVM_SEV_INIT{2} fails, KVM can end up trying to add to
a not-yet-initialized sev->regions_list, e.g. triggering a #GP:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 110 UID: 0 PID: 72717 Comm: syz.15.11462 Tainted: G U W O 6.16.0-smp-DEV #1 NONE
Tainted: [U]=USER, [W]=WARN, [O]=OOT_MODULE
Hardware name: Google, Inc. Arcadia_IT_80/Arcadia_IT_80, BIOS 12.52.0-0 10/28/2024
RIP: 0010:sev_mem_enc_register_region+0x3f0/0x4f0 ../include/linux/list.h:83
Code: <41> 80 3c 04 00 74 08 4c 89 ff e8 f1 c7 a2 00 49 39 ed 0f 84 c6 00
RSP: 0018:ffff88838647fbb8 EFLAGS: 00010256
RAX: dffffc0000000000 RBX: 1ffff92015cf1e0b RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000001000 RDI: ffff888367870000
RBP: ffffc900ae78f050 R08: ffffea000d9e0007 R09: 1ffffd4001b3c000
R10: dffffc0000000000 R11: fffff94001b3c001 R12: 0000000000000000
R13: ffff8982ab0bde00 R14: ffffc900ae78f058 R15: 0000000000000000
FS: 00007f34e9dc66c0(0000) GS:ffff89ee64d33000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe180adef98 CR3: 000000047210e000 CR4: 0000000000350ef0
Call Trace:
<TASK>
kvm_arch_vm_ioctl+0xa72/0x1240 ../arch/x86/kvm/x86.c:7371
kvm_vm_ioctl+0x649/0x990 ../virt/kvm/kvm_main.c:5363
__se_sys_ioctl+0x101/0x170 ../fs/ioctl.c:51
do_syscall_x64 ../arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x6f/0x1f0 ../arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f34e9f7e9a9
Code: <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f34e9dc6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f34ea1a6080 RCX: 00007f34e9f7e9a9
RDX: 0000200000000280 RSI: 000000008010aebb RDI: 0000000000000007
RBP: 00007f34ea000d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f34ea1a6080 R15: 00007ffce77197a8
</TASK>
with a syzlang reproducer that looks like:
syz_kvm_add_vcpu$x86(0x0, &(0x7f0000000040)={0x0, &(0x7f0000000180)=ANY=[], 0x70}) (async)
syz_kvm_add_vcpu$x86(0x0, &(0x7f0000000080)={0x0, &(0x7f0000000180)=ANY=[@ANYBLOB="..."], 0x4f}) (async)
r0 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000200), 0x0, 0x0)
r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
r2 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000240), 0x0, 0x0)
r3 = ioctl$KVM_CREATE_VM(r2, 0xae01, 0x0)
ioctl$KVM_SET_CLOCK(r3, 0xc008aeba, &(0x7f0000000040)={0x1, 0x8, 0x0, 0x5625e9b0}) (async)
ioctl$KVM_SET_PIT2(r3, 0x8010aebb, &(0x7f0000000280)={[...], 0x5}) (async)
ioctl$KVM_SET_PIT2(r1, 0x4070aea0, 0x0) (async)
r4 = ioctl$KVM_CREATE_VM(0xffffffffffffffff, 0xae01, 0x0)
openat$kvm(0xffffffffffffff9c, 0x0, 0x0, 0x0) (async)
ioctl$KVM_SET_USER_MEMORY_REGION(r4, 0x4020ae46, &(0x7f0000000400)={0x0, 0x0, 0x0, 0x2000, &(0x7f0000001000/0x2000)=nil}) (async)
r5 = ioctl$KVM_CREATE_VCPU(r4, 0xae41, 0x2)
close(r0) (async)
openat$kvm(0xffffffffffffff9c, &(0x7f0000000000), 0x8000, 0x0) (async)
ioctl$KVM_SET_GUEST_DEBUG(r5, 0x4048ae9b, &(0x7f0000000300)={0x4376ea830d46549b, 0x0, [0x46, 0x0, 0x0, 0x0, 0x0, 0x1000]}) (async)
ioctl$KVM_RUN(r5, 0xae80, 0x0)
Opportunistically use guard() to avoid having to define a new error label
and goto usage.
Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
Cc: stable@vger.kernel.org
Reported-by: Alexander Potapenko <glider@google.com>
Tested-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index fa319a66938c..7da040baba1c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2704,6 +2704,8 @@ int sev_mem_enc_register_region(struct kvm *kvm,
struct enc_region *region;
int ret = 0;
+ guard(mutex)(&kvm->lock);
+
if (!sev_guest(kvm))
return -ENOTTY;
@@ -2718,12 +2720,10 @@ int sev_mem_enc_register_region(struct kvm *kvm,
if (!region)
return -ENOMEM;
- mutex_lock(&kvm->lock);
region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages,
FOLL_WRITE | FOLL_LONGTERM);
if (IS_ERR(region->pages)) {
ret = PTR_ERR(region->pages);
- mutex_unlock(&kvm->lock);
goto e_free;
}
@@ -2741,8 +2741,6 @@ int sev_mem_enc_register_region(struct kvm *kvm,
region->size = range->size;
list_add_tail(&region->list, &sev->regions_list);
- mutex_unlock(&kvm->lock);
-
return ret;
e_free:
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 04/21] KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (2 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 03/21] KVM: SEV: Protect *all* of sev_mem_enc_register_region() with kvm->lock Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 05/21] KVM: SEV: Lock all vCPUs when synchronizing VMSAs for SNP launch finish Sean Christopherson
` (17 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Reject LAUNCH_FINISH for SEV-ES and SNP VMs if KVM is actively creating
one or more vCPUs, as KVM needs to process and encrypt each vCPU's VMSA.
Letting userspace create vCPUs while LAUNCH_FINISH is in-progress is
"fine", at least in the current code base, as kvm_for_each_vcpu() operates
on online_vcpus, LAUNCH_FINISH (all SEV+ sub-ioctls) holds kvm->lock, and
fully onlining a vCPU in kvm_vm_ioctl_create_vcpu() is done under
kvm->lock. I.e. there's no difference between an in-progress vCPU and a
vCPU that is created entirely after LAUNCH_FINISH.
However, given that concurrent LAUNCH_FINISH and vCPU creation can't
possibly work (for any reasonable definition of "work"), since userspace
can't guarantee whether a particular vCPU will be encrypted or not,
disallow the combination as a hardening measure, to reduce the probability
of introducing bugs in the future, and to avoid having to reason about the
safety of future changes related to LAUNCH_FINISH.
Cc: Jethro Beekman <jethro@fortanix.com>
Closes: https://lore.kernel.org/all/b31f7c6e-2807-4662-bcdd-eea2c1e132fa@fortanix.com
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 10 ++++++++--
include/linux/kvm_host.h | 7 +++++++
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7da040baba1c..5de36bbc4c53 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1030,6 +1030,9 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (!sev_es_guest(kvm))
return -ENOTTY;
+ if (kvm_is_vcpu_creation_in_progress(kvm))
+ return -EBUSY;
+
kvm_for_each_vcpu(i, vcpu, kvm) {
ret = mutex_lock_killable(&vcpu->mutex);
if (ret)
@@ -2050,8 +2053,8 @@ static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src)
struct kvm_vcpu *src_vcpu;
unsigned long i;
- if (src->created_vcpus != atomic_read(&src->online_vcpus) ||
- dst->created_vcpus != atomic_read(&dst->online_vcpus))
+ if (kvm_is_vcpu_creation_in_progress(src) ||
+ kvm_is_vcpu_creation_in_progress(dst))
return -EBUSY;
if (!sev_es_guest(src))
@@ -2450,6 +2453,9 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
unsigned long i;
int ret;
+ if (kvm_is_vcpu_creation_in_progress(kvm))
+ return -EBUSY;
+
data.gctx_paddr = __psp_pa(sev->snp_context);
data.page_type = SNP_PAGE_TYPE_VMSA;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 34759a262b28..3c7f8557f7af 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1029,6 +1029,13 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
return NULL;
}
+static inline bool kvm_is_vcpu_creation_in_progress(struct kvm *kvm)
+{
+ lockdep_assert_held(&kvm->lock);
+
+ return kvm->created_vcpus != atomic_read(&kvm->online_vcpus);
+}
+
void kvm_destroy_vcpus(struct kvm *kvm);
int kvm_trylock_all_vcpus(struct kvm *kvm);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 05/21] KVM: SEV: Lock all vCPUs when synchronizing VMSAs for SNP launch finish
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (3 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 04/21] KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 06/21] KVM: SEV: Lock all vCPUs for the duration of SEV-ES VMSA synchronization Sean Christopherson
` (16 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Lock all vCPUs when synchronizing and encrypting VMSAs for SNP guests, as
allowing userspace to manipulate and/or run a vCPU while its state is being
synchronized would at best corrupt vCPU state, and at worst crash the host
kernel.
Opportunistically assert that vcpu->mutex is held when synchronizing its
VMSA (the SEV-ES path already locks vCPUs).
Fixes: ad27ce155566 ("KVM: SEV: Add KVM_SEV_SNP_LAUNCH_FINISH command")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5de36bbc4c53..c10c71608208 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -882,6 +882,8 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
u8 *d;
int i;
+ lockdep_assert_held(&vcpu->mutex);
+
if (vcpu->arch.guest_state_protected)
return -EINVAL;
@@ -2456,6 +2458,10 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (kvm_is_vcpu_creation_in_progress(kvm))
return -EBUSY;
+ ret = kvm_lock_all_vcpus(kvm);
+ if (ret)
+ return ret;
+
data.gctx_paddr = __psp_pa(sev->snp_context);
data.page_type = SNP_PAGE_TYPE_VMSA;
@@ -2465,12 +2471,12 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
ret = sev_es_sync_vmsa(svm);
if (ret)
- return ret;
+ goto err;
/* Transition the VMSA page to a firmware state. */
ret = rmp_make_private(pfn, INITIAL_VMSA_GPA, PG_LEVEL_4K, sev->asid, true);
if (ret)
- return ret;
+ goto err;
/* Issue the SNP command to encrypt the VMSA */
data.address = __sme_pa(svm->sev_es.vmsa);
@@ -2479,7 +2485,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (ret) {
snp_page_reclaim(kvm, pfn);
- return ret;
+ goto err;
}
svm->vcpu.arch.guest_state_protected = true;
@@ -2494,6 +2500,10 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
}
return 0;
+
+err:
+ kvm_unlock_all_vcpus(kvm);
+ return ret;
}
static int snp_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 06/21] KVM: SEV: Lock all vCPUs for the duration of SEV-ES VMSA synchronization
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (4 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 05/21] KVM: SEV: Lock all vCPUs when synchronizing VMSAs for SNP launch finish Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 07/21] KVM: SEV: Provide vCPU-scoped accessors for detecting SEV+ guests Sean Christopherson
` (15 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Lock and unlock all vCPUs in a single batch when synchronizing SEV-ES VMSAs
during launch finish, partly to dedup the code by a tiny amount, but mostly
so that sev_launch_update_vmsa() uses the same logic/flow as all other SEV
ioctls that lock all vCPUs.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c10c71608208..1bdcc5bef7c3 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1035,19 +1035,18 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (kvm_is_vcpu_creation_in_progress(kvm))
return -EBUSY;
- kvm_for_each_vcpu(i, vcpu, kvm) {
- ret = mutex_lock_killable(&vcpu->mutex);
- if (ret)
- return ret;
+ ret = kvm_lock_all_vcpus(kvm);
+ if (ret)
+ return ret;
+ kvm_for_each_vcpu(i, vcpu, kvm) {
ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
-
- mutex_unlock(&vcpu->mutex);
if (ret)
- return ret;
+ break;
}
- return 0;
+ kvm_unlock_all_vcpus(kvm);
+ return ret;
}
static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 07/21] KVM: SEV: Provide vCPU-scoped accessors for detecting SEV+ guests
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (5 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 06/21] KVM: SEV: Lock all vCPUs for the duration of SEV-ES VMSA synchronization Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 08/21] KVM: SEV: Add quad-underscore version of VM-scoped APIs to detect " Sean Christopherson
` (14 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Provide vCPU-scoped accessors for detecting if the vCPU belongs to an SEV,
SEV-ES, or SEV-SNP VM, partly to dedup a small amount of code, but mostly
to better document which usages are "safe". Generally speaking, using the
VM-scoped sev_guest() and friends outside of kvm->lock is unsafe, as they
can get both false positives and false negatives.
But for vCPUs, the accessors are guaranteed to provide a stable result as
KVM disallows initializing SEV+ state after vCPUs are created. I.e.
operating on a vCPU guarantees the VM can't "become" an SEV+ VM, and that
it can't revert back to a "normal" VM.
This will also allow dropping the stubs for the VM-scoped accessors, as
it's relatively easy to eliminate usage of the accessors from common SVM
once the vCPU-scoped checks are out of the way.
No functional change intended.
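The shape of the new accessors can be modeled in a few lines: the vCPU-scoped
checks are thin wrappers over the VM-scoped ones, and are stable precisely
because the VM's SEV state is frozen before any vCPU exists. A userspace
sketch with stand-in types (hypothetical names, not the kernel structures):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stand-in model: "struct kvm" holds VM-scoped SEV state that may change
 * during KVM_SEV_INIT{2}; a vCPU holds a back-pointer to its VM. Reading
 * sev state through a vCPU is stable because vCPU creation is only allowed
 * after the VM's SEV configuration can no longer change.
 */
struct kvm { bool sev_active; };
struct kvm_vcpu { struct kvm *kvm; };

/* VM-scoped check: only stable under kvm->lock in the real code. */
static bool sev_guest(struct kvm *kvm)
{
	return kvm->sev_active;
}

/* vCPU-scoped check: stable without locking once the vCPU exists. */
static bool is_sev_guest(struct kvm_vcpu *vcpu)
{
	return sev_guest(vcpu->kvm);
}
```

The wrappers add no logic; the value is documentation: a call site that goes
through a vCPU is visibly exempt from the kvm->lock requirement.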
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 49 +++++++++++++-------------
arch/x86/kvm/svm/svm.c | 80 +++++++++++++++++++++---------------------
arch/x86/kvm/svm/svm.h | 17 +++++++++
3 files changed, 82 insertions(+), 64 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1bdcc5bef7c3..35033dc79390 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3271,7 +3271,7 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm;
- if (!sev_es_guest(vcpu->kvm))
+ if (!is_sev_es_guest(vcpu))
return;
svm = to_svm(vcpu);
@@ -3281,7 +3281,7 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
* a guest-owned page. Transition the page to hypervisor state before
* releasing it back to the system.
*/
- if (sev_snp_guest(vcpu->kvm)) {
+ if (is_sev_snp_guest(vcpu)) {
u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
if (kvm_rmp_make_shared(vcpu->kvm, pfn, PG_LEVEL_4K))
@@ -3482,7 +3482,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
goto vmgexit_err;
break;
case SVM_VMGEXIT_AP_CREATION:
- if (!sev_snp_guest(vcpu->kvm))
+ if (!is_sev_snp_guest(vcpu))
goto vmgexit_err;
if (lower_32_bits(control->exit_info_1) != SVM_VMGEXIT_AP_DESTROY)
if (!kvm_ghcb_rax_is_valid(svm))
@@ -3496,12 +3496,12 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
case SVM_VMGEXIT_TERM_REQUEST:
break;
case SVM_VMGEXIT_PSC:
- if (!sev_snp_guest(vcpu->kvm) || !kvm_ghcb_sw_scratch_is_valid(svm))
+ if (!is_sev_snp_guest(vcpu) || !kvm_ghcb_sw_scratch_is_valid(svm))
goto vmgexit_err;
break;
case SVM_VMGEXIT_GUEST_REQUEST:
case SVM_VMGEXIT_EXT_GUEST_REQUEST:
- if (!sev_snp_guest(vcpu->kvm) ||
+ if (!is_sev_snp_guest(vcpu) ||
!PAGE_ALIGNED(control->exit_info_1) ||
!PAGE_ALIGNED(control->exit_info_2) ||
control->exit_info_1 == control->exit_info_2)
@@ -3575,7 +3575,8 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
int pre_sev_run(struct vcpu_svm *svm, int cpu)
{
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
- struct kvm *kvm = svm->vcpu.kvm;
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm *kvm = vcpu->kvm;
unsigned int asid = sev_get_asid(kvm);
/*
@@ -3583,7 +3584,7 @@ int pre_sev_run(struct vcpu_svm *svm, int cpu)
* VMSA, e.g. if userspace forces the vCPU to be RUNNABLE after an SNP
* AP Destroy event.
*/
- if (sev_es_guest(kvm) && !VALID_PAGE(svm->vmcb->control.vmsa_pa))
+ if (is_sev_es_guest(vcpu) && !VALID_PAGE(svm->vmcb->control.vmsa_pa))
return -EINVAL;
/*
@@ -4129,7 +4130,7 @@ static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_
sev_ret_code fw_err = 0;
int ret;
- if (!sev_snp_guest(kvm))
+ if (!is_sev_snp_guest(&svm->vcpu))
return -EINVAL;
mutex_lock(&sev->guest_req_mutex);
@@ -4199,10 +4200,12 @@ static int snp_complete_req_certs(struct kvm_vcpu *vcpu)
static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_gpa)
{
- struct kvm *kvm = svm->vcpu.kvm;
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm *kvm = vcpu->kvm;
+
u8 msg_type;
- if (!sev_snp_guest(kvm))
+ if (!is_sev_snp_guest(vcpu))
return -EINVAL;
if (kvm_read_guest(kvm, req_gpa + offsetof(struct snp_guest_msg_hdr, msg_type),
@@ -4221,7 +4224,6 @@ static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t r
*/
if (msg_type == SNP_MSG_REPORT_REQ) {
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
- struct kvm_vcpu *vcpu = &svm->vcpu;
u64 data_npages;
gpa_t data_gpa;
@@ -4338,7 +4340,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
GHCB_MSR_INFO_MASK, GHCB_MSR_INFO_POS);
break;
case GHCB_MSR_PREF_GPA_REQ:
- if (!sev_snp_guest(vcpu->kvm))
+ if (!is_sev_snp_guest(vcpu))
goto out_terminate;
set_ghcb_msr_bits(svm, GHCB_MSR_PREF_GPA_NONE, GHCB_MSR_GPA_VALUE_MASK,
@@ -4349,7 +4351,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
case GHCB_MSR_REG_GPA_REQ: {
u64 gfn;
- if (!sev_snp_guest(vcpu->kvm))
+ if (!is_sev_snp_guest(vcpu))
goto out_terminate;
gfn = get_ghcb_msr_bits(svm, GHCB_MSR_GPA_VALUE_MASK,
@@ -4364,7 +4366,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
break;
}
case GHCB_MSR_PSC_REQ:
- if (!sev_snp_guest(vcpu->kvm))
+ if (!is_sev_snp_guest(vcpu))
goto out_terminate;
ret = snp_begin_psc_msr(svm, control->ghcb_gpa);
@@ -4437,7 +4439,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
sev_es_sync_from_ghcb(svm);
/* SEV-SNP guest requires that the GHCB GPA must be registered */
- if (sev_snp_guest(svm->vcpu.kvm) && !ghcb_gpa_is_registered(svm, ghcb_gpa)) {
+ if (is_sev_snp_guest(vcpu) && !ghcb_gpa_is_registered(svm, ghcb_gpa)) {
vcpu_unimpl(&svm->vcpu, "vmgexit: GHCB GPA [%#llx] is not registered.\n", ghcb_gpa);
return -EINVAL;
}
@@ -4695,10 +4697,10 @@ void sev_init_vmcb(struct vcpu_svm *svm, bool init_event)
*/
clr_exception_intercept(svm, GP_VECTOR);
- if (init_event && sev_snp_guest(vcpu->kvm))
+ if (init_event && is_sev_snp_guest(vcpu))
sev_snp_init_protected_guest_state(vcpu);
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
sev_es_init_vmcb(svm, init_event);
}
@@ -4709,7 +4711,7 @@ int sev_vcpu_create(struct kvm_vcpu *vcpu)
mutex_init(&svm->sev_es.snp_vmsa_mutex);
- if (!sev_es_guest(vcpu->kvm))
+ if (!is_sev_es_guest(vcpu))
return 0;
/*
@@ -4729,8 +4731,6 @@ int sev_vcpu_create(struct kvm_vcpu *vcpu)
void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
{
- struct kvm *kvm = svm->vcpu.kvm;
-
/*
* All host state for SEV-ES guests is categorized into three swap types
* based on how it is handled by hardware during a world switch:
@@ -4769,7 +4769,8 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are
* loaded with the correct values *if* the CPU writes the MSRs.
*/
if (sev_vcpu_has_debug_swap(svm) ||
- (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) {
+ (cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP) &&
+ is_sev_snp_guest(&svm->vcpu))) {
hostsa->dr0_addr_mask = amd_get_dr_addr_mask(0);
hostsa->dr1_addr_mask = amd_get_dr_addr_mask(1);
hostsa->dr2_addr_mask = amd_get_dr_addr_mask(2);
@@ -5133,7 +5134,7 @@ struct vmcb_save_area *sev_decrypt_vmsa(struct kvm_vcpu *vcpu)
int error = 0;
int ret;
- if (!sev_es_guest(vcpu->kvm))
+ if (!is_sev_es_guest(vcpu))
return NULL;
/*
@@ -5146,7 +5147,7 @@ struct vmcb_save_area *sev_decrypt_vmsa(struct kvm_vcpu *vcpu)
sev = to_kvm_sev_info(vcpu->kvm);
/* Check if the SEV policy allows debugging */
- if (sev_snp_guest(vcpu->kvm)) {
+ if (is_sev_snp_guest(vcpu)) {
if (!(sev->policy & SNP_POLICY_MASK_DEBUG))
return NULL;
} else {
@@ -5154,7 +5155,7 @@ struct vmcb_save_area *sev_decrypt_vmsa(struct kvm_vcpu *vcpu)
return NULL;
}
- if (sev_snp_guest(vcpu->kvm)) {
+ if (is_sev_snp_guest(vcpu)) {
struct sev_data_snp_dbg dbg = {0};
vmsa = snp_alloc_firmware_page(__GFP_ZERO);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8f8bc863e214..0a1acc21b133 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -241,7 +241,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
* Never intercept #GP for SEV guests, KVM can't
* decrypt guest memory to workaround the erratum.
*/
- if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
+ if (svm_gp_erratum_intercept && !is_sev_guest(vcpu))
set_exception_intercept(svm, GP_VECTOR);
}
}
@@ -283,7 +283,7 @@ static int __svm_skip_emulated_instruction(struct kvm_vcpu *vcpu,
* SEV-ES does not expose the next RIP. The RIP update is controlled by
* the type of exit and the #VC handler in the guest.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
goto done;
if (nrips && svm->vmcb->control.next_rip != 0) {
@@ -720,7 +720,7 @@ static void svm_recalc_lbr_msr_intercepts(struct kvm_vcpu *vcpu)
svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW, intercept);
svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW, intercept);
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
svm_set_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW, intercept);
svm->lbr_msrs_intercepted = intercept;
@@ -830,7 +830,7 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
svm_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP, MSR_TYPE_RW, !shstk_enabled);
}
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
sev_es_recalc_msr_intercepts(vcpu);
svm_recalc_pmu_msr_intercepts(vcpu);
@@ -865,7 +865,7 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
static void __svm_disable_lbrv(struct kvm_vcpu *vcpu)
{
- KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
+ KVM_BUG_ON(is_sev_es_guest(vcpu), vcpu->kvm);
to_svm(vcpu)->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
}
@@ -1207,7 +1207,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
if (vcpu->kvm->arch.bus_lock_detection_enabled)
svm_set_intercept(svm, INTERCEPT_BUSLOCK);
- if (sev_guest(vcpu->kvm))
+ if (is_sev_guest(vcpu))
sev_init_vmcb(svm, init_event);
svm_hv_init_vmcb(vmcb);
@@ -1381,7 +1381,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu);
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
sev_es_unmap_ghcb(svm);
if (svm->guest_state_loaded)
@@ -1392,7 +1392,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
* or subsequent vmload of host save area.
*/
vmsave(sd->save_area_pa);
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
sev_es_prepare_switch_to_guest(svm, sev_es_host_save_area(sd));
if (tsc_scaling)
@@ -1405,7 +1405,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
* all CPUs support TSC_AUX virtualization).
*/
if (likely(tsc_aux_uret_slot >= 0) &&
- (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
+ (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !is_sev_es_guest(vcpu)))
kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE) &&
@@ -1472,7 +1472,7 @@ static bool svm_get_if_flag(struct kvm_vcpu *vcpu)
{
struct vmcb *vmcb = to_svm(vcpu)->vmcb;
- return sev_es_guest(vcpu->kvm)
+ return is_sev_es_guest(vcpu)
? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK
: kvm_get_rflags(vcpu) & X86_EFLAGS_IF;
}
@@ -1706,7 +1706,7 @@ static void sev_post_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
* contents of the VMSA, and future VMCB save area updates won't be
* seen.
*/
- if (sev_es_guest(vcpu->kvm)) {
+ if (is_sev_es_guest(vcpu)) {
svm->vmcb->save.cr3 = cr3;
vmcb_mark_dirty(svm->vmcb, VMCB_CR);
}
@@ -1761,7 +1761,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
* SEV-ES guests must always keep the CR intercepts cleared. CR
* tracking is done using the CR write traps.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return;
if (hcr0 == cr0) {
@@ -1872,7 +1872,7 @@ static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- if (WARN_ON_ONCE(sev_es_guest(vcpu->kvm)))
+ if (WARN_ON_ONCE(is_sev_es_guest(vcpu)))
return;
get_debugreg(vcpu->arch.db[0], 0);
@@ -1951,7 +1951,7 @@ static int npf_interception(struct kvm_vcpu *vcpu)
}
}
- if (sev_snp_guest(vcpu->kvm) && (error_code & PFERR_GUEST_ENC_MASK))
+ if (is_sev_snp_guest(vcpu) && (error_code & PFERR_GUEST_ENC_MASK))
error_code |= PFERR_PRIVATE_ACCESS;
trace_kvm_page_fault(vcpu, gpa, error_code);
@@ -2096,7 +2096,7 @@ static int shutdown_interception(struct kvm_vcpu *vcpu)
* The VM save area for SEV-ES guests has already been encrypted so it
* cannot be reinitialized, i.e. synthesizing INIT is futile.
*/
- if (!sev_es_guest(vcpu->kvm)) {
+ if (!is_sev_es_guest(vcpu)) {
clear_page(svm->vmcb);
#ifdef CONFIG_KVM_SMM
if (is_smm(vcpu))
@@ -2123,7 +2123,7 @@ static int io_interception(struct kvm_vcpu *vcpu)
size = (io_info & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT;
if (string) {
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return sev_es_string_io(svm, size, port, in);
else
return kvm_emulate_instruction(vcpu, 0);
@@ -2455,13 +2455,13 @@ static int task_switch_interception(struct kvm_vcpu *vcpu)
static void svm_clr_iret_intercept(struct vcpu_svm *svm)
{
- if (!sev_es_guest(svm->vcpu.kvm))
+ if (!is_sev_es_guest(&svm->vcpu))
svm_clr_intercept(svm, INTERCEPT_IRET);
}
static void svm_set_iret_intercept(struct vcpu_svm *svm)
{
- if (!sev_es_guest(svm->vcpu.kvm))
+ if (!is_sev_es_guest(&svm->vcpu))
svm_set_intercept(svm, INTERCEPT_IRET);
}
@@ -2469,7 +2469,7 @@ static int iret_interception(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- WARN_ON_ONCE(sev_es_guest(vcpu->kvm));
+ WARN_ON_ONCE(is_sev_es_guest(vcpu));
++vcpu->stat.nmi_window_exits;
svm->awaiting_iret_completion = true;
@@ -2643,7 +2643,7 @@ static int dr_interception(struct kvm_vcpu *vcpu)
* SEV-ES intercepts DR7 only to disable guest debugging and the guest issues a VMGEXIT
* for DR7 write only. KVM cannot change DR7 (always swapped as type 'A') so return early.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return 1;
if (vcpu->guest_debug == 0) {
@@ -2725,7 +2725,7 @@ static int svm_get_feature_msr(u32 msr, u64 *data)
static bool sev_es_prevent_msr_access(struct kvm_vcpu *vcpu,
struct msr_data *msr_info)
{
- return sev_es_guest(vcpu->kvm) && vcpu->arch.guest_state_protected &&
+ return is_sev_es_guest(vcpu) && vcpu->arch.guest_state_protected &&
msr_info->index != MSR_IA32_XSS &&
!msr_write_intercepted(vcpu, msr_info->index);
}
@@ -2861,7 +2861,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
{
struct vcpu_svm *svm = to_svm(vcpu);
- if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->sev_es.ghcb))
+ if (!err || !is_sev_es_guest(vcpu) || WARN_ON_ONCE(!svm->sev_es.ghcb))
return kvm_complete_insn_gp(vcpu, err);
svm_vmgexit_inject_exception(svm, X86_TRAP_GP);
@@ -3042,7 +3042,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
* required in this case because TSC_AUX is restored on #VMEXIT
* from the host save area.
*/
- if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && sev_es_guest(vcpu->kvm))
+ if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && is_sev_es_guest(vcpu))
break;
/*
@@ -3156,7 +3156,7 @@ static int pause_interception(struct kvm_vcpu *vcpu)
* vcpu->arch.preempted_in_kernel can never be true. Just
* set in_kernel to false as well.
*/
- in_kernel = !sev_es_guest(vcpu->kvm) && svm_get_cpl(vcpu) == 0;
+ in_kernel = !is_sev_es_guest(vcpu) && svm_get_cpl(vcpu) == 0;
grow_ple_window(vcpu);
@@ -3321,9 +3321,9 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
guard(mutex)(&vmcb_dump_mutex);
- vm_type = sev_snp_guest(vcpu->kvm) ? "SEV-SNP" :
- sev_es_guest(vcpu->kvm) ? "SEV-ES" :
- sev_guest(vcpu->kvm) ? "SEV" : "SVM";
+ vm_type = is_sev_snp_guest(vcpu) ? "SEV-SNP" :
+ is_sev_es_guest(vcpu) ? "SEV-ES" :
+ is_sev_guest(vcpu) ? "SEV" : "SVM";
pr_err("%s vCPU%u VMCB %p, last attempted VMRUN on CPU %d\n",
vm_type, vcpu->vcpu_id, svm->current_vmcb->ptr, vcpu->arch.last_vmentry_cpu);
@@ -3368,7 +3368,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-20s%016llx\n", "allowed_sev_features:", control->allowed_sev_features);
pr_err("%-20s%016llx\n", "guest_sev_features:", control->guest_sev_features);
- if (sev_es_guest(vcpu->kvm)) {
+ if (is_sev_es_guest(vcpu)) {
save = sev_decrypt_vmsa(vcpu);
if (!save)
goto no_vmsa;
@@ -3451,7 +3451,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
"excp_from:", save->last_excp_from,
"excp_to:", save->last_excp_to);
- if (sev_es_guest(vcpu->kvm)) {
+ if (is_sev_es_guest(vcpu)) {
struct sev_es_save_area *vmsa = (struct sev_es_save_area *)save;
pr_err("%-15s %016llx\n",
@@ -3512,7 +3512,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
}
no_vmsa:
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
sev_free_decrypted_vmsa(vcpu, save);
}
@@ -3601,7 +3601,7 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
struct kvm_run *kvm_run = vcpu->run;
/* SEV-ES guests must use the CR write traps to track CR registers. */
- if (!sev_es_guest(vcpu->kvm)) {
+ if (!is_sev_es_guest(vcpu)) {
if (!svm_is_intercept(svm, INTERCEPT_CR0_WRITE))
vcpu->arch.cr0 = svm->vmcb->save.cr0;
if (npt_enabled)
@@ -3653,7 +3653,7 @@ static int pre_svm_run(struct kvm_vcpu *vcpu)
svm->current_vmcb->cpu = vcpu->cpu;
}
- if (sev_guest(vcpu->kvm))
+ if (is_sev_guest(vcpu))
return pre_sev_run(svm, vcpu->cpu);
/* FIXME: handle wraparound of asid_generation */
@@ -3796,7 +3796,7 @@ static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
* SEV-ES guests must always keep the CR intercepts cleared. CR
* tracking is done using the CR write traps.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return;
if (nested_svm_virtualize_tpr(vcpu))
@@ -3985,7 +3985,7 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
* ignores SEV-ES guest writes to EFER.SVME *and* CLGI/STGI are not
* supported NAEs in the GHCB protocol.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return;
if (!gif_set(svm)) {
@@ -4273,7 +4273,7 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
amd_clear_divider();
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
__svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted,
sev_es_host_save_area(sd));
else
@@ -4374,7 +4374,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
x86_spec_ctrl_restore_host(svm->virt_spec_ctrl);
- if (!sev_es_guest(vcpu->kvm)) {
+ if (!is_sev_es_guest(vcpu)) {
vcpu->arch.cr2 = svm->vmcb->save.cr2;
vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
@@ -4524,7 +4524,7 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
if (guest_cpuid_is_intel_compatible(vcpu))
guest_cpu_cap_clear(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
- if (sev_guest(vcpu->kvm))
+ if (is_sev_guest(vcpu))
sev_vcpu_after_set_cpuid(svm);
}
@@ -4920,7 +4920,7 @@ static int svm_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
return X86EMUL_UNHANDLEABLE_VECTORING;
/* Emulation is always possible when KVM has access to all guest state. */
- if (!sev_guest(vcpu->kvm))
+ if (!is_sev_guest(vcpu))
return X86EMUL_CONTINUE;
/* #UD and #GP should never be intercepted for SEV guests. */
@@ -4932,7 +4932,7 @@ static int svm_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type,
* Emulation is impossible for SEV-ES guests as KVM doesn't have access
* to guest register state.
*/
- if (sev_es_guest(vcpu->kvm))
+ if (is_sev_es_guest(vcpu))
return X86EMUL_RETRY_INSTR;
/*
@@ -5069,7 +5069,7 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
{
- if (!sev_es_guest(vcpu->kvm))
+ if (!is_sev_es_guest(vcpu))
return kvm_vcpu_deliver_sipi_vector(vcpu, vector);
sev_vcpu_deliver_sipi_vector(vcpu, vector);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ebd7b36b1ceb..121138901fd6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -388,10 +388,27 @@ static __always_inline bool sev_snp_guest(struct kvm *kvm)
return (sev->vmsa_features & SVM_SEV_FEAT_SNP_ACTIVE) &&
!WARN_ON_ONCE(!sev_es_guest(kvm));
}
+
+static __always_inline bool is_sev_guest(struct kvm_vcpu *vcpu)
+{
+ return sev_guest(vcpu->kvm);
+}
+static __always_inline bool is_sev_es_guest(struct kvm_vcpu *vcpu)
+{
+ return sev_es_guest(vcpu->kvm);
+}
+
+static __always_inline bool is_sev_snp_guest(struct kvm_vcpu *vcpu)
+{
+ return sev_snp_guest(vcpu->kvm);
+}
#else
#define sev_guest(kvm) false
#define sev_es_guest(kvm) false
#define sev_snp_guest(kvm) false
+#define is_sev_guest(vcpu) false
+#define is_sev_es_guest(vcpu) false
+#define is_sev_snp_guest(vcpu) false
#endif
static inline bool ghcb_gpa_is_registered(struct vcpu_svm *svm, u64 val)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 08/21] KVM: SEV: Add quad-underscore version of VM-scoped APIs to detect SEV+ guests
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (6 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 07/21] KVM: SEV: Provide vCPU-scoped accessors for detecting SEV+ guests Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 09/21] KVM: SEV: Document the SEV-ES check when querying SMM support as "safe" Sean Christopherson
` (13 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Add "unsafe" quad-underscore versions of the SEV+ guest detectors in
anticipation of hardening the APIs via lockdep assertions. This will allow
adding exceptions for usage that is known to be safe in advance of the
lockdep assertions.
Use a pile of underscores to try to communicate that use of the "unsafe"
variants shouldn't be done lightly.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.h | 28 +++++++++++++++++++++-------
1 file changed, 21 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 121138901fd6..5f8977eec874 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -370,37 +370,51 @@ static __always_inline struct kvm_sev_info *to_kvm_sev_info(struct kvm *kvm)
}
#ifdef CONFIG_KVM_AMD_SEV
-static __always_inline bool sev_guest(struct kvm *kvm)
+static __always_inline bool ____sev_guest(struct kvm *kvm)
{
return to_kvm_sev_info(kvm)->active;
}
-static __always_inline bool sev_es_guest(struct kvm *kvm)
+static __always_inline bool ____sev_es_guest(struct kvm *kvm)
{
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return sev->es_active && !WARN_ON_ONCE(!sev->active);
}
-static __always_inline bool sev_snp_guest(struct kvm *kvm)
+static __always_inline bool ____sev_snp_guest(struct kvm *kvm)
{
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
return (sev->vmsa_features & SVM_SEV_FEAT_SNP_ACTIVE) &&
- !WARN_ON_ONCE(!sev_es_guest(kvm));
+ !WARN_ON_ONCE(!____sev_es_guest(kvm));
+}
+
+static __always_inline bool sev_guest(struct kvm *kvm)
+{
+ return ____sev_guest(kvm);
+}
+static __always_inline bool sev_es_guest(struct kvm *kvm)
+{
+ return ____sev_es_guest(kvm);
+}
+
+static __always_inline bool sev_snp_guest(struct kvm *kvm)
+{
+ return ____sev_snp_guest(kvm);
}
static __always_inline bool is_sev_guest(struct kvm_vcpu *vcpu)
{
- return sev_guest(vcpu->kvm);
+ return ____sev_guest(vcpu->kvm);
}
static __always_inline bool is_sev_es_guest(struct kvm_vcpu *vcpu)
{
- return sev_es_guest(vcpu->kvm);
+ return ____sev_es_guest(vcpu->kvm);
}
static __always_inline bool is_sev_snp_guest(struct kvm_vcpu *vcpu)
{
- return sev_snp_guest(vcpu->kvm);
+ return ____sev_snp_guest(vcpu->kvm);
}
#else
#define sev_guest(kvm) false
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 09/21] KVM: SEV: Document the SEV-ES check when querying SMM support as "safe"
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (7 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 08/21] KVM: SEV: Add quad-underscore version of VM-scoped APIs to detect " Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c Sean Christopherson
` (12 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Use the "unsafe" API to check for an SEV-ES+ guest when determining whether
or not SMBASE is a supported MSR, i.e. whether or not emulated SMM is
supported. This will eventually allow adding lockdep assertions to the
APIs for detecting SEV+ VMs without triggering "real" false positives.
While svm_has_emulated_msr() doesn't hold kvm->lock, i.e. can get both
false positives *and* false negatives, both are completely fine, as the
only time the result isn't stable is when userspace is the sole consumer
of the result. I.e. userspace can confuse itself, but that's it.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0a1acc21b133..bd0c497c6040 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4487,9 +4487,17 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
case MSR_IA32_SMBASE:
if (!IS_ENABLED(CONFIG_KVM_SMM))
return false;
- /* SEV-ES guests do not support SMM, so report false */
- if (kvm && sev_es_guest(kvm))
+
+#ifdef CONFIG_KVM_AMD_SEV
+ /*
+ * KVM can't access register state to emulate SMM for SEV-ES
+ * guests. Consuming stale data here is "fine", as KVM only
+ * checks for MSR_IA32_SMBASE support without a vCPU when
+ * userspace is querying KVM_CAP_X86_SMM.
+ */
+ if (kvm && ____sev_es_guest(kvm))
return false;
+#endif
break;
default:
break;
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (8 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 09/21] KVM: SEV: Document the SEV-ES check when querying SMM support as "safe" Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-17 10:33 ` Alexander Potapenko
2026-03-10 23:48 ` [PATCH 11/21] KVM: SEV: Move SEV-specific VM initialization " Sean Christopherson
` (11 subsequent siblings)
21 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Now that all external usage of the VM-scoped APIs to detect SEV+ guests is
gone, drop the stubs provided for CONFIG_KVM_AMD_SEV=n builds and bury the
"standard" APIs in sev.c.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 14 ++++++++++++++
arch/x86/kvm/svm/svm.h | 17 -----------------
2 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 35033dc79390..0c5b47272335 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -107,6 +107,20 @@ static unsigned int nr_asids;
static unsigned long *sev_asid_bitmap;
static unsigned long *sev_reclaim_asid_bitmap;
+static bool sev_guest(struct kvm *kvm)
+{
+ return ____sev_guest(kvm);
+}
+static bool sev_es_guest(struct kvm *kvm)
+{
+ return ____sev_es_guest(kvm);
+}
+
+static bool sev_snp_guest(struct kvm *kvm)
+{
+ return ____sev_snp_guest(kvm);
+}
+
static int snp_decommission_context(struct kvm *kvm);
struct enc_region {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5f8977eec874..1b27d20f3022 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -389,20 +389,6 @@ static __always_inline bool ____sev_snp_guest(struct kvm *kvm)
!WARN_ON_ONCE(!____sev_es_guest(kvm));
}
-static __always_inline bool sev_guest(struct kvm *kvm)
-{
- return ____sev_guest(kvm);
-}
-static __always_inline bool sev_es_guest(struct kvm *kvm)
-{
- return ____sev_es_guest(kvm);
-}
-
-static __always_inline bool sev_snp_guest(struct kvm *kvm)
-{
- return ____sev_snp_guest(kvm);
-}
-
static __always_inline bool is_sev_guest(struct kvm_vcpu *vcpu)
{
return ____sev_guest(vcpu->kvm);
@@ -417,9 +403,6 @@ static __always_inline bool is_sev_snp_guest(struct kvm_vcpu *vcpu)
return ____sev_snp_guest(vcpu->kvm);
}
#else
-#define sev_guest(kvm) false
-#define sev_es_guest(kvm) false
-#define sev_snp_guest(kvm) false
#define is_sev_guest(vcpu) false
#define is_sev_es_guest(vcpu) false
#define is_sev_snp_guest(vcpu) false
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 11/21] KVM: SEV: Move SEV-specific VM initialization to sev.c
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (9 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 12/21] KVM: SEV: WARN on unhandled VM type when initializing VM Sean Christopherson
` (10 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Move SEV+ VM initialization to sev.c (as sev_vm_init()) so that
kvm_sev_info (and all usage) can be gated on CONFIG_KVM_AMD_SEV=y without
needing more #ifdefs. As a bonus, isolating the logic will make it easier
to harden the flow, e.g. to WARN if the vm_type is unknown.
No functional change intended (SEV, SEV_ES, and SNP VM types are only
supported if CONFIG_KVM_AMD_SEV=y).
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 15 +++++++++++++++
arch/x86/kvm/svm/svm.c | 12 +-----------
arch/x86/kvm/svm/svm.h | 2 ++
3 files changed, 18 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 0c5b47272335..e5dae76d2986 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2928,6 +2928,21 @@ static int snp_decommission_context(struct kvm *kvm)
return 0;
}
+void sev_vm_init(struct kvm *kvm)
+{
+ int type = kvm->arch.vm_type;
+
+ if (type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM)
+ return;
+
+ kvm->arch.has_protected_state = (type == KVM_X86_SEV_ES_VM ||
+ type == KVM_X86_SNP_VM);
+ to_kvm_sev_info(kvm)->need_init = true;
+
+ kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
+ kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+}
+
void sev_vm_destroy(struct kvm *kvm)
{
struct kvm_sev_info *sev = to_kvm_sev_info(kvm);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index bd0c497c6040..780acd454913 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5093,17 +5093,7 @@ static void svm_vm_destroy(struct kvm *kvm)
static int svm_vm_init(struct kvm *kvm)
{
- int type = kvm->arch.vm_type;
-
- if (type != KVM_X86_DEFAULT_VM &&
- type != KVM_X86_SW_PROTECTED_VM) {
- kvm->arch.has_protected_state =
- (type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
- to_kvm_sev_info(kvm)->need_init = true;
-
- kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
- kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
- }
+ sev_vm_init(kvm);
if (!pause_filter_count || !pause_filter_thresh)
kvm_disable_exits(kvm, KVM_X86_DISABLE_EXITS_PAUSE);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 1b27d20f3022..7f28445766b6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -899,6 +899,7 @@ static inline struct page *snp_safe_alloc_page(void)
int sev_vcpu_create(struct kvm_vcpu *vcpu);
void sev_free_vcpu(struct kvm_vcpu *vcpu);
+void sev_vm_init(struct kvm *kvm);
void sev_vm_destroy(struct kvm *kvm);
void __init sev_set_cpu_caps(void);
void __init sev_hardware_setup(void);
@@ -925,6 +926,7 @@ static inline struct page *snp_safe_alloc_page(void)
static inline int sev_vcpu_create(struct kvm_vcpu *vcpu) { return 0; }
static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
+static inline void sev_vm_init(struct kvm *kvm) {}
static inline void sev_vm_destroy(struct kvm *kvm) {}
static inline void __init sev_set_cpu_caps(void) {}
static inline void __init sev_hardware_setup(void) {}
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 12/21] KVM: SEV: WARN on unhandled VM type when initializing VM
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (10 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 11/21] KVM: SEV: Move SEV-specific VM initialization " Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 13/21] KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y Sean Christopherson
` (9 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
WARN if KVM encounters an unhandled VM type when setting up flags for SEV+
VMs, e.g. to guard against adding a new flavor of SEV without adding proper
recognition in sev_vm_init().
Practically speaking, no functional change intended (the new "default" case
should be unreachable).
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e5dae76d2986..9a907bc057d0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2930,17 +2930,24 @@ static int snp_decommission_context(struct kvm *kvm)
void sev_vm_init(struct kvm *kvm)
{
- int type = kvm->arch.vm_type;
-
- if (type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM)
- return;
-
- kvm->arch.has_protected_state = (type == KVM_X86_SEV_ES_VM ||
- type == KVM_X86_SNP_VM);
- to_kvm_sev_info(kvm)->need_init = true;
-
- kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
- kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+ switch (kvm->arch.vm_type) {
+ case KVM_X86_DEFAULT_VM:
+ case KVM_X86_SW_PROTECTED_VM:
+ break;
+ case KVM_X86_SNP_VM:
+ kvm->arch.has_private_mem = true;
+ fallthrough;
+ case KVM_X86_SEV_ES_VM:
+ kvm->arch.has_protected_state = true;
+ fallthrough;
+ case KVM_X86_SEV_VM:
+ kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+ to_kvm_sev_info(kvm)->need_init = true;
+ break;
+ default:
+ WARN_ONCE(1, "Unsupported VM type %lu", kvm->arch.vm_type);
+ break;
+ }
}
void sev_vm_destroy(struct kvm *kvm)
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 13/21] KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (11 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 12/21] KVM: SEV: WARN on unhandled VM type when initializing VM Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 14/21] KVM: SEV: Document that checking for SEV+ guests when reclaiming memory is "safe" Sean Christopherson
` (8 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Bury "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y to make it harder
for SEV specific code to sneak into common SVM code.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 2 ++
arch/x86/kvm/svm/svm.h | 6 +++++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 780acd454913..e6691c044913 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4215,8 +4215,10 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
{
+#ifdef CONFIG_KVM_AMD_SEV
if (to_kvm_sev_info(vcpu->kvm)->need_init)
return -EINVAL;
+#endif
return 1;
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7f28445766b6..58c08ed0819a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -92,6 +92,7 @@ enum {
/* TPR and CR2 are always written before VMRUN */
#define VMCB_ALWAYS_DIRTY_MASK ((1U << VMCB_INTR) | (1U << VMCB_CR2))
+#ifdef CONFIG_KVM_AMD_SEV
struct kvm_sev_info {
bool active; /* SEV enabled guest */
bool es_active; /* SEV-ES enabled guest */
@@ -117,6 +118,7 @@ struct kvm_sev_info {
cpumask_var_t have_run_cpus; /* CPUs that have done VMRUN for this VM. */
bool snp_certs_enabled; /* SNP certificate-fetching support. */
};
+#endif
struct kvm_svm {
struct kvm kvm;
@@ -127,7 +129,9 @@ struct kvm_svm {
u64 *avic_physical_id_table;
struct hlist_node hnode;
+#ifdef CONFIG_KVM_AMD_SEV
struct kvm_sev_info sev_info;
+#endif
};
struct kvm_vcpu;
@@ -364,12 +368,12 @@ static __always_inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
return container_of(kvm, struct kvm_svm, kvm);
}
+#ifdef CONFIG_KVM_AMD_SEV
static __always_inline struct kvm_sev_info *to_kvm_sev_info(struct kvm *kvm)
{
return &to_kvm_svm(kvm)->sev_info;
}
-#ifdef CONFIG_KVM_AMD_SEV
static __always_inline bool ____sev_guest(struct kvm *kvm)
{
return to_kvm_sev_info(kvm)->active;
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 14/21] KVM: SEV: Document that checking for SEV+ guests when reclaiming memory is "safe"
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (12 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 13/21] KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 15/21] KVM: SEV: Assert that kvm->lock is held when querying SEV+ support Sean Christopherson
` (7 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Document that the check for an SEV+ guest when reclaiming guest memory is
safe even though kvm->lock isn't held. This will allow asserting that
kvm->lock is held in the SEV accessors, without triggering false positives
on the "safe" cases.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9a907bc057d0..7bb9318f0703 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3296,8 +3296,14 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
* With SNP+gmem, private/encrypted memory is unreachable via the
* hva-based mmu notifiers, i.e. these events are explicitly scoped to
* shared pages, where there's no need to flush caches.
+ *
+ * Checking for SEV+ outside of kvm->lock is safe as __sev_guest_init()
+ * can only be done before vCPUs are created, caches can be incoherent
+ * if and only if a vCPU was run, and either this task will see the VM
+ * as being SEV+ or the vCPU won't be able to access the memory (because
+ * the in-progress invalidation).
*/
- if (!sev_guest(kvm) || sev_snp_guest(kvm))
+ if (!____sev_guest(kvm) || ____sev_snp_guest(kvm))
return;
sev_writeback_caches(kvm);
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 15/21] KVM: SEV: Assert that kvm->lock is held when querying SEV+ support
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (13 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 14/21] KVM: SEV: Document that checking for SEV+ guests when reclaiming memory is "safe" Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 16/21] KVM: SEV: use mutex guard in snp_launch_update() Sean Christopherson
` (6 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Assert that kvm->lock is held when checking if a VM is an SEV+ VM, as KVM
sets *and* resets the relevant flags when initializing SEV state, i.e.
it's extremely easy to end up with TOCTOU bugs if kvm->lock isn't held.
Add waivers for a VM being torn down (refcount is '0') and for there being
a loaded vCPU, with comments for both explaining why they're safe.
Note, the "vCPU loaded" waiver is necessary to avoid splats on the SNP
checks in sev_gmem_prepare() and sev_gmem_max_mapping_level(), which are
currently called when handling nested page faults. Alternatively, those
checks could key off KVM_X86_SNP_VM, as kvm_arch.vm_type is stable early
in VM creation. Prioritize consistency, at least for now, and leave a
"reminder" that the max mapping level code in particular likely needs
special attention if/when KVM supports dirty logging for SNP guests.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7bb9318f0703..cbb5886304fa 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -107,17 +107,42 @@ static unsigned int nr_asids;
static unsigned long *sev_asid_bitmap;
static unsigned long *sev_reclaim_asid_bitmap;
+static __always_inline void kvm_lockdep_assert_sev_lock_held(struct kvm *kvm)
+{
+#ifdef CONFIG_PROVE_LOCKING
+ /*
+ * Querying SEV+ support is safe if there are no other references, i.e.
+ * if concurrent initialization of SEV+ is impossible.
+ */
+ if (!refcount_read(&kvm->users_count))
+ return;
+
+ /*
+ * Querying SEV+ support from vCPU context is always safe, as vCPUs can
+ * only be created after SEV+ is initialized (and KVM disallows all SEV
+ * sub-ioctls while vCPU creation is in-progress).
+ */
+ if (kvm_get_running_vcpu())
+ return;
+
+ lockdep_assert_held(&kvm->lock);
+#endif
+}
+
static bool sev_guest(struct kvm *kvm)
{
+ kvm_lockdep_assert_sev_lock_held(kvm);
return ____sev_guest(kvm);
}
static bool sev_es_guest(struct kvm *kvm)
{
+ kvm_lockdep_assert_sev_lock_held(kvm);
return ____sev_es_guest(kvm);
}
static bool sev_snp_guest(struct kvm *kvm)
{
+ kvm_lockdep_assert_sev_lock_held(kvm);
return ____sev_snp_guest(kvm);
}
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 16/21] KVM: SEV: use mutex guard in snp_launch_update()
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (14 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 15/21] KVM: SEV: Assert that kvm->lock is held when querying SEV+ support Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 17/21] KVM: SEV: use mutex guard in sev_mem_enc_ioctl() Sean Christopherson
` (5 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
From: Carlos López <clopez@suse.de>
Simplify the error paths in snp_launch_update() by using a mutex guard,
allowing early return instead of using gotos.
Signed-off-by: Carlos López <clopez@suse.de>
Link: https://patch.msgid.link/20260120201013.3931334-4-clopez@suse.de
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 31 ++++++++++++-------------------
1 file changed, 12 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index cbb5886304fa..b559d7141ae9 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2405,7 +2405,6 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
struct kvm_memory_slot *memslot;
long npages, count;
void __user *src;
- int ret = 0;
if (!sev_snp_guest(kvm) || !sev->snp_context)
return -EINVAL;
@@ -2450,13 +2449,11 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
* initial expected state and better guard against unexpected
* situations.
*/
- mutex_lock(&kvm->slots_lock);
+ guard(mutex)(&kvm->slots_lock);
memslot = gfn_to_memslot(kvm, params.gfn_start);
- if (!kvm_slot_has_gmem(memslot)) {
- ret = -EINVAL;
- goto out;
- }
+ if (!kvm_slot_has_gmem(memslot))
+ return -EINVAL;
sev_populate_args.sev_fd = argp->sev_fd;
sev_populate_args.type = params.type;
@@ -2467,22 +2464,18 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
argp->error = sev_populate_args.fw_error;
pr_debug("%s: kvm_gmem_populate failed, ret %ld (fw_error %d)\n",
__func__, count, argp->error);
- ret = -EIO;
- } else {
- params.gfn_start += count;
- params.len -= count * PAGE_SIZE;
- if (params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO)
- params.uaddr += count * PAGE_SIZE;
-
- ret = 0;
- if (copy_to_user(u64_to_user_ptr(argp->data), &params, sizeof(params)))
- ret = -EFAULT;
+ return -EIO;
}
-out:
- mutex_unlock(&kvm->slots_lock);
+ params.gfn_start += count;
+ params.len -= count * PAGE_SIZE;
+ if (params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO)
+ params.uaddr += count * PAGE_SIZE;
- return ret;
+ if (copy_to_user(u64_to_user_ptr(argp->data), &params, sizeof(params)))
+ return -EFAULT;
+
+ return 0;
}
static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 17/21] KVM: SEV: use mutex guard in sev_mem_enc_ioctl()
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (15 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 16/21] KVM: SEV: use mutex guard in snp_launch_update() Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 18/21] KVM: SEV: use mutex guard in sev_mem_enc_unregister_region() Sean Christopherson
` (4 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
From: Carlos López <clopez@suse.de>
Simplify the error paths in sev_mem_enc_ioctl() by using a mutex guard,
allowing early return instead of using gotos.
Signed-off-by: Carlos López <clopez@suse.de>
Link: https://patch.msgid.link/20260120201013.3931334-5-clopez@suse.de
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 25 ++++++++-----------------
1 file changed, 8 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b559d7141ae9..d71241e8de95 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2637,30 +2637,24 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
if (copy_from_user(&sev_cmd, argp, sizeof(struct kvm_sev_cmd)))
return -EFAULT;
- mutex_lock(&kvm->lock);
+ guard(mutex)(&kvm->lock);
/* Only the enc_context_owner handles some memory enc operations. */
if (is_mirroring_enc_context(kvm) &&
- !is_cmd_allowed_from_mirror(sev_cmd.id)) {
- r = -EINVAL;
- goto out;
- }
+ !is_cmd_allowed_from_mirror(sev_cmd.id))
+ return -EINVAL;
/*
* Once KVM_SEV_INIT2 initializes a KVM instance as an SNP guest, only
* allow the use of SNP-specific commands.
*/
- if (sev_snp_guest(kvm) && sev_cmd.id < KVM_SEV_SNP_LAUNCH_START) {
- r = -EPERM;
- goto out;
- }
+ if (sev_snp_guest(kvm) && sev_cmd.id < KVM_SEV_SNP_LAUNCH_START)
+ return -EPERM;
switch (sev_cmd.id) {
case KVM_SEV_ES_INIT:
- if (!sev_es_enabled) {
- r = -ENOTTY;
- goto out;
- }
+ if (!sev_es_enabled)
+ return -ENOTTY;
fallthrough;
case KVM_SEV_INIT:
r = sev_guest_init(kvm, &sev_cmd);
@@ -2732,15 +2726,12 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
r = snp_enable_certs(kvm);
break;
default:
- r = -EINVAL;
- goto out;
+ return -EINVAL;
}
if (copy_to_user(argp, &sev_cmd, sizeof(struct kvm_sev_cmd)))
r = -EFAULT;
-out:
- mutex_unlock(&kvm->lock);
return r;
}
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 18/21] KVM: SEV: use mutex guard in sev_mem_enc_unregister_region()
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (16 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 17/21] KVM: SEV: use mutex guard in sev_mem_enc_ioctl() Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 19/21] KVM: SEV: use mutex guard in snp_handle_guest_req() Sean Christopherson
` (3 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
From: Carlos López <clopez@suse.de>
Simplify the error paths in sev_mem_enc_unregister_region() by using a
mutex guard, allowing early return instead of using gotos.
Signed-off-by: Carlos López <clopez@suse.de>
Link: https://patch.msgid.link/20260120201013.3931334-7-clopez@suse.de
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index d71241e8de95..61347d8508f2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2814,35 +2814,25 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
struct kvm_enc_region *range)
{
struct enc_region *region;
- int ret;
/* If kvm is mirroring encryption context it isn't responsible for it */
if (is_mirroring_enc_context(kvm))
return -EINVAL;
- mutex_lock(&kvm->lock);
+ guard(mutex)(&kvm->lock);
- if (!sev_guest(kvm)) {
- ret = -ENOTTY;
- goto failed;
- }
+ if (!sev_guest(kvm))
+ return -ENOTTY;
region = find_enc_region(kvm, range);
- if (!region) {
- ret = -EINVAL;
- goto failed;
- }
+ if (!region)
+ return -EINVAL;
sev_writeback_caches(kvm);
__unregister_enc_region_locked(kvm, region);
- mutex_unlock(&kvm->lock);
return 0;
-
-failed:
- mutex_unlock(&kvm->lock);
- return ret;
}
int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 19/21] KVM: SEV: use mutex guard in snp_handle_guest_req()
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (17 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 18/21] KVM: SEV: use mutex guard in sev_mem_enc_unregister_region() Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 20/21] KVM: SVM: Move lock-protected allocation of SEV ASID into a separate helper Sean Christopherson
` (2 subsequent siblings)
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
From: Carlos López <clopez@suse.de>
Simplify the error paths in snp_handle_guest_req() by using a mutex
guard, allowing early return instead of using gotos.
Signed-off-by: Carlos López <clopez@suse.de>
Link: https://patch.msgid.link/20260120201013.3931334-8-clopez@suse.de
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 23 ++++++++---------------
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 61347d8508f2..36a33e8ade4d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4174,12 +4174,10 @@ static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_
if (!is_sev_snp_guest(&svm->vcpu))
return -EINVAL;
- mutex_lock(&sev->guest_req_mutex);
+ guard(mutex)(&sev->guest_req_mutex);
- if (kvm_read_guest(kvm, req_gpa, sev->guest_req_buf, PAGE_SIZE)) {
- ret = -EIO;
- goto out_unlock;
- }
+ if (kvm_read_guest(kvm, req_gpa, sev->guest_req_buf, PAGE_SIZE))
+ return -EIO;
data.gctx_paddr = __psp_pa(sev->snp_context);
data.req_paddr = __psp_pa(sev->guest_req_buf);
@@ -4192,21 +4190,16 @@ static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_
*/
ret = sev_issue_cmd(kvm, SEV_CMD_SNP_GUEST_REQUEST, &data, &fw_err);
if (ret && !fw_err)
- goto out_unlock;
+ return ret;
- if (kvm_write_guest(kvm, resp_gpa, sev->guest_resp_buf, PAGE_SIZE)) {
- ret = -EIO;
- goto out_unlock;
- }
+ if (kvm_write_guest(kvm, resp_gpa, sev->guest_resp_buf, PAGE_SIZE))
+ return -EIO;
/* No action is requested *from KVM* if there was a firmware error. */
svm_vmgexit_no_action(svm, SNP_GUEST_ERR(0, fw_err));
- ret = 1; /* resume guest */
-
-out_unlock:
- mutex_unlock(&sev->guest_req_mutex);
- return ret;
+ /* resume guest */
+ return 1;
}
static int snp_req_certs_err(struct vcpu_svm *svm, u32 vmm_error)
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 20/21] KVM: SVM: Move lock-protected allocation of SEV ASID into a separate helper
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (18 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 19/21] KVM: SEV: use mutex guard in snp_handle_guest_req() Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-10 23:48 ` [PATCH 21/21] KVM: SEV: Goto an existing error label if charging misc_cg for an ASID fails Sean Christopherson
2026-03-11 14:29 ` [PATCH 00/21] Fixes and lock cleanup+hardening Jethro Beekman
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
From: Carlos López <clopez@suse.de>
Extract the lock-protected parts of SEV ASID allocation into a new helper
and opportunistically convert it to use guard() when acquiring the mutex.
Preserve the goto even though it's a little odd, as there's a fair
amount of subtlety that makes it surprisingly difficult to replicate the
functionality with a loop construct, and arguably using goto yields the
most readable code.
No functional change intended.
Signed-off-by: Carlos López <clopez@suse.de>
[sean: move code to separate helper, rework shortlog+changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 37 +++++++++++++++++++++++--------------
1 file changed, 23 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 36a33e8ade4d..c35eb9e30993 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -237,6 +237,28 @@ static void sev_misc_cg_uncharge(struct kvm_sev_info *sev)
misc_cg_uncharge(type, sev->misc_cg, 1);
}
+static unsigned int sev_alloc_asid(unsigned int min_asid, unsigned int max_asid)
+{
+ unsigned int asid;
+ bool retry = true;
+
+ guard(mutex)(&sev_bitmap_lock);
+
+again:
+ asid = find_next_zero_bit(sev_asid_bitmap, max_asid + 1, min_asid);
+ if (asid > max_asid) {
+ if (retry && __sev_recycle_asids(min_asid, max_asid)) {
+ retry = false;
+ goto again;
+ }
+
+ return asid;
+ }
+
+ __set_bit(asid, sev_asid_bitmap);
+ return asid;
+}
+
static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
{
/*
@@ -244,7 +266,6 @@ static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
* SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
*/
unsigned int min_asid, max_asid, asid;
- bool retry = true;
int ret;
if (vm_type == KVM_X86_SNP_VM) {
@@ -277,24 +298,12 @@ static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
return ret;
}
- mutex_lock(&sev_bitmap_lock);
-
-again:
- asid = find_next_zero_bit(sev_asid_bitmap, max_asid + 1, min_asid);
+ asid = sev_alloc_asid(min_asid, max_asid);
if (asid > max_asid) {
- if (retry && __sev_recycle_asids(min_asid, max_asid)) {
- retry = false;
- goto again;
- }
- mutex_unlock(&sev_bitmap_lock);
ret = -EBUSY;
goto e_uncharge;
}
- __set_bit(asid, sev_asid_bitmap);
-
- mutex_unlock(&sev_bitmap_lock);
-
sev->asid = asid;
return 0;
e_uncharge:
--
2.53.0.473.g4a7958ca14-goog
* [PATCH 21/21] KVM: SEV: Goto an existing error label if charging misc_cg for an ASID fails
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (19 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 20/21] KVM: SVM: Move lock-protected allocation of SEV ASID into a separate helper Sean Christopherson
@ 2026-03-10 23:48 ` Sean Christopherson
2026-03-11 14:29 ` [PATCH 00/21] Fixes and lock cleanup+hardening Jethro Beekman
21 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-10 23:48 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Jethro Beekman, Alexander Potapenko,
Carlos López
Dedup a small amount of cleanup code in SEV ASID allocation by reusing
an existing error label.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/sev.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c35eb9e30993..32d7f329f92c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -289,14 +289,11 @@ static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
if (min_asid > max_asid)
return -ENOTTY;
- WARN_ON(sev->misc_cg);
+ WARN_ON_ONCE(sev->misc_cg);
sev->misc_cg = get_current_misc_cg();
ret = sev_misc_cg_try_charge(sev);
- if (ret) {
- put_misc_cg(sev->misc_cg);
- sev->misc_cg = NULL;
- return ret;
- }
+ if (ret)
+ goto e_put_cg;
asid = sev_alloc_asid(min_asid, max_asid);
if (asid > max_asid) {
@@ -306,8 +303,10 @@ static int sev_asid_new(struct kvm_sev_info *sev, unsigned long vm_type)
sev->asid = asid;
return 0;
+
e_uncharge:
sev_misc_cg_uncharge(sev);
+e_put_cg:
put_misc_cg(sev->misc_cg);
sev->misc_cg = NULL;
return ret;
--
2.53.0.473.g4a7958ca14-goog
* Re: [PATCH 00/21] Fixes and lock cleanup+hardening
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
` (20 preceding siblings ...)
2026-03-10 23:48 ` [PATCH 21/21] KVM: SEV: Goto an existing error label if charging misc_cg for an ASID fails Sean Christopherson
@ 2026-03-11 14:29 ` Jethro Beekman
2026-03-12 16:03 ` Sean Christopherson
21 siblings, 1 reply; 25+ messages in thread
From: Jethro Beekman @ 2026-03-11 14:29 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Alexander Potapenko, Carlos López
On 2026-03-11 00:48, Sean Christopherson wrote:
> Fix several fatal SEV bugs, then clean up the SEV+ APIs to either document
> that they are safe to query outside of kvm->lock, or to use lockdep-protected
> version. The sev_mem_enc_register_region() goof is at least the second bug
> we've had related to checking for an SEV guest outside of kvm->lock, and in
> general it's nearly impossible to just "eyeball" the safety of KVM's usage.
>
> I included Carlos' guard() cleanups here to avoid annoying conflicts (well,
> to solve them now instead of when applying).
I wrote a bunch of tests (see below) to check the kernel can properly handle bad userspace flows. I haven't had the chance to test them with your patch set.
test_vcpu_hotplug() triggers dump_vmcb()
test_snp_launch_finish_after_finish() triggers the OOPS I wrote about earlier.
All other tests seem to be handled (somewhat) gracefully already.
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index 77256c89bb8d..93339420b281 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -160,6 +160,155 @@ static void test_sev(void *guest_code, uint32_t type, uint64_t policy)
kvm_vm_free(vm);
}
+static void test_vcpu_hotplug(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vcpu *vcpu2;
+ struct kvm_mp_state mps;
+ struct kvm_vm *vm;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ vm_sev_launch(vm, snp_default_policy(), NULL);
+
+ vcpu2 = vm_vcpu_add(vm, 1, guest_snp_code);
+ mps.mp_state = KVM_MP_STATE_RUNNABLE;
+ vcpu_ioctl(vcpu2, KVM_SET_MP_STATE, &mps);
+ vcpu_run(vcpu2);
+ printf("test_vcpu_hotplug/vcpu_run returns %s\n", exit_reason_str(vcpu2->run->exit_reason));
+
+ kvm_vm_free(vm);
+}
+
+static int try_snp_vm_launch_start(struct kvm_vm *vm, uint64_t policy)
+{
+ struct kvm_sev_snp_launch_start launch_start = {
+ .policy = policy,
+ };
+
+ if (__vm_sev_ioctl(vm, KVM_SEV_SNP_LAUNCH_START, &launch_start) != 0)
+ return errno;
+
+ return 0;
+}
+
+static int try_snp_vm_launch_update(struct kvm_vm *vm)
+{
+ struct userspace_mem_region *region;
+ int ctr;
+
+ hash_for_each(vm->regions.slot_hash, ctr, region, slot_node)
+ {
+ const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
+ const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+ const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
+ sparsebit_idx_t i, j;
+
+ if (!sparsebit_any_set(protected_phy_pages))
+ continue;
+
+ sparsebit_for_each_set_range(protected_phy_pages, i, j) {
+ const uint64_t size = (j - i + 1) * vm->page_size;
+ const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
+
+ vm_mem_set_private(vm, gpa_base + offset, size);
+
+ struct kvm_sev_snp_launch_update update_data = {
+ .uaddr = (uint64_t)addr_gpa2hva(vm, gpa_base + offset),
+ .gfn_start = (gpa_base + offset) >> PAGE_SHIFT,
+ .len = size,
+ .type = KVM_SEV_SNP_PAGE_TYPE_NORMAL,
+ };
+
+ if (__vm_sev_ioctl(vm, KVM_SEV_SNP_LAUNCH_UPDATE, &update_data) != 0)
+ return errno;
+ }
+ }
+
+ return 0;
+}
+
+static int try_snp_vm_launch_finish(struct kvm_vm *vm)
+{
+ struct kvm_sev_snp_launch_finish launch_finish = { 0 };
+
+ if (__vm_sev_ioctl(vm, KVM_SEV_SNP_LAUNCH_FINISH, &launch_finish) != 0)
+ return errno;
+
+ return 0;
+}
+
+static void test_snp_launch_update_before_start(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ TEST_ASSERT_EQ(try_snp_vm_launch_update(vm), EINVAL);
+
+ kvm_vm_free(vm);
+}
+
+static void test_snp_launch_finish_before_start(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ TEST_ASSERT_EQ(try_snp_vm_launch_finish(vm), EINVAL);
+
+ kvm_vm_free(vm);
+}
+
+static void test_snp_launch_start_after_start(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ snp_vm_launch_start(vm, snp_default_policy());
+ TEST_ASSERT_EQ(try_snp_vm_launch_start(vm, snp_default_policy()), EINVAL);
+
+ kvm_vm_free(vm);
+}
+
+static void test_snp_launch_start_after_finish(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ vm_sev_launch(vm, snp_default_policy(), NULL);
+ TEST_ASSERT_EQ(try_snp_vm_launch_start(vm, snp_default_policy()), EINVAL);
+
+ kvm_vm_free(vm);
+}
+
+static void test_snp_launch_update_after_finish(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ TEST_ASSERT_EQ(try_snp_vm_launch_start(vm, snp_default_policy()), 0);
+ TEST_ASSERT_EQ(try_snp_vm_launch_finish(vm), 0);
+ TEST_ASSERT_EQ(try_snp_vm_launch_update(vm), EIO);
+
+ kvm_vm_free(vm);
+}
+
+static void test_snp_launch_finish_after_finish(void) {
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, guest_snp_code, &vcpu);
+
+ vm_sev_launch(vm, snp_default_policy(), NULL);
+ TEST_ASSERT_EQ(try_snp_vm_launch_finish(vm), EINVAL);
+
+ kvm_vm_free(vm);
+}
+
static void guest_shutdown_code(void)
{
struct desc_ptr idt;
@@ -217,13 +366,29 @@ int main(int argc, char *argv[])
{
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SEV));
- test_sev_smoke(guest_sev_code, KVM_X86_SEV_VM, 0);
+ if (!kvm_cpu_has(X86_FEATURE_SEV_SNP)) {
+ test_sev_smoke(guest_sev_code, KVM_X86_SEV_VM, 0);
- if (kvm_cpu_has(X86_FEATURE_SEV_ES))
- test_sev_smoke(guest_sev_es_code, KVM_X86_SEV_ES_VM, SEV_POLICY_ES);
-
- if (kvm_cpu_has(X86_FEATURE_SEV_SNP))
+ if (kvm_cpu_has(X86_FEATURE_SEV_ES))
+ test_sev_smoke(guest_sev_es_code, KVM_X86_SEV_ES_VM, SEV_POLICY_ES);
+ } else {
test_sev_smoke(guest_snp_code, KVM_X86_SNP_VM, snp_default_policy());
+ system("logger starting test test_vcpu_hotplug");
+ test_vcpu_hotplug();
+ system("logger starting test test_snp_launch_update_before_start");
+ test_snp_launch_update_before_start();
+ system("logger starting test test_snp_launch_finish_before_start");
+ test_snp_launch_finish_before_start();
+ system("logger starting test test_snp_launch_start_after_start");
+ test_snp_launch_start_after_start();
+ system("logger starting test test_snp_launch_start_after_finish");
+ test_snp_launch_start_after_finish();
+ system("logger starting test test_snp_launch_update_after_finish");
+ test_snp_launch_update_after_finish();
+ system("logger starting test test_snp_launch_finish_after_finish");
+ test_snp_launch_finish_after_finish();
+ system("logger all tests done");
+ }
return 0;
}
* Re: [PATCH 00/21] Fixes and lock cleanup+hardening
2026-03-11 14:29 ` [PATCH 00/21] Fixes and lock cleanup+hardening Jethro Beekman
@ 2026-03-12 16:03 ` Sean Christopherson
0 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2026-03-12 16:03 UTC (permalink / raw)
To: Jethro Beekman
Cc: Paolo Bonzini, kvm, linux-kernel, Alexander Potapenko,
Carlos López
On Wed, Mar 11, 2026, Jethro Beekman wrote:
> On 2026-03-11 00:48, Sean Christopherson wrote:
> > Fix several fatal SEV bugs, then clean up the SEV+ APIs to either document
> > that they are safe to query outside of kvm->lock, or to use lockdep-protected
> > version. The sev_mem_enc_register_region() goof is at least the second bug
> > we've had related to checking for an SEV guest outside of kvm->lock, and in
> > general it's nearly impossible to just "eyeball" the safety of KVM's usage.
> >
> > I included Carlos' guard() cleanups here to avoid annoying conflicts (well,
> > to solve them now instead of when applying).
>
> I wrote a bunch of tests (see below) to check the kernel can properly handle bad userspace flows. I haven't had the chance to test them with your patch set.
>
> test_vcpu_hotplug() triggers dump_vmcb()
FWIW, this is a non-issue, especially since SEV-ES+ guests can effectively fuzz
the VMSA at will.
* Re: [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c
2026-03-10 23:48 ` [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c Sean Christopherson
@ 2026-03-17 10:33 ` Alexander Potapenko
0 siblings, 0 replies; 25+ messages in thread
From: Alexander Potapenko @ 2026-03-17 10:33 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Jethro Beekman,
Carlos López
> -static __always_inline bool sev_guest(struct kvm *kvm)
> -{
> - return ____sev_guest(kvm);
> -}
> -static __always_inline bool sev_es_guest(struct kvm *kvm)
> -{
> - return ____sev_es_guest(kvm);
> -}
arch/x86/kvm/svm/avic.c is still using sev_es_guest(), isn't it?
end of thread, other threads:[~2026-03-17 10:34 UTC | newest]
Thread overview: 25+ messages
2026-03-10 23:48 [PATCH 00/21] Fixes and lock cleanup+hardening Sean Christopherson
2026-03-10 23:48 ` [PATCH 01/21] KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test Sean Christopherson
2026-03-10 23:48 ` [PATCH 02/21] KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU Sean Christopherson
2026-03-10 23:48 ` [PATCH 03/21] KVM: SEV: Protect *all* of sev_mem_enc_register_region() with kvm->lock Sean Christopherson
2026-03-10 23:48 ` [PATCH 04/21] KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created Sean Christopherson
2026-03-10 23:48 ` [PATCH 05/21] KVM: SEV: Lock all vCPUs when synchronzing VMSAs for SNP launch finish Sean Christopherson
2026-03-10 23:48 ` [PATCH 06/21] KVM: SEV: Lock all vCPUs for the duration of SEV-ES VMSA synchronization Sean Christopherson
2026-03-10 23:48 ` [PATCH 07/21] KVM: SEV: Provide vCPU-scoped accessors for detecting SEV+ guests Sean Christopherson
2026-03-10 23:48 ` [PATCH 08/21] KVM: SEV: Add quad-underscore version of VM-scoped APIs to detect " Sean Christopherson
2026-03-10 23:48 ` [PATCH 09/21] KVM: SEV: Document the SEV-ES check when querying SMM support as "safe" Sean Christopherson
2026-03-10 23:48 ` [PATCH 10/21] KVM: SEV: Move standard VM-scoped helpers to detect SEV+ guests to sev.c Sean Christopherson
2026-03-17 10:33 ` Alexander Potapenko
2026-03-10 23:48 ` [PATCH 11/21] KVM: SEV: Move SEV-specific VM initialization " Sean Christopherson
2026-03-10 23:48 ` [PATCH 12/21] KVM: SEV: WARN on unhandled VM type when initializing VM Sean Christopherson
2026-03-10 23:48 ` [PATCH 13/21] KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y Sean Christopherson
2026-03-10 23:48 ` [PATCH 14/21] KVM: SEV: Document that checking for SEV+ guests when reclaiming memory is "safe" Sean Christopherson
2026-03-10 23:48 ` [PATCH 15/21] KVM: SEV: Assert that kvm->lock is held when querying SEV+ support Sean Christopherson
2026-03-10 23:48 ` [PATCH 16/21] KVM: SEV: use mutex guard in snp_launch_update() Sean Christopherson
2026-03-10 23:48 ` [PATCH 17/21] KVM: SEV: use mutex guard in sev_mem_enc_ioctl() Sean Christopherson
2026-03-10 23:48 ` [PATCH 18/21] KVM: SEV: use mutex guard in sev_mem_enc_unregister_region() Sean Christopherson
2026-03-10 23:48 ` [PATCH 19/21] KVM: SEV: use mutex guard in snp_handle_guest_req() Sean Christopherson
2026-03-10 23:48 ` [PATCH 20/21] KVM: SVM: Move lock-protected allocation of SEV ASID into a separate helper Sean Christopherson
2026-03-10 23:48 ` [PATCH 21/21] KVM: SEV: Goto an existing error label if charging misc_cg for an ASID fails Sean Christopherson
2026-03-11 14:29 ` [PATCH 00/21] Fixes and lock cleanup+hardening Jethro Beekman
2026-03-12 16:03 ` Sean Christopherson