From: Sean Christopherson <seanjc@google.com>
To: Vitaly Kuznetsov <vkuznets@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/5] KVM: x86/hyperv: Ensure vCPU's Hyper-V object is initialized on cross-vCPU accesses
Date: Thu, 23 Apr 2026 07:08:31 -0700 [thread overview]
Message-ID: <20260423140833.439512-4-seanjc@google.com> (raw)
In-Reply-To: <20260423140833.439512-1-seanjc@google.com>
When initializing a vCPU's Hyper-V object, ensure the object is fully
initialized prior to exposing it through the vCPU, and ensure accesses from
other tasks (e.g. other vCPUs) see the fully initialized object if
vcpu->arch.hyperv is non-NULL.
Lack of ordering manifests as a lockdep splat due to attempting to lock a
TLB flush FIFO before the spinlock is initialized.
INFO: trying to register non-static key.
The code is fine but needs lockdep annotation, or maybe
you didn't initialize this object before use?
turning off the locking correctness validator.
CPU: 1 PID: 5005 Comm: syz-executor189 Not tainted 6.6.120-smp-DEV #1
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
[<ffffffff810dd10c>] dump_stack_lvl+0xcc/0x130 lib/dump_stack.c:106
[<ffffffff8192bddd>] assign_lock_key+0x1fd/0x230 kernel/locking/lockdep.c:977
[<ffffffff8191cb97>] register_lock_class+0x187/0x7a0 kernel/locking/lockdep.c:1291
[<ffffffff8191e7a9>] __lock_acquire+0x179/0x7650 kernel/locking/lockdep.c:5016
[<ffffffff8191e28f>] lock_acquire+0x13f/0x3d0 kernel/locking/lockdep.c:5756
[<ffffffff8101a65b>] __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
[<ffffffff8101a65b>] _raw_spin_lock+0x2b/0x40 kernel/locking/spinlock.c:154
[<ffffffff81319d44>] spin_lock include/linux/spinlock.h:351 [inline]
[<ffffffff81319d44>] hv_tlb_flush_enqueue+0xb4/0x270 arch/x86/kvm/hyperv.c:1946
[<ffffffff813160c6>] kvm_hv_flush_tlb+0xa96/0x1dc0 arch/x86/kvm/hyperv.c:2145
[<ffffffff8131438b>] kvm_hv_hypercall+0x103b/0x1fe0 arch/x86/kvm/hyperv.c:-1
[<ffffffff8133bff3>] __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6624 [inline]
[<ffffffff8133bff3>] vmx_handle_exit+0x12e3/0x21f0 arch/x86/kvm/vmx/vmx.c:6641
[<ffffffff81215d11>] vcpu_enter_guest arch/x86/kvm/x86.c:11649 [inline]
[<ffffffff81215d11>] vcpu_run+0x4d01/0x79c0 arch/x86/kvm/x86.c:11832
[<ffffffff8120fe39>] kvm_arch_vcpu_ioctl_run+0xb49/0x1c80 arch/x86/kvm/x86.c:12179
[<ffffffff8119cd60>] kvm_vcpu_ioctl+0xc80/0xff0 virt/kvm/kvm_main.c:6029
[<ffffffff8226fefd>] vfs_ioctl fs/ioctl.c:52 [inline]
[<ffffffff8226fefd>] __do_sys_ioctl fs/ioctl.c:872 [inline]
[<ffffffff8226fefd>] __se_sys_ioctl+0xfd/0x170 fs/ioctl.c:858
[<ffffffff85ac97d9>] do_syscall_x64 arch/x86/entry/common.c:52 [inline]
[<ffffffff85ac97d9>] do_syscall_64+0x69/0xb0 arch/x86/entry/common.c:93
[<ffffffff85c000d0>] entry_SYSCALL_64_after_hwframe+0x68/0xd2
</TASK>
Use the "safe" variant in all paths that are known to access the Hyper-V
object, as detected by an upcoming lockdep assertion.
Fixes: 0823570f0198 ("KVM: x86: hyper-v: Introduce TLB flush fifo")
Fixes: fc08b628d7c9 ("KVM: x86: hyper-v: Allocate Hyper-V context lazily")
Reported-by: syzbot+5b32c49cd8f005e65654@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/hyperv.c | 23 ++++++++++++++++++-----
arch/x86/kvm/hyperv.h | 16 ++++++++++++++--
2 files changed, 32 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 3cf8b3cdfc1c..92a715d06d92 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -206,13 +206,19 @@ static struct kvm_vcpu *get_vcpu_by_vpidx(struct kvm *kvm, u32 vpidx)
static struct kvm_vcpu_hv_synic *synic_get(struct kvm *kvm, u32 vpidx)
{
- struct kvm_vcpu *vcpu;
struct kvm_vcpu_hv_synic *synic;
+ struct kvm_vcpu_hv *hv_vcpu;
+ struct kvm_vcpu *vcpu;
vcpu = get_vcpu_by_vpidx(kvm, vpidx);
- if (!vcpu || !to_hv_vcpu(vcpu))
+ if (!vcpu)
return NULL;
- synic = to_hv_synic(vcpu);
+
+ hv_vcpu = to_hv_vcpu_safe(vcpu);
+ if (!hv_vcpu)
+ return NULL;
+
+ synic = &hv_vcpu->synic;
return (synic->active) ? synic : NULL;
}
@@ -972,7 +978,6 @@ int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
if (!hv_vcpu)
return -ENOMEM;
- vcpu->arch.hyperv = hv_vcpu;
hv_vcpu->vcpu = vcpu;
synic_init(&hv_vcpu->synic);
@@ -988,6 +993,14 @@ int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
spin_lock_init(&hv_vcpu->tlb_flush_fifo[i].write_lock);
}
+ /*
+ * Ensure the structure is fully initialized before it's visible to
+ * other tasks, as much of the state can be legally accessed without
+ * holding vcpu->mutex.
+ *
+ * Pairs with the smp_load_acquire() in to_hv_vcpu_safe().
+ */
+ smp_store_release(&vcpu->arch.hyperv, hv_vcpu);
return 0;
}
@@ -2159,7 +2172,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
bitmap_zero(vcpu_mask, KVM_MAX_VCPUS);
kvm_for_each_vcpu(i, v, kvm) {
- hv_v = to_hv_vcpu(v);
+ hv_v = to_hv_vcpu_safe(v);
/*
* The following check races with nested vCPUs entering/exiting
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 53534e1004bb..ca5366341110 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -61,6 +61,18 @@ static inline struct kvm_hv *to_kvm_hv(struct kvm *kvm)
return &kvm->arch.hyperv;
}
+static inline struct kvm_vcpu_hv *to_hv_vcpu_safe(struct kvm_vcpu *vcpu)
+{
+ /*
+ * Ensure the HyperV structure is fully initialized when accessing it
+ * without holding vcpu->mutex (or some other guarantee that KVM can't
+ * concurrently instantiate the structure).
+ *
+ * Pairs with the smp_store_release() in kvm_hv_vcpu_init().
+ */
+ return smp_load_acquire(&vcpu->arch.hyperv);
+}
+
static inline struct kvm_vcpu_hv *to_hv_vcpu(struct kvm_vcpu *vcpu)
{
return vcpu->arch.hyperv;
@@ -87,7 +99,7 @@ static inline struct kvm_hv_syndbg *to_hv_syndbg(struct kvm_vcpu *vcpu)
static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
{
- struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu_safe(vcpu);
return hv_vcpu ? hv_vcpu->vp_index : vcpu->vcpu_idx;
}
@@ -197,7 +209,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
static inline struct kvm_vcpu_hv_tlb_flush_fifo *kvm_hv_get_tlb_flush_fifo(struct kvm_vcpu *vcpu,
bool is_guest_mode)
{
- struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu_safe(vcpu);
int i = is_guest_mode ? HV_L2_TLB_FLUSH_FIFO :
HV_L1_TLB_FLUSH_FIFO;
--
2.54.0.545.g6539524ca2-goog
Thread overview: 7+ messages
2026-04-23 14:08 [PATCH 0/5] KVM: x86/hyperv: Fix racy usage of vcpu->arch.hyperv Sean Christopherson
2026-04-23 14:08 ` [PATCH 1/5] KVM: x86/hyperv: Get target FIFO in hv_tlb_flush_enqueue(), not caller Sean Christopherson
2026-04-23 14:08 ` [PATCH 2/5] KVM: x86/hyperv: Check for NULL vCPU Hyper-V object in kvm_hv_get_tlb_flush_fifo() Sean Christopherson
2026-04-23 14:08 ` Sean Christopherson [this message]
2026-04-23 14:08 ` [PATCH 4/5] KVM: x86/hyperv: Assert vCPU's mutex is held in to_hv_vcpu() Sean Christopherson
2026-04-23 14:08 ` [PATCH 5/5] KVM: x86/hyperv: Use {READ,WRITE}_ONCE for cross-task synic->active accesses Sean Christopherson
2026-04-23 14:40 ` [PATCH 0/5] KVM: x86/hyperv: Fix racy usage of vcpu->arch.hyperv Sean Christopherson