From: Sean Christopherson <seanjc@google.com>
To: Vitaly Kuznetsov <vkuznets@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/5] KVM: x86/hyperv: Get target FIFO in hv_tlb_flush_enqueue(), not caller
Date: Thu, 23 Apr 2026 07:08:29 -0700 [thread overview]
Message-ID: <20260423140833.439512-2-seanjc@google.com> (raw)
In-Reply-To: <20260423140833.439512-1-seanjc@google.com>

When handling Hyper-V PV TLB flushes, retrieve the to-be-used FIFO in
hv_tlb_flush_enqueue() instead of having the caller pass in the FIFO. This
will make it easier to fix a cross-vCPU race where KVM can access a vCPU's
FIFO before it's fully initialized.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/hyperv.c | 24 +++++++++---------------
1 file changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 9b140bbdc1d8..3b7e860bd8d4 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1930,16 +1930,18 @@ static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc
return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt, entries);
}
-static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
- struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo,
- u64 *entries, int count)
+static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, u64 *entries, int count,
+ bool is_guest_mode)
{
+ struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
u64 flush_all_entry = KVM_HV_TLB_FLUSHALL_ENTRY;
if (!hv_vcpu)
return;
+ tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(vcpu, is_guest_mode);
+
spin_lock(&tlb_flush_fifo->write_lock);
/*
@@ -2012,7 +2014,6 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
struct kvm *kvm = vcpu->kvm;
struct hv_tlb_flush_ex flush_ex;
struct hv_tlb_flush flush;
- struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
/*
* Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
* entries on the TLB flush fifo. The last entry, however, needs to be
@@ -2138,11 +2139,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* analyze it here, flush TLB regardless of the specified address space.
*/
if (all_cpus && !is_guest_mode(vcpu)) {
- kvm_for_each_vcpu(i, v, kvm) {
- tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
- hv_tlb_flush_enqueue(v, tlb_flush_fifo,
- tlb_flush_entries, hc->rep_cnt);
- }
+ kvm_for_each_vcpu(i, v, kvm)
+ hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt, false);
kvm_make_all_cpus_request(kvm, KVM_REQ_HV_TLB_FLUSH);
} else if (!is_guest_mode(vcpu)) {
@@ -2152,9 +2150,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
v = kvm_get_vcpu(kvm, i);
if (!v)
continue;
- tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, false);
- hv_tlb_flush_enqueue(v, tlb_flush_fifo,
- tlb_flush_entries, hc->rep_cnt);
+ hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt, false);
}
kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
@@ -2185,9 +2181,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
continue;
__set_bit(i, vcpu_mask);
- tlb_flush_fifo = kvm_hv_get_tlb_flush_fifo(v, true);
- hv_tlb_flush_enqueue(v, tlb_flush_fifo,
- tlb_flush_entries, hc->rep_cnt);
+ hv_tlb_flush_enqueue(v, tlb_flush_entries, hc->rep_cnt, true);
}
kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH, vcpu_mask);
--
2.54.0.545.g6539524ca2-goog
Thread overview: 9+ messages
2026-04-23 14:08 [PATCH 0/5] KVM: x86/hyperv: Fix racy usage of vcpu->arch.hyperv Sean Christopherson
2026-04-23 14:08 ` Sean Christopherson [this message]
2026-04-23 14:08 ` [PATCH 2/5] KVM: x86/hyperv: Check for NULL vCPU Hyper-V object in kvm_hv_get_tlb_flush_fifo() Sean Christopherson
2026-04-23 14:08 ` [PATCH 3/5] KVM: x86/hyperv: Ensure vCPU's Hyper-V object is initialized on cross-vCPU accesses Sean Christopherson
2026-04-23 14:08 ` [PATCH 4/5] KVM: x86/hyperv: Assert vCPU's mutex is held in to_hv_vcpu() Sean Christopherson
2026-04-23 14:08 ` [PATCH 5/5] KVM: x86/hyperv: Use {READ,WRITE}_ONCE for cross-task synic->active accesses Sean Christopherson
2026-04-23 14:40 ` [PATCH 0/5] KVM: x86/hyperv: Fix racy usage of vcpu->arch.hyperv Sean Christopherson
2026-04-23 20:52 ` [syzbot ci] " syzbot ci
2026-04-23 21:40 ` Sean Christopherson