From: "shaikh.kamal" <shaikhkamal2012@gmail.com>
To: "H. Peter Anvin" <hpa@zytor.com>, Paul Durrant <paul@xen.org>,
Sean Christopherson <seanjc@google.com>,
David Woodhouse <dwmw@amazon.co.uk>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-rt-devel@lists.linux.dev
Cc: pbonzini@redhat.com, skhan@linuxfoundation.org,
me@brighamcampbell.com,
syzbot+919877893c9d28162dc2@syzkaller.appspotmail.com,
"shaikh.kamal" <shaikhkamal2012@gmail.com>
Subject: [PATCH v2 1/1] KVM: x86/xen: Use trylock for fast path event channel delivery
Date: Thu, 2 Apr 2026 07:01:02 +0530
Message-ID: <20260402013102.21951-1-shaikhkamal2012@gmail.com>
In-Reply-To: <ac08V4TaM2yh9SY1@google.com>

kvm_xen_set_evtchn_fast() acquires gpc->lock with read_lock_irqsave(),
which becomes a sleeping lock on PREEMPT_RT. When the function is called
from hard IRQ context (e.g., the Xen timer's hrtimer callback), this
triggers:

  BUG: sleeping function called from invalid context
  in_hardirq(): 1, in_serving_softirq(): 0
  Call Trace:
   <IRQ>
   rt_spin_lock+0x70/0x130
   kvm_xen_set_evtchn_fast+0x20b/0xa40
   xen_timer_callback+0x91/0x1a0
   __run_hrtimer
   hrtimer_interrupt
The function uses read_lock_irqsave() to access two gpc structures:
shinfo_cache and vcpu_info_cache. On PREEMPT_RT, these rwlocks are
rt_mutex-based and cannot be acquired from hard IRQ context.
Use read_trylock() instead for both gpc lock acquisitions. If the
shinfo_cache lock is contended, return -EWOULDBLOCK to trigger the
existing slow path: xen_timer_callback() sets
vcpu->arch.xen.timer_pending, kicks the vCPU with KVM_REQ_UNBLOCK, and
the event is injected from process context via
kvm_xen_inject_timer_irqs(). If the vcpu_info_cache lock is contended
after the event has already been set in shared_info, fall back to the
in-kernel evtchn_pending_sel handling and kick the vCPU, just as is
already done when kvm_gpc_check() fails on that cache.
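
For context, the deferral side of that slow path already exists in
xen_timer_callback(); in rough, simplified form (a sketch of the flow
described above, not the verbatim upstream code, with the event-channel
setup elided into 'e'):

	rc = kvm_xen_set_evtchn_fast(&e, vcpu->kvm);
	if (rc != -EWOULDBLOCK)
		return HRTIMER_NORESTART;

	/* Fast path could not deliver: defer to process context. */
	atomic_inc(&vcpu->arch.xen.timer_pending);
	kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
	kvm_vcpu_kick(vcpu);
	return HRTIMER_NORESTART;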
This approach works on all kernels (RT and non-RT) and preserves the
"fast path" semantics: acquire the lock only if immediately available,
otherwise bail out rather than blocking.
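
The same acquire-if-available-or-defer pattern, as a stand-alone
user-space sketch (purely illustrative; the lock, flag, and helper names
below are made up for the example and are not KVM code):

/*
 * User-space analogy: a pthread rwlock stands in for gpc->lock and a
 * plain flag stands in for timer_pending + KVM_REQ_UNBLOCK.
 * Build: gcc -pthread demo.c
 */
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_int pending;	/* event deferred to the slow path */

/* "Fast path": deliver only if the lock is immediately available. */
static int deliver_fast(int port)
{
	if (pthread_rwlock_tryrdlock(&lock) != 0)
		return -EWOULDBLOCK;	/* contended: caller must defer */
	printf("fast path delivered port %d\n", port);
	pthread_rwlock_unlock(&lock);
	return 0;
}

/* "Slow path": runs in a context where blocking is allowed. */
static void deliver_slow(int port)
{
	pthread_rwlock_rdlock(&lock);	/* may block, which is fine here */
	printf("slow path delivered port %d\n", port);
	pthread_rwlock_unlock(&lock);
}

int main(void)
{
	if (deliver_fast(42) == -EWOULDBLOCK)
		atomic_store(&pending, 1);	/* ~ timer_pending + kick */
	if (atomic_exchange(&pending, 0))
		deliver_slow(42);		/* ~ deferred injection */
	return 0;
}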
Reported-by: syzbot+919877893c9d28162dc2@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=919877893c9d28162dc2
Fixes: 77c9b9dea4fb ("KVM: x86/xen: Use fast path for Xen timer delivery")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: shaikh.kamal <shaikhkamal2012@gmail.com>
---
arch/x86/kvm/xen.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index d6b2a665b499..479e8f23a9c4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1817,7 +1817,17 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
-	read_lock_irqsave(&gpc->lock, flags);
+	/*
+	 * Use trylock for the "fast" path. If the lock is contended,
+	 * return -EWOULDBLOCK to use the slow path which injects the
+	 * event from process context via timer_pending + KVM_REQ_UNBLOCK.
+	 */
+	local_irq_save(flags);
+	if (!read_trylock(&gpc->lock)) {
+		local_irq_restore(flags);
+		srcu_read_unlock(&kvm->srcu, idx);
+		return -EWOULDBLOCK;
+	}
 	if (!kvm_gpc_check(gpc, PAGE_SIZE))
 		goto out_rcu;
@@ -1848,10 +1858,22 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	} else {
 		rc = 1; /* Delivered to the bitmap in shared_info. */
 		/* Now switch to the vCPU's vcpu_info to set the index and pending_sel */
-		read_unlock_irqrestore(&gpc->lock, flags);
+		read_unlock(&gpc->lock);
+		local_irq_restore(flags);
 		gpc = &vcpu->arch.xen.vcpu_info_cache;
-		read_lock_irqsave(&gpc->lock, flags);
+		local_irq_save(flags);
+		if (!read_trylock(&gpc->lock)) {
+			/*
+			 * Lock contended. Set the in-kernel pending flag
+			 * and kick the vCPU to inject via the slow path.
+			 */
+			local_irq_restore(flags);
+			if (!test_and_set_bit(port_word_bit,
+					      &vcpu->arch.xen.evtchn_pending_sel))
+				kick_vcpu = true;
+			goto out_kick;
+		}
 		if (!kvm_gpc_check(gpc, sizeof(struct vcpu_info))) {
 			/*
 			 * Could not access the vcpu_info. Set the bit in-kernel
@@ -1885,7 +1907,10 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	}
  out_rcu:
-	read_unlock_irqrestore(&gpc->lock, flags);
+	read_unlock(&gpc->lock);
+	local_irq_restore(flags);
+
+ out_kick:
 	srcu_read_unlock(&kvm->srcu, idx);
 	if (kick_vcpu) {
--
2.43.0
Thread overview: 10+ messages
2026-03-29 13:15 [PATCH] KVM: x86/xen: Fix sleeping lock in hard IRQ context on PREEMPT_RT shaikh.kamal
2026-03-30 14:18 ` Steven Rostedt
2026-03-30 14:51 ` Woodhouse, David
2026-04-01 15:40 ` Sean Christopherson
2026-04-02 1:30 ` [PATCH v2 0/1] KVM: x86/xen: Fix PREEMPT_RT sleeping lock bug shaikh.kamal
2026-04-02 1:31 ` shaikh.kamal [this message]
2026-04-02 6:36 ` [PATCH v2 1/1] KVM: x86/xen: Use trylock for fast path event channel delivery Sebastian Andrzej Siewior
2026-04-02 22:40 ` Sean Christopherson
2026-04-02 6:42 ` [PATCH] KVM: x86/xen: Fix sleeping lock in hard IRQ context on PREEMPT_RT Sebastian Andrzej Siewior
2026-04-02 22:23 ` Sean Christopherson