From: Leon Romanovsky <leon@kernel.org>
To: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>, Zhu Yanjun <yanjun.zhu@linux.dev>,
linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2] rdma_rxe: call comp_handler without holding cq->cq_lock
Date: Wed, 10 Sep 2025 13:27:29 +0300
Message-ID: <20250910102729.GP341237@unreal>
In-Reply-To: <CADGDV=VZK4oXM=h4PzYOm_PJihMKdQUkrADOiw6EaC4kCssAcQ@mail.gmail.com>
On Tue, Sep 09, 2025 at 06:00:36PM +0200, Philipp Reisner wrote:
> On Tue, Sep 9, 2025 at 5:31 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> >
> > On Tue, Sep 09, 2025 at 04:48:19PM +0200, Philipp Reisner wrote:
> > > On Mon, Sep 8, 2025 at 4:25 PM Leon Romanovsky <leon@kernel.org> wrote:
> > > >
> > > > On Fri, Aug 22, 2025 at 10:19:41AM +0200, Philipp Reisner wrote:
> > > > > Allow the comp_handler callback implementation to call ib_poll_cq().
> > > > > With the rdma_rxe driver, ib_poll_cq() calls rxe_poll_cq(), which takes
> > > > > cq->cq_lock while rxe_cq_post() already holds it. That leads to a
> > > > > spinlock deadlock.
> > > >
> > > > Can you please be more specific about the deadlock?
> > > > Please write call stack to describe it.
> > > >
> > > Instead of a call stack, I will write the call chain from top to bottom:
> > >
> > > The line numbers in the .c files refer to Linux 6.16:
> > >
> > > 1 rxe_cq_post() [rxe_cq.c:85]
> > > 2 spin_lock_irqsave() [rxe_cq.c:93]
> > > 3 cq->ibcq.comp_handler() [rxe_cq.c:116]
> > > 4 some_comp_handler()
> > > 5 ib_poll_cq()
> > > 6 cq->device->ops.poll_cq() [ib_verbs.h:4037]
> > > 7 rxe_poll_cq() [rxe_verbs.c:1165]
> > > 8 spin_lock_irqsave() [rxe_verbs.c:1172]
> > >
> > > At line 8 of this call graph it deadlocks, because the same spinlock
> > > was already acquired at line 2 of the call graph.
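
For reference, here is a minimal model of the recursion described above. It
is only an illustration of the locking pattern, using made-up "toy" names;
the real code is the rxe_cq.c / rxe_verbs.c lines cited in the call graph.

#include <linux/spinlock.h>
#include <rdma/ib_verbs.h>

struct toy_cq {				/* stand-in for the driver's CQ struct */
	struct ib_cq ibcq;
	spinlock_t cq_lock;
};

/* call-graph lines 7-8: the poll path takes cq_lock */
static int toy_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
{
	struct toy_cq *cq = container_of(ibcq, struct toy_cq, ibcq);
	unsigned long flags;

	spin_lock_irqsave(&cq->cq_lock, flags); /* deadlocks when reached from toy_cq_post() */
	/* ... copy completions into wc ... */
	spin_unlock_irqrestore(&cq->cq_lock, flags);
	return 0;
}

/* call-graph lines 1-3: the post path invokes the handler under cq_lock */
static void toy_cq_post(struct toy_cq *cq)
{
	unsigned long flags;

	spin_lock_irqsave(&cq->cq_lock, flags);	/* first acquisition */
	/* ... queue the CQE ... */
	if (cq->ibcq.comp_handler)
		/* the handler may call ib_poll_cq() -> toy_poll_cq() */
		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
	spin_unlock_irqrestore(&cq->cq_lock, flags);
}
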
> >
> > Is this even legal in verbs? I'm not sure you can poll the CQ from an
> > interrupt-driven comp handler. Is something already doing this in-tree?
> >
>
> The file drivers/infiniband/sw/rdmavt/cq.c has this comment:
> /*
> * The completion handler will most likely rearm the notification
> * and poll for all pending entries. If a new completion entry
> * is added while we are in this routine, queue_work()
> * won't call us again until we return so we check triggered to
> * see if we need to call the handler again.
> */
>
> Also, Intel and Mellanox cards and drivers allow calling ib_poll_cq()
> from the completion handler.
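
For context, the consumer pattern that comment describes usually looks like
the sketch below: rearm the notification, then drain the CQ from the
completion handler. The handler name is made up; the calls are the standard
verbs API.

static void example_comp_handler(struct ib_cq *cq, void *cq_context)
{
	struct ib_wc wc;

	/* rearm first so a completion arriving while we poll re-triggers us */
	ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);

	/* drain all pending entries */
	while (ib_poll_cq(cq, 1, &wc) > 0) {
		/* ... handle wc.status / wc.opcode / wc.wr_id ... */
	}
}

With rxe as it stands, that ib_poll_cq() call is exactly what re-takes
cq->cq_lock in steps 5-8 of the call graph above.
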
And do these drivers drop the CQ lock like you are proposing here?
>
> The problem exists only with the RXE driver.
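
For completeness, one way to avoid the recursion is to make the notification
decision under the lock and invoke the handler only after dropping it. The
sketch below reuses the toy types from earlier and is only an assumption
about the general approach, not a quote of the posted patch.

static void toy_cq_post_fixed(struct toy_cq *cq)
{
	unsigned long flags;
	bool notify;

	spin_lock_irqsave(&cq->cq_lock, flags);
	/* ... queue the CQE ... */
	notify = cq->ibcq.comp_handler != NULL;	/* decide under the lock */
	spin_unlock_irqrestore(&cq->cq_lock, flags);

	if (notify)
		/* ib_poll_cq() from the handler can now take cq_lock safely */
		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
}
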