linux-rdma.vger.kernel.org archive mirror
From: Leon Romanovsky <leon@kernel.org>
To: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Zhu Yanjun <yanjun.zhu@linux.dev>, Jason Gunthorpe <jgg@ziepe.ca>,
	linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2] rdma_rxe: call comp_handler without holding cq->cq_lock
Date: Mon, 8 Sep 2025 17:24:57 +0300	[thread overview]
Message-ID: <20250908142457.GA341237@unreal> (raw)
In-Reply-To: <20250822081941.989520-1-philipp.reisner@linbit.com>

On Fri, Aug 22, 2025 at 10:19:41AM +0200, Philipp Reisner wrote:
> Allow a comp_handler callback implementation to call ib_poll_cq().
> With the rdma_rxe driver, a call to ib_poll_cq() ends up in
> rxe_poll_cq(), which takes cq->cq_lock, and that leads to a spinlock
> deadlock.

Can you please be more specific about the deadlock?
Please write out the call stack to describe it.
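
The call chain in question is presumably the one sketched below: a
minimal consumer completion handler that polls the CQ from its
comp_handler (needs <rdma/ib_verbs.h>; the handler and helper names are
illustrative, not taken from this thread):

	/* Hypothetical consumer callback registered as ibcq->comp_handler. */
	static void example_comp_handler(struct ib_cq *ibcq, void *cq_context)
	{
		struct ib_wc wc;

		/*
		 * Before this patch, rxe_cq_post() invokes this handler while
		 * holding cq->cq_lock.  ib_poll_cq() maps to rxe_poll_cq(),
		 * which takes cq->cq_lock again, i.e.:
		 *
		 *   rxe_cq_post()
		 *     spin_lock_irqsave(&cq->cq_lock, ...)
		 *     cq->ibcq.comp_handler()        -> example_comp_handler()
		 *       ib_poll_cq()                 -> rxe_poll_cq()
		 *         spin_lock_irqsave(&cq->cq_lock, ...)  <-- self-deadlock
		 */
		while (ib_poll_cq(ibcq, 1, &wc) > 0)
			example_process_wc(&wc);	/* hypothetical helper */

		/* Re-arm the CQ for the next completion event. */
		ib_req_notify_cq(ibcq, IB_CQ_NEXT_COMP);
	}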

> 
> The Mellanox and Intel drivers allow a comp_handler callback
> implementation to call ib_poll_cq().
> 
> Avoid the deadlock by calling the comp_handler callback without
> holding cq->cq_lock.
> 
> Changelog:
> v1: https://lore.kernel.org/all/20250806123921.633410-1-philipp.reisner@linbit.com/
> v1 -> v2:
> - Only reset cq->notify to 0 when invoking the comp_handler
> ====================
> 
> Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
> Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
> ---
>  drivers/infiniband/sw/rxe/rxe_cq.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
> index fffd144d509e..95652001665d 100644
> --- a/drivers/infiniband/sw/rxe/rxe_cq.c
> +++ b/drivers/infiniband/sw/rxe/rxe_cq.c
> @@ -88,6 +88,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
>  	int full;
>  	void *addr;
>  	unsigned long flags;
> +	bool invoke_handler = false;
>  
>  	spin_lock_irqsave(&cq->cq_lock, flags);
>  
> @@ -113,11 +114,14 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
>  	if ((cq->notify & IB_CQ_NEXT_COMP) ||
>  	    (cq->notify & IB_CQ_SOLICITED && solicited)) {
>  		cq->notify = 0;
> -		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
> +		invoke_handler = true;
>  	}
>  
>  	spin_unlock_irqrestore(&cq->cq_lock, flags);
>  
> +	if (invoke_handler)
> +		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
> +
>  	return 0;
>  }
>  
> -- 
> 2.50.1
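
Condensed, the patched flow is (a sketch assembled from the hunks
above; the queue-full check and the copy of the CQE into the queue are
elided, and the consumer handler would be one like the illustrative
example_comp_handler() sketched earlier):

	int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
	{
		unsigned long flags;
		bool invoke_handler = false;

		spin_lock_irqsave(&cq->cq_lock, flags);

		/* ... queue-full check and CQE copy elided ... */

		/* Latch the decision and clear the notify state under the lock. */
		if ((cq->notify & IB_CQ_NEXT_COMP) ||
		    (cq->notify & IB_CQ_SOLICITED && solicited)) {
			cq->notify = 0;
			invoke_handler = true;
		}

		spin_unlock_irqrestore(&cq->cq_lock, flags);

		/* The lock is no longer held, so the handler may call
		 * ib_poll_cq() (rxe_poll_cq()) without self-deadlocking
		 * on cq->cq_lock.
		 */
		if (invoke_handler)
			cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);

		return 0;
	}

Deferring the callback until after spin_unlock_irqrestore() is what
permits a consumer to poll from its comp_handler, matching the
behaviour the commit message attributes to the Mellanox and Intel
drivers.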

Thread overview: 8+ messages
2025-08-22  8:19 [PATCH V2] rdma_rxe: call comp_handler without holding cq->cq_lock Philipp Reisner
2025-09-08 14:24 ` Leon Romanovsky [this message]
2025-09-09 14:48   ` Philipp Reisner
2025-09-09 15:31     ` Jason Gunthorpe
2025-09-09 16:00       ` Philipp Reisner
2025-09-10 10:27         ` Leon Romanovsky
2025-09-24 13:21 ` Jason Gunthorpe
2025-09-25  6:02   ` Philipp Reisner
