public inbox for linux-rdma@vger.kernel.org
From: Jason Gunthorpe <jgg@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>, Shay Drory <shayd@nvidia.com>,
	linux-rdma@vger.kernel.org
Subject: Re: [PATCH rdma-next] RDMA/restrack: Delay QP deletion till all users are gone
Date: Tue, 20 Apr 2021 09:39:50 -0300
Message-ID: <20210420123950.GA2138447@nvidia.com>
In-Reply-To: <9ba5a611ceac86774d3d0fda12704cecc30606f9.1618753038.git.leonro@nvidia.com>

On Sun, Apr 18, 2021 at 04:37:35PM +0300, Leon Romanovsky wrote:
> From: Shay Drory <shayd@nvidia.com>
> 
> Currently, in the case of a QP, the following use-after-free is possible:
> 
> 	cpu0				cpu1
> 	----				----
>  res_get_common_dumpit()
>  rdma_restrack_get()
>  fill_res_qp_entry()
> 				ib_destroy_qp_user()
> 				 rdma_restrack_del()
> 				 qp->device->ops.destroy_qp()
>   ib_query_qp()
>   qp->device->ops.query_qp()
>     --> use-after-free-qp
> 
> This is because rdma_restrack_del(), in the case of a QP, does not wait until
> all users are gone.
> 
> Fix it by making rdma_restrack_del() wait until all users are gone for
> QPs as well.
> 
> Fixes: 13ef5539def7 ("RDMA/restrack: Count references to the verbs objects")
> Signed-off-by: Shay Drory <shayd@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/core/restrack.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/core/restrack.c b/drivers/infiniband/core/restrack.c
> index ffabaf327242..def0c5b0efe9 100644
> --- a/drivers/infiniband/core/restrack.c
> +++ b/drivers/infiniband/core/restrack.c
> @@ -340,7 +340,7 @@ void rdma_restrack_del(struct rdma_restrack_entry *res)
>  	rt = &dev->res[res->type];
>  
>  	old = xa_erase(&rt->xa, res->id);
> -	if (res->type == RDMA_RESTRACK_MR || res->type == RDMA_RESTRACK_QP)
> +	if (res->type == RDMA_RESTRACK_MR)
>  		return;

Why is MR skipping this?


It also calls into the driver under its dumpit; at the very least this
needs a comment.

Jason
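
For readers unfamiliar with the pattern under discussion: the fix relies on
restrack's get/put reference counting combined with a completion that the
delete path waits on before the driver object can be freed. Below is a
minimal, self-contained sketch of that pattern. The struct and helper names
(tracked_entry, tracked_entry_get, tracked_entry_del, ...) are illustrative
only and are not the exact fields or functions in
drivers/infiniband/core/restrack.c.

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/completion.h>
  #include <linux/types.h>

  struct tracked_entry {
  	struct kref kref;		/* one ref per reader plus the owner */
  	struct completion released;	/* completed when the last ref drops */
  };

  static void tracked_entry_init(struct tracked_entry *e)
  {
  	kref_init(&e->kref);		/* owner holds the initial reference */
  	init_completion(&e->released);
  }

  static void tracked_entry_release(struct kref *kref)
  {
  	struct tracked_entry *e =
  		container_of(kref, struct tracked_entry, kref);

  	complete(&e->released);
  }

  /* dumpit side: take a reference only if the entry is still live */
  static bool tracked_entry_get(struct tracked_entry *e)
  {
  	return kref_get_unless_zero(&e->kref);
  }

  static void tracked_entry_put(struct tracked_entry *e)
  {
  	kref_put(&e->kref, tracked_entry_release);
  }

  /*
   * destroy side: drop the owner's reference and block until every
   * reader that won tracked_entry_get() has called tracked_entry_put(),
   * so the driver's destroy callback cannot race with a query callback.
   */
  static void tracked_entry_del(struct tracked_entry *e)
  {
  	tracked_entry_put(e);
  	wait_for_completion(&e->released);
  }

With this pattern, a dumpit that succeeded in the get keeps the completion
from firing until its matching put, so the destroy path cannot reach the
driver's destroy_qp() while a query is still in flight. The early return
removed by the patch is what let QPs skip the wait_for_completion() step.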


Thread overview: 16+ messages
2021-04-18 13:37 [PATCH rdma-next] RDMA/restrack: Delay QP deletion till all users are gone Leon Romanovsky
2021-04-20 12:39 ` Jason Gunthorpe [this message]
2021-04-20 13:06   ` Leon Romanovsky
2021-04-20 15:25     ` Jason Gunthorpe
2021-04-21  5:03       ` Leon Romanovsky
2021-04-22 14:29         ` Jason Gunthorpe
2021-04-25 13:03           ` Leon Romanovsky
2021-04-25 13:08             ` Jason Gunthorpe
2021-04-25 13:44               ` Leon Romanovsky
2021-04-25 17:22                 ` Jason Gunthorpe
2021-04-25 17:38                   ` Leon Romanovsky
2021-04-26 12:03                     ` Jason Gunthorpe
2021-04-26 13:08                       ` Leon Romanovsky
2021-04-26 13:11                         ` Jason Gunthorpe
2021-04-27  4:45                           ` Leon Romanovsky
2021-05-02 11:28           ` Leon Romanovsky
