From: Leon Romanovsky <leon@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Michael Guralnik <michaelgur@nvidia.com>,
linux-rdma@vger.kernel.org, Maor Gottlieb <maorg@nvidia.com>
Subject: Re: [PATCH rdma-rc 06/10] RDMA/mlx5: Fix mkey cache possible deadlock on cleanup
Date: Tue, 6 Jun 2023 08:50:04 +0300
Message-ID: <20230606055004.GA6830@unreal>
In-Reply-To: <ZH4TTgqmYi0/A/bj@nvidia.com>
On Mon, Jun 05, 2023 at 01:54:38PM -0300, Jason Gunthorpe wrote:
> On Mon, Jun 05, 2023 at 01:33:22PM +0300, Leon Romanovsky wrote:
>
> > diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> > index 1ce48e485c5b..f113656e4027 100644
> > --- a/drivers/infiniband/hw/mlx5/mr.c
> > +++ b/drivers/infiniband/hw/mlx5/mr.c
> > @@ -1033,7 +1033,15 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
> >  		xa_lock_irq(&ent->mkeys);
> >  		ent->disabled = true;
> >  		xa_unlock_irq(&ent->mkeys);
> > -		cancel_delayed_work_sync(&ent->dwork);
> > +	}
> > +
> > +	/* Run the canceling of delayed works on the cache in a separate loop after
> > +	 * disabling all entries to ensure someone_adding() will not try taking the
> > +	 * rb_lock while flushing the workqueue.
> > +	 */
> > +	for (node = rb_first(root); node; node = rb_next(node)) {
> > +		ent = rb_entry(node, struct mlx5_cache_ent, node);
> > +		cancel_delayed_work(&ent->dwork);
> >  	}
> >
> This goes on to kfree the ent, so it can't just drop the sync.
With _sync, we will get the same code as before.
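
Just to spell out the trade-off (a minimal sketch of the pattern, not the
actual mlx5 cleanup path; it assumes the work handler checks ent->disabled
and takes rb_lock via someone_adding(), as the patch comment says):

/*
 * cancel_delayed_work() only removes a pending work item; a handler that
 * is already running keeps running, so a later kfree(ent) can race with it.
 * cancel_delayed_work_sync() also waits for a running handler, but then it
 * must not be called while a lock the handler takes (rb_lock) is held, or
 * cleanup and the work deadlock against each other.
 */
for (node = rb_first(root); node; node = rb_next(node)) {
	ent = rb_entry(node, struct mlx5_cache_ent, node);
	xa_lock_irq(&ent->mkeys);
	ent->disabled = true;			/* handler bails out once it sees this */
	xa_unlock_irq(&ent->mkeys);
	cancel_delayed_work_sync(&ent->dwork);	/* safe only if rb_lock is not held here */
}
/* only after the sync is it safe to kfree(ent) */

Keeping the sync collapses the second loop back into the single loop we
already have; dropping it leaves the later kfree(ent) racing with a
possibly still-running handler.
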
Let's put this patch aside.
Thanks
>
> Jason
Thread overview: 15+ messages
2023-06-05 10:33 [PATCH rdma-rc 00/10] Batch of uverbs and mlx5_ib fixes Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 01/10] RDMA/mlx5: Initiate dropless RQ for RAW Ethernet functions Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 02/10] RDMA/mlx5: Create an indirect flow table for steering anchor Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 03/10] RDMA/mlx5: Fix Q-counters per vport allocation Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 04/10] RDMA/mlx5: Remove vport Q-counters dependency on normal Q-counters Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 05/10] RDMA/mlx5: Fix Q-counters query in LAG mode Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 06/10] RDMA/mlx5: Fix mkey cache possible deadlock on cleanup Leon Romanovsky
2023-06-05 16:54 ` Jason Gunthorpe
2023-06-06 5:50 ` Leon Romanovsky [this message]
2023-06-05 10:33 ` [PATCH rdma-rc 07/10] RDMA/cma: Always set static rate to 0 for RoCE Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 08/10] RDMA/uverbs: Restrict usage of privileged QKEYs Leon Romanovsky
2023-06-05 16:55 ` Jason Gunthorpe
2023-06-05 10:33 ` [PATCH rdma-rc 09/10] IB/uverbs: Fix to consider event queue closing also upon non-blocking mode Leon Romanovsky
2023-06-05 10:33 ` [PATCH rdma-rc 10/10] RDMA/mlx5: Fix affinity assignment Leon Romanovsky
2023-06-11 9:21 ` (subset) [PATCH rdma-rc 00/10] Batch of uverbs and mlx5_ib fixes Leon Romanovsky