* [PATCH rdma-rc] RDMA/mlx5: Fix mkey cache WQ flush
@ 2023-10-25 17:49 Leon Romanovsky
From: Leon Romanovsky @ 2023-10-25 17:49 UTC
To: Jason Gunthorpe; +Cc: Moshe Shemesh, linux-rdma, Michael Guralnik, Shay Drory
From: Moshe Shemesh <moshe@nvidia.com>
The cited patch tries to ensure there are no pending works on the mkey
cache workqueue by disabling the addition of new works and calling
flush_workqueue(). However, the workqueue also has delayed works, which
may still be waiting out their delay and are therefore not yet queued,
so flush_workqueue() does not cover them.

Add cancel_delayed_work() for the delayed works that are still waiting
to be queued; the subsequent flush_workqueue() then flushes all works
that are already queued or running.
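
For context, here is a minimal, generic sketch of the teardown ordering
this establishes. It is illustrative only, not the mlx5 code: the
demo_* names and types below are made up.

#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/list.h>

/* Illustrative types only; not the mlx5 structures. */
struct demo_entry {
	struct list_head list;
	struct delayed_work dwork;
	bool disabled;
};

struct demo_cache {
	struct workqueue_struct *wq;
	struct delayed_work gc_dwork;
	struct mutex lock;
	struct list_head entries;
};

static void demo_cache_cleanup(struct demo_cache *c)
{
	struct demo_entry *e;

	if (!c->wq)
		return;

	mutex_lock(&c->lock);
	/*
	 * Delayed works still waiting on their timer are not on the
	 * workqueue yet, so flush_workqueue() alone would miss them.
	 * Cancel them explicitly first.
	 */
	cancel_delayed_work(&c->gc_dwork);
	list_for_each_entry(e, &c->entries, list) {
		e->disabled = true;	/* stop handlers from requeueing */
		cancel_delayed_work(&e->dwork);
	}
	mutex_unlock(&c->lock);

	/* Wait for works that are already queued or running. */
	flush_workqueue(c->wq);
	destroy_workqueue(c->wq);
}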
Fixes: 374012b00457 ("RDMA/mlx5: Fix mkey cache possible deadlock on cleanup")
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/mr.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 8a3762d9ff58..e0629898c3c0 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1026,11 +1026,13 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
 		return;
 	mutex_lock(&dev->cache.rb_lock);
+	cancel_delayed_work(&dev->cache.remove_ent_dwork);
 	for (node = rb_first(root); node; node = rb_next(node)) {
 		ent = rb_entry(node, struct mlx5_cache_ent, node);
 		xa_lock_irq(&ent->mkeys);
 		ent->disabled = true;
 		xa_unlock_irq(&ent->mkeys);
+		cancel_delayed_work(&ent->dwork);
 	}
 	mutex_unlock(&dev->cache.rb_lock);
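
A note on the choice of cancel_delayed_work() here: it only removes a
work that is still waiting on its delay or already sitting in the queue;
it does not wait for a handler that is currently executing. Waiting for
in-flight handlers is presumably left to the flush_workqueue() call that
the cited patch added later in this function, and the pre-existing
ent->disabled flag should keep those handlers from rearming their
delayed work.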
--
2.41.0
* Re: [PATCH rdma-rc] RDMA/mlx5: Fix mkey cache WQ flush
2023-10-25 17:49 [PATCH rdma-rc] RDMA/mlx5: Fix mkey cache WQ flush Leon Romanovsky
@ 2023-10-31 14:06 ` Jason Gunthorpe
From: Jason Gunthorpe @ 2023-10-31 14:06 UTC
To: Leon Romanovsky; +Cc: Moshe Shemesh, linux-rdma, Michael Guralnik, Shay Drory
On Wed, Oct 25, 2023 at 08:49:59PM +0300, Leon Romanovsky wrote:
> From: Moshe Shemesh <moshe@nvidia.com>
>
> The cited patch tries to ensure there are no pending works on the mkey
> cache workqueue by disabling the addition of new works and calling
> flush_workqueue(). However, the workqueue also has delayed works, which
> may still be waiting out their delay and are therefore not yet queued,
> so flush_workqueue() does not cover them.
>
> Add cancel_delayed_work() for the delayed works that are still waiting
> to be queued; the subsequent flush_workqueue() then flushes all works
> that are already queued or running.
>
> Fixes: 374012b00457 ("RDMA/mlx5: Fix mkey cache possible deadlock on cleanup")
> Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> drivers/infiniband/hw/mlx5/mr.c | 2 ++
> 1 file changed, 2 insertions(+)
Applied to for-next, thanks
Jason