linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-28  9:29 [PATCH rdma-next v1 0/6] Collection of mlx5_ib fixes Leon Romanovsky
@ 2024-01-28  9:29 ` Leon Romanovsky
  2024-01-29 17:52   ` Jason Gunthorpe
  0 siblings, 1 reply; 8+ messages in thread
From: Leon Romanovsky @ 2024-01-28  9:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Or Har-Toov, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Michael Guralnik, Tamar Mashiah,
	Yishai Hadas

From: Or Har-Toov <ohartoov@nvidia.com>

In the dereg flow, UMEM is not a good enough indication whether an MR
is from userspace since in mlx5_ib_rereg_user_mr there are some cases
when a new MR is created and the UMEM of the old MR is set to NULL.
Currently, when mlx5_ib_dereg_mr is called on the old MR, UMEM is NULL
but cache_ent can still be non-NULL, so the mkey will not be destroyed.
Therefore, checking whether an mkey is a cacheable user mkey should be
done by checking whether rb_key or cache_ent is set; all other kinds of
mkeys should be destroyed.
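
For illustration, the rereg path that strands the old MR looks roughly
like this (condensed from mlx5_ib_rereg_user_mr(); a simplified sketch,
not the verbatim upstream code):

	/* umem ownership moves to the new MR; the old MR keeps its
	 * cache_ent but loses its umem, so a umem-based check in dereg
	 * misclassifies it
	 */
	err = mlx5r_umr_revoke_mr(mr);
	if (err)
		return ERR_PTR(err);

	umem = mr->umem;
	mr->umem = NULL;
	atomic_sub(ib_umem_num_pages(umem), &dev->mdev->priv.reg_pages);

	return create_real_mr(new_pd, umem, mr->ibmr.iova, new_access_flags);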

Fixes: dd1b913fb0d0 ("RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow")
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/mr.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 12bca6ca4760..87552a689e07 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1857,6 +1857,11 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev,
 	return ret;
 }
 
+static bool is_cacheable_mkey(struct mlx5_ib_mkey *mkey)
+{
+	return mkey->cache_ent || mkey->rb_key.ndescs;
+}
+
 int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct mlx5_ib_mr *mr = to_mmr(ibmr);
@@ -1901,12 +1906,6 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 		mr->sig = NULL;
 	}
 
-	/* Stop DMA */
-	if (mr->umem && mlx5r_umr_can_load_pas(dev, mr->umem->length))
-		if (mlx5r_umr_revoke_mr(mr) ||
-		    cache_ent_find_and_store(dev, mr))
-			mr->mmkey.cache_ent = NULL;
-
 	if (mr->umem && mr->umem->is_peer) {
 		rc = mlx5r_umr_revoke_mr(mr);
 		if (rc)
@@ -1914,7 +1913,9 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 		ib_umem_stop_invalidation_notifier(mr->umem);
 	}
 
-	if (!mr->mmkey.cache_ent) {
+	/* Stop DMA */
+	if (!is_cacheable_mkey(&mr->mmkey) || mlx5r_umr_revoke_mr(mr) ||
+	    cache_ent_find_and_store(dev, mr)) {
 		rc = destroy_mkey(to_mdev(mr->ibmr.device), mr);
 		if (rc)
 			return rc;
-- 
2.43.0



* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-28  9:29 ` [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys Leon Romanovsky
@ 2024-01-29 17:52   ` Jason Gunthorpe
  2024-01-30 13:47     ` Leon Romanovsky
  0 siblings, 1 reply; 8+ messages in thread
From: Jason Gunthorpe @ 2024-01-29 17:52 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Or Har-Toov, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Michael Guralnik, Tamar Mashiah,
	Yishai Hadas

On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
> From: Or Har-Toov <ohartoov@nvidia.com>
> 
> In the dereg flow, UMEM is not a good enough indication whether an MR
> is from userspace since in mlx5_ib_rereg_user_mr there are some cases
> when a new MR is created and the UMEM of the old MR is set to NULL.

Why is this a problem though? The only thing the umem has to do is to
trigger the UMR optimization. If UMR is not triggered then the mkey is
destroyed and it shouldn't be part of the cache at all.

> Currently, when mlx5_ib_dereg_mr is called on the old MR, UMEM is NULL
> but cache_ent can still be non-NULL, so the mkey will not be destroyed.
> Therefore, checking whether an mkey is a cacheable user mkey should be
> done by checking whether rb_key or cache_ent is set; all other kinds of
> mkeys should be destroyed.
> 
> Fixes: dd1b913fb0d0 ("RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow")
> Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/hw/mlx5/mr.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> index 12bca6ca4760..87552a689e07 100644
> --- a/drivers/infiniband/hw/mlx5/mr.c
> +++ b/drivers/infiniband/hw/mlx5/mr.c
> @@ -1857,6 +1857,11 @@ static int cache_ent_find_and_store(struct mlx5_ib_dev *dev,
>  	return ret;
>  }
>  
> +static bool is_cacheable_mkey(struct mlx5_ib_mkey *mkey)
> +{
> +	return mkey->cache_ent || mkey->rb_key.ndescs;
> +}
> +
>  int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
>  {
>  	struct mlx5_ib_mr *mr = to_mmr(ibmr);
> @@ -1901,12 +1906,6 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
>  		mr->sig = NULL;
>  	}
>  
> -	/* Stop DMA */
> -	if (mr->umem && mlx5r_umr_can_load_pas(dev, mr->umem->length))
> -		if (mlx5r_umr_revoke_mr(mr) ||
> -		    cache_ent_find_and_store(dev, mr))
> -			mr->mmkey.cache_ent = NULL;
> -
>  	if (mr->umem && mr->umem->is_peer) {
>  		rc = mlx5r_umr_revoke_mr(mr);
>  		if (rc)

?? this isn't based on an upstream tree

> @@ -1914,7 +1913,9 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
>  		ib_umem_stop_invalidation_notifier(mr->umem);
>  	}
>  
> -	if (!mr->mmkey.cache_ent) {
> +	/* Stop DMA */
> +	if (!is_cacheable_mkey(&mr->mmkey) || mlx5r_umr_revoke_mr(mr) ||
> +	    cache_ent_find_and_store(dev, mr)) {

And now the mlx5r_umr_can_load_pas() check has been lost; that isn't
good. A non-umr-able object should never be placed in the cache. If the
mkey's size is too big it has to be freed normally.

>  		rc = destroy_mkey(to_mdev(mr->ibmr.device), mr);
>  		if (rc)
>  			return rc;

I'm not sure it is right to re-order this? The revocation of an mkey
should be a single operation, whichever path we choose to take.

Regardless the upstream code doesn't have this ordering so it should
all be one sequence of revoking the mkey and synchronizing the cache.

I suggest to put the revoke sequence into one function:

static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
{
	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);

	if (mr->umem && mlx5r_umr_can_load_pas(dev, mr->umem->length)) {
		if (mlx5r_umr_revoke_mr(mr))
			goto destroy;

		if (cache_ent_find_and_store(dev, mr))
			goto destroy;
		return 0;
	}

destroy:
	if (mr->mmkey.cache_ent) {
		struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;

		/* use a local pointer: cache_ent is cleared before the unlock */
		spin_lock_irq(&ent->mkeys_queue.lock);
		ent->in_use--;
		mr->mmkey.cache_ent = NULL;
		spin_unlock_irq(&ent->mkeys_queue.lock);
	}
	return destroy_mkey(dev, mr);
}

(notice we probably shouldn't set cache_ent to null without adjusting in_use)

Jason


* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-29 17:52   ` Jason Gunthorpe
@ 2024-01-30 13:47     ` Leon Romanovsky
  0 siblings, 0 replies; 8+ messages in thread
From: Leon Romanovsky @ 2024-01-30 13:47 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Or Har-Toov, Edward Srouji, linux-rdma, Maor Gottlieb, Mark Zhang,
	Michael Guralnik, Tamar Mashiah, Yishai Hadas

On Mon, Jan 29, 2024 at 01:52:39PM -0400, Jason Gunthorpe wrote:
> On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
> > From: Or Har-Toov <ohartoov@nvidia.com>

<...>

> >  	if (mr->umem && mr->umem->is_peer) {
> >  		rc = mlx5r_umr_revoke_mr(mr);
> >  		if (rc)
> 
> ?? this isn't based on an upstream tree

Yes, it is my mistake. I will fix it.

Thanks


* RE: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
       [not found] <7429abbc-5400-b034-c26a-cdc587689904@nvidia.com>
@ 2024-01-31 12:50 ` Michael Guralnik
  2024-01-31 14:18   ` Jason Gunthorpe
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Guralnik @ 2024-01-31 12:50 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Leon Romanovsky, Edward Srouji, linux-rdma, Maor Gottlieb,
	Mark Zhang, Tamar Mashiah, Yishai Hadas, Or Har-Toov

On 29/01/2024 19:52, Jason Gunthorpe wrote:
> On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
>> From: Or Har-Toov <ohartoov@nvidia.com>
>>
>> In the dereg flow, UMEM is not a good enough indication whether an MR
>> is from userspace since in mlx5_ib_rereg_user_mr there are some cases
>> when a new MR is created and the UMEM of the old MR is set to NULL.
> Why is this a problem though? The only thing the umem has to do is to
> trigger the UMR optimization. If UMR is not triggered then the mkey is
> destroyed and it shouldn't be part of the cache at all.

The problem is that it doesn't trigger the UMR on mkeys that are dereged
from the rereg flow.
Optimally, we'd want them to return to the cache, if possible.

We can keep relying on the UMEM to decide whether we want to try to return
them to cache, as you suggested in the revoke_mr() below, but that way those
mkeys will not return to the cache and we have to deal with the in_use in
the revoke flow.

>> @@ -1914,7 +1913,9 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
>>  		ib_umem_stop_invalidation_notifier(mr->umem);
>>  	}
>>  
>> -	if (!mr->mmkey.cache_ent) {
>> +	/* Stop DMA */
>> +	if (!is_cacheable_mkey(&mr->mmkey) || mlx5r_umr_revoke_mr(mr) ||
>> +	    cache_ent_find_and_store(dev, mr)) {
> And now the mlx5r_umr_can_load_pas() check has been lost; that isn't
> good. A non-umr-able object should never be placed in the cache. If the
> mkey's size is too big it has to be freed normally.

mlx5r_umr_can_load_pas() will not get lost, since mkeys that are not
UMR-able will not have rb_key or cache_ent set, so is_cacheable_mkey()
is always false for them.
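
For reference, the creation-time gate that keeps this invariant looks
roughly like the following (condensed from create_real_mr(); argument
lists simplified — a sketch, not the verbatim upstream code):

	bool xlt_with_umr = mlx5r_umr_can_load_pas(dev, umem->length);

	if (xlt_with_umr) {
		/* UMR-able: alloc_cacheable_mr() populates mmkey.rb_key
		 * and may take the mkey from the cache
		 */
		mr = alloc_cacheable_mr(pd, umem, iova, access_flags);
	} else {
		/* not UMR-able: reg_create() leaves rb_key zeroed, so
		 * is_cacheable_mkey() stays false for these mkeys
		 */
		mutex_lock(&dev->slow_path_mutex);
		mr = reg_create(pd, umem, iova, access_flags, page_size, true);
		mutex_unlock(&dev->slow_path_mutex);
	}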

>>  		rc = destroy_mkey(to_mdev(mr->ibmr.device), mr);
>>  		if (rc)
>>  			return rc;
> I'm not sure it is right to re-order this? The revocation of an mkey
> should be a single operation, whichever path we choose to take.
>
> Regardless the upstream code doesn't have this ordering so it should
> all be one sequence of revoking the mkey and synchronizing the cache.
>
> I suggest to put the revoke sequence into one function:
>
> static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
> {
> 	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
> 
> 	if (mr->umem && mlx5r_umr_can_load_pas(dev, mr->umem->length)) {
> 		if (mlx5r_umr_revoke_mr(mr))
> 			goto destroy;
> 
> 		if (cache_ent_find_and_store(dev, mr))
> 			goto destroy;
> 		return 0;
> 	}
> 
> destroy:
> 	if (mr->mmkey.cache_ent) {
> 		struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
> 
> 		/* use a local pointer: cache_ent is cleared before the unlock */
> 		spin_lock_irq(&ent->mkeys_queue.lock);
> 		ent->in_use--;
> 		mr->mmkey.cache_ent = NULL;
> 		spin_unlock_irq(&ent->mkeys_queue.lock);
> 	}
> 	return destroy_mkey(dev, mr);
> }
>
> (notice we probably shouldn't set cache_ent to null without adjusting 
> in_use)
> Jason


* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-31 12:50 ` [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys Michael Guralnik
@ 2024-01-31 14:18   ` Jason Gunthorpe
  2024-01-31 14:35     ` Michael Guralnik
  0 siblings, 1 reply; 8+ messages in thread
From: Jason Gunthorpe @ 2024-01-31 14:18 UTC (permalink / raw)
  To: Michael Guralnik
  Cc: Leon Romanovsky, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Tamar Mashiah, Yishai Hadas,
	Or Har-Toov

On Wed, Jan 31, 2024 at 02:50:03PM +0200, Michael Guralnik wrote:
> On 29/01/2024 19:52, Jason Gunthorpe wrote:
> > On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
> > > From: Or Har-Toov <ohartoov@nvidia.com>
> > > 
> > > In the dereg flow, UMEM is not a good enough indication whether an MR
> > > is from userspace since in mlx5_ib_rereg_user_mr there are some cases
> > > when a new MR is created and the UMEM of the old MR is set to NULL.
> > Why is this a problem though? The only thing the umem has to do is to
> > trigger the UMR optimization. If UMR is not triggered then the mkey is
> > destroyed and it shouldn't be part of the cache at all.
> 
> The problem is that it doesn't trigger the UMR on mkeys that are dereged
> from the rereg flow.
> Optimally, we'd want them to return to the cache, if possible.

Right, so you suggest changing the umem and umr_can_load into
is_cacheable_mkey() and carefully ensuring the rb_key.ndescs is 
zero for non-umrable?

> We can keep relying on the UMEM to decide whether we want to try to return
> them to cache, as you suggested in the revoke_mr() below, but that way those
> mkeys will not return to the cache and we have to deal with the in_use in
> the revoke flow.

I don't know what this in_use means? in_use should be only an issue if
the cache_ent is set? Are we really having in_use be set and cache_ent
be NULL? That seems like a different bug that should be fixed by
keeping cache_ent and in_use consistent.

Jason


* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-31 14:18   ` Jason Gunthorpe
@ 2024-01-31 14:35     ` Michael Guralnik
  2024-01-31 15:23       ` Jason Gunthorpe
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Guralnik @ 2024-01-31 14:35 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Tamar Mashiah, Yishai Hadas,
	Or Har-Toov


On 31/01/2024 16:18, Jason Gunthorpe wrote:
> On Wed, Jan 31, 2024 at 02:50:03PM +0200, Michael Guralnik wrote:
>> On 29/01/2024 19:52, Jason Gunthorpe wrote:
>>> On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
>>>> From: Or Har-Toov <ohartoov@nvidia.com>
>>>>
>>>> In the dereg flow, UMEM is not a good enough indication whether an MR
>>>> is from userspace since in mlx5_ib_rereg_user_mr there are some cases
>>>> when a new MR is created and the UMEM of the old MR is set to NULL.
>>> Why is this a problem though? The only thing the umem has to do is to
>>> trigger the UMR optimization. If UMR is not triggered then the mkey is
>>> destroyed and it shouldn't be part of the cache at all.
>> The problem is that it doesn't trigger the UMR on mkeys that are dereged
>> from the rereg flow.
>> Optimally, we'd want them to return to the cache, if possible.
> Right, so you suggest changing the umem and umr_can_load into
> is_cacheable_mkey() and carefully ensuring the rb_key.ndescs is
> zero for non-umrable?

Yes. The code is already written trying to ensure this and we've rephrased
a comment in the previous patch to describe this more accurately.

>> We can keep relying on the UMEM to decide whether we want to try to return
>> them to cache, as you suggested in the revoke_mr() below, but that way those
>> mkeys will not return to the cache and we have to deal with the in_use in
>> the revoke flow.
> I don't know what this in_use means? in_use should be only an issue if
> the cache_ent is set? Are we really having in_use be set and cache_ent
> be NULL? That seems like a different bug that should be fixed by
> keeping cache_ent and in_use consistent.

in_use should be handled only if mkey has a cache_ent.

I take back what I wrote previously, in_use should be handled in revoke_mr
no matter how we choose to implement this, since we're not guaranteed to
succeed in UMR and might end up dereging mkeys from the cache.



* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-31 14:35     ` Michael Guralnik
@ 2024-01-31 15:23       ` Jason Gunthorpe
  2024-01-31 18:25         ` Michael Guralnik
  0 siblings, 1 reply; 8+ messages in thread
From: Jason Gunthorpe @ 2024-01-31 15:23 UTC (permalink / raw)
  To: Michael Guralnik
  Cc: Leon Romanovsky, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Tamar Mashiah, Yishai Hadas,
	Or Har-Toov

On Wed, Jan 31, 2024 at 04:35:17PM +0200, Michael Guralnik wrote:
> 
> On 31/01/2024 16:18, Jason Gunthorpe wrote:
> > On Wed, Jan 31, 2024 at 02:50:03PM +0200, Michael Guralnik wrote:
> > > On 29/01/2024 19:52, Jason Gunthorpe wrote:
> > > > On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
> > > > > From: Or Har-Toov <ohartoov@nvidia.com>
> > > > > 
> > > > > In the dereg flow, UMEM is not a good enough indication whether an MR
> > > > > is from userspace since in mlx5_ib_rereg_user_mr there are some cases
> > > > > when a new MR is created and the UMEM of the old MR is set to NULL.
> > > > Why is this a problem though? The only thing the umem has to do is to
> > > > trigger the UMR optimization. If UMR is not triggered then the mkey is
> > > > destroyed and it shouldn't be part of the cache at all.
> > > The problem is that it doesn't trigger the UMR on mkeys that are dereged
> > > from the rereg flow.
> > > Optimally, we'd want them to return to the cache, if possible.
> > Right, so you suggest changing the umem and umr_can_load into
> > is_cacheable_mkey() and carefully ensuring the rb_key.ndescs is
> > zero for non-umrable?
> 
> Yes. The code is already written trying to ensure this and we've rephrased
> a comment in the previous patch to describe this more accurately.

But then I wonder why cache_ent becomes NULL while rb_key.ndescs is
set? That seems pretty confusing.

> > > We can keep relying on the UMEM to decide whether we want to try to return
> > > them to cache, as you suggested in the revoke_mr() below, but that way those
> > > mkeys will not return to the cache and we have to deal with the in_use in
> > > the revoke flow.
> > I don't know what this in_use means? in_use should be only an issue if
> > the cache_ent is set? Are we really having in_use be set and cache_ent
> > be NULL? That seems like a different bug that should be fixed by
> > keeping cache_ent and in_use consistent.
> 
> in_use should be handled only if mkey has a cache_ent.
> 
> I take back what I wrote previously, in_use should be handled in revoke_mr
> no matter how we choose to implement this, since we're not guaranteed to
> succeed in UMR and might end up dereging mkeys from the cache.

That makes the most sense, yes.

Jason


* Re: [PATCH rdma-next v1 5/6] RDMA/mlx5: Change check for cacheable user mkeys
  2024-01-31 15:23       ` Jason Gunthorpe
@ 2024-01-31 18:25         ` Michael Guralnik
  0 siblings, 0 replies; 8+ messages in thread
From: Michael Guralnik @ 2024-01-31 18:25 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Leon Romanovsky, Edward Srouji, linux-rdma,
	Maor Gottlieb, Mark Zhang, Tamar Mashiah, Yishai Hadas,
	Or Har-Toov


On 31/01/2024 17:23, Jason Gunthorpe wrote:
> On Wed, Jan 31, 2024 at 04:35:17PM +0200, Michael Guralnik wrote:
>> On 31/01/2024 16:18, Jason Gunthorpe wrote:
>>> On Wed, Jan 31, 2024 at 02:50:03PM +0200, Michael Guralnik wrote:
>>>> On 29/01/2024 19:52, Jason Gunthorpe wrote:
>>>>> On Sun, Jan 28, 2024 at 11:29:15AM +0200, Leon Romanovsky wrote:
>>>>>> From: Or Har-Toov <ohartoov@nvidia.com>
>>>>>>
>>>>>> In the dereg flow, UMEM is not a good enough indication whether an MR
>>>>>> is from userspace since in mlx5_ib_rereg_user_mr there are some cases
>>>>>> when a new MR is created and the UMEM of the old MR is set to NULL.
>>>>> Why is this a problem though? The only thing the umem has to do is to
>>>>> trigger the UMR optimization. If UMR is not triggered then the mkey is
>>>>> destroyed and it shouldn't be part of the cache at all.
>>>> The problem is that it doesn't trigger the UMR on mkeys that are dereged
>>>> from the rereg flow.
>>>> Optimally, we'd want them to return to the cache, if possible.
>>> Right, so you suggest changing the umem and umr_can_load into
>>> is_cacheable_mkey() and carefully ensuring the rb_key.ndescs is
>>> zero for non-umrable?
>> Yes. The code is already written trying to ensure this and we've rephrased
>> a comment in the previous patch to describe this more accurately.
> But then I wonder why cache_ent becomes NULL while rb_key.ndescs is
> set? That seems pretty confusing.

I think we did it only in the flow where we destroy the mkey, to mark
mkeys that were not returned to the cache.

Do you think it would be better if we switched to marking this
explicitly? We could add a flag to the mkey that marks it as a cacheable
mkey and make that decision once per mkey at creation time. Then we can
stop relying on the values of other variables to decide what goes back
to the cache.
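
A minimal sketch of that idea (the field name and placement are
illustrative, not a final API):

	/* in struct mlx5_ib_mkey: */
	u8 cacheable : 1;

	/* set once at creation time, e.g. in alloc_cacheable_mr(): */
	mr->mmkey.cacheable = true;

	/* dereg then tests only the flag: */
	if (!mr->mmkey.cacheable || mlx5r_umr_revoke_mr(mr) ||
	    cache_ent_find_and_store(dev, mr))
		rc = destroy_mkey(dev, mr);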


Michael


