From: Jason Gunthorpe <jgg@nvidia.com>
To: Michael Guralnik <michaelgur@nvidia.com>
Cc: leonro@nvidia.com, linux-rdma@vger.kernel.org, maorg@nvidia.com,
aharonl@nvidia.com
Subject: Re: [PATCH v4 rdma-next 2/6] RDMA/mlx5: Remove explicit ODP cache entry
Date: Tue, 17 Jan 2023 10:49:44 -0400
Message-ID: <Y8a1iHmFzZL50lYD@nvidia.com>
In-Reply-To: <04b75e85-dcc4-b012-06e3-77a298a7d0e2@nvidia.com>
On Tue, Jan 17, 2023 at 02:08:35AM +0200, Michael Guralnik wrote:
>
> On 1/17/2023 1:45 AM, Jason Gunthorpe wrote:
> > On Tue, Jan 17, 2023 at 01:24:34AM +0200, Michael Guralnik wrote:
> > > On 1/16/2023 6:59 PM, Jason Gunthorpe wrote:
> > > > On Sun, Jan 15, 2023 at 03:34:50PM +0200, Michael Guralnik wrote:
> > > > > From: Aharon Landau <aharonl@nvidia.com>
> > > > >
> > > > > Explicit ODP mkey doesn't have unique properties. It shares the same
> > > > > properties as the order 18 cache entry. There is no need to devote a special
> > > > > entry for that.
> > > > IMR is "implicit MR", for implicit ODP; the commit message is wrong
> > > Yes. I'll change to: "IMR MTT mkeys don't have unique properties..."
> > >
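For context, the "order 18" in the quoted commit message falls out of the
implicit-ODP geometry. A sketch, assuming 4K pages; the constants mirror the
ones in odp.c:

#define MLX5_IMR_MTT_BITS	(30 - PAGE_SHIFT)  /* each IMR child MR covers 1GB */
#define MLX5_IMR_MTT_ENTRIES	(1 << MLX5_IMR_MTT_BITS)
/* with PAGE_SHIFT = 12 this is 1 << 18 MTT descriptors -- the same
 * shape as the generic order-18 MTT cache entry, so the dedicated
 * MLX5_IMR_MTT_CACHE_ENTRY is redundant */
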
> > > > > @@ -1591,20 +1593,8 @@ void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent)
> > > > > {
> > > > > if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT))
> > > > > return;
> > > > > -
> > > > > - switch (ent->order - 2) {
> > > > > - case MLX5_IMR_MTT_CACHE_ENTRY:
> > > > > - ent->ndescs = MLX5_IMR_MTT_ENTRIES;
> > > > > - ent->access_mode = MLX5_MKC_ACCESS_MODE_MTT;
> > > > > - ent->limit = 0;
> > > > > - break;
> > > > > -
> > > > > - case MLX5_IMR_KSM_CACHE_ENTRY:
> > > > > - ent->ndescs = mlx5_imr_ksm_entries;
> > > > > - ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM;
> > > > > - ent->limit = 0;
> > > > > - break;
> > > > > - }
> > > > > + ent->ndescs = mlx5_imr_ksm_entries;
> > > > > + ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM;
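After this hunk the helper reduces to roughly the following (reconstructed
from the diff context above, not copied from the committed code):

void mlx5_odp_init_mkey_cache_entry(struct mlx5_cache_ent *ent)
{
	/* only the implicit-ODP root KSM entry stays special; the IMR
	 * MTT children now come from the generic order-18 MTT entry */
	if (!(ent->dev->odp_caps.general_caps & IB_ODP_SUPPORT_IMPLICIT))
		return;
	ent->ndescs = mlx5_imr_ksm_entries;
	ent->access_mode = MLX5_MKC_ACCESS_MODE_KSM;
}
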
> > > > And you didn't answer my question, is this UMRable?
> > > Yes, we can UMR between access modes.
> > > > Because I don't quite understand how this can work: at this point,
> > > > for lower orders the access_mode is assumed to be MTT, so a KLM cannot
> > > > be put in a low-order entry.
> > > In our current code, the only non-MTT mkeys using the cache are the IMR KSM
> > > mkeys, which this patch doesn't change.
> > It does change it: the isolation between the special IMR and the
> > normal MTT order is removed right here.
> >
> > Now it is broken
>
> How does having IMR MTT mkeys share a cache entry with other MTT mkeys
> break anything?
Oh, I read it wrong; this is still keeping the high-order
MLX5_IMR_KSM_CACHE_ENTRY.
> > > > Ideally you'd teach UMR to switch between MTT/KSM and then the cache
> > > > is fine; size the amount of space required based on the number of
> > > > bytes in the memory.
> > > Agreed, access_mode and ndescs can be dropped from the rb_key that this
> > > series introduces, and instead we'll add the size of the descriptors as a
> > > cache entry property.
> > > Doing this will reduce the number of entries in the RB tree but will add
> > > complexity to the dereg and rereg flows.
> > Not really, you just always set the access mode in the UMR like
> > everything else.
> >
> > Jason
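To make the descriptor-size keying concrete, something along these lines
(a sketch; the helper name is made up, and it only assumes the 8-byte MTT
vs 16-byte KSM descriptor sizes):

/* hypothetical helper: bytes of descriptor space an mkey needs, so
 * that mkeys differing only in access mode can share a cache entry */
static size_t mkey_desc_bytes(u8 access_mode, unsigned int ndescs)
{
	switch (access_mode) {
	case MLX5_MKC_ACCESS_MODE_MTT:
		return ndescs * 8;	/* one __be64 MTT per page */
	case MLX5_MKC_ACCESS_MODE_KSM:
		return ndescs * 16;	/* 16-byte KSM descriptors */
	}
	return 0;
}
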
>
> OK, I'll give this a second look. If it's really only this, I can probably
> push this quickly.
> BTW, this will mean that IMR KSM mkeys will also share an entry with other
> MTT mkeys
That would be perfect, you should definitely do it.
But it seems there is not an issue here, so a follow-up is OK.
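
"Set the access mode in the UMR" would just mean programming it in the mkey
context like the other UMRable fields, e.g. (a sketch, not the driver's
actual UMR path; the helper name is made up, but the
access_mode_1_0/access_mode_4_2 mkc fields are real):

/* hypothetical helper: write the access mode into the mkey context,
 * split across the two fields the device interface defines */
static void set_mkc_access_mode(void *mkc, u8 mode)
{
	MLX5_SET(mkc, mkc, access_mode_1_0, mode & 0x3);
	MLX5_SET(mkc, mkc, access_mode_4_2, (mode >> 2) & 0x7);
}
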
Jason
Thread overview: 14+ messages
2023-01-15 13:34 [PATCH v4 rdma-next 0/6] RDMA/mlx5: Switch MR cache to use RB-tree Michael Guralnik
2023-01-15 13:34 ` [PATCH v4 rdma-next 1/6] RDMA/mlx5: Don't keep umrable 'page_shift' in cache entries Michael Guralnik
2023-01-15 13:34 ` [PATCH v4 rdma-next 2/6] RDMA/mlx5: Remove explicit ODP cache entry Michael Guralnik
2023-01-16 16:59 ` Jason Gunthorpe
2023-01-16 23:24 ` Michael Guralnik
2023-01-16 23:45 ` Jason Gunthorpe
2023-01-17 0:08 ` Michael Guralnik
2023-01-17 14:49 ` Jason Gunthorpe [this message]
2023-01-15 13:34 ` [PATCH v4 rdma-next 3/6] RDMA/mlx5: Change the cache structure to an RB-tree Michael Guralnik
2023-01-15 13:34 ` [PATCH v4 rdma-next 4/6] RDMA/mlx5: Introduce mlx5r_cache_rb_key Michael Guralnik
2023-01-17 6:57 ` kernel test robot
2023-01-24 21:29 ` kernel test robot
2023-01-15 13:34 ` [PATCH v4 rdma-next 5/6] RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow Michael Guralnik
2023-01-15 13:34 ` [PATCH v4 rdma-next 6/6] RDMA/mlx5: Add work to remove temporary entries from the cache Michael Guralnik