From: Leon Romanovsky <leon@kernel.org>
To: Anand Khoje <anand.a.khoje@oracle.com>
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, saeedm@mellanox.com, davem@davemloft.net
Subject: Re: [PATCH v3] net/mlx5 : Reclaim max 50K pages at once
Date: Wed, 19 Jun 2024 11:59:34 +0300
Message-ID: <20240619085934.GL4025@unreal>
In-Reply-To: <032ba44f-1552-45bc-a68c-c848bf6da784@oracle.com>

On Tue, Jun 18, 2024 at 11:14:33PM +0530, Anand Khoje wrote:
> 
> On 6/16/24 21:14, Leon Romanovsky wrote:
> > On Fri, Jun 14, 2024 at 01:31:35PM +0530, Anand Khoje wrote:
> > > In a non-FLR context, the CX-5 device at times requests the release
> > > of ~8 million FW pages. This requires a huge number of cmd mailboxes,
> > > all of which have to be freed once the pages are reclaimed. Freeing
> > > that many cmd mailboxes consumes CPU time running into many seconds,
> > > which on non-preemptible kernels leads to critical processes starving
> > > on that CPU's RQ.
> > > To alleviate this, this change restricts the number of pages a worker
> > > will try to reclaim to a maximum of 50K in one go.
> > > The 50K limit is aligned with the current firmware capacity/limit of
> > > releasing 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES +
> > > MLX5_PAGES_TAKE device command.
> > > 
> > > Our tests have shown a significant benefit from this change in terms
> > > of the time consumed by dma_pool_free().
> > > During a test where the HCA raised an event to release 1.3 million
> > > pages, the following observations were made:
> > > 
> > > - Without this change:
> > > The number of mailbox messages allocated was around 20K, to
> > > accommodate the DMA addresses of 1.3 million pages.
> > > The average time spent by dma_pool_free() to free the DMA pool was
> > > between 16 usec and 32 usec (the histogram bucket values below are
> > > in nanoseconds):
> > >             value  ------------- Distribution ------------- count
> > >               256 |                                         0
> > >               512 |@                                        287
> > >              1024 |@@@                                      1332
> > >              2048 |@                                        656
> > >              4096 |@@@@@                                    2599
> > >              8192 |@@@@@@@@@@                               4755
> > >             16384 |@@@@@@@@@@@@@@@                          7545
> > >             32768 |@@@@@                                    2501
> > >             65536 |                                         0
> > > 
> > > - With this change:
> > > The number of mailbox messages allocated was around 800; this was to
> > > accommodate the DMA addresses of only 50K pages.
> > > The average time spent by dma_pool_free() to free the DMA pool in
> > > this case was between 1 usec and 2 usec:
> > >             value  ------------- Distribution ------------- count
> > >               256 |                                         0
> > >               512 |@@@@@@@@@@@@@@@@@@                       346
> > >              1024 |@@@@@@@@@@@@@@@@@@@@@@                   435
> > >              2048 |                                         0
> > >              4096 |                                         0
> > >              8192 |                                         1
> > >             16384 |                                         0
> > > 
> > > Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
> > > ---
> > > Changes in v3:
> > >     - Shifted the logic to function req_pages_handler() as per
> > >       Leon's suggestion.
> > > ---
> > >   drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 7 ++++++-
> > >   1 file changed, 6 insertions(+), 1 deletion(-)
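
For readers without the patch body at hand, a minimal sketch of the
capping logic described above (the helper and names here are
illustrative, not necessarily the patch's exact code; in MANAGE_PAGES
events a negative npages means the FW is asking for pages back):

    #include <linux/types.h>

    /* Reclaim at most 50K pages per command, matching the FW limit for
     * MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE.  Expressed as a
     * negative lower bound because reclaim requests carry npages < 0.
     */
    #define MAX_RECLAIM_NPAGES	(-50 * 1024)

    /* Bound one reclaim iteration so the number of cmd mailboxes (and
     * the time later spent freeing them) stays manageable.
     */
    static s32 cap_reclaim_npages(s32 npages)
    {
    	return npages < MAX_RECLAIM_NPAGES ? MAX_RECLAIM_NPAGES : npages;
    }

The idea being that any remainder is picked up by later iterations of
the worker rather than one oversized batch.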
> > > 
> > The title has extra space:
> > "net/mlx5 : Reclaim max 50K pages at once" -> "net/mlx5: Reclaim max 50K pages at once"
> > 
> > But the code looks good to me.
> > 
> > Thanks,
> > Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
> 
> Hi Leon,
> 
> Thanks for providing the R-B. Should I send a v4 with the fix for the extra
> space issue?

Yes, please.
And run get_maintainer.pl to get the correct email addresses for the maintainers and MLs.
This patch will be applied by netdev maintainers.
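
For example, from the kernel source tree (the patch filename here is
illustrative):

    $ ./scripts/get_maintainer.pl 0001-net-mlx5-Reclaim-max-50K-pages-at-once.patch

It prints the maintainer and mailing-list addresses to put in To:/Cc:.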

Thanks

> 
> -Anand
> 


Thread overview: 4+ messages
2024-06-14  8:01 [PATCH v3] net/mlx5 : Reclaim max 50K pages at once Anand Khoje
2024-06-16 15:44 ` Leon Romanovsky
2024-06-18 17:44   ` Anand Khoje
2024-06-19  8:59     ` Leon Romanovsky [this message]
