From: Leon Romanovsky <leon@kernel.org>
To: Anand Khoje <anand.a.khoje@oracle.com>
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, rama.nichanamatlu@oracle.com,
	manjunath.b.patil@oracle.com
Subject: Re: [PATCH v2] RDMA/mlx5 : Reclaim max 50K pages at once
Date: Thu, 13 Jun 2024 22:03:09 +0300
Message-ID: <20240613190309.GI4966@unreal>
In-Reply-To: <20240613121252.93315-1-anand.a.khoje@oracle.com>

On Thu, Jun 13, 2024 at 05:42:52PM +0530, Anand Khoje wrote:
> In a non-FLR context, the CX-5 at times requests the release of ~8 million
> FW pages. This requires a huge number of cmd mailboxes, which have to be
> released once the pages are reclaimed. Releasing such a huge number of cmd
> mailboxes consumes CPU time running into many seconds, which on
> non-preemptible kernels leads to critical processes starving on that CPU's
> run queue.
> To alleviate this, this change restricts the number of pages a worker will
> try to reclaim to a maximum of 50K in one go.
> The 50K limit is aligned with the current firmware capacity/limit of
> releasing 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE
> device command.
> 
> Our tests have shown a significant benefit from this change in terms of
> the time consumed by dma_pool_free().
> During a test in which the HCA raised an event to release 1.3 million
> pages, the following observations were made:
> 
> - Without this change:
> The number of mailbox messages allocated was around 20K, to accommodate
> the DMA addresses of 1.3 million pages.
> The average time spent by dma_pool_free() to free the DMA pool was
> between 16 usec and 32 usec.
>            value  ------------- Distribution ------------- count
>              256 |                                         0
>              512 |@                                        287
>             1024 |@@@                                      1332
>             2048 |@                                        656
>             4096 |@@@@@                                    2599
>             8192 |@@@@@@@@@@                               4755
>            16384 |@@@@@@@@@@@@@@@                          7545
>            32768 |@@@@@                                    2501
>            65536 |                                         0
> 
> - With this change:
> The number of mailbox messages allocated was around 800; this was to
> accommodate the DMA addresses of only 50K pages.
> The average time spent by dma_pool_free() to free the DMA pool in this
> case was between 1 usec and 2 usec.
>            value  ------------- Distribution ------------- count
>              256 |                                         0
>              512 |@@@@@@@@@@@@@@@@@@                       346
>             1024 |@@@@@@@@@@@@@@@@@@@@@@                   435
>             2048 |                                         0
>             4096 |                                         0
>             8192 |                                         1
>            16384 |                                         0
> 
> Signed-off-by: Anand Khoje <anand.a.khoje@oracle.com>
> ---
> Changes in v2:
>  - In v1, the CPU was yielded if more than 2 msec were spent in
>    mlx5_free_cmd_msg(). The approach to limiting the time spent has been
>    changed in this version.
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> index 1b38397..b1cf97d 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> @@ -482,12 +482,16 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
>  	return err;
>  }
>  
> +#define MAX_RECLAIM_NPAGES -50000
>  static void pages_work_handler(struct work_struct *work)
>  {
>  	struct mlx5_pages_req *req = container_of(work, struct mlx5_pages_req, work);
>  	struct mlx5_core_dev *dev = req->dev;
>  	int err = 0;
>  
> +	if (req->npages < MAX_RECLAIM_NPAGES)
> +		req->npages = MAX_RECLAIM_NPAGES;

I like this change more than the previous variant with yield.
Regarding the patch:
1. Please limit the number of pages in req_pages_handler() and not in pages_work_handler() (rough sketch below).
2. The patch title should be "net/mlx5: Reclaim max 50K pages at once" and not "RDMA...".
3. You should run the get_maintainer.pl script to find the right maintainers and add them to the To or Cc list.
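
Something along these lines, as a rough and untested sketch: the define is
taken from your patch (with parentheses added), the clamp is written with
max_t() but is equivalent to your check, and the surrounding names are from
my reading of req_pages_handler() so they may not match your tree exactly:

	#define MAX_RECLAIM_NPAGES	(-50000)

	static int req_pages_handler(struct notifier_block *nb,
				     unsigned long type, void *data)
	{
		...
		/* num_pages in the EQE is signed: negative means the FW wants
		 * pages back (a reclaim request), positive means it needs more
		 * pages from the driver.
		 */
		s32 npages = be32_to_cpu(eqe->data.req_pages.num_pages);
		...
		/* Cap a single reclaim request so that one work item never has
		 * to allocate and free cmd mailboxes for millions of pages.
		 */
		req->npages = max_t(s32, npages, MAX_RECLAIM_NPAGES);
		...
	}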

And I still think that you will get better performance by parallelizing the reclaim process.
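
Purely as an illustration (untested; reclaim_chunk, reclaim_chunk_work and
reclaim_wq are made-up names, and the actual reclaim_pages() call is left as
a comment), the handler could split a large request into 50K-page chunks and
queue each chunk as its own work item on an unbound workqueue, so several
MANAGE_PAGES commands and their mailbox teardown can run concurrently:

	struct reclaim_chunk {
		struct work_struct work;
		struct mlx5_core_dev *dev;
		u32 func_id;
		int npages;		/* positive page count for this chunk */
		bool ec_function;
	};

	static void reclaim_chunk_work(struct work_struct *work)
	{
		struct reclaim_chunk *chunk =
			container_of(work, struct reclaim_chunk, work);

		/* One MANAGE_PAGES + PAGES_TAKE command per chunk, so cmd
		 * mailboxes for at most 50K pages are allocated and freed per
		 * work item: call reclaim_pages() here for chunk->npages.
		 */
		kfree(chunk);
	}

	/* In req_pages_handler(), instead of queuing one huge request: */
	while (npages < 0) {
		struct reclaim_chunk *chunk = kzalloc(sizeof(*chunk), GFP_ATOMIC);

		if (!chunk)
			break;
		chunk->dev = dev;
		chunk->func_id = func_id;
		chunk->npages = min_t(int, -npages, -MAX_RECLAIM_NPAGES);
		chunk->ec_function = ec_function;
		INIT_WORK(&chunk->work, reclaim_chunk_work);
		/* reclaim_wq would be allocated with WQ_UNBOUND */
		queue_work(reclaim_wq, &chunk->work);
		npages += chunk->npages;
	}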

Thanks

> +
>  	if (req->release_all)
>  		release_all_pages(dev, req->func_id, req->ec_function);
>  	else if (req->npages < 0)
> -- 
> 1.8.3.1
> 
> 
