Date: Wed, 19 Jun 2024 11:59:34 +0300
From: Leon Romanovsky
To: Anand Khoje
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, saeedm@mellanox.com, davem@davemloft.net
Subject: Re: [PATCH v3] net/mlx5 : Reclaim max 50K pages at once
Message-ID: <20240619085934.GL4025@unreal>
References: <20240614080135.122656-1-anand.a.khoje@oracle.com>
 <20240616154415.GA57288@unreal>
 <032ba44f-1552-45bc-a68c-c848bf6da784@oracle.com>
In-Reply-To: <032ba44f-1552-45bc-a68c-c848bf6da784@oracle.com>

On Tue, Jun 18, 2024 at 11:14:33PM +0530, Anand Khoje wrote:
> 
> On 6/16/24 21:14, Leon Romanovsky wrote:
> > On Fri, Jun 14, 2024 at 01:31:35PM +0530, Anand Khoje wrote:
> > > In a non-FLR context, CX-5 at times requests the release of ~8
> > > million FW pages. This requires a huge number of cmd mailboxes,
> > > which are to be released once the pages are reclaimed. Releasing
> > > that many cmd mailboxes consumes CPU time running into many
> > > seconds, which on non-preemptible kernels leads to critical
> > > processes starving on that CPU's runqueue.
> > > To alleviate this, this change restricts the total number of
> > > pages a worker will try to reclaim to a maximum of 50K pages in
> > > one go.
> > > The 50K limit is aligned with the current firmware capacity/limit
> > > of releasing 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES +
> > > MLX5_PAGES_TAKE device command.
> > >
> > > Our tests have shown a significant benefit from this change in
> > > terms of time consumed by dma_pool_free().
> > > During a test where an event was raised by the HCA to release
> > > 1.3 million pages, the following observations were made:
> > >
> > > - Without this change:
> > >   Around 20K mailbox messages were allocated to accommodate the
> > >   DMA addresses of 1.3 million pages.
> > >   The average time spent by dma_pool_free() to free the DMA pool
> > >   is between 16 usec and 32 usec.
> > >
> > >   value  ------------- Distribution ------------- count
> > >     256 |                                         0
> > >     512 |@                                        287
> > >    1024 |@@@                                      1332
> > >    2048 |@                                        656
> > >    4096 |@@@@@                                    2599
> > >    8192 |@@@@@@@@@@                               4755
> > >   16384 |@@@@@@@@@@@@@@@                          7545
> > >   32768 |@@@@@                                    2501
> > >   65536 |                                         0
> > >
> > > - With this change:
> > >   Around 800 mailbox messages were allocated; this was to
> > >   accommodate the DMA addresses of only 50K pages.
> > >   The average time spent by dma_pool_free() to free the DMA pool
> > >   in this case lies between 1 usec and 2 usec.
> > >
> > >   value  ------------- Distribution ------------- count
> > >     256 |                                         0
> > >     512 |@@@@@@@@@@@@@@@@@@                       346
> > >    1024 |@@@@@@@@@@@@@@@@@@@@@@                   435
> > >    2048 |                                         0
> > >    4096 |                                         0
> > >    8192 |                                         1
> > >   16384 |                                         0
> > >
> > > Signed-off-by: Anand Khoje
> > > ---
> > > Changes in v3:
> > >   - Shifted the logic to the function req_pages_handler() as per
> > >     Leon's suggestion.
> > > ---
> > >  drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 7 ++++++-
> > >  1 file changed, 6 insertions(+), 1 deletion(-)
> > 
> > The title has an extra space:
> > "net/mlx5 : Reclaim max 50K pages at once" -> "net/mlx5: Reclaim max 50K pages at once"
> > 
> > But the code looks good to me.
> > 
> > Thanks,
> > Reviewed-by: Leon Romanovsky
> 
> Hi Leon,
> 
> Thanks for providing the R-B. Should I send a v4 with the fix for the
> extra space issue?

Yes, please. Also run get_maintainer.pl to get the correct email
addresses for the maintainers and mailing lists; this patch will be
applied by the netdev maintainers.

Thanks

> 
> -Anand
> 
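For illustration, a minimal sketch of the capping idea described in the
patch message above. This is not the actual mlx5 change in pagealloc.c;
the names (MAX_RECLAIM_NPAGES, clamp_reclaim_request) and the assumption
that a release request arrives as a negative page count are hypothetical
simplifications:

	#include <stdint.h>

	/* FW can release at most 50K pages per MANAGE_PAGES command,
	 * so cap one worker pass at the same figure (assumption based
	 * on the commit message above, not the real mlx5 symbols). */
	#define MAX_RECLAIM_NPAGES (50 * 1024)

	/* Clamp a single firmware page-release request so one worker
	 * invocation reclaims at most MAX_RECLAIM_NPAGES pages. The
	 * remainder is handled by subsequent passes, which keeps each
	 * burst of cmd-mailbox allocation and dma_pool_free() small. */
	static int32_t clamp_reclaim_request(int32_t npages_req)
	{
		/* Negative values mean "give pages back to the host". */
		if (npages_req < -MAX_RECLAIM_NPAGES)
			return -MAX_RECLAIM_NPAGES;
		return npages_req;
	}

Under this sketch, a request to release 1.3 million pages would be
served as roughly 26 passes of 50K pages each instead of one pass that
builds ~20K mailbox messages, matching the before/after distributions
shown above.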