From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Tariq Toukan <tariqt@nvidia.com>
Cc: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	netdev@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
	Gal Pressman <gal@nvidia.com>,
	Leon Romanovsky <leonro@nvidia.com>,
	Simon Horman <horms@kernel.org>,
	Donald Hunter <donald.hunter@gmail.com>,
	Jiri Pirko <jiri@resnulli.us>, Jonathan Corbet <corbet@lwn.net>,
	Leon Romanovsky <leon@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Richard Cochran <richardcochran@gmail.com>,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
	William Tu <witu@nvidia.com>, Bodong Wang <bodong@nvidia.com>
Subject: Re: [PATCH net-next 07/15] net/mlx5e: reduce rep rxq depth to 256 for ECPF
Date: Mon, 10 Feb 2025 10:49:49 +0100	[thread overview]
Message-ID: <Z6nLtN5rn68kY4i0@mev-dev.igk.intel.com> (raw)
In-Reply-To: <20250209101716.112774-8-tariqt@nvidia.com>

On Sun, Feb 09, 2025 at 12:17:08PM +0200, Tariq Toukan wrote:
> From: William Tu <witu@nvidia.com>
> 
> Experiments show that a single-queue representor netdev consumes around
> 2.8MB of kernel memory, of which 1.8MB is due to the page pool for the
> RXQ. Scaling to a thousand representors consumes 2.8GB, which becomes a
> memory pressure issue for embedded devices such as the BlueField-2
> (16GB memory) and BlueField-3 (32GB memory).
> 
> Since representor netdevs mostly handle miss traffic, and ideally most
> of the traffic will be offloaded, reduce the non-uplink rep netdev's
> default RXQ depth from 1024 to 256 if the mdev is the ECPF eswitch
> manager. This saves around 1.5MB of memory per regular RQ,
> (1024 - 256) * 2KB, allocated from the page pool.
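
A minimal sketch of the arithmetic above, assuming (per the explanation
further down) 4KB pages and one half-page (2KB) consumed per RXQ entry
for MTU 1500 + headroom:

	#include <stdio.h>

	int main(void)
	{
		unsigned int old_depth = 1024, new_depth = 256;
		unsigned int bytes_per_entry = 2048;	/* half of a 4KB page */
		unsigned int saved = (old_depth - new_depth) * bytes_per_entry;

		/* (1024 - 256) * 2KB = 1536KB, i.e. ~1.5MB per regular RQ */
		printf("saved per regular RQ: %u KB\n", saved / 1024);
		return 0;
	}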
> 
> With an RXQ depth of 256, the netlink page pool tool reports:
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> 	 --dump page-pool-get
> [{'id': 277,
>   'ifindex': 9,
>   'inflight': 128,
>   'inflight-mem': 786432,
>   'napi-id': 775}]
> 
> This is because MTU 1500 + headroom consumes half a page, so 256 RXQ
> entries consume around 128 pages (thus creating a page pool of size
> 128), shown above as 'inflight'.
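
The 'inflight' value can be cross-checked the same way; a small sketch
under the same half-page-per-entry assumption:

	#include <stdio.h>

	int main(void)
	{
		unsigned int rxq_depth = 256;
		unsigned int bytes_per_entry = 2048;	/* MTU 1500 + headroom */
		unsigned int page_size = 4096;

		/* 256 entries * 2KB / 4KB = 128 pages in flight */
		printf("page pool size: %u pages\n",
		       rxq_depth * bytes_per_entry / page_size);
		return 0;
	}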
> 
> Note that each netdev has multiple types of RQs, including regular,
> XSK, PTP, Drop, and Trap RQs. Since non-uplink representors only
> support the regular RQ, this patch only changes the regular RQ's
> default depth.
> 
> Signed-off-by: William Tu <witu@nvidia.com>
> Reviewed-by: Bodong Wang <bodong@nvidia.com>
> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> index fdff9fd8a89e..da399adc8854 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> @@ -65,6 +65,7 @@
>  #define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \
>  	max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
>  #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1
> +#define MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE 0x8
>  
>  static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
>  
> @@ -855,6 +856,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
>  
>  	/* RQ */
>  	mlx5e_build_rq_params(mdev, params);
> +	if (!mlx5e_is_uplink_rep(priv) && mlx5_core_is_ecpf(mdev))
> +		params->log_rq_mtu_frames = MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE;
>  
>  	/* If netdev is already registered (e.g. move from nic profile to uplink,
>  	 * RTNL lock must be held before triggering netdev notifiers.
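
For readers less familiar with these params: the new define is a log2
value, so 0x8 yields the 256-entry depth from the subject line. A tiny
illustration (the log2 convention is inferred from the existing
MLX5E_PARAMS_*_LOG_* defines, not spelled out in this patch):

	#include <stdio.h>

	#define MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE 0x8

	int main(void)
	{
		/* log_rq_mtu_frames stores a log2 ring size: 1 << 8 = 256 */
		printf("rep RQ depth: %d entries\n",
		       1 << MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE);
		return 0;
	}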

Thanks for the detailed commit message.

Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>

> -- 
> 2.45.0

Thread overview: 24+ messages
2025-02-09 10:17 [PATCH net-next 00/15] Rate management on traffic classes + misc Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 01/15] devlink: Extend devlink rate API with traffic classes bandwidth management Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 02/15] net/mlx5: Add no-op implementation for setting tc-bw on rate objects Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 03/15] net/mlx5: Add support for setting tc-bw on nodes Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 04/15] net/mlx5: Add traffic class scheduling support for vport QoS Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 05/15] net/mlx5: Manage TC arbiter nodes and implement full support for tc-bw Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 06/15] net/mlx5e: reduce the max log mpwrq sz for ECPF and reps Tariq Toukan
2025-02-10  9:47   ` Michal Swiatkowski
2025-02-09 10:17 ` [PATCH net-next 07/15] net/mlx5e: reduce rep rxq depth to 256 for ECPF Tariq Toukan
2025-02-10  9:49   ` Michal Swiatkowski [this message]
2025-02-09 10:17 ` [PATCH net-next 08/15] net/mlx5e: set the tx_queue_len for pfifo_fast Tariq Toukan
2025-02-10  9:51   ` Michal Swiatkowski
2025-02-09 10:17 ` [PATCH net-next 09/15] net/mlx5: Rename and move mlx5_esw_query_vport_vhca_id Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 10/15] net/mlx5: Expose ICM consumption per function Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 11/15] net/mlx5e: Move RQs diagnose to a dedicated function Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 12/15] net/mlx5e: Add direct TIRs to devlink rx reporter diagnose Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 13/15] net/mlx5e: Expose RSS via " Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 14/15] net/mlx5: Extend Ethtool loopback selftest to support non-linear SKB Tariq Toukan
2025-02-09 10:17 ` [PATCH net-next 15/15] net/mlx5: XDP, Enable TX side XDP multi-buffer support Tariq Toukan
2025-02-12  3:36 ` [PATCH net-next 00/15] Rate management on traffic classes + misc Jakub Kicinski
2025-02-12 11:08   ` Tariq Toukan
2025-02-12 20:19   ` Tariq Toukan
2025-03-06 14:08   ` Cosmin Ratiu
2025-02-12 19:20 ` patchwork-bot+netdevbpf
