From: Zhu Yanjun <yanjun.zhu@linux.dev>
To: Mark Bloch <mbloch@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	Simon Horman <horms@kernel.org>
Cc: saeedm@nvidia.com, gal@nvidia.com, leonro@nvidia.com,
	tariqt@nvidia.com, Leon Romanovsky <leon@kernel.org>,
	netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-kernel@vger.kernel.org, Moshe Shemesh <moshe@nvidia.com>
Subject: Re: [PATCH net 1/9] net/mlx5: Ensure fw pages are always allocated on same NUMA
Date: Fri, 13 Jun 2025 09:22:27 -0700	[thread overview]
Message-ID: <1688e772-3067-4277-ad45-6564b4fbbddf@linux.dev> (raw)
In-Reply-To: <20250610151514.1094735-2-mbloch@nvidia.com>

On 2025/6/10 8:15, Mark Bloch wrote:
> From: Moshe Shemesh <moshe@nvidia.com>
> 
> When firmware asks the driver to allocate more pages via the
> give_pages event, the driver should always allocate them from the
> same NUMA node, namely the device's original node. The current code
> uses dev_to_node(), which can return a different node because other
> driver flows, such as mlx5_dma_zalloc_coherent_node(), change it.
> Instead, use the saved NUMA node when allocating firmware pages.

I'm not sure whether NUMA balancing was considered here.

If I understand correctly, after this commit is applied, all firmware
pages will be allocated from a single NUMA node, namely the device's
original node. That seems like it could lead to NUMA imbalance.

With dev_to_node(), by contrast, pages could be allocated from other
NUMA nodes, which might help maintain better balance across nodes.
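
The reason dev_to_node() can return a different node is that
mlx5_dma_zalloc_coherent_node() temporarily overrides the device's
node around a coherent allocation. A simplified sketch of that
pattern, based on my reading of alloc.c (details may not match the
exact tree):

static void *mlx5_dma_zalloc_coherent_node(struct mlx5_core_dev *dev,
					   size_t size, dma_addr_t *dma_handle,
					   int node)
{
	struct device *device = mlx5_core_dma_dev(dev);
	struct mlx5_priv *priv = &dev->priv;
	int original_node;
	void *cpu_handle;

	/* serialize against other callers of this helper */
	mutex_lock(&priv->alloc_mutex);
	original_node = dev_to_node(device);
	set_dev_node(device, node);	/* dev_to_node() now reports 'node' */
	cpu_handle = dma_alloc_coherent(device, size, dma_handle, GFP_KERNEL);
	set_dev_node(device, original_node);
	mutex_unlock(&priv->alloc_mutex);

	return cpu_handle;
}

An alloc_system_page() call that reads dev_to_node() inside that
window, without taking alloc_mutex, sees whatever node the last
caller requested, which is presumably why the fix switches to the
saved dev->priv.numa_node.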

In the past I encountered a NUMA balancing issue caused by an mlx5
NIC, so using dev_to_node() might be beneficial in addressing similar
problems.
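
That said, if spreading firmware pages across nodes is actually the
goal, doing it explicitly would be cleaner than relying on the
transient override above. A purely hypothetical sketch (fw_page_nid()
is not an existing driver function):

/* needs <linux/nodemask.h>; illustration only, not SMP-safe */
static int fw_page_nid(void)
{
	static int last_nid;

	/* cycle through all nodes that have memory */
	last_nid = next_node_in(last_nid, node_states[N_MEMORY]);
	return last_nid;
}

alloc_system_page() could then feed that nid to alloc_pages_node()
instead of a fixed dev->priv.numa_node.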

Thanks,
Zhu Yanjun

> 
> Fixes: 311c7c71c9bb ("net/mlx5e: Allocate DMA coherent memory on reader NUMA node")
> Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
> Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
> Signed-off-by: Mark Bloch <mbloch@nvidia.com>
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> index 972e8e9df585..9bc9bd83c232 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> @@ -291,7 +291,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
>   static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
>   {
>   	struct device *device = mlx5_core_dma_dev(dev);
> -	int nid = dev_to_node(device);
> +	int nid = dev->priv.numa_node;
>   	struct page *page;
>   	u64 zero_addr = 1;
>   	u64 addr;


Thread overview: 20+ messages
2025-06-10 15:15 [PATCH net 0/9] mlx5 misc fixes 2025-06-10 Mark Bloch
2025-06-10 15:15 ` [PATCH net 1/9] net/mlx5: Ensure fw pages are always allocated on same NUMA Mark Bloch
2025-06-13 16:22   ` Zhu Yanjun [this message]
2025-06-15  5:55     ` Moshe Shemesh
2025-06-15 14:44       ` Zhu Yanjun
2025-06-19 16:31         ` Moshe Shemesh
2025-06-10 15:15 ` [PATCH net 2/9] net/mlx5: Fix ECVF vports unload on shutdown flow Mark Bloch
2025-06-10 15:15 ` [PATCH net 3/9] net/mlx5: Fix return value when searching for existing flow group Mark Bloch
2025-06-10 15:15 ` [PATCH net 4/9] net/mlx5: HWS, Init mutex on the correct path Mark Bloch
2025-06-10 15:15 ` [PATCH net 5/9] net/mlx5: HWS, fix missing ip_version handling in definer Mark Bloch
2025-06-10 15:15 ` [PATCH net 6/9] net/mlx5: HWS, make sure the uplink is the last destination Mark Bloch
2025-06-10 15:15 ` [PATCH net 7/9] net/mlx5e: Properly access RCU protected qdisc_sleeping variable Mark Bloch
2025-06-11 21:40   ` Jakub Kicinski
2025-06-12  7:31     ` Mark Bloch
2025-06-12 14:22       ` Jakub Kicinski
2025-06-12 14:47         ` Mark Bloch
2025-06-10 15:15 ` [PATCH net 8/9] net/mlx5e: Fix leak of Geneve TLV option object Mark Bloch
2025-06-10 15:15 ` [PATCH net 9/9] net/mlx5e: Fix number of lanes to UNKNOWN when using data_rate_oper Mark Bloch
2025-06-11 21:43 ` [PATCH net 0/9] mlx5 misc fixes 2025-06-10 Jakub Kicinski
2025-06-11 21:50 ` patchwork-bot+netdevbpf
