From: Jakub Kicinski <kuba@kernel.org>
To: Erwan Velu <erwanaliasr1@gmail.com>
Cc: Yury Norov <yury.norov@gmail.com>,
	Tariq Toukan <ttoukan.linux@gmail.com>,
	Erwan Velu <e.velu@criteo.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Leon Romanovsky <leon@kernel.org>,
	Tariq Toukan <tariqt@nvidia.com>, Yury Norov <ynorov@nvidia.com>,
	Rahul Anand <raanand@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] net/mlx5: Use cpumask_local_spread() instead of custom code
Date: Mon, 19 Aug 2024 08:34:26 -0700
Message-ID: <20240819083426.1aebc18f@kernel.org>
In-Reply-To: <CAL2JzuzEBAdkQfRPLXQHry2a2M7_EsScOV_kheo+oXUuKM9rWA@mail.gmail.com>

On Mon, 19 Aug 2024 12:15:10 +0200 Erwan Velu wrote:
> 2/ I was also wondering if we shouldn't have a kernel module option to
> choose the allocation algorithm (I have a POC in that direction).
> The benefit would be letting the platform owner select whichever
> allocation algorithm the sysadmin needs.
> On single-package AMD EPYC servers, the NUMA topology is pretty handy
> for mapping the L3 affinity, but it doesn't provide any particular hint
> about the actual "distance" to the network device.
> You can have up to 12 NUMA nodes on a single package, yet the actual
> distance to the NIC is almost identical for all of them, since every
> core has to go through the I/O die to reach the PCI devices.
> We can see assumptions like "one NUMA node per package" baked into the
> NUMA allocation logic; the actual distance between nodes should be
> considered in the allocation logic instead.
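
FWIW, cpumask_local_spread(), the helper the patch switches to, already
encodes one such policy: it returns the i-th online CPU ordered by
increasing NUMA distance from a given node. A minimal sketch of the
pattern (the my_drv_* names are hypothetical, not mlx5's actual code):

#include <linux/cpumask.h>
#include <linux/device.h>

/* Hypothetical driver context; only ->dev matters for this sketch. */
struct my_drv_dev {
	struct device *dev;
};

/* Pick the CPU for the i-th completion vector.  cpumask_local_spread()
 * walks online CPUs in order of increasing NUMA distance from the
 * device's node, so vector 0 lands on the local node first and later
 * vectors spill over to increasingly distant nodes. */
static unsigned int my_drv_irq_cpu(struct my_drv_dev *mdev, unsigned int vec)
{
	return cpumask_local_spread(vec, dev_to_node(mdev->dev));
}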

I think user space has more information on what the appropriate
placement is than the kernel. We can have a reasonable default,
and maybe try not to stupidly reset the settings when the config
changes (I don't think mlx5 does that, but other drivers do);
but having a way to select the algorithm would only work if there
were a well-understood and finite set of algorithms.

IMHO we should try to sell this task to systemd-networkd or some other
user space daemon. We now have netlink access to NAPI information,
including the IRQ<>NAPI<>queue mapping. It's possible to implement
completely driver-agnostic IRQ mapping from user space (without the
need to grep IRQ names like we used to).
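
A minimal sketch of such a query from user space, assuming libnl-3 and
the <linux/netdev.h> uapi header from a recent kernel (error handling
trimmed; illustrative, not a blessed tool):

/* Dump NAPI instances (id, ifindex, irq) from the "netdev" generic
 * netlink family.  Build with:
 *   cc napi_dump.c $(pkg-config --cflags --libs libnl-genl-3.0)
 */
#include <stdio.h>
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/netdev.h>

static int napi_cb(struct nl_msg *msg, void *arg)
{
	struct genlmsghdr *gnlh = nlmsg_data(nlmsg_hdr(msg));
	struct nlattr *tb[NETDEV_A_NAPI_MAX + 1];

	nla_parse(tb, NETDEV_A_NAPI_MAX, genlmsg_attrdata(gnlh, 0),
		  genlmsg_attrlen(gnlh, 0), NULL);

	if (tb[NETDEV_A_NAPI_ID] && tb[NETDEV_A_NAPI_IFINDEX] &&
	    tb[NETDEV_A_NAPI_IRQ])
		printf("napi %u  ifindex %u  irq %u\n",
		       nla_get_u32(tb[NETDEV_A_NAPI_ID]),
		       nla_get_u32(tb[NETDEV_A_NAPI_IFINDEX]),
		       nla_get_u32(tb[NETDEV_A_NAPI_IRQ]));
	return NL_OK;
}

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg;
	int family;

	genl_connect(sk);
	family = genl_ctrl_resolve(sk, NETDEV_FAMILY_NAME);

	/* Print each NAPI instance as the dump replies come in. */
	nl_socket_modify_cb(sk, NL_CB_VALID, NL_CB_CUSTOM, napi_cb, NULL);

	msg = nlmsg_alloc();
	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0,
		    NLM_F_DUMP, NETDEV_CMD_NAPI_GET, NETDEV_FAMILY_VERSION);
	nl_send_auto(sk, msg);
	nl_recvmsgs_default(sk);

	nlmsg_free(msg);
	nl_socket_free(sk);
	return 0;
}

With the per-NAPI IRQ number in hand, a daemon can write the desired
mask to /proc/irq/<irq>/smp_affinity without any driver-specific
knowledge.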


Thread overview: 8+ messages
2024-08-12  8:22 [PATCH] net/mlx5: Use cpumask_local_spread() instead of custom code Erwan Velu
2024-08-14  7:48 ` Tariq Toukan
2024-08-14 14:45   ` Yury Norov
2024-08-15 10:39     ` Tariq Toukan
2024-08-19 10:15     ` Erwan Velu
2024-08-19 15:34       ` Jakub Kicinski [this message]
2024-08-19 15:41         ` Erwan Velu
2024-08-16  2:10 ` patchwork-bot+netdevbpf
