From: Shay Drori <shayd@nvidia.com>
To: Ahmed Zaki <ahmed.zaki@intel.com>, <netdev@vger.kernel.org>
Cc: <intel-wired-lan@lists.osuosl.org>, <andrew+netdev@lunn.ch>,
<edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>,
<davem@davemloft.net>, <michael.chan@broadcom.com>,
<tariqt@nvidia.com>, <anthony.l.nguyen@intel.com>,
<przemyslaw.kitszel@intel.com>, <jdamato@fastly.com>,
<akpm@linux-foundation.org>
Subject: Re: [Intel-wired-lan] [PATCH net-next v2 0/8] net: napi: add CPU affinity to napi->config
Date: Sun, 22 Dec 2024 11:23:31 +0200 [thread overview]
Message-ID: <58e58bb6-730f-4167-9f86-92ea8ec17019@nvidia.com> (raw)
In-Reply-To: <20241218165843.744647-1-ahmed.zaki@intel.com>
On 18/12/2024 18:58, Ahmed Zaki wrote:
>
> Move the IRQ affinity management to the napi struct. All drivers that are
> already using netif_napi_set_irq() are modified to the new API. Except
> mlx5 because it is implementing IRQ pools and moving to the new API does
> not seem trivial.
>
> Tested on bnxt, ice and idpf.
> ---
> Opens: is cpu_online_mask the best default mask? drivers do this differently
cpu_online_mask is not the best default mask for IRQ affinity
management, for two reasons:
- Per-IRQ CPU spreading: many drivers assign a different CPU to each
  IRQ to spread the load across cores. A single mask covering all
  online CPUs loses that spreading and hurts CPU utilization.
- NUMA locality: NUMA topology plays a crucial role in IRQ
  performance. Assigning an IRQ to a CPU on the same NUMA node as the
  associated device minimizes the latency caused by remote memory
  access [1].

[1] For more details on NUMA preference, see commit
2acda57736de1e486036b90a648e67a3599080a1
>
> v2:
> - Also move the ARFS IRQ affinity management from drivers to core. Via
> netif_napi_set_irq(), drivers can ask the core to add the IRQ to the
> ARFS rmap (already allocated by the driver).
>
> RFC -> v1:
> - https://lore.kernel.org/netdev/20241210002626.366878-1-ahmed.zaki@intel.com/
> - move static inline affinity functions to net/dev/core.c
> - add the new napi->irq_flags (patch 1)
> - add code changes to bnxt, mlx4 and ice.
>
> Ahmed Zaki (8):
> net: napi: add irq_flags to napi struct
> net: allow ARFS rmap management in core
> lib: cpu_rmap: allow passing a notifier callback
> net: napi: add CPU affinity to napi->config
> bnxt: use napi's irq affinity
> ice: use napi's irq affinity
> idpf: use napi's irq affinity
> mlx4: use napi's irq affinity
>
> drivers/net/ethernet/amazon/ena/ena_netdev.c | 21 ++---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 51 +++--------
> drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 -
> drivers/net/ethernet/broadcom/tg3.c | 2 +-
> drivers/net/ethernet/cisco/enic/enic_main.c | 3 +-
> drivers/net/ethernet/google/gve/gve_utils.c | 2 +-
> .../net/ethernet/hisilicon/hns3/hns3_enet.c | 2 +-
> drivers/net/ethernet/intel/e1000/e1000_main.c | 2 +-
> drivers/net/ethernet/intel/e1000e/netdev.c | 2 +-
> drivers/net/ethernet/intel/ice/ice.h | 3 -
> drivers/net/ethernet/intel/ice/ice_arfs.c | 10 +--
> drivers/net/ethernet/intel/ice/ice_base.c | 7 +-
> drivers/net/ethernet/intel/ice/ice_lib.c | 14 +--
> drivers/net/ethernet/intel/ice/ice_main.c | 44 ----------
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 19 ++--
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 +-
> drivers/net/ethernet/mellanox/mlx4/en_cq.c | 8 +-
> .../net/ethernet/mellanox/mlx4/en_netdev.c | 33 +------
> drivers/net/ethernet/mellanox/mlx4/eq.c | 24 +----
> drivers/net/ethernet/mellanox/mlx4/main.c | 42 +--------
> drivers/net/ethernet/mellanox/mlx4/mlx4.h | 1 -
> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 -
> .../net/ethernet/mellanox/mlx5/core/en_main.c | 2 +-
> .../net/ethernet/mellanox/mlx5/core/pci_irq.c | 2 +-
> drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 3 +-
> drivers/net/ethernet/qlogic/qede/qede_main.c | 28 +++---
> drivers/net/ethernet/sfc/falcon/efx.c | 9 ++
> drivers/net/ethernet/sfc/falcon/nic.c | 10 ---
> drivers/net/ethernet/sfc/nic.c | 2 +-
> drivers/net/ethernet/sfc/siena/efx_channels.c | 9 ++
> drivers/net/ethernet/sfc/siena/nic.c | 10 ---
> include/linux/cpu_rmap.h | 13 ++-
> include/linux/netdevice.h | 23 ++++-
> lib/cpu_rmap.c | 20 ++---
> net/core/dev.c | 87 ++++++++++++++++++-
> 35 files changed, 215 insertions(+), 302 deletions(-)
>
> --
> 2.43.0
>
Thread overview: 23+ messages
2024-12-18 16:58 [Intel-wired-lan] [PATCH net-next v2 0/8] net: napi: add CPU affinity to napi->config Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 1/8] net: napi: add irq_flags to napi struct Ahmed Zaki
2024-12-20 3:34 ` Jakub Kicinski
2024-12-20 14:50 ` Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 2/8] net: allow ARFS rmap management in core Ahmed Zaki
2024-12-18 19:56 ` kernel test robot
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 3/8] lib: cpu_rmap: allow passing a notifier callback Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 4/8] net: napi: add CPU affinity to napi->config Ahmed Zaki
2024-12-18 20:16 ` kernel test robot
2024-12-18 20:27 ` kernel test robot
2024-12-20 3:42 ` Jakub Kicinski
2024-12-20 14:51 ` Ahmed Zaki
2024-12-20 17:23 ` Jakub Kicinski
2024-12-20 19:15 ` Ahmed Zaki
2024-12-20 19:37 ` Jakub Kicinski
2024-12-20 20:14 ` Ahmed Zaki
2024-12-20 20:51 ` Jakub Kicinski
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 5/8] bnxt: use napi's irq affinity Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 6/8] ice: " Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 7/8] idpf: " Ahmed Zaki
2024-12-18 16:58 ` [Intel-wired-lan] [PATCH net-next v2 8/8] mlx4: " Ahmed Zaki
2024-12-22 9:23 ` Shay Drori [this message]
2025-01-02 21:38 ` [Intel-wired-lan] [PATCH net-next v2 0/8] net: napi: add CPU affinity to napi->config Ahmed Zaki