From: Martin Habets <habetsm.xilinx@gmail.com>
To: "Íñigo Huguet" <ihuguet@redhat.com>
Cc: ecree.xilinx@gmail.com, davem@davemloft.net, kuba@kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH v2 net-next 2/2] sfc: set affinity hints in local NUMA node only
Date: Tue, 1 Mar 2022 09:16:47 +0000
Message-ID: <20220301091647.3hb5b4qtpdf3nxqs@gmail.com>
In-Reply-To: <20220228132254.25787-3-ihuguet@redhat.com>

On Mon, Feb 28, 2022 at 02:22:54PM +0100, Íñigo Huguet wrote:
> Affinity hints were being set to CPUs in the local NUMA node first, and
> then to CPUs in other nodes. This created two unintended issues:
> 1. Channels that were meant to be assigned each to a different physical
>    core ended up assigned to hyperthreading siblings, because those are
>    in the same NUMA node.
>    Since the previous patch in this series, this no longer happens with
>    the default rss_cpus modparam, because fewer channels are created.
> 2. XDP channels could be assigned to CPUs in a different NUMA node,
>    degrading performance severely (to less than half in some of my
>    tests).
> 
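Regarding issue 1: the old code relied on cpumask_local_spread(), which
enumerates the CPUs of the local NUMA node in ascending order before any
remote CPUs, so consecutive channel numbers land on consecutive CPU
numbers of that node, hyperthread siblings included. A rough userspace
mock of that ordering (plain arrays instead of the kernel cpumask API,
with the topology hard-coded from the example quoted further below)
illustrates the effect:

/* Userspace mock of the local-node-first enumeration order; this is
 * not the kernel's cpumask_local_spread(), just an illustration of why
 * 16 channels end up sharing physical cores within one node.
 */
#include <stdio.h>

int main(void)
{
	/* node0 = 0-7,16-23 as in the lscpu output below; presumably the
	 * second half are hyperthread siblings of the first half, which
	 * is what issue 1 above describes.
	 */
	int node0_cpus[] = { 0, 1, 2, 3, 4, 5, 6, 7,
			     16, 17, 18, 19, 20, 21, 22, 23 };
	int nchannels = 16;	/* regular channels in the "before" example */

	for (int ch = 0; ch < nchannels; ch++)
		printf("channel %2d -> CPU %d\n", ch, node0_cpus[ch]);
	/* Channels 8-15 land on CPUs 16-23: the siblings of 0-7. */
	return 0;
}
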
> This patch sets the affinity hints so that the channels are spread only
> across the CPUs of the local NUMA node. A fallback to any online CPU is
> added for the case that no CPU in the local NUMA node is online.
> 
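The spreading pattern implemented by the diff further down reduces to a
round-robin over the online CPUs of the device's NUMA node, wrapping
around when there are more channels than local CPUs, and falling back
to all online CPUs if the local node has none online. A minimal
userspace sketch of that pattern (plain arrays stand in for the kernel
cpumask API; function and variable names here are illustrative, not the
driver's):

/* Minimal sketch of the new spreading pattern: round-robin over the
 * local node's online CPUs with wrap-around. The fallback to the full
 * online CPU list when the node has no online CPU is noted below but
 * not exercised here.
 */
#include <stdio.h>

static int pick_cpu(const int *cpus, int ncpus, int *pos)
{
	/* Advance to the next CPU, wrapping to the first one when we run
	 * past the end; this has the same effect as the
	 * cpumask_next_and()/cpumask_first_and() pair used in the patch.
	 */
	if (++(*pos) >= ncpus)
		*pos = 0;
	return cpus[*pos];
}

int main(void)
{
	/* Online CPUs of node0 in the example system (0-7,16-23). */
	int node0_online[] = { 0, 1, 2, 3, 4, 5, 6, 7,
			       16, 17, 18, 19, 20, 21, 22, 23 };
	int nlocal = 16;
	int nchannels = 24;	/* 8 regular + 16 XDP, as in the example */
	int pos = -1;

	/* If nlocal were 0, the real code would substitute the list of
	 * all online CPUs before this loop.
	 */
	for (int ch = 0; ch < nchannels; ch++)
		printf("channel %2d -> CPU %d\n",
		       ch, pick_cpu(node0_online, nlocal, &pos));
	return 0;
}

The output reproduces the "after" assignment quoted below: channels wrap
back to CPUs 0-7 once the 16 local CPUs have been used.
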
> Example of CPUs being assigned in a non-optimal way before this and the
> previous patch (note: in this system, xdp-8 to xdp-15 are created
> because num_possible_cpus == 64, but num_present_cpus == 32, so they
> are never used):
> 
> $ lscpu | grep -i numa
> NUMA node(s):                    2
> NUMA node0 CPU(s):               0-7,16-23
> NUMA node1 CPU(s):               8-15,24-31
> 
> $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
> /proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
> /proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
> /proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
> /proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
> /proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
> /proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
> /proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
> /proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
> /proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
> /proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
> /proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
> /proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
> /proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
> /proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
> /proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
> /proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
> /proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
> /proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
> /proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
> /proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
> /proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
> /proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
> /proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
> /proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
> /proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
> /proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
> /proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
> /proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
> /proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
> /proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
> /proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
> /proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31
> 
> CPU assignments after this and the previous patch: normal channels are
> created only one per core in the local NUMA node, and affinities are
> set only to CPUs in that node:
> 
> $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
> /proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
> /proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
> /proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
> /proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
> /proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
> /proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
> /proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
> /proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
> /proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
> /proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
> /proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
> /proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
> /proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
> /proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
> /proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
> /proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
> /proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
> /proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
> /proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
> /proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
> /proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
> /proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
> /proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
> /proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7
> 
> Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>

Acked-by: Martin Habets <habetsm.xilinx@gmail.com>

> ---
> v2: changed the variable declaration order to follow the reverse Xmas
> tree convention
> ---
>  drivers/net/ethernet/sfc/efx_channels.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
> index ec6c2f231e73..47467694f242 100644
> --- a/drivers/net/ethernet/sfc/efx_channels.c
> +++ b/drivers/net/ethernet/sfc/efx_channels.c
> @@ -385,12 +385,20 @@ int efx_probe_interrupts(struct efx_nic *efx)
>  #if defined(CONFIG_SMP)
>  void efx_set_interrupt_affinity(struct efx_nic *efx)
>  {
> +	int numa_node = pcibus_to_node(efx->pci_dev->bus);
> +	const struct cpumask *numa_mask = cpumask_of_node(numa_node);
>  	struct efx_channel *channel;
>  	unsigned int cpu;
>  
> +	/* If no online CPUs in local node, fallback to any online CPU */
> +	if (cpumask_first_and(cpu_online_mask, numa_mask) >= nr_cpu_ids)
> +		numa_mask = cpu_online_mask;
> +
> +	cpu = -1;
>  	efx_for_each_channel(channel, efx) {
> -		cpu = cpumask_local_spread(channel->channel,
> -					   pcibus_to_node(efx->pci_dev->bus));
> +		cpu = cpumask_next_and(cpu, cpu_online_mask, numa_mask);
> +		if (cpu >= nr_cpu_ids)
> +			cpu = cpumask_first_and(cpu_online_mask, numa_mask);
>  		irq_set_affinity_hint(channel->irq, cpumask_of(cpu));
>  	}
>  }
> -- 
> 2.34.1

Thread overview: 6+ messages
2022-02-28 13:22 [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities Íñigo Huguet
2022-02-28 13:22 ` [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only Íñigo Huguet
2022-03-01  9:16   ` Martin Habets
2022-02-28 13:22 ` [PATCH v2 net-next 2/2] sfc: set affinity hints " Íñigo Huguet
2022-03-01  9:16   ` Martin Habets [this message]
2022-03-02  1:40 ` [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities patchwork-bot+netdevbpf
