* [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only
2022-02-28 13:22 [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities Íñigo Huguet
@ 2022-02-28 13:22 ` Íñigo Huguet
2022-03-01 9:16 ` Martin Habets
2022-02-28 13:22 ` [PATCH v2 net-next 2/2] sfc: set affinity hints " Íñigo Huguet
2022-03-02 1:40 ` [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Íñigo Huguet @ 2022-02-28 13:22 UTC
To: ecree.xilinx, habetsm.xilinx, davem, kuba; +Cc: netdev, Íñigo Huguet
Handling channels from CPUs in a different NUMA node than the NIC can
penalize performance, so by default configure only one channel per core
in the NIC's local NUMA node, rather than one per core in the whole
system.
Fall back to all other online cores if there are no online CPUs in the
local NUMA node.
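For reference, here is a condensed, commented sketch of the counting
logic this patch adds, written as a standalone helper with the same
kernel cpumask/topology helpers. The name sketch_count_cores is made up
for illustration; this is not the exact driver code:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/pci.h>
#include <linux/topology.h>

/* Count one online CPU per physical core, optionally restricted to the
 * NIC's local NUMA node, by removing each visited CPU's hyperthreading
 * siblings from the mask as we go.
 */
static unsigned int sketch_count_cores(struct pci_dev *pci_dev, bool local_node)
{
        cpumask_var_t mask;
        unsigned int count = 0;
        int cpu;

        if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                return 1;       /* be conservative on allocation failure */

        /* start from all online CPUs ... */
        cpumask_copy(mask, cpu_online_mask);
        /* ... and optionally keep only those in the NIC's NUMA node */
        if (local_node)
                cpumask_and(mask, mask,
                            cpumask_of_node(pcibus_to_node(pci_dev->bus)));

        for_each_cpu(cpu, mask) {
                ++count;
                /* drop this CPU and its hyperthreading siblings */
                cpumask_andnot(mask, mask, topology_sibling_cpumask(cpu));
        }

        free_cpumask_var(mask);
        return count;   /* 0 means no online CPU in the local node */
}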
Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
---
drivers/net/ethernet/sfc/efx_channels.c | 50 ++++++++++++++++---------
1 file changed, 33 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index ead550ae2709..ec6c2f231e73 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -78,31 +78,48 @@ static const struct efx_channel_type efx_default_channel_type = {
* INTERRUPTS
*************/
-static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
+static unsigned int count_online_cores(struct efx_nic *efx, bool local_node)
{
- cpumask_var_t thread_mask;
+ cpumask_var_t filter_mask;
unsigned int count;
int cpu;
+
+ if (unlikely(!zalloc_cpumask_var(&filter_mask, GFP_KERNEL))) {
+ netif_warn(efx, probe, efx->net_dev,
+ "RSS disabled due to allocation failure\n");
+ return 1;
+ }
+
+ cpumask_copy(filter_mask, cpu_online_mask);
+ if (local_node) {
+ int numa_node = pcibus_to_node(efx->pci_dev->bus);
+
+ cpumask_and(filter_mask, filter_mask, cpumask_of_node(numa_node));
+ }
+
+ count = 0;
+ for_each_cpu(cpu, filter_mask) {
+ ++count;
+ cpumask_andnot(filter_mask, filter_mask, topology_sibling_cpumask(cpu));
+ }
+
+ free_cpumask_var(filter_mask);
+
+ return count;
+}
+
+static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
+{
+ unsigned int count;
if (rss_cpus) {
count = rss_cpus;
} else {
- if (unlikely(!zalloc_cpumask_var(&thread_mask, GFP_KERNEL))) {
- netif_warn(efx, probe, efx->net_dev,
- "RSS disabled due to allocation failure\n");
- return 1;
- }
-
- count = 0;
- for_each_online_cpu(cpu) {
- if (!cpumask_test_cpu(cpu, thread_mask)) {
- ++count;
- cpumask_or(thread_mask, thread_mask,
- topology_sibling_cpumask(cpu));
- }
- }
+ count = count_online_cores(efx, true);
- free_cpumask_var(thread_mask);
+ /* If no online CPUs in local node, fallback to any online CPUs */
+ if (count == 0)
+ count = count_online_cores(efx, false);
}
if (count > EFX_MAX_RX_QUEUES) {
--
2.34.1
* Re: [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only
2022-02-28 13:22 ` [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only Íñigo Huguet
@ 2022-03-01 9:16 ` Martin Habets
0 siblings, 0 replies; 6+ messages in thread
From: Martin Habets @ 2022-03-01 9:16 UTC
To: Íñigo Huguet; +Cc: ecree.xilinx, davem, kuba, netdev
On Mon, Feb 28, 2022 at 02:22:53PM +0100, Íñigo Huguet wrote:
> Handling channels from CPUs in a different NUMA node than the NIC can
> penalize performance, so by default configure only one channel per core
> in the NIC's local NUMA node, rather than one per core in the whole
> system.
>
> Fall back to all other online cores if there are no online CPUs in the
> local NUMA node.
>
> Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
> ---
> drivers/net/ethernet/sfc/efx_channels.c | 50 ++++++++++++++++---------
> 1 file changed, 33 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
> index ead550ae2709..ec6c2f231e73 100644
> --- a/drivers/net/ethernet/sfc/efx_channels.c
> +++ b/drivers/net/ethernet/sfc/efx_channels.c
> @@ -78,31 +78,48 @@ static const struct efx_channel_type efx_default_channel_type = {
> * INTERRUPTS
> *************/
>
> -static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
> +static unsigned int count_online_cores(struct efx_nic *efx, bool local_node)
> {
> - cpumask_var_t thread_mask;
> + cpumask_var_t filter_mask;
> unsigned int count;
> int cpu;
> +
> + if (unlikely(!zalloc_cpumask_var(&filter_mask, GFP_KERNEL))) {
> + netif_warn(efx, probe, efx->net_dev,
> + "RSS disabled due to allocation failure\n");
> + return 1;
> + }
> +
> + cpumask_copy(filter_mask, cpu_online_mask);
> + if (local_node) {
> + int numa_node = pcibus_to_node(efx->pci_dev->bus);
> +
> + cpumask_and(filter_mask, filter_mask, cpumask_of_node(numa_node));
> + }
> +
> + count = 0;
> + for_each_cpu(cpu, filter_mask) {
> + ++count;
> + cpumask_andnot(filter_mask, filter_mask, topology_sibling_cpumask(cpu));
> + }
> +
> + free_cpumask_var(filter_mask);
> +
> + return count;
> +}
> +
> +static unsigned int efx_wanted_parallelism(struct efx_nic *efx)
> +{
> + unsigned int count;
>
> if (rss_cpus) {
> count = rss_cpus;
> } else {
> - if (unlikely(!zalloc_cpumask_var(&thread_mask, GFP_KERNEL))) {
> - netif_warn(efx, probe, efx->net_dev,
> - "RSS disabled due to allocation failure\n");
> - return 1;
> - }
> -
> - count = 0;
> - for_each_online_cpu(cpu) {
> - if (!cpumask_test_cpu(cpu, thread_mask)) {
> - ++count;
> - cpumask_or(thread_mask, thread_mask,
> - topology_sibling_cpumask(cpu));
> - }
> - }
> + count = count_online_cores(efx, true);
>
> - free_cpumask_var(thread_mask);
> + /* If no online CPUs in local node, fallback to any online CPUs */
> + if (count == 0)
> + count = count_online_cores(efx, false);
> }
>
> if (count > EFX_MAX_RX_QUEUES) {
> --
> 2.34.1
--
Martin Habets <habetsm.xilinx@gmail.com>
* [PATCH v2 net-next 2/2] sfc: set affinity hints in local NUMA node only
2022-02-28 13:22 [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities Íñigo Huguet
2022-02-28 13:22 ` [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only Íñigo Huguet
@ 2022-02-28 13:22 ` Íñigo Huguet
2022-03-01 9:16 ` Martin Habets
2022-03-02 1:40 ` [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Íñigo Huguet @ 2022-02-28 13:22 UTC
To: ecree.xilinx, habetsm.xilinx, davem, kuba; +Cc: netdev, Íñigo Huguet
Affinity hints were being set to CPUs in the local NUMA node first, and
then to CPUs in other nodes. This created two unintended issues:
1. Channels meant to be assigned each to a different physical core were
   assigned to hyperthreading siblings, because those are in the same
   NUMA node.
   Since the previous patch in this series, this no longer happens with
   the default rss_cpus modparam, because fewer channels are created.
2. XDP channels could be assigned to CPUs in a different NUMA node,
   degrading performance too much (to less than half in some of my
   tests).
This patch sets the affinity hints so that channels are spread only
across the local NUMA node's CPUs. A fallback for the case that no CPU
in the local NUMA node is online has been added too.
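As a reference, a commented, self-contained sketch of the spreading
scheme described above, using the same kernel cpumask helpers as the
patch. The name sketch_spread_irqs and the plain IRQ-array interface are
made up for illustration; they are not the driver's real API:

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/topology.h>

/* Hand out online CPUs of the NIC's NUMA node to the IRQs round-robin,
 * wrapping when the node's CPUs are exhausted (this wrap is what reuses
 * CPUs 0-7 for xdp-8..xdp-15 in the second listing below). If the local
 * node has no online CPU, fall back to any online CPU.
 */
static void sketch_spread_irqs(struct pci_dev *pci_dev,
                               const unsigned int *irqs, unsigned int n_irqs)
{
        const struct cpumask *mask =
                cpumask_of_node(pcibus_to_node(pci_dev->bus));
        unsigned int i, cpu = -1;

        if (cpumask_first_and(cpu_online_mask, mask) >= nr_cpu_ids)
                mask = cpu_online_mask;  /* fallback: no local online CPU */

        for (i = 0; i < n_irqs; i++) {
                /* next online CPU in the chosen mask, wrapping around */
                cpu = cpumask_next_and(cpu, cpu_online_mask, mask);
                if (cpu >= nr_cpu_ids)
                        cpu = cpumask_first_and(cpu_online_mask, mask);
                irq_set_affinity_hint(irqs[i], cpumask_of(cpu));
        }
}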
Example of CPUs being assigned in a non-optimal way before this and the
previous patch (note: in this system, xdp-8 to xdp-15 are created
because num_possible_cpus == 64 but num_present_cpus == 32, so they are
never used):
$ lscpu | grep -i numa
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
$ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
/proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
/proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
/proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
/proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
/proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
/proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
/proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
/proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
/proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
/proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
/proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
/proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
/proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
/proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
/proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
/proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
/proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
/proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
/proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
/proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
/proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
/proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
/proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
/proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
/proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
/proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
/proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
/proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
/proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
/proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
/proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
/proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31
CPU assignments after this and the previous patch: normal channels are
created only one per core in the local NUMA node, and affinities are set
only to local NUMA node CPUs:
$ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
/proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
/proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
/proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
/proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
/proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
/proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
/proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
/proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
/proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
/proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
/proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
/proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
/proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
/proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
/proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
/proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
/proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
/proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
/proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
/proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
/proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
/proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
/proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
/proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7
Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
---
v2: changed variable declaration order to meet the reverse Xmas tree
convention
---
drivers/net/ethernet/sfc/efx_channels.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index ec6c2f231e73..47467694f242 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -385,12 +385,20 @@ int efx_probe_interrupts(struct efx_nic *efx)
#if defined(CONFIG_SMP)
void efx_set_interrupt_affinity(struct efx_nic *efx)
{
+ int numa_node = pcibus_to_node(efx->pci_dev->bus);
+ const struct cpumask *numa_mask = cpumask_of_node(numa_node);
struct efx_channel *channel;
unsigned int cpu;
+ /* If no online CPUs in local node, fallback to any online CPU */
+ if (cpumask_first_and(cpu_online_mask, numa_mask) >= nr_cpu_ids)
+ numa_mask = cpu_online_mask;
+
+ cpu = -1;
efx_for_each_channel(channel, efx) {
- cpu = cpumask_local_spread(channel->channel,
- pcibus_to_node(efx->pci_dev->bus));
+ cpu = cpumask_next_and(cpu, cpu_online_mask, numa_mask);
+ if (cpu >= nr_cpu_ids)
+ cpu = cpumask_first_and(cpu_online_mask, numa_mask);
irq_set_affinity_hint(channel->irq, cpumask_of(cpu));
}
}
--
2.34.1
* Re: [PATCH v2 net-next 2/2] sfc: set affinity hints in local NUMA node only
2022-02-28 13:22 ` [PATCH v2 net-next 2/2] sfc: set affinity hints " Íñigo Huguet
@ 2022-03-01 9:16 ` Martin Habets
0 siblings, 0 replies; 6+ messages in thread
From: Martin Habets @ 2022-03-01 9:16 UTC
To: Íñigo Huguet; +Cc: ecree.xilinx, davem, kuba, netdev
On Mon, Feb 28, 2022 at 02:22:54PM +0100, Íñigo Huguet wrote:
> Affinity hints were being set to CPUs in the local NUMA node first, and
> then to CPUs in other nodes. This created two unintended issues:
> 1. Channels meant to be assigned each to a different physical core were
>    assigned to hyperthreading siblings, because those are in the same
>    NUMA node.
>    Since the previous patch in this series, this no longer happens with
>    the default rss_cpus modparam, because fewer channels are created.
> 2. XDP channels could be assigned to CPUs in a different NUMA node,
>    degrading performance too much (to less than half in some of my
>    tests).
>
> This patch sets the affinity hints so that channels are spread only
> across the local NUMA node's CPUs. A fallback for the case that no CPU
> in the local NUMA node is online has been added too.
>
> Example of CPUs being assigned in a non-optimal way before this and the
> previous patch (note: in this system, xdp-8 to xdp-15 are created
> because num_possible_cpus == 64 but num_present_cpus == 32, so they are
> never used):
>
> $ lscpu | grep -i numa
> NUMA node(s): 2
> NUMA node0 CPU(s): 0-7,16-23
> NUMA node1 CPU(s): 8-15,24-31
>
> $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
> /proc/irq/141/0000:07:00.0-0/../smp_affinity_list:0
> /proc/irq/142/0000:07:00.0-1/../smp_affinity_list:1
> /proc/irq/143/0000:07:00.0-2/../smp_affinity_list:2
> /proc/irq/144/0000:07:00.0-3/../smp_affinity_list:3
> /proc/irq/145/0000:07:00.0-4/../smp_affinity_list:4
> /proc/irq/146/0000:07:00.0-5/../smp_affinity_list:5
> /proc/irq/147/0000:07:00.0-6/../smp_affinity_list:6
> /proc/irq/148/0000:07:00.0-7/../smp_affinity_list:7
> /proc/irq/149/0000:07:00.0-8/../smp_affinity_list:16
> /proc/irq/150/0000:07:00.0-9/../smp_affinity_list:17
> /proc/irq/151/0000:07:00.0-10/../smp_affinity_list:18
> /proc/irq/152/0000:07:00.0-11/../smp_affinity_list:19
> /proc/irq/153/0000:07:00.0-12/../smp_affinity_list:20
> /proc/irq/154/0000:07:00.0-13/../smp_affinity_list:21
> /proc/irq/155/0000:07:00.0-14/../smp_affinity_list:22
> /proc/irq/156/0000:07:00.0-15/../smp_affinity_list:23
> /proc/irq/157/0000:07:00.0-xdp-0/../smp_affinity_list:8
> /proc/irq/158/0000:07:00.0-xdp-1/../smp_affinity_list:9
> /proc/irq/159/0000:07:00.0-xdp-2/../smp_affinity_list:10
> /proc/irq/160/0000:07:00.0-xdp-3/../smp_affinity_list:11
> /proc/irq/161/0000:07:00.0-xdp-4/../smp_affinity_list:12
> /proc/irq/162/0000:07:00.0-xdp-5/../smp_affinity_list:13
> /proc/irq/163/0000:07:00.0-xdp-6/../smp_affinity_list:14
> /proc/irq/164/0000:07:00.0-xdp-7/../smp_affinity_list:15
> /proc/irq/165/0000:07:00.0-xdp-8/../smp_affinity_list:24
> /proc/irq/166/0000:07:00.0-xdp-9/../smp_affinity_list:25
> /proc/irq/167/0000:07:00.0-xdp-10/../smp_affinity_list:26
> /proc/irq/168/0000:07:00.0-xdp-11/../smp_affinity_list:27
> /proc/irq/169/0000:07:00.0-xdp-12/../smp_affinity_list:28
> /proc/irq/170/0000:07:00.0-xdp-13/../smp_affinity_list:29
> /proc/irq/171/0000:07:00.0-xdp-14/../smp_affinity_list:30
> /proc/irq/172/0000:07:00.0-xdp-15/../smp_affinity_list:31
>
> CPU assignments after this and the previous patch: normal channels are
> created only one per core in the local NUMA node, and affinities are set
> only to local NUMA node CPUs:
>
> $ grep -H . /proc/irq/*/0000:07:00.0*/../smp_affinity_list
> /proc/irq/116/0000:07:00.0-0/../smp_affinity_list:0
> /proc/irq/117/0000:07:00.0-1/../smp_affinity_list:1
> /proc/irq/118/0000:07:00.0-2/../smp_affinity_list:2
> /proc/irq/119/0000:07:00.0-3/../smp_affinity_list:3
> /proc/irq/120/0000:07:00.0-4/../smp_affinity_list:4
> /proc/irq/121/0000:07:00.0-5/../smp_affinity_list:5
> /proc/irq/122/0000:07:00.0-6/../smp_affinity_list:6
> /proc/irq/123/0000:07:00.0-7/../smp_affinity_list:7
> /proc/irq/124/0000:07:00.0-xdp-0/../smp_affinity_list:16
> /proc/irq/125/0000:07:00.0-xdp-1/../smp_affinity_list:17
> /proc/irq/126/0000:07:00.0-xdp-2/../smp_affinity_list:18
> /proc/irq/127/0000:07:00.0-xdp-3/../smp_affinity_list:19
> /proc/irq/128/0000:07:00.0-xdp-4/../smp_affinity_list:20
> /proc/irq/129/0000:07:00.0-xdp-5/../smp_affinity_list:21
> /proc/irq/130/0000:07:00.0-xdp-6/../smp_affinity_list:22
> /proc/irq/131/0000:07:00.0-xdp-7/../smp_affinity_list:23
> /proc/irq/132/0000:07:00.0-xdp-8/../smp_affinity_list:0
> /proc/irq/133/0000:07:00.0-xdp-9/../smp_affinity_list:1
> /proc/irq/134/0000:07:00.0-xdp-10/../smp_affinity_list:2
> /proc/irq/135/0000:07:00.0-xdp-11/../smp_affinity_list:3
> /proc/irq/136/0000:07:00.0-xdp-12/../smp_affinity_list:4
> /proc/irq/137/0000:07:00.0-xdp-13/../smp_affinity_list:5
> /proc/irq/138/0000:07:00.0-xdp-14/../smp_affinity_list:6
> /proc/irq/139/0000:07:00.0-xdp-15/../smp_affinity_list:7
>
> Signed-off-by: Íñigo Huguet <ihuguet@redhat.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
> ---
> v2: changed variable declaration order to meet the reverse Xmas tree
> convention
> ---
> drivers/net/ethernet/sfc/efx_channels.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
> index ec6c2f231e73..47467694f242 100644
> --- a/drivers/net/ethernet/sfc/efx_channels.c
> +++ b/drivers/net/ethernet/sfc/efx_channels.c
> @@ -385,12 +385,20 @@ int efx_probe_interrupts(struct efx_nic *efx)
> #if defined(CONFIG_SMP)
> void efx_set_interrupt_affinity(struct efx_nic *efx)
> {
> + int numa_node = pcibus_to_node(efx->pci_dev->bus);
> + const struct cpumask *numa_mask = cpumask_of_node(numa_node);
> struct efx_channel *channel;
> unsigned int cpu;
>
> + /* If no online CPUs in local node, fallback to any online CPU */
> + if (cpumask_first_and(cpu_online_mask, numa_mask) >= nr_cpu_ids)
> + numa_mask = cpu_online_mask;
> +
> + cpu = -1;
> efx_for_each_channel(channel, efx) {
> - cpu = cpumask_local_spread(channel->channel,
> - pcibus_to_node(efx->pci_dev->bus));
> + cpu = cpumask_next_and(cpu, cpu_online_mask, numa_mask);
> + if (cpu >= nr_cpu_ids)
> + cpu = cpumask_first_and(cpu_online_mask, numa_mask);
> irq_set_affinity_hint(channel->irq, cpumask_of(cpu));
> }
> }
> --
> 2.34.1
* Re: [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities
2022-02-28 13:22 [PATCH v2 net-next 0/2] sfc: optimize RXQs count and affinities Íñigo Huguet
2022-02-28 13:22 ` [PATCH v2 net-next 1/2] sfc: default config to 1 channel/core in local NUMA node only Íñigo Huguet
2022-02-28 13:22 ` [PATCH v2 net-next 2/2] sfc: set affinity hints " Íñigo Huguet
@ 2022-03-02 1:40 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-03-02 1:40 UTC
To: Íñigo Huguet <ihuguet@redhat.com>
Cc: ecree.xilinx, habetsm.xilinx, davem, kuba, netdev
Hello:
This series was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:
On Mon, 28 Feb 2022 14:22:52 +0100 you wrote:
> In the sfc driver, one RX queue per physical core was allocated by
> default. Later on, IRQ affinities were set spreading the IRQs across all
> NUMA-local CPUs.
>
> However, that default configuration turns out to be far from optimal on
> many modern systems. Specifically, in systems with hyperthreading and 2
> NUMA nodes, affinities are set in a way that IRQs are handled by all
> logical cores of one same NUMA node. Handling IRQs from both
> hyperthreading siblings has no benefit, and setting affinities so there
> is one queue per physical core in the system is not a very good idea
> either, because there is a performance penalty for moving data across
> nodes (I was able to check it with some XDP tests using pktgen).
>
> [...]
Here is the summary with links:
- [v2,net-next,1/2] sfc: default config to 1 channel/core in local NUMA node only
(no matching commit)
- [v2,net-next,2/2] sfc: set affinity hints in local NUMA node only
https://git.kernel.org/netdev/net-next/c/09a99ab16c60
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html