* [PATCH 0/4] sched: drop for_each_numa_hop_mask()
@ 2023-09-25 2:05 Yury Norov
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Yury Norov @ 2023-09-25 2:05 UTC
To: linux-kernel, netdev, linux-rdma
Cc: Yury Norov, Tariq Toukan, Valentin Schneider, Maher Sanalla,
Ingo Molnar, Mel Gorman, Saeed Mahameed, Leon Romanovsky,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller
The recently added mlx5_cpumask_default_spread() uses
for_each_numa_hop_mask() to open-code what cpumask_local_spread()
already does.
This series replaces mlx5_cpumask_default_spread() with the generic
cpumask_local_spread(). And because mlx5_cpumask_default_spread() is
the only user of the for_each_numa_hop_mask() machinery, the series
drops that machinery entirely.
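For context, the two helpers walk CPUs in the same NUMA-distance order
and differ only in argument order. A minimal sketch of the mapping
(illustrative only; the names come from the mlx5 code touched below):

	/* open-coded O(n) walk over the NUMA hop masks */
	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);

	/* equivalent call to the generic O(log n) helper */
	cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);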
Regarding for_each_numa_hop_mask(), I've got a small series introducing
a better version of it - for_each_numa_cpu():
https://lore.kernel.org/netdev/20230430171809.124686-1-yury.norov@gmail.com/T/
But given the lack of interest, I believe it's worth dropping
for_each_numa_hop_mask() and putting for_each_numa_cpu() on hold
until further updates...
Yury Norov (4):
net: mellanox: drop mlx5_cpumask_default_spread()
Revert "sched/topology: Introduce for_each_numa_hop_mask()"
Revert "sched/topology: Introduce sched_numa_hop_mask()"
lib/cpumask: don't mention for_each_numa_hop_mask in
cpumask_local_spread()"
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++---------------
include/linux/topology.h | 25 ---------------
kernel/sched/topology.c | 33 --------------------
lib/cpumask.c | 21 -------------
4 files changed, 2 insertions(+), 105 deletions(-)
--
2.39.2
* [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-09-25 2:05 [PATCH 0/4] sched: drop for_each_numa_hop_mask() Yury Norov
@ 2023-09-25 2:05 ` Yury Norov
2023-09-25 22:45 ` Jacob Keller
2023-10-03 10:04 ` Paolo Abeni
2023-09-25 2:05 ` [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()" Yury Norov
` (2 subsequent siblings)
3 siblings, 2 replies; 17+ messages in thread
From: Yury Norov @ 2023-09-25 2:05 UTC
To: linux-kernel, netdev, linux-rdma
Cc: Yury Norov, Tariq Toukan, Valentin Schneider, Maher Sanalla,
Ingo Molnar, Mel Gorman, Saeed Mahameed, Leon Romanovsky,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
The function duplicates the existing cpumask_local_spread(), and it is
O(n), while cpumask_local_spread() is based on a binary search and is
thus O(log n). Drop mlx5_cpumask_default_spread() and use the generic
cpumask_local_spread() instead.
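As a usage sketch (semantics per the helper's kerneldoc; nothing here
is new behavior added by this patch):

	/*
	 * Pick the i-th online CPU following a NUMA-aware policy: CPUs
	 * local to @node come first, then more distant ones, and the
	 * index wraps around the number of online CPUs.
	 */
	unsigned int cpu = cpumask_local_spread(i, node);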
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
1 file changed, 2 insertions(+), 26 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index ea0405e0a43f..bd9f857cc52d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
mlx5_irq_release_vector(irq);
}
-static int mlx5_cpumask_default_spread(int numa_node, int index)
-{
- const struct cpumask *prev = cpu_none_mask;
- const struct cpumask *mask;
- int found_cpu = 0;
- int i = 0;
- int cpu;
-
- rcu_read_lock();
- for_each_numa_hop_mask(mask, numa_node) {
- for_each_cpu_andnot(cpu, mask, prev) {
- if (i++ == index) {
- found_cpu = cpu;
- goto spread_done;
- }
- }
- prev = mask;
- }
-
-spread_done:
- rcu_read_unlock();
- return found_cpu;
-}
-
static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
{
#ifdef CONFIG_RFS_ACCEL
@@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
int cpu;
rmap = mlx5_eq_table_get_pci_rmap(dev);
- cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
+ cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
if (IS_ERR(irq))
return PTR_ERR(irq);
@@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
if (mask)
cpu = cpumask_first(mask);
else
- cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
+ cpu = cpumask_local_spread(vector, dev->priv.numa_node);
return cpu;
}
--
2.39.2
* [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()"
2023-09-25 2:05 [PATCH 0/4] sched: drop for_each_numa_hop_mask() Yury Norov
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
@ 2023-09-25 2:05 ` Yury Norov
2023-09-25 22:46 ` Jacob Keller
2023-09-25 2:05 ` [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()" Yury Norov
2023-09-25 2:05 ` [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread() Yury Norov
3 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-09-25 2:05 UTC
To: linux-kernel, netdev, linux-rdma
Cc: Yury Norov, Tariq Toukan, Valentin Schneider, Maher Sanalla,
Ingo Molnar, Mel Gorman, Saeed Mahameed, Leon Romanovsky,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
Now that the only user of for_each_numa_hop_mask() has switched to
cpumask_local_spread(), for_each_numa_hop_mask() is dead code. Thus,
revert commit 06ac01721f7d ("sched/topology: Introduce
for_each_numa_hop_mask()").
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
---
include/linux/topology.h | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index fea32377f7c7..344c2362755a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -261,22 +261,4 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
}
#endif /* CONFIG_NUMA */
-/**
- * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
- * from a given node.
- * @mask: the iteration variable.
- * @node: the NUMA node to start the search from.
- *
- * Requires rcu_lock to be held.
- *
- * Yields cpu_online_mask for @node == NUMA_NO_NODE.
- */
-#define for_each_numa_hop_mask(mask, node) \
- for (unsigned int __hops = 0; \
- mask = (node != NUMA_NO_NODE || __hops) ? \
- sched_numa_hop_mask(node, __hops) : \
- cpu_online_mask, \
- !IS_ERR_OR_NULL(mask); \
- __hops++)
-
#endif /* _LINUX_TOPOLOGY_H */
--
2.39.2
* [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()"
2023-09-25 2:05 [PATCH 0/4] sched: drop for_each_numa_hop_mask() Yury Norov
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
2023-09-25 2:05 ` [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()" Yury Norov
@ 2023-09-25 2:05 ` Yury Norov
2023-09-25 22:46 ` Jacob Keller
2023-09-25 2:05 ` [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread() Yury Norov
3 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-09-25 2:05 UTC
To: linux-kernel, netdev, linux-rdma
Cc: Yury Norov, Tariq Toukan, Valentin Schneider, Maher Sanalla,
Ingo Molnar, Mel Gorman, Saeed Mahameed, Leon Romanovsky,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
This reverts commit 9feae65845f7b16376716fe70b7d4b9bf8721848.
Now that for_each_numa_hop_mask() is reverted, revert the underlying
machinery.
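The iterator relied on this function's return values to terminate; a
minimal sketch of that contract (error values as in the diff below):

	/* hops beyond sched_domains_numa_levels end the walk */
	const struct cpumask *mask = sched_numa_hop_mask(node, hops);

	/* -EINVAL, -EBUSY, or -EOPNOTSUPP without CONFIG_NUMA */
	if (IS_ERR_OR_NULL(mask))
		return;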
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
---
include/linux/topology.h | 7 -------
kernel/sched/topology.c | 33 ---------------------------------
2 files changed, 40 deletions(-)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 344c2362755a..72f264575698 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -247,18 +247,11 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
#ifdef CONFIG_NUMA
int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
-extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
#else
static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
{
return cpumask_nth(cpu, cpus);
}
-
-static inline const struct cpumask *
-sched_numa_hop_mask(unsigned int node, unsigned int hops)
-{
- return ERR_PTR(-EOPNOTSUPP);
-}
#endif /* CONFIG_NUMA */
#endif /* _LINUX_TOPOLOGY_H */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 05a5bc678c08..3f1c09a9ef6d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2143,39 +2143,6 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
return ret;
}
EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
-
-/**
- * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
- * @node
- * @node: The node to count hops from.
- * @hops: Include CPUs up to that many hops away. 0 means local node.
- *
- * Return: On success, a pointer to a cpumask of CPUs at most @hops away from
- * @node, an error value otherwise.
- *
- * Requires rcu_lock to be held. Returned cpumask is only valid within that
- * read-side section, copy it if required beyond that.
- *
- * Note that not all hops are equal in distance; see sched_init_numa() for how
- * distances and masks are handled.
- * Also note that this is a reflection of sched_domains_numa_masks, which may change
- * during the lifetime of the system (offline nodes are taken out of the masks).
- */
-const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-{
- struct cpumask ***masks;
-
- if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
- return ERR_PTR(-EINVAL);
-
- masks = rcu_dereference(sched_domains_numa_masks);
- if (!masks)
- return ERR_PTR(-EBUSY);
-
- return masks[hops][node];
-}
-EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
-
#endif /* CONFIG_NUMA */
static int __sdt_alloc(const struct cpumask *cpu_map)
--
2.39.2
* [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread()
2023-09-25 2:05 [PATCH 0/4] sched: drop for_each_numa_hop_mask() Yury Norov
` (2 preceding siblings ...)
2023-09-25 2:05 ` [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()" Yury Norov
@ 2023-09-25 2:05 ` Yury Norov
2023-09-25 22:47 ` Jacob Keller
3 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-09-25 2:05 UTC
To: linux-kernel, netdev, linux-rdma
Cc: Yury Norov, Tariq Toukan, Valentin Schneider, Maher Sanalla,
Ingo Molnar, Mel Gorman, Saeed Mahameed, Leon Romanovsky,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
Now that for_each_numa_hop_mask() is reverted, also drop the reference
to it from the comment above cpumask_local_spread().
This partially reverts commit 2ac4980c57f5 ("lib/cpumask: update comment
for cpumask_local_spread()").
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <ynorov@nvidia.com>
---
lib/cpumask.c | 21 ---------------------
1 file changed, 21 deletions(-)
diff --git a/lib/cpumask.c b/lib/cpumask.c
index a7fd02b5ae26..d341fb71a8a9 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -117,27 +117,6 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
*
* Returns online CPU according to a numa aware policy; local cpus are returned
* first, followed by non-local ones, then it wraps around.
- *
- * For those who wants to enumerate all CPUs based on their NUMA distances,
- * i.e. call this function in a loop, like:
- *
- * for (i = 0; i < num_online_cpus(); i++) {
- * cpu = cpumask_local_spread(i, node);
- * do_something(cpu);
- * }
- *
- * There's a better alternative based on for_each()-like iterators:
- *
- * for_each_numa_hop_mask(mask, node) {
- * for_each_cpu_andnot(cpu, mask, prev)
- * do_something(cpu);
- * prev = mask;
- * }
- *
- * It's simpler and more verbose than above. Complexity of iterator-based
- * enumeration is O(sched_domains_numa_levels * nr_cpu_ids), while
- * cpumask_local_spread() when called for each cpu is
- * O(sched_domains_numa_levels * nr_cpu_ids * log(nr_cpu_ids)).
*/
unsigned int cpumask_local_spread(unsigned int i, int node)
{
--
2.39.2
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
@ 2023-09-25 22:45 ` Jacob Keller
2023-10-03 10:04 ` Paolo Abeni
1 sibling, 0 replies; 17+ messages in thread
From: Jacob Keller @ 2023-09-25 22:45 UTC
To: Yury Norov, linux-kernel, netdev, linux-rdma
Cc: Tariq Toukan, Valentin Schneider, Maher Sanalla, Ingo Molnar,
Mel Gorman, Saeed Mahameed, Leon Romanovsky, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Daniel Bristot de Oliveira, Pawel Chmielewski,
Yury Norov
On 9/24/2023 7:05 PM, Yury Norov wrote:
> The function duplicates the existing cpumask_local_spread(), and it is
> O(n), while cpumask_local_spread() is based on a binary search and is
> thus O(log n). Drop mlx5_cpumask_default_spread() and use the generic
> cpumask_local_spread() instead.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
> 1 file changed, 2 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index ea0405e0a43f..bd9f857cc52d 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
> mlx5_irq_release_vector(irq);
> }
>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
> -static int mlx5_cpumask_default_spread(int numa_node, int index)
> -{
> - const struct cpumask *prev = cpu_none_mask;
> - const struct cpumask *mask;
> - int found_cpu = 0;
> - int i = 0;
> - int cpu;
> -
> - rcu_read_lock();
> - for_each_numa_hop_mask(mask, numa_node) {
> - for_each_cpu_andnot(cpu, mask, prev) {
> - if (i++ == index) {
> - found_cpu = cpu;
> - goto spread_done;
> - }
> - }
> - prev = mask;
> - }
> -
> -spread_done:
> - rcu_read_unlock();
> - return found_cpu;
> -}
> -
> static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
> {
> #ifdef CONFIG_RFS_ACCEL
> @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
> int cpu;
>
> rmap = mlx5_eq_table_get_pci_rmap(dev);
> - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> + cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
> irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
> if (IS_ERR(irq))
> return PTR_ERR(irq);
> @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
> if (mask)
> cpu = cpumask_first(mask);
> else
> - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> + cpu = cpumask_local_spread(vector, dev->priv.numa_node);
>
> return cpu;
> }
* Re: [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()"
2023-09-25 2:05 ` [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()" Yury Norov
@ 2023-09-25 22:46 ` Jacob Keller
2023-09-25 22:55 ` Yury Norov
0 siblings, 1 reply; 17+ messages in thread
From: Jacob Keller @ 2023-09-25 22:46 UTC
To: Yury Norov, linux-kernel, netdev, linux-rdma
Cc: Tariq Toukan, Valentin Schneider, Maher Sanalla, Ingo Molnar,
Mel Gorman, Saeed Mahameed, Leon Romanovsky, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Daniel Bristot de Oliveira, Pawel Chmielewski,
Yury Norov
On 9/24/2023 7:05 PM, Yury Norov wrote:
> Now that the only user of for_each_numa_hop_mask() has switched to
> cpumask_local_spread(), for_each_numa_hop_mask() is dead code. Thus,
> revert commit 06ac01721f7d ("sched/topology: Introduce
> for_each_numa_hop_mask()").
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
> include/linux/topology.h | 18 ------------------
> 1 file changed, 18 deletions(-)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index fea32377f7c7..344c2362755a 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -261,22 +261,4 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> }
> #endif /* CONFIG_NUMA */
>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
I might have squashed all of 2 through 4 into a single patch, but it's
not a big deal.
> -/**
> - * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> - * from a given node.
> - * @mask: the iteration variable.
> - * @node: the NUMA node to start the search from.
> - *
> - * Requires rcu_lock to be held.
> - *
> - * Yields cpu_online_mask for @node == NUMA_NO_NODE.
> - */
> -#define for_each_numa_hop_mask(mask, node) \
> - for (unsigned int __hops = 0; \
> - mask = (node != NUMA_NO_NODE || __hops) ? \
> - sched_numa_hop_mask(node, __hops) : \
> - cpu_online_mask, \
> - !IS_ERR_OR_NULL(mask); \
> - __hops++)
> -
> #endif /* _LINUX_TOPOLOGY_H */
* Re: [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()"
2023-09-25 2:05 ` [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()" Yury Norov
@ 2023-09-25 22:46 ` Jacob Keller
0 siblings, 0 replies; 17+ messages in thread
From: Jacob Keller @ 2023-09-25 22:46 UTC
To: Yury Norov, linux-kernel, netdev, linux-rdma
Cc: Tariq Toukan, Valentin Schneider, Maher Sanalla, Ingo Molnar,
Mel Gorman, Saeed Mahameed, Leon Romanovsky, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Daniel Bristot de Oliveira, Pawel Chmielewski,
Yury Norov
On 9/24/2023 7:05 PM, Yury Norov wrote:
> This reverts commit 9feae65845f7b16376716fe70b7d4b9bf8721848.
>
> Now that for_each_numa_hop_mask() is reverted, revert the underlying
> machinery.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
> include/linux/topology.h | 7 -------
> kernel/sched/topology.c | 33 ---------------------------------
> 2 files changed, 40 deletions(-)
>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 344c2362755a..72f264575698 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -247,18 +247,11 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
>
> #ifdef CONFIG_NUMA
> int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
> -extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
> #else
> static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> {
> return cpumask_nth(cpu, cpus);
> }
> -
> -static inline const struct cpumask *
> -sched_numa_hop_mask(unsigned int node, unsigned int hops)
> -{
> - return ERR_PTR(-EOPNOTSUPP);
> -}
> #endif /* CONFIG_NUMA */
>
> #endif /* _LINUX_TOPOLOGY_H */
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 05a5bc678c08..3f1c09a9ef6d 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2143,39 +2143,6 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> return ret;
> }
> EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
> -
> -/**
> - * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
> - * @node
> - * @node: The node to count hops from.
> - * @hops: Include CPUs up to that many hops away. 0 means local node.
> - *
> - * Return: On success, a pointer to a cpumask of CPUs at most @hops away from
> - * @node, an error value otherwise.
> - *
> - * Requires rcu_lock to be held. Returned cpumask is only valid within that
> - * read-side section, copy it if required beyond that.
> - *
> - * Note that not all hops are equal in distance; see sched_init_numa() for how
> - * distances and masks are handled.
> - * Also note that this is a reflection of sched_domains_numa_masks, which may change
> - * during the lifetime of the system (offline nodes are taken out of the masks).
> - */
> -const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
> -{
> - struct cpumask ***masks;
> -
> - if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
> - return ERR_PTR(-EINVAL);
> -
> - masks = rcu_dereference(sched_domains_numa_masks);
> - if (!masks)
> - return ERR_PTR(-EBUSY);
> -
> - return masks[hops][node];
> -}
> -EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
> -
> #endif /* CONFIG_NUMA */
>
> static int __sdt_alloc(const struct cpumask *cpu_map)
* Re: [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread()
2023-09-25 2:05 ` [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread() Yury Norov
@ 2023-09-25 22:47 ` Jacob Keller
2023-09-25 22:53 ` Yury Norov
0 siblings, 1 reply; 17+ messages in thread
From: Jacob Keller @ 2023-09-25 22:47 UTC
To: Yury Norov, linux-kernel, netdev, linux-rdma
Cc: Tariq Toukan, Valentin Schneider, Maher Sanalla, Ingo Molnar,
Mel Gorman, Saeed Mahameed, Leon Romanovsky, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Daniel Bristot de Oliveira, Pawel Chmielewski,
Yury Norov
On 9/24/2023 7:05 PM, Yury Norov wrote:
> Now that for_each_numa_hop_mask() is reverted, also drop the reference
> to it from the comment above cpumask_local_spread().
>
> This partially reverts commit 2ac4980c57f5 ("lib/cpumask: update comment
> for cpumask_local_spread()").
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
Interesting to see both sign-offs here. Not sure what that implies,
since both represent you :)
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
> lib/cpumask.c | 21 ---------------------
> 1 file changed, 21 deletions(-)
>
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index a7fd02b5ae26..d341fb71a8a9 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -117,27 +117,6 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
> *
> * Returns online CPU according to a numa aware policy; local cpus are returned
> * first, followed by non-local ones, then it wraps around.
> - *
> - * For those who wants to enumerate all CPUs based on their NUMA distances,
> - * i.e. call this function in a loop, like:
> - *
> - * for (i = 0; i < num_online_cpus(); i++) {
> - * cpu = cpumask_local_spread(i, node);
> - * do_something(cpu);
> - * }
> - *
> - * There's a better alternative based on for_each()-like iterators:
> - *
> - * for_each_numa_hop_mask(mask, node) {
> - * for_each_cpu_andnot(cpu, mask, prev)
> - * do_something(cpu);
> - * prev = mask;
> - * }
> - *
> - * It's simpler and more verbose than above. Complexity of iterator-based
> - * enumeration is O(sched_domains_numa_levels * nr_cpu_ids), while
> - * cpumask_local_spread() when called for each cpu is
> - * O(sched_domains_numa_levels * nr_cpu_ids * log(nr_cpu_ids)).
> */
> unsigned int cpumask_local_spread(unsigned int i, int node)
> {
* Re: [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread()
2023-09-25 22:47 ` Jacob Keller
@ 2023-09-25 22:53 ` Yury Norov
0 siblings, 0 replies; 17+ messages in thread
From: Yury Norov @ 2023-09-25 22:53 UTC
To: Jacob Keller
Cc: linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Pawel Chmielewski, Yury Norov
On Mon, Sep 25, 2023 at 3:48 PM Jacob Keller <jacob.e.keller@intel.com> wrote:
>
>
>
> On 9/24/2023 7:05 PM, Yury Norov wrote:
> > Now that for_each_numa_hop_mask() is reverted, also drop the reference
> > to it from the comment above cpumask_local_spread().
> >
> > This partially reverts commit 2ac4980c57f5 ("lib/cpumask: update comment
> > for cpumask_local_spread()").
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Yury Norov <ynorov@nvidia.com>
> > ---
>
> Interesting to see both sign-offs here. Not sure what that implies,
> since both represent you :)
>
> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Thanks for the review, Jacob. The 2nd sign-off is nothing interesting,
just bureaucracy.
* Re: [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()"
2023-09-25 22:46 ` Jacob Keller
@ 2023-09-25 22:55 ` Yury Norov
2023-09-25 23:01 ` Jacob Keller
0 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-09-25 22:55 UTC
To: Jacob Keller
Cc: linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Pawel Chmielewski, Yury Norov
On Mon, Sep 25, 2023 at 3:46 PM Jacob Keller <jacob.e.keller@intel.com> wrote:
>
>
>
> On 9/24/2023 7:05 PM, Yury Norov wrote:
> > Now that the only user of for_each_numa_hop_mask() has switched to
> > cpumask_local_spread(), for_each_numa_hop_mask() is dead code. Thus,
> > revert commit 06ac01721f7d ("sched/topology: Introduce
> > for_each_numa_hop_mask()").
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Yury Norov <ynorov@nvidia.com>
> > ---
> > include/linux/topology.h | 18 ------------------
> > 1 file changed, 18 deletions(-)
> >
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index fea32377f7c7..344c2362755a 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -261,22 +261,4 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > }
> > #endif /* CONFIG_NUMA */
> >
>
> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
>
> I might have squashed all of 2 through 4 into a single patch, but it's
> not a big deal.
I just wanted to keep the changes easier to track. No objections to
squashing 2-4; whatever the maintainers prefer.
Thanks,
Yury
* Re: [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()"
2023-09-25 22:55 ` Yury Norov
@ 2023-09-25 23:01 ` Jacob Keller
0 siblings, 0 replies; 17+ messages in thread
From: Jacob Keller @ 2023-09-25 23:01 UTC
To: Yury Norov
Cc: linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Pawel Chmielewski, Yury Norov
On 9/25/2023 3:55 PM, Yury Norov wrote:
> On Mon, Sep 25, 2023 at 3:46 PM Jacob Keller <jacob.e.keller@intel.com> wrote:
>>
>>
>>
>> On 9/24/2023 7:05 PM, Yury Norov wrote:
>>> Now that the only user of for_each_numa_hop_mask() has switched to
>>> cpumask_local_spread(), for_each_numa_hop_mask() is dead code. Thus,
>>> revert commit 06ac01721f7d ("sched/topology: Introduce
>>> for_each_numa_hop_mask()").
>>>
>>> Signed-off-by: Yury Norov <yury.norov@gmail.com>
>>> Signed-off-by: Yury Norov <ynorov@nvidia.com>
>>> ---
>>> include/linux/topology.h | 18 ------------------
>>> 1 file changed, 18 deletions(-)
>>>
>>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>>> index fea32377f7c7..344c2362755a 100644
>>> --- a/include/linux/topology.h
>>> +++ b/include/linux/topology.h
>>> @@ -261,22 +261,4 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
>>> }
>>> #endif /* CONFIG_NUMA */
>>>
>>
>> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
>>
>> I might have squashed all of 2 through 4 into a single patch, but it's
>> not a big deal.
>
> I just wanted to keep the changes easier to track. No objections to
> squashing 2-4; whatever the maintainers prefer.
>
> Thanks,
> Yury
I'm fine with keeping them separate.
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
2023-09-25 22:45 ` Jacob Keller
@ 2023-10-03 10:04 ` Paolo Abeni
2023-10-03 13:46 ` Yury Norov
1 sibling, 1 reply; 17+ messages in thread
From: Paolo Abeni @ 2023-10-03 10:04 UTC
To: Yury Norov, linux-kernel, netdev, linux-rdma
Cc: Tariq Toukan, Valentin Schneider, Maher Sanalla, Ingo Molnar,
Mel Gorman, Saeed Mahameed, Leon Romanovsky, David S. Miller,
Eric Dumazet, Jakub Kicinski, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Pawel Chmielewski, Jacob Keller,
Yury Norov
On Sun, 2023-09-24 at 19:05 -0700, Yury Norov wrote:
> The function duplicates the existing cpumask_local_spread(), and it is
> O(n), while cpumask_local_spread() is based on a binary search and is
> thus O(log n). Drop mlx5_cpumask_default_spread() and use the generic
> cpumask_local_spread() instead.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Yury Norov <ynorov@nvidia.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
> 1 file changed, 2 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index ea0405e0a43f..bd9f857cc52d 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
> mlx5_irq_release_vector(irq);
> }
>
> -static int mlx5_cpumask_default_spread(int numa_node, int index)
> -{
> - const struct cpumask *prev = cpu_none_mask;
> - const struct cpumask *mask;
> - int found_cpu = 0;
> - int i = 0;
> - int cpu;
> -
> - rcu_read_lock();
> - for_each_numa_hop_mask(mask, numa_node) {
> - for_each_cpu_andnot(cpu, mask, prev) {
> - if (i++ == index) {
> - found_cpu = cpu;
> - goto spread_done;
> - }
> - }
> - prev = mask;
> - }
> -
> -spread_done:
> - rcu_read_unlock();
> - return found_cpu;
> -}
> -
> static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
> {
> #ifdef CONFIG_RFS_ACCEL
> @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
> int cpu;
>
> rmap = mlx5_eq_table_get_pci_rmap(dev);
> - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> + cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
> irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
> if (IS_ERR(irq))
> return PTR_ERR(irq);
> @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
> if (mask)
> cpu = cpumask_first(mask);
> else
> - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> + cpu = cpumask_local_spread(vector, dev->priv.numa_node);
>
> return cpu;
> }
It looks like this series is going to cause some later conflicts
regardless of the target tree. I think the whole series could go via
the net-next tree; am I missing any relevant point?
Thanks!
Paolo
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-10-03 10:04 ` Paolo Abeni
@ 2023-10-03 13:46 ` Yury Norov
2023-10-03 22:20 ` Jakub Kicinski
0 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-10-03 13:46 UTC
To: Paolo Abeni
Cc: linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Jakub Kicinski, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Pawel Chmielewski, Jacob Keller,
Yury Norov
On Tue, Oct 03, 2023 at 12:04:01PM +0200, Paolo Abeni wrote:
> On Sun, 2023-09-24 at 19:05 -0700, Yury Norov wrote:
> > The function duplicates the existing cpumask_local_spread(), and it is
> > O(n), while cpumask_local_spread() is based on a binary search and is
> > thus O(log n). Drop mlx5_cpumask_default_spread() and use the generic
> > cpumask_local_spread() instead.
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Yury Norov <ynorov@nvidia.com>
> > ---
> > drivers/net/ethernet/mellanox/mlx5/core/eq.c | 28 ++------------------
> > 1 file changed, 2 insertions(+), 26 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > index ea0405e0a43f..bd9f857cc52d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > @@ -828,30 +828,6 @@ static void comp_irq_release_pci(struct mlx5_core_dev *dev, u16 vecidx)
> > mlx5_irq_release_vector(irq);
> > }
> >
> > -static int mlx5_cpumask_default_spread(int numa_node, int index)
> > -{
> > - const struct cpumask *prev = cpu_none_mask;
> > - const struct cpumask *mask;
> > - int found_cpu = 0;
> > - int i = 0;
> > - int cpu;
> > -
> > - rcu_read_lock();
> > - for_each_numa_hop_mask(mask, numa_node) {
> > - for_each_cpu_andnot(cpu, mask, prev) {
> > - if (i++ == index) {
> > - found_cpu = cpu;
> > - goto spread_done;
> > - }
> > - }
> > - prev = mask;
> > - }
> > -
> > -spread_done:
> > - rcu_read_unlock();
> > - return found_cpu;
> > -}
> > -
> > static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
> > {
> > #ifdef CONFIG_RFS_ACCEL
> > @@ -873,7 +849,7 @@ static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
> > int cpu;
> >
> > rmap = mlx5_eq_table_get_pci_rmap(dev);
> > - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
> > + cpu = cpumask_local_spread(vecidx, dev->priv.numa_node);
> > irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
> > if (IS_ERR(irq))
> > return PTR_ERR(irq);
> > @@ -1125,7 +1101,7 @@ int mlx5_comp_vector_get_cpu(struct mlx5_core_dev *dev, int vector)
> > if (mask)
> > cpu = cpumask_first(mask);
> > else
> > - cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vector);
> > + cpu = cpumask_local_spread(vector, dev->priv.numa_node);
> >
> > return cpu;
> > }
>
> It looks like this series is going to cause some later conflicts
> regardless of the target tree. I think the whole series could go via
> the net-next tree; am I missing any relevant point?
Hi Paolo,
Can you elaborate on the conflicts you see? For me it applies cleanly
on current master, and with some 3-way merging on latest -next...
Thanks,
Yury
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-10-03 13:46 ` Yury Norov
@ 2023-10-03 22:20 ` Jakub Kicinski
2023-10-04 1:31 ` Yury Norov
0 siblings, 1 reply; 17+ messages in thread
From: Jakub Kicinski @ 2023-10-03 22:20 UTC
To: Yury Norov
Cc: Paolo Abeni, linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
On Tue, 3 Oct 2023 06:46:17 -0700 Yury Norov wrote:
> Can you elaborate on the conflicts you see? For me it applies cleanly
> on current master, and with some 3-way merging on latest -next...
We're halfway through the release cycle; conflicts can still come in.
There's no dependency for the first patch. The most normal way to
handle this would be to send patch 1 to the networking tree and send
the rest in the subsequent merge window.
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-10-03 22:20 ` Jakub Kicinski
@ 2023-10-04 1:31 ` Yury Norov
2023-10-04 17:19 ` Jakub Kicinski
0 siblings, 1 reply; 17+ messages in thread
From: Yury Norov @ 2023-10-04 1:31 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Paolo Abeni, linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Saeed Mahameed, Leon Romanovsky, David S. Miller, Eric Dumazet,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
Pawel Chmielewski, Jacob Keller, Yury Norov
On Tue, Oct 03, 2023 at 03:20:30PM -0700, Jakub Kicinski wrote:
> On Tue, 3 Oct 2023 06:46:17 -0700 Yury Norov wrote:
> > Can you elaborate on the conflicts you see? For me it applies cleanly
> > on current master, and with some 3-way merging on latest -next...
>
> We're halfway through the release cycle; conflicts can still come in.
>
> There's no dependency for the first patch. The most normal way to
> handle this would be to send patch 1 to the networking tree and send
> the rest in the subsequent merge window.
Ah, I understand now. I didn't plan to merge it in the current merge
window. In fact, I'd be more comfortable keeping it in -next for
longer and merging it in v6.7.
But it's up to you. If you think it's better, I can resend the 1st
patch separately.
Thanks,
Yury
* Re: [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread()
2023-10-04 1:31 ` Yury Norov
@ 2023-10-04 17:19 ` Jakub Kicinski
0 siblings, 0 replies; 17+ messages in thread
From: Jakub Kicinski @ 2023-10-04 17:19 UTC
To: Yury Norov, Saeed Mahameed
Cc: Paolo Abeni, linux-kernel, netdev, linux-rdma, Tariq Toukan,
Valentin Schneider, Maher Sanalla, Ingo Molnar, Mel Gorman,
Leon Romanovsky, David S. Miller, Eric Dumazet, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Daniel Bristot de Oliveira, Pawel Chmielewski,
Jacob Keller, Yury Norov
On Tue, 3 Oct 2023 18:31:28 -0700 Yury Norov wrote:
> > We're halfway through the release cycle; conflicts can still come in.
> >
> > There's no dependency for the first patch. The most normal way to
> > handle this would be to send patch 1 to the networking tree and send
> > the rest in the subsequent merge window.
>
> Ah, I understand now. I didn't plan to merge it in the current merge
> window. In fact, I'd be more comfortable keeping it in -next for
> longer and merging it in v6.7.
>
> But it's up to you. If you think it's better, I can resend the 1st
> patch separately.
Let's see if Saeed can help us.
Saeed, could you pick up patch 1 from this series for the mlx5 tree?
Thread overview: 17+ messages
2023-09-25 2:05 [PATCH 0/4] sched: drop for_each_numa_hop_mask() Yury Norov
2023-09-25 2:05 ` [PATCH 1/4] net: mellanox: drop mlx5_cpumask_default_spread() Yury Norov
2023-09-25 22:45 ` Jacob Keller
2023-10-03 10:04 ` Paolo Abeni
2023-10-03 13:46 ` Yury Norov
2023-10-03 22:20 ` Jakub Kicinski
2023-10-04 1:31 ` Yury Norov
2023-10-04 17:19 ` Jakub Kicinski
2023-09-25 2:05 ` [PATCH 2/4] Revert "sched/topology: Introduce for_each_numa_hop_mask()" Yury Norov
2023-09-25 22:46 ` Jacob Keller
2023-09-25 22:55 ` Yury Norov
2023-09-25 23:01 ` Jacob Keller
2023-09-25 2:05 ` [PATCH 3/4] Revert "sched/topology: Introduce sched_numa_hop_mask()" Yury Norov
2023-09-25 22:46 ` Jacob Keller
2023-09-25 2:05 ` [PATCH 4/4] lib/cpumask: don't mention for_each_numa_hop_mask in cpumask_local_spread() Yury Norov
2023-09-25 22:47 ` Jacob Keller
2023-09-25 22:53 ` Yury Norov