* [PATCH 24/49] mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate
[not found] <20220210224933.379149-1-yury.norov@gmail.com>
@ 2022-02-10 22:49 ` Yury Norov
2022-02-11 10:39 ` Mike Rapoport
2022-02-10 22:49 ` [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq Yury Norov
2022-02-10 22:49 ` [PATCH 47/49] nodemask: add num_node_state_eq() Yury Norov
2 siblings, 1 reply; 8+ messages in thread
From: Yury Norov @ 2022-02-10 22:49 UTC (permalink / raw)
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
mm/vmstat.c calls cpumask_weight() to check whether any bit of a given
cpumask is set. We can do this more efficiently with cpumask_empty(),
because cpumask_empty() stops traversing the cpumask as soon as it finds
the first set bit, while cpumask_weight() counts all bits unconditionally.
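The difference is easy to see in a userspace sketch. The following is an illustrative model only, not the kernel's cpumask implementation; the names `mask_weight` and `mask_empty` are made up for the example:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a "cpumask" is an array of unsigned longs. */

/* Counts every set bit -- always walks the whole mask. */
int mask_weight(const unsigned long *mask, size_t nlongs)
{
	int w = 0;

	for (size_t i = 0; i < nlongs; i++)
		w += __builtin_popcountl(mask[i]);
	return w;
}

/* True if no bit is set -- returns at the first nonzero word. */
bool mask_empty(const unsigned long *mask, size_t nlongs)
{
	for (size_t i = 0; i < nlongs; i++)
		if (mask[i])
			return false;	/* early exit */
	return true;
}
```

Both `mask_weight(m, n) > 0` and `!mask_empty(m, n)` answer "is any CPU set?", but the latter can stop at the first nonzero word, which matters for large masks whose set bits sit in low-numbered words.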
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
mm/vmstat.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index d5cc8d739fac..27a94afd4ee5 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2041,7 +2041,7 @@ static void __init init_cpu_node_state(void)
int node;
for_each_online_node(node) {
- if (cpumask_weight(cpumask_of_node(node)) > 0)
+ if (!cpumask_empty(cpumask_of_node(node)))
node_set_state(node, N_CPU);
}
}
@@ -2068,7 +2068,7 @@ static int vmstat_cpu_dead(unsigned int cpu)
refresh_zone_stat_thresholds();
node_cpus = cpumask_of_node(node);
- if (cpumask_weight(node_cpus) > 0)
+ if (!cpumask_empty(node_cpus))
return 0;
node_clear_state(node, N_CPU);
--
2.32.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH 24/49] mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate
2022-02-10 22:49 ` [PATCH 24/49] mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate Yury Norov
@ 2022-02-11 10:39 ` Mike Rapoport
0 siblings, 0 replies; 8+ messages in thread
From: Mike Rapoport @ 2022-02-11 10:39 UTC (permalink / raw)
To: Yury Norov
Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
On Thu, Feb 10, 2022 at 02:49:08PM -0800, Yury Norov wrote:
> mm/vmstat.c calls cpumask_weight() to check whether any bit of a given
> cpumask is set. We can do this more efficiently with cpumask_empty(),
> because cpumask_empty() stops traversing the cpumask as soon as it finds
> the first set bit, while cpumask_weight() counts all bits unconditionally.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> mm/vmstat.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index d5cc8d739fac..27a94afd4ee5 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -2041,7 +2041,7 @@ static void __init init_cpu_node_state(void)
> int node;
>
> for_each_online_node(node) {
> - if (cpumask_weight(cpumask_of_node(node)) > 0)
> + if (!cpumask_empty(cpumask_of_node(node)))
> node_set_state(node, N_CPU);
> }
> }
> @@ -2068,7 +2068,7 @@ static int vmstat_cpu_dead(unsigned int cpu)
>
> refresh_zone_stat_thresholds();
> node_cpus = cpumask_of_node(node);
> - if (cpumask_weight(node_cpus) > 0)
> + if (!cpumask_empty(node_cpus))
> return 0;
>
> node_clear_state(node, N_CPU);
> --
> 2.32.0
>
>
--
Sincerely yours,
Mike.
* [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq
[not found] <20220210224933.379149-1-yury.norov@gmail.com>
2022-02-10 22:49 ` [PATCH 24/49] mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate Yury Norov
@ 2022-02-10 22:49 ` Yury Norov
2022-02-11 10:40 ` Mike Rapoport
2022-02-11 17:44 ` Christophe JAILLET
2022-02-10 22:49 ` [PATCH 47/49] nodemask: add num_node_state_eq() Yury Norov
2 siblings, 2 replies; 8+ messages in thread
From: Yury Norov @ 2022-02-10 22:49 UTC (permalink / raw)
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
do_migrate_pages() calls nodes_weight() to compare the weight of a
nodemask with a given number. We can do this more efficiently with
nodes_weight_eq(), because nodes_weight_eq() may stop traversing the
nodemask early, as soon as the condition is (or is not) met.
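A sketch of what a weight_eq-style helper can do that a plain weight cannot. This is an illustrative userspace model, not the kernel's implementation; `mask_weight_eq` is a made-up name:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Toy model of a nodes_weight_eq()-style helper: compare the mask's
 * weight against num, but stop early once the running count already
 * exceeds num -- from that point the answer cannot change back.
 */
bool mask_weight_eq(const unsigned long *mask, size_t nlongs, int num)
{
	int w = 0;

	for (size_t i = 0; i < nlongs; i++) {
		w += __builtin_popcountl(mask[i]);
		if (w > num)
			return false;	/* early exit: comparison decided */
	}
	return w == num;
}
```

Note the early exit only fires once the running count passes `num`; when the actual weight is at most `num`, a full traversal is still needed.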
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
mm/mempolicy.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7c852793d9e8..56efd00b1b6e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
* [0-7] - > [3,4,5] moves only 0,1,2,6,7.
*/
- if ((nodes_weight(*from) != nodes_weight(*to)) &&
+ if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
(node_isset(s, *to)))
continue;
--
2.32.0
* Re: [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq
2022-02-10 22:49 ` [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq Yury Norov
@ 2022-02-11 10:40 ` Mike Rapoport
2022-02-11 17:44 ` Christophe JAILLET
1 sibling, 0 replies; 8+ messages in thread
From: Mike Rapoport @ 2022-02-11 10:40 UTC (permalink / raw)
To: Yury Norov
Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
On Thu, Feb 10, 2022 at 02:49:30PM -0800, Yury Norov wrote:
> do_migrate_pages() calls nodes_weight() to compare the weight of a
> nodemask with a given number. We can do this more efficiently with
> nodes_weight_eq(), because nodes_weight_eq() may stop traversing the
> nodemask early, as soon as the condition is (or is not) met.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> mm/mempolicy.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 7c852793d9e8..56efd00b1b6e 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
> * [0-7] - > [3,4,5] moves only 0,1,2,6,7.
> */
>
> - if ((nodes_weight(*from) != nodes_weight(*to)) &&
> + if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
> (node_isset(s, *to)))
> continue;
>
> --
> 2.32.0
>
>
--
Sincerely yours,
Mike.
* Re: [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq
2022-02-10 22:49 ` [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq Yury Norov
2022-02-11 10:40 ` Mike Rapoport
@ 2022-02-11 17:44 ` Christophe JAILLET
2022-02-11 19:47 ` Yury Norov
1 sibling, 1 reply; 8+ messages in thread
From: Christophe JAILLET @ 2022-02-11 17:44 UTC (permalink / raw)
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
On 10/02/2022 at 23:49, Yury Norov wrote:
> do_migrate_pages() calls nodes_weight() to compare the weight of a
> nodemask with a given number. We can do this more efficiently with
> nodes_weight_eq(), because nodes_weight_eq() may stop traversing the
> nodemask early, as soon as the condition is (or is not) met.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
> mm/mempolicy.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 7c852793d9e8..56efd00b1b6e 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
> * [0-7] - > [3,4,5] moves only 0,1,2,6,7.
> */
>
> - if ((nodes_weight(*from) != nodes_weight(*to)) &&
> + if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
> (node_isset(s, *to)))
Hi,
I've not looked at the details, but would it make sense to hoist the
"(nodes_weight(*from) != nodes_weight(*to))" test out of the
for_each_node_mask() loop and compute it only once?
'from' and 'to' look unmodified in the loop.
Just my 2c,
CJ
> continue;
>
* Re: [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq
2022-02-11 17:44 ` Christophe JAILLET
@ 2022-02-11 19:47 ` Yury Norov
0 siblings, 0 replies; 8+ messages in thread
From: Yury Norov @ 2022-02-11 19:47 UTC (permalink / raw)
To: Christophe JAILLET, Larry Woodman
Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
+ Larry Woodman <lwoodman@redhat.com>
On Fri, Feb 11, 2022 at 06:44:39PM +0100, Christophe JAILLET wrote:
> On 10/02/2022 at 23:49, Yury Norov wrote:
> > do_migrate_pages() calls nodes_weight() to compare the weight of a
> > nodemask with a given number. We can do this more efficiently with
> > nodes_weight_eq(), because nodes_weight_eq() may stop traversing the
> > nodemask early, as soon as the condition is (or is not) met.
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > ---
> > mm/mempolicy.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 7c852793d9e8..56efd00b1b6e 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
> > * [0-7] - > [3,4,5] moves only 0,1,2,6,7.
> > */
> > - if ((nodes_weight(*from) != nodes_weight(*to)) &&
> > + if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
> > (node_isset(s, *to)))
>
> Hi,
>
> I've not looked in details, but would it make sense to hoist the
> "(nodes_weight(*from) != nodes_weight(*to))" test out of the
> for_each_node_mask() to compute it only once?
>
> 'from' and 'to' look unmodified in the loop.
It seems that 'from' and 'to' are untouched in the outer while()
loop as well, so we can compare the weights of the nodemasks only
once, at the beginning.
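A toy model of the hoisting being discussed. Everything here is simplified for illustration: a nodemask is a single unsigned long, and the `toy_*` helpers only mimic the shape of the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the nodemask helpers. */
int toy_nodes_weight(unsigned long m)
{
	return __builtin_popcountl(m);
}

bool toy_node_isset(int n, unsigned long m)
{
	return (m >> n) & 1UL;
}

/*
 * Shape of the do_migrate_pages() inner loop with the invariant
 * weight comparison computed once, before the loop.  Returns how
 * many source nodes the 'continue' would skip.
 */
int count_skipped(unsigned long from, unsigned long to)
{
	bool same_weight = toy_nodes_weight(from) == toy_nodes_weight(to);
	int skipped = 0;

	for (int s = 0; s < (int)(8 * sizeof(unsigned long)); s++) {
		if (!toy_node_isset(s, from))
			continue;
		/* was: (nodes_weight(*from) != nodes_weight(*to)) && ... */
		if (!same_weight && toy_node_isset(s, to))
			skipped++;
	}
	return skipped;
}
```

With `from` = [0-7] and `to` = [3,4,5], the weights differ and nodes 3, 4 and 5 are skipped, matching the "[0-7] -> [3,4,5] moves only 0,1,2,6,7" comment in the original code, while the weight comparison now runs once instead of per node.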
Larry, can you please comment on that?
Thanks,
Yury
* [PATCH 47/49] nodemask: add num_node_state_eq()
[not found] <20220210224933.379149-1-yury.norov@gmail.com>
2022-02-10 22:49 ` [PATCH 24/49] mm/vmstat: replace cpumask_weight with cpumask_empty where appropriate Yury Norov
2022-02-10 22:49 ` [PATCH 46/49] mm/mempolicy: replace nodes_weight with nodes_weight_eq Yury Norov
@ 2022-02-10 22:49 ` Yury Norov
2022-02-11 10:41 ` Mike Rapoport
2 siblings, 1 reply; 8+ messages in thread
From: Yury Norov @ 2022-02-10 22:49 UTC (permalink / raw)
To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
The page allocator uses num_node_state() to compare the number of nodes
in a given state with a given number. The underlying code calls
bitmap_weight(), and we can do this more efficiently with
num_node_state_eq(), because nodes_weight_eq() may stop traversing the
nodemask early, as soon as the condition is (or is not) met.
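A sketch of the new wrapper and its call site. This is a toy model with made-up `toy_*` names: in the kernel, node_states[] is an array of nodemask_t and the helpers return int, not bool:

```c
#include <assert.h>
#include <stdbool.h>

enum toy_node_states { N_MEMORY_TOY, NR_TOY_STATES };

/* Toy model: one unsigned long per node state; nodes 0 and 2 have memory. */
unsigned long toy_node_states[NR_TOY_STATES] = { 0x5UL };

bool toy_nodes_weight_eq(unsigned long m, int num)
{
	return __builtin_popcountl(m) == num;
}

/* Mirror of the new helper: weight-compare the state's nodemask. */
bool toy_num_node_state_eq(enum toy_node_states state, int num)
{
	return toy_nodes_weight_eq(toy_node_states[state], num);
}

/* Call-site shape: hashdist is disabled only on single-memory-node systems. */
int toy_hashdist(void)
{
	return toy_num_node_state_eq(N_MEMORY_TOY, 1) ? 0 : 1;
}
```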
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
include/linux/nodemask.h | 5 +++++
mm/page_alloc.c | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 197598e075e9..c5014dbf3cce 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -466,6 +466,11 @@ static inline int num_node_state(enum node_states state)
return nodes_weight(node_states[state]);
}
+static inline int num_node_state_eq(enum node_states state, int num)
+{
+ return nodes_weight_eq(node_states[state], num);
+}
+
#define for_each_node_state(__node, __state) \
for_each_node_mask((__node), node_states[__state])
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cface1d38093..897e64b66ca4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8434,7 +8434,7 @@ void __init page_alloc_init(void)
int ret;
#ifdef CONFIG_NUMA
- if (num_node_state(N_MEMORY) == 1)
+ if (num_node_state_eq(N_MEMORY, 1))
hashdist = 0;
#endif
--
2.32.0
* Re: [PATCH 47/49] nodemask: add num_node_state_eq()
2022-02-10 22:49 ` [PATCH 47/49] nodemask: add num_node_state_eq() Yury Norov
@ 2022-02-11 10:41 ` Mike Rapoport
0 siblings, 0 replies; 8+ messages in thread
From: Mike Rapoport @ 2022-02-11 10:41 UTC (permalink / raw)
To: Yury Norov
Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
linux-mm
On Thu, Feb 10, 2022 at 02:49:31PM -0800, Yury Norov wrote:
> The page allocator uses num_node_state() to compare the number of nodes
> in a given state with a given number. The underlying code calls
> bitmap_weight(), and we can do this more efficiently with
> num_node_state_eq(), because nodes_weight_eq() may stop traversing the
> nodemask early, as soon as the condition is (or is not) met.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> include/linux/nodemask.h | 5 +++++
> mm/page_alloc.c | 2 +-
> 2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
> index 197598e075e9..c5014dbf3cce 100644
> --- a/include/linux/nodemask.h
> +++ b/include/linux/nodemask.h
> @@ -466,6 +466,11 @@ static inline int num_node_state(enum node_states state)
> return nodes_weight(node_states[state]);
> }
>
> +static inline int num_node_state_eq(enum node_states state, int num)
> +{
> + return nodes_weight_eq(node_states[state], num);
> +}
> +
> #define for_each_node_state(__node, __state) \
> for_each_node_mask((__node), node_states[__state])
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cface1d38093..897e64b66ca4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8434,7 +8434,7 @@ void __init page_alloc_init(void)
> int ret;
>
> #ifdef CONFIG_NUMA
> - if (num_node_state(N_MEMORY) == 1)
> + if (num_node_state_eq(N_MEMORY, 1))
> hashdist = 0;
> #endif
>
> --
> 2.32.0
>
>
--
Sincerely yours,
Mike.