* [PATCH 0/3] percpu: some trivial cleanup patches
@ 2023-07-21 13:17 Baoquan He
  2023-07-21 13:17 ` [PATCH 1/3] mm/percpu.c: remove redundant check Baoquan He
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Baoquan He @ 2023-07-21 13:17 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, dennis, tj, cl, mawupeng1, Baoquan He

There's a leftover issue about the percpu code in my mailbox, linked
below. When I rechecked it and considered Dennis's comment, I made an
attempt to fix it with patch 3.

https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u

Patches 1 and 2 are trivial cleanups made while reading the percpu code.

Baoquan He (3):
  mm/percpu.c: remove redundant check
  mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
    bit
  mm/percpu.c: print error message too if atomic alloc failed

 mm/percpu.c | 39 +++++++++++++++++----------------------
 1 file changed, 17 insertions(+), 22 deletions(-)

-- 
2.34.1




* [PATCH 1/3] mm/percpu.c: remove redundant check
  2023-07-21 13:17 [PATCH 0/3] percpu: some trivial cleanup patches Baoquan He
@ 2023-07-21 13:17 ` Baoquan He
  2023-07-21 20:55   ` Dennis Zhou
  2023-07-21 13:17 ` [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit Baoquan He
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Baoquan He @ 2023-07-21 13:17 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, dennis, tj, cl, mawupeng1, Baoquan He

The conditional check '(ai->dyn_size < PERCPU_DYNAMIC_EARLY_SIZE)' already
covers the check '(!ai->dyn_size)'.
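
For reference, PERCPU_DYNAMIC_EARLY_SIZE is a positive compile-time
constant, so the two checks relate as below (annotated excerpt of the
current code, not part of the diff):

	PCPU_SETUP_BUG_ON(ai->dyn_size < PERCPU_DYNAMIC_EARLY_SIZE);
	/*
	 * A zero ai->dyn_size already satisfies the stricter test above,
	 * because 0 < PERCPU_DYNAMIC_EARLY_SIZE always holds ...
	 */
	PCPU_SETUP_BUG_ON(!ai->dyn_size);	/* ... so this can never trip first */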

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/percpu.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 28e07ede46f6..1480bf283d11 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2615,7 +2615,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	PCPU_SETUP_BUG_ON(ai->unit_size < PCPU_MIN_UNIT_SIZE);
 	PCPU_SETUP_BUG_ON(!IS_ALIGNED(ai->unit_size, PCPU_BITMAP_BLOCK_SIZE));
 	PCPU_SETUP_BUG_ON(ai->dyn_size < PERCPU_DYNAMIC_EARLY_SIZE);
-	PCPU_SETUP_BUG_ON(!ai->dyn_size);
 	PCPU_SETUP_BUG_ON(!IS_ALIGNED(ai->reserved_size, PCPU_MIN_ALLOC_SIZE));
 	PCPU_SETUP_BUG_ON(!(IS_ALIGNED(PCPU_BITMAP_BLOCK_SIZE, PAGE_SIZE) ||
 			    IS_ALIGNED(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE)));
-- 
2.34.1




* [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit
  2023-07-21 13:17 [PATCH 0/3] percpu: some trivial cleanup patches Baoquan He
  2023-07-21 13:17 ` [PATCH 1/3] mm/percpu.c: remove redundant check Baoquan He
@ 2023-07-21 13:17 ` Baoquan He
  2023-07-21 21:01   ` Dennis Zhou
  2023-07-21 13:18 ` [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed Baoquan He
  2023-07-21 21:04 ` [PATCH 0/3] percpu: some trivial cleanup patches Dennis Zhou
  3 siblings, 1 reply; 13+ messages in thread
From: Baoquan He @ 2023-07-21 13:17 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, dennis, tj, cl, mawupeng1, Baoquan He

This removes the need for the local variable 'chunk', and simplifies the
code that calls pcpu_alloc_first_chunk() to initialize the reserved chunk
and the dynamic chunk.
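
For context, struct_size() only uses its pointer argument for type
information (everything stays under sizeof() and nothing is dereferenced),
which is why a cast null pointer works once the local 'chunk' variable is
gone. Roughly, assuming 'populated' is the trailing flexible unsigned long
array of struct pcpu_chunk, the computation amounts to (sketch, ignoring
struct_size()'s overflow saturation):

	pcpu_chunk_struct_size = sizeof(struct pcpu_chunk) +
				 BITS_TO_LONGS(pcpu_unit_pages) * sizeof(unsigned long);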

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/percpu.c | 32 +++++++++++++-------------------
 1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 1480bf283d11..c25b058a46ad 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 {
 	size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
 	size_t static_size, dyn_size;
-	struct pcpu_chunk *chunk;
 	unsigned long *group_offsets;
 	size_t *group_sizes;
 	unsigned long *unit_off;
@@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
 	pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
 	pcpu_atom_size = ai->atom_size;
-	pcpu_chunk_struct_size = struct_size(chunk, populated,
+	pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
 					     BITS_TO_LONGS(pcpu_unit_pages));
 
 	pcpu_stats_save_ai(ai);
@@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 
 	/*
 	 * Initialize first chunk.
-	 * If the reserved_size is non-zero, this initializes the reserved
-	 * chunk.  If the reserved_size is zero, the reserved chunk is NULL
-	 * and the dynamic region is initialized here.  The first chunk,
-	 * pcpu_first_chunk, will always point to the chunk that serves
-	 * the dynamic region.
+	 * If the reserved_size is non-zero, initializes the reserved chunk
+	 * firstly. If the reserved_size is zero, the reserved chunk is NULL
+	 * and the dynamic region is initialized directly. The first chunk,
+	 * pcpu_first_chunk, will always point to the chunk that serves the
+	 * dynamic region.
 	 */
 	tmp_addr = (unsigned long)base_addr + static_size;
-	map_size = ai->reserved_size ?: dyn_size;
-	chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
-
-	/* init dynamic chunk if necessary */
 	if (ai->reserved_size) {
-		pcpu_reserved_chunk = chunk;
-
-		tmp_addr = (unsigned long)base_addr + static_size +
-			   ai->reserved_size;
-		map_size = dyn_size;
-		chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
+		map_size = ai->reserved_size;
+		pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
 	}
 
-	/* link the first chunk in */
-	pcpu_first_chunk = chunk;
+	/* init dynamic chunk if necessary */
+	tmp_addr += (unsigned long)ai->reserved_size;
+	map_size = dyn_size;
+	pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
+
 	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
 	pcpu_chunk_relocate(pcpu_first_chunk, -1);
 
-- 
2.34.1




* [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed
  2023-07-21 13:17 [PATCH 0/3] percpu: some trivial cleanup patches Baoquan He
  2023-07-21 13:17 ` [PATCH 1/3] mm/percpu.c: remove redundant check Baoquan He
  2023-07-21 13:17 ` [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit Baoquan He
@ 2023-07-21 13:18 ` Baoquan He
  2023-07-21 21:03   ` Dennis Zhou
  2023-07-21 21:04 ` [PATCH 0/3] percpu: some trivial cleanup patches Dennis Zhou
  3 siblings, 1 reply; 13+ messages in thread
From: Baoquan He @ 2023-07-21 13:18 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, akpm, dennis, tj, cl, mawupeng1, Baoquan He

The variable 'err' is assigned an error message if an atomic allocation
fails, but it has no chance of being printed when is_atomic is true.

Change the code to print the error message for failed atomic allocations
too, while avoiding the call to dump_stack() in that case.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/percpu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index c25b058a46ad..74f75ef0ad58 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
 fail:
 	trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);
 
-	if (!is_atomic && do_warn && warn_limit) {
+	if (do_warn && warn_limit) {
 		pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
 			size, align, is_atomic, err);
-		dump_stack();
+		if (is_atomic)
+			dump_stack();
 		if (!--warn_limit)
 			pr_info("limit reached, disable warning\n");
 	}
+
 	if (is_atomic) {
 		/* see the flag handling in pcpu_balance_workfn() */
 		pcpu_atomic_alloc_failed = true;
-- 
2.34.1




* Re: [PATCH 1/3] mm/percpu.c: remove redundant check
  2023-07-21 13:17 ` [PATCH 1/3] mm/percpu.c: remove redundant check Baoquan He
@ 2023-07-21 20:55   ` Dennis Zhou
  0 siblings, 0 replies; 13+ messages in thread
From: Dennis Zhou @ 2023-07-21 20:55 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On Fri, Jul 21, 2023 at 09:17:58PM +0800, Baoquan He wrote:
> The conditional check '(ai->dyn_size < PERCPU_DYNAMIC_EARLY_SIZE)' already
> covers the check '(!ai->dyn_size)'.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/percpu.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 28e07ede46f6..1480bf283d11 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -2615,7 +2615,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>  	PCPU_SETUP_BUG_ON(ai->unit_size < PCPU_MIN_UNIT_SIZE);
>  	PCPU_SETUP_BUG_ON(!IS_ALIGNED(ai->unit_size, PCPU_BITMAP_BLOCK_SIZE));
>  	PCPU_SETUP_BUG_ON(ai->dyn_size < PERCPU_DYNAMIC_EARLY_SIZE);
> -	PCPU_SETUP_BUG_ON(!ai->dyn_size);
>  	PCPU_SETUP_BUG_ON(!IS_ALIGNED(ai->reserved_size, PCPU_MIN_ALLOC_SIZE));
>  	PCPU_SETUP_BUG_ON(!(IS_ALIGNED(PCPU_BITMAP_BLOCK_SIZE, PAGE_SIZE) ||
>  			    IS_ALIGNED(PAGE_SIZE, PCPU_BITMAP_BLOCK_SIZE)));
> -- 
> 2.34.1
> 

Acked-by: Dennis Zhou <dennis@kernel.org>

Thanks,
Dennis



* Re: [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit
  2023-07-21 13:17 ` [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit Baoquan He
@ 2023-07-21 21:01   ` Dennis Zhou
  2023-07-22  1:14     ` Baoquan He
  0 siblings, 1 reply; 13+ messages in thread
From: Dennis Zhou @ 2023-07-21 21:01 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

Hello,

On Fri, Jul 21, 2023 at 09:17:59PM +0800, Baoquan He wrote:
> This removes the need for the local variable 'chunk', and simplifies the
> code that calls pcpu_alloc_first_chunk() to initialize the reserved chunk
> and the dynamic chunk.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/percpu.c | 32 +++++++++++++-------------------
>  1 file changed, 13 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 1480bf283d11..c25b058a46ad 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>  {
>  	size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
>  	size_t static_size, dyn_size;
> -	struct pcpu_chunk *chunk;
>  	unsigned long *group_offsets;
>  	size_t *group_sizes;
>  	unsigned long *unit_off;
> @@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>  	pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
>  	pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
>  	pcpu_atom_size = ai->atom_size;
> -	pcpu_chunk_struct_size = struct_size(chunk, populated,
> +	pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
>  					     BITS_TO_LONGS(pcpu_unit_pages));
>  
>  	pcpu_stats_save_ai(ai);
> @@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>  
>  	/*
>  	 * Initialize first chunk.
> -	 * If the reserved_size is non-zero, this initializes the reserved
> -	 * chunk.  If the reserved_size is zero, the reserved chunk is NULL
> -	 * and the dynamic region is initialized here.  The first chunk,
> -	 * pcpu_first_chunk, will always point to the chunk that serves
> -	 * the dynamic region.
> +	 * If the reserved_size is non-zero, initializes the reserved chunk
                                         ^initialize
> +	 * firstly. If the reserved_size is zero, the reserved chunk is NULL
        ^ can remove firstly.
> +	 * and the dynamic region is initialized directly. The first chunk,
> +	 * pcpu_first_chunk, will always point to the chunk that serves the
> +	 * dynamic region.

Reading this, I'll probably reword this comment to explain the reserved
chunk better.

>  	 */
>  	tmp_addr = (unsigned long)base_addr + static_size;
> -	map_size = ai->reserved_size ?: dyn_size;
> -	chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> -
> -	/* init dynamic chunk if necessary */
>  	if (ai->reserved_size) {
> -		pcpu_reserved_chunk = chunk;
> -
> -		tmp_addr = (unsigned long)base_addr + static_size +
> -			   ai->reserved_size;
> -		map_size = dyn_size;
> -		chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> +		map_size = ai->reserved_size;
> +		pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
>  	}
>  
> -	/* link the first chunk in */
> -	pcpu_first_chunk = chunk;
> +	/* init dynamic chunk if necessary */
> +	tmp_addr += (unsigned long)ai->reserved_size;

I'm not a big fan of += the tmp_addr as I personally find it easier to
read if it's just laid out explicitly.

> +	map_size = dyn_size;
> +	pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> +
>  	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
>  	pcpu_chunk_relocate(pcpu_first_chunk, -1);
>  
> -- 
> 2.34.1
> 

Overall, I think this is good, but I'd go 1 step further and get rid of
map_size. Regarding tmp_addr, I'd prefer if we kept all the math
together.
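
Something along these lines, for instance (just a sketch of that shape):

	tmp_addr = (unsigned long)base_addr + static_size;
	if (ai->reserved_size)
		pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr,
							     ai->reserved_size);

	tmp_addr = (unsigned long)base_addr + static_size + ai->reserved_size;
	pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, dyn_size);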

Thanks,
Dennis



* Re: [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed
  2023-07-21 13:18 ` [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed Baoquan He
@ 2023-07-21 21:03   ` Dennis Zhou
  2023-07-22  1:56     ` Baoquan He
  0 siblings, 1 reply; 13+ messages in thread
From: Dennis Zhou @ 2023-07-21 21:03 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On Fri, Jul 21, 2023 at 09:18:00PM +0800, Baoquan He wrote:
> The variable 'err' is assigned an error message if an atomic allocation
> fails, but it has no chance of being printed when is_atomic is true.
> 
> Change the code to print the error message for failed atomic allocations
> too, while avoiding the call to dump_stack() in that case.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/percpu.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/percpu.c b/mm/percpu.c
> index c25b058a46ad..74f75ef0ad58 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
>  fail:
>  	trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);
>  
> -	if (!is_atomic && do_warn && warn_limit) {
> +	if (do_warn && warn_limit) {
>  		pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
>  			size, align, is_atomic, err);
> -		dump_stack();
> +		if (is_atomic)
> +			dump_stack();

This should be (!is_atomic) to preserve the current logic?

>  		if (!--warn_limit)
>  			pr_info("limit reached, disable warning\n");
>  	}
> +
>  	if (is_atomic) {
>  		/* see the flag handling in pcpu_balance_workfn() */
>  		pcpu_atomic_alloc_failed = true;
> -- 
> 2.34.1
> 

Thanks,
Dennis



* Re: [PATCH 0/3] percpu: some trivial cleanup patches
  2023-07-21 13:17 [PATCH 0/3] percpu: some trivial cleanup patches Baoquan He
                   ` (2 preceding siblings ...)
  2023-07-21 13:18 ` [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed Baoquan He
@ 2023-07-21 21:04 ` Dennis Zhou
  2023-07-22  3:30   ` Baoquan He
  3 siblings, 1 reply; 13+ messages in thread
From: Dennis Zhou @ 2023-07-21 21:04 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

Hello,

On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> There's a leftover issue about the percpu code in my mailbox, linked
> below. When I rechecked it and considered Dennis's comment, I made an
> attempt to fix it with patch 3.
> 
> https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> 
> Patches 1 and 2 are trivial cleanups made while reading the percpu code.
> 
> Baoquan He (3):
>   mm/percpu.c: remove redundant check
>   mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
>     bit
>   mm/percpu.c: print error message too if atomic alloc failed
> 
>  mm/percpu.c | 39 +++++++++++++++++----------------------
>  1 file changed, 17 insertions(+), 22 deletions(-)
> 
> -- 
> 2.34.1
> 

Thanks for these. I left a few comments. I think I might have some stuff
for v6.6, I'll figure that out in a couple days. If that's so, I can
pull 1, probably massage 2 and send that out again, and then I think
you'll need to resend 3.

Thanks,
Dennis



* Re: [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit
  2023-07-21 21:01   ` Dennis Zhou
@ 2023-07-22  1:14     ` Baoquan He
  0 siblings, 0 replies; 13+ messages in thread
From: Baoquan He @ 2023-07-22  1:14 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On 07/21/23 at 02:01pm, Dennis Zhou wrote:
> Hello,
> 
> On Fri, Jul 21, 2023 at 09:17:59PM +0800, Baoquan He wrote:
> > This removes the need for the local variable 'chunk', and simplifies the
> > code that calls pcpu_alloc_first_chunk() to initialize the reserved chunk
> > and the dynamic chunk.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/percpu.c | 32 +++++++++++++-------------------
> >  1 file changed, 13 insertions(+), 19 deletions(-)
> > 
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 1480bf283d11..c25b058a46ad 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >  {
> >  	size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
> >  	size_t static_size, dyn_size;
> > -	struct pcpu_chunk *chunk;
> >  	unsigned long *group_offsets;
> >  	size_t *group_sizes;
> >  	unsigned long *unit_off;
> > @@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >  	pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
> >  	pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
> >  	pcpu_atom_size = ai->atom_size;
> > -	pcpu_chunk_struct_size = struct_size(chunk, populated,
> > +	pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
> >  					     BITS_TO_LONGS(pcpu_unit_pages));
> >  
> >  	pcpu_stats_save_ai(ai);
> > @@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >  
> >  	/*
> >  	 * Initialize first chunk.
> > -	 * If the reserved_size is non-zero, this initializes the reserved
> > -	 * chunk.  If the reserved_size is zero, the reserved chunk is NULL
> > -	 * and the dynamic region is initialized here.  The first chunk,
> > -	 * pcpu_first_chunk, will always point to the chunk that serves
> > -	 * the dynamic region.
> > +	 * If the reserved_size is non-zero, initializes the reserved chunk
>                                          ^initialize
> > +	 * firstly. If the reserved_size is zero, the reserved chunk is NULL
>         ^ can remove firstly.
> > +	 * and the dynamic region is initialized directly. The first chunk,
> > +	 * pcpu_first_chunk, will always point to the chunk that serves the
> > +	 * dynamic region.
> 
> Reading this, I'll probably reword this comment to explain the reserved
> chunk better.

Agree. The expression is a little messy and too colloquial.

> 
> >  	 */
> >  	tmp_addr = (unsigned long)base_addr + static_size;
> > -	map_size = ai->reserved_size ?: dyn_size;
> > -	chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > -
> > -	/* init dynamic chunk if necessary */
> >  	if (ai->reserved_size) {
> > -		pcpu_reserved_chunk = chunk;
> > -
> > -		tmp_addr = (unsigned long)base_addr + static_size +
> > -			   ai->reserved_size;
> > -		map_size = dyn_size;
> > -		chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > +		map_size = ai->reserved_size;
> > +		pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> >  	}
> >  
> > -	/* link the first chunk in */
> > -	pcpu_first_chunk = chunk;
> > +	/* init dynamic chunk if necessary */
> > +	tmp_addr += (unsigned long)ai->reserved_size;
> 
> I'm not a big fan of += the tmp_addr as I personally find it easier to
> read if it's just laid out explicitly.

OK, will change.

> 
> > +	map_size = dyn_size;
> > +	pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > +
> >  	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
> >  	pcpu_chunk_relocate(pcpu_first_chunk, -1);
> >  
> > -- 
> > 2.34.1
> > 
> 
> Overall, I think this is good, but I'd go 1 step further and get rid of
> map_size. Regarding tmp_addr, I'd prefer if we kept all the math
> together.

Makes sense. Thanks a lot for your careful review and great suggestions.

Based on your comments, I made a draft v2. Please help check whether I
have addressed them correctly and whether the new change looks OK to you.

From 17832ce8a755d8327b853a18c6f1cc00c9f93e50 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Tue, 27 Jun 2023 09:33:28 +0800
Subject: [PATCH] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a
 little bit
Content-type: text/plain

This removes the need for the local variable 'chunk', and simplifies the
code that calls pcpu_alloc_first_chunk() to initialize the reserved chunk
and the dynamic chunk.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/percpu.c | 36 +++++++++++++-----------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 1480bf283d11..83fc47206680 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2581,14 +2581,12 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 {
 	size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
 	size_t static_size, dyn_size;
-	struct pcpu_chunk *chunk;
 	unsigned long *group_offsets;
 	size_t *group_sizes;
 	unsigned long *unit_off;
 	unsigned int cpu;
 	int *unit_map;
 	int group, unit, i;
-	int map_size;
 	unsigned long tmp_addr;
 	size_t alloc_size;
 
@@ -2697,7 +2695,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
 	pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
 	pcpu_atom_size = ai->atom_size;
-	pcpu_chunk_struct_size = struct_size(chunk, populated,
+	pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
 					     BITS_TO_LONGS(pcpu_unit_pages));
 
 	pcpu_stats_save_ai(ai);
@@ -2734,29 +2732,21 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	dyn_size = ai->dyn_size - (static_size - ai->static_size);
 
 	/*
-	 * Initialize first chunk.
-	 * If the reserved_size is non-zero, this initializes the reserved
-	 * chunk.  If the reserved_size is zero, the reserved chunk is NULL
-	 * and the dynamic region is initialized here.  The first chunk,
-	 * pcpu_first_chunk, will always point to the chunk that serves
-	 * the dynamic region.
+	 * Initialize first chunk:
+	 *
+	 * - If the reserved_size is non-zero, initialize the reserved
+	 *   chunk firstly. Otherwise, the reserved chunk is NULL.
+	 *
+	 * - The first chunk, pcpu_first_chunk, always points to the
+	 *   chunk that serves the dynamic region.
 	 */
 	tmp_addr = (unsigned long)base_addr + static_size;
-	map_size = ai->reserved_size ?: dyn_size;
-	chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
-
-	/* init dynamic chunk if necessary */
-	if (ai->reserved_size) {
-		pcpu_reserved_chunk = chunk;
-
-		tmp_addr = (unsigned long)base_addr + static_size +
-			   ai->reserved_size;
-		map_size = dyn_size;
-		chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
-	}
+	if (ai->reserved_size)
+		pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr,
+						ai->reserved_size);
+	tmp_addr = (unsigned long)base_addr + static_size + ai->reserved_size;
+	pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, dyn_size);
 
-	/* link the first chunk in */
-	pcpu_first_chunk = chunk;
 	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
 	pcpu_chunk_relocate(pcpu_first_chunk, -1);
 
-- 
2.34.1




* Re: [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed
  2023-07-21 21:03   ` Dennis Zhou
@ 2023-07-22  1:56     ` Baoquan He
  0 siblings, 0 replies; 13+ messages in thread
From: Baoquan He @ 2023-07-22  1:56 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On 07/21/23 at 02:03pm, Dennis Zhou wrote:
> On Fri, Jul 21, 2023 at 09:18:00PM +0800, Baoquan He wrote:
> > The variable 'err' is assigned an error message if an atomic allocation
> > fails, but it has no chance of being printed when is_atomic is true.
> > 
> > Change the code to print the error message for failed atomic allocations
> > too, while avoiding the call to dump_stack() in that case.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/percpu.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index c25b058a46ad..74f75ef0ad58 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
> >  fail:
> >  	trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);
> >  
> > -	if (!is_atomic && do_warn && warn_limit) {
> > +	if (do_warn && warn_limit) {
> >  		pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
> >  			size, align, is_atomic, err);
> > -		dump_stack();
> > +		if (is_atomic)
> > +			dump_stack();
> 
> This should be (!is_atomic) to preserve the current logic?

You are quite right, I must have been dizzy when making the change.
Will fix this. Thanks for reviewing.
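
For clarity, the fixed hunk in v2 would presumably look like this (a
sketch, not the resent patch itself):

	if (do_warn && warn_limit) {
		pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
			size, align, is_atomic, err);
		if (!is_atomic)
			dump_stack();
		if (!--warn_limit)
			pr_info("limit reached, disable warning\n");
	}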

> 
> >  		if (!--warn_limit)
> >  			pr_info("limit reached, disable warning\n");
> >  	}
> > +
> >  	if (is_atomic) {
> >  		/* see the flag handling in pcpu_balance_workfn() */
> >  		pcpu_atomic_alloc_failed = true;
> > -- 
> > 2.34.1
> > 
> 
> Thanks,
> Dennis
> 




* Re: [PATCH 0/3] percpu: some trivial cleanup patches
  2023-07-21 21:04 ` [PATCH 0/3] percpu: some trivial cleanup patches Dennis Zhou
@ 2023-07-22  3:30   ` Baoquan He
  2023-07-27 22:50     ` Dennis Zhou
  0 siblings, 1 reply; 13+ messages in thread
From: Baoquan He @ 2023-07-22  3:30 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> Hello,
> 
> On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > There's a leftover issue about the percpu code in my mailbox, linked
> > below. When I rechecked it and considered Dennis's comment, I made an
> > attempt to fix it with patch 3.
> > 
> > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> > 
> > Patches 1 and 2 are trivial cleanups made while reading the percpu code.
> > 
> > Baoquan He (3):
> >   mm/percpu.c: remove redundant check
> >   mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> >     bit
> >   mm/percpu.c: print error message too if atomic alloc failed
> > 
> >  mm/percpu.c | 39 +++++++++++++++++----------------------
> >  1 file changed, 17 insertions(+), 22 deletions(-)
> > 
> > -- 
> > 2.34.1
> > 
> 
> Thanks for these. I left a few comments. I think I might have some stuff
> for v6.6, I'll figure that out in a couple days. If that's so, I can
> pull 1, probably massage 2 and send that out again, and then I think
> you'll need to resend 3.

Sure, thanks for the careful review and great suggestions. So I only need
to send v2 of patch 3, right? Or should I update and send v2 of the
whole series? I'm not clear on that.




* Re: [PATCH 0/3] percpu: some trivial cleanup patches
  2023-07-22  3:30   ` Baoquan He
@ 2023-07-27 22:50     ` Dennis Zhou
  2023-07-28  3:04       ` Baoquan He
  0 siblings, 1 reply; 13+ messages in thread
From: Dennis Zhou @ 2023-07-27 22:50 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

Hi,

On Sat, Jul 22, 2023 at 11:30:14AM +0800, Baoquan He wrote:
> On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> > Hello,
> > 
> > On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > > There's a leftover issue about the percpu code in my mailbox, linked
> > > below. When I rechecked it and considered Dennis's comment, I made an
> > > attempt to fix it with patch 3.
> > > 
> > > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> > > 
> > > Patches 1 and 2 are trivial cleanups made while reading the percpu code.
> > > 
> > > Baoquan He (3):
> > >   mm/percpu.c: remove redundant check
> > >   mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> > >     bit
> > >   mm/percpu.c: print error message too if atomic alloc failed
> > > 
> > >  mm/percpu.c | 39 +++++++++++++++++----------------------
> > >  1 file changed, 17 insertions(+), 22 deletions(-)
> > > 
> > > -- 
> > > 2.34.1
> > > 
> > 
> > Thanks for these. I left a few comments. I think I might have some stuff
> > for v6.6, I'll figure that out in a couple days. If that's so, I can
> > pull 1, probably massage 2 and send that out again, and then I think
> > you'll need to resend 3.
> 
> Sure, thanks for the careful review and great suggestions. So I only need
> to send v2 of patch 3, right? Or should I update and send v2 of the
> whole series? I'm not clear on that.
> 

Sorry for the delay. I've pulled 1 and 2 (reworded the comment). Can you
please resend patch 3.

Thanks,
Dennis



* Re: [PATCH 0/3] percpu: some trivial cleanup patches
  2023-07-27 22:50     ` Dennis Zhou
@ 2023-07-28  3:04       ` Baoquan He
  0 siblings, 0 replies; 13+ messages in thread
From: Baoquan He @ 2023-07-28  3:04 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: linux-kernel, linux-mm, akpm, tj, cl, mawupeng1

On 07/27/23 at 03:50pm, Dennis Zhou wrote:
> Hi,
> 
> On Sat, Jul 22, 2023 at 11:30:14AM +0800, Baoquan He wrote:
> > On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> > > Hello,
> > > 
> > > On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > > > There's a leftover issue about the percpu code in my mailbox, linked
> > > > below. When I rechecked it and considered Dennis's comment, I made an
> > > > attempt to fix it with patch 3.
> > > > 
> > > > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> > > > 
> > > > Patches 1 and 2 are trivial cleanups made while reading the percpu code.
> > > > 
> > > > Baoquan He (3):
> > > >   mm/percpu.c: remove redundant check
> > > >   mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> > > >     bit
> > > >   mm/percpu.c: print error message too if atomic alloc failed
> > > > 
> > > >  mm/percpu.c | 39 +++++++++++++++++----------------------
> > > >  1 file changed, 17 insertions(+), 22 deletions(-)
> > > > 
> > > > -- 
> > > > 2.34.1
> > > > 
> > > 
> > > Thanks for these. I left a few comments. I think I might have some stuff
> > > for v6.6, I'll figure that out in a couple days. If that's so, I can
> > > pull 1, probably massage 2 and send that out again, and then I think
> > > you'll need to resend 3.
> > 
> > Sure, thanks for the careful review and great suggestions. So I only need
> > to send v2 of patch 3, right? Or should I update and send v2 of the
> > whole series? I'm not clear on that.
> > 
> 
> Sorry for the delay. I've pulled 1 and 2 (reworded the comment). Can you
> please resend patch 3.

Sent out v2 of patch 3, thanks.



