linux-mm.kvack.org archive mirror
* [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
@ 2025-09-02  2:58 Wei Yang
  2025-09-02  8:04 ` Vlastimil Babka
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Wei Yang @ 2025-09-02  2:58 UTC (permalink / raw)
  To: akpm, vbabka, hannes, ziy
  Cc: linux-mm, vishal.moola, Wei Yang, David Hildenbrand

find_large_buddy() iterates pfn, aligning it down from order 0 up to
MAX_PAGE_ORDER, to look for a large buddy. But for every order below
start_pfn's alignment order, aligning down yields start_pfn itself, so
the same pfn is checked again and again.

Start the iteration from start_pfn's alignment order to avoid this
duplicated work.

Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>

---
v2: add comment on assignment of order
---
 mm/page_alloc.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 07d79ae557f8..5d9ceca869e5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
 /* Look for a buddy that straddles start_pfn */
 static unsigned long find_large_buddy(unsigned long start_pfn)
 {
-	int order = 0;
+	/*
+	 * If start_pfn is not an order-0 PageBuddy, the next PageBuddy
+	 * containing start_pfn has a minimal order of __ffs(start_pfn) + 1.
+	 * Start checking from order __ffs(start_pfn). If start_pfn is an
+	 * order-0 PageBuddy, the starting order does not matter.
+	 */
+	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;
 	struct page *page;
 	unsigned long pfn = start_pfn;
 
-- 
2.34.1




* Re: [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
  2025-09-02  2:58 [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order Wei Yang
@ 2025-09-02  8:04 ` Vlastimil Babka
  2025-09-02 14:33 ` Johannes Weiner
  2025-09-02 16:25 ` Vishal Moola (Oracle)
  2 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2025-09-02  8:04 UTC (permalink / raw)
  To: Wei Yang, akpm, hannes, ziy; +Cc: linux-mm, vishal.moola, David Hildenbrand

On 9/2/25 04:58, Wei Yang wrote:
> We iterate pfn from order 0 to MAX_PAGE_ORDER aligned to find large buddy.
> While if the order is less than start_pfn aligned order, we would get the
> same pfn and do the same check again.
> 
> Iterate from start_pfn aligned order to reduce duplicated work.
> 
> Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: David Hildenbrand <david@redhat.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Zi Yan <ziy@nvidia.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>





* Re: [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
  2025-09-02  2:58 [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order Wei Yang
  2025-09-02  8:04 ` Vlastimil Babka
@ 2025-09-02 14:33 ` Johannes Weiner
  2025-09-02 14:36   ` Zi Yan
  2025-09-02 16:25 ` Vishal Moola (Oracle)
  2 siblings, 1 reply; 6+ messages in thread
From: Johannes Weiner @ 2025-09-02 14:33 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, vbabka, ziy, linux-mm, vishal.moola, David Hildenbrand

On Tue, Sep 02, 2025 at 02:58:07AM +0000, Wei Yang wrote:
> We iterate pfn from order 0 to MAX_PAGE_ORDER aligned to find large buddy.
> While if the order is less than start_pfn aligned order, we would get the
> same pfn and do the same check again.
> 
> Iterate from start_pfn aligned order to reduce duplicated work.
> 
> Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: David Hildenbrand <david@redhat.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> 
> ---
> v2: add comment on assignment of order
> ---
>  mm/page_alloc.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 07d79ae557f8..5d9ceca869e5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
>  /* Look for a buddy that straddles start_pfn */
>  static unsigned long find_large_buddy(unsigned long start_pfn)
>  {
> -	int order = 0;
> +	/*
> +	 * If start_pfn is not an order-0 PageBuddy, next PageBuddy containing
> +	 * start_pfn has minimal order of __ffs(start_pfn) + 1. Start checking
> +	 * the order with __ffs(start_pfn). If start_pfn is order-0 PageBuddy,
> +	 * the starting order does not matter.
> +	 */
> +	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;

This should be __ffs(start_pfn) - 1, no?

If you have the lowest bit set in the pfn, you should check the
order-1 buddy to the left first. But ffs(1) is already 1, which means
the loop will check order-2 next.



* Re: [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
  2025-09-02 14:33 ` Johannes Weiner
@ 2025-09-02 14:36   ` Zi Yan
  2025-09-02 15:12     ` Johannes Weiner
  0 siblings, 1 reply; 6+ messages in thread
From: Zi Yan @ 2025-09-02 14:36 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Wei Yang, akpm, vbabka, linux-mm, vishal.moola, David Hildenbrand

On 2 Sep 2025, at 10:33, Johannes Weiner wrote:

> On Tue, Sep 02, 2025 at 02:58:07AM +0000, Wei Yang wrote:
>> We iterate pfn from order 0 to MAX_PAGE_ORDER aligned to find large buddy.
>> While if the order is less than start_pfn aligned order, we would get the
>> same pfn and do the same check again.
>>
>> Iterate from start_pfn aligned order to reduce duplicated work.
>>
>> Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: David Hildenbrand <david@redhat.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>
>> ---
>> v2: add comment on assignment of order
>> ---
>>  mm/page_alloc.c | 8 +++++++-
>>  1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 07d79ae557f8..5d9ceca869e5 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
>>  /* Look for a buddy that straddles start_pfn */
>>  static unsigned long find_large_buddy(unsigned long start_pfn)
>>  {
>> -	int order = 0;
>> +	/*
>> +	 * If start_pfn is not an order-0 PageBuddy, next PageBuddy containing
>> +	 * start_pfn has minimal order of __ffs(start_pfn) + 1. Start checking
>> +	 * the order with __ffs(start_pfn). If start_pfn is order-0 PageBuddy,
>> +	 * the starting order does not matter.
>> +	 */
>> +	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;
>
> This should be __ffs(start_pfn) - 1, no?
>
> If you have the lowest bit set in the pfn, you should check the
> order-1 buddy to the left first. But ffs(1) is already 1, which means
> the loop will check order-2 next.

__ffs() is different from the userspace ffs() and is 0-indexed, so
__ffs() is what you mean by ffs() - 1.


Best Regards,
Yan, Zi



* Re: [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
  2025-09-02 14:36   ` Zi Yan
@ 2025-09-02 15:12     ` Johannes Weiner
  0 siblings, 0 replies; 6+ messages in thread
From: Johannes Weiner @ 2025-09-02 15:12 UTC (permalink / raw)
  To: Zi Yan; +Cc: Wei Yang, akpm, vbabka, linux-mm, vishal.moola, David Hildenbrand

On Tue, Sep 02, 2025 at 10:36:14AM -0400, Zi Yan wrote:
> On 2 Sep 2025, at 10:33, Johannes Weiner wrote:
> 
> > On Tue, Sep 02, 2025 at 02:58:07AM +0000, Wei Yang wrote:
> >> We iterate pfn from order 0 to MAX_PAGE_ORDER aligned to find large buddy.
> >> While if the order is less than start_pfn aligned order, we would get the
> >> same pfn and do the same check again.
> >>
> >> Iterate from start_pfn aligned order to reduce duplicated work.
> >>
> >> Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> >> Cc: Johannes Weiner <hannes@cmpxchg.org>
> >> Cc: Zi Yan <ziy@nvidia.com>
> >> Cc: Vlastimil Babka <vbabka@suse.cz>
> >> Cc: David Hildenbrand <david@redhat.com>
> >> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> >> Reviewed-by: Zi Yan <ziy@nvidia.com>
> >>
> >> ---
> >> v2: add comment on assignment of order
> >> ---
> >>  mm/page_alloc.c | 8 +++++++-
> >>  1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 07d79ae557f8..5d9ceca869e5 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
> >>  /* Look for a buddy that straddles start_pfn */
> >>  static unsigned long find_large_buddy(unsigned long start_pfn)
> >>  {
> >> -	int order = 0;
> >> +	/*
> >> +	 * If start_pfn is not an order-0 PageBuddy, next PageBuddy containing
> >> +	 * start_pfn has minimal order of __ffs(start_pfn) + 1. Start checking
> >> +	 * the order with __ffs(start_pfn). If start_pfn is order-0 PageBuddy,
> >> +	 * the starting order does not matter.
> >> +	 */
> >> +	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;
> >
> > This should be __ffs(start_pfn) - 1, no?
> >
> > If you have the lowest bit set in the pfn, you should check the
> > order-1 buddy to the left first. But ffs(1) is already 1, which means
> > the loop will check order-2 next.
> 
> __ffs() seems different from usespace ffs() and is 0-index, so __ffs() is
> what you mean ffs() - 1.

I'm going back to reading school.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order
  2025-09-02  2:58 [Patch v2] mm/page_alloc: find_large_buddy() from start_pfn aligned order Wei Yang
  2025-09-02  8:04 ` Vlastimil Babka
  2025-09-02 14:33 ` Johannes Weiner
@ 2025-09-02 16:25 ` Vishal Moola (Oracle)
  2 siblings, 0 replies; 6+ messages in thread
From: Vishal Moola (Oracle) @ 2025-09-02 16:25 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, vbabka, hannes, ziy, linux-mm, David Hildenbrand

On Tue, Sep 02, 2025 at 02:58:07AM +0000, Wei Yang wrote:
> We iterate pfn from order 0 to MAX_PAGE_ORDER aligned to find large buddy.
> While if the order is less than start_pfn aligned order, we would get the
> same pfn and do the same check again.
> 
> Iterate from start_pfn aligned order to reduce duplicated work.
> 
> Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: David Hildenbrand <david@redhat.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Zi Yan <ziy@nvidia.com>

Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>


