linux-mm.kvack.org archive mirror
* [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
@ 2024-07-31  5:46 Baolin Wang
  2024-07-31  5:46 ` [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts Baolin Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Baolin Wang @ 2024-07-31  5:46 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, 21cnbao, ryan.roberts, ziy, gshan, ioworker0,
	baolin.wang, linux-mm, linux-kernel

Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
64KB, which is larger than the maximum folio size the page cache supports
(MAX_PAGECACHE_ORDER). Such over-sized allocations are not expected for shmem.
To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter the
allowable huge orders.
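
As a rough illustration of the mismatch (illustration only, not part of the
patch; the numbers assume a 64KB base page size and the current definitions of
PMD_ORDER and MAX_PAGECACHE_ORDER, which may change between kernel versions):

/* userspace sketch of the order arithmetic */
#include <stdio.h>

int main(void)
{
	const int page_shift = 16;			/* 64KB base pages */
	const int pmd_order = page_shift - 3;		/* 8192 8-byte PTEs per table -> order 13 */
	const int max_pagecache_order = 2 * 6 - 1;	/* XA_CHUNK_SHIFT * 2 - 1 = 11 */

	printf("PMD-sized THP:            %lld MB\n",
	       (1LL << (pmd_order + page_shift)) >> 20);		/* 512 MB */
	printf("largest page cache folio: %lld MB\n",
	       (1LL << (max_pagecache_order + page_shift)) >> 20);	/* 128 MB */
	return 0;
}

So a PMD-sized (order-13) shmem allocation would exceed what the page cache
and xarray splitting can handle on such a configuration.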

Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2faa9daaf54b..a4332a97558c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma->vm_flags;
 	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
 	 * are enabled for this vma.
 	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
 	loff_t i_size;
 	int order;
 
-- 
2.39.3




* [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts
  2024-07-31  5:46 [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Baolin Wang
@ 2024-07-31  5:46 ` Baolin Wang
  2024-07-31  9:18   ` David Hildenbrand
  2024-07-31  6:18 ` [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Barry Song
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-07-31  5:46 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, 21cnbao, ryan.roberts, ziy, gshan, ioworker0,
	baolin.wang, linux-mm, linux-kernel

In shmem_suitable_orders(), xa_find() is used to check the page cache for
conflicts when selecting suitable huge orders. However, each iteration of the
loop rounds down an index that was already rounded down (and advanced) by the
previous iteration, so smaller orders can be checked against the wrong range
and suitable huge orders may be missed.

To avoid this, recalculate the aligned index from the original fault index in
every iteration of the loop when checking for conflicts.
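
To illustrate with made-up numbers (not from a real report): suppose the fault
index is 53, the only populated page cache entry nearby is at index 50, and
orders 4 and 2 are enabled. With the old code, the order-4 pass rewrites the
index to round_down(53, 16) = 48, so the order-2 pass then checks [48, 51] and
sees the conflict at 50, even though the range [52, 55] containing the fault
index is free. (xa_find() also advances the index it is given, which makes
reusing it even more fragile.) A small userspace sketch of the two behaviours,
where range_has_conflict() stands in for xa_find():

#include <stdbool.h>
#include <stdio.h>

#define round_down(x, y)	((x) & ~((unsigned long)(y) - 1))

static bool range_has_conflict(unsigned long start, unsigned long end)
{
	return start <= 50 && 50 <= end;	/* pretend index 50 is populated */
}

int main(void)
{
	const unsigned long index = 53;		/* original fault index */
	const int orders[] = { 4, 2 };		/* try 16-page, then 4-page folios */
	unsigned long reused = index;		/* what the old loop keeps reusing */

	for (int i = 0; i < 2; i++) {
		unsigned long pages = 1UL << orders[i];

		/* old behaviour: round down the leftover value from the
		 * previous iteration */
		reused = round_down(reused, pages);
		/* fixed behaviour: always derive the range from the
		 * original fault index */
		unsigned long aligned_index = round_down(index, pages);

		printf("order %d: old [%lu, %lu] conflict=%d, fixed [%lu, %lu] conflict=%d\n",
		       orders[i],
		       reused, reused + pages - 1,
		       range_has_conflict(reused, reused + pages - 1),
		       aligned_index, aligned_index + pages - 1,
		       range_has_conflict(aligned_index, aligned_index + pages - 1));
	}
	return 0;
}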

Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index a4332a97558c..6e9836b1bd1d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1686,6 +1686,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
 					   unsigned long orders)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	pgoff_t aligned_index;
 	unsigned long pages;
 	int order;
 
@@ -1697,9 +1698,9 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
 	order = highest_order(orders);
 	while (orders) {
 		pages = 1UL << order;
-		index = round_down(index, pages);
-		if (!xa_find(&mapping->i_pages, &index,
-			     index + pages - 1, XA_PRESENT))
+		aligned_index = round_down(index, pages);
+		if (!xa_find(&mapping->i_pages, &aligned_index,
+			     aligned_index + pages - 1, XA_PRESENT))
 			break;
 		order = next_order(&orders, order);
 	}
-- 
2.39.3




* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  5:46 [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Baolin Wang
  2024-07-31  5:46 ` [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts Baolin Wang
@ 2024-07-31  6:18 ` Barry Song
  2024-07-31  8:56   ` Baolin Wang
  2024-07-31  9:17 ` David Hildenbrand
  2024-07-31 13:09 ` Zi Yan
  3 siblings, 1 reply; 13+ messages in thread
From: Barry Song @ 2024-07-31  6:18 UTC (permalink / raw)
  To: Baolin Wang
  Cc: akpm, hughd, willy, david, ryan.roberts, ziy, gshan, ioworker0,
	linux-mm, linux-kernel

On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size MAX_PAGECACHE_ORDER.
> This is not expected. To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for
> shmem to filter allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Reviewed-by: Barry Song <baohua@kernel.org>

> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma->vm_flags;
>         /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>          * are enabled for this vma.

Nit:
THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
I feel we don't need this comment?
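
For reference, as of this cycle it expands to roughly the following in
include/linux/huge_mm.h (the exact definition may change):

#define THP_ORDERS_ALL_FILE_DEFAULT	\
	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

i.e. all large orders up to MAX_PAGECACHE_ORDER, excluding order-0.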

>          */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +       unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>         loff_t i_size;
>         int order;
>
> --
> 2.39.3
>

Thanks
Barry



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  6:18 ` [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Barry Song
@ 2024-07-31  8:56   ` Baolin Wang
  2024-07-31  9:59     ` Kefeng Wang
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-07-31  8:56 UTC (permalink / raw)
  To: Barry Song
  Cc: akpm, hughd, willy, david, ryan.roberts, ziy, gshan, ioworker0,
	linux-mm, linux-kernel



On 2024/7/31 14:18, Barry Song wrote:
> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
>> 64KB, which is larger than the maximum supported page cache size MAX_PAGECACHE_ORDER.
>> This is not expected. To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for
>> shmem to filter allowable huge orders.
>>
>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> 
> Reviewed-by: Barry Song <baohua@kernel.org>

Thanks for reviewing.

> 
>> ---
>>   mm/shmem.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 2faa9daaf54b..a4332a97558c 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>          unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>          unsigned long vm_flags = vma->vm_flags;
>>          /*
>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>           * are enabled for this vma.
> 
> Nit:
> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
> I feel we don't need this comment?

Sure.

Andrew, please help to squash the following changes into this patch. Thanks.

diff --git a/mm/shmem.c b/mm/shmem.c
index 6e9836b1bd1d..432faec21547 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         unsigned long mask = READ_ONCE(huge_shmem_orders_always);
         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
         unsigned long vm_flags = vma->vm_flags;
-       /*
-        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
-        * are enabled for this vma.
-        */
         unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
         loff_t i_size;
         int order;



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  5:46 [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Baolin Wang
  2024-07-31  5:46 ` [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts Baolin Wang
  2024-07-31  6:18 ` [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Barry Song
@ 2024-07-31  9:17 ` David Hildenbrand
  2024-07-31 13:09 ` Zi Yan
  3 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-07-31  9:17 UTC (permalink / raw)
  To: Baolin Wang, akpm, hughd
  Cc: willy, 21cnbao, ryan.roberts, ziy, gshan, ioworker0, linux-mm,
	linux-kernel

On 31.07.24 07:46, Baolin Wang wrote:
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size MAX_PAGECACHE_ORDER.
> This is not expected. To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for
> shmem to filter allowable huge orders.
> 
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>   mm/shmem.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>   	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>   	unsigned long vm_flags = vma->vm_flags;
>   	/*
> -	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>   	 * are enabled for this vma.
>   	 */
> -	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>   	loff_t i_size;
>   	int order;
>   

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts
  2024-07-31  5:46 ` [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts Baolin Wang
@ 2024-07-31  9:18   ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-07-31  9:18 UTC (permalink / raw)
  To: Baolin Wang, akpm, hughd
  Cc: willy, 21cnbao, ryan.roberts, ziy, gshan, ioworker0, linux-mm,
	linux-kernel

On 31.07.24 07:46, Baolin Wang wrote:
> In the shmem_suitable_orders() function, xa_find() is used to check for
> conflicts in the pagecache to select suitable huge orders. However, when
> checking each huge order in every loop, the aligned index is calculated
> from the previous iteration, which may cause suitable huge orders to be
> missed.
> 
> We should use the original index each time in the loop to calculate a
> new aligned index for checking conflicts to avoid this issue.
> 
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>   mm/shmem.c | 7 ++++---
>   1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index a4332a97558c..6e9836b1bd1d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1686,6 +1686,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>   					   unsigned long orders)
>   {
>   	struct vm_area_struct *vma = vmf->vma;
> +	pgoff_t aligned_index;
>   	unsigned long pages;
>   	int order;
>   
> @@ -1697,9 +1698,9 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>   	order = highest_order(orders);
>   	while (orders) {
>   		pages = 1UL << order;
> -		index = round_down(index, pages);
> -		if (!xa_find(&mapping->i_pages, &index,
> -			     index + pages - 1, XA_PRESENT))
> +		aligned_index = round_down(index, pages);
> +		if (!xa_find(&mapping->i_pages, &aligned_index,
> +			     aligned_index + pages - 1, XA_PRESENT))
>   			break;
>   		order = next_order(&orders, order);
>   	}

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  8:56   ` Baolin Wang
@ 2024-07-31  9:59     ` Kefeng Wang
  2024-07-31 10:22       ` Baolin Wang
  0 siblings, 1 reply; 13+ messages in thread
From: Kefeng Wang @ 2024-07-31  9:59 UTC (permalink / raw)
  To: Baolin Wang, Barry Song
  Cc: akpm, hughd, willy, david, ryan.roberts, ziy, gshan, ioworker0,
	linux-mm, linux-kernel



On 2024/7/31 16:56, Baolin Wang wrote:
> 
> 
> On 2024/7/31 14:18, Barry Song wrote:
>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
>> <baolin.wang@linux.alibaba.com> wrote:
>>>
>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page 
>>> cache
>>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page 
>>> size is
>>> 64KB, which is larger than the maximum supported page cache size 
>>> MAX_PAGECACHE_ORDER.
>>> This is not expected. To fix this issue, use 
>>> THP_ORDERS_ALL_FILE_DEFAULT for
>>> shmem to filter allowable huge orders.
>>>
>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>
>> Reviewed-by: Barry Song <baohua@kernel.org>
> 
> Thanks for reviewing.
> 
>>
>>> ---
>>>   mm/shmem.c | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 2faa9daaf54b..a4332a97558c 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -1630,10 +1630,10 @@ unsigned long 
>>> shmem_allowable_huge_orders(struct inode *inode,
>>>          unsigned long within_size_orders = 
>>> READ_ONCE(huge_shmem_orders_within_size);
>>>          unsigned long vm_flags = vma->vm_flags;
>>>          /*
>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 
>>> that
>>>           * are enabled for this vma.
>>
>> Nit:
>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>> I feel we don't need this comment?
> 
> Sure.
> 
> Andrew, please help to squash the following changes into this patch. 
> Thanks.

Maybe drop unsigned long orders too?

diff --git a/mm/shmem.c b/mm/shmem.c
index 6af95f595d6f..8485eb6f2ec4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
        unsigned long mask = READ_ONCE(huge_shmem_orders_always);
        unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
         unsigned long vm_flags = vma ? vma->vm_flags : 0;
-       /*
-        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-        * are enabled for this vma.
-        */
-       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
         bool global_huge;
         loff_t i_size;
         int order;
@@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         if (global_huge)
                 mask |= READ_ONCE(huge_shmem_orders_inherit);

-       return orders & mask;
+       return THP_ORDERS_ALL_FILE_DEFAULT & mask;
  }

> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6e9836b1bd1d..432faec21547 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>          unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>          unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>          unsigned long vm_flags = vma->vm_flags;
> -       /*
> -        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
> -        * are enabled for this vma.
> -        */
>          unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>          loff_t i_size;
>          int order;
> 



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  9:59     ` Kefeng Wang
@ 2024-07-31 10:22       ` Baolin Wang
  2024-07-31 20:48         ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-07-31 10:22 UTC (permalink / raw)
  To: Kefeng Wang, Barry Song
  Cc: akpm, hughd, willy, david, ryan.roberts, ziy, gshan, ioworker0,
	linux-mm, linux-kernel



On 2024/7/31 17:59, Kefeng Wang wrote:
> 
> 
> On 2024/7/31 16:56, Baolin Wang wrote:
>>
>>
>> On 2024/7/31 14:18, Barry Song wrote:
>>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
>>> <baolin.wang@linux.alibaba.com> wrote:
>>>>
>>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size 
>>>> page cache
>>>> if needed"), ARM64 can support 512MB PMD-sized THP when the base 
>>>> page size is
>>>> 64KB, which is larger than the maximum supported page cache size 
>>>> MAX_PAGECACHE_ORDER.
>>>> This is not expected. To fix this issue, use 
>>>> THP_ORDERS_ALL_FILE_DEFAULT for
>>>> shmem to filter allowable huge orders.
>>>>
>>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>
>>> Reviewed-by: Barry Song <baohua@kernel.org>
>>
>> Thanks for reviewing.
>>
>>>
>>>> ---
>>>>   mm/shmem.c | 4 ++--
>>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>>> index 2faa9daaf54b..a4332a97558c 100644
>>>> --- a/mm/shmem.c
>>>> +++ b/mm/shmem.c
>>>> @@ -1630,10 +1630,10 @@ unsigned long 
>>>> shmem_allowable_huge_orders(struct inode *inode,
>>>>          unsigned long within_size_orders = 
>>>> READ_ONCE(huge_shmem_orders_within_size);
>>>>          unsigned long vm_flags = vma->vm_flags;
>>>>          /*
>>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 
>>>> 1 that
>>>>           * are enabled for this vma.
>>>
>>> Nit:
>>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>>> I feel we don't need this comment?
>>
>> Sure.
>>
>> Andrew, please help to squash the following changes into this patch. 
>> Thanks.
> 
> Maybe drop unsigned long orders too?
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6af95f595d6f..8485eb6f2ec4 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>          unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>          unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>          unsigned long vm_flags = vma ? vma->vm_flags : 0;
> -       /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> -        * are enabled for this vma.
> -        */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
>          bool global_huge;
>          loff_t i_size;
>          int order;
> @@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>          if (global_huge)
>                  mask |= READ_ONCE(huge_shmem_orders_inherit);
> 
> -       return orders & mask;
> +       return THP_ORDERS_ALL_FILE_DEFAULT & mask;
>   }

Yes. Good point. Thanks.
(Hope Andrew can help to squash these changes :))



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31  5:46 [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Baolin Wang
                   ` (2 preceding siblings ...)
  2024-07-31  9:17 ` David Hildenbrand
@ 2024-07-31 13:09 ` Zi Yan
  3 siblings, 0 replies; 13+ messages in thread
From: Zi Yan @ 2024-07-31 13:09 UTC (permalink / raw)
  To: Baolin Wang
  Cc: akpm, hughd, willy, david, 21cnbao, ryan.roberts, gshan,
	ioworker0, linux-mm, linux-kernel

On 31 Jul 2024, at 1:46, Baolin Wang wrote:

> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size MAX_PAGECACHE_ORDER.
> This is not expected. To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for
> shmem to filter allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>  	unsigned long vm_flags = vma->vm_flags;
>  	/*
> -	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>  	 * are enabled for this vma.
>  	 */
> -	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>  	loff_t i_size;
>  	int order;

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31 10:22       ` Baolin Wang
@ 2024-07-31 20:48         ` Andrew Morton
  2024-08-01  0:06           ` Baolin Wang
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2024-07-31 20:48 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Kefeng Wang, Barry Song, hughd, willy, david, ryan.roberts, ziy,
	gshan, ioworker0, linux-mm, linux-kernel

On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:

> (Hope Andrew can help to squash these changes :))

I'm seeing some rejects against, amongst other things, your own "Some
cleanups for shmem" series.

So... v2, please?



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-07-31 20:48         ` Andrew Morton
@ 2024-08-01  0:06           ` Baolin Wang
  2024-08-01 19:55             ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2024-08-01  0:06 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kefeng Wang, Barry Song, hughd, willy, david, ryan.roberts, ziy,
	gshan, ioworker0, linux-mm, linux-kernel



On 2024/8/1 04:48, Andrew Morton wrote:
> On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> 
>> (Hope Andrew can help to squash these changes :))
> 
> I'm seeing some rejects against, amongst other things, your own "Some
> cleanups for shmem" series.
> 
> So... v2, please?

These two bugfix patches are based on the mm-hotfixes-unstable branch 
and need to be merged into 6.11-rcX, so they should be queued first.

For the 'Some cleanups for shmem' series, I can send a new V4 version to 
you after resolving conflicts with the shmem bugfix patches. Sorry for 
the inconvenience.



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-08-01  0:06           ` Baolin Wang
@ 2024-08-01 19:55             ` Andrew Morton
  2024-08-02  3:11               ` Baolin Wang
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2024-08-01 19:55 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Kefeng Wang, Barry Song, hughd, willy, david, ryan.roberts, ziy,
	gshan, ioworker0, linux-mm, linux-kernel

On Thu, 1 Aug 2024 08:06:59 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:

> 
> 
> On 2024/8/1 04:48, Andrew Morton wrote:
> > On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> > 
> >> (Hope Andrew can help to squash these changes :))
> > 
> > I'm seeing some rejects against, amongst other things, your own "Some
> > cleanups for shmem" series.
> > 
> > So... v2, please?
> 
> These two bugfix patches are based on the mm-hotfixes-unstable branch 
> and need to be merged into 6.11-rcX, so they should be queued first.

OK.

> For the 'Some cleanups for shmem' series, I can send a new V4 version to 
> you after resolving conflicts with the shmem bugfix patches. Sorry for 
> the inconvenience.

I fixed things up.



* Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
  2024-08-01 19:55             ` Andrew Morton
@ 2024-08-02  3:11               ` Baolin Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Baolin Wang @ 2024-08-02  3:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kefeng Wang, Barry Song, hughd, willy, david, ryan.roberts, ziy,
	gshan, ioworker0, linux-mm, linux-kernel



On 2024/8/2 03:55, Andrew Morton wrote:
> On Thu, 1 Aug 2024 08:06:59 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> 
>>
>>
>> On 2024/8/1 04:48, Andrew Morton wrote:
>>> On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>>>
>>>> (Hope Andrew can help to squash these changes :))
>>>
>>> I'm seeing some rejects against, amongst other things, your own "Some
>>> cleanups for shmem" series.
>>>
>>> So... v2, please?
>>
>> These two bugfix patches are based on the mm-hotfixes-unstable branch
>> and need to be merged into 6.11-rcX, so they should be queued first.
> 
> OK.
> 
>> For the 'Some cleanups for shmem' series, I can send a new V4 version to
>> you after resolving conflicts with the shmem bugfix patches. Sorry for
>> the inconvenience.
> 
> I fixed things up.

Thank you for your help, Andrew. I have checked the patches after 
resolving the conflicts, and all look good.



end of thread (newest message: 2024-08-02  3:11 UTC)

Thread overview: 13+ messages
2024-07-31  5:46 [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Baolin Wang
2024-07-31  5:46 ` [PATCH 2/2] mm: shmem: fix incorrect aligned index when checking conflicts Baolin Wang
2024-07-31  9:18   ` David Hildenbrand
2024-07-31  6:18 ` [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem Barry Song
2024-07-31  8:56   ` Baolin Wang
2024-07-31  9:59     ` Kefeng Wang
2024-07-31 10:22       ` Baolin Wang
2024-07-31 20:48         ` Andrew Morton
2024-08-01  0:06           ` Baolin Wang
2024-08-01 19:55             ` Andrew Morton
2024-08-02  3:11               ` Baolin Wang
2024-07-31  9:17 ` David Hildenbrand
2024-07-31 13:09 ` Zi Yan
