* [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy
@ 2024-12-19 7:30 Baolin Wang
2024-12-19 7:30 ` [PATCH 2/2] mm: shmem: fix the update of 'shmem_falloc->nr_unswapped' Baolin Wang
2024-12-19 15:35 ` [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy David Hildenbrand
From: Baolin Wang @ 2024-12-19 7:30 UTC
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
With the shmem per-size within_size policy enabled, rounding up the
index with the raw 'order' value rather than the number of pages in
that order leads to incorrect i_size checks, which can result in
inappropriately large orders being returned.
Change to round_up() the index with '1 << order' to fix this issue.
Additionally, add an 'aligned_index' variable to avoid affecting the
index checks.
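For illustration (not part of the diff below), a minimal userspace
sketch of the arithmetic, assuming 4K pages and the usual power-of-two
round_up() semantics: with index 0 and an order-4 (16-page) mTHP, the
old code only requires i_size to cover 4 pages before allowing the
16-page order, while the fixed code requires the full 16 pages.
#include <stdio.h>
/* Same arithmetic as the kernel's round_up() for power-of-two 'y'. */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)
int main(void)
{
        unsigned long index = 0;
        unsigned long order = 4;        /* a 16-page (64KB) mTHP */
        /* Old code: aligns to a multiple of 'order' itself (4 pages). */
        printf("round_up(index + 1, order)      = %lu\n",
               round_up(index + 1, order));
        /* Fixed code: aligns to the number of pages in that order (16). */
        printf("round_up(index + 1, 1 << order) = %lu\n",
               round_up(index + 1, 1UL << order));
        return 0;
}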
Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Hi Andrew,
These two bugfix patches are based on the mm-hotfixes-unstable branch,
and this patch has a slight conflict with my previous patch set:
"Support large folios for tmpfs". However, I think the conflicts are
easy to resolve. If you need me to rebase and resend the
"Support large folios for tmpfs" patch set, please let me know.
Sorry for the trouble :)
---
mm/shmem.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index f6fb053ac50d..dec659e84562 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
unsigned long vm_flags = vma ? vma->vm_flags : 0;
+ pgoff_t aligned_index;
bool global_huge;
loff_t i_size;
int order;
@@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
/* Allow mTHP that will be fully within i_size. */
order = highest_order(within_size_orders);
while (within_size_orders) {
- index = round_up(index + 1, order);
+ aligned_index = round_up(index + 1, 1 << order);
i_size = round_up(i_size_read(inode), PAGE_SIZE);
- if (i_size >> PAGE_SHIFT >= index) {
+ if (i_size >> PAGE_SHIFT >= aligned_index) {
mask |= within_size_orders;
break;
}
--
2.39.3
* [PATCH 2/2] mm: shmem: fix the update of 'shmem_falloc->nr_unswapped'
2024-12-19 7:30 [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy Baolin Wang
@ 2024-12-19 7:30 ` Baolin Wang
2024-12-19 15:36 ` David Hildenbrand
2024-12-19 15:35 ` [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy David Hildenbrand
From: Baolin Wang @ 2024-12-19 7:30 UTC
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
The 'shmem_falloc->nr_unswapped' counter records how many pages
writepage refused to swap out because fallocate() is currently
allocating them, but since shmem gained support for swapping out large
folios, the update of 'shmem_falloc->nr_unswapped' has not accounted
for the number of pages in a large folio, which may prevent
fallocate() from bailing out as early as it should.
Anyway, this was found through code inspection, and I am not sure
whether it would actually cause serious issues.
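For illustration only (the exact bail-out condition in
shmem_fallocate() is an assumption here): if fallocate() gives up once
nr_unswapped exceeds a page-based threshold, counting one per refused
large folio instead of its page count delays that point by roughly the
folio size in pages.
#include <stdio.h>
int main(void)
{
        unsigned long threshold = 1UL << 18;    /* assumed page-based threshold */
        unsigned long nr_pages = 512;           /* pages in one 2MB large folio */
        /* Refused writepage calls needed before nr_unswapped exceeds it. */
        printf("with nr_unswapped++          : %lu calls\n", threshold + 1);
        printf("with nr_unswapped += nr_pages: %lu calls\n",
               threshold / nr_pages + 1);
        return 0;
}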
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index dec659e84562..ac58d4fb2e6f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1535,7 +1535,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
!shmem_falloc->waitq &&
index >= shmem_falloc->start &&
index < shmem_falloc->next)
- shmem_falloc->nr_unswapped++;
+ shmem_falloc->nr_unswapped += nr_pages;
else
shmem_falloc = NULL;
spin_unlock(&inode->i_lock);
--
2.39.3
* Re: [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy
2024-12-19 7:30 [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy Baolin Wang
2024-12-19 7:30 ` [PATCH 2/2] mm: shmem: fix the update of 'shmem_falloc->nr_unswapped' Baolin Wang
@ 2024-12-19 15:35 ` David Hildenbrand
2024-12-20 1:26 ` Baolin Wang
From: David Hildenbrand @ 2024-12-19 15:35 UTC
To: Baolin Wang, akpm, hughd; +Cc: linux-mm, linux-kernel
On 19.12.24 08:30, Baolin Wang wrote:
> With the shmem per-size within_size policy enabled, rounding up the
> index with the raw 'order' value rather than the number of pages in
> that order leads to incorrect i_size checks, which can result in
> inappropriately large orders being returned.
>
> Change to round_up() the index with '1 << order' to fix this issue.
> Additionally, add an 'aligned_index' variable to avoid affecting the
> index checks.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Hi Andrew,
>
> These two bugfix patches are based on the mm-hotfixes-unstable branch,
> and this patch has a slight conflict with my previous patch set:
> "Support large folios for tmpfs". However, I think the conflicts are
> easy to resolve. If you need me to rebase and resend the
> "Support large folios for tmpfs" patch set, please let me know.
> Sorry for the trouble :)
> ---
> mm/shmem.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f6fb053ac50d..dec659e84562 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
> unsigned long vm_flags = vma ? vma->vm_flags : 0;
> + pgoff_t aligned_index;
> bool global_huge;
> loff_t i_size;
> int order;
> @@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
> /* Allow mTHP that will be fully within i_size. */
> order = highest_order(within_size_orders);
> while (within_size_orders) {
> - index = round_up(index + 1, order);
> + aligned_index = round_up(index + 1, 1 << order);
> i_size = round_up(i_size_read(inode), PAGE_SIZE);
> - if (i_size >> PAGE_SHIFT >= index) {
> + if (i_size >> PAGE_SHIFT >= aligned_index) {
> mask |= within_size_orders;
> break;
> }
Yes, that matches the logic in shmem_huge_global_enabled().
Acked-by: David Hildenbrand <david@redhat.com>
Was wondering if one can factor that out into a helper where one could
pass an optional write_end ...
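To make that concrete, a rough sketch of what such a helper might look
like; the name, signature and write_end handling below are assumptions
for illustration, not existing kernel code:
/*
 * Hypothetical helper, sketched from the check above; write_end would
 * be 0 when there is no pending write.
 */
static bool shmem_order_within_i_size(struct inode *inode, pgoff_t index,
                                      loff_t write_end, int order)
{
        pgoff_t aligned_index = round_up(index + 1, 1 << order);
        loff_t i_size = round_up(max(write_end, i_size_read(inode)),
                                 PAGE_SIZE);

        return i_size >> PAGE_SHIFT >= aligned_index;
}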
--
Cheers,
David / dhildenb
* Re: [PATCH 2/2] mm: shmem: fix the update of 'shmem_falloc->nr_unswapped'
2024-12-19 7:30 ` [PATCH 2/2] mm: shmem: fix the update of 'shmem_falloc->nr_unswapped' Baolin Wang
@ 2024-12-19 15:36 ` David Hildenbrand
From: David Hildenbrand @ 2024-12-19 15:36 UTC
To: Baolin Wang, akpm, hughd; +Cc: linux-mm, linux-kernel
On 19.12.24 08:30, Baolin Wang wrote:
> The 'shmem_falloc->nr_unswapped' counter records how many pages
> writepage refused to swap out because fallocate() is currently
> allocating them, but since shmem gained support for swapping out large
> folios, the update of 'shmem_falloc->nr_unswapped' has not accounted
> for the number of pages in a large folio, which may prevent
> fallocate() from bailing out as early as it should.
>
> Anyway, this was found through code inspection, and I am not sure
> whether it would actually cause serious issues.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/shmem.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index dec659e84562..ac58d4fb2e6f 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1535,7 +1535,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
> !shmem_falloc->waitq &&
> index >= shmem_falloc->start &&
> index < shmem_falloc->next)
> - shmem_falloc->nr_unswapped++;
> + shmem_falloc->nr_unswapped += nr_pages;
> else
> shmem_falloc = NULL;
> spin_unlock(&inode->i_lock);
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy
2024-12-19 15:35 ` [PATCH 1/2] mm: shmem: fix incorrect index alignment for within_size policy David Hildenbrand
@ 2024-12-20 1:26 ` Baolin Wang
From: Baolin Wang @ 2024-12-20 1:26 UTC
To: David Hildenbrand, akpm, hughd; +Cc: linux-mm, linux-kernel
On 2024/12/19 23:35, David Hildenbrand wrote:
> On 19.12.24 08:30, Baolin Wang wrote:
>> With the shmem per-size within_size policy enabled, rounding up the
>> index with the raw 'order' value rather than the number of pages in
>> that order leads to incorrect i_size checks, which can result in
>> inappropriately large orders being returned.
>>
>> Change to round_up() the index with '1 << order' to fix this issue.
>> Additionally, add an 'aligned_index' variable to avoid affecting the
>> index checks.
>>
>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> Hi Andrew,
>>
>> These two bugfix patches are based on the mm-hotfixes-unstable branch,
>> and this patch has a slight conflict with my previous patch set:
>> "Support large folios for tmpfs". However, I think the conflicts are
>> easy to resolve. If you need me to rebase and resend the
>> "Support large folios for tmpfs" patch set, please let me know.
>> Sorry for the trouble :)
>> ---
>> mm/shmem.c | 5 +++--
>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index f6fb053ac50d..dec659e84562 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>> unsigned long vm_flags = vma ? vma->vm_flags : 0;
>> + pgoff_t aligned_index;
>> bool global_huge;
>> loff_t i_size;
>> int order;
>> @@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>> /* Allow mTHP that will be fully within i_size. */
>> order = highest_order(within_size_orders);
>> while (within_size_orders) {
>> - index = round_up(index + 1, order);
>> + aligned_index = round_up(index + 1, 1 << order);
>> i_size = round_up(i_size_read(inode), PAGE_SIZE);
>> - if (i_size >> PAGE_SHIFT >= index) {
>> + if (i_size >> PAGE_SHIFT >= aligned_index) {
>> mask |= within_size_orders;
>> break;
>> }
>
>
> Yes, that matches the logic in shmem_huge_global_enabled().
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
>
> Was wondering if one can factor that out into a helper where one could
> pass an optional write_end ...
Yes, I'll add it to my TODO list. Thanks for reviewing.