linux-mm.kvack.org archive mirror
* [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count()
@ 2025-05-23  9:14 Shivank Garg
  2025-05-23  9:14 ` [PATCH V2 2/2] mm/khugepaged: fix race with folio split/free using temporary reference Shivank Garg
  2025-05-24 21:47 ` [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count() David Hildenbrand
  0 siblings, 2 replies; 4+ messages in thread
From: Shivank Garg @ 2025-05-23  9:14 UTC (permalink / raw)
  To: akpm, david, linux-mm, linux-kernel
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, dev.jain, fengwei.yin, shivankg, bharata

Use folio_expected_ref_count() instead of open-coded logic in
is_refcount_suitable(). This avoids code duplication and improves
clarity.

Drop is_refcount_suitable() as it is no longer needed.

Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 mm/khugepaged.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..19aa4142bb99 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -548,19 +548,6 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 	}
 }
 
-static bool is_refcount_suitable(struct folio *folio)
-{
-	int expected_refcount = folio_mapcount(folio);
-
-	if (!folio_test_anon(folio) || folio_test_swapcache(folio))
-		expected_refcount += folio_nr_pages(folio);
-
-	if (folio_test_private(folio))
-		expected_refcount++;
-
-	return folio_ref_count(folio) == expected_refcount;
-}
-
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
@@ -652,7 +639,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		 * but not from this process. The other process cannot write to
 		 * the page, only trigger CoW.
 		 */
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			folio_unlock(folio);
 			result = SCAN_PAGE_COUNT;
 			goto out;
@@ -1402,7 +1389,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * has excessive GUP pins (i.e. 512).  Anyway the same check
 		 * will be done again later the risk seems low.
 		 */
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
@@ -2320,7 +2307,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			break;
 		}
 
-		if (!is_refcount_suitable(folio)) {
+		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
 			result = SCAN_PAGE_COUNT;
 			break;
 		}
-- 
2.34.1




* [PATCH V2 2/2] mm/khugepaged: fix race with folio split/free using temporary reference
  2025-05-23  9:14 [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count() Shivank Garg
@ 2025-05-23  9:14 ` Shivank Garg
  2025-05-24 21:48   ` David Hildenbrand
  2025-05-24 21:47 ` [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count() David Hildenbrand
  1 sibling, 1 reply; 4+ messages in thread
From: Shivank Garg @ 2025-05-23  9:14 UTC (permalink / raw)
  To: akpm, david, linux-mm, linux-kernel
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, dev.jain, fengwei.yin, shivankg, bharata,
	syzbot+2b99589e33edbe9475ca

hpage_collapse_scan_file() calls folio_expected_ref_count(), which in turn
calls folio_mapcount(). folio_mapcount() checks folio_test_large() before
proceeding to folio_large_mapcount(), but there is a race window in which
the folio may be split or freed between these two checks, triggering:

  VM_WARN_ON_FOLIO(!folio_test_large(folio), folio)

Take a temporary reference to the folio in hpage_collapse_scan_file().
This stabilizes the folio during the refcount check and prevents incorrect
large-folio detection caused by a concurrent split or free.

Fixes: 05c5323b2a34 ("mm: track mapcount of large folios in single value")
Reported-by: syzbot+2b99589e33edbe9475ca@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/6828470d.a70a0220.38f255.000c.GAE@google.com
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
V1: https://lore.kernel.org/linux-mm/20250522093452.6379-1-shivankg@amd.com
---
 mm/khugepaged.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 19aa4142bb99..685eb949f4ce 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2282,6 +2282,17 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			continue;
 		}
 
+		if (!folio_try_get(folio)) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		if (unlikely(folio != xas_reload(&xas))) {
+			folio_put(folio);
+			xas_reset(&xas);
+			continue;
+		}
+
 		if (folio_order(folio) == HPAGE_PMD_ORDER &&
 		    folio->index == start) {
 			/* Maybe PMD-mapped */
@@ -2292,23 +2303,27 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			 * it's safe to skip LRU and refcount checks before
 			 * returning.
 			 */
+			folio_put(folio);
 			break;
 		}
 
 		node = folio_nid(folio);
 		if (hpage_collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
+			folio_put(folio);
 			break;
 		}
 		cc->node_load[node]++;
 
 		if (!folio_test_lru(folio)) {
 			result = SCAN_PAGE_LRU;
+			folio_put(folio);
 			break;
 		}
 
-		if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
+		if (folio_expected_ref_count(folio) + 1 != folio_ref_count(folio)) {
 			result = SCAN_PAGE_COUNT;
+			folio_put(folio);
 			break;
 		}
 
@@ -2320,6 +2335,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 		 */
 
 		present += folio_nr_pages(folio);
+		folio_put(folio);
 
 		if (need_resched()) {
 			xas_pause(&xas);
-- 
2.34.1




* Re: [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count()
  2025-05-23  9:14 [PATCH V2 1/2] mm/khugepaged: clean up refcount check using folio_expected_ref_count() Shivank Garg
  2025-05-23  9:14 ` [PATCH V2 2/2] mm/khugepaged: fix race with folio split/free using temporary reference Shivank Garg
@ 2025-05-24 21:47 ` David Hildenbrand
  1 sibling, 0 replies; 4+ messages in thread
From: David Hildenbrand @ 2025-05-24 21:47 UTC (permalink / raw)
  To: Shivank Garg, akpm, linux-mm, linux-kernel
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, dev.jain, fengwei.yin, bharata

On 23.05.25 11:14, Shivank Garg wrote:
> Use folio_expected_ref_count() instead of open-coded logic in
> is_refcount_suitable(). This avoids code duplication and improves
> clarity.
> 
> Drop is_refcount_suitable() as it is no longer needed.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---

Likely we should revert the patches, so we can have the fix in first.

So in this patch here, we would only convert the remaining 2 instances.

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH V2 2/2] mm/khugepaged: fix race with folio split/free using temporary reference
  2025-05-23  9:14 ` [PATCH V2 2/2] mm/khugepaged: fix race with folio split/free using temporary reference Shivank Garg
@ 2025-05-24 21:48   ` David Hildenbrand
  0 siblings, 0 replies; 4+ messages in thread
From: David Hildenbrand @ 2025-05-24 21:48 UTC (permalink / raw)
  To: Shivank Garg, akpm, linux-mm, linux-kernel
  Cc: ziy, baolin.wang, lorenzo.stoakes, Liam.Howlett, npache,
	ryan.roberts, dev.jain, fengwei.yin, bharata,
	syzbot+2b99589e33edbe9475ca

On 23.05.25 11:14, Shivank Garg wrote:
> hpage_collapse_scan_file() calls folio_expected_ref_count(), which in turn
> calls folio_mapcount(). folio_mapcount() checks folio_test_large() before
> proceeding to folio_large_mapcount(), but there is a race window in which
> the folio may be split or freed between these two checks, triggering:
> 
>    VM_WARN_ON_FOLIO(!folio_test_large(folio), folio)
> 
> Take a temporary reference to the folio in hpage_collapse_scan_file().
> This stabilizes the folio during the refcount check and prevents incorrect
> large-folio detection caused by a concurrent split or free.
> 
> Fixes: 05c5323b2a34 ("mm: track mapcount of large folios in single value")
> Reported-by: syzbot+2b99589e33edbe9475ca@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/6828470d.a70a0220.38f255.000c.GAE@google.com
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
> V1: https://lore.kernel.org/linux-mm/20250522093452.6379-1-shivankg@amd.com
> ---

Assuming we have this as patch #1:

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




