public inbox for stable@vger.kernel.org
* + mm-migrate-high-order-folios-in-swap-cache-correctly.patch added to mm-hotfixes-unstable branch
@ 2023-12-14 22:11 Andrew Morton
  2023-12-20  2:52 ` Charan Teja Kalla
  0 siblings, 1 reply; 2+ messages in thread
From: Andrew Morton @ 2023-12-14 22:11 UTC (permalink / raw)
  To: mm-commits, willy, stable, shakeelb, n-horiguchi, kirill.shutemov,
	hannes, david, quic_charante, akpm


The patch titled
     Subject: mm: migrate high-order folios in swap cache correctly
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-migrate-high-order-folios-in-swap-cache-correctly.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-migrate-high-order-folios-in-swap-cache-correctly.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Charan Teja Kalla <quic_charante@quicinc.com>
Subject: mm: migrate high-order folios in swap cache correctly
Date: Thu, 14 Dec 2023 04:58:41 +0000

Large folios occupy N consecutive entries in the swap cache instead of
using multi-index entries like the page cache.  However, if a large folio
is re-added to the LRU list, it can be migrated.  The migration code was
not aware of the difference between the swap cache and the page cache and
assumed that a single xas_store() would be sufficient.

This leaves potentially many stale pointers to the now-migrated folio in
the swap cache, which can lead to almost arbitrary data corruption in the
future.  This can also manifest as infinite loops with the RCU read lock
held.
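
The difference can be modeled in a small userspace sketch (Python here, purely
illustrative; the kernel code uses the XArray API, not a dict): the swap cache
stores one entry per page of the folio, so a migration that rewrites only one
slot leaves N-1 stale pointers to the old folio behind.

```python
# Illustrative model only: the swap cache maps N consecutive offsets to
# the same folio object, one entry per page (no multi-index entries).
def add_to_swap_cache(cache, offset, folio, nr_pages):
    for i in range(nr_pages):
        cache[offset + i] = folio

def migrate_buggy(cache, offset, newfolio):
    # Pre-fix behaviour: a single store, as if the swap cache used one
    # multi-index entry the way the page cache does.
    cache[offset] = newfolio

def migrate_fixed(cache, offset, newfolio, nr_pages):
    # Post-fix behaviour: rewrite all N consecutive entries.
    for i in range(nr_pages):
        cache[offset + i] = newfolio

cache = {}
add_to_swap_cache(cache, 64, "old_folio", 4)
migrate_buggy(cache, 64, "new_folio")
stale = [off for off, f in cache.items() if f == "old_folio"]
print(stale)  # offsets 65-67 still point at the migrated-away folio
```

With the fixed variant every entry is rewritten, which is exactly what the
xas_store()/xas_next() loop in the patch below does for the swap-cache case.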

[willy@infradead.org: modifications to the changelog & tweaked the fix]
Fixes: 3417013e0d183be ("mm/migrate: Add folio_migrate_mapping()")
Link: https://lkml.kernel.org/r/20231214045841.961776-1-willy@infradead.org
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
  Closes: https://lkml.kernel.org/r/1700569840-17327-1-git-send-email-quic_charante@quicinc.com
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/migrate.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/mm/migrate.c~mm-migrate-high-order-folios-in-swap-cache-correctly
+++ a/mm/migrate.c
@@ -405,6 +405,7 @@ int folio_migrate_mapping(struct address
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -442,8 +443,10 @@ int folio_migrate_mapping(struct address
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -453,7 +456,11 @@ int folio_migrate_mapping(struct address
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing
_

Patches currently in -mm which might be from quic_charante@quicinc.com are

mm-sparsemem-fix-race-in-accessing-memory_section-usage.patch
mm-sparsemem-fix-race-in-accessing-memory_section-usage-v2.patch
mm-migrate-high-order-folios-in-swap-cache-correctly.patch



* Re: + mm-migrate-high-order-folios-in-swap-cache-correctly.patch added to mm-hotfixes-unstable branch
  2023-12-14 22:11 + mm-migrate-high-order-folios-in-swap-cache-correctly.patch added to mm-hotfixes-unstable branch Andrew Morton
@ 2023-12-20  2:52 ` Charan Teja Kalla
  0 siblings, 0 replies; 2+ messages in thread
From: Charan Teja Kalla @ 2023-12-20  2:52 UTC (permalink / raw)
  To: Andrew Morton, mm-commits, willy, stable, shakeelb, n-horiguchi,
	kirill.shutemov, hannes, david

Hi Andrew,

On 12/15/2023 3:41 AM, Andrew Morton wrote:
> [...]
> Fixes: 3417013e0d183be ("mm/migrate: Add folio_migrate_mapping()")
> Link: https://lkml.kernel.org/r/20231214045841.961776-1-willy@infradead.org
> Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
>   Closes: https://lkml.kernel.org/r/1700569840-17327-1-git-send-email-quic_charante@quicinc.com
> [...]

checkpatch.pl reported warnings against this patch:

1) The Fixes: tag uses 15 characters of the sha1 instead of the expected 12.
2) There is a space before the Closes: tag, so it does not immediately
   follow the Reported-by: line.

Summary:

WARNING:BAD_FIXES_TAG: Please use correct Fixes: style 'Fixes: <12 chars
of sha1> ("<title line>")' - ie: 'Fixes: 3417013e0d18 ("mm/migrate: Add
folio_migrate_mapping()")'
#21:
--
WARNING:BAD_REPORTED_BY_LINK: Reported-by: should be immediately
followed by Closes: with a URL to the report
#26:
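
For reference, the tag format checkpatch expects can be produced mechanically;
a minimal sketch (Python, illustrative only; in practice a git pretty-format
alias is the usual way to generate this):

```python
def fixes_tag(sha1, title):
    # checkpatch.pl wants exactly 12 characters of the commit sha1,
    # followed by the commit title in parentheses and double quotes.
    return 'Fixes: %s ("%s")' % (sha1[:12], title)

print(fixes_tag("3417013e0d183be", "mm/migrate: Add folio_migrate_mapping()"))
# Fixes: 3417013e0d18 ("mm/migrate: Add folio_migrate_mapping()")
```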

