From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: <willy@infradead.org>, Miaohe Lin <linmiaohe@huawei.com>,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>, Zi Yan <ziy@nvidia.com>,
Hugh Dickins <hughd@google.com>, Jonathan Corbet <corbet@lwn.net>,
<linux-mm@kvack.org>, <linux-doc@vger.kernel.org>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
Date: Wed, 27 Mar 2024 22:10:29 +0800
Message-ID: <20240327141034.3712697-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240327141034.3712697-1-wangkefeng.wang@huawei.com>
Like isolate_lru_page(), which wraps folio_isolate_lru(), make
isolate_movable_page() a wrapper around a new isolate_movable_folio().
Since isolate_movable_page() always fails on a tail page, warn on tail
pages in the wrapper and return false immediately.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/migrate.h | 3 +++
mm/migrate.c | 42 ++++++++++++++++++++++++------------------
2 files changed, 27 insertions(+), 18 deletions(-)
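(Usage sketch for reviewers, not part of the patch: the call site and
the "movable_list" name are hypothetical; ISOLATE_UNEVICTABLE stands in
for whichever isolate_mode_t flags a real caller passes.)

	/* Page-based caller: the wrapper itself rejects tail pages. */
	if (isolate_movable_page(page, ISOLATE_UNEVICTABLE))
		list_add_tail(&page->lru, &movable_list);

	/* Folio-based caller: a folio can never be a tail page. */
	if (isolate_movable_folio(folio, ISOLATE_UNEVICTABLE))
		list_add_tail(&folio->lru, &movable_list);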
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f9d92482d117..a6c38ee7246a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
unsigned int *ret_succeeded);
struct folio *alloc_migration_target(struct folio *src, unsigned long private);
bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
int migrate_huge_page_move_mapping(struct address_space *mapping,
struct folio *dst, struct folio *src);
@@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
{ return NULL; }
static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
{ return false; }
+static inline bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
+ { return false; }
static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
struct folio *dst, struct folio *src)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2228ca681afb..b2195b6ff32c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -57,31 +57,30 @@
#include "internal.h"
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
{
- struct folio *folio = folio_get_nontail_page(page);
const struct movable_operations *mops;
/*
- * Avoid burning cycles with pages that are yet under __free_pages(),
+ * Avoid burning cycles with folios that are still under __free_pages(),
* or just got freed under us.
*
- * In case we 'win' a race for a movable page being freed under us and
+ * In case we 'win' a race for a movable folio being freed under us and
* raise its refcount preventing __free_pages() from doing its job
- * the put_page() at the end of this block will take care of
- * release this page, thus avoiding a nasty leakage.
+ * the folio_put() at the end of this block will take care of
+ * releasing this folio, thus avoiding a nasty leakage.
*/
- if (!folio)
- goto out;
+ if (!folio_try_get(folio))
+ return false;
if (unlikely(folio_test_slab(folio)))
goto out_putfolio;
/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
smp_rmb();
/*
- * Check movable flag before taking the page lock because
- * we use non-atomic bitops on newly allocated page flags so
- * unconditionally grabbing the lock ruins page's owner side.
+ * Check movable flag before taking the folio lock because
+ * we use non-atomic bitops on newly allocated folio flags so
+ * unconditionally grabbing the lock ruins folio's owner side.
*/
if (unlikely(!__folio_test_movable(folio)))
goto out_putfolio;
@@ -91,13 +90,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
goto out_putfolio;
/*
- * As movable pages are not isolated from LRU lists, concurrent
- * compaction threads can race against page migration functions
- * as well as race against the releasing a page.
+ * As movable folios are not isolated from LRU lists, concurrent
+ * compaction threads can race against folio migration functions
+ * as well as race against the release of a folio.
*
- * In order to avoid having an already isolated movable page
+ * In order to avoid having an already isolated movable folio
* being (wrongly) re-isolated while it is under migration,
- * or to avoid attempting to isolate pages being released,
+ * or to avoid attempting to isolate folios being released,
* lets be sure we have the page lock
* before proceeding with the movable page isolation steps.
*/
@@ -113,7 +112,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
if (!mops->isolate_page(&folio->page, mode))
goto out_no_isolated;
- /* Driver shouldn't use PG_isolated bit of page->flags */
+ /* Driver shouldn't use PG_isolated bit of folio->flags */
WARN_ON_ONCE(folio_test_isolated(folio));
folio_set_isolated(folio);
folio_unlock(folio);
@@ -124,10 +123,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
folio_unlock(folio);
out_putfolio:
folio_put(folio);
-out:
return false;
}
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+ if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
+ return false;
+
+ return isolate_movable_folio(page_folio(page), mode);
+}
+
static void putback_movable_folio(struct folio *folio)
{
const struct movable_operations *mops = folio_movable_ops(folio);
--
2.27.0