From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Miaohe Lin <linmiaohe@huawei.com>,
Naoya Horiguchi <nao.horiguchi@gmail.com>, <linux-mm@kvack.org>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v2 5/5] mm: memory_hotplug: unify Huge/LRU/non-LRU movable folio isolation
Date: Fri, 16 Aug 2024 17:04:35 +0800
Message-ID: <20240816090435.888946-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240816090435.888946-1-wangkefeng.wang@huawei.com>
Use isolate_folio_to_list() to unify hugetlb/LRU/non-LRU folio
isolation in do_migrate_range(), which cleans up the code a bit and
saves a few calls to compound_head().
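
For context, isolate_folio_to_list() is the helper added by the
previous patch in this series ("mm: migrate: add
isolate_folio_to_list()"). A minimal sketch of the behaviour relied on
here, assuming the helper simply dispatches on hugetlb vs. LRU vs.
non-LRU movable folios (approximate, for illustration only; see patch
4/5 for the actual code):

    /* Sketch only: approximates the helper introduced in patch 4/5. */
    bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
    {
            bool isolated, lru;

            /* hugetlb folios have their own isolation path */
            if (folio_test_hugetlb(folio))
                    return isolate_hugetlb(folio, list);

            /* LRU folios vs. non-LRU movable folios (balloon, zsmalloc, ...) */
            lru = !__folio_test_movable(folio);
            if (lru)
                    isolated = folio_isolate_lru(folio);
            else
                    isolated = isolate_movable_page(&folio->page,
                                                    ISOLATE_UNEVICTABLE);
            if (!isolated)
                    return false;

            list_add(&folio->lru, list);
            if (lru)
                    node_stat_add_folio(folio, NR_ISOLATED_ANON +
                                        folio_is_file_lru(folio));
            return true;
    }

With a single entry point like this, the separate isolate_hugetlb(),
isolate_lru_page() and isolate_movable_page() branches in
do_migrate_range() collapse into one isolate_folio_to_list() call, as
the diff below shows.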
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/memory_hotplug.c | 45 +++++++++++++++++----------------------------
1 file changed, 17 insertions(+), 28 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 02a0d4fbc3fe..cc9c16db2f8c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1773,14 +1773,14 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
{
unsigned long pfn;
- struct page *page;
LIST_HEAD(source);
+ struct folio *folio;
static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
- struct folio *folio;
- bool isolated;
+ struct page *page;
+ bool huge;
if (!pfn_valid(pfn))
continue;
@@ -1812,34 +1812,22 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
continue;
}
- if (folio_test_hugetlb(folio)) {
- isolate_hugetlb(folio, &source);
- continue;
+ huge = folio_test_hugetlb(folio);
+ if (!huge) {
+ folio = folio_get_nontail_page(page);
+ if (!folio)
+ continue;
}
- if (!get_page_unless_zero(page))
- continue;
- /*
- * We can skip free pages. And we can deal with pages on
- * LRU and non-lru movable pages.
- */
- if (PageLRU(page))
- isolated = isolate_lru_page(page);
- else
- isolated = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
- if (isolated) {
- list_add_tail(&page->lru, &source);
- if (!__PageMovable(page))
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
-
- } else {
+ if (!isolate_folio_to_list(folio, &source)) {
if (__ratelimit(&migrate_rs)) {
pr_warn("failed to isolate pfn %lx\n", pfn);
dump_page(page, "isolation failed");
}
}
- put_page(page);
+
+ if (!huge)
+ folio_put(folio);
}
if (!list_empty(&source)) {
nodemask_t nmask = node_states[N_MEMORY];
@@ -1854,7 +1842,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
* We have checked that migration range is on a single zone so
* we can use the nid of the first page to all the others.
*/
- mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));
+ mtc.nid = folio_nid(list_first_entry(&source, struct folio, lru));
/*
* try to allocate from a different node but reuse this node
@@ -1867,11 +1855,12 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
ret = migrate_pages(&source, alloc_migration_target, NULL,
(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, NULL);
if (ret) {
- list_for_each_entry(page, &source, lru) {
+ list_for_each_entry(folio, &source, lru) {
if (__ratelimit(&migrate_rs)) {
pr_warn("migrating pfn %lx failed ret:%d\n",
- page_to_pfn(page), ret);
- dump_page(page, "migration failure");
+ folio_pfn(folio), ret);
+ dump_page(&folio->page,
+ "migration failure");
}
}
putback_movable_pages(&source);
--
2.27.0
Thread overview: 15+ messages
2024-08-16 9:04 [PATCH v2 0/5] mm: memory_hotplug: improve do_migrate_range() Kefeng Wang
2024-08-16 9:04 ` [PATCH v2 1/5] mm: memory_hotplug: remove head variable in do_migrate_range() Kefeng Wang
2024-08-16 9:04 ` [PATCH v2 2/5] mm: memory-failure: add unmap_posioned_folio() Kefeng Wang
2024-08-16 9:04 ` [PATCH v2 3/5] mm: memory_hotplug: check hwpoisoned page firstly in do_migrate_range() Kefeng Wang
2024-08-16 9:04 ` [PATCH v2 4/5] mm: migrate: add isolate_folio_to_list() Kefeng Wang
2024-08-16 9:04 ` Kefeng Wang [this message]
2024-08-17 8:43 ` [PATCH v2 0/5] mm: memory_hotplug: improve do_migrate_range() Kefeng Wang
2024-08-17 8:49 [PATCH resend " Kefeng Wang
2024-08-17 8:49 ` [PATCH v2 5/5] mm: memory_hotplug: unify Huge/LRU/non-LRU movable folio isolation Kefeng Wang
2024-08-22 7:20 ` Miaohe Lin
2024-08-22 12:08 ` Kefeng Wang
2024-08-26 14:55 ` David Hildenbrand
2024-08-27 1:26 ` Kefeng Wang
2024-08-27 15:10 ` David Hildenbrand
2024-08-27 15:35 ` Kefeng Wang
2024-08-27 15:38 ` David Hildenbrand