From: Bharata B Rao <bharata@amd.com>
To: <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Cc: <Jonathan.Cameron@huawei.com>, <dave.hansen@intel.com>,
<gourry@gourry.net>, <hannes@cmpxchg.org>,
<mgorman@techsingularity.net>, <mingo@redhat.com>,
<peterz@infradead.org>, <raghavendra.kt@amd.com>,
<riel@surriel.com>, <rientjes@google.com>, <sj@kernel.org>,
<weixugc@google.com>, <willy@infradead.org>,
<ying.huang@linux.alibaba.com>, <ziy@nvidia.com>,
<dave@stgolabs.net>, <nifan.cxl@gmail.com>,
<xuezhengchu@huawei.com>, <yiannis@zptcorp.com>,
<akpm@linux-foundation.org>, <david@redhat.com>,
<bharata@amd.com>
Subject: [RFC PATCH v1 2/4] migrate: implement migrate_misplaced_folios_batch
Date: Mon, 16 Jun 2025 19:09:29 +0530
Message-ID: <20250616133931.206626-3-bharata@amd.com>
In-Reply-To: <20250616133931.206626-1-bharata@amd.com>

From: Gregory Price <gourry@gourry.net>

A common operation in tiering is to migrate multiple pages at once.
The migrate_misplaced_folio function requires one call for each
individual folio. Expose a batch-variant of the same call for use
when doing batch migrations.
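
A caller could use it roughly as in the sketch below. This is only an
illustrative sketch and not part of this patch: migrate_candidates() and
the way folios are handed in are made up, and passing a NULL vma to
migrate_misplaced_folio_prepare() relies on patch 1/4 of this series.

  static int migrate_candidates(struct folio **folios, int nr, int nid)
  {
          LIST_HEAD(folio_list);
          int i;

          for (i = 0; i < nr; i++) {
                  /* Isolates the folio and takes an extra reference. */
                  if (migrate_misplaced_folio_prepare(folios[i], NULL, nid))
                          continue;       /* could not isolate, skip it */
                  list_add(&folios[i]->lru, &folio_list);
          }

          if (list_empty(&folio_list))
                  return 0;

          /* Un-isolates, drops the references and empties the list. */
          return migrate_misplaced_folios_batch(&folio_list, nid);
  }
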
Signed-off-by: Gregory Price <gourry@gourry.net>
Signed-off-by: Bharata B Rao <bharata@amd.com>
---
include/linux/migrate.h | 6 ++++++
mm/migrate.c | 31 +++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index aaa2114498d6..90baf5ef4660 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -145,6 +145,7 @@ const struct movable_operations *page_movable_ops(struct page *page)
int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node);
int migrate_misplaced_folio(struct folio *folio, int node);
+int migrate_misplaced_folios_batch(struct list_head *foliolist, int node);
#else
static inline int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node)
@@ -155,6 +156,11 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
{
return -EAGAIN; /* can't migrate now */
}
+static inline int migrate_misplaced_folios_batch(struct list_head *foliolist,
+ int node)
+{
+ return -EAGAIN; /* can't migrate now */
+}
#endif /* CONFIG_NUMA_BALANCING */

#ifdef CONFIG_MIGRATION
diff --git a/mm/migrate.c b/mm/migrate.c
index 9fdc2cc3dd1c..748b5cd48a1f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2675,5 +2675,36 @@ int migrate_misplaced_folio(struct folio *folio, int node)
BUG_ON(!list_empty(&migratepages));
return nr_remaining ? -EAGAIN : 0;
}
+
+/*
+ * Batch variant of migrate_misplaced_folio. Attempts to migrate
+ * a list of folios to the specified destination node.
+ *
+ * Caller is expected to have isolated the folios by calling
+ * migrate_misplaced_folio_prepare(), which will result in an
+ * elevated reference count on the folio.
+ *
+ * This function will un-isolate the folios, drop the reference taken
+ * on them, and remove them from the list before returning.
+ */
+int migrate_misplaced_folios_batch(struct list_head *folio_list, int node)
+{
+ pg_data_t *pgdat = NODE_DATA(node);
+ unsigned int nr_succeeded;
+ int nr_remaining;
+
+ nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio,
+ NULL, node, MIGRATE_ASYNC,
+ MR_NUMA_MISPLACED, &nr_succeeded);
+ if (nr_remaining)
+ putback_movable_pages(folio_list);
+
+ if (nr_succeeded) {
+ count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+ mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);
+ }
+ BUG_ON(!list_empty(folio_list));
+ return nr_remaining ? -EAGAIN : 0;
+}
#endif /* CONFIG_NUMA_BALANCING */
#endif /* CONFIG_NUMA */
--
2.34.1