From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Shivank Garg <shivankg@amd.com>
Cc: <akpm@linux-foundation.org>, <david@kernel.org>,
<lorenzo.stoakes@oracle.com>, <Liam.Howlett@oracle.com>,
<vbabka@kernel.org>, <willy@infradead.org>, <rppt@kernel.org>,
<surenb@google.com>, <mhocko@suse.com>, <ziy@nvidia.com>,
<matthew.brost@intel.com>, <joshua.hahnjy@gmail.com>,
<rakie.kim@sk.com>, <byungchul@sk.com>, <gourry@gourry.net>,
<apopple@nvidia.com>, <dave@stgolabs.net>,
<Jonathan.Cameron@huawei.com>, <rkodsara@amd.com>,
<vkoul@kernel.org>, <bharata@amd.com>, <sj@kernel.org>,
<weixugc@google.com>, <dan.j.williams@intel.com>,
<rientjes@google.com>, <xuezhengchu@huawei.com>,
<yiannis@zptcorp.com>, <dave.hansen@intel.com>,
<hannes@cmpxchg.org>, <jhubbard@nvidia.com>,
<peterx@redhat.com>, <riel@surriel.com>,
<shakeel.butt@linux.dev>, <stalexan@redhat.com>,
<tj@kernel.org>, <nifan.cxl@gmail.com>,
<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: [RFC PATCH v4 3/6] mm/migrate: add batch-copy path in migrate_pages_batch
Date: Tue, 24 Mar 2026 16:42:00 +0800
Message-ID: <87se9pzkiv.fsf@DESKTOP-5N7EMDA>
In-Reply-To: <20260309120725.308854-10-shivankg@amd.com> (Shivank Garg's message of "Mon, 9 Mar 2026 12:07:27 +0000")
Shivank Garg <shivankg@amd.com> writes:
> Split unmapped folios into batch-eligible (src_batch/dst_batch) and
> standard (src_std/dst_std) lists, gated by the migrate_offload_enabled
> static key, which is off by default. When no offload driver is active, the
> branch is never taken and everything goes through the standard path.
>
> After TLB flush, batch copy the eligible folios via folios_mc_copy()
> and pass already_copied=true into migrate_folios_move() so
> __migrate_folio() skips the per-folio copy.
>
> On batch copy failure, the already_copied flag stays false and each folio
> falls back to an individual copy.
>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
> mm/migrate.c | 55 +++++++++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 44 insertions(+), 11 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 1d8c1fb627c9..69daa16f9cf3 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -43,6 +43,7 @@
> #include <linux/sched/sysctl.h>
> #include <linux/memory-tiers.h>
> #include <linux/pagewalk.h>
> +#include <linux/jump_label.h>
>
> #include <asm/tlbflush.h>
>
> @@ -51,6 +52,8 @@
> #include "internal.h"
> #include "swap.h"
>
> +DEFINE_STATIC_KEY_FALSE(migrate_offload_enabled);
> +
> static const struct movable_operations *offline_movable_ops;
> static const struct movable_operations *zsmalloc_movable_ops;
>
> @@ -1706,6 +1709,12 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
> return nr_failed;
> }
>
> +/* movable_ops folios have their own migrate path */
> +static bool folio_supports_batch_copy(struct folio *folio)
> +{
> + return likely(!page_has_movable_ops(&folio->page));
> +}
> +
> static void migrate_folios_move(struct list_head *src_folios,
> struct list_head *dst_folios,
> free_folio_t put_new_folio, unsigned long private,
> @@ -1805,8 +1814,12 @@ static int migrate_pages_batch(struct list_head *from,
> bool is_large = false;
> struct folio *folio, *folio2, *dst = NULL;
> int rc, rc_saved = 0, nr_pages;
> - LIST_HEAD(unmap_folios);
> - LIST_HEAD(dst_folios);
> + unsigned int nr_batch = 0;
> + bool batch_copied = false;
> + LIST_HEAD(src_batch);
> + LIST_HEAD(dst_batch);
> + LIST_HEAD(src_std);
> + LIST_HEAD(dst_std);
IMHO, the naming appears too copy-centric; how about unmap_batch and
unmap_single?  "unmap" is the migration step these lists come from.
> bool nosplit = (reason == MR_NUMA_MISPLACED);
>
> VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
> @@ -1943,7 +1956,7 @@ static int migrate_pages_batch(struct list_head *from,
> 		rc = migrate_folio_unmap(get_new_folio, put_new_folio,
> 				private, folio, &dst, mode, ret_folios);
> 		/*
> 		 * The rules are:
> 		 *	0: folio will be put on unmap_folios list,
> 		 *	   dst folio put on dst_folios list
> 		 *	-EAGAIN: stay on the from list
> 		 *	-ENOMEM: stay on the from list
> 		 *	Other errno: put on ret_folios list
> 		 */

The unmap_folios/dst_folios names in comments like this need to be
changed too.
> /* nr_failed isn't updated for not used */
> stats->nr_thp_failed += thp_retry;
> rc_saved = rc;
> - if (list_empty(&unmap_folios))
> + if (list_empty(&src_batch) && list_empty(&src_std))
> goto out;
> else
> goto move;
> @@ -1953,8 +1966,15 @@ static int migrate_pages_batch(struct list_head *from,
> nr_retry_pages += nr_pages;
> break;
> case 0:
> - list_move_tail(&folio->lru, &unmap_folios);
> - list_add_tail(&dst->lru, &dst_folios);
> + if (static_branch_unlikely(&migrate_offload_enabled) &&
> + folio_supports_batch_copy(folio)) {
> + list_move_tail(&folio->lru, &src_batch);
> + list_add_tail(&dst->lru, &dst_batch);
> + nr_batch++;
> + } else {
> + list_move_tail(&folio->lru, &src_std);
> + list_add_tail(&dst->lru, &dst_std);
> + }
> break;
> default:
> /*
> @@ -1977,17 +1997,28 @@ static int migrate_pages_batch(struct list_head *from,
> /* Flush TLBs for all unmapped folios */
> try_to_unmap_flush();
>
> + /* Batch-copy eligible folios before the move phase */
> + if (!list_empty(&src_batch)) {
> + rc = folios_mc_copy(&dst_batch, &src_batch, nr_batch);
> + batch_copied = (rc == 0);
> + }
> +
> retry = 1;
> for (pass = 0; pass < nr_pass && retry; pass++) {
> retry = 0;
> thp_retry = 0;
> nr_retry_pages = 0;
>
> - /* Move the unmapped folios */
> - migrate_folios_move(&unmap_folios, &dst_folios,
> - put_new_folio, private, mode, reason,
> - ret_folios, stats, &retry, &thp_retry,
> - &nr_failed, &nr_retry_pages, false);
> + if (!list_empty(&src_batch))
> + migrate_folios_move(&src_batch, &dst_batch, put_new_folio,
> + private, mode, reason, ret_folios, stats,
> + &retry, &thp_retry, &nr_failed,
> + &nr_retry_pages, batch_copied);
> + if (!list_empty(&src_std))
> + migrate_folios_move(&src_std, &dst_std, put_new_folio,
> + private, mode, reason, ret_folios, stats,
> + &retry, &thp_retry, &nr_failed,
> + &nr_retry_pages, false);
> }
> nr_failed += retry;
> stats->nr_thp_failed += thp_retry;
> @@ -1996,7 +2027,9 @@ static int migrate_pages_batch(struct list_head *from,
> rc = rc_saved ? : nr_failed;
> out:
> /* Cleanup remaining folios */
> - migrate_folios_undo(&unmap_folios, &dst_folios,
> + migrate_folios_undo(&src_batch, &dst_batch,
> + put_new_folio, private, ret_folios);
> + migrate_folios_undo(&src_std, &dst_std,
> put_new_folio, private, ret_folios);
>
> return rc;
---
Best Regards,
Huang, Ying
Thread overview: 21+ messages
2026-03-09 12:07 [RFC PATCH v4 0/6] Accelerate page migration with batch copying and hardware offload Shivank Garg
2026-03-09 12:07 ` [RFC PATCH v4 1/6] mm: introduce folios_mc_copy() for batch folio copying Shivank Garg
2026-03-12 9:41 ` David Hildenbrand (Arm)
2026-03-15 18:09 ` Garg, Shivank
2026-03-09 12:07 ` [RFC PATCH v4 2/6] mm/migrate: skip data copy for already-copied folios Shivank Garg
2026-03-12 9:44 ` David Hildenbrand (Arm)
2026-03-15 18:25 ` Garg, Shivank
2026-03-23 12:20 ` David Hildenbrand (Arm)
2026-03-24 8:22 ` Huang, Ying
2026-03-09 12:07 ` [RFC PATCH v4 3/6] mm/migrate: add batch-copy path in migrate_pages_batch Shivank Garg
2026-03-24 8:42 ` Huang, Ying [this message]
2026-03-09 12:07 ` [RFC PATCH v4 4/6] mm/migrate: add copy offload registration infrastructure Shivank Garg
2026-03-09 17:54 ` Gregory Price
2026-03-10 10:07 ` Garg, Shivank
2026-03-24 10:54 ` Huang, Ying
2026-03-09 12:07 ` [RFC PATCH v4 5/6] drivers/migrate_offload: add DMA batch copy driver (dcbm) Shivank Garg
2026-03-09 18:04 ` Gregory Price
2026-03-12 9:33 ` Garg, Shivank
2026-03-24 8:10 ` Huang, Ying
2026-03-09 12:07 ` [RFC PATCH v4 6/6] mm/migrate: adjust NR_MAX_BATCHED_MIGRATION for testing Shivank Garg
2026-03-18 14:29 ` [RFC PATCH v4 0/6] Accelerate page migration with batch copying and hardware offload Garg, Shivank