From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Shivank Garg <shivankg@amd.com>, akpm@linux-foundation.org
Cc: kinseyho@google.com, weixugc@google.com, ljs@kernel.org,
Liam.Howlett@oracle.com, vbabka@kernel.org, willy@infradead.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
dave@stgolabs.net, Jonathan.Cameron@huawei.com, rkodsara@amd.com,
vkoul@kernel.org, bharata@amd.com, sj@kernel.org,
rientjes@google.com, xuezhengchu@huawei.com, yiannis@zptcorp.com,
dave.hansen@intel.com, hannes@cmpxchg.org, jhubbard@nvidia.com,
peterx@redhat.com, riel@surriel.com, shakeel.butt@linux.dev,
stalexan@redhat.com, tj@kernel.org, nifan.cxl@gmail.com,
jic23@kernel.org, aneesh.kumar@kernel.org, nathan.lynch@amd.com,
Frank.li@nxp.com, djbw@kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH 4/7] mm/migrate: add batch-copy path in migrate_pages_batch
Date: Mon, 11 May 2026 17:40:58 +0200
Message-ID: <b7956894-86b9-472f-b2e9-bb50c0316c39@kernel.org>
In-Reply-To: <20260428155043.39251-10-shivankg@amd.com>
On 4/28/26 17:50, Shivank Garg wrote:
> Add folios_mc_copy(), which walks the src and dst folio lists in lockstep
> and copies folio content via folio_mc_copy(). The folios_cnt parameter is
> unused here, but is part of the offload_copy callback signature used by
> later patches in the series.
>
> Split unmapped folios into batch-eligible (unmap_batch/dst_batch) and
> standard (unmap_single/dst_single) lists, gated by the
> migrate_offload_enabled static key, which is off by default. So, when no
> offload driver is active, the branch is never taken and everything goes
> through the standard path.
>
> After TLB flush, batch copy the eligible folios via folios_mc_copy()
> and pass already_copied=true into migrate_folios_move() so
> __migrate_folio() skips the per-folio copy.
>
> On batch copy failure, the already_copied flag stays false and each folio
> falls back to an individual copy.
>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
> include/linux/mm.h | 2 ++
> mm/migrate.c | 61 +++++++++++++++++++++++++++++++++++-----------
> mm/util.c | 30 +++++++++++++++++++++++
> 3 files changed, 79 insertions(+), 14 deletions(-)
[...]
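The split described in the commit message can be sketched as a small
userspace mock. All names here (mock_folio, split_unmapped, the plain
boolean standing in for the static key and for page_has_movable_ops())
are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a folio on a singly linked work list. */
struct mock_folio {
	bool movable_ops;        /* stand-in for page_has_movable_ops() */
	struct mock_folio *next; /* stand-in for the list_head linkage */
};

/* Stand-in for the migrate_offload_enabled static key (off by default). */
static bool migrate_offload_enabled;

/*
 * Walk @src, moving each folio onto either the batch-eligible list or
 * the standard (single) list. With offload disabled, the batch branch
 * is never taken and everything goes to the standard list.
 */
static void split_unmapped(struct mock_folio *src,
			   struct mock_folio **batch,
			   struct mock_folio **single)
{
	while (src) {
		struct mock_folio *f = src;

		src = src->next;
		if (migrate_offload_enabled && !f->movable_ops) {
			f->next = *batch;
			*batch = f;
		} else {
			f->next = *single;
			*single = f;
		}
	}
}
```

With the flag left false, even batch-eligible folios end up on the
standard list, matching the "off by default" behaviour described above.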
>
> +DEFINE_STATIC_KEY_FALSE(migrate_offload_enabled);
> +
> static const struct movable_operations *offline_movable_ops;
> static const struct movable_operations *zsmalloc_movable_ops;
>
> @@ -1724,6 +1727,12 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
> return nr_failed;
> }
>
> +/* movable_ops folios have their own migrate path */
> +static bool folio_supports_batch_copy(struct folio *folio)
> +{
> + return likely(!page_has_movable_ops(&folio->page));
> +}
As these things are not actually folios (and callers will have to be taught to
distinguish them way, way earlier), I guess you should make this
/* movable_ops pages have a separate migration path */
static bool page_supports_batch_copy(struct page *page)
...
> +
> static void migrate_folios_move(struct list_head *src_folios,
> struct list_head *dst_folios,
> free_folio_t put_new_folio, unsigned long private,
> @@ -1752,7 +1761,7 @@ static void migrate_folios_move(struct list_head *src_folios,
> /*
> * The rules are:
> * 0: folio will be freed
> - * -EAGAIN: stay on the unmap_folios list
> + * -EAGAIN: stay on the src_folios list
> * Other errno: put on ret_folios list
> */
> switch (rc) {
[...]
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -778,6 +778,36 @@ int folio_mc_copy(struct folio *dst, struct folio *src)
> }
> EXPORT_SYMBOL(folio_mc_copy);
>
> +/**
> + * folios_mc_copy - Copy the contents of a list of folios.
> + * @dst_list: destination folio list.
> + * @src_list: source folio list.
> + * @folios_cnt: unused here, present for callback signature compatibility.
> + *
> + * Walks the src and dst folio lists in lockstep and copies folio
> + * content via folio_mc_copy(). The caller must ensure both lists have
> + * the same number of entries. This may sleep.
This *function*
> + *
> + * Return: 0 on success, negative errno on failure.
> + */
> +int folios_mc_copy(struct list_head *dst_list, struct list_head *src_list,
> + unsigned int __always_unused folios_cnt)
> +{
> + struct folio *src, *dst;
> + int ret;
> +
> + dst = list_first_entry(dst_list, struct folio, lru);
> + list_for_each_entry(src, src_list, lru) {
> + ret = folio_mc_copy(dst, src);
> + if (ret)
> + return ret;
> + dst = list_next_entry(dst, lru);
> + }
Wouldn't it be cleaner to remember "already copied" immediately after we
performed the copy (succeeded with folio_mc_copy)?
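The suggestion can be sketched with a userspace mock (mock_folio,
mock_mc_copy and copy_list are hypothetical stand-ins, not the kernel
API): marking each destination right after its copy succeeds means a
mid-list failure leaves the already-copied prefix marked, so only the
remaining folios need the per-folio fallback.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a folio on a singly linked work list. */
struct mock_folio {
	bool copied;             /* "already copied" state, per folio */
	int rc_on_copy;          /* injected result of the copy */
	struct mock_folio *next;
};

/* Stand-in for folio_mc_copy(): returns the injected error code. */
static int mock_mc_copy(struct mock_folio *dst, struct mock_folio *src)
{
	(void)dst;
	return src->rc_on_copy;
}

/*
 * Lockstep walk as in folios_mc_copy(), but each dst is marked copied
 * immediately after its own copy succeeds, instead of relying on one
 * flag for the whole batch.
 */
static int copy_list(struct mock_folio *dst, struct mock_folio *src)
{
	for (; src; src = src->next, dst = dst->next) {
		int ret = mock_mc_copy(dst, src);

		if (ret)
			return ret; /* earlier folios keep copied == true */
		dst->copied = true;
	}
	return 0;
}
```

If the second copy in a three-folio batch fails, the first destination
stays marked and need not be re-copied on the fallback path.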
--
Cheers,
David