From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Shivank Garg <shivankg@amd.com>, akpm@linux-foundation.org
Cc: kinseyho@google.com, weixugc@google.com, ljs@kernel.org,
Liam.Howlett@oracle.com, vbabka@kernel.org, willy@infradead.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
dave@stgolabs.net, Jonathan.Cameron@huawei.com, rkodsara@amd.com,
vkoul@kernel.org, bharata@amd.com, sj@kernel.org,
rientjes@google.com, xuezhengchu@huawei.com, yiannis@zptcorp.com,
dave.hansen@intel.com, hannes@cmpxchg.org, jhubbard@nvidia.com,
peterx@redhat.com, riel@surriel.com, shakeel.butt@linux.dev,
stalexan@redhat.com, tj@kernel.org, nifan.cxl@gmail.com,
jic23@kernel.org, aneesh.kumar@kernel.org, nathan.lynch@amd.com,
Frank.li@nxp.com, djbw@kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH 3/7] mm/migrate: skip data copy for already-copied folios
Date: Mon, 11 May 2026 17:35:39 +0200
Message-ID: <810e9a58-9c08-4f5e-af5f-866685ca09b2@kernel.org>
In-Reply-To: <20260428155043.39251-8-shivankg@amd.com>
On 4/28/26 17:50, Shivank Garg wrote:
> Add a FOLIO_ALREADY_COPIED flag to the dst->migrate_info migration
> state. When set, __migrate_folio() skips folio_mc_copy() and
> performs metadata-only migration. All callers currently pass
> already_copied=false. The batch-copy path enables it later in a
> subsequent patch.
>
> Move the dst->migrate_info state enum earlier in the file so
> __migrate_folio() and move_to_new_folio() can see FOLIO_ALREADY_COPIED.
>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
> mm/migrate.c | 53 +++++++++++++++++++++++++++++++---------------------
> 1 file changed, 32 insertions(+), 21 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 03c2a6f7e5e4..c493e67e359d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -850,6 +850,19 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
> }
> EXPORT_SYMBOL(folio_migrate_flags);
>
> +/*
> + * To record some information during migration, we use the migrate_info
> + * field of struct folio of the newly allocated destination folio.
> + * This is safe because nobody is using it except us.
> + */
> +enum {
> + FOLIO_WAS_MAPPED = BIT(0),
> + FOLIO_WAS_MLOCKED = BIT(1),
> + FOLIO_ALREADY_COPIED = BIT(2),
I wonder whether we want to talk about "folio content copied", to avoid
confusing it with the folio flags being copied etc.
FOLIO_CONTENT_COPIED.
Thoughts?
> + FOLIO_OLD_STATES = FOLIO_WAS_MAPPED | FOLIO_WAS_MLOCKED |
> + FOLIO_ALREADY_COPIED,
> +};
> +
> /************************************************************
> * Migration functions
> ***********************************************************/
> @@ -859,14 +872,20 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
> enum migrate_mode mode)
> {
> int rc, expected_count = folio_expected_ref_count(src) + 1;
> + bool already_copied = (dst->migrate_info & FOLIO_ALREADY_COPIED);
const, and no need for ().
> +
> + if (already_copied)
> + dst->migrate_info = 0;
Hm, why is that required? Might deserve a comment.
Likely you only want to clear the "already copied" marker?
dst->migrate_info &= ~FOLIO_ALREADY_COPIED;
?
But I wonder if this really belongs exactly here.
>
> /* Check whether src does not have extra refs before we do more work */
> if (folio_ref_count(src) != expected_count)
> return -EAGAIN;
>
> - rc = folio_mc_copy(dst, src);
> - if (unlikely(rc))
> - return rc;
> + if (!already_copied) {
> + rc = folio_mc_copy(dst, src);
> + if (unlikely(rc))
> + return rc;
> + }
>
> rc = __folio_migrate_mapping(mapping, dst, src, expected_count);
> if (rc)
> @@ -1090,7 +1109,7 @@ static int fallback_migrate_folio(struct address_space *mapping,
> * 0 - success
> */
> static int move_to_new_folio(struct folio *dst, struct folio *src,
> - enum migrate_mode mode)
> + enum migrate_mode mode, bool already_copied)
> {
> struct address_space *mapping = folio_mapping(src);
> int rc = -EAGAIN;
> @@ -1098,6 +1117,9 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
> VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
> VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
>
> + if (already_copied)
> + dst->migrate_info = FOLIO_ALREADY_COPIED;
|= ?
> +
> if (!mapping)
> rc = migrate_folio(mapping, dst, src, mode);
> else if (mapping_inaccessible(mapping))
> @@ -1129,17 +1151,6 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
> return rc;
> }
>
> -/*
> - * To record some information during migration, we use the migrate_info
> - * field of struct folio of the newly allocated destination folio.
> - * This is safe because nobody is using it except us.
> - */
> -enum {
> - FOLIO_WAS_MAPPED = BIT(0),
> - FOLIO_WAS_MLOCKED = BIT(1),
> - FOLIO_OLD_STATES = FOLIO_WAS_MAPPED | FOLIO_WAS_MLOCKED,
> -};
> -
> static void __migrate_folio_record(struct folio *dst,
> int old_folio_state, struct anon_vma *anon_vma)
> {
> @@ -1353,7 +1364,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
> static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> struct folio *src, struct folio *dst,
> enum migrate_mode mode, enum migrate_reason reason,
> - struct list_head *ret)
> + struct list_head *ret, bool already_copied)
> {
> int rc;
> int old_folio_state = 0;
> @@ -1379,7 +1390,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> src_partially_mapped = folio_test_partially_mapped(src);
> }
>
> - rc = move_to_new_folio(dst, src, mode);
> + rc = move_to_new_folio(dst, src, mode, already_copied);
> if (rc)
> goto out;
>
> @@ -1536,7 +1547,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> }
>
> if (!folio_mapped(src))
> - rc = move_to_new_folio(dst, src, mode);
> + rc = move_to_new_folio(dst, src, mode, false);
... mode, /* already_copied = */ false
>
> if (page_was_mapped)
> remove_migration_ptes(src, !rc ? dst : src, ttu);
> @@ -1720,7 +1731,7 @@ static void migrate_folios_move(struct list_head *src_folios,
> struct list_head *ret_folios,
> struct migrate_pages_stats *stats,
> int *retry, int *thp_retry, int *nr_failed,
> - int *nr_retry_pages)
> + int *nr_retry_pages, bool already_copied)
> {
> struct folio *folio, *folio2, *dst, *dst2;
> bool is_thp;
> @@ -1737,7 +1748,7 @@ static void migrate_folios_move(struct list_head *src_folios,
>
> rc = migrate_folio_move(put_new_folio, private,
> folio, dst, mode,
> - reason, ret_folios);
> + reason, ret_folios, already_copied);
> /*
> * The rules are:
> * 0: folio will be freed
> @@ -1994,7 +2005,7 @@ static int migrate_pages_batch(struct list_head *from,
> migrate_folios_move(&unmap_folios, &dst_folios,
> put_new_folio, private, mode, reason,
> ret_folios, stats, &retry, &thp_retry,
> - &nr_failed, &nr_retry_pages);
> + &nr_failed, &nr_retry_pages, false);
> }
Ditto.
--
Cheers,
David