From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: Shivank Garg <shivankg@amd.com>,
akpm@linux-foundation.org, kinseyho@google.com,
weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
vbabka@kernel.org, willy@infradead.org, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, ziy@nvidia.com,
matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
apopple@nvidia.com, dave@stgolabs.net,
Jonathan.Cameron@huawei.com, rkodsara@amd.com,
vkoul@kernel.org, bharata@amd.com, sj@kernel.org,
rientjes@google.com, xuezhengchu@huawei.com,
yiannis@zptcorp.com, dave.hansen@intel.com, hannes@cmpxchg.org,
jhubbard@nvidia.com, peterx@redhat.com, riel@surriel.com,
shakeel.butt@linux.dev, stalexan@redhat.com, tj@kernel.org,
nifan.cxl@gmail.com, jic23@kernel.org, aneesh.kumar@kernel.org,
nathan.lynch@amd.com, Frank.li@nxp.com, djbw@kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 0/7] Accelerate page migration with batch copying and hardware offload
Date: Tue, 12 May 2026 10:35:02 +0800 [thread overview]
Message-ID: <874ikdwe21.fsf@DESKTOP-5N7EMDA> (raw)
In-Reply-To: <98a16642-35b7-4cf9-9ee2-8de15e877920@kernel.org> (David Hildenbrand's message of "Mon, 11 May 2026 17:53:24 +0200")
"David Hildenbrand (Arm)" <david@kernel.org> writes:
> On 4/28/26 17:50, Shivank Garg wrote:
>> This is the fifth RFC of the patchset to enhance page migration by
[snip]
>
>>
>> 3. Per-caller offload selection: Today eligibility is by migrate_reason
>> only. Some are latency-tolerant, others may not be. Is reason the
>> right granularity, or do we want a per-caller hint?
>
> Isn't it sufficient to just do it based on the #folios or sth like that?
>
> If someone migrates a handful of folios, latency is likely more important (and
> batching less beneficial).
>
> I'd assume when migrating many folios, batching could just always be done. Or
> what's the concern?
IIUC, for callers like the migrate_pages() syscall, it's possible that
almost all folios of a process are passed to migrate_pages() at once.
However, I think we still need to keep the folio-inaccessible time
reasonable.
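To illustrate the tradeoff being discussed, a folio-count heuristic for
offload could be combined with a sub-batch cap so that even a caller
migrating nearly a whole process keeps the per-folio inaccessible window
bounded. The threshold names and values below are purely hypothetical,
not taken from the patchset:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical tunables: offload only pays off beyond some folio count,
 * and sub-batches cap how long any one folio stays unmapped while its
 * data is being copied by the engine. */
#define OFFLOAD_MIN_FOLIOS   64
#define SUBBATCH_MAX_FOLIOS 512

/* Decide whether a migration of nr_folios should use the copy engine. */
static int should_offload_copy(size_t nr_folios)
{
	return nr_folios >= OFFLOAD_MIN_FOLIOS;
}

/* Split a large migration into sub-batches so the folio-inaccessible
 * time stays bounded even when a caller (e.g. the migrate_pages()
 * syscall) passes nearly all folios of a process. */
static size_t next_subbatch(size_t remaining)
{
	return remaining < SUBBATCH_MAX_FOLIOS ? remaining
					       : SUBBATCH_MAX_FOLIOS;
}
```

With such a scheme, a handful of folios would take the latency-sensitive
CPU-copy path unconditionally, while large migrations would be offloaded
but unmapped only SUBBATCH_MAX_FOLIOS at a time.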
[snip]
---
Best Regards,
Huang, Ying
Thread overview: 31+ messages
2026-04-28 15:50 [PATCH 0/7] Accelerate page migration with batch copying and hardware offload Shivank Garg
2026-04-28 15:50 ` [PATCH 1/7] mm/migrate: rename PAGE_ migration flags to FOLIO_ Shivank Garg
2026-04-30 9:07 ` Huang, Ying
2026-04-28 15:50 ` [PATCH 2/7] mm/migrate: use migrate_info field instead of private Shivank Garg
2026-05-07 9:43 ` Huang, Ying
2026-05-11 15:22 ` David Hildenbrand (Arm)
2026-04-28 15:50 ` [PATCH 3/7] mm/migrate: skip data copy for already-copied folios Shivank Garg
2026-05-11 15:35 ` David Hildenbrand (Arm)
2026-04-28 15:50 ` [PATCH 4/7] mm/migrate: add batch-copy path in migrate_pages_batch Shivank Garg
2026-05-11 15:40 ` David Hildenbrand (Arm)
2026-04-28 15:50 ` [PATCH 5/7] mm/migrate: add copy offload registration infrastructure Shivank Garg
2026-05-11 15:46 ` David Hildenbrand (Arm)
2026-05-11 15:50 ` David Hildenbrand (Arm)
2026-04-28 15:50 ` [PATCH 6/7] drivers/migrate_offload: add DMA batch copy driver (dcbm) Shivank Garg
2026-04-28 15:50 ` [PATCH 7/7] mm/migrate: adjust NR_MAX_BATCHED_MIGRATION for testing Shivank Garg
2026-04-28 17:11 ` [PATCH 0/7] Accelerate page migration with batch copying and hardware offload Garg, Shivank
2026-04-28 19:33 ` David Hildenbrand (Arm)
2026-04-29 5:51 ` Garg, Shivank
2026-04-30 8:47 ` Huang, Ying
2026-05-08 11:04 ` Garg, Shivank
2026-05-08 11:28 ` Huang, Ying
2026-05-08 12:34 ` Garg, Shivank
2026-05-09 7:49 ` Huang, Ying
2026-05-10 15:03 ` Garg, Shivank
2026-05-12 2:15 ` Huang, Ying
2026-05-07 9:58 ` Huang, Ying
2026-05-11 15:19 ` David Hildenbrand (Arm)
2026-05-12 1:45 ` Huang, Ying
2026-05-11 15:53 ` David Hildenbrand (Arm)
2026-05-12 2:35 ` Huang, Ying [this message]
2026-05-12 6:34 ` David Hildenbrand (Arm)