From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: SeongJae Park <sj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
<damon@lists.linux.dev>, <linux-mm@kvack.org>,
<sunnanyong@huawei.com>
Subject: Re: [PATCH] mm/damon/vaddr: attempt per-vma lock during page table walk
Date: Tue, 12 May 2026 21:59:26 +0800 [thread overview]
Message-ID: <618380f2-ea7b-4855-9dc5-d655dc30d5a7@huawei.com> (raw)
In-Reply-To: <20260512013116.80435-1-sj@kernel.org>
On 5/12/2026 9:31 AM, SeongJae Park wrote:
> On Mon, 11 May 2026 21:25:46 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>> Currently, DAMON virtual address operations use mmap_read_lock
>> during page table walks, which can cause unnecessary contention
>> under high concurrency.
>>
>> Introduce damon_va_walk_page_range() to first attempt acquiring a
>> per-vma lock. If the VMA is found and the range is fully contained
>> within it, the page table walk proceeds with the per-vma lock
>> instead of mmap_read_lock.
>>
>> This optimization is particularly effective for damon_va_young()
>> and damon_va_mkold(), which are frequently called and typically
>> operate within a single VMA.
>
> Makes sense. Do you have some measurements?
In fact, I do not have performance-related tests.
>
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Looks good to me. Nonetheless, because I'm not familiar with per-vma locking,
> I'd like to wait for the Sashiko review.
>
The Sashiko review reports an issue about inconsistent handling of VMAs
with the VM_PFNMAP flag[1]. We indeed do not need to handle VM_PFNMAP
VMAs for DAMON, so a quick fix is as follows:
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index eed2ae132a58..d27147603564 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -251,8 +251,11 @@ static void damon_va_walk_page_range(struct mm_struct *mm, unsigned long start,
 		goto lock_mmap;
 	}
 
-	ops->walk_lock = PGWALK_VMA_RDLOCK_VERIFY;
-	walk_page_range_vma(vma, start, end, ops, private);
+	if (!(vma->vm_flags & VM_PFNMAP)) {
+		ops->walk_lock = PGWALK_VMA_RDLOCK_VERIFY;
+		walk_page_range_vma(vma, start, end, ops, private);
+	}
+
 	vma_end_read(vma);
 	return;
Any more comments?
Thanks.
[1]
https://sashiko.dev/#/patchset/20260511132546.1973270-1-wangkefeng.wang@huawei.com?part=1
>
> Thanks,
> SJ
>
> [...]
>
Thread overview: 4+ messages
2026-05-11 13:25 [PATCH] mm/damon/vaddr: attempt per-vma lock during page table walk Kefeng Wang
2026-05-12 1:31 ` SeongJae Park
2026-05-12 13:59 ` Kefeng Wang [this message]
2026-05-12 14:08 ` SeongJae Park