From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Fri, 18 Jul 2025 13:04:54 +0800
Subject: Re: [PATCH v9 13/14] khugepaged: add per-order mTHP khugepaged stats
Message-ID: <94c8899a-f116-4b6a-94d3-f8295ee3f535@linux.alibaba.com>
In-Reply-To: <20250714003207.113275-14-npache@redhat.com>
References: <20250714003207.113275-1-npache@redhat.com> <20250714003207.113275-14-npache@redhat.com>
To: Nico Pache, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: david@redhat.com, ziy@nvidia.com, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, akpm@linux-foundation.org, baohua@kernel.org, willy@infradead.org, peterx@redhat.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org, hughd@google.com

On 2025/7/14 08:32, Nico Pache wrote:
> With mTHP support inplace, let add the per-order mTHP stats for
> exceeding NONE, SWAP, and SHARED.
> 
> Signed-off-by: Nico Pache
> ---
>   Documentation/admin-guide/mm/transhuge.rst | 17 +++++++++++++++++
>   include/linux/huge_mm.h                    |  3 +++
>   mm/huge_memory.c                           |  7 +++++++
>   mm/khugepaged.c                            | 15 ++++++++++++---
>   4 files changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 2c523dce6bc7..28c8af61efba 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -658,6 +658,23 @@ nr_anon_partially_mapped
>          an anonymous THP as "partially mapped" and count it here, even though it
>          is not actually partially mapped anymore.
>  
> +collapse_exceed_swap_pte
> +       The number of anonymous THP which contain at least one swap PTE.
> +       Currently khugepaged does not support collapsing mTHP regions that
> +       contain a swap PTE.
> +
> +collapse_exceed_none_pte
> +       The number of anonymous THP which have exceeded the none PTE threshold.
> +       With mTHP collapse, a bitmap is used to gather the state of a PMD region
> +       and is then recursively checked from largest to smallest order against
> +       the scaled max_ptes_none count. This counter indicates that the next
> +       enabled order will be checked.
> +
> +collapse_exceed_shared_pte
> +       The number of anonymous THP which contain at least one shared PTE.
> +       Currently khugepaged does not support collapsing mTHP regions that
> +       contain a shared PTE.
> +
>  As the system ages, allocating huge pages may be expensive as the
>  system uses memory compaction to copy data around memory to free a
>  huge page for use. There are some counters in ``/proc/vmstat`` to help
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 4042078e8cc9..e0a27f80f390 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -141,6 +141,9 @@ enum mthp_stat_item {
>  	MTHP_STAT_SPLIT_DEFERRED,
>  	MTHP_STAT_NR_ANON,
>  	MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
> +	MTHP_STAT_COLLAPSE_EXCEED_SWAP,
> +	MTHP_STAT_COLLAPSE_EXCEED_NONE,
> +	MTHP_STAT_COLLAPSE_EXCEED_SHARED,
>  	__MTHP_STAT_COUNT
>  };
>  
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e2ed9493df77..57e5699cf638 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -632,6 +632,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
>  DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
>  DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
>  DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> +
>  
>  static struct attribute *anon_stats_attrs[] = {
>  	&anon_fault_alloc_attr.attr,
> @@ -648,6 +652,9 @@ static struct attribute *anon_stats_attrs[] = {
>  	&split_deferred_attr.attr,
>  	&nr_anon_attr.attr,
>  	&nr_anon_partially_mapped_attr.attr,
> +	&collapse_exceed_swap_pte_attr.attr,
> +	&collapse_exceed_none_pte_attr.attr,
> +	&collapse_exceed_shared_pte_attr.attr,
>  	NULL,
>  };
>  
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index d0c99b86b304..8a5873d0a23a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -594,7 +594,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  			continue;
>  		} else {
>  			result = SCAN_EXCEED_NONE_PTE;
> -			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> +			if (order == HPAGE_PMD_ORDER)
> +				count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> +			else
> +				count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);

Please follow the same logic as other mTHP statistics, meaning there is
no need to filter out PMD-sized orders, because mTHP also supports
PMD-sized orders. So the logic should be:

	if (order == HPAGE_PMD_ORDER)
		count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
	count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);

>  			goto out;
>  		}
>  	}
> @@ -623,8 +626,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		/* See khugepaged_scan_pmd(). */
>  		if (folio_maybe_mapped_shared(folio)) {
>  			++shared;
> -			if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> -			    shared > khugepaged_max_ptes_shared)) {
> +			if (order != HPAGE_PMD_ORDER) {
> +				result = SCAN_EXCEED_SHARED_PTE;
> +				count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> +				goto out;
> +			}

Ditto.

> +
> +			if (cc->is_khugepaged &&
> +			    shared > khugepaged_max_ptes_shared) {
>  				result = SCAN_EXCEED_SHARED_PTE;
>  				count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
>  				goto out;