From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
lkp@intel.com, akpm@linux-foundation.org, y-goto@fujitsu.com,
mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, mgorman@suse.de, vschneid@redhat.com,
Li Zhijian <lizhijian@fujitsu.com>,
Ben Segall <bsegall@google.com>
Subject: Re: [PATCH RFC v3] mm: memory-tiering: Fix PGPROMOTE_CANDIDATE counting
Date: Fri, 25 Jul 2025 14:39:00 +0800
Message-ID: <87v7ng3hbv.fsf@DESKTOP-5N7EMDA>
In-Reply-To: <982da1b2-0024-4c01-b586-02c0b8a41e95@fujitsu.com> (Shiyang Ruan's message of "Fri, 25 Jul 2025 10:20:44 +0800")
Shiyang Ruan <ruansy.fnst@fujitsu.com> writes:
> On 2025/7/24 15:36, Huang, Ying wrote:
>> Shiyang Ruan <ruansy.fnst@fujitsu.com> writes:
>>
>>> On 2025/7/23 11:09, Huang, Ying wrote:
>>>> Ruan Shiyang <ruansy.fnst@fujitsu.com> writes:
>>>>
>>>>> From: Li Zhijian <lizhijian@fujitsu.com>
>>>>>
>>>>> ===
>>>>> Changes since v2:
>>>>> 1. Per Huang's suggestion, add a new stat so that these pages are
>>>>> not counted in PGPROMOTE_CANDIDATE, avoiding changes to the rate-limit
>>>>> mechanism.
>>>>> ===
>>>> This isn't the usual place for a changelog; please refer to other
>>>> patch emails.
>>>
>>> OK. I'll move this part down below.
>>>>> Goto-san reported confusing pgpromote statistics where the
>>>>> pgpromote_success count significantly exceeded pgpromote_candidate.
>>>>>
>>>>> On a system with three nodes (nodes 0-1: DRAM 4GB, node 2: NVDIMM 4GB):
>>>>> # Enable demotion only
>>>>> echo 1 > /sys/kernel/mm/numa/demotion_enabled
>>>>> numactl -m 0-1 memhog -r200 3500M >/dev/null &
>>>>> pid=$!
>>>>> sleep 2
>>>>> numactl memhog -r100 2500M >/dev/null &
>>>>> sleep 10
>>>>> kill -9 $pid # terminate the 1st memhog
>>>>> # Enable promotion
>>>>> echo 2 > /proc/sys/kernel/numa_balancing
>>>>>
>>>>> After a few seconds, we observed `pgpromote_candidate < pgpromote_success`:
>>>>> $ grep -e pgpromote /proc/vmstat
>>>>> pgpromote_success 2579
>>>>> pgpromote_candidate 0
>>>>>
>>>>> In this scenario, after terminating the first memhog, the conditions of
>>>>> pgdat_free_space_enough() are quickly met, which triggers promotion.
>>>>> However, these migrated pages are only counted in PGPROMOTE_SUCCESS,
>>>>> not in PGPROMOTE_CANDIDATE.
>>>>>
>>>>> To resolve these confusing statistics, introduce
>>>>> PGPROMOTE_CANDIDATE_NOLIMIT to count these previously missed promoted
>>>>> pages. These pages are deliberately not counted in PGPROMOTE_CANDIDATE,
>>>>> to avoid changing the algorithm or performance of the existing
>>>>> promotion rate limit.
>>>>>
>>>>> Perhaps PGPROMOTE_CANDIDATE_NOLIMIT is not well named; please comment
>>>>> if you have a better idea.
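For context, the path in question is the free-space fast path in
should_numa_migrate_memory() (kernel/sched/fair.c). A rough sketch of
that logic as I read current mainline, with the hint-fault latency and
threshold details abridged:

	if (pgdat_free_space_enough(pgdat)) {
		/* Workload changed, reset the hot threshold. */
		pgdat->nbp_threshold = 0;
		/*
		 * Fast path: promote without consulting the rate
		 * limiter, so PGPROMOTE_CANDIDATE is never bumped for
		 * these pages -- hence the counter gap shown above.
		 */
		return true;
	}

	/* ... hint-fault latency / hot threshold check elided ... */

	/*
	 * Rate-limited path: numa_promotion_rate_limit() is what
	 * updates PGPROMOTE_CANDIDATE via mod_node_page_state().
	 */
	return !numa_promotion_rate_limit(pgdat, rate_limit,
					  folio_nr_pages(folio));
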
>>>> Yes. Naming is hard. I guess that the name comes from the promotion
>>>> that isn't rate limited. I asked DeepSeek what a good abbreviation
>>>> for "not rate limited" would be. Its answer is "NRL". I don't know
>>>> whether it's good, but "NOT_RATE_LIMITED" seems too long.
>>>
>>> "NRL" Sounds good to me.
>>>
>>> I'm thinking of another one: since it's not rate limited, pages could
>>> be migrated quickly. How about PGPROMOTE_CANDIDATE_FAST?
>> This sounds good to me. Thanks!
>
> Gemini 2.5 gave me a more radical name for it:
>
> /*
> * Candidate pages for promotion based on hint fault latency. This counter
> * is used by the feedback mechanism to control the promotion rate and
> * adjust the hot threshold.
> */
> PGPROMOTE_CANDIDATE,
> /*
> * Pages promoted aggressively to a fast-tier node when it has sufficient
> * free space. These promotions bypass the regular hotness checks and do
> * NOT influence the promotion rate-limiter or threshold-adjustment logic.
> * This is for statistics/monitoring purposes.
> */
> PGPROMOTED_AGGRESSIVE,
>
> I think this one is concise and easy to understand with the
> comments. What do you think? If this one is not appropriate, then I
> will go with "_NRL" as you suggested.
In fact, we still count candidate pages here. Although there's enough
free space in the target node, the promotion may still fail, for
example, due to an elevated refcount.
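
For reference, a minimal sketch of where the new accounting would go in
the v3 approach (the counter name is still open; PGPROMOTE_CANDIDATE_NOLIMIT
is the placeholder used there):

	if (pgdat_free_space_enough(pgdat)) {
		/* Workload changed, reset the hot threshold. */
		pgdat->nbp_threshold = 0;
		/*
		 * Account these pages under a separate counter so the
		 * vmstat numbers add up, without feeding them into the
		 * rate-limit/threshold feedback that reads
		 * PGPROMOTE_CANDIDATE.
		 */
		mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE_NOLIMIT,
				    folio_nr_pages(folio));
		return true;
	}

Whatever the final name, the key property is that the new counter is
written for statistics but never read by the feedback logic.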
---
Best Regards,
Huang, Ying
[snip]