From: Huan Yang <link@vivo.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Tejun Heo <tj@kernel.org>, Zefan Li <lizefan.x@bytedance.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Peter Xu <peterx@redhat.com>,
"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
Yosry Ahmed <yosryahmed@google.com>,
Liu Shixin <liushixin2@huawei.com>,
Hugh Dickins <hughd@google.com>,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
opensource.kernel@vivo.com
Subject: Re: [RFC 0/4] Introduce unbalance proactive reclaim
Date: Thu, 9 Nov 2023 21:10:26 +0800 [thread overview]
Message-ID: <dd209adc-e14b-4760-846b-cea2c625f21f@vivo.com> (raw)
In-Reply-To: <ZUzUhWsrzDQwMKQ-@tiehlicka>
On 2023/11/9 20:45, Michal Hocko wrote:
>
> On Thu 09-11-23 18:55:09, Huan Yang wrote:
>> On 2023/11/9 17:53, Michal Hocko wrote:
>>>
>>> On Thu 09-11-23 09:56:46, Huan Yang wrote:
>>>> On 2023/11/8 22:06, Michal Hocko wrote:
>>>>>
>>>>> On Wed 08-11-23 14:58:11, Huan Yang wrote:
>>>>>> In some cases, we need to selectively reclaim file pages or anonymous
>>>>>> pages in an unbalanced manner.
>>>>>>
>>>>>> For example, when an application is pushed to the background and frozen,
>>>>>> it may not be opened for a long time, and we can safely reclaim the
>>>>>> application's anonymous pages, but we do not want to touch the file pages.
>>>>> Could you explain why? And also why do you need to swap out in that
>>>>> case?
>>>> When an application is frozen, it usually means that we predict that
>>>> it will not be used for a long time. In order to proactively save some
>>>> memory, our strategy will choose to compress the application's private
>>>> data into zram. And we will also select some of the cold application
>>>> data that we think is in zram and swap it out.
>>>>
>>>> The above operations assume that anonymous pages are private to the
>>>> application. After the application is frozen, compressing these pages
>>>> into zram can save memory to some extent without worrying about
>>>> frequent refaults.
>>> Why don't you rely on the default reclaim heuristics? In other words do
>> As I mentioned earlier, the madvise approach may not be suitable for my
>> needs.
> I was asking about default reclaim behavior not madvise here.
Sorry for the misunderstanding.
>
>>> you have any numbers showing that a selective reclaim results in a much
>> In the mobile field, we have a core metric called application residency.
> As already pointed out in other reply, make sure you explain this so
> that we, who are not active in mobile field, can understand the metric,
> how it is affected by the tooling relying on this interface.
OK.
>
>> This mechanism can help us improve the application residency if we can
>> provide a good freeze detection and proactive reclamation policy.
>>
>> I can only provide specific data from our internal tests, and it may
>> be older data, and it tested using cgroup v1:
>>
>> In 12G ram phone, app residency improve from 29 to 38.
> cgroup v1 is in maintenance mode and new extension would need to pass
> even a higher feasibility test than v2 based interface. Also make sure
> that you are testing the current upstream kernel.
OK. If a cgroup v2 based interface is expected, I will rework the series
against cgroup v2 and provide fresh test data.
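For reference, the cgroup v2 flow we would test looks roughly like the
sketch below. memory.reclaim and cgroup.freeze are existing v2 interfaces;
the "swappiness=" argument is only an illustration of the kind of
per-request anon/file bias this series proposes, not a merged ABI, and the
cgroup path is made up for the example:

```shell
#!/bin/sh
# Sketch: anon-biased proactive reclaim for a backgrounded, frozen app.
# Assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and a per-app
# cgroup (path is hypothetical).
CG=/sys/fs/cgroup/frozen-apps/app0

# Freeze the workload first, so its anonymous pages are genuinely cold
# and refaults after swap-out are unlikely.
echo 1 > "$CG/cgroup.freeze"

# Ask the kernel to reclaim 512MiB from this cgroup, biased toward
# anonymous memory (compressed into zram via the configured swap device).
# The swappiness= argument is the proposed per-request knob; a stock
# kernel of this era accepts only the byte count.
echo "512M swappiness=200" > "$CG/memory.reclaim"
```

With a plain `echo 512M > memory.reclaim`, reclaim follows the default
heuristics and may evict file pages we want to keep hot; the per-request
bias is what lets the policy daemon target only the frozen app's
anonymous memory.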
>
> Also let me stress out that you are proposing an extension to the user
> visible API and we will have to maintain that for ever. So make sure
> your justification is solid and understandable.
Thank you very much for your explanation. Let's focus on these
discussions in another email.
> --
> Michal Hocko
> SUSE Labs
--
Thanks,
Huan Yang
Thread overview: 58+ messages
2023-11-08 6:58 [RFC 0/4] Introduce unbalance proactive reclaim Huan Yang
2023-11-08 6:58 ` [PATCH 1/4] mm: vmscan: LRU unbalance cgroup reclaim Huan Yang
2023-11-08 6:58 ` [PATCH 2/4] mm: multi-gen LRU: MGLRU unbalance reclaim Huan Yang
2023-11-08 12:34 ` kernel test robot
2023-11-09 11:08 ` kernel test robot
2023-12-04 6:53 ` Dan Carpenter
2023-11-08 6:58 ` [PATCH 3/4] mm: memcg: implement unbalance proactive reclaim Huan Yang
2023-11-08 6:58 ` [PATCH 4/4] mm: memcg: apply proactive reclaim into cgroupv1 Huan Yang
2023-11-08 21:06 ` kernel test robot
2023-11-08 7:35 ` [RFC 0/4] Introduce unbalance proactive reclaim Huang, Ying
2023-11-08 7:53 ` Huan Yang
2023-11-08 8:09 ` Huang, Ying
2023-11-08 8:14 ` Yosry Ahmed
2023-11-08 8:21 ` Huan Yang
2023-11-08 9:00 ` Yosry Ahmed
2023-11-08 9:05 ` Huan Yang
2023-11-08 8:00 ` Yosry Ahmed
2023-11-08 8:26 ` Huan Yang
2023-11-08 8:59 ` Yosry Ahmed
2023-11-08 9:12 ` Huan Yang
2023-11-08 14:06 ` Michal Hocko
2023-11-09 1:56 ` Huan Yang
2023-11-09 3:15 ` Huang, Ying
2023-11-09 3:38 ` Huan Yang
2023-11-09 9:57 ` Michal Hocko
2023-11-09 10:29 ` Huan Yang
2023-11-09 10:39 ` Michal Hocko
2023-11-09 10:50 ` Huan Yang
2023-11-09 12:40 ` Michal Hocko
2023-11-09 13:07 ` Huan Yang
2023-11-09 13:46 ` Michal Hocko
2023-11-10 3:48 ` Huan Yang
2023-11-10 12:24 ` Michal Hocko
2023-11-13 2:17 ` Huan Yang
2023-11-13 6:10 ` Huang, Ying
2023-11-13 6:28 ` Huan Yang
2023-11-13 8:05 ` Huang, Ying
2023-11-13 8:26 ` Huan Yang
2023-11-14 9:54 ` Michal Hocko
2023-11-14 9:56 ` Michal Hocko
2023-11-15 6:52 ` Huang, Ying
2023-11-14 9:50 ` Michal Hocko
2023-11-10 1:19 ` Huang, Ying
2023-11-10 2:44 ` Huan Yang
2023-11-10 4:00 ` Huang, Ying
2023-11-10 6:21 ` Huan Yang
2023-11-10 12:32 ` Michal Hocko
2023-11-13 1:54 ` Huan Yang
2023-11-14 10:04 ` Michal Hocko
2023-11-14 12:37 ` Huan Yang
2023-11-14 13:03 ` Michal Hocko
2023-11-15 2:11 ` Huan Yang
2023-11-09 9:53 ` Michal Hocko
2023-11-09 10:55 ` Huan Yang
2023-11-09 12:45 ` Michal Hocko
2023-11-09 13:10 ` Huan Yang [this message]
2023-11-08 16:14 ` Andrew Morton
2023-11-09 1:58 ` Huan Yang