From: Michal Hocko <mhocko@suse.com>
To: mawupeng <mawupeng1@huawei.com>
Cc: ying.huang@intel.com, akpm@linux-foundation.org,
	mgorman@techsingularity.net, dmaluka@chromium.org,
	liushixin2@huawei.com, wangkefeng.wang@huawei.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm, proc: collect percpu free pages into the free pages
Date: Wed, 4 Sep 2024 09:28:28 +0200	[thread overview]
Message-ID: <ZtgMHFQ4NwdvL7_e@tiehlicka> (raw)
In-Reply-To: <ed533d8b-40b7-447f-8453-e03b291340fa@huawei.com>

On Wed 04-09-24 14:49:20, mawupeng wrote:
> 
> 
> On 2024/9/3 16:09, Michal Hocko wrote:
> > On Tue 03-09-24 09:50:48, mawupeng wrote:
> >>> Draining the remote PCP may not be that expensive now after commit
> >>> 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a spinlock").
> >>> No IPI is needed to drain the remote PCP.
> >>
> >> This looks really great; we could find a way to drain the PCP before
> >> going into the slowpath and swapping.
> > 
> > We currently drain after the first unsuccessful direct reclaim run. Is
> > that insufficient?
> 
> The reason I said the drain of the PCP is insufficient or expensive is
> based on your comment[1] :-). Since IPIs are no longer required after
> commit 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a
> spinlock"), this could be much better.
> 
> [1]: https://lore.kernel.org/linux-mm/ZWRYZmulV0B-Jv3k@tiehlicka/

There are other reasons I mentioned in that reply which play a role as
well.
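
(For reference, the drain mentioned above happens in the allocator
slowpath roughly as sketched below; this is a simplified rendering of
__alloc_pages_direct_reclaim() in mm/page_alloc.c with details elided,
not verbatim kernel code.)

static struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
			     unsigned int alloc_flags,
			     const struct alloc_context *ac,
			     unsigned long *did_some_progress)
{
	struct page *page = NULL;
	bool drained = false;

	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		return NULL;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * Reclaim made progress but the allocation still failed: the
	 * freed pages may be sitting in per-CPU pagelists, so drain
	 * them all once and retry before falling back further.
	 */
	if (!page && !drained) {
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}

	return page;
}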

> > Should we do a less aggressive draining sooner? Ideally
> > restricted to cpus on the same NUMA node maybe? Do you have any specific
> > workloads that would benefit from this?
> 
> Currently the problem is the amount of memory held in the PCP, which
> can grow to 4.6% (24644M) of the total 512G of memory.

Why is that a problem? Just because some tools miscalculate memory
pressure because they are based on MemAvailable? Or does this lead to
performance regressions on the kernel side? In other words, would the
same workload have behaved better if the amount of pcp-cache was reduced
without any userspace intervention?
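
For context, the accounting the patch proposes amounts to something
like the sketch below (modeled on the per-CPU pageset structures in
mm/page_alloc.c; treat the helper name and exact field layout as
assumptions, not the actual patch):

/*
 * Sum the free pages currently sitting in the per-CPU pagelists.
 * These pages are on no buddy free list, so MemFree/MemAvailable
 * do not see them today.
 */
static unsigned long pcp_free_pages(void)
{
	unsigned long sum = 0;
	struct zone *zone;
	int cpu;

	for_each_populated_zone(zone) {
		for_each_online_cpu(cpu)
			sum += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
	}

	return sum;
}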
-- 
Michal Hocko
SUSE Labs



Thread overview: 11+ messages
2024-08-30  1:44 [PATCH] mm, proc: collect percpu free pages into the free pages Wupeng Ma
2024-08-30  7:53 ` Huang, Ying
2024-09-02  1:11   ` mawupeng
2024-09-02  1:29     ` Huang, Ying
2024-09-03  1:50       ` mawupeng
2024-09-03  8:09         ` Michal Hocko
2024-09-04  6:49           ` mawupeng
2024-09-04  7:28             ` Michal Hocko [this message]
2024-09-10 12:11               ` mawupeng
2024-09-10 13:11                 ` Michal Hocko
2024-09-11  5:37                 ` Huang, Ying
