From: "Leonardo Brás" <leobras@redhat.com>
To: Michal Hocko <mhocko@suse.com>,
Roman Gushchin <roman.gushchin@linux.dev>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org,
Frederic Weisbecker <fweisbecker@suse.de>
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
Date: Fri, 27 Jan 2023 04:22:20 -0300 [thread overview]
Message-ID: <52a0f1e593b1ec0ca7e417ba37680d65df22de82.camel@redhat.com> (raw)
In-Reply-To: <Y9N5CI8PpsfiaY9c@dhcp22.suse.cz>
On Fri, 2023-01-27 at 08:11 +0100, Michal Hocko wrote:
> [Cc Frederic]
>
> On Thu 26-01-23 15:12:35, Roman Gushchin wrote:
> > On Thu, Jan 26, 2023 at 08:41:34AM +0100, Michal Hocko wrote:
> [...]
> > > > Essentially each cpu will try to grab the remains of the memory quota
> > > > and move it locally. I wonder in such circumstances if we need to
> > > > disable the pcp-caching on a per-cgroup basis.
> > >
> > > I think it would be more than sufficient to disable pcp charging on an
> > > isolated cpu.
> >
> > It might have significant performance consequences.
>
> Is it really significant?
>
> > I'd rather opt out of stock draining for isolated cpus: it might slightly reduce
> > the accuracy of memory limits and slightly increase the memory footprint (all
> > those dying memcgs...), but the impact will be limited. Actually it is limited
> > by the number of cpus.
>
> Hmm, OK, I had misunderstood your proposal. Yes, the overall pcp charges
> potentially left behind should be small, and that shouldn't really be a
> concern for memcg oom situations (unless the limit is very small, and
> workloads on isolated cpus using such small hard limits are way beyond my
> imagination).
>
> My first thought was that those charges could be left behind without any
> upper bound, but in reality sooner or later something should be running
> on those cpus, and if the memcg is gone the pcp cache would get refilled
> and the old charges would be gone.
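
For a rough sense of the bound being discussed here (a back-of-the-envelope
sketch, assuming the per-cpu stock is capped at MEMCG_CHARGE_BATCH, currently
64 pages):

	leftover <= nr_isolated_cpus * MEMCG_CHARGE_BATCH pages
	e.g. 8 isolated cpus * 64 pages * 4 KiB/page = 2 MiB at most

so even with many isolated cpus the stale charges stay small compared to any
realistic memcg limit.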
>
> So yes, this is actually a better and even simpler solution. All we need
> is something like this:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ab457f0394ab..13b84bbd70ba 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2344,6 +2344,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  		struct mem_cgroup *memcg;
>  		bool flush = false;
>  
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +
>  		rcu_read_lock();
>  		memcg = stock->cached;
>  		if (memcg && stock->nr_pages &&
>
> There is no such cpu_is_isolated() AFAICS, so we would need help from
> NOHZ and cpuisol people to create one for us. Frederic, would such an
> abstraction make any sense from your POV?

IIUC, 'if (cpu_is_isolated())' would instead be:

if (!housekeeping_cpu(smp_processor_id(), HK_TYPE_DOMAIN) ||
    !housekeeping_cpu(smp_processor_id(), HK_TYPE_WQ))
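
For context, here is a minimal sketch of what such a helper could look like,
assuming it would sit next to housekeeping_cpu() in
include/linux/sched/isolation.h and reuse the HK_TYPE_DOMAIN/HK_TYPE_WQ
combination above (the name and exact flag set are only a guess, pending
input from the NOHZ/cpuisol folks):

/*
 * Hypothetical helper, not in the tree at the time of writing: treat a
 * cpu as isolated when it is excluded from either the scheduler-domain
 * or the unbound-workqueue housekeeping masks.
 */
static inline bool cpu_is_isolated(int cpu)
{
	return !housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_cpu(cpu, HK_TYPE_WQ);
}

With a cpu parameter, drain_all_stock() could pass the iterated cpu directly
instead of relying on smp_processor_id().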
Thread overview: 48+ messages
2023-01-25 7:34 [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Leonardo Bras
2023-01-25 7:34 ` [PATCH v2 1/5] mm/memcontrol: Align percpu memcg_stock to cache Leonardo Bras
2023-01-25 7:34 ` [PATCH v2 2/5] mm/memcontrol: Change stock_lock type from local_lock_t to spinlock_t Leonardo Bras
2023-01-25 7:35 ` [PATCH v2 3/5] mm/memcontrol: Reorder memcg_stock_pcp members to avoid holes Leonardo Bras
2023-01-25 7:35 ` [PATCH v2 4/5] mm/memcontrol: Perform all stock drain in current CPU Leonardo Bras
2023-01-25 7:35 ` [PATCH v2 5/5] mm/memcontrol: Remove flags from memcg_stock_pcp Leonardo Bras
2023-01-25 8:33 ` [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Michal Hocko
2023-01-25 11:06 ` Leonardo Brás
2023-01-25 11:39 ` Michal Hocko
2023-01-25 18:22 ` Marcelo Tosatti
2023-01-25 23:14 ` Roman Gushchin
2023-01-26 7:41 ` Michal Hocko
2023-01-26 18:03 ` Marcelo Tosatti
2023-01-26 19:20 ` Michal Hocko
2023-01-27 0:32 ` Marcelo Tosatti
2023-01-27 6:58 ` Michal Hocko
2023-02-01 18:31 ` Roman Gushchin
2023-01-26 23:12 ` Roman Gushchin
2023-01-27 7:11 ` Michal Hocko
2023-01-27 7:22 ` Leonardo Brás [this message]
2023-01-27 8:12 ` Leonardo Brás
2023-01-27 9:23 ` Michal Hocko
2023-01-27 13:03 ` Frederic Weisbecker
2023-01-27 13:58 ` Michal Hocko
2023-01-27 18:18 ` Roman Gushchin
2023-02-03 15:21 ` Michal Hocko
2023-02-03 19:25 ` Roman Gushchin
2023-02-13 13:36 ` Michal Hocko
2023-01-27 7:14 ` Leonardo Brás
2023-01-27 7:20 ` Michal Hocko
2023-01-27 7:35 ` Leonardo Brás
2023-01-27 9:29 ` Michal Hocko
2023-01-27 19:29 ` Leonardo Brás
2023-01-27 23:50 ` Roman Gushchin
2023-01-26 18:19 ` Marcelo Tosatti
2023-01-27 5:40 ` Leonardo Brás
2023-01-26 2:01 ` Hillf Danton
2023-01-26 7:45 ` Michal Hocko
2023-01-26 18:14 ` Marcelo Tosatti
2023-01-26 19:13 ` Michal Hocko
2023-01-27 6:55 ` Leonardo Brás
2023-01-31 11:35 ` Marcelo Tosatti
2023-02-01 4:36 ` Leonardo Brás
2023-02-01 12:52 ` Michal Hocko
2023-02-01 12:41 ` Michal Hocko
2023-02-04 4:55 ` Leonardo Brás
2023-02-05 19:49 ` Roman Gushchin
2023-02-07 3:18 ` Leonardo Brás