From: "Leonardo Brás" <leobras@redhat.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
Marcelo Tosatti <mtosatti@redhat.com>,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
Date: Wed, 25 Jan 2023 08:06:46 -0300
Message-ID: <9e61ab53e1419a144f774b95230b789244895424.camel@redhat.com>
In-Reply-To: <Y9DpbVF+JR/G+5Or@dhcp22.suse.cz>
On Wed, 2023-01-25 at 09:33 +0100, Michal Hocko wrote:
> On Wed 25-01-23 04:34:57, Leonardo Bras wrote:
> > Disclaimer:
> > a - The cover letter got bigger than expected, so I had to split it into
> > sections to better organize myself. I am not very comfortable with it.
> > b - Performance numbers below did not include patch 5/5 (Remove flags
> > from memcg_stock_pcp), which could further improve performance for
> > drain_all_stock(), but I only noticed the optimization at the
> > last minute.
> >
> >
> > 0 - Motivation:
> > On the current codebase, when drain_all_stock() is run, it schedules a
> > drain_local_stock() on each CPU that has a percpu stock associated with a
> > descendant of the given root_memcg.
> >
> > This happens even on 'isolated CPUs', a feature commonly used by workloads that
> > are sensitive to interruption and context switching, such as vRAN and Industrial
> > Control Systems.
> >
> > Since this scheduling behavior is a problem for those workloads, the proposal is
> > to replace the current local_lock + schedule_work_on() solution with a per-cpu
> > spinlock.
>
> IIRC we have also discussed that isolated CPUs can simply opt out
> of the pcp caching and therefore the problem would be avoided
> altogether without changes to the locking scheme. I do not see anything
> regarding that in this submission. Could you elaborate on why you have
> abandoned this option?
Hello Michal,
I understand pcp caching is a nice-to-have.
So while I kept the idea of disabling pcp caching in mind as an option, I first
tried to understand what kind of impact we would see from changing the locking
scheme.
After gathering the data in the cover letter, I found that the performance impact
appears not to be that big. So, in order to keep the pcp cache active on isolated
CPUs, I decided to focus my effort on the locking scheme change.
My rationale is: if there is an inexpensive way of keeping the feature, why
should we abandon it?
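
To illustrate the direction, here is a minimal sketch of what replacing the
local_lock + schedule_work_on() path with a per-cpu spinlock could look like.
It is only a simplified assumption of the shape, not the actual patches; the
memcg_stock_pcp fields and the drain_stock() helper follow the existing
mm/memcontrol.c naming, but the details may differ:

/*
 * Sketch only: a per-cpu spinlock protecting the stock, so that
 * drain_all_stock() can drain remote CPUs directly instead of
 * queueing drain_local_stock() there via schedule_work_on().
 */
struct memcg_stock_pcp {
	spinlock_t stock_lock;		/* was local_lock_t */
	struct mem_cgroup *cached;
	unsigned int nr_pages;
};

static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

static void drain_all_stock(struct mem_cgroup *root_memcg)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = per_cpu_ptr(&memcg_stock, cpu);
		unsigned long flags;

		/* No work item scheduled: drain the remote stock right here. */
		spin_lock_irqsave(&stock->stock_lock, flags);
		if (stock->cached &&
		    mem_cgroup_is_descendant(stock->cached, root_memcg))
			drain_stock(stock);
		spin_unlock_irqrestore(&stock->stock_lock, flags);
	}
}

The point being that drain_all_stock() could then drain every CPU's stock
directly, without scheduling any work on the isolated CPUs.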
Best regards,
Leo