From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Kirill Tkhai <tkhai@ya.ru>
Cc: sultan@kerneltoast.com, dave@stgolabs.net,
penguin-kernel@I-love.SAKURA.ne.jp, paulmck@kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Shakeel Butt <shakeelb@google.com>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@redhat.com>,
Yang Shi <shy828301@gmail.com>
Subject: Re: [PATCH 2/5] mm: vmscan: make memcg slab shrink lockless
Date: Thu, 23 Feb 2023 12:36:00 +0800
Message-ID: <46020fa1-d55b-c719-3bde-df66c93cd0d0@bytedance.com>
In-Reply-To: <715594a8-1eca-7f80-adc0-3655153adffa@ya.ru>
On 2023/2/23 03:58, Kirill Tkhai wrote:
> On 22.02.2023 10:32, Qi Zheng wrote:
>>
>>
>> On 2023/2/22 05:28, Kirill Tkhai wrote:
>>> On 20.02.2023 12:16, Qi Zheng wrote:
>> <...>
>>>> void reparent_shrinker_deferred(struct mem_cgroup *memcg)
>>>> {
>>>> - int i, nid;
>>>> + int i, nid, srcu_idx;
>>>> long nr;
>>>> struct mem_cgroup *parent;
>>>> struct shrinker_info *child_info, *parent_info;
>>>> @@ -429,16 +443,16 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
>>>> parent = root_mem_cgroup;
>>>> /* Prevent from concurrent shrinker_info expand */
>>>> - down_read(&shrinker_rwsem);
>>>> + srcu_idx = srcu_read_lock(&shrinker_srcu);
>>>
>>> Don't we still have to be protected against parallel expand_one_shrinker_info()?
>>>
>>> It looks like the parent->nodeinfo[nid]->shrinker_info pointer may be
>>> changed in expand* right after we've dereferenced it here.
>>
>> Hi Kirill,
>>
>> Oh, indeed. We may wrongly reparent the child's nr_deferred into the
>> old parent's shrinker_info (which is about to be freed by call_srcu()),
>> so those deferred counts would be lost.
>>
>> reparent_shrinker_deferred() is only called on the memcg offline path
>> (not a hot path), so we may be able to use the shrinker_mutex
>> introduced later in this series for protection instead. What do you
>> think?
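
Roughly, the window is (a sketch, not the exact code):

  reparent_shrinker_deferred()                 expand_one_shrinker_info()
  ----------------------------                 --------------------------
  srcu_idx = srcu_read_lock(&shrinker_srcu);
  parent_info = shrinker_info_srcu(parent, nid);
                                               new = kvmalloc(...);
                                               /* old counts copied to new */
                                               rcu_assign_pointer(..., new);
                                               call_srcu(&shrinker_srcu,
                                                         &old->rcu, ...);
  atomic_long_add(nr, &parent_info->nr_deferred[i]);
  /* added to the old info, which is freed after
     the SRCU grace period, so the counts are lost */
  srcu_read_unlock(&shrinker_srcu, srcu_idx);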
>
> It looks good to me. One more thing I'd analyze is whether we want to
> have two reparent_shrinker_deferred() calls executing in parallel.
I see that mem_cgroup_css_offline() is already protected by
cgroup_mutex, so maybe shrinker_mutex is enough here. :)
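
Something like the following sketch on top of this patch (shrinker_mutex
only appears in a later patch of this series, and the lockdep annotation
in shrinker_info_protected() would have to be switched to the mutex as
well, so treat the details as an assumption):

	void reparent_shrinker_deferred(struct mem_cgroup *memcg)
	{
		int i, nid;
		long nr;
		struct mem_cgroup *parent;
		struct shrinker_info *child_info, *parent_info;

		parent = parent_mem_cgroup(memcg);
		if (!parent)
			parent = root_mem_cgroup;

		/* Prevent from concurrent shrinker_info expand */
		mutex_lock(&shrinker_mutex);
		for_each_node(nid) {
			child_info = shrinker_info_protected(memcg, nid);
			parent_info = shrinker_info_protected(parent, nid);
			for (i = 0; i < shrinker_nr_max; i++) {
				nr = atomic_long_read(&child_info->nr_deferred[i]);
				atomic_long_add(nr, &parent_info->nr_deferred[i]);
			}
		}
		mutex_unlock(&shrinker_mutex);
	}

Since the offline path is already serialized by cgroup_mutex, the mutex
here only has to order reparenting against expand_one_shrinker_info().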
>
> Possibly, we should leave the rwsem there as it was used before...
>
>>>
>>>> for_each_node(nid) {
>>>> - child_info = shrinker_info_protected(memcg, nid);
>>>> - parent_info = shrinker_info_protected(parent, nid);
>>>> + child_info = shrinker_info_srcu(memcg, nid);
>>>> + parent_info = shrinker_info_srcu(parent, nid);
>>>> for (i = 0; i < shrinker_nr_max; i++) {
>>>> nr = atomic_long_read(&child_info->nr_deferred[i]);
>>>> atomic_long_add(nr, &parent_info->nr_deferred[i]);
>>>> }
>>>> }
>>>> - up_read(&shrinker_rwsem);
>>>> + srcu_read_unlock(&shrinker_srcu, srcu_idx);
>>>> }
>>>> static bool cgroup_reclaim(struct scan_control *sc)
>>>> @@ -891,15 +905,14 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>>>> {
>>>> struct shrinker_info *info;
>>>> unsigned long ret, freed = 0;
>>>> + int srcu_idx;
>>>> int i;
>>>> if (!mem_cgroup_online(memcg))
>>>> return 0;
>>>> - if (!down_read_trylock(&shrinker_rwsem))
>>>> - return 0;
>>>> -
>>>> - info = shrinker_info_protected(memcg, nid);
>>>> + srcu_idx = srcu_read_lock(&shrinker_srcu);
>>>> + info = shrinker_info_srcu(memcg, nid);
>>>> if (unlikely(!info))
>>>> goto unlock;
>>>> @@ -949,14 +962,9 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>>>> set_shrinker_bit(memcg, nid, i);
>>>> }
>>>> freed += ret;
>>>> -
>>>> - if (rwsem_is_contended(&shrinker_rwsem)) {
>>>> - freed = freed ? : 1;
>>>> - break;
>>>> - }
>>>> }
>>>> unlock:
>>>> - up_read(&shrinker_rwsem);
>>>> + srcu_read_unlock(&shrinker_srcu, srcu_idx);
>>>> return freed;
>>>> }
>>>> #else /* CONFIG_MEMCG */
>>>
>>
>
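
For completeness, shrinker_info_srcu() used above is meant as the
srcu_dereference() counterpart of shrinker_info_protected(); roughly
(a sketch, see the actual patch for the exact definition):

	static struct shrinker_info *shrinker_info_srcu(struct mem_cgroup *memcg,
							int nid)
	{
		return srcu_dereference(memcg->nodeinfo[nid]->shrinker_info,
					&shrinker_srcu);
	}

The SRCU read side keeps the old shrinker_info from being freed under
us, but, as discussed above, it does not prevent the pointer itself from
being switched, which is exactly the reparenting problem.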
--
Thanks,
Qi