public inbox for linux-mm@kvack.org
From: Qi Zheng <qi.zheng@linux.dev>
To: Usama Arif <usama.arif@linux.dev>
Cc: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
	roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, david@kernel.org,
	lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com,
	yosry.ahmed@linux.dev, imran.f.khan@oracle.com,
	kamalesh.babulal@oracle.com, axelrasmussen@google.com,
	yuanchu@google.com, weixugc@google.com,
	chenridong@huaweicloud.com, mkoutny@suse.com,
	akpm@linux-foundation.org, hamzamahfooz@linux.microsoft.com,
	apais@linux.microsoft.com, lance.yang@linux.dev, bhe@redhat.com,
	usamaarif642@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: Re: [PATCH v6 31/33] mm: memcontrol: convert objcg to be per-memcg per-node type
Date: Mon, 9 Mar 2026 10:59:18 +0800	[thread overview]
Message-ID: <d5eb31b8-a099-49bf-9b3e-09e525242968@linux.dev> (raw)
In-Reply-To: <898e8ca7-efcb-4bd7-8016-871b37be830e@linux.dev>



On 3/7/26 7:08 PM, Usama Arif wrote:
> 
> 
> On 07/03/2026 08:51, Qi Zheng wrote:
>> Hi Usama,
>>
>> On 3/7/26 4:29 AM, Usama Arif wrote:
>>> On Thu,  5 Mar 2026 19:52:49 +0800 Qi Zheng <qi.zheng@linux.dev> wrote:
>>>
>>>> From: Qi Zheng <zhengqi.arch@bytedance.com>
>>>>
>>>> Convert objcg to be a per-memcg per-node type, so that when reparenting
>>>> LRU folios later, we can hold the lru lock at the node level, thus
>>>> avoiding holding too many lru locks at once.
>>>>
>>>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>>>> Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
>>>> ---
>>>>    include/linux/memcontrol.h | 23 +++++------
>>>>    include/linux/sched.h      |  2 +-
>>>>    mm/memcontrol.c            | 79 +++++++++++++++++++++++---------------
>>>>    3 files changed, 62 insertions(+), 42 deletions(-)
>>>>
>>>
>>> [...]
>>>
>>>> @@ -4087,7 +4100,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
>>>>        xa_store(&mem_cgroup_private_ids, memcg->id.id, memcg, GFP_KERNEL);
>>>>          return 0;
>>>> -free_shrinker:
>>>> +free_objcg:
>>>> +    for_each_node(nid) {
>>>> +        struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
>>>> +
>>>> +        if (pn && pn->orig_objcg)
>>>> +            obj_cgroup_put(pn->orig_objcg);
>>>
>>> Is it possible that you might call obj_cgroup_put twice on the same cgroup?
>>
>> Oh, I think you are right. Here pn->orig_objcg was not reset to NULL, so
>> obj_cgroup_put() will be called in __mem_cgroup_free() again.
>>
>>>
>>> If css_create fails, css_free_rwork_fn is queued, which ends up calling
>>> mem_cgroup_css_free which calls obj_cgroup_put again?
>>>
>>> Maybe adding pn->orig_objcg = NULL over here after obj_cgroup_put
>>> is enough to prevent the double put from causing issues?
>>
>> Agree.
>>
>> Like this?
>>
> 
> Yes, the below looks good! Might be good to add a comment as well
> explaining why it is set to NULL.

OK, will add the following comment:

/*
  * Reset pn->orig_objcg to NULL to prevent obj_cgroup_put()
  * from being called again in __mem_cgroup_free().
  */

> 
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 992a3f5caa62b..e0795aec4356b 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -4140,8 +4140,10 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
>>          for_each_node(nid) {
>>                  struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
>>
>> -               if (pn && pn->orig_objcg)
>> +               if (pn && pn->orig_objcg) {
>>                          obj_cgroup_put(pn->orig_objcg);
>> +                       pn->orig_objcg = NULL;
>> +               }
>>          }
>>          free_shrinker_info(memcg);
>>   offline_kmem:
>>
>> If there are no problems, I will send a fix patch later.
>>
>> Thanks,
>> Qi
>>
>>>
>>>> +    }
>>>>        free_shrinker_info(memcg);
>>>>    offline_kmem:
>>>>        memcg_offline_kmem(memcg);
>>>> -- 
>>>> 2.20.1
>>>>
>>>>
>>
> 




Thread overview: 55+ messages
2026-03-05 11:52 [PATCH v6 00/33] Eliminate Dying Memory Cgroup Qi Zheng
2026-03-05 11:52 ` [PATCH v6 01/33] mm: memcontrol: remove dead code of checking parent memory cgroup Qi Zheng
2026-03-05 11:52 ` [PATCH v6 02/33] mm: workingset: use folio_lruvec() in workingset_refault() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 03/33] mm: rename unlock_page_lruvec_irq and its variants Qi Zheng
2026-03-05 11:52 ` [PATCH v6 04/33] mm: vmscan: prepare for the refactoring the move_folios_to_lru() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 05/33] mm: vmscan: refactor move_folios_to_lru() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 06/33] mm: memcontrol: allocate object cgroup for non-kmem case Qi Zheng
2026-03-05 11:52 ` [PATCH v6 07/33] mm: memcontrol: return root object cgroup for root memory cgroup Qi Zheng
2026-03-05 11:52 ` [PATCH v6 08/33] mm: memcontrol: prevent memory cgroup release in get_mem_cgroup_from_folio() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 09/33] buffer: prevent memory cgroup release in folio_alloc_buffers() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 10/33] writeback: prevent memory cgroup release in writeback module Qi Zheng
2026-03-05 11:52 ` [PATCH v6 11/33] mm: memcontrol: prevent memory cgroup release in count_memcg_folio_events() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 12/33] mm: page_io: prevent memory cgroup release in page_io module Qi Zheng
2026-03-05 11:52 ` [PATCH v6 13/33] mm: migrate: prevent memory cgroup release in folio_migrate_mapping() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 14/33] mm: mglru: prevent memory cgroup release in mglru Qi Zheng
2026-03-05 11:52 ` [PATCH v6 15/33] mm: memcontrol: prevent memory cgroup release in mem_cgroup_swap_full() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 16/33] mm: workingset: prevent memory cgroup release in lru_gen_eviction() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 17/33] mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 18/33] mm: zswap: prevent memory cgroup release in zswap_compress() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 19/33] mm: workingset: prevent lruvec release in workingset_refault() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 20/33] mm: zswap: prevent lruvec release in zswap_folio_swapin() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 21/33] mm: swap: prevent lruvec release in lru_gen_clear_refs() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 22/33] mm: workingset: prevent lruvec release in workingset_activation() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 23/33] mm: do not open-code lruvec lock Qi Zheng
2026-03-05 11:52 ` [PATCH v6 24/33] mm: memcontrol: prepare for reparenting LRU pages for " Qi Zheng
2026-03-05 11:52 ` [PATCH v6 25/33] mm: vmscan: prepare for reparenting traditional LRU folios Qi Zheng
2026-03-05 11:52 ` [PATCH v6 26/33] mm: vmscan: prepare for reparenting MGLRU folios Qi Zheng
2026-03-23 13:29   ` Harry Yoo (Oracle)
2026-03-24  2:46     ` Qi Zheng
2026-03-24 11:49   ` [PATCH] fix: " Qi Zheng
2026-03-25  0:28     ` Harry Yoo (Oracle)
2026-03-05 11:52 ` [PATCH v6 27/33] mm: memcontrol: refactor memcg_reparent_objcgs() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 28/33] mm: workingset: use lruvec_lru_size() to get the number of lru pages Qi Zheng
2026-03-05 11:52 ` [PATCH v6 29/33] mm: memcontrol: refactor mod_memcg_state() and mod_memcg_lruvec_state() Qi Zheng
2026-03-05 11:52 ` [PATCH v6 30/33] mm: memcontrol: prepare for reparenting non-hierarchical stats Qi Zheng
2026-03-13 16:22   ` Michal Koutný
2026-03-16  3:47     ` Qi Zheng
2026-03-23  7:53   ` Harry Yoo (Oracle)
2026-03-23  9:47     ` Qi Zheng
2026-03-23 12:25       ` Harry Yoo (Oracle)
2026-03-24  2:54         ` Qi Zheng
2026-03-24  4:05           ` Harry Yoo (Oracle)
2026-03-24  4:25             ` Qi Zheng
2026-03-24  4:40               ` Harry Yoo (Oracle)
2026-03-05 11:52 ` [PATCH v6 31/33] mm: memcontrol: convert objcg to be per-memcg per-node type Qi Zheng
2026-03-06 20:29   ` Usama Arif
2026-03-07  8:51     ` Qi Zheng
2026-03-07 11:08       ` Usama Arif
2026-03-09  2:59         ` Qi Zheng [this message]
2026-03-09 11:29   ` [PATCH] fix: " Qi Zheng
2026-03-09 11:33     ` Usama Arif
2026-03-09 11:43       ` Qi Zheng
2026-03-05 11:52 ` [PATCH v6 32/33] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios Qi Zheng
2026-03-05 11:52 ` [PATCH v6 33/33] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance helpers Qi Zheng
2026-03-06  0:51 ` [PATCH v6 00/33] Eliminate Dying Memory Cgroup Andrew Morton
