From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 9 Mar 2026 10:59:18 +0800
Subject: Re: [PATCH v6 31/33] mm: memcontrol: convert objcg to be per-memcg per-node type
From: Qi Zheng
To: Usama Arif
Cc: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
 roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
 david@kernel.org, lorenzo.stoakes@oracle.com, ziy@nvidia.com,
 harry.yoo@oracle.com, yosry.ahmed@linux.dev, imran.f.khan@oracle.com,
 kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com,
 weixugc@google.com, chenridong@huaweicloud.com, mkoutny@suse.com,
 akpm@linux-foundation.org, hamzamahfooz@linux.microsoft.com,
 apais@linux.microsoft.com, lance.yang@linux.dev, bhe@redhat.com,
 usamaarif642@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, Qi Zheng
References: <20260306202931.3878822-1-usama.arif@linux.dev> <898e8ca7-efcb-4bd7-8016-871b37be830e@linux.dev>
In-Reply-To: <898e8ca7-efcb-4bd7-8016-871b37be830e@linux.dev>
X-Mailing-List: cgroups@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 3/7/26 7:08 PM, Usama Arif wrote:
>
>
> On 07/03/2026 08:51, Qi Zheng wrote:
>> Hi Usama,
>>
>> On 3/7/26 4:29 AM, Usama Arif wrote:
>>> On Thu,  5 Mar 2026 19:52:49 +0800 Qi Zheng wrote:
>>>
>>>> From: Qi Zheng
>>>>
>>>> Convert objcg to be a per-memcg per-node type, so that when reparenting
>>>> LRU folios later, we can hold the lru lock at the node level, thus
>>>> avoiding holding too many lru locks at once.
>>>>
>>>> Signed-off-by: Qi Zheng
>>>> Acked-by: Shakeel Butt
>>>> ---
>>>>  include/linux/memcontrol.h | 23 +++++------
>>>>  include/linux/sched.h      |  2 +-
>>>>  mm/memcontrol.c            | 79 +++++++++++++++++++++++---------------
>>>>  3 files changed, 62 insertions(+), 42 deletions(-)
>>>>
>>>
>>> [...]
>>>
>>>> @@ -4087,7 +4100,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
>>>>      xa_store(&mem_cgroup_private_ids, memcg->id.id, memcg, GFP_KERNEL);
>>>>
>>>>      return 0;
>>>> -free_shrinker:
>>>> +free_objcg:
>>>> +    for_each_node(nid) {
>>>> +        struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
>>>> +
>>>> +        if (pn && pn->orig_objcg)
>>>> +            obj_cgroup_put(pn->orig_objcg);
>>>
>>> Is it possible that you might call obj_cgroup_put twice on the same cgroup?
>>
>> Oh, I think you are right. Here pn->orig_objcg was not reset to NULL, so
>> obj_cgroup_put() will be called in __mem_cgroup_free() again.
>>
>>>
>>> If css_create fails, css_free_rwork_fn is queued, which ends up calling
>>> mem_cgroup_css_free, which calls obj_cgroup_put again?
>>>
>>> Maybe adding pn->orig_objcg = NULL over here after obj_cgroup_put
>>> is enough to prevent the double put from causing issues?
>>
>> Agree.
>>
>> Like this?
>>
>
> Yes, the below looks good! Might be good to add a comment as well on why
> it is set to NULL.

OK, will add the following comment:

	/*
	 * Reset pn->orig_objcg to NULL to prevent obj_cgroup_put()
	 * from being called again in __mem_cgroup_free().
	 */

>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 992a3f5caa62b..e0795aec4356b 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -4140,8 +4140,10 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
>>
>>         for_each_node(nid) {
>>                 struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
>>
>> -               if (pn && pn->orig_objcg)
>> +               if (pn && pn->orig_objcg) {
>>                         obj_cgroup_put(pn->orig_objcg);
>> +                       pn->orig_objcg = NULL;
>> +               }
>>         }
>>         free_shrinker_info(memcg);
>>  offline_kmem:
>>
>> If there are no problems, I will send a fix patch later.
>>
>> Thanks,
>> Qi
>>
>>>
>>>> +    }
>>>>      free_shrinker_info(memcg);
>>>>  offline_kmem:
>>>>      memcg_offline_kmem(memcg);
>>>> --
>>>> 2.20.1
>>>>
>>>>
>>
>