Subject: Re: [PATCH v4 30/31] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios
From: Qi Zheng
Date: Tue, 10 Feb 2026 11:11:47 +0800
Message-ID: <37b79f65-7de3-483a-a675-75eab94d2776@linux.dev>
To: Shakeel Butt
Cc: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, muchun.song@linux.dev, david@kernel.org, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, yosry.ahmed@linux.dev, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, chenridong@huaweicloud.com, mkoutny@suse.com, akpm@linux-foundation.org, hamzamahfooz@linux.microsoft.com, apais@linux.microsoft.com, lance.yang@linux.dev, bhe@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song, Qi Zheng
References: <9e332cc8436b6092dd6ef9c2d5f69072bb38eaf6.1770279888.git.zhengqi.arch@bytedance.com> <2a0e4ae2-457b-4d16-a7b9-7372fd665337@linux.dev>
On 2/10/26 1:53 AM, Shakeel Butt wrote:
> On Mon, Feb 09, 2026 at 11:49:43AM +0800, Qi Zheng wrote:
>>
>> On 2/8/26 6:25 AM, Shakeel Butt wrote:
>>> On Thu, Feb 05, 2026 at 05:01:49PM +0800, Qi Zheng wrote:
>>>> From: Muchun Song
>>>>
>>>> Now that everything is set up, switch folio->memcg_data pointers to
>>>> objcgs, update the accessors, and execute reparenting on cgroup death.
>>>>
>>>> Finally, folio->memcg_data of LRU folios and kmem folios will always
>>>> point to an object cgroup pointer. The folio->memcg_data of slab
>>>> folios will point to a vector of object cgroups.
>>>>
>>>> Signed-off-by: Muchun Song
>>>> Signed-off-by: Qi Zheng
>>>>
>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>> index e7d4e4ff411b6..0e0efaa511d3d 100644
>>>> --- a/mm/memcontrol.c
>>>> +++ b/mm/memcontrol.c
>>>> @@ -247,11 +247,25 @@ static inline void reparent_state_local(struct mem_cgroup *memcg, struct mem_cgr
>>>>  static inline void reparent_locks(struct mem_cgroup *memcg, struct mem_cgroup *parent)
>>>>  {
>>>> +	int nid, nest = 0;
>>>> +
>>>>  	spin_lock_irq(&objcg_lock);
>>>> +	for_each_node(nid) {
>>>> +		spin_lock_nested(&mem_cgroup_lruvec(memcg,
>>>> +				 NODE_DATA(nid))->lru_lock, nest++);
>>>> +		spin_lock_nested(&mem_cgroup_lruvec(parent,
>>>> +				 NODE_DATA(nid))->lru_lock, nest++);
>>>
>>> Is there a reason to acquire the locks for all the nodes together? Why
>>> not do the for_each_node(nid) in memcg_reparent_objcgs() and then
>>> reparent the LRUs for each node one by one, taking and releasing the
>>> locks individually? Though the lock for the offlining memcg might not be
>>
>> To do this, we first need to convert objcg from per-memcg to per-memcg
>> per-node. In this way, we can hold the lru lock and objcg lock for
>> each node to reparent the folios and the corresponding objcg together.
>
> Oh, we want the reparenting of both objcg and folio to be atomic.

Right.

> Let's add a comment here with the explanation.

OK, will do this refactoring and send v5.

Thanks,
Qi
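
[Editor's note: as a concrete illustration of the per-node refactoring agreed on above, here is a minimal sketch. It assumes, per the discussion, that objcg state has been converted from per-memcg to per-memcg per-node; objcg_nid_lock(), memcg_reparent_lru_folios(), and memcg_reparent_objcg() are hypothetical names, not functions from the patch. Only memcg_reparent_objcgs() is a name taken from the thread itself.]

/*
 * Sketch only, assuming objcg becomes per-memcg per-node. Each
 * iteration takes the per-node objcg lock plus the two lru_locks,
 * so the LRU folios on this node and their objcg are reparented
 * atomically, and all locks are dropped before the next node.
 */
static void memcg_reparent_node(struct mem_cgroup *memcg,
				struct mem_cgroup *parent, int nid)
{
	struct lruvec *src = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
	struct lruvec *dst = mem_cgroup_lruvec(parent, NODE_DATA(nid));

	spin_lock_irq(objcg_nid_lock(memcg, nid));	/* hypothetical per-node lock */
	spin_lock_nested(&src->lru_lock, 0);
	spin_lock_nested(&dst->lru_lock, 1);

	memcg_reparent_lru_folios(src, dst);		/* hypothetical helper */
	memcg_reparent_objcg(memcg, parent, nid);	/* hypothetical helper */

	spin_unlock(&dst->lru_lock);
	spin_unlock(&src->lru_lock);
	spin_unlock_irq(objcg_nid_lock(memcg, nid));
}

static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
				  struct mem_cgroup *parent)
{
	int nid;

	for_each_node(nid)
		memcg_reparent_node(memcg, parent, nid);
}

[A side effect worth noting: with per-node acquisition the lockdep nesting depth stays bounded at two regardless of node count, whereas an unbounded nest++ across all nodes, as in the v4 hunk, would run into lockdep's eight-subclass limit on machines with more than a few nodes.]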