Message-ID: <3591c663-a4a9-4c22-97cf-b58b2e7d8a41@linux.dev>
Date: Mon, 27 Apr 2026 15:24:10 +0800
Subject: Re: [syzbot] [mm?] WARNING: bad unlock balance in do_wp_page
To: Andrew Morton
Cc: shakeel.butt@linux.dev, syzbot, Liam.Howlett@oracle.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, ljs@kernel.org,
 surenb@google.com, syzkaller-bugs@googlegroups.com, vbabka@kernel.org,
 Muchun Song
References: <69edca15.170a0220.38e3f1.0000.GAE@google.com>
 <20260426034938.db29d74982a8eb8463f8cf3a@linux-foundation.org>
 <20260426105532.43768b24a42744f1b52fdff2@linux-foundation.org>
X-Mailing-List: linux-kernel@vger.kernel.org
From: Qi Zheng
In-Reply-To: <20260426105532.43768b24a42744f1b52fdff2@linux-foundation.org>

On 4/27/26 1:55 AM, Andrew Morton wrote:
> On Sun, 26 Apr 2026 23:57:42 +0800 Qi Zheng wrote:
>
>> Hi Andrew,
>>
>> On 4/26/26 6:49 PM, Andrew Morton wrote:
>>> On Sun, 26 Apr 2026 01:17:25 -0700 syzbot wrote:
>>>
>>>> Hello,
>>>>
>>>> syzbot found the following issue on:
>>>>
>>>> HEAD commit:    6596a02b2078 Merge tag 'drm-next-2026-04-22' of https://gi..
>>>> git tree:       upstream
>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=12483702580000
>>>> kernel config:  https://syzkaller.appspot.com/x/.config?x=24c8da4692f901cb
>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=7d60b33a8a546263da7c
>>>> compiler:       gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44
>>>> userspace arch: i386
>>>>
>>>> Unfortunately, I don't have any reproducer for this issue yet.
>>>
>>> argh, that dreaded sentence.
>>>
>>> Thanks.
>>>
>>> Something's definitely amiss.  This is at least the fifth report of
>>> rcu_read_lock() imbalance post-7.0.  Others:
>>>
>>> https://lore.kernel.org/69eab803.a00a0220.17a17.004a.GAE@google.com
>>> https://lore.kernel.org/69eab803.a00a0220.17a17.004b.GAE@google.com
>>> https://lore.kernel.org/69eafb0e.a00a0220.9259.0031.GAE@google.com
>>> https://lore.kernel.org/69ebcbe2.a00a0220.7773.0005.GAE@google.com
>>
>> All the kernel configs mentioned above include 'CONFIG_MEMCG_V1=y'.
>>
>> Theoretically, rebind_subsystems() can lead to an RCU imbalance; see
>> my previous discussion with Shakeel for details:
>>
>> https://lore.kernel.org/all/358c60e1-fa91-40a1-9e00-84c93340c04e@linux.dev/
>
> Right, that looks similar.
>
> The rcu locking under lruvec_stat_mod_folio() is very simple, and that
> return in get_non_dying_memcg_end() does look super suspicious.  Why
> does it omit the unlock?
>
> otoh, in
> https://lore.kernel.org/all/69eafb0e.a00a0220.9259.0031.GAE@google.com/
> we're trying to release an rcu_read_lock() which isn't presently held.
> But if cgroup_subsys_on_dfl() were to become false between the
> get_non_dying_memcg_start/end pair, that's what would happen.
>
> So yup, I agree, concurrent rebind_subsystems() activity could cause
> all of this.  The reports are pretty common - is there some debugging
> patch we can temporarily add to confirm this theory?  And/or is it
> possible to cook up a selftest which will trigger this?

I've been trying to reproduce this locally, but unfortunately I haven't
succeeded yet.

>
>> However, in a production environment, this is practically impossible.
>
> Can you expand on this?
>
> syzbot isn't a production environment ;)

Rebinding only works when the hierarchy is completely empty, which is
generally not the case in a production environment (e.g. when systemd
is used).

BTW, it seems rebinding is about to be deprecated:

cgroup1_reconfigure
--> pr_warn("option changes via remount are deprecated (pid=%d comm=%s)\n",
	    task_tgid_nr(current), current->comm);

Also, it appears the current memcg subsystem assumes that
cgroup_subsys_on_dfl(memory_cgrp_subsys) cannot change at runtime.
(Please correct me if I missed anything.)

If we can get a reproducer, we can try the following fix, or simply
drop rebinding altogether?
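To spell out the interleaving I suspect (this is my reading of the
helpers, not verified with a reproducer; the opposite on_dfl
transition would give the "unlock without lock" variant syzbot also
reported):

	CPU 0 (stat update)			CPU 1 (remount)
	-------------------			---------------
	get_non_dying_memcg_start()
	  cgroup_subsys_on_dfl() == false
	  rcu_read_lock()
						rebind_subsystems()
						/* memory controller is
						   rebound, on_dfl flips
						   to true */
	__mod_memcg_state()
	get_non_dying_memcg_end()
	  cgroup_subsys_on_dfl() == true
	    return;	/* rcu_read_lock() is never released */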
From 6ae41b91339625dd7bf0f819f775f26e78171a73 Mon Sep 17 00:00:00 2001
From: Qi Zheng
Date: Mon, 27 Apr 2026 11:20:21 +0800
Subject: [PATCH] mm: memcontrol: fix RCU imbalance in
 get_non_dying_memcg_end()

Signed-off-by: Qi Zheng
---
 mm/memcontrol.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c3d98ab41f1f1..46ff40faf295a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -805,10 +805,15 @@ static long memcg_state_val_in_pages(int idx, long val)
  * Used in mod_memcg_state() and mod_memcg_lruvec_state() to avoid race with
  * reparenting of non-hierarchical state_locals.
  */
-static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg)
+static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg,
+							    bool *locked)
 {
-	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+		*locked = false;
 		return memcg;
+	}
+
+	*locked = true;
 
 	rcu_read_lock();
 
@@ -818,20 +823,22 @@ static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *me
 	return memcg;
 }
 
-static inline void get_non_dying_memcg_end(void)
+static inline void get_non_dying_memcg_end(bool rcu_locked)
 {
-	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	if (!rcu_locked)
 		return;
 
 	rcu_read_unlock();
 }
 #else
-static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg)
+static inline struct mem_cgroup *get_non_dying_memcg_start(struct mem_cgroup *memcg,
+							    bool *locked)
 {
+	*locked = false;
 	return memcg;
 }
 
-static inline void get_non_dying_memcg_end(void)
+static inline void get_non_dying_memcg_end(bool rcu_locked)
 {
 }
 #endif
@@ -865,12 +872,14 @@ static void __mod_memcg_state(struct mem_cgroup *memcg,
 void mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
 		     int val)
 {
+	bool locked;
+
 	if (mem_cgroup_disabled())
 		return;
 
-	memcg = get_non_dying_memcg_start(memcg);
+	memcg = get_non_dying_memcg_start(memcg, &locked);
 	__mod_memcg_state(memcg, idx, val);
-	get_non_dying_memcg_end();
+	get_non_dying_memcg_end(locked);
 }
 
 #ifdef CONFIG_MEMCG_V1
@@ -933,14 +942,15 @@ static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
+	bool locked;
 
 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
-	memcg = get_non_dying_memcg_start(pn->memcg);
+	memcg = get_non_dying_memcg_start(pn->memcg, &locked);
 	pn = memcg->nodeinfo[pgdat->node_id];
 	__mod_memcg_lruvec_state(pn, idx, val);
-	get_non_dying_memcg_end();
+	get_non_dying_memcg_end(locked);
 }
 
 /**
-- 
2.20.1

Thanks,
Qi

>
>> So Shakeel and I chose to wait for a reproducer at the time. :(
>>
>>>
>>> In some cases we released it too often, in other cases we failed to
>>> release it.
>>>
>>> The first one is slightly more useful in that it tells us that the
>>> not-released rcu_read_lock() was taken in folio_lruvec_lock_irqsave().
>>
>> I double-checked some callers of folio_lruvec_lock_irqsave() (such as
>> folios_put_refs()), but didn't find anything suspicious. :(
>
> Right - it's rare and smells of a race condition.
>
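P.S. Re the temporary debugging patch Andrew asked about: the simplest
option I can think of is to log memory-controller rebinds so they can
be correlated with the lockdep splats. Untested sketch, and it assumes
rebind_subsystems() in kernel/cgroup/cgroup.c still receives the
controller bitmask as ss_mask:

	/* in rebind_subsystems(), before the controller is moved */
	if (ss_mask & (1 << memory_cgrp_id))
		pr_warn("memcg: rebinding memory controller (pid=%d comm=%s)\n",
			task_tgid_nr(current), current->comm);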