From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 25 Apr 2026 07:57:53 -0700
From: Andrew Morton
To: Deepanshu Kartikey, muchun.song@linux.dev, osalvador@suse.de,
	david@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	syzbot+226c1f947186f8fef796@syzkaller.appspotmail.com, Mina Almasry
Subject: Re: [PATCH] mm/hugetlb: fix hugetlb cgroup rsvd charge/uncharge mismatch
Message-Id: <20260425075753.eab83d221fd6ad59241e0f1d@linux-foundation.org>
In-Reply-To: <20260330131525.630b8ff8913ade1e0e5c2054@linux-foundation.org>
References: <20260328065534.346053-1-kartikey406@gmail.com>
	<20260330131525.630b8ff8913ade1e0e5c2054@linux-foundation.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 30 Mar 2026 13:15:25 -0700 Andrew Morton wrote:

> On Sat, 28 Mar 2026 12:25:34 +0530 Deepanshu Kartikey wrote:
> 
> > In alloc_hugetlb_folio(), a single h_cg pointer is used for both
> > the rsvd and non-rsvd hugetlb cgroup charges. When map_chg is set,
> > hugetlb_cgroup_charge_cgroup_rsvd() stores the charged cgroup in
> > h_cg, but the immediately following hugetlb_cgroup_charge_cgroup()
> > overwrites h_cg with the non-rsvd cgroup pointer.
> >
> > As a result, hugetlb_cgroup_commit_charge_rsvd() stores the wrong
> > (non-rsvd) cgroup pointer into the folio's rsvd slot.
> >
> > When the folio is later freed, free_huge_folio() unconditionally
> > calls both hugetlb_cgroup_uncharge_folio() and
> > hugetlb_cgroup_uncharge_folio_rsvd().
> > The rsvd uncharge reads back
> > the wrong cgroup from the folio and decrements a counter that was
> > never charged for that cgroup, causing a page_counter underflow:
> >
> >   page_counter underflow: -512 nr_pages=512
> >   WARNING: mm/page_counter.c:61 at page_counter_cancel
> >
> > Fix this by introducing a separate h_cg_rsvd pointer exclusively
> > for the rsvd charge path, keeping the rsvd and non-rsvd charges
> > fully independent through their charge, commit, and error uncharge
> > paths.
> 
> Thanks.
> 
> > Fixes: 08cf9faf7558 ("hugetlb_cgroup: support noreserve mappings")
> 
> Merged in 2020!
> 
> Could reviewers please give consideration to whether we should backport
> this?

OK, then ;) I'll queue this up and shall add the cc:stable - that
underflow warning needs to be addressed.

I'll add a needs-review note-to-self.


From: Deepanshu Kartikey
Subject: mm/hugetlb: fix hugetlb cgroup rsvd charge/uncharge mismatch
Date: Sat, 28 Mar 2026 12:25:34 +0530

In alloc_hugetlb_folio(), a single h_cg pointer is used for both the
rsvd and non-rsvd hugetlb cgroup charges.  When map_chg is set,
hugetlb_cgroup_charge_cgroup_rsvd() stores the charged cgroup in h_cg,
but the immediately following hugetlb_cgroup_charge_cgroup() overwrites
h_cg with the non-rsvd cgroup pointer.

As a result, hugetlb_cgroup_commit_charge_rsvd() stores the wrong
(non-rsvd) cgroup pointer into the folio's rsvd slot.

When the folio is later freed, free_huge_folio() unconditionally calls
both hugetlb_cgroup_uncharge_folio() and
hugetlb_cgroup_uncharge_folio_rsvd().
The rsvd uncharge reads back the wrong cgroup from the folio and
decrements a counter that was never charged for that cgroup, causing a
page_counter underflow:

  page_counter underflow: -512 nr_pages=512
  WARNING: mm/page_counter.c:61 at page_counter_cancel

Fix this by introducing a separate h_cg_rsvd pointer exclusively for
the rsvd charge path, keeping the rsvd and non-rsvd charges fully
independent through their charge, commit, and error uncharge paths.

Link: https://lore.kernel.org/20260328065534.346053-1-kartikey406@gmail.com
Fixes: 08cf9faf7558 ("hugetlb_cgroup: support noreserve mappings")
Reported-by: syzbot+226c1f947186f8fef796@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=226c1f947186f8fef796
Signed-off-by: Deepanshu Kartikey
Cc: David Hildenbrand
Cc: Muchun Song
Cc: Oscar Salvador
Cc: Mina Almasry
Cc:
Signed-off-by: Andrew Morton
---

 mm/hugetlb.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-hugetlb-cgroup-rsvd-charge-uncharge-mismatch
+++ a/mm/hugetlb.c
@@ -2879,6 +2879,7 @@ struct folio *alloc_hugetlb_folio(struct
 	map_chg_state map_chg;
 	int ret, idx;
 	struct hugetlb_cgroup *h_cg = NULL;
+	struct hugetlb_cgroup *h_cg_rsvd = NULL;
 	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
 
 	idx = hstate_index(h);
@@ -2929,7 +2930,7 @@ struct folio *alloc_hugetlb_folio(struct
 	 */
 	if (map_chg) {
 		ret = hugetlb_cgroup_charge_cgroup_rsvd(
-			idx, pages_per_huge_page(h), &h_cg);
+			idx, pages_per_huge_page(h), &h_cg_rsvd);
 		if (ret)
 			goto out_subpool_put;
 	}
@@ -2971,7 +2972,7 @@ struct folio *alloc_hugetlb_folio(struct
 	 */
 	if (map_chg) {
 		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
-						  h_cg, folio);
+						  h_cg_rsvd, folio);
 	}
 
 	spin_unlock_irq(&hugetlb_lock);
@@ -3023,7 +3024,7 @@ out_uncharge_cgroup:
 out_uncharge_cgroup_reservation:
 	if (map_chg)
 		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
-						    h_cg);
+						    h_cg_rsvd);
 out_subpool_put:
 	/*
	 * put page to subpool iff the quota of
	 * subpool's rsv_hpages is used _