From: Zhao Li <enderaoelyther@gmail.com>
To: Oscar Salvador <osalvador@suse.de>
Cc: Zhao Li <enderaoelyther@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@kernel.org>,
Lance Yang <lance.yang@linux.dev>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/hugetlb: fix subpool accounting after cgroup charge failure
Date: Tue, 28 Apr 2026 19:41:27 +0800 [thread overview]
Message-ID: <20260428114126.92091-2-enderaoelyther@gmail.com> (raw)
In-Reply-To: <afB49BIqmLL2RKHM@localhost.localdomain>
On Tue, Apr 28, 2026 at 11:08:04AM +0200, Oscar Salvador wrote:
> I found that last sentence misleading, because we do not really
> care about hugetlb cgroup charge/uncharge (besides that being of
> the reasons we end up on error path) but rather the fact that we
> fiddle with subpool->used_hpages and we need to undo that when we
> rollback.
Agreed - reframed in v3. The commit body now states the bug as
the unwind missing the used_hpages rollback, without pinning it to
the cgroup-charge case, and the subject is narrowed to "fix
max-only subpool accounting on alloc_hugetlb_folio failure".
> Well, that does not quite explain the problem I think, at least
> not clear enough? [...]
Fair - that explanation got tangled because v2's design itself was
trying to compensate for racing min crossings. v3 sidesteps it
entirely: the gbl_chg > 0 cleanup is now restricted to
(max_hpages != -1, min_hpages == -1). In that configuration
hugepage_subpool_put_pages()'s min-restoration branch is dead, so a
direct used_hpages-- under spool->lock is the exact inverse of the
speculative bump - no h->resv_huge_pages++ needed, no rsv_hpages
publication, no racing-put reasoning.
Mounts with min_hpages != -1 are left at v1 behaviour for now.
That quadrant has an inherited race that also exists at
hugetlb_reserve_pages()'s out_put_pages cleanup, so a coordinated
fix belongs in a separate RFC rather than this stable backport.
> I would split the comment in two parts and place them within the
> block they belong, otherwise it sounds confusing.
>
> Subpools, reservations and hugetlb make a very head-spinning
> situation, so let us make our life easier.
Done - one short comment per branch placed inside the relevant
code block in v3. Hopefully easier to follow now.
v3:
https://lore.kernel.org/linux-mm/20260428113037.88766-2-enderaoelyther@gmail.com/
Thanks for the review.
--
Zhao Li
Thread overview:
2026-04-27 14:52 [PATCH] mm/hugetlb: fix subpool accounting after cgroup charge failure Catherine
2026-04-27 15:12 ` Andrew Morton
2026-04-27 15:19 ` Catherine
2026-04-27 21:12 ` Andrew Morton
2026-04-28 3:07 ` [PATCH v2] " Zhao Li
2026-04-28 9:08 ` Oscar Salvador
2026-04-28 11:30 ` Lance Yang
2026-04-28 11:41 ` Zhao Li
2026-04-28 11:41 ` Zhao Li [this message]
2026-04-28 11:30 ` [PATCH v3] mm/hugetlb: fix max-only subpool accounting on alloc_hugetlb_folio failure Zhao Li