From: Wei Yang <richard.weiyang@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: linmiaohe@huawei.com, david@redhat.com, jane.chu@oracle.com,
kernel@pankajraghav.com, akpm@linux-foundation.org,
mcgrof@kernel.org, nao.horiguchi@gmail.com,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Wei Yang <richard.weiyang@gmail.com>,
Yang Shi <shy828301@gmail.com>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH v5 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Date: Fri, 31 Oct 2025 23:36:10 +0000
Message-ID: <20251031233610.ftpqyeosb4cedwtp@master>
In-Reply-To: <20251031162001.670503-4-ziy@nvidia.com>
On Fri, Oct 31, 2025 at 12:20:01PM -0400, Zi Yan wrote:
[...]
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 0e24bb7e90d0..ad2fc52651a6 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> ClearPageCompound(&folio->page);
> }
>
>-/*
>- * It splits an unmapped @folio to lower order smaller folios in two ways.
>+/**
>+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>+ * two ways: uniform split or non-uniform split.
> * @folio: the to-be-split folio
> * @new_order: the smallest order of the after split folios (since buddy
> * allocator like split generates folios with orders from @folio's
>@@ -3589,22 +3590,22 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> * uniform_split is false.
> *
> * The high level flow for these two methods are:
>- * 1. uniform split: a single __split_folio_to_order() is called to split the
>- * @folio into @new_order, then we traverse all the resulting folios one by
>- * one in PFN ascending order and perform stats, unfreeze, adding to list,
>- * and file mapping index operations.
>- * 2. non-uniform split: in general, folio_order - @new_order calls to
>- * __split_folio_to_order() are made in a for loop to split the @folio
>- * to one lower order at a time. The resulting small folios are processed
>- * like what is done during the traversal in 1, except the one containing
>- * @page, which is split in next for loop.
>+ * 1. uniform split: @xas is split with no expectation of failure and a single
>+ * __split_folio_to_order() is called to split the @folio into @new_order
>+ * along with stats update.
>+ * 2. non-uniform split: folio_order - @new_order calls to
>+ * __split_folio_to_order() are expected to be made in a for loop to split
>+ * the @folio to one lower order at a time. The folio containing @page is
Hope this is not annoying.
The parameter's name is @split_at; maybe we misuse it here?
s/containing @page/containing @split_at/
>+ * split in each iteration. @xas is split into half in each iteration and
>+ * can fail. A failed @xas split leaves split folios as is without merging
>+ * them back.
> *
> * After splitting, the caller's folio reference will be transferred to the
> * folio containing @page. The caller needs to unlock and/or free after-split
Same as above.
And there is probably another occurrence above this comment (not shown here).
> * folios if necessary.
> *
>- * For !uniform_split, when -ENOMEM is returned, the original folio might be
>- * split. The caller needs to check the input folio.
>+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>+ * split but not to @new_order, the caller needs to check)
> */
> static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct page *split_at, struct xa_state *xas,
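
As an aside, a minimal sketch of how I read the non-uniform split flow
described in the kernel-doc above; xas_split_step() is a hypothetical
stand-in for the real XArray handling, and locking/stats are omitted,
so this is not the actual __split_unmapped_folio() body:

static int nonuniform_split_sketch(struct folio *folio, int new_order,
				   struct page *split_at,
				   struct xa_state *xas, bool is_anon)
{
	int order;

	/* One __split_folio_to_order() call per order step. */
	for (order = folio_order(folio) - 1; order >= new_order; order--) {
		/*
		 * @xas is split into half each iteration and can fail;
		 * a failure leaves the already-split folios as is.
		 */
		if (!is_anon && xas_split_step(xas, order))
			return -ENOMEM;

		__split_folio_to_order(folio, order + 1, order);
		/* Keep splitting the folio that contains @split_at. */
		folio = page_folio(split_at);
	}

	return 0;
}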
--
Wei Yang
Help you, Help me