From: jane.chu@oracle.com
To: Kefeng Wang <wangkefeng.wang@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Muchun Song <muchun.song@linux.dev>
Cc: sidhartha.kumar@oracle.com, Zi Yan <ziy@nvidia.com>,
Vlastimil Babka <vbabka@suse.cz>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-mm@kvack.org
Subject: Re: [PATCH v2 8/9] mm: cma: add __cma_release()
Date: Mon, 8 Sep 2025 17:15:36 -0700 [thread overview]
Message-ID: <2501aa93-4e4c-4bb5-b3f8-a6259d4e24a1@oracle.com> (raw)
In-Reply-To: <20250902124820.3081488-9-wangkefeng.wang@huawei.com>
On 9/2/2025 5:48 AM, Kefeng Wang wrote:
> Kill cma_pages_valid() which only used in cma_release(), also
> cleanup code duplication between cma pages valid checking and
> cma memrange finding, add __cma_release() helper to prepare for
> the upcoming frozen page release.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> include/linux/cma.h | 1 -
> mm/cma.c | 57 ++++++++++++---------------------------------
> 2 files changed, 15 insertions(+), 43 deletions(-)
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 62d9c1cf6326..e5745d2aec55 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -49,7 +49,6 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> struct cma **res_cma);
> extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
> bool no_warn);
> -extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
> extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>
> extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> diff --git a/mm/cma.c b/mm/cma.c
> index 3f3c96be67f7..b4413e382d5d 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -936,34 +936,36 @@ struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp)
> return page ? page_folio(page) : NULL;
> }
>
> -bool cma_pages_valid(struct cma *cma, const struct page *pages,
> - unsigned long count)
> +static bool __cma_release(struct cma *cma, const struct page *pages,
> + unsigned long count)
> {
> unsigned long pfn, end;
> int r;
> struct cma_memrange *cmr;
> - bool ret;
> +
> + pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>
> if (!cma || !pages || count > cma->count)
> return false;
>
> pfn = page_to_pfn(pages);
> - ret = false;
>
> for (r = 0; r < cma->nranges; r++) {
> cmr = &cma->ranges[r];
> end = cmr->base_pfn + cmr->count;
> - if (pfn >= cmr->base_pfn && pfn < end) {
> - ret = pfn + count <= end;
> + if (pfn >= cmr->base_pfn && pfn < end && pfn + count <= end)
> break;
> - }
> }
The only behavioral difference from the previous code is the now-missing
VM_BUG_ON for a given range that stretches across CMA range boundaries.
That warning was introduced by
commit c64be2bb1c6eb ("drivers: add Contiguous Memory Allocator")
about 15 years ago as a caution only.
I am okay with just returning false in that potential error case.
>
> - if (!ret)
> - pr_debug("%s(page %p, count %lu)\n",
> - __func__, (void *)pages, count);
> + if (r == cma->nranges)
> + return false;
>
> - return ret;
> + free_contig_range(pfn, count);
> + cma_clear_bitmap(cma, cmr, pfn, count);
> + cma_sysfs_account_release_pages(cma, count);
> + trace_cma_release(cma->name, pfn, pages, count);
> +
> + return true;
> }
>
> /**
> @@ -979,36 +981,7 @@ bool cma_pages_valid(struct cma *cma, const struct page *pages,
> bool cma_release(struct cma *cma, const struct page *pages,
> unsigned long count)
> {
> - struct cma_memrange *cmr;
> - unsigned long pfn, end_pfn;
> - int r;
> -
> - pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
> -
> - if (!cma_pages_valid(cma, pages, count))
> - return false;
> -
> - pfn = page_to_pfn(pages);
> - end_pfn = pfn + count;
> -
> - for (r = 0; r < cma->nranges; r++) {
> - cmr = &cma->ranges[r];
> - if (pfn >= cmr->base_pfn &&
> - pfn < (cmr->base_pfn + cmr->count)) {
> - VM_BUG_ON(end_pfn > cmr->base_pfn + cmr->count);
> - break;
> - }
> - }
> -
> - if (r == cma->nranges)
> - return false;
> -
> - free_contig_range(pfn, count);
> - cma_clear_bitmap(cma, cmr, pfn, count);
> - cma_sysfs_account_release_pages(cma, count);
> - trace_cma_release(cma->name, pfn, pages, count);
> -
> - return true;
> + return __cma_release(cma, pages, count);
> }
>
> bool cma_free_folio(struct cma *cma, const struct folio *folio)
> @@ -1016,7 +989,7 @@ bool cma_free_folio(struct cma *cma, const struct folio *folio)
> if (WARN_ON(!folio_test_large(folio)))
> return false;
>
> - return cma_release(cma, &folio->page, folio_nr_pages(folio));
> + return __cma_release(cma, &folio->page, folio_nr_pages(folio));
> }
>
> int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
Nice cleanup.
Reviewed-by: Jane Chu <jane.chu@oracle.com>
-jane