From: Gang Li <gang.li@linux.dev>
To: Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <muchun.song@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Gang Li <ligang.bdlg@bytedance.com>
Subject: [RFC PATCH v1 1/4] hugetlb: code clean for hugetlb_hstate_alloc_pages
Date: Thu, 23 Nov 2023 21:30:33 +0800
Message-ID: <20231123133036.68540-2-gang.li@linux.dev>
In-Reply-To: <20231123133036.68540-1-gang.li@linux.dev>
From: Gang Li <ligang.bdlg@bytedance.com>
This patch focuses on cleaning up the code related to per-node allocation
and error reporting in the hugetlb allocation path:
- hugetlb_hstate_alloc_pages_node_specific() iterates through each online
  node and performs the allocation if necessary.
- hugetlb_hstate_alloc_pages_report() reports any allocation shortfall and
  updates h->max_huge_pages accordingly.
This patch has no functional changes.
Signed-off-by: Gang Li <ligang.bdlg@bytedance.com>
---
mm/hugetlb.c | 46 +++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)
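Note for reviewers (not part of the commit message): below is a minimal,
self-contained userspace sketch of the refactoring pattern this patch
applies, so the intent of the two new helpers can be read without the
surrounding hugetlb context. All names here (struct hstate_mock,
alloc_pages_node_specific(), alloc_pages_report(), NR_NODES) are
hypothetical stand-ins for illustration only; the real code is in the
hunks that follow.

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

/* Hypothetical stand-in for the struct hstate fields this patch touches. */
struct hstate_mock {
	unsigned long max_huge_pages;
	unsigned long max_huge_pages_node[NR_NODES];
};

static void alloc_pages_onenode(struct hstate_mock *h, int nid)
{
	/* Placeholder for the per-node allocation work. */
	printf("allocating %lu pages on node %d\n",
	       h->max_huge_pages_node[nid], nid);
}

/*
 * Mirrors hugetlb_hstate_alloc_pages_node_specific(): walk the nodes and
 * return true if any node had an explicit per-node request, so the caller
 * can return early and skip the balanced allocation path.
 */
static bool alloc_pages_node_specific(struct hstate_mock *h)
{
	bool node_specific_alloc = false;

	for (int nid = 0; nid < NR_NODES; nid++) {
		if (h->max_huge_pages_node[nid] > 0) {
			alloc_pages_onenode(h, nid);
			node_specific_alloc = true;
		}
	}
	return node_specific_alloc;
}

/*
 * Mirrors hugetlb_hstate_alloc_pages_report(): warn on a shortfall and
 * clamp the requested count to what was actually allocated.
 */
static void alloc_pages_report(unsigned long allocated, struct hstate_mock *h)
{
	if (allocated < h->max_huge_pages) {
		fprintf(stderr, "allocating %lu pages failed, only got %lu\n",
			h->max_huge_pages, allocated);
		h->max_huge_pages = allocated;
	}
}

int main(void)
{
	struct hstate_mock h = { .max_huge_pages = 8 };

	if (!alloc_pages_node_specific(&h))	/* no per-node requests */
		alloc_pages_report(5, &h);	/* pretend only 5 succeeded */
	return 0;
}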
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c466551e2fd9..7af2ee08ad1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3482,6 +3482,33 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
h->max_huge_pages_node[nid] = i;
}
+static bool __init hugetlb_hstate_alloc_pages_node_specific(struct hstate *h)
+{
+ int i;
+ bool node_specific_alloc = false;
+
+ for_each_online_node(i) {
+ if (h->max_huge_pages_node[i] > 0) {
+ hugetlb_hstate_alloc_pages_onenode(h, i);
+ node_specific_alloc = true;
+ }
+ }
+
+ return node_specific_alloc;
+}
+
+static void __init hugetlb_hstate_alloc_pages_report(unsigned long allocated, struct hstate *h)
+{
+ if (allocated < h->max_huge_pages) {
+ char buf[32];
+
+ string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+ pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
+ h->max_huge_pages, buf, allocated);
+ h->max_huge_pages = allocated;
+ }
+}
+
/*
* NOTE: this routine is called in different contexts for gigantic and
* non-gigantic pages.
@@ -3499,7 +3526,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
struct folio *folio;
LIST_HEAD(folio_list);
nodemask_t *node_alloc_noretry;
- bool node_specific_alloc = false;
/* skip gigantic hugepages allocation if hugetlb_cma enabled */
if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3508,14 +3534,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
}
/* do node specific alloc */
- for_each_online_node(i) {
- if (h->max_huge_pages_node[i] > 0) {
- hugetlb_hstate_alloc_pages_onenode(h, i);
- node_specific_alloc = true;
- }
- }
-
- if (node_specific_alloc)
+ if (hugetlb_hstate_alloc_pages_node_specific(h))
return;
/* below will do all node balanced alloc */
@@ -3558,14 +3577,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
/* list will be empty if hstate_is_gigantic */
prep_and_add_allocated_folios(h, &folio_list);
- if (i < h->max_huge_pages) {
- char buf[32];
-
- string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
- pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
- h->max_huge_pages, buf, i);
- h->max_huge_pages = i;
- }
+ hugetlb_hstate_alloc_pages_report(i, h);
kfree(node_alloc_noretry);
}
--
2.20.1
Thread overview: 14+ messages
2023-11-23 13:30 [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot Gang Li
2023-11-23 13:30 ` Gang Li [this message]
2023-11-23 13:30 ` [RFC PATCH v1 2/4] hugetlb: split hugetlb_hstate_alloc_pages Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 3/4] hugetlb: add timing to hugetlb allocations on boot Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 4/4] hugetlb: parallelize hugetlb page allocation Gang Li
2023-11-23 13:58 ` [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot Gang Li
2023-11-23 14:10 ` David Hildenbrand
2023-11-24 19:44 ` David Rientjes
2023-11-24 19:47 ` David Hildenbrand
2023-11-24 20:00 ` David Rientjes
2023-11-28 3:18 ` Gang Li
2023-11-28 6:52 ` Gang Li
2023-11-28 8:09 ` David Hildenbrand
2023-11-29 19:41 ` David Rientjes