From: Muchun Song <muchun.song@linux.dev>
To: Gang Li <gang.li@linux.dev>, David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>,
Tim Chen <tim.c.chen@linux.intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ligang.bdlg@bytedance.com
Subject: Re: [PATCH v4 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization
Date: Mon, 22 Jan 2024 15:10:07 +0800
Message-ID: <ddf37da4-4cbc-478a-be9b-3060b0aebc90@linux.dev>
In-Reply-To: <20240118123911.88833-7-gang.li@linux.dev>

On 2024/1/18 20:39, Gang Li wrote:
> By distributing both the allocation and the initialization tasks across
> multiple threads, the initialization of 2M hugetlb will be faster,
> thereby improving the boot speed.
>
> Here are some test results:
> test                no patch(ms)  patched(ms)  saved
> ------------------  ------------  -----------  ------
> 256c2t(4 node) 2M   3336          1051         68.52%
> 128c1t(2 node) 2M   1943          716          63.15%
>
> Signed-off-by: Gang Li <gang.li@linux.dev>
> Tested-by: David Rientjes <rientjes@google.com>
> ---
> mm/hugetlb.c | 70 ++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 52 insertions(+), 18 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index effe5539e545..9b348ba418f5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -35,6 +35,7 @@
> #include <linux/delayacct.h>
> #include <linux/memory.h>
> #include <linux/mm_inline.h>
> +#include <linux/padata.h>
>
> #include <asm/page.h>
> #include <asm/pgalloc.h>
> @@ -3510,43 +3511,76 @@ static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
>         }
> }
>
> -static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
> +static void __init hugetlb_alloc_node(unsigned long start, unsigned long end, void *arg)
> {
> -        unsigned long i;
> +        struct hstate *h = (struct hstate *)arg;
> +        int i, num = end - start;
> +        nodemask_t node_alloc_noretry;
> +        unsigned long flags;
> +        int next_node = 0;
This should be first_online_node, which may not be zero.
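i.e. something like (untested):

        int next_node = first_online_node;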
>
> -        for (i = 0; i < h->max_huge_pages; ++i) {
> -                if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
> +        /* Bit mask controlling how hard we retry per-node allocations.*/
> +        nodes_clear(node_alloc_noretry);
> +
> +        for (i = 0; i < num; ++i) {
> +                struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> +                                                            &node_alloc_noretry, &next_node);
> +                if (!folio)
>                         break;
> +                spin_lock_irqsave(&hugetlb_lock, flags);
I suspect there will be more contention on this lock when
parallelizing, since every folio now takes hugetlb_lock individually.
Why did you choose to drop the prep_and_add_allocated_folios() call
used in the original hugetlb_pages_alloc_boot()?
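Batching per worker would keep the lock traffic close to the original,
e.g. a rough sketch (untested, assuming prep_and_add_allocated_folios()
is safe to call from each padata worker):

        LIST_HEAD(folio_list);

        for (i = 0; i < num; ++i) {
                struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
                                                            &node_alloc_noretry, &next_node);
                if (!folio)
                        break;
                /* Collect locally; hugetlb_lock is then taken once per worker. */
                list_add(&folio->lru, &folio_list);
                cond_resched();
        }

        prep_and_add_allocated_folios(h, &folio_list);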
> +                __prep_account_new_huge_page(h, folio_nid(folio));
> +                enqueue_hugetlb_folio(h, folio);
> +                spin_unlock_irqrestore(&hugetlb_lock, flags);
>                 cond_resched();
>         }
> +}
>
> -        return i;
> +static void __init hugetlb_vmemmap_optimize_node(unsigned long start, unsigned long end, void *arg)
> +{
> +        struct hstate *h = (struct hstate *)arg;
> +        int nid = start;
> +
> +        hugetlb_vmemmap_optimize_folios(h, &h->hugepage_freelists[nid]);
> }
>
> -static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
> +static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
> {
>         unsigned long i;
> -        struct folio *folio;
> -        LIST_HEAD(folio_list);
> -        nodemask_t node_alloc_noretry;
> -
> -        /* Bit mask controlling how hard we retry per-node allocations.*/
> -        nodes_clear(node_alloc_noretry);
>
>         for (i = 0; i < h->max_huge_pages; ++i) {
> -                folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
> -                                              &node_alloc_noretry);
> -                if (!folio)
> +                if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
>                         break;
> -                list_add(&folio->lru, &folio_list);
>                 cond_resched();
>         }
>
> -        prep_and_add_allocated_folios(h, &folio_list);
> -
>         return i;
> }
>
> +static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
> +{
> +        struct padata_mt_job job = {
> +                .fn_arg         = h,
> +                .align          = 1,
> +                .numa_aware     = true
> +        };
> +
> +        job.thread_fn   = hugetlb_alloc_node;
> +        job.start       = 0;
> +        job.size        = h->max_huge_pages;
> +        job.min_chunk   = h->max_huge_pages / num_node_state(N_MEMORY) / 2;
> +        job.max_threads = num_node_state(N_MEMORY) * 2;
I am curious about the magic number 2 used in the assignments of
->min_chunk and ->max_threads. Does it come from your experiments?
I think it deserves a comment here.

I am also sceptical about the benefit of parallelizing a small
allocation of hugepages. Given 4 hugepages to be allocated on a UMA
system, job.min_chunk will be 2 and job.max_threads will be 2, so 2
workers will be scheduled, each allocating just 2 pages. How much does
that scheduling cost compared with allocating all 4 pages in a single
worker? Do you have any numbers on parallel vs. non-parallel runs for
a small allocation case? If there is no gain in that case, I think we
should assign a reasonable lower bound to ->min_chunk based on
experiments.
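For example (just a sketch, the floor of 1024 is a made-up value that
would need to be chosen from measurements):

        /* Avoid spawning workers when the job is too small to amortize them. */
        job.min_chunk = max(h->max_huge_pages / num_node_state(N_MEMORY) / 2,
                            1024UL);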
Thanks.
> +        padata_do_multithreaded(&job);
> +
> +        job.thread_fn   = hugetlb_vmemmap_optimize_node;
> +        job.start       = 0;
> +        job.size        = num_node_state(N_MEMORY);
> +        job.min_chunk   = 1;
> +        job.max_threads = num_node_state(N_MEMORY);
> +        padata_do_multithreaded(&job);
> +
> +        return h->nr_huge_pages;
> +}
> +
> /*
>  * NOTE: this routine is called in different contexts for gigantic and
>  * non-gigantic pages.