From: Richard Palethorpe <rpalethorpe@suse.de>
To: Li Wang <liwang@redhat.com>
Cc: ltp@lists.linux.it
Subject: Re: [LTP] [PATCH v2 1/2] lib: add support for kinds of hpsize reservation
Date: Wed, 18 Oct 2023 10:10:01 +0100
Message-ID: <87mswgw8bz.fsf@suse.de>
In-Reply-To: <CAEemH2ejp3iLbv7fXAr6H3WuC+TQ3O05V0t2HUyYSbsNYixFaQ@mail.gmail.com>
Hello,
Li Wang <liwang@redhat.com> writes:
> Hi Cyril,
>
> [Please hold off on merging this patch]
>
> The part of this approach I am hesitant about (on my side) is the new
> field 'hp->hpsize'. It seems unwise to leave it to users to fill in the
> gigantic page size manually, as some arches support different
> huge/gigantic page sizes:
Yes, good idea.
>
> x86_64 and x86: 2MB and 1GB.
> PowerPC: ranging from 64KB to 16GB.
> ARM64: 2MB and 1GB.
> IA-64 (Itanium): from 4KB to 256MB.
>
> We probably need an intelligent way to detect and reserve whatever
> hugepage or gigantic page is required, all accomplished in the LTP
> library or setup(). Then people don't need to fill in any value, which
> avoids typos or other mistakes.
It seems like a special flag is needed in mmap if you want to allocate a
gigantic page other than 1GB?
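For reference, a minimal sketch of how that looks (untested; the log2 of
the desired page size is encoded in the flags at MAP_HUGE_SHIFT, so
30 << MAP_HUGE_SHIFT is what the MAP_HUGE_1GB convenience macro expands
to, and e.g. a 16GB PowerPC page would use shift 34):

#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
# define MAP_HUGE_SHIFT 26 /* from <linux/mman.h> on older glibc */
#endif

int main(void)
{
	size_t len = 1UL << 30; /* one 1GB gigantic page */

	/* Select the huge page size explicitly instead of taking
	 * the system default huge page size */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		       (30 << MAP_HUGE_SHIFT),
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	munmap(p, len);
	return 0;
}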
>
> The improved way I can think of is to extend the hugepage policy
> with a "_G" suffix to specify gigantic pages.
>
> Does this sound better? What do you think?
>
> Something drafted based on my patch v2:
>
> --- a/include/tst_hugepage.h
> +++ b/include/tst_hugepage.h
> @@ -20,14 +20,15 @@ extern char *nr_opt; /* -s num Set the number of the been allocated hugepages
> extern char *Hopt; /* -H /.. Location of hugetlbfs, i.e. -H /var/hugetlbfs */
>
> enum tst_hp_policy {
> - TST_REQUEST,
> - TST_NEEDS,
> + TST_REQUEST_H = 0x0,
> + TST_REQUEST_G = 0x1,
> + TST_NEEDS_H = 0x2,
> + TST_NEEDS_G = 0x4,
> };
>
> struct tst_hugepage {
> const unsigned long number;
> enum tst_hp_policy policy;
> - const unsigned long hpsize;
> };
Why not keep hpsize and add enum tst_hp_size { TST_HUGE, TST_GIGANTIC }?
In theory more sizes can be added.
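Roughly, as a sketch (the names are just suggestions; the library would
translate the enum to a size in bytes internally):

enum tst_hp_size {
	TST_HUGE,     /* default huge page size, e.g. 2MB on x86_64 */
	TST_GIGANTIC, /* smallest gigantic size, e.g. 1GB on x86_64 */
};

struct tst_hugepage {
	const unsigned long number;
	enum tst_hp_policy policy;
	enum tst_hp_size size;
};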
>
> /*
> @@ -35,6 +36,11 @@ struct tst_hugepage {
> */
> size_t tst_get_hugepage_size(void);
>
> +/*
> + * Get the gigantic hugepage size. Returns 0 if hugepages are not supported.
> + */
> +size_t tst_get_gigantic_size(void);
> +
> /*
> * Try the best to request a specified number of huge pages from system,
> * it will store the reserved hpage number in tst_hugepages.
> diff --git a/lib/tst_hugepage.c b/lib/tst_hugepage.c
> index f4b18bbbf..568884fbb 100644
> --- a/lib/tst_hugepage.c
> +++ b/lib/tst_hugepage.c
> @@ -21,6 +21,30 @@ size_t tst_get_hugepage_size(void)
> return SAFE_READ_MEMINFO("Hugepagesize:") * 1024;
> }
>
> +/* Check if hugetlb page is gigantic */
> +static inline int is_hugetlb_gigantic(unsigned long hpage_size)
> +{
> + return (hpage_size / getpagesize()) >> 11;
> +}
What is 11? If it is the order or shift of hugepages then that is not
constant (see below).
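If it is meant to be the kernel's default MAX_ORDER, then at the very
least it should be a named constant, e.g. (a sketch; the value 11 only
matches common configs such as x86_64 with 4K pages):

/* Assumed kernel MAX_ORDER; varies with arch and kernel config */
#define TST_MAX_ORDER 11

static inline int is_hugetlb_gigantic(unsigned long hpage_size)
{
	/* Gigantic: the page order is at least the buddy allocator's
	 * maximum order, so it can't come from the buddy allocator */
	return (hpage_size / getpagesize()) >> TST_MAX_ORDER;
}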
> +
> +size_t tst_get_gigantic_size(void)
> +{
> + DIR *dir;
> + struct dirent *ent;
> + unsigned long g_hpage_size;
> +
> + dir = SAFE_OPENDIR(PATH_HUGEPAGES);
> + while ((ent = SAFE_READDIR(dir))) {
> + if ((sscanf(ent->d_name, "hugepages-%lukB", &g_hpage_size) == 1) &&
> + is_hugetlb_gigantic(g_hpage_size * 1024)) {
> + break;
> + }
> + }
I guess in theory more gigantic page sizes could be added. I'm not sure
which size we should pick, but we don't want it to depend on readdir()
ordering (which is effectively random), because that would make
debugging more difficult.

So could we search for the smallest size (the ordinary hugepagesize) and
the second smallest (the smallest gigantic page)?
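Something like the following (an untested sketch, reusing the SAFE_*
helpers from your draft):

size_t tst_get_gigantic_size(void)
{
	DIR *dir;
	struct dirent *ent;
	unsigned long size, smallest = 0, second = 0;

	dir = SAFE_OPENDIR(PATH_HUGEPAGES);
	while ((ent = SAFE_READDIR(dir))) {
		if (sscanf(ent->d_name, "hugepages-%lukB", &size) != 1)
			continue;

		/* Track the two smallest supported page sizes */
		if (!smallest || size < smallest) {
			second = smallest;
			smallest = size;
		} else if (!second || size < second) {
			second = size;
		}
	}
	SAFE_CLOSEDIR(dir);

	/* The smallest size is the ordinary huge page size, so the
	 * second smallest is the smallest gigantic page; returns 0
	 * when there is no gigantic size at all */
	return second * 1024;
}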
> +
> + SAFE_CLOSEDIR(dir);
> + return g_hpage_size * 1024;
> +}
> +
> unsigned long tst_reserve_hugepages(struct tst_hugepage *hp)
> {
> unsigned long val, max_hpages, hpsize;
> @@ -43,10 +67,10 @@ unsigned long tst_reserve_hugepages(struct tst_hugepage *hp)
> else
> tst_hugepages = hp->number;
>
> - if (hp->hpsize)
> - hpsize = hp->hpsize;
> + if (hp->policy & (TST_NEEDS_G | TST_REQUEST_G))
> + hpsize = tst_get_gigantic_size() / 1024;
> else
> - hpsize = SAFE_READ_MEMINFO(MEMINFO_HPAGE_SIZE);
> + hpsize = tst_get_hugepage_size() / 1024;
>
> sprintf(hugepage_path,
> PATH_HUGEPAGES"/hugepages-%lukB/nr_hugepages", hpsize);
> if (access(hugepage_path, F_OK)) {
>
>
>
>
> --
> Regards,
> Li Wang
--
Thank you,
Richard.
--
Mailing list info: https://lists.linux.it/listinfo/ltp