From: Mike Kravetz <mike.kravetz@oracle.com>
To: Muchun Song <songmuchun@bytedance.com>,
corbet@lwn.net, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, x86@kernel.org, hpa@zytor.com,
dave.hansen@linux.intel.com, luto@kernel.org,
peterz@infradead.org, viro@zeniv.linux.org.uk,
akpm@linux-foundation.org, paulmck@kernel.org,
mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
rdunlap@infradead.org, oneukum@suse.com,
anshuman.khandual@arm.com, jroedel@suse.de,
almasrymina@google.com, rientjes@google.com, willy@infradead.org,
osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
Date: Thu, 19 Nov 2020 15:37:25 -0800 [thread overview]
Message-ID: <b3cadba4-5cef-a1ba-418d-bd08af60c656@oracle.com> (raw)
In-Reply-To: <20201113105952.11638-6-songmuchun@bytedance.com>
On 11/13/20 2:59 AM, Muchun Song wrote:
> On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> support and if we have 2MB contiguos pages and PMD aligned. If we want
s/contiguos/contiguous/, s/aligned/alignment/
> to free the unused vmemmap pages, we have to split the huge pmd firstly.
> So we should pre-allocate pgtable to split PMD to PTE.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> mm/hugetlb_vmemmap.h | 12 +++++++++
> 2 files changed, 85 insertions(+)
Thanks for the cleanup.
Oscar made some other comments. I only have one additional minor comment
below.
With those minor cleanups,
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
...
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> + unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> + /* Store preallocated pages on huge page lru list */
Let's expand the above comment to something like this:
/*
* Use the huge page lru list to temporarily store the preallocated
* pages. The preallocated pages are used and the list is emptied
* before the huge page is put into use. When the huge page is put
* into use by prep_new_huge_page() the list will be reinitialized.
*/
> + INIT_LIST_HEAD(&page->lru);
> +
> + while (nr--) {
> + pte_t *pte_p;
> +
> + pte_p = pte_alloc_one_kernel(&init_mm);
> + if (!pte_p)
> + goto out;
> + list_add(&virt_to_page(pte_p)->lru, &page->lru);
> + }
> +
> + return 0;
> +out:
> + vmemmap_pgtable_free(page);
> + return -ENOMEM;
> +}
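For context (the free side is not quoted above): vmemmap_pgtable_free() is
presumably just the inverse of the loop above, walking the temporary list
hanging off page->lru and handing each preallocated page table page back.
A minimal sketch, assuming pte_free_kernel() is the right counterpart for
pages obtained from pte_alloc_one_kernel():

	/*
	 * Sketch only -- the actual body lives in the patch and is not
	 * quoted in this reply. Free every page table page that
	 * vmemmap_pgtable_prealloc() parked on page->lru.
	 */
	static void vmemmap_pgtable_free(struct page *page)
	{
		struct page *pte_page, *tmp;

		list_for_each_entry_safe(pte_page, tmp, &page->lru, lru) {
			list_del(&pte_page->lru);
			pte_free_kernel(&init_mm, page_to_virt(pte_page));
		}
	}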
--
Mike Kravetz