Date: Wed, 30 Aug 2023 16:33:39 +0800
Subject: Re: [PATCH 07/12] hugetlb: perform vmemmap restoration on a list of pages
To: Mike Kravetz
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand, Miaohe Lin,
 David Rientjes, Anshuman Khandual, Naoya Horiguchi, Barry Song, Michal Hocko,
 Matthew Wilcox, Xiongchun Duan, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20230825190436.55045-1-mike.kravetz@oracle.com> <20230825190436.55045-8-mike.kravetz@oracle.com>
From: Muchun Song
In-Reply-To: <20230825190436.55045-8-mike.kravetz@oracle.com>

On 2023/8/26 03:04, Mike Kravetz wrote:
> When removing hugetlb pages from the pool, we first create a list
> of removed pages and then free those pages back to low level allocators.
> Part of the 'freeing process' is to restore vmemmap for all base pages
> if necessary.
> Pass this list of pages to a new routine
> hugetlb_vmemmap_restore_folios() so that vmemmap restoration can be
> performed in bulk.
>
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb.c         | 3 +++
>  mm/hugetlb_vmemmap.c | 8 ++++++++
>  mm/hugetlb_vmemmap.h | 6 ++++++
>  3 files changed, 17 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3133dbd89696..1bde5e234d5c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1833,6 +1833,9 @@ static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
>  {
>  	struct folio *folio, *t_folio;
>
> +	/* First restore vmemmap for all pages on list. */
> +	hugetlb_vmemmap_restore_folios(h, list);
> +
>  	list_for_each_entry_safe(folio, t_folio, list, lru) {
>  		update_and_free_hugetlb_folio(h, folio, false);
>  		cond_resched();
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 147018a504a6..d5e6b6c76dce 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -479,6 +479,14 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	return ret;
>  }
>

Because it is a void function, I'd like to add a comment here like:

    This function only tries to restore a list of folios' vmemmap pages and
    does not guarantee that the restoration will succeed after it returns.

Thanks.

> +void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
> +{
> +	struct folio *folio;
> +
> +	list_for_each_entry(folio, folio_list, lru)
> +		hugetlb_vmemmap_restore(h, &folio->page);
> +}
> +
>  /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
>  static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
>  {
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 036494e040ca..b7074672ceb2 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -12,6 +12,7 @@
>
>  #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>  int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
> +void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list);
>  void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
>  void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>
> @@ -44,6 +45,11 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *h
>  	return 0;
>  }
>
> +static inline void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
> +{
> +	return 0;
> +}
> +
>  static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
>  {
>  }

One more thing: the CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=n stub above returns
0 from a void function, which will not compile. The `return 0;` should be
dropped (an empty body is fine, matching the other void stubs).
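To illustrate what "only tries" means for the suggested comment, the bulk
pattern can be sketched outside the kernel with a toy singly linked list.
`struct node`, `restore_one()`, and `restore_all()` below are illustrative
stand-ins for the folio list and for hugetlb_vmemmap_restore{,_folios}(),
not kernel APIs; the point is that the walk continues past a per-entry
failure and the void bulk routine reports no aggregate status:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for the intrusive list of folios (illustrative only). */
struct node {
	struct node *next;
	int will_fail;	/* models a folio whose restoration cannot succeed */
	int restored;	/* set once "vmemmap restoration" succeeds */
};

/* Stand-in for hugetlb_vmemmap_restore(): restore one entry, 0 on success. */
static int restore_one(struct node *n)
{
	if (n->will_fail)
		return -1;	/* failure is left for the caller to handle later */
	n->restored = 1;
	return 0;
}

/*
 * Stand-in for hugetlb_vmemmap_restore_folios(): walk the whole list once,
 * attempting restoration on every entry.  Like the kernel routine, it is
 * void -- it only tries, and does not guarantee success after it returns.
 */
static void restore_all(struct node *head)
{
	for (struct node *n = head; n; n = n->next)
		(void)restore_one(n);
}
```

In the patch above, every entry is subsequently passed to
update_and_free_hugetlb_folio(), so an entry whose restoration failed during
the bulk pass can still be dealt with on a per-folio basis afterwards.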