Date: Mon, 25 Sep 2023 21:54:49 +0800
From: Muchun Song <muchun.song@linux.dev>
Subject: Re: [PATCH v5 4/8] hugetlb: perform vmemmap restoration on a list of pages
To: Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
    Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
    Barry Song <21cnbao@gmail.com>, Michal Hocko, Matthew Wilcox,
    Xiongchun Duan, Andrew Morton
References: <20230925003953.142620-1-mike.kravetz@oracle.com>
 <20230925003953.142620-5-mike.kravetz@oracle.com>
In-Reply-To: <20230925003953.142620-5-mike.kravetz@oracle.com>

On 2023/9/25 08:39, Mike Kravetz wrote:
> The routine update_and_free_pages_bulk already performs vmemmap
> restoration on the list of hugetlb pages in a separate step. In
> preparation for more functionality to be added in this step, create a
> new routine hugetlb_vmemmap_restore_folios() that will restore
> vmemmap for a list of folios.
>
> This new routine must provide sufficient feedback about errors and
> actual restoration performed so that update_and_free_pages_bulk can
> perform optimally.
>
> Special care must be taken when encountering an error from
> hugetlb_vmemmap_restore_folios. We want to continue making as much
> forward progress as possible. A new routine bulk_vmemmap_restore_error
> handles this specific situation.
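
To make sure I follow the intended contract here: hugetlb_vmemmap_restore_folios()
moves every folio whose vmemmap is (or becomes) present onto a second list, and
the caller keeps retrying around the error path until the input list drains. A
condensed model of the caller as I read it (only a sketch of my understanding;
the real code is quoted below):

	LIST_HEAD(non_hvo_folios);
	long ret;

retry:
	/* Folios with vmemmap present are moved to non_hvo_folios. */
	ret = hugetlb_vmemmap_restore_folios(h, folio_list, &non_hvo_folios);
	if (ret < 0) {
		/* Free or sequester what we can, then retry the remainder. */
		bulk_vmemmap_restore_error(h, folio_list, &non_hvo_folios);
		goto retry;
	}
	/* Here folio_list is empty; everything on non_hvo_folios can be freed. */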
>
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb.c         | 98 +++++++++++++++++++++++++++++++-------------
>  mm/hugetlb_vmemmap.c | 38 +++++++++++++++++
>  mm/hugetlb_vmemmap.h | 10 +++++
>  3 files changed, 118 insertions(+), 28 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index da0ebd370b5f..53df35fbc3f2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1834,50 +1834,92 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
>  		schedule_work(&free_hpage_work);
>  }
>
> -static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
> +static void bulk_vmemmap_restore_error(struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios)
>  {
>  	struct folio *folio, *t_folio;
> -	bool clear_dtor = false;
>
> -	/*
> -	 * First allocate required vmemmmap (if necessary) for all folios on
> -	 * list. If vmemmap can not be allocated, we can not free folio to
> -	 * lower level allocator, so add back as hugetlb surplus page.
> -	 * add_hugetlb_folio() removes the page from THIS list.
> -	 * Use clear_dtor to note if vmemmap was successfully allocated for
> -	 * ANY page on the list.
> -	 */
> -	list_for_each_entry_safe(folio, t_folio, list, lru) {
> -		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
> +	if (!list_empty(non_hvo_folios)) {
> +		/*
> +		 * Free any restored hugetlb pages so that restore of the
> +		 * entire list can be retried.
> +		 * The idea is that in the common case of ENOMEM errors freeing
> +		 * hugetlb pages with vmemmap we will free up memory so that we
> +		 * can allocate vmemmap for more hugetlb pages.
> +		 */
> +		list_for_each_entry_safe(folio, t_folio, non_hvo_folios, lru) {
> +			list_del(&folio->lru);
> +			spin_lock_irq(&hugetlb_lock);
> +			__clear_hugetlb_destructor(h, folio);
> +			spin_unlock_irq(&hugetlb_lock);
> +			update_and_free_hugetlb_folio(h, folio, false);
> +			cond_resched();
> +		}
> +	} else {
> +		/*
> +		 * In the case where there are no folios which can be
> +		 * immediately freed, we loop through the list trying to restore
> +		 * vmemmap individually in the hope that someone elsewhere may
> +		 * have done something to cause success (such as freeing some
> +		 * memory). If unable to restore a hugetlb page, the hugetlb
> +		 * page is made a surplus page and removed from the list.
> +		 * If are able to restore vmemmap and free one hugetlb page, we
> +		 * quit processing the list to retry the bulk operation.
> +		 */
> +		list_for_each_entry_safe(folio, t_folio, folio_list, lru)
>  			if (hugetlb_vmemmap_restore(h, &folio->page)) {

IIUC, the folio should be deleted from folio_list here, since this huge
page will be added to the hstate's free list while it is still linked
into folio_list, which corrupts that list, right?
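Something like the following is what I have in mind (untested, only to
illustrate the list_del; the rest is unchanged from this patch):

		list_for_each_entry_safe(folio, t_folio, folio_list, lru)
			if (hugetlb_vmemmap_restore(h, &folio->page)) {
				/* unlink before the folio goes to the free list */
				list_del(&folio->lru);
				spin_lock_irq(&hugetlb_lock);
				add_hugetlb_folio(h, folio, true);
				spin_unlock_irq(&hugetlb_lock);
			} else {
				/* unchanged from this patch */
			}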
>  				spin_lock_irq(&hugetlb_lock);
>  				add_hugetlb_folio(h, folio, true);
>  				spin_unlock_irq(&hugetlb_lock);
> -			} else
> -				clear_dtor = true;
> -		}
> +			} else {
> +				list_del(&folio->lru);
> +				spin_lock_irq(&hugetlb_lock);
> +				__clear_hugetlb_destructor(h, folio);
> +				spin_unlock_irq(&hugetlb_lock);
> +				update_and_free_hugetlb_folio(h, folio, false);
> +				cond_resched();
> +				break;
> +			}
>  	}
> +}
> +
> +static void update_and_free_pages_bulk(struct hstate *h,
> +					struct list_head *folio_list)
> +{
> +	long ret;
> +	struct folio *folio, *t_folio;
> +	LIST_HEAD(non_hvo_folios);
>
>  	/*
> -	 * If vmemmmap allocation was performed on any folio above, take lock
> -	 * to clear destructor of all folios on list. This avoids the need to
> -	 * lock/unlock for each individual folio.
> -	 * The assumption is vmemmap allocation was performed on all or none
> -	 * of the folios on the list. This is true expect in VERY rare cases.
> +	 * First allocate required vmemmmap (if necessary) for all folios.
> +	 * Carefully handle errors and free up any available hugetlb pages
> +	 * in an effort to make forward progress.
>  	 */
> -	if (clear_dtor) {
> +retry:
> +	ret = hugetlb_vmemmap_restore_folios(h, folio_list, &non_hvo_folios);
> +	if (ret < 0) {
> +		bulk_vmemmap_restore_error(h, folio_list, &non_hvo_folios);
> +		goto retry;
> +	}
> +
> +	/*
> +	 * At this point, list should be empty, ret should be >= 0 and there
> +	 * should only be pages on the non_hvo_folios list.
> +	 * Do note that the non_hvo_folios list could be empty.
> +	 * Without HVO enabled, ret will be 0 and there is no need to call
> +	 * __clear_hugetlb_destructor as this was done previously.
> +	 */
> +	VM_WARN_ON(!list_empty(folio_list));
> +	VM_WARN_ON(ret < 0);
> +	if (!list_empty(&non_hvo_folios) && ret) {
>  		spin_lock_irq(&hugetlb_lock);
> -		list_for_each_entry(folio, list, lru)
> +		list_for_each_entry(folio, &non_hvo_folios, lru)
>  			__clear_hugetlb_destructor(h, folio);
>  		spin_unlock_irq(&hugetlb_lock);
>  	}
>
> -	/*
> -	 * Free folios back to low level allocators. vmemmap and destructors
> -	 * were taken care of above, so update_and_free_hugetlb_folio will
> -	 * not need to take hugetlb lock.
> -	 */
> -	list_for_each_entry_safe(folio, t_folio, list, lru) {
> +	list_for_each_entry_safe(folio, t_folio, &non_hvo_folios, lru) {
>  		update_and_free_hugetlb_folio(h, folio, false);
>  		cond_resched();
>  	}
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 4558b814ffab..77f44b81ff01 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -480,6 +480,44 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	return ret;
>  }
>
> +/**
> + * hugetlb_vmemmap_restore_folios - restore vmemmap for every folio on the list.
> + * @h:			hstate.
> + * @folio_list:		list of folios.
> + * @non_hvo_folios:	Output list of folios for which vmemmap exists.
> + *
> + * Return: number of folios for which vmemmap was restored, or an error code
> + *		if an error was encountered restoring vmemmap for a folio.
> + *		Folios that have vmemmap are moved to the non_hvo_folios
> + *		list. Processing of entries stops when the first error is
> + *		encountered. The folio that experienced the error and all
> + *		non-processed folios will remain on folio_list.
> + */
> +long hugetlb_vmemmap_restore_folios(const struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios)
> +{
> +	struct folio *folio, *t_folio;
> +	long restored = 0;
> +	long ret = 0;
> +
> +	list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
> +		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
> +			ret = hugetlb_vmemmap_restore(h, &folio->page);
> +			if (ret)
> +				break;
> +			restored++;
> +		}
> +
> +		/* Add non-optimized folios to output list */
> +		list_move(&folio->lru, non_hvo_folios);
> +	}
> +
> +	if (!ret)
> +		ret = restored;
> +	return ret;
> +}
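If I read this loop correctly, after a failing call every folio processed
before the failure has already been moved to non_hvo_folios, while the
failing folio and all unprocessed folios stay on folio_list. A hypothetical
walk-through (the names A/B/C are made up):

	/*
	 * Assume folio_list = {A, B, C} and hugetlb_vmemmap_restore()
	 * fails on B:
	 *
	 *   A: restored (or never optimized)  -> moved to non_hvo_folios
	 *   B: restore fails, loop breaks     -> stays on folio_list
	 *   C: never processed                -> stays on folio_list
	 *
	 * The return value is B's error (e.g. -ENOMEM), not the count 1.
	 */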
> +
>  /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
>  static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
>  {
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index c512e388dbb4..0b7710f90e38 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -19,6 +19,9 @@
>
>  #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>  int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
> +long hugetlb_vmemmap_restore_folios(const struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios);
>  void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
>  void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>
> @@ -45,6 +48,13 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *h
>  	return 0;
>  }
>
> +static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios)
> +{
> +	return 0;
> +}
> +
>  static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
>  {
>  }