From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song <songmuchun@bytedance.com>,
	Joao Martins <joao.m.martins@oracle.com>,
	Oscar Salvador <osalvador@suse.de>,
	David Hildenbrand <david@redhat.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	David Rientjes <rientjes@google.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	Barry Song <21cnbao@gmail.com>, Michal Hocko <mhocko@suse.com>,
	Matthew Wilcox <willy@infradead.org>,
	Xiongchun Duan <duanxiongchun@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH v6 4/8] hugetlb: perform vmemmap restoration on a list of pages
Date: Fri, 29 Sep 2023 15:10:18 -0700	[thread overview]
Message-ID: <20230929221018.GB10357@monkey> (raw)
In-Reply-To: <20230925234837.86786-5-mike.kravetz@oracle.com>

On 09/25/23 16:48, Mike Kravetz wrote:
<snip>
> +static void update_and_free_pages_bulk(struct hstate *h,
> +						struct list_head *folio_list)
> +{
> +	long ret;
> +	struct folio *folio, *t_folio;
> +	LIST_HEAD(non_hvo_folios);
>  
>  	/*
> -	 * If vmemmap allocation was performed on any folio above, take lock
> -	 * to clear destructor of all folios on list.  This avoids the need to
> -	 * lock/unlock for each individual folio.
> -	 * The assumption is vmemmap allocation was performed on all or none
> -	 * of the folios on the list.  This is true except in VERY rare cases.
> +	 * First allocate required vmemmap (if necessary) for all folios.
> +	 * Carefully handle errors and free up any available hugetlb pages
> +	 * in an effort to make forward progress.
>  	 */
> -	if (clear_dtor) {
> +retry:
> +	ret = hugetlb_vmemmap_restore_folios(h, folio_list, &non_hvo_folios);
> +	if (ret < 0) {
> +		bulk_vmemmap_restore_error(h, folio_list, &non_hvo_folios);
> +		goto retry;
> +	}
> +
> +	/*
> +	 * At this point, list should be empty, ret should be >= 0 and there
> +	 * should only be pages on the non_hvo_folios list.
> +	 * Do note that the non_hvo_folios list could be empty.
> +	 * Without HVO enabled, ret will be 0 and there is no need to call
> +	 * __clear_hugetlb_destructor as this was done previously.
> +	 */
> +	VM_WARN_ON(!list_empty(folio_list));
> +	VM_WARN_ON(ret < 0);
> +	if (!list_empty(&non_hvo_folios) && ret) {
>  		spin_lock_irq(&hugetlb_lock);
> -		list_for_each_entry(folio, list, lru)
> +		list_for_each_entry(folio, &non_hvo_folios, lru)
>  			__clear_hugetlb_destructor(h, folio);
>  		spin_unlock_irq(&hugetlb_lock);
>  	}
>  
> -	/*
> -	 * Free folios back to low level allocators.  vmemmap and destructors
> -	 * were taken care of above, so update_and_free_hugetlb_folio will
> -	 * not need to take hugetlb lock.
> -	 */
> -	list_for_each_entry_safe(folio, t_folio, list, lru) {
> +	list_for_each_entry_safe(folio, t_folio, &non_hvo_folios, lru) {
>  		update_and_free_hugetlb_folio(h, folio, false);
>  		cond_resched();
>  	}
<snip>
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index c512e388dbb4..0b7710f90e38 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -19,6 +19,9 @@
>  
>  #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>  int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
> +long hugetlb_vmemmap_restore_folios(const struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios);
>  void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
>  void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>  
> @@ -45,6 +48,13 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *h
>  	return 0;
>  }
>  
> +static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
> +					struct list_head *folio_list,
> +					struct list_head *non_hvo_folios)
> +{
> +	return 0;
> +}

update_and_free_pages_bulk depends on pages with complete vmemmap being
moved from folio_list to non_hvo_folios.  When 0 is returned, it expects
ALL pages to have been moved.  Therefore, in the
!CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP case the stub above must perform

	list_splice_init(folio_list, non_hvo_folios);

before returning 0.
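
For reference, a minimal sketch of what the corrected stub could look
like (it keeps the signature quoted above; marking it static inline to
match the other stubs in the header is my assumption, and the actual fix
will be in the next version of the series):

	static inline long hugetlb_vmemmap_restore_folios(const struct hstate *h,
						struct list_head *folio_list,
						struct list_head *non_hvo_folios)
	{
		/*
		 * Without HVO there is no vmemmap to restore, so every folio
		 * already has a complete vmemmap.  Splice them all over to
		 * non_hvo_folios so the caller sees the same post-condition
		 * as the real implementation: folio_list empty, ret == 0.
		 */
		list_splice_init(folio_list, non_hvo_folios);
		return 0;
	}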

I will update and send a new version along with any changes needed to
address the arm64 boot issue reported with patch 2.
-- 
Mike Kravetz

Thread overview: 31+ messages
2023-09-25 23:48 [PATCH v6 0/8] Batch hugetlb vmemmap modification operations Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 1/8] hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 2/8] hugetlb: restructure pool allocations Mike Kravetz
2023-09-27 11:26   ` Konrad Dybcio
2023-09-29 20:57     ` Mike Kravetz
2023-10-02  9:57       ` Konrad Dybcio
2023-10-06  3:08         ` Mike Kravetz
2023-10-06 21:39           ` Konrad Dybcio
2023-10-06 22:35             ` Mike Kravetz
2023-10-09  3:29               ` Mike Kravetz
2023-10-09 10:11                 ` Konrad Dybcio
2023-10-09 15:04                   ` Mike Kravetz
2023-10-09 15:15                     ` Mike Kravetz
2023-10-09 21:09                       ` Konrad Dybcio
2023-10-10  1:26                         ` Mike Kravetz
2023-10-10  0:07                       ` Andrew Morton
2023-10-10 21:30                         ` Konrad Dybcio
2023-10-10 21:45                           ` Mike Kravetz
2023-10-11  9:36                             ` Konrad Dybcio
2023-10-09 21:09                     ` Konrad Dybcio
2023-10-07  1:51             ` Jane Chu
2023-10-09 10:13               ` Konrad Dybcio
2023-09-25 23:48 ` [PATCH v6 3/8] hugetlb: perform vmemmap optimization on a list of pages Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 4/8] hugetlb: perform vmemmap restoration " Mike Kravetz
2023-09-26  2:27   ` Muchun Song
2023-09-29 22:10   ` Mike Kravetz [this message]
2023-09-25 23:48 ` [PATCH v6 5/8] hugetlb: batch freeing of vmemmap pages Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 6/8] hugetlb: batch PMD split for bulk vmemmap dedup Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 7/8] hugetlb: batch TLB flushes when freeing vmemmap Mike Kravetz
2023-09-25 23:48 ` [PATCH v6 8/8] hugetlb: batch TLB flushes when restoring vmemmap Mike Kravetz
2023-09-26  2:20   ` Muchun Song
