From: Miaohe Lin <linmiaohe@huawei.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: <linux-mm@kvack.org>, Naoya Horiguchi <naoya.horiguchi@nec.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 5/8] mm: Convert hugetlb_page_mapping_lock_write to folio
Date: Fri, 8 Mar 2024 16:33:05 +0800
Message-ID: <f7231713-6fcd-a849-58db-bd1d3834d74b@huawei.com>
In-Reply-To: <20240229212036.2160900-6-willy@infradead.org>
On 2024/3/1 5:20, Matthew Wilcox (Oracle) wrote:
> The page is only used to get the mapping, so the folio will do just
> as well. Both callers already have a folio available, so this saves
> a call to compound_head().
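Right. For readers following along: the saving comes from page_mapping()
having to resolve the head page before it can look at ->mapping, while
both callers here already hold the folio. A rough sketch of the
difference (simplified, not the exact in-tree implementations):

#include <linux/pagemap.h>

/* Roughly what the old call has to do first (sketch only). */
static struct address_space *mapping_via_page(struct page *hpage)
{
        /* page_folio() goes through compound_head() for tail pages */
        return folio_mapping(page_folio(hpage));
}

/* With the folio already in hand there is nothing left to resolve. */
static struct address_space *mapping_via_folio(struct folio *folio)
{
        return folio_mapping(folio);
}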
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Looks good to me. Thanks.
Acked-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  include/linux/hugetlb.h | 6 +++---
>  mm/hugetlb.c            | 6 +++---
>  mm/memory-failure.c     | 2 +-
>  mm/migrate.c            | 2 +-
>  4 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 77b30a8c6076..acb1096ecdaa 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -175,7 +175,7 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
> pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
> unsigned long addr, pud_t *pud);
>
> -struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
> +struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
>
> extern int sysctl_hugetlb_shm_group;
> extern struct list_head huge_boot_pages[MAX_NUMNODES];
> @@ -298,8 +298,8 @@ static inline unsigned long hugetlb_total_pages(void)
> return 0;
> }
>
> -static inline struct address_space *hugetlb_page_mapping_lock_write(
> - struct page *hpage)
> +static inline struct address_space *hugetlb_folio_mapping_lock_write(
> + struct folio *folio)
> {
> return NULL;
> }
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bb17e5c22759..0e464a8f1aa9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2178,13 +2178,13 @@ EXPORT_SYMBOL_GPL(PageHuge);
> /*
> * Find and lock address space (mapping) in write mode.
> *
> - * Upon entry, the page is locked which means that page_mapping() is
> + * Upon entry, the folio is locked which means that folio_mapping() is
> * stable. Due to locking order, we can only trylock_write. If we can
> * not get the lock, simply return NULL to caller.
> */
> -struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage)
> +struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio)
> {
> - struct address_space *mapping = page_mapping(hpage);
> + struct address_space *mapping = folio_mapping(folio);
>
> if (!mapping)
> return mapping;
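Side note, the comment update above reads right to me: the folio lock on
entry is what keeps ->mapping stable, and the lock order is why only a
trylock is allowed. A hedged sketch of that shape, illustrative only and
not a quote of the rest of the function:

#include <linux/fs.h>
#include <linux/mmdebug.h>
#include <linux/pagemap.h>

/* Illustrative only: caller holds the folio lock, i_mmap_rwsem is
 * taken with a trylock because of lock ordering.
 */
static struct address_space *lock_mapping_write_sketch(struct folio *folio)
{
        struct address_space *mapping;

        VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

        mapping = folio_mapping(folio);
        if (!mapping)
                return NULL;

        /* must not sleep on i_mmap_rwsem here, hence the trylock */
        if (!i_mmap_trylock_write(mapping))
                return NULL;

        return mapping;
}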
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 27dc21063552..fe4959e994d0 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1624,7 +1624,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
> * TTU_RMAP_LOCKED to indicate we have taken the lock
> * at this higher level.
> */
> - mapping = hugetlb_page_mapping_lock_write(hpage);
> + mapping = hugetlb_folio_mapping_lock_write(folio);
> if (mapping) {
> try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
> i_mmap_unlock_write(mapping);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 73a052a382f1..0aef867d600b 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1425,7 +1425,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> * semaphore in write mode here and set TTU_RMAP_LOCKED
> * to let lower levels know we have taken the lock.
> */
> - mapping = hugetlb_page_mapping_lock_write(&src->page);
> + mapping = hugetlb_folio_mapping_lock_write(src);
> if (unlikely(!mapping))
> goto unlock_put_anon;
>
>
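For completeness, the memory-failure and migration hunks are the two
callers and they follow the same pattern: take i_mmap_rwsem in write mode
up front via this helper, then pass TTU_RMAP_LOCKED so the rmap walk does
not try to take it again. A hedged, caller-side sketch of that pattern
(illustrative only, not the actual hwpoison or migration code):

#include <linux/fs.h>
#include <linux/hugetlb.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/rmap.h>

/* Illustrative caller-side pattern only. */
static bool unmap_hugetlb_folio_sketch(struct folio *folio, enum ttu_flags ttu)
{
        struct address_space *mapping;

        folio_lock(folio);
        mapping = hugetlb_folio_mapping_lock_write(folio);
        if (!mapping) {
                folio_unlock(folio);
                return false;
        }

        /* i_mmap_rwsem is already held, tell the rmap walk not to take it */
        try_to_unmap(folio, ttu | TTU_RMAP_LOCKED);

        i_mmap_unlock_write(mapping);
        folio_unlock(folio);

        return !folio_mapped(folio);
}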