* [PATCH 0/2] Convert two functions in hugetlb.c to use a folio
@ 2023-06-02 1:54 Peng Zhang
2023-06-02 1:54 ` [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range() Peng Zhang
2023-06-02 1:54 ` [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp() Peng Zhang
0 siblings, 2 replies; 11+ messages in thread
From: Peng Zhang @ 2023-06-02 1:54 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz
Cc: muchun.song, sidhartha.kumar, vishal.moola, wangkefeng.wang,
sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
This patch series converts two functions in hugetlb.c to use a folio,
which can remove several implicit calls to compound_head().
ZhangPeng (2):
mm/hugetlb: Use a folio in copy_hugetlb_page_range()
mm/hugetlb: Use a folio in hugetlb_wp()
mm/hugetlb.c | 36 ++++++++++++++++++++----------------
1 file changed, 20 insertions(+), 16 deletions(-)
--
2.25.1
* [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range()
2023-06-02 1:54 [PATCH 0/2] Convert two functions in hugetlb.c to use a folio Peng Zhang
@ 2023-06-02 1:54 ` Peng Zhang
2023-06-02 3:14 ` Muchun Song
2023-06-02 16:44 ` Sidhartha Kumar
2023-06-02 1:54 ` [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp() Peng Zhang
1 sibling, 2 replies; 11+ messages in thread
From: Peng Zhang @ 2023-06-02 1:54 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz
Cc: muchun.song, sidhartha.kumar, vishal.moola, wangkefeng.wang,
sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
We can replace five implicit calls to compound_head() with one by using
pte_folio. However, we still need to keep ptepage because we need to know
which page in the folio we are copying.
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
mm/hugetlb.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea24718db4af..0b774dd3d57b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5017,6 +5017,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
{
pte_t *src_pte, *dst_pte, entry;
struct page *ptepage;
+ struct folio *pte_folio;
unsigned long addr;
bool cow = is_cow_mapping(src_vma->vm_flags);
struct hstate *h = hstate_vma(src_vma);
@@ -5116,7 +5117,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
} else {
entry = huge_ptep_get(src_pte);
ptepage = pte_page(entry);
- get_page(ptepage);
+ pte_folio = page_folio(ptepage);
+ folio_get(pte_folio);
/*
* Failing to duplicate the anon rmap is a rare case
@@ -5128,7 +5130,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
* need to be without the pgtable locks since we could
* sleep during the process.
*/
- if (!PageAnon(ptepage)) {
+ if (!folio_test_anon(pte_folio)) {
page_dup_file_rmap(ptepage, true);
} else if (page_try_dup_anon_rmap(ptepage, true,
src_vma)) {
@@ -5140,14 +5142,14 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
/* Do not use reserve as it's private owned */
new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
if (IS_ERR(new_folio)) {
- put_page(ptepage);
+ folio_put(pte_folio);
ret = PTR_ERR(new_folio);
break;
}
ret = copy_user_large_folio(new_folio,
- page_folio(ptepage),
- addr, dst_vma);
- put_page(ptepage);
+ pte_folio,
+ addr, dst_vma);
+ folio_put(pte_folio);
if (ret) {
folio_put(new_folio);
break;
--
2.25.1
* [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 1:54 [PATCH 0/2] Convert two functions in hugetlb.c to use a folio Peng Zhang
2023-06-02 1:54 ` [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range() Peng Zhang
@ 2023-06-02 1:54 ` Peng Zhang
2023-06-02 3:17 ` Muchun Song
` (2 more replies)
1 sibling, 3 replies; 11+ messages in thread
From: Peng Zhang @ 2023-06-02 1:54 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, willy, mike.kravetz
Cc: muchun.song, sidhartha.kumar, vishal.moola, wangkefeng.wang,
sunnanyong, ZhangPeng
From: ZhangPeng <zhangpeng362@huawei.com>
We can replace nine implicit calls to compound_head() with one by using
old_folio. However, we still need to keep old_page because we need to
know which page in the folio we are copying.
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
mm/hugetlb.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0b774dd3d57b..f0ab6e8adf6f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5543,6 +5543,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
pte_t pte = huge_ptep_get(ptep);
struct hstate *h = hstate_vma(vma);
struct page *old_page;
+ struct folio *old_folio;
struct folio *new_folio;
int outside_reserve = 0;
vm_fault_t ret = 0;
@@ -5574,6 +5575,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
}
old_page = pte_page(pte);
+ old_folio = page_folio(old_page);
delayacct_wpcopy_start();
@@ -5582,7 +5584,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
* If no-one else is actually using this page, we're the exclusive
* owner and can reuse this page.
*/
- if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
+ if (page_mapcount(old_page) == 1 && folio_test_anon(old_folio)) {
if (!PageAnonExclusive(old_page))
page_move_anon_rmap(old_page, vma);
if (likely(!unshare))
@@ -5591,8 +5593,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
delayacct_wpcopy_end();
return 0;
}
- VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
- old_page);
+ VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
+ PageAnonExclusive(old_page), old_page);
/*
* If the process that created a MAP_PRIVATE mapping is about to
@@ -5604,10 +5606,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
* of the full address range.
*/
if (is_vma_resv_set(vma, HPAGE_RESV_OWNER) &&
- page_folio(old_page) != pagecache_folio)
+ old_folio != pagecache_folio)
outside_reserve = 1;
- get_page(old_page);
+ folio_get(old_folio);
/*
* Drop page table lock as buddy allocator may be called. It will
@@ -5629,7 +5631,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
pgoff_t idx;
u32 hash;
- put_page(old_page);
+ folio_put(old_folio);
/*
* Drop hugetlb_fault_mutex and vma_lock before
* unmapping. unmapping needs to hold vma_lock
@@ -5674,7 +5676,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
goto out_release_all;
}
- if (copy_user_large_folio(new_folio, page_folio(old_page), address, vma)) {
+ if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
ret = VM_FAULT_HWPOISON_LARGE;
goto out_release_all;
}
@@ -5703,7 +5705,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
set_huge_pte_at(mm, haddr, ptep, newpte);
folio_set_hugetlb_migratable(new_folio);
/* Make the old page be freed below */
- new_folio = page_folio(old_page);
+ new_folio = old_folio;
}
spin_unlock(ptl);
mmu_notifier_invalidate_range_end(&range);
@@ -5712,11 +5714,11 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
* No restore in case of successful pagetable update (Break COW or
* unshare)
*/
- if (new_folio != page_folio(old_page))
+ if (new_folio != old_folio)
restore_reserve_on_error(h, vma, haddr, new_folio);
folio_put(new_folio);
out_release_old:
- put_page(old_page);
+ folio_put(old_folio);
spin_lock(ptl); /* Caller expects lock to be held */
--
2.25.1
* Re: [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range()
2023-06-02 1:54 ` [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range() Peng Zhang
@ 2023-06-02 3:14 ` Muchun Song
2023-06-02 16:44 ` Sidhartha Kumar
1 sibling, 0 replies; 11+ messages in thread
From: Muchun Song @ 2023-06-02 3:14 UTC (permalink / raw)
To: Peng Zhang
Cc: Linux Memory Management List, linux-kernel, Andrew Morton,
Matthew Wilcox, mike.kravetz, sidhartha.kumar, vishal.moola,
wangkefeng.wang, sunnanyong
> On Jun 2, 2023, at 09:54, Peng Zhang <zhangpeng362@huawei.com> wrote:
>
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace five implicit calls to compound_head() with one by using
> pte_folio. However, we still need to keep ptepage because we need to know
> which page in the folio we are copying.
>
> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Thanks.
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 1:54 ` [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp() Peng Zhang
@ 2023-06-02 3:17 ` Muchun Song
2023-06-02 17:47 ` Sidhartha Kumar
2023-06-02 20:17 ` Matthew Wilcox
2 siblings, 0 replies; 11+ messages in thread
From: Muchun Song @ 2023-06-02 3:17 UTC (permalink / raw)
To: Peng Zhang
Cc: Linux Memory Management List, linux-kernel, akpm, willy,
mike.kravetz, sidhartha.kumar, vishal.moola, wangkefeng.wang,
sunnanyong
> On Jun 2, 2023, at 09:54, Peng Zhang <zhangpeng362@huawei.com> wrote:
>
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace nine implicit calls to compound_head() with one by using
> old_folio. However, we still need to keep old_page because we need to
> know which page in the folio we are copying.
>
> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Thanks.
* Re: [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range()
2023-06-02 1:54 ` [PATCH 1/2] mm/hugetlb: Use a folio in copy_hugetlb_page_range() Peng Zhang
2023-06-02 3:14 ` Muchun Song
@ 2023-06-02 16:44 ` Sidhartha Kumar
1 sibling, 0 replies; 11+ messages in thread
From: Sidhartha Kumar @ 2023-06-02 16:44 UTC (permalink / raw)
To: Peng Zhang, linux-mm, linux-kernel, akpm, willy, mike.kravetz
Cc: muchun.song, vishal.moola, wangkefeng.wang, sunnanyong
On 6/1/23 6:54 PM, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace five implicit calls to compound_head() with one by using
> pte_folio. However, we still need to keep ptepage because we need to know
> which page in the folio we are copying.
>
> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> ---
> mm/hugetlb.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ea24718db4af..0b774dd3d57b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5017,6 +5017,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> {
> pte_t *src_pte, *dst_pte, entry;
> struct page *ptepage;
> + struct folio *pte_folio;
> unsigned long addr;
> bool cow = is_cow_mapping(src_vma->vm_flags);
> struct hstate *h = hstate_vma(src_vma);
> @@ -5116,7 +5117,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> } else {
> entry = huge_ptep_get(src_pte);
> ptepage = pte_page(entry);
> - get_page(ptepage);
> + pte_folio = page_folio(ptepage);
> + folio_get(pte_folio);
>
> /*
> * Failing to duplicate the anon rmap is a rare case
> @@ -5128,7 +5130,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> * need to be without the pgtable locks since we could
> * sleep during the process.
> */
> - if (!PageAnon(ptepage)) {
> + if (!folio_test_anon(pte_folio)) {
> page_dup_file_rmap(ptepage, true);
> } else if (page_try_dup_anon_rmap(ptepage, true,
> src_vma)) {
> @@ -5140,14 +5142,14 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> /* Do not use reserve as it's private owned */
> new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
> if (IS_ERR(new_folio)) {
> - put_page(ptepage);
> + folio_put(pte_folio);
> ret = PTR_ERR(new_folio);
> break;
> }
> ret = copy_user_large_folio(new_folio,
> - page_folio(ptepage),
> - addr, dst_vma);
> - put_page(ptepage);
> + pte_folio,
> + addr, dst_vma);
> + folio_put(pte_folio);
> if (ret) {
> folio_put(new_folio);
> break;
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 1:54 ` [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp() Peng Zhang
2023-06-02 3:17 ` Muchun Song
@ 2023-06-02 17:47 ` Sidhartha Kumar
2023-06-02 20:17 ` Matthew Wilcox
2 siblings, 0 replies; 11+ messages in thread
From: Sidhartha Kumar @ 2023-06-02 17:47 UTC (permalink / raw)
To: Peng Zhang, linux-mm, linux-kernel, akpm, willy, mike.kravetz
Cc: muchun.song, vishal.moola, wangkefeng.wang, sunnanyong
On 6/1/23 6:54 PM, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace nine implicit calls to compound_head() with one by using
> old_folio. However, we still need to keep old_page because we need to
> know which page in the folio we are copying.
>
> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
> ---
> mm/hugetlb.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0b774dd3d57b..f0ab6e8adf6f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5543,6 +5543,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> pte_t pte = huge_ptep_get(ptep);
> struct hstate *h = hstate_vma(vma);
> struct page *old_page;
> + struct folio *old_folio;
> struct folio *new_folio;
> int outside_reserve = 0;
> vm_fault_t ret = 0;
> @@ -5574,6 +5575,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> }
>
> old_page = pte_page(pte);
> + old_folio = page_folio(old_page);
>
> delayacct_wpcopy_start();
>
> @@ -5582,7 +5584,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> * If no-one else is actually using this page, we're the exclusive
> * owner and can reuse this page.
> */
> - if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
> + if (page_mapcount(old_page) == 1 && folio_test_anon(old_folio)) {
> if (!PageAnonExclusive(old_page))
> page_move_anon_rmap(old_page, vma);
> if (likely(!unshare))
> @@ -5591,8 +5593,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> delayacct_wpcopy_end();
> return 0;
> }
> - VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
> - old_page);
> + VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
> + PageAnonExclusive(old_page), old_page);
>
> /*
> * If the process that created a MAP_PRIVATE mapping is about to
> @@ -5604,10 +5606,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> * of the full address range.
> */
> if (is_vma_resv_set(vma, HPAGE_RESV_OWNER) &&
> - page_folio(old_page) != pagecache_folio)
> + old_folio != pagecache_folio)
> outside_reserve = 1;
>
> - get_page(old_page);
> + folio_get(old_folio);
>
> /*
> * Drop page table lock as buddy allocator may be called. It will
> @@ -5629,7 +5631,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> pgoff_t idx;
> u32 hash;
>
> - put_page(old_page);
> + folio_put(old_folio);
> /*
> * Drop hugetlb_fault_mutex and vma_lock before
> * unmapping. unmapping needs to hold vma_lock
> @@ -5674,7 +5676,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> goto out_release_all;
> }
>
> - if (copy_user_large_folio(new_folio, page_folio(old_page), address, vma)) {
> + if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
> ret = VM_FAULT_HWPOISON_LARGE;
> goto out_release_all;
> }
> @@ -5703,7 +5705,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> set_huge_pte_at(mm, haddr, ptep, newpte);
> folio_set_hugetlb_migratable(new_folio);
> /* Make the old page be freed below */
> - new_folio = page_folio(old_page);
> + new_folio = old_folio;
> }
> spin_unlock(ptl);
> mmu_notifier_invalidate_range_end(&range);
> @@ -5712,11 +5714,11 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> * No restore in case of successful pagetable update (Break COW or
> * unshare)
> */
> - if (new_folio != page_folio(old_page))
> + if (new_folio != old_folio)
> restore_reserve_on_error(h, vma, haddr, new_folio);
> folio_put(new_folio);
> out_release_old:
> - put_page(old_page);
> + folio_put(old_folio);
>
> spin_lock(ptl); /* Caller expects lock to be held */
>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 1:54 ` [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp() Peng Zhang
2023-06-02 3:17 ` Muchun Song
2023-06-02 17:47 ` Sidhartha Kumar
@ 2023-06-02 20:17 ` Matthew Wilcox
2023-06-02 20:52 ` Mike Kravetz
2023-06-05 11:15 ` zhangpeng (AS)
2 siblings, 2 replies; 11+ messages in thread
From: Matthew Wilcox @ 2023-06-02 20:17 UTC (permalink / raw)
To: Peng Zhang
Cc: linux-mm, linux-kernel, akpm, mike.kravetz, muchun.song,
sidhartha.kumar, vishal.moola, wangkefeng.wang, sunnanyong
On Fri, Jun 02, 2023 at 09:54:08AM +0800, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
>
> We can replace nine implicit calls to compound_head() with one by using
> old_folio. However, we still need to keep old_page because we need to
> know which page in the folio we are copying.
Do we? It's my understanding (and I am far from an expert here ...)
that the 'pte_t *' we are passed *inside hugetlbfs* is not in fact a pte
pointer at all but actually a pmd or pud pointer. See how we do this:
pte_t pte = huge_ptep_get(ptep);
and so the page we get back is always a head page, and we can go
directly to a folio. ie this is different from the THP cases.
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 20:17 ` Matthew Wilcox
@ 2023-06-02 20:52 ` Mike Kravetz
2023-06-05 11:15 ` zhangpeng (AS)
2023-06-05 11:15 ` zhangpeng (AS)
1 sibling, 1 reply; 11+ messages in thread
From: Mike Kravetz @ 2023-06-02 20:52 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Peng Zhang, linux-mm, linux-kernel, akpm, muchun.song,
sidhartha.kumar, vishal.moola, wangkefeng.wang, sunnanyong
On 06/02/23 21:17, Matthew Wilcox wrote:
> On Fri, Jun 02, 2023 at 09:54:08AM +0800, Peng Zhang wrote:
> > From: ZhangPeng <zhangpeng362@huawei.com>
> >
> > We can replace nine implicit calls to compound_head() with one by using
> > old_folio. However, we still need to keep old_page because we need to
> > know which page in the folio we are copying.
>
> Do we? It's my understanding (and I am far from an expert here ...)
> that the 'pte_t *' we are passed *inside hugetlbfs* is not in fact a pte
> pointer at all but actually a pmd or pud pointer.
That may not be technically true in some arch-specific cases, such as
arm64 with CONT_PTES and CONT_PMDS.
> See how we do this:
>
> pte_t pte = huge_ptep_get(ptep);
>
> and so the page we get back is always a head page, and we can go
> directly to a folio. ie this is different from the THP cases.
However, it is true that ptep will always be associated with the head
page. This is because the associated virtual address is hugetlb page
aligned.
So, I agree with Matthew that there is no need to keep old_page.
Note that if old_page was NOT the head page, then
copy_user_huge_page(&new_folio->page, old_page, address, vma,
pages_per_huge_page(h));
would write beyond the end of the range, as it assumes old_page is the head page.
--
Mike Kravetz
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 20:17 ` Matthew Wilcox
2023-06-02 20:52 ` Mike Kravetz
@ 2023-06-05 11:15 ` zhangpeng (AS)
1 sibling, 0 replies; 11+ messages in thread
From: zhangpeng (AS) @ 2023-06-05 11:15 UTC (permalink / raw)
To: Matthew Wilcox
Cc: linux-mm, linux-kernel, akpm, mike.kravetz, muchun.song,
sidhartha.kumar, vishal.moola, wangkefeng.wang, sunnanyong
On 2023/6/3 4:17, Matthew Wilcox wrote:
> On Fri, Jun 02, 2023 at 09:54:08AM +0800, Peng Zhang wrote:
>> From: ZhangPeng <zhangpeng362@huawei.com>
>>
>> We can replace nine implicit calls to compound_head() with one by using
>> old_folio. However, we still need to keep old_page because we need to
>> know which page in the folio we are copying.
> Do we? It's my understanding (and I am far from an expert here ...)
> that the 'pte_t *' we are passed *inside hugetlbfs* is not in fact a pte
> pointer at all but actually a pmd or pud pointer. See how we do this:
>
> pte_t pte = huge_ptep_get(ptep);
>
> and so the page we get back is always a head page, and we can go
> directly to a folio. ie this is different from the THP cases.
Yes, I'll remove ptepage and old_page in a v2. Thanks.
Best Regards,
Peng
* Re: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
2023-06-02 20:52 ` Mike Kravetz
@ 2023-06-05 11:15 ` zhangpeng (AS)
0 siblings, 0 replies; 11+ messages in thread
From: zhangpeng (AS) @ 2023-06-05 11:15 UTC (permalink / raw)
To: Mike Kravetz, Matthew Wilcox
Cc: linux-mm, linux-kernel, akpm, muchun.song, sidhartha.kumar,
vishal.moola, wangkefeng.wang, sunnanyong
On 2023/6/3 4:52, Mike Kravetz wrote:
> On 06/02/23 21:17, Matthew Wilcox wrote:
>> On Fri, Jun 02, 2023 at 09:54:08AM +0800, Peng Zhang wrote:
>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>
>>> We can replace nine implicit calls to compound_head() with one by using
>>> old_folio. However, we still need to keep old_page because we need to
>>> know which page in the folio we are copying.
>> Do we? It's my understanding (and I am far from an expert here ...)
>> that the 'pte_t *' we are passed *inside hugetlbfs* is not in fact a pte
>> pointer at all but actually a pmd or pud pointer.
> That may not be technically true in some arch specific cases such as
> arm64 with CONT_PTES and CONT_PMDS.
>
>> See how we do this:
>>
>> pte_t pte = huge_ptep_get(ptep);
>>
>> and so the page we get back is always a head page, and we can go
>> directly to a folio. ie this is different from the THP cases.
> However, it is true that ptep will always be associated with the head
> page. This is because the associated virtual address is hugetlb page
> aligned.
>
> So, I agree with Matthew that there is no need to keep old_page.
> Note that if old_page was NOT the head page, then
>
> copy_user_huge_page(&new_folio->page, old_page, address, vma,
> pages_per_huge_page(h));
>
> would write beyond the end of range as it assumes old_page is head.
Agreed. I'll send a v2 soon. Thanks.
Best Regards,
Peng