* [PATCH v6 0/2] Do not shatter hugezeropage on wp-fault
From: Dev Jain @ 2024-09-30 5:28 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-arm-kernel, linux-kernel, linux-mm,
Dev Jain
It was observed at [1] and [2] that the current kernel behaviour of
shattering a hugezeropage is inconsistent and suboptimal. For a VMA with
a THP-allowable order, a write fault installs a PMD-mapped THP. If, on
the other hand, the first fault is a read fault, we get a PMD pointing
to the hugezeropage; a subsequent write then triggers a write-protection
fault, shattering the hugezeropage into one writable page with all the
other PTEs write-protected. The upshot is that, compared to the single
write-fault case, an application using the VMA this way suffers 512
extra page faults, plus the overhead of khugepaged trying to replace
that area with a THP anyway. Instead, replace the hugezeropage with a
THP on wp-fault.
[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/
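For illustration, a minimal userspace sketch of the read-then-write
pattern described above (editorial, not part of the series). It assumes
THP and the huge zeropage are enabled, a 4K base page size with 2MiB PMD
mappings, and a kernel that PMD-aligns large anonymous mappings; error
handling is mostly omitted.

	#define _GNU_SOURCE
	#include <sys/mman.h>

	#define SZ (2UL << 20)	/* one PMD-sized region, assuming 2MiB THP */

	int main(void)
	{
		char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		madvise(p, SZ, MADV_HUGEPAGE);	/* make the VMA THP-eligible */

		volatile char c = *p;	/* read fault: PMD maps the huge zeropage */
		(void)c;
		*p = 1;			/* wp-fault: currently shatters into PTEs
					 * (511 more faults to touch the rest);
					 * with this series, installs a THP */
		return 0;
	}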
The patchset has been rebased on the mm-unstable branch.
v5->v6:
- More goto omissions; remove a build warning for !CONFIG_NUMA
v4->v5:
- Directly return VM_FAULT_FALLBACK in case of !folio
v3->v4:
- Renames: pmd_thp_fault_alloc -> vma_alloc_anon_folio_pmd,
map_pmd_thp -> map_anon_folio_pmd
- Instead of passing haddr around, compute it where needed; do the
  same for the gfp flags
- Pass haddr to update_mmu_cache_pmd() instead of unaligned address
- Do not pass vmf to map_anon_folio_pmd
- Do declarations in reverse xmas tree order
- Drop a new line which was introduced accidentally
- Call __pmd_thp_fault_success_stats from map_anon_folio_pmd
- Correctly return NULL from vma_alloc_anon_folio_pmd
- Initialize pgtable to NULL in __do_huge_pmd_anonymous_page, to avoid
  freeing a pgtable that was never allocated
- Drop if conditions from map_anon_folio_pmd, let the caller handle that
v2->v3:
- Drop foliop and order parameters, prefix the thp functions with pmd_
- First allocate THP, then pgtable, not vice-versa
- Move pgtable_trans_huge_deposit() from map_pmd_thp() to caller
- Drop exposing functions in include/linux/huge_mm.h
- Open code do_huge_zero_wp_pmd_locked()
- Release the folio if the pmd changed after taking the lock, or if
  check_stable_address_space() returns VM_FAULT_SIGBUS
- Drop uffd-wp preservation. Looking at page_table_check_pmd_flags(),
  preserving uffd-wp on a writable entry is invalid. Looking at
  mfill_atomic(), uffd_copy() is a no-op when the pmd is marked
  uffd-wp.
v1->v2:
- Wrap the pmd lock and unlock around do_huge_zero_wp_pmd_locked()
- Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
  calling a sleeping function from spinlock context
Dev Jain (2):
mm: Abstract THP allocation
mm: Allocate THP on hugezeropage wp-fault
mm/huge_memory.c | 139 +++++++++++++++++++++++++++++++++--------------
1 file changed, 97 insertions(+), 42 deletions(-)
--
2.30.2
* [PATCH v6 1/2] mm: Abstract THP allocation
From: Dev Jain @ 2024-09-30 5:28 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-arm-kernel, linux-kernel, linux-mm,
Dev Jain
In preparation for the second patch, abstract away the THP allocation
logic present in the create_huge_pmd() path, which corresponds to the
faulting case when no page is present.
There should be no functional change as a result of applying this patch,
except that, as David notes at [1], a PMD-aligned address should
be passed to update_mmu_cache_pmd().
[1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
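For orientation, a rough sketch of the resulting call structure,
reconstructed from the diff below (the sketch itself is editorial, not
part of the patch):

	do_huge_pmd_anonymous_page()
	  __do_huge_pmd_anonymous_page()
	    vma_alloc_anon_folio_pmd()	/* allocate, memcg-charge and zero
					 * the folio, mark it uptodate;
					 * returns NULL and bumps the
					 * fallback counters on failure */
	    pte_alloc_one()		/* pgtable allocated after the folio */
	    pmd_lock(); pmd_none()?; check_stable_address_space()?
	    pgtable_trans_huge_deposit()
	    map_anon_folio_pmd()	/* rmap + LRU, set_pmd_at(), then
					 * update_mmu_cache_pmd() with the
					 * PMD-aligned haddr; bumps the
					 * alloc counters */

Both helpers are reused by patch 2.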
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
1 file changed, 57 insertions(+), 41 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4e34b7f89daf..e3bcdbc9baa2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
-static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
- struct page *page, gfp_t gfp)
+static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
+ unsigned long addr)
{
- struct vm_area_struct *vma = vmf->vma;
- struct folio *folio = page_folio(page);
- pgtable_t pgtable;
- unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
- vm_fault_t ret = 0;
+ gfp_t gfp = vma_thp_gfp_mask(vma);
+ const int order = HPAGE_PMD_ORDER;
+ struct folio *folio;
- VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+ folio = vma_alloc_folio(gfp, order, vma, addr & HPAGE_PMD_MASK, true);
+ if (unlikely(!folio)) {
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ return NULL;
+ }
+
+ VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
folio_put(folio);
count_vm_event(THP_FAULT_FALLBACK);
count_vm_event(THP_FAULT_FALLBACK_CHARGE);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
- return VM_FAULT_FALLBACK;
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+ return NULL;
}
folio_throttle_swaprate(folio, gfp);
- pgtable = pte_alloc_one(vma->vm_mm);
- if (unlikely(!pgtable)) {
- ret = VM_FAULT_OOM;
- goto release;
- }
-
- folio_zero_user(folio, vmf->address);
+ folio_zero_user(folio, addr);
/*
* The memory barrier inside __folio_mark_uptodate makes sure that
* folio_zero_user writes become visible before the set_pmd_at()
* write.
*/
__folio_mark_uptodate(folio);
+ return folio;
+}
+
+static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
+ struct vm_area_struct *vma, unsigned long haddr)
+{
+ pmd_t entry;
+
+ entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+ folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
+ folio_add_lru_vma(folio, vma);
+ set_pmd_at(vma->vm_mm, haddr, pmd, entry);
+ update_mmu_cache_pmd(vma, haddr, pmd);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ count_vm_event(THP_FAULT_ALLOC);
+ count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+}
+
+static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
+{
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = vmf->vma;
+ struct folio *folio;
+ pgtable_t pgtable;
+ vm_fault_t ret = 0;
+
+ folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
+ if (unlikely(!folio))
+ return VM_FAULT_FALLBACK;
+
+ pgtable = pte_alloc_one(vma->vm_mm);
+ if (unlikely(!pgtable)) {
+ ret = VM_FAULT_OOM;
+ goto release;
+ }
vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
if (unlikely(!pmd_none(*vmf->pmd))) {
goto unlock_release;
} else {
- pmd_t entry;
-
ret = check_stable_address_space(vma->vm_mm);
if (ret)
goto unlock_release;
@@ -1202,21 +1236,11 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
VM_BUG_ON(ret & VM_FAULT_FALLBACK);
return ret;
}
-
- entry = mk_huge_pmd(page, vma->vm_page_prot);
- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
- folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
- folio_add_lru_vma(folio, vma);
pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
- set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
- update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
- add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
mm_inc_nr_ptes(vma->vm_mm);
deferred_split_folio(folio, false);
spin_unlock(vmf->ptl);
- count_vm_event(THP_FAULT_ALLOC);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
- count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
}
return 0;
@@ -1283,8 +1307,6 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- gfp_t gfp;
- struct folio *folio;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
vm_fault_t ret;
@@ -1335,14 +1357,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
}
return ret;
}
- gfp = vma_thp_gfp_mask(vma);
- folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
- if (unlikely(!folio)) {
- count_vm_event(THP_FAULT_FALLBACK);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
- return VM_FAULT_FALLBACK;
- }
- return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
+
+ return __do_huge_pmd_anonymous_page(vmf);
}
static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
--
2.30.2
* [PATCH v6 2/2] mm: Allocate THP on hugezeropage wp-fault
From: Dev Jain @ 2024-09-30 5:28 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-arm-kernel, linux-kernel, linux-mm,
Dev Jain
Introduce do_huge_zero_wp_pmd() to handle a wp-fault on the hugezeropage
by replacing it with a PMD-mapped THP. Remember to flush the TLB entry
corresponding to the hugezeropage. In case of failure, fall back
to splitting the PMD.
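For orientation, the new wp-fault path as reconstructed from the diff
below (editorial sketch, not part of the patch):

	do_huge_pmd_wp_page()
	  do_huge_zero_wp_pmd()		/* only if is_huge_zero_pmd() */
	    vma_alloc_anon_folio_pmd()	/* may sleep, so called before
					 * taking the PMD lock */
	    mmu_notifier_invalidate_range_start()  /* MMU_NOTIFY_CLEAR */
	    pmd_lock(); pmd_same()?; check_stable_address_space()?
	    pmdp_huge_clear_flush()	/* clear the zeropage PMD and
					 * flush its TLB entry */
	    map_anon_folio_pmd()
	  /* on VM_FAULT_FALLBACK, fall back to splitting the PMD */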
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/huge_memory.c | 41 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 40 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e3bcdbc9baa2..4677ed76953c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1790,6 +1790,38 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
spin_unlock(vmf->ptl);
}
+static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
+{
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = vmf->vma;
+ struct mmu_notifier_range range;
+ struct folio *folio;
+ vm_fault_t ret = 0;
+
+ folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
+ if (unlikely(!folio))
+ return VM_FAULT_FALLBACK;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm, haddr,
+ haddr + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+ vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+ if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
+ goto release;
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto release;
+ (void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
+ map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
+ goto unlock;
+release:
+ folio_put(folio);
+unlock:
+ spin_unlock(vmf->ptl);
+ mmu_notifier_invalidate_range_end(&range);
+ return ret;
+}
+
vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
{
const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
@@ -1802,8 +1834,15 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
VM_BUG_ON_VMA(!vma->anon_vma, vma);
- if (is_huge_zero_pmd(orig_pmd))
+ if (is_huge_zero_pmd(orig_pmd)) {
+ vm_fault_t ret = do_huge_zero_wp_pmd(vmf);
+
+ if (!(ret & VM_FAULT_FALLBACK))
+ return ret;
+
+ /* Fallback to splitting PMD if THP cannot be allocated */
goto fallback;
+ }
spin_lock(vmf->ptl);
--
2.30.2
* Re: [PATCH v6 1/2] mm: Abstract THP allocation
From: David Hildenbrand @ 2024-09-30 8:45 UTC
To: Dev Jain, akpm, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-arm-kernel, linux-kernel, linux-mm
On 30.09.24 07:28, Dev Jain wrote:
> In preparation for the second patch, abstract away the THP allocation
> logic present in the create_huge_pmd() path, which corresponds to the
> faulting case when no page is present.
>
> There should be no functional change as a result of applying this patch,
> except that, as David notes at [1], a PMD-aligned address should
> be passed to update_mmu_cache_pmd().
>
> [1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
> 1 file changed, 57 insertions(+), 41 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4e34b7f89daf..e3bcdbc9baa2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>
> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> - struct page *page, gfp_t gfp)
> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> + unsigned long addr)
Just a nit as I am skimming over this once more:
We try to make any new code / code we touch use a 2-tab
indentation for the second parameter line.
E.g.,
static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
unsigned long addr)
{
--
Cheers,
David / dhildenb
* Re: [PATCH v6 1/2] mm: Abstract THP allocation
From: Dev Jain @ 2024-10-07 4:38 UTC
To: David Hildenbrand, akpm, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-arm-kernel, linux-kernel, linux-mm
On 9/30/24 14:15, David Hildenbrand wrote:
> On 30.09.24 07:28, Dev Jain wrote:
>> In preparation for the second patch, abstract away the THP allocation
>> logic present in the create_huge_pmd() path, which corresponds to the
>> faulting case when no page is present.
>>
>> There should be no functional change as a result of applying this patch,
>> except that, as David notes at [1], a PMD-aligned address should
>> be passed to update_mmu_cache_pmd().
>>
>> [1]:
>> https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
>> 1 file changed, 57 insertions(+), 41 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 4e34b7f89daf..e3bcdbc9baa2 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct
>> file *filp, unsigned long addr,
>> }
>> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>> - struct page *page, gfp_t gfp)
>> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct
>> *vma,
>> + unsigned long addr)
>
> Just a nit as I am skimming over this once more:
>
> We try to make any new code / code we touch use a 2-tab
> indentation for the second parameter line.
>
> E.g.,
>
> static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> unsigned long addr)
> {
Ah sorry, didn't know about this. I used to align it with the function
parameter opening bracket.
>
>