* [PATCH v2 0/2] support multi-size THP numa balancing

From: Baolin Wang @ 2024-03-29  6:56 UTC
To: akpm
Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
    ryan.roberts, baolin.wang, linux-mm, linux-kernel

This patchset adds support for mTHP NUMA balancing. As a simple starting
point, the NUMA balancing algorithm for mTHP follows the existing THP
strategy. Please find details in each patch.

Changes from v1:
 - Fix the issue where the end address might exceed the range of the
   folio size, suggested by Huang, Ying.
 - Simplify the folio validation.
 - Add pte_modify() before checking pte writable.
 - Update the performance data.

Changes from RFC v2:
 - Follow the THP algorithm per Huang, Ying.

Changes from RFC v1:
 - Add some performance data per Huang, Ying.
 - Allow mTHP scanning per David Hildenbrand.
 - Avoid sharing mapping for numa balancing to avoid false sharing.
 - Add more commit message.

Baolin Wang (2):
  mm: factor out the numa mapping rebuilding into a new helper
  mm: support multi-size THP numa balancing

 mm/memory.c   | 77 +++++++++++++++++++++++++++++++++++++++++----------
 mm/mprotect.c |  3 +-
 2 files changed, 65 insertions(+), 15 deletions(-)

-- 
2.39.3
* [PATCH v2 1/2] mm: factor out the numa mapping rebuilding into a new helper

From: Baolin Wang @ 2024-03-29  6:56 UTC
To: akpm
Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
    ryan.roberts, baolin.wang, linux-mm, linux-kernel

To support NUMA balancing of large folios, factor out the numa mapping
rebuilding into a new helper as a preparation step.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/memory.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 62ee4a15092a..c30fb4b95e15 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5054,6 +5054,20 @@ int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
 	return mpol_misplaced(folio, vmf, addr);
 }
 
+static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+					bool writable)
+{
+	pte_t pte, old_pte;
+
+	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
+	pte = pte_modify(old_pte, vma->vm_page_prot);
+	pte = pte_mkyoung(pte);
+	if (writable)
+		pte = pte_mkwrite(pte, vma);
+	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
+	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -5159,13 +5173,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * Make it present again, depending on how arch implements
 	 * non-accessible ptes, some can allow access by kernel mode.
 	 */
-	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
-	pte = pte_modify(old_pte, vma->vm_page_prot);
-	pte = pte_mkyoung(pte);
-	if (writable)
-		pte = pte_mkwrite(pte, vma);
-	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
-	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+	numa_rebuild_single_mapping(vmf, vma, writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
-- 
2.39.3
* [PATCH v2 2/2] mm: support multi-size THP numa balancing

From: Baolin Wang @ 2024-03-29  6:56 UTC
To: akpm
Cc: david, mgorman, wangkefeng.wang, jhubbard, ying.huang, 21cnbao,
    ryan.roberts, baolin.wang, linux-mm, linux-kernel

Anonymous page allocation already supports multi-size THP (mTHP), but
NUMA balancing still prohibits mTHP migration even when the mapping is
exclusive, which is unreasonable.

Allow scanning mTHP:
Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
pages") skips NUMA migration of shared CoW pages to avoid migrating shared
data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
NUMA-migrate COW pages that have other uses") changed to use page_count()
to avoid migrating GUP pages, which also skips mTHP NUMA scanning.
Theoretically, we can use folio_maybe_dma_pinned() to detect GUP pages;
although a GUP race remains, that issue appears to have been addressed
since commit 80d47f5de5e3. Meanwhile, use folio_likely_mapped_shared() to
skip shared CoW pages, even though it does not give a precise sharer
count. To check whether a folio is shared, we would ideally make sure
every page is mapped by the same process, but doing that seems expensive,
and the estimated mapcount works well enough when running the autonuma
benchmark.

Allow migrating mTHP:
As mentioned in the previous thread[1], large folios (including THP) are
more susceptible to false-sharing issues among threads than 4K base
pages, leading to pages ping-ponging back and forth during NUMA
balancing, which is currently not easy to resolve. Therefore, as a start
for mTHP NUMA balancing, follow the PMD-mapped THP strategy: reuse the
2-stage filter in should_numa_migrate_memory() to check whether the mTHP
is being heavily contended among threads (by checking the CPU id and pid
of the last access), which avoids false sharing to some degree. Thus, we
restore all PTE mappings upon the first hint page fault of a large folio,
again following the PMD-mapped THP strategy. In the future, we can
continue to optimize the NUMA balancing algorithm to avoid the
false-sharing issue with large folios as much as possible.
Performance data:
Machine environment: 2 nodes, 128 cores Intel(R) Xeon(R) Platinum
Base: 2024-03-25 mm-unstable branch
Enable mTHP to run autonuma-benchmark

mTHP:16K                    Base     Patched
numa01                    224.70      143.48
numa01_THREAD_ALLOC       118.05       47.43
numa02                     13.45        9.29
numa02_SMT                 14.80        7.50

mTHP:64K                    Base     Patched
numa01                    216.15      114.40
numa01_THREAD_ALLOC       115.35       47.41
numa02                     13.24        9.25
numa02_SMT                 14.67        7.34

mTHP:128K                   Base     Patched
numa01                    205.13      144.45
numa01_THREAD_ALLOC       112.93       41.88
numa02                     13.16        9.18
numa02_SMT                 14.81        7.49

[1] https://lore.kernel.org/all/20231117100745.fnpijbk4xgmals3k@techsingularity.net/

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/memory.c   | 57 +++++++++++++++++++++++++++++++++++++++++++--------
 mm/mprotect.c |  3 ++-
 2 files changed, 51 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c30fb4b95e15..2aca19e4fbd8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5068,16 +5068,56 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
 	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
 }
 
+static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
+{
+	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
+	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
+	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
+	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
+	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
+	unsigned long addr;
+
+	/* Restore all PTEs' mapping of the large folio */
+	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
+		pte_t pte, old_pte;
+		pte_t ptent = ptep_get(start_ptep);
+		bool writable = false;
+
+		if (!pte_present(ptent) || !pte_protnone(ptent))
+			continue;
+
+		if (pfn_folio(pte_pfn(ptent)) != folio)
+			continue;
+
+		if (!ignore_writable) {
+			ptent = pte_modify(ptent, vma->vm_page_prot);
+			writable = pte_write(ptent);
+			if (!writable && pte_write_upgrade &&
+			    can_change_pte_writable(vma, addr, ptent))
+				writable = true;
+		}
+
+		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
+		pte = pte_modify(old_pte, vma->vm_page_prot);
+		pte = pte_mkyoung(pte);
+		if (writable)
+			pte = pte_mkwrite(pte, vma);
+		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
+		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);
+	}
+}
+
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio = NULL;
 	int nid = NUMA_NO_NODE;
-	bool writable = false;
+	bool writable = false, ignore_writable = false;
 	int last_cpupid;
 	int target_nid;
 	pte_t pte, old_pte;
-	int flags = 0;
+	int flags = 0, nr_pages;
 
 	/*
 	 * The pte cannot be used safely until we verify, while holding the page
@@ -5107,10 +5147,6 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
-	/* TODO: handle PTE-mapped THP */
-	if (folio_test_large(folio))
-		goto out_map;
-
 	/*
 	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
 	 * much anyway since they can be in shared cache state. This misses
@@ -5130,6 +5166,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_SHARED;
 
 	nid = folio_nid(folio);
+	nr_pages = folio_nr_pages(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
@@ -5146,6 +5183,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	writable = false;
+	ignore_writable = true;
 
 	/* Migrate to the requested node */
 	if (migrate_misplaced_folio(folio, vma, target_nid)) {
@@ -5166,14 +5204,17 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 out:
 	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, 1, flags);
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
 	return 0;
 out_map:
 	/*
 	 * Make it present again, depending on how arch implements
 	 * non-accessible ptes, some can allow access by kernel mode.
 	 */
-	numa_rebuild_single_mapping(vmf, vma, writable);
+	if (folio && folio_test_large(folio))
+		numa_rebuild_large_mapping(vmf, vma, folio, pte, ignore_writable);
+	else
+		numa_rebuild_single_mapping(vmf, vma, writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	goto out;
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f8a4544b4601..94878c39ee32 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -129,7 +129,8 @@ static long change_pte_range(struct mmu_gather *tlb,
 
 		/* Also skip shared copy-on-write pages */
 		if (is_cow_mapping(vma->vm_flags) &&
-		    folio_ref_count(folio) != 1)
+		    (folio_maybe_dma_pinned(folio) ||
+		     folio_likely_mapped_shared(folio)))
 			continue;
 
 		/*
-- 
2.39.3
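Aside on the address arithmetic at the top of numa_rebuild_large_mapping(): the following standalone sketch (ordinary userspace C, not kernel code; all addresses and sizes are made-up example values) illustrates how the fault address, the faulting page's offset within the folio, and the VMA bounds determine the PTE range whose mappings are restored.

/*
 * Standalone illustration (not kernel code) of the range clamping in
 * numa_rebuild_large_mapping(): the restore range starts at the first
 * page of the folio's mapping and spans folio_nr_pages pages, clamped
 * to [vm_start, vm_end) in case the folio is mapped across a VMA edge.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static void restore_range(unsigned long fault_addr, unsigned long page_idx,
			  unsigned long folio_nr_pages,
			  unsigned long vm_start, unsigned long vm_end)
{
	/* page_idx plays the role of nr = pte_pfn(fault_pte) - folio_pfn(folio) */
	unsigned long start = fault_addr - page_idx * PAGE_SIZE;
	unsigned long end = fault_addr + (folio_nr_pages - page_idx) * PAGE_SIZE;

	if (start < vm_start)
		start = vm_start;	/* max(..., vma->vm_start) */
	if (end > vm_end)
		end = vm_end;		/* min(..., vma->vm_end) */

	printf("restore PTEs for [%#lx, %#lx), %lu pages\n",
	       start, end, (end - start) / PAGE_SIZE);
}

int main(void)
{
	/* 64K mTHP (16 pages), fault on its 3rd page, folio fully inside the VMA */
	restore_range(0x7f0000012000UL, 2, 16, 0x7f0000000000UL, 0x7f0000100000UL);

	/* same folio, but the VMA ends halfway through it, so the range is clamped */
	restore_range(0x7f0000012000UL, 2, 16, 0x7f0000000000UL, 0x7f0000018000UL);
	return 0;
}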
* Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing

From: Huang, Ying @ 2024-04-01  2:50 UTC
To: Baolin Wang
Cc: akpm, david, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
    ryan.roberts, linux-mm, linux-kernel

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

[snip]

> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +				       struct folio *folio, pte_t fault_pte, bool ignore_writable)
> +{
> +	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
> +	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
> +	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);

We call vma_wants_manual_pte_write_upgrade() in do_numa_page() already.
It seems that we can make "ignore_writable = true" if
"vma_wants_manual_pte_write_upgrade() == false" in do_numa_page() to
remove one call.

Otherwise, the patchset LGTM, feel free to add

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

in the future versions.

--
Best Regards,
Huang, Ying

[snip]
* Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing

From: Baolin Wang @ 2024-04-01  9:43 UTC
To: Huang, Ying
Cc: akpm, david, mgorman, wangkefeng.wang, jhubbard, 21cnbao,
    ryan.roberts, linux-mm, linux-kernel

On 2024/4/1 10:50, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>
[snip]
>
>> +	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
>> +	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
>
> We call vma_wants_manual_pte_write_upgrade() in do_numa_page() already.
> It seems that we can make "ignore_writable = true" if
> "vma_wants_manual_pte_write_upgrade() == false" in do_numa_page() to
> remove one call.

From the original logic, we should also call pte_mkwrite() for the new
mapping if pte_write() is true while vma_wants_manual_pte_write_upgrade()
is false. But I can add a new boolean parameter to
numa_rebuild_large_mapping() to remove the duplicate call.

> Otherwise, the patchset LGTM, feel free to add
>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>
> in the future versions.

Thanks for your valuable input!
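One plausible shape of the follow-up Baolin describes, shown purely as an illustrative sketch (the extra parameter name and ordering are assumptions here, not the posted next version): do_numa_page() keeps the result of its existing vma_wants_manual_pte_write_upgrade() call in a local variable and passes it to the helper, which then drops its own call.

	/* Illustrative sketch only -- not the actual follow-up patch. */
	static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
					       struct folio *folio, pte_t fault_pte,
					       bool ignore_writable, bool pte_write_upgrade);

	/* In do_numa_page(), reuse the value already computed for the writable check: */
	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
	...
	if (folio && folio_test_large(folio))
		numa_rebuild_large_mapping(vmf, vma, folio, pte, ignore_writable,
					   pte_write_upgrade);
	else
		numa_rebuild_single_mapping(vmf, vma, writable);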
* Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing

From: Kefeng Wang @ 2024-04-01  3:47 UTC
To: Baolin Wang, akpm
Cc: david, mgorman, jhubbard, ying.huang, 21cnbao, ryan.roberts,
    linux-mm, linux-kernel

On 2024/3/29 14:56, Baolin Wang wrote:
[snip]
> +		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
> +		pte = pte_modify(old_pte, vma->vm_page_prot);
> +		pte = pte_mkyoung(pte);
> +		if (writable)
> +			pte = pte_mkwrite(pte, vma);
> +		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
> +		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);

Maybe pass "unsigned long address, pte_t *ptep" to
numa_rebuild_single_mapping(), then just call it here.

[snip]
> out_map:
> 	/*
> 	 * Make it present again, depending on how arch implements
> 	 * non-accessible ptes, some can allow access by kernel mode.
> 	 */
> -	numa_rebuild_single_mapping(vmf, vma, writable);
> +	if (folio && folio_test_large(folio))

Initialize nr_pages earlier and then use "if (nr_pages > 1)" here.

> +		numa_rebuild_large_mapping(vmf, vma, folio, pte, ignore_writable);
> +	else
> +		numa_rebuild_single_mapping(vmf, vma, writable);
* Re: [PATCH v2 2/2] mm: support multi-size THP numa balancing

From: Baolin Wang @ 2024-04-01  9:47 UTC
To: Kefeng Wang, akpm
Cc: david, mgorman, jhubbard, ying.huang, 21cnbao, ryan.roberts,
    linux-mm, linux-kernel

On 2024/4/1 11:47, Kefeng Wang wrote:
> On 2024/3/29 14:56, Baolin Wang wrote:
[snip]
>> +		old_pte = ptep_modify_prot_start(vma, addr, start_ptep);
>> +		pte = pte_modify(old_pte, vma->vm_page_prot);
>> +		pte = pte_mkyoung(pte);
>> +		if (writable)
>> +			pte = pte_mkwrite(pte, vma);
>> +		ptep_modify_prot_commit(vma, addr, start_ptep, old_pte, pte);
>> +		update_mmu_cache_range(vmf, vma, addr, start_ptep, 1);
>
> Maybe pass "unsigned long address, pte_t *ptep" to
> numa_rebuild_single_mapping(), then just call it here.

Yes, sounds reasonable. Will do in the next version.

[snip]
>> -	numa_rebuild_single_mapping(vmf, vma, writable);
>> +	if (folio && folio_test_large(folio))
>
> Initialize nr_pages earlier and then use "if (nr_pages > 1)" here.

Umm, IMO, folio_test_large() is more readable to me.
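For reference, one way the rework agreed above could look — an illustrative sketch only (the exact signature and argument names are assumptions, not the actual next version): numa_rebuild_single_mapping() takes the address and PTE pointer explicitly, so the restore loop in numa_rebuild_large_mapping() can call it instead of open-coding the same sequence.

	/* Illustrative sketch only -- not the posted next version. */
	static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
						unsigned long fault_addr, pte_t *fault_pte,
						bool writable)
	{
		pte_t pte, old_pte;

		old_pte = ptep_modify_prot_start(vma, fault_addr, fault_pte);
		pte = pte_modify(old_pte, vma->vm_page_prot);
		pte = pte_mkyoung(pte);
		if (writable)
			pte = pte_mkwrite(pte, vma);
		ptep_modify_prot_commit(vma, fault_addr, fault_pte, old_pte, pte);
		update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
	}

	/* ...and the tail of the restore loop in numa_rebuild_large_mapping() becomes: */
		numa_rebuild_single_mapping(vmf, vma, addr, start_ptep, writable);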
Thread overview: 7+ messages

2024-03-29  6:56 [PATCH v2 0/2] support multi-size THP numa balancing  Baolin Wang
2024-03-29  6:56 ` [PATCH v2 1/2] mm: factor out the numa mapping rebuilding into a new helper  Baolin Wang
2024-03-29  6:56 ` [PATCH v2 2/2] mm: support multi-size THP numa balancing  Baolin Wang
2024-04-01  2:50   ` Huang, Ying
2024-04-01  9:43     ` Baolin Wang
2024-04-01  3:47   ` Kefeng Wang
2024-04-01  9:47     ` Baolin Wang