* [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
From: Baolin Wang @ 2023-10-30 1:11 UTC
To: akpm; +Cc: shy828301, ying.huang, baolin.wang, linux-mm, linux-kernel
I can observe an obvious tlb flush hotpot when splitting a pte-mapped THP on
my ARM64 server, and the distribution of this hotspot is as follows:
- 16.85% split_huge_page_to_list
   + 7.80% down_write
   - 7.49% try_to_migrate
      - 7.48% rmap_walk_anon
           7.23% ptep_clear_flush
   + 1.52% __split_huge_page
The reason is that split_huge_page_to_list() builds migration entries for
each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
each subpage of a file-backed THP, and it clears and flushes the tlb for
each subpage's pte individually. Moreover, split_huge_page_to_list() sets
the TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before
splitting it into normal pages.

Actually, there is no need to flush the tlb for each subpage immediately;
instead, we can batch the tlb flush for the whole pte-mapped THP to improve
performance.
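To make the effect concrete, here is a minimal userspace sketch (purely
illustrative, not kernel code; the helper names and the 512-subpage figure
for a 2MB THP with 4KB base pages are assumptions) contrasting a flush per
subpage with one deferred, batched flush:

/*
 * Illustrative userspace model only -- not kernel code.  A 2MB THP
 * with 4KB base pages has 512 subpages.  Without TTU_BATCH_FLUSH,
 * each cleared pte is followed by its own tlb flush; with it, the
 * flush is deferred and issued once at the end.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBPAGES 512			/* 2MB THP / 4KB base pages */

static unsigned long nr_flushes;	/* simulated flush operations */
static bool flush_pending;		/* deferred-flush state */

static void tlb_flush(void)
{
	nr_flushes++;
	flush_pending = false;
}

static void clear_pte_immediate(void)	/* models ptep_clear_flush() */
{
	tlb_flush();
}

static void clear_pte_batched(void)	/* models the batched path */
{
	flush_pending = true;
}

int main(void)
{
	int i;

	for (i = 0; i < NR_SUBPAGES; i++)
		clear_pte_immediate();
	printf("immediate: %lu flushes\n", nr_flushes);	/* 512 */

	nr_flushes = 0;
	for (i = 0; i < NR_SUBPAGES; i++)
		clear_pte_batched();
	if (flush_pending)		/* models try_to_unmap_flush() */
		tlb_flush();
	printf("batched:   %lu flushes\n", nr_flushes);	/* 1 */

	return 0;
}

The per-subpage flushes are what show up as the ptep_clear_flush hotspot
in the profile above.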
After this patch, we can see that the batched tlb flush improves the
latency noticeably when running thpscale:
                                  k6.5-base               patched
Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
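(For reference, the values are mean fault latencies as reported by
thpscale, and the right-hand percentage is the relative reduction against
the base kernel, e.g. (1071.17 - 901.83) / 1071.17 ~= 15.81% for the
single-thread case.)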
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/huge_memory.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f31f02472396..0e4c14bf6872 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2379,7 +2379,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 static void unmap_folio(struct folio *folio)
 {
 	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-					TTU_SYNC;
+					TTU_SYNC | TTU_BATCH_FLUSH;
 
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
@@ -2392,6 +2392,8 @@ static void unmap_folio(struct folio *folio)
 		try_to_migrate(folio, ttu_flags);
 	else
 		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
+
+	try_to_unmap_flush();
 }
 
 static void remap_page(struct folio *folio, unsigned long nr)
--
2.39.3
* Re: [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
From: Huang, Ying @ 2023-10-30 1:53 UTC
To: Baolin Wang; +Cc: akpm, shy828301, linux-mm, linux-kernel
Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> I can observe an obvious tlb flush hotpot when splitting a pte-mapped THP on
> my ARM64 server, and the distribution of this hotspot is as follows:
>
> - 16.85% split_huge_page_to_list
>    + 7.80% down_write
>    - 7.49% try_to_migrate
>       - 7.48% rmap_walk_anon
>            7.23% ptep_clear_flush
>    + 1.52% __split_huge_page
>
> The reason is that split_huge_page_to_list() builds migration entries for
> each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
> each subpage of a file-backed THP, and it clears and flushes the tlb for
> each subpage's pte individually. Moreover, split_huge_page_to_list() sets
> the TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before
> splitting it into normal pages.
>
> Actually, there is no need to flush the tlb for each subpage immediately;
> instead, we can batch the tlb flush for the whole pte-mapped THP to improve
> performance.
>
> After this patch, we can see that the batched tlb flush improves the
> latency noticeably when running thpscale:
>                                   k6.5-base               patched
> Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
> Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
> Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
> Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
> Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
> Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
> Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
> Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
> Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
LGTM, Thanks!
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> ---
> mm/huge_memory.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f31f02472396..0e4c14bf6872 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2379,7 +2379,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>  static void unmap_folio(struct folio *folio)
>  {
>  	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> -					TTU_SYNC;
> +					TTU_SYNC | TTU_BATCH_FLUSH;
>  
>  	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>  
> @@ -2392,6 +2392,8 @@ static void unmap_folio(struct folio *folio)
>  		try_to_migrate(folio, ttu_flags);
>  	else
>  		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
> +
> +	try_to_unmap_flush();
>  }
>  
>  static void remap_page(struct folio *folio, unsigned long nr)
* Re: [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
From: Yang Shi @ 2023-10-31 18:26 UTC
To: Baolin Wang; +Cc: akpm, ying.huang, linux-mm, linux-kernel
On Sun, Oct 29, 2023 at 6:12 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
> I can observe an obvious tlb flush hotpot when splitting a pte-mapped THP on
> my ARM64 server, and the distribution of this hotspot is as follows:
>
> - 16.85% split_huge_page_to_list
>    + 7.80% down_write
>    - 7.49% try_to_migrate
>       - 7.48% rmap_walk_anon
>            7.23% ptep_clear_flush
>    + 1.52% __split_huge_page
>
> The reason is that split_huge_page_to_list() builds migration entries for
> each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
> each subpage of a file-backed THP, and it clears and flushes the tlb for
> each subpage's pte individually. Moreover, split_huge_page_to_list() sets
> the TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before
> splitting it into normal pages.
>
> Actually, there is no need to flush the tlb for each subpage immediately;
> instead, we can batch the tlb flush for the whole pte-mapped THP to improve
> performance.
>
> After this patch, we can see that the batched tlb flush improves the
> latency noticeably when running thpscale:
>                                   k6.5-base               patched
> Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
> Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
> Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
> Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
> Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
> Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
> Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
> Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
> Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
> ---
> mm/huge_memory.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f31f02472396..0e4c14bf6872 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2379,7 +2379,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>  static void unmap_folio(struct folio *folio)
>  {
>  	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> -					TTU_SYNC;
> +					TTU_SYNC | TTU_BATCH_FLUSH;
>  
>  	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>  
> @@ -2392,6 +2392,8 @@ static void unmap_folio(struct folio *folio)
>  		try_to_migrate(folio, ttu_flags);
>  	else
>  		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
> +
> +	try_to_unmap_flush();
>  }
>  
>  static void remap_page(struct folio *folio, unsigned long nr)
> --
> 2.39.3
>
* Re: [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
From: Alistair Popple @ 2023-11-01 6:13 UTC
To: Baolin Wang; +Cc: akpm, shy828301, ying.huang, linux-mm, linux-kernel
Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> I can observe an obvious tlb flush hotpot when splitting a pte-mapped THP on
A tlb flush hotpot does sound delicious, but I think you meant hotspot :-)
> my ARM64 server, and the distribution of this hotspot is as follows:
>
> - 16.85% split_huge_page_to_list
>    + 7.80% down_write
>    - 7.49% try_to_migrate
>       - 7.48% rmap_walk_anon
>            7.23% ptep_clear_flush
>    + 1.52% __split_huge_page
>
> The reason is that split_huge_page_to_list() builds migration entries for
> each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
> each subpage of a file-backed THP, and it clears and flushes the tlb for
> each subpage's pte individually. Moreover, split_huge_page_to_list() sets
> the TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before
> splitting it into normal pages.
The only other user of TTU_SPLIT_HUGE_PMD is vmscan, which also sets
TTU_BATCH_FLUSH, so we could make the former imply the latter. But that
seems dangerous given the requirement to call try_to_unmap_flush()
afterwards, so best not to.
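For context, a hypothetical sketch of what that implication could look
like (illustrative only: the flag values are made up and this is not
proposed kernel code):

/* Hypothetical, illustrative flag values -- not the kernel's. */
#include <stdio.h>

#define TTU_SPLIT_HUGE_PMD	0x4
#define TTU_BATCH_FLUSH		0x40

static unsigned int apply_implication(unsigned int flags)
{
	/* Splitting a huge PMD would always imply a batched flush... */
	if (flags & TTU_SPLIT_HUGE_PMD)
		flags |= TTU_BATCH_FLUSH;
	return flags;
}

int main(void)
{
	unsigned int flags = apply_implication(TTU_SPLIT_HUGE_PMD);

	/*
	 * ...but every caller would then silently depend on a later
	 * try_to_unmap_flush(); forgetting it would leave stale tlb
	 * entries behind, which is why the explicit flag is safer.
	 */
	if (flags & TTU_BATCH_FLUSH)
		printf("caller must ensure try_to_unmap_flush() runs\n");
	return 0;
}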
Reviewed-by: Alistair Popple <apopple@nvidia.com>
> Actually, there is no need to flush the tlb for each subpage immediately;
> instead, we can batch the tlb flush for the whole pte-mapped THP to improve
> performance.
>
> After this patch, we can see that the batched tlb flush improves the
> latency noticeably when running thpscale:
>                                   k6.5-base               patched
> Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
> Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
> Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
> Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
> Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
> Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
> Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
> Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
> Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/huge_memory.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f31f02472396..0e4c14bf6872 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2379,7 +2379,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
>  static void unmap_folio(struct folio *folio)
>  {
>  	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> -					TTU_SYNC;
> +					TTU_SYNC | TTU_BATCH_FLUSH;
>  
>  	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>  
> @@ -2392,6 +2392,8 @@ static void unmap_folio(struct folio *folio)
>  		try_to_migrate(folio, ttu_flags);
>  	else
>  		try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);
> +
> +	try_to_unmap_flush();
>  }
>  
>  static void remap_page(struct folio *folio, unsigned long nr)
* Re: [PATCH] mm: huge_memory: batch tlb flush when splitting a pte-mapped THP
From: Baolin Wang @ 2023-11-02 7:10 UTC
To: Alistair Popple; +Cc: akpm, shy828301, ying.huang, linux-mm, linux-kernel
On 11/1/2023 2:13 PM, Alistair Popple wrote:
>
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>
>> I can observe an obvious tlb flush hotpot when splitting a pte-mapped THP on
>
> A tlb flush hotpot does sound delicious, but I think you meant hotspot :-)
Ah, yes. Hope Andrew can help to fix it :)
>> my ARM64 server, and the distribution of this hotspot is as follows:
>>
>> - 16.85% split_huge_page_to_list
>>    + 7.80% down_write
>>    - 7.49% try_to_migrate
>>       - 7.48% rmap_walk_anon
>>            7.23% ptep_clear_flush
>>    + 1.52% __split_huge_page
>>
>> The reason is that split_huge_page_to_list() builds migration entries for
>> each subpage of a pte-mapped anonymous THP via try_to_migrate(), or unmaps
>> each subpage of a file-backed THP, and it clears and flushes the tlb for
>> each subpage's pte individually. Moreover, split_huge_page_to_list() sets
>> the TTU_SPLIT_HUGE_PMD flag to ensure the THP is already pte-mapped before
>> splitting it into normal pages.
>
> The only other user of TTU_SPLIT_HUGE_PMD is vmscan, which also sets
> TTU_BATCH_FLUSH, so we could make the former imply the latter. But that
> seems dangerous given the requirement to call try_to_unmap_flush()
> afterwards, so best not to.
>
> Reviewed-by: Alistair Popple <apopple@nvidia.com>
Thanks for reviewing, and also thanks to Ying and Yang.