* Re: [PATCH v2 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
From: Andrew Morton @ 2023-09-22  0:41 UTC
To: Jiexun Wang
Cc: brauner, falcon, linux-kernel, linux-mm, tangjinyu

On Thu, 21 Sep 2023 20:27:51 +0800 Jiexun Wang <wangjiexun@tinylab.org> wrote:

> Currently the madvise_cold_or_pageout_pte_range() function exhibits
> significant latency under memory pressure, which can be effectively
> reduced by adding cond_resched() within the loop.
>
> When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
> the task to ensure fairness and avoid long lock holding times.
>
> ...
>
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -354,6 +354,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  	struct folio *folio = NULL;
>  	LIST_HEAD(folio_list);
>  	bool pageout_anon_only_filter;
> +	unsigned int batch_count = 0;
>
>  	if (fatal_signal_pending(current))
>  		return -EINTR;
> @@ -433,6 +434,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  regular_folio:
>  #endif
>  	tlb_change_page_size(tlb, PAGE_SIZE);
> +restart:
>  	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

The handling of start_pte looks OK.

>  	if (!start_pte)
>  		return 0;
> @@ -441,6 +443,15 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  	for (; addr < end; pte++, addr += PAGE_SIZE) {
>  		ptent = ptep_get(pte);
>
> +		if (++batch_count == SWAP_CLUSTER_MAX) {
> +			batch_count = 0;
> +			if (need_resched()) {
> +				pte_unmap_unlock(start_pte, ptl);
> +				cond_resched();
> +				goto restart;
> +			}
> +		}
> +
>  		if (pte_none(ptent))
>  			continue;

I think this patch looks OK, but would appreciate careful review from
others, please.
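The patch above follows a common kernel pattern: do a bounded batch of
work under a spinlock, then drop the lock, reschedule if needed, and
re-take the lock, resuming from a saved position. Below is a minimal
standalone sketch of that pattern. The names walk_ctx, walk_all,
process_one and WALK_BATCH are hypothetical and illustrate only the
shape of the technique, not the actual madvise code.

#include <linux/sched.h>
#include <linux/spinlock.h>

#define WALK_BATCH	32	/* plays the role of SWAP_CLUSTER_MAX */

struct walk_ctx {
	spinlock_t lock;
	unsigned long pos, end;
};

static void process_one(unsigned long pos)
{
	/* hypothetical per-item work, done under the lock */
}

static void walk_all(struct walk_ctx *c)
{
	unsigned int batch_count = 0;

restart:
	spin_lock(&c->lock);
	for (; c->pos < c->end; c->pos++) {
		if (++batch_count == WALK_BATCH) {
			batch_count = 0;
			if (need_resched()) {
				/*
				 * Drop the lock so cond_resched() can really
				 * schedule, then restart from the saved
				 * position; c->pos survives the unlock.
				 */
				spin_unlock(&c->lock);
				cond_resched();
				goto restart;
			}
		}
		process_one(c->pos);
	}
	spin_unlock(&c->lock);
}

The batch counter keeps the need_resched() check off the hot path, and
checking it every WALK_BATCH iterations bounds the lock hold time
without paying the cost of a check on every element.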
* Re: [PATCH v2 1/1] mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()
From: Sergey Senozhatsky @ 2024-01-26  3:12 UTC
To: Andrew Morton, Jiexun Wang
Cc: brauner, falcon, linux-kernel, linux-mm, tangjinyu

On (23/09/21 17:41), Andrew Morton wrote:
> > Currently the madvise_cold_or_pageout_pte_range() function exhibits
> > significant latency under memory pressure, which can be effectively
> > reduced by adding cond_resched() within the loop.
> >
> > When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
> > the task to ensure fairness and avoid long lock holding times.
> >
> > ...
> >
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -354,6 +354,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >  	struct folio *folio = NULL;
> >  	LIST_HEAD(folio_list);
> >  	bool pageout_anon_only_filter;
> > +	unsigned int batch_count = 0;
> >
> >  	if (fatal_signal_pending(current))
> >  		return -EINTR;
> > @@ -433,6 +434,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >  regular_folio:
> >  #endif
> >  	tlb_change_page_size(tlb, PAGE_SIZE);
> > +restart:
> >  	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>
> The handling of start_pte looks OK.
>
> >  	if (!start_pte)
> >  		return 0;
> > @@ -441,6 +443,15 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >  	for (; addr < end; pte++, addr += PAGE_SIZE) {
> >  		ptent = ptep_get(pte);
> >
> > +		if (++batch_count == SWAP_CLUSTER_MAX) {
> > +			batch_count = 0;
> > +			if (need_resched()) {
> > +				pte_unmap_unlock(start_pte, ptl);

Shouldn't it leave lazy MMU mode here?

---

diff --git a/mm/madvise.c b/mm/madvise.c
index 0f222d464254..127f0c7b69ac 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -451,6 +451,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (++batch_count == SWAP_CLUSTER_MAX) {
 			batch_count = 0;
 			if (need_resched()) {
+				arch_leave_lazy_mmu_mode();
 				pte_unmap_unlock(start_pte, ptl);
 				cond_resched();
 				goto restart;
* [PATCH] mm/madvise: don't forget to leave lazy MMU mode in madvise_cold_or_pageout_pte_range()
From: Sergey Senozhatsky @ 2024-01-26  3:25 UTC
To: Andrew Morton
Cc: Jiexun Wang, linux-kernel, linux-mm, Sergey Senozhatsky

We need to leave lazy MMU mode before unlocking.

Fixes: b2f557a21bc8 ("mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/madvise.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/madvise.c b/mm/madvise.c
index 0f222d464254..127f0c7b69ac 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -451,6 +451,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		if (++batch_count == SWAP_CLUSTER_MAX) {
 			batch_count = 0;
 			if (need_resched()) {
+				arch_leave_lazy_mmu_mode();
 				pte_unmap_unlock(start_pte, ptl);
 				cond_resched();
 				goto restart;
-- 
2.43.0.429.g432eaa2c6b-goog
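For readers without the full function at hand, the following sketch
shows how the fixed function brackets the PTE walk. It paraphrases the
hunks quoted in this thread plus the assumed surrounding mainline
structure of madvise_cold_or_pageout_pte_range() at the time, with the
unrelated per-folio work elided:

	tlb_change_page_size(tlb, PAGE_SIZE);
restart:
	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	if (!start_pte)
		return 0;
	flush_tlb_batched_pending(mm);
	arch_enter_lazy_mmu_mode();	/* entered once per (re)start */
	for (; addr < end; pte++, addr += PAGE_SIZE) {
		ptent = ptep_get(pte);

		if (++batch_count == SWAP_CLUSTER_MAX) {
			batch_count = 0;
			if (need_resched()) {
				arch_leave_lazy_mmu_mode();	/* the fix: pairs the enter above */
				pte_unmap_unlock(start_pte, ptl);
				cond_resched();
				goto restart;
			}
		}
		/* ... per-PTE cold/pageout work ... */
	}
	if (start_pte) {
		arch_leave_lazy_mmu_mode();
		pte_unmap_unlock(start_pte, ptl);
	}

Every path that drops the PTL now leaves lazy MMU mode first, so enters
and leaves stay balanced across restarts.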
* Re: [PATCH] mm/madvise: don't forget to leave lazy MMU mode in madvise_cold_or_pageout_pte_range()
From: Andrew Morton @ 2024-01-26  6:53 UTC
To: Sergey Senozhatsky
Cc: Jiexun Wang, linux-kernel, linux-mm

On Fri, 26 Jan 2024 12:25:48 +0900 Sergey Senozhatsky <senozhatsky@chromium.org> wrote:

> We need to leave lazy MMU mode before unlocking.

What might be the userspace-visible effects of this?

> Fixes: b2f557a21bc8 ("mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()")
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>

I'll add a cc:stable.

> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -451,6 +451,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		if (++batch_count == SWAP_CLUSTER_MAX) {
>  			batch_count = 0;
>  			if (need_resched()) {
> +				arch_leave_lazy_mmu_mode();
>  				pte_unmap_unlock(start_pte, ptl);
>  				cond_resched();
>  				goto restart;
* Re: [PATCH] mm/madvise: don't forget to leave lazy MMU mode in madvise_cold_or_pageout_pte_range()
From: Sergey Senozhatsky @ 2024-01-26  7:00 UTC
To: Andrew Morton
Cc: Sergey Senozhatsky, Jiexun Wang, linux-kernel, linux-mm

On (24/01/25 22:53), Andrew Morton wrote:
> On Fri, 26 Jan 2024 12:25:48 +0900 Sergey Senozhatsky <senozhatsky@chromium.org> wrote:
>
> > We need to leave lazy MMU mode before unlocking.

It depends on the arch, as far as I understand it. We can enter lazy
MMU mode (on each goto restart) more times than we leave it, and, for
instance, on powerpc that means that we can preempt_disable() more
times than we preempt_enable(). That's how enter/leave lazy MMU mode
is implemented there:

static inline void arch_enter_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	/*
	 * apply_to_page_range can call us this preempt enabled when
	 * operating on kernel page tables.
	 */
	preempt_disable();
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	batch->active = 1;
}

static inline void arch_leave_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	if (radix_enabled())
		return;
	batch = this_cpu_ptr(&ppc64_tlb_batch);

	if (batch->index)
		__flush_tlb_pending(batch);
	batch->active = 0;
	preempt_enable();
}

> What might be the userspace-visible effects of this?
>
> > Fixes: b2f557a21bc8 ("mm/madvise: add cond_resched() in madvise_cold_or_pageout_pte_range()")
> > Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
>
> I'll add a cc:stable.

Thanks.
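Concretely, assuming the ppc64 implementation quoted above, the pre-fix
restart path nests calls like this (a simplified hand trace, not real
code):

arch_enter_lazy_mmu_mode();	/* preempt_disable(): preempt count 0 -> 1 */
	...
	pte_unmap_unlock(start_pte, ptl);	/* lock dropped, lazy MMU mode not left */
	cond_resched();		/* preempt count is still 1, so this cannot schedule */
	goto restart;
arch_enter_lazy_mmu_mode();	/* preempt_disable() again: preempt count 1 -> 2 */
	...
arch_leave_lazy_mmu_mode();	/* single leave at function exit: 2 -> 1, never back to 0 */

With the fix applied, each enter is paired with a leave before the
unlock, the preempt count returns to zero on every restart, and
cond_resched() can actually reschedule as the original patch intended.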