linux-mm.kvack.org archive mirror
* [PATCH] mm/vmscan: call vmpressure_prio() in kswapd reclaim path
@ 2019-06-10  8:42 Yafang Shao
  2019-06-10 21:12 ` Andrew Morton
  0 siblings, 1 reply; 3+ messages in thread
From: Yafang Shao @ 2019-06-10  8:42 UTC (permalink / raw)
  To: akpm, mhocko, anton.vorontsov; +Cc: linux-mm, shaoyafang, Yafang Shao

Once the reclaim scanning depth becomes too deep, it means we are
under memory pressure.
This should be captured by vmpressure_prio(), which is expected to run
every time the vmscan reclaim priority (scanning depth) changes.
The scanning depth can also go deep in the kswapd reclaim path, so
vmpressure_prio() should be called there as well.
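
For reference (not part of the patch): vmpressure_prio() in
mm/vmpressure.c gates the notification on a critical-priority
threshold (3 in mainline around this time), while balance_pgdat()
starts at DEF_PRIORITY (12) and lowers sc.priority each pass. The
following is a standalone sketch of that gate; the function names and
the driver are illustrative, not kernel code:

```c
/* Stand-in constants mirroring mainline values at the time of this patch. */
#define DEF_PRIORITY			12	/* initial sc.priority in balance_pgdat() */
#define VMPRESSURE_LEVEL_CRITICAL_PRIO	3	/* vmpressure_level_critical_prio */

static int vmpressure_events;	/* counts calls that would reach vmpressure() */

/* Sketch of the gate inside vmpressure_prio(): shallow scans return early,
 * deep scans (priority <= the critical threshold) raise a notification. */
static void vmpressure_prio_sketch(int prio)
{
	if (prio > VMPRESSURE_LEVEL_CRITICAL_PRIO)
		return;
	vmpressure_events++;
}

/* Model one full balance_pgdat() descent, lowering the priority each pass. */
static int simulate_kswapd_descent(void)
{
	int prio;

	vmpressure_events = 0;
	for (prio = DEF_PRIORITY; prio >= 1; prio--)
		vmpressure_prio_sketch(prio);
	return vmpressure_events;	/* only priorities 3, 2 and 1 fire */
}
```

So with the hunk below, a kswapd pass notifies userspace only once
reclaim has already become quite desperate; shallower passes stay
silent.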

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/vmscan.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b79f584..1fbd3be 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3609,8 +3609,11 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		if (nr_boost_reclaim && !nr_reclaimed)
 			break;
 
-		if (raise_priority || !nr_reclaimed)
+		if (raise_priority || !nr_reclaimed) {
+			vmpressure_prio(sc.gfp_mask, sc.target_mem_cgroup,
+					sc.priority);
 			sc.priority--;
+		}
 	} while (sc.priority >= 1);
 
 	if (!sc.nr_reclaimed)
-- 
1.8.3.1



* Re: [PATCH] mm/vmscan: call vmpressure_prio() in kswapd reclaim path
  2019-06-10  8:42 [PATCH] mm/vmscan: call vmpressure_prio() in kswapd reclaim path Yafang Shao
@ 2019-06-10 21:12 ` Andrew Morton
  2019-06-11 12:24   ` Yafang Shao
  0 siblings, 1 reply; 3+ messages in thread
From: Andrew Morton @ 2019-06-10 21:12 UTC (permalink / raw)
  To: Yafang Shao; +Cc: mhocko, anton.vorontsov, linux-mm, shaoyafang

On Mon, 10 Jun 2019 16:42:27 +0800 Yafang Shao <laoar.shao@gmail.com> wrote:

> Once the reclaim scanning depth becomes too deep, it means we are
> under memory pressure.
> This should be captured by vmpressure_prio(), which is expected to run
> every time the vmscan reclaim priority (scanning depth) changes.
> The scanning depth can also go deep in the kswapd reclaim path, so
> vmpressure_prio() should be called there as well.
> 
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

What effect does this change have upon userspace?

Presumably you observed some behaviour(?) and that behaviour was
undesirable(?) and the patch changed that behaviour to something
else(?) and this new behaviour is better for some reason(?).



* Re: [PATCH] mm/vmscan: call vmpressure_prio() in kswapd reclaim path
  2019-06-10 21:12 ` Andrew Morton
@ 2019-06-11 12:24   ` Yafang Shao
  0 siblings, 0 replies; 3+ messages in thread
From: Yafang Shao @ 2019-06-11 12:24 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Michal Hocko, Linux MM, shaoyafang

On Tue, Jun 11, 2019 at 5:12 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 10 Jun 2019 16:42:27 +0800 Yafang Shao <laoar.shao@gmail.com> wrote:
>
> > Once the reclaim scanning depth becomes too deep, it means we are
> > under memory pressure.
> > This should be captured by vmpressure_prio(), which is expected to run
> > every time the vmscan reclaim priority (scanning depth) changes.
> > The scanning depth can also go deep in the kswapd reclaim path, so
> > vmpressure_prio() should be called there as well.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> What effect does this change have upon userspace?
>
> Presumably you observed some behaviour(?) and that behaviour was
> undesirable(?) and the patch changed that behaviour to something
> else(?) and this new behaviour is better for some reason(?).
>

When free memory is low, userspace can receive the critical memory
pressure event earlier, because we always wake up kswapd before
falling back to direct reclaim.
Currently the vmpressure work (vmpressure_work_fn) can only be
scheduled from the direct reclaim path; with this change it can also
be scheduled from the kswapd reclaim path.

I think receiving the critical memory pressure event earlier gives
userspace a better chance to react and avoid a random OOM.

With this change, the vmpressure work will be scheduled more
frequently than before when the system is under memory pressure.
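
For completeness: on cgroup v1, userspace subscribes to these events
by registering an eventfd together with the memory.pressure_level
file through cgroup.event_control. Below is a minimal sketch of that
registration; watch_pressure() and its error handling are
illustrative, and the cgroup path is whatever the caller has mounted:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

/*
 * Register an eventfd for vmpressure notifications on a cgroup-v1
 * memory cgroup. 'level' is "low", "medium" or "critical".
 * Returns the eventfd to poll/read, or -1 on error.
 */
static int watch_pressure(const char *cgroup, const char *level)
{
	char path[256], line[256];
	int efd, pfd, cfd;

	efd = eventfd(0, 0);
	if (efd < 0)
		return -1;

	snprintf(path, sizeof(path), "%s/memory.pressure_level", cgroup);
	pfd = open(path, O_RDONLY);
	if (pfd < 0) {
		close(efd);
		return -1;
	}

	snprintf(path, sizeof(path), "%s/cgroup.event_control", cgroup);
	cfd = open(path, O_WRONLY);
	if (cfd < 0) {
		close(pfd);
		close(efd);
		return -1;
	}

	/* "<eventfd> <pressure_level fd> <level>" registers the listener. */
	snprintf(line, sizeof(line), "%d %d %s", efd, pfd, level);
	if (write(cfd, line, strlen(line)) < 0) {
		close(cfd);
		close(pfd);
		close(efd);
		return -1;
	}
	close(cfd);
	close(pfd);
	return efd;
}
```

A daemon would then poll and read the returned eventfd; each
successful read indicates a pressure event at the requested level.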

Thanks
Yafang


