From: Yafang Shao <laoar.shao@gmail.com>
To: akpm@linux-foundation.org, ktkhai@virtuozzo.com, mhocko@suse.com,
	hannes@cmpxchg.org, vdavydov.dev@gmail.com,
	mgorman@techsingularity.net
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH 2/2] mm/vmscan: calculate reclaimed slab caches in all reclaim paths
Date: Fri, 21 Jun 2019 18:14:46 +0800
Message-ID: <1561112086-6169-3-git-send-email-laoar.shao@gmail.com>
In-Reply-To: <1561112086-6169-1-git-send-email-laoar.shao@gmail.com>

There are six different reclaim paths by now:
- kswapd reclaim path
- node reclaim path
- hibernate preallocate memory reclaim path
- direct reclaim path
- memcg reclaim path
- memcg softlimit reclaim path

The slab caches reclaimed in these paths are only accounted in the first
three paths (kswapd, node reclaim, and hibernation); the remaining three
paths do not account them.

There are some drawbacks if we don't calculate the reclaimed slab caches:
- sc->nr_reclaimed isn't correct if some slab caches are reclaimed in
  this path.
- The slab caches may be reclaimed too thoroughly if there are lots of
  reclaimable slab caches and few page caches.
  Let's take an easy example of this case.
  Suppose one memcg is full of slab caches and its limit is 512M, in
  other words there are approximately 512M of slab caches in this memcg.
  When the limit of the memcg is reached, memcg reclaim begins, and in
  this memcg reclaim path it will continuously reclaim the slab caches
  until sc->priority drops to 0.
  After this reclaim stops, you will find there are few slab caches
  left, less than 20M in my test case.
  With this patch applied, the amount left is greater than 300M and
  sc->priority only drops to 3.
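
For reference, below is a rough sketch (not part of the patch) of how
the reclaim_state plumbing makes this accounting happen. It assumes the
reclaim_state member added to struct scan_control in patch 1/2 of this
series; the snippets are simplified from mm/vmscan.c and the slab free
paths of this era, not verbatim kernel code:

	/* linux/swap.h: slab shrinkers record freed pages here */
	struct reclaim_state {
		unsigned long reclaimed_slab;
	};

	/* slab page free path (simplified): credit the current reclaimer */
	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += pages;

	/* shrink_node() (simplified): fold slab pages freed by the
	 * shrinkers into the caller's reclaim count */
	if (current->reclaim_state) {
		sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
		current->reclaim_state->reclaimed_slab = 0;
	}

A reclaim entry point only sees those freed slab pages if it publishes
its reclaim_state via current, which is what this patch adds to the
direct reclaim and memcg reclaim paths below.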

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/vmscan.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 18a66e5..d6c3fc8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3164,11 +3164,13 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 	if (throttle_direct_reclaim(sc.gfp_mask, zonelist, nodemask))
 		return 1;
 
+	current->reclaim_state = &sc.reclaim_state;
 	trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);
 
 	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
 
 	trace_mm_vmscan_direct_reclaim_end(nr_reclaimed);
+	current->reclaim_state = NULL;
 
 	return nr_reclaimed;
 }
@@ -3191,6 +3193,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 	};
 	unsigned long lru_pages;
 
+	current->reclaim_state = &sc.reclaim_state;
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
 
@@ -3212,7 +3215,9 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 					cgroup_ino(memcg->css.cgroup),
 					sc.nr_reclaimed);
 
+	current->reclaim_state = NULL;
 	*nr_scanned = sc.nr_scanned;
+
 	return sc.nr_reclaimed;
 }
 
@@ -3239,6 +3244,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 		.may_shrinkslab = 1,
 	};
 
+	current->reclaim_state = &sc.reclaim_state;
 	/*
 	 * Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
 	 * take care of from where we get pages. So the node where we start the
@@ -3263,6 +3269,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 	trace_mm_vmscan_memcg_reclaim_end(
 				cgroup_ino(memcg->css.cgroup),
 				nr_reclaimed);
+	current->reclaim_state = NULL;
 
 	return nr_reclaimed;
 }
-- 
1.8.3.1



Thread overview: 9+ messages
2019-06-21 10:14 [PATCH 0/2] mm/vmscan: calculate reclaimed slab in all reclaim paths Yafang Shao
2019-06-21 10:14 ` [PATCH 1/2] mm/vmscan: add a new member reclaim_state in struct shrink_control Yafang Shao
2019-06-21 10:14 ` Yafang Shao [this message]
2019-06-22  3:30   ` [PATCH 2/2] mm/vmscan: calculate reclaimed slab caches in all reclaim paths Andrew Morton
2019-06-22  6:31     ` Yafang Shao
2019-06-24  8:53   ` Kirill Tkhai
2019-06-24 12:30     ` Yafang Shao
2019-06-24 12:33       ` Kirill Tkhai
2019-06-24 12:40         ` Yafang Shao
