From: Konstantin Khlebnikov <khlebnikov@openvz.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 12/12] mm/vmscan: kill struct mem_cgroup_zone
Date: Thu, 26 Apr 2012 11:54:34 +0400
Message-ID: <20120426075434.18961.47496.stgit@zurg>
In-Reply-To: <20120426074632.18961.17803.stgit@zurg>
This patch kills struct mem_cgroup_zone and renames shrink_mem_cgroup_zone()
to shrink_lruvec(), which now takes as its argument the single lruvec it
always shrinks.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
mm/vmscan.c | 26 ++++++--------------------
1 file changed, 6 insertions(+), 20 deletions(-)
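
In short: callers no longer build a temporary {memcg, zone} pair on the
stack; they resolve the lruvec once via mem_cgroup_zone_lruvec() and hand
it down. A condensed before/after sketch of the call site, taken from the
shrink_zone() hunk below:

	/* before: pack memcg + zone into a temporary, resolve inside */
	struct mem_cgroup_zone mz = {
		.mem_cgroup = memcg,
		.zone = zone,
	};
	shrink_mem_cgroup_zone(&mz, sc);

	/* after: resolve the lruvec once at the call site */
	struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);

	shrink_lruvec(lruvec, sc);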
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9114739..34cd8a5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -94,11 +94,6 @@ struct scan_control {
nodemask_t *nodemask;
};
-struct mem_cgroup_zone {
- struct mem_cgroup *mem_cgroup;
- struct zone *zone;
-};
-
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
#ifdef ARCH_HAS_PREFETCH
@@ -1811,8 +1806,7 @@ static inline bool should_continue_reclaim(struct lruvec *lruvec,
/*
* This is a basic per-zone page freer. Used by both kswapd and direct reclaim.
*/
-static void shrink_mem_cgroup_zone(struct mem_cgroup_zone *mz,
- struct scan_control *sc)
+static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
{
unsigned long nr[NR_LRU_LISTS];
unsigned long nr_to_scan;
@@ -1820,9 +1814,6 @@ static void shrink_mem_cgroup_zone(struct mem_cgroup_zone *mz,
unsigned long nr_reclaimed, nr_scanned;
unsigned long nr_to_reclaim = sc->nr_to_reclaim;
struct blk_plug plug;
- struct lruvec *lruvec;
-
- lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
restart:
nr_reclaimed = 0;
@@ -1884,12 +1875,10 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc)
memcg = mem_cgroup_iter(root, NULL, &reclaim);
do {
- struct mem_cgroup_zone mz = {
- .mem_cgroup = memcg,
- .zone = zone,
- };
+ struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
+
+ shrink_lruvec(lruvec, sc);
- shrink_mem_cgroup_zone(&mz, sc);
/*
* Limit reclaim has historically picked one memcg and
* scanned it with decreasing priority levels until
@@ -2214,10 +2203,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
.priority = 0,
.target_mem_cgroup = memcg,
};
- struct mem_cgroup_zone mz = {
- .mem_cgroup = memcg,
- .zone = zone,
- };
+ struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
@@ -2233,7 +2219,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
* will pick up pages from other mem cgroup's as well. We hack
* the priority and make it zero.
*/
- shrink_mem_cgroup_zone(&mz, &sc);
+ shrink_lruvec(lruvec, &sc);
trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
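
For context, after this patch the memcg reclaim loop in shrink_zone()
reads roughly as follows. This is a sketch reconstructed from the hunks
above; the limit-reclaim loop-exit logic is abbreviated where the diff
context cuts off:

	static void shrink_zone(struct zone *zone, struct scan_control *sc)
	{
		struct mem_cgroup *root = sc->target_mem_cgroup;
		struct mem_cgroup_reclaim_cookie reclaim = {
			.zone = zone,
			.priority = sc->priority,
		};
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(root, NULL, &reclaim);
		do {
			/* resolve the per-memcg, per-zone lruvec once */
			struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);

			shrink_lruvec(lruvec, sc);

			/*
			 * Limit reclaim has historically picked one memcg and
			 * scanned it with decreasing priority levels until
			 * enough pages were reclaimed, so it stops after the
			 * first memcg (exit details elided in the hunk above).
			 */
			memcg = mem_cgroup_iter(root, memcg, &reclaim);
		} while (memcg);
	}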