From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail143.messagelabs.com (mail143.messagelabs.com [216.82.254.35])
	by kanga.kvack.org (Postfix) with ESMTP id C314590010B
	for ; Thu, 12 May 2011 15:19:54 -0400 (EDT)
Received: from wpaz37.hot.corp.google.com (wpaz37.hot.corp.google.com [172.24.198.101])
	by smtp-out.google.com with ESMTP id p4CJJnbU017717
	for ; Thu, 12 May 2011 12:19:50 -0700
Received: from qyk2 (qyk2.prod.google.com [10.241.83.130])
	by wpaz37.hot.corp.google.com with ESMTP id p4CJJhlX019405
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=NOT)
	for ; Thu, 12 May 2011 12:19:48 -0700
Received: by qyk2 with SMTP id 2so1117868qyk.9
	for ; Thu, 12 May 2011 12:19:45 -0700 (PDT)
MIME-Version: 1.0
In-Reply-To: <1305212038-15445-4-git-send-email-hannes@cmpxchg.org>
References: <1305212038-15445-1-git-send-email-hannes@cmpxchg.org>
	<1305212038-15445-4-git-send-email-hannes@cmpxchg.org>
Date: Thu, 12 May 2011 12:19:45 -0700
Message-ID:
Subject: Re: [rfc patch 3/6] mm: memcg-aware global reclaim
From: Ying Han
Content-Type: multipart/alternative; boundary=002354470aa86ecd7904a319120c
Sender: owner-linux-mm@kvack.org
List-ID:
To: Johannes Weiner
Cc: KAMEZAWA Hiroyuki, Daisuke Nishimura, Balbir Singh, Michal Hocko,
	Andrew Morton, Rik van Riel, Minchan Kim, KOSAKI Motohiro,
	Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org

--002354470aa86ecd7904a319120c
Content-Type: text/plain; charset=ISO-8859-1

On Thu, May 12, 2011 at 7:53 AM, Johannes Weiner wrote:

> A page charged to a memcg is linked to a lru list specific to that
> memcg.  At the same time, traditional global reclaim is oblivious to
> memcgs, and all the pages are also linked to a global per-zone list.
>
> This patch changes traditional global reclaim to iterate over all
> existing memcgs, so that it no longer relies on the global list being
> present.
>
> This is one step forward in integrating memcg code better into the
> rest of memory management.  It is also a prerequisite to get rid of
> the global per-zone lru lists.
>

Sorry if I misunderstood something here.  I assume this patch does not
have much to do with global soft_limit reclaim, but only makes the
system scan the per-memcg lrus under global memory pressure.

> RFC:
>
> The algorithm implemented in this patch is very naive.  For each zone
> scanned at each priority level, it iterates over all existing memcgs
> and considers them for scanning.
>
> This is just a prototype and I did not optimize it yet because I am
> unsure about the maximum number of memcgs that still constitute a sane
> configuration in comparison to the machine size.
>

So we also scan memcgs which have no pages allocated on this zone?  I
will read the following patch in case I missed something here :)

--Ying

> It is perfectly fair since all memcgs are scanned at each priority
> level.
>
> On my 4G quadcore laptop with 1000 memcgs, a significant amount of CPU
> time was spent just iterating memcgs during reclaim.  But it can not
> really be claimed that the old code was much better, either: global
> LRU reclaim could mean that a few hundred memcgs would have been
> emptied out completely, while others stayed untouched.
>
> I am open to solutions that trade fairness against CPU-time but don't
> want to have an extreme in either direction.  Maybe break out early if
> a number of memcgs has been successfully reclaimed from and remember
> the last one scanned.
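
Just to illustrate that early-break idea (not a proposal for the final
code): a rough, untested sketch of what shrink_zone() from this patch
could look like with it.  The zone->last_scanned_memcg field and the
cutoff of four usefully-reclaimed memcgs are made up here, and css
reference counting / memcg lifetime across calls is glossed over.

static void shrink_zone(int priority, struct zone *zone,
			struct scan_control *sc)
{
	struct mem_cgroup *root = sc->memcg;
	/* resume where the last invocation left off (made-up field) */
	struct mem_cgroup *mem = zone->last_scanned_memcg;
	unsigned long before = sc->nr_reclaimed;
	int nr_useful = 0;

	do {
		mem_cgroup_hierarchy_walk(root, &mem);
		sc->current_memcg = mem;
		do_shrink_zone(priority, zone, sc);
		/* count memcgs that actually gave pages back */
		if (sc->nr_reclaimed > before) {
			nr_useful++;
			before = sc->nr_reclaimed;
		}
		/* trade some fairness for CPU time: stop after a few hits */
		if (nr_useful >= 4)
			break;
	} while (mem != root);

	/* remember the position so the next scan continues round-robin */
	zone->last_scanned_memcg = mem;
	sc->current_memcg = NULL;
}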
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  include/linux/memcontrol.h |    7 ++
>  mm/memcontrol.c            |  148 +++++++++++++++++++++++++++++---------------
>  mm/vmscan.c                |   21 +++++--
>  3 files changed, 120 insertions(+), 56 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 5e9840f5..58728c7 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -104,6 +104,7 @@ extern void mem_cgroup_end_migration(struct mem_cgroup *mem,
>  /*
>   * For memory reclaim.
>   */
> +void mem_cgroup_hierarchy_walk(struct mem_cgroup *, struct mem_cgroup **);
>  int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
>  int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
>  unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
> @@ -289,6 +290,12 @@ static inline bool mem_cgroup_disabled(void)
>  	return true;
>  }
>
> +static inline void mem_cgroup_hierarchy_walk(struct mem_cgroup *start,
> +					     struct mem_cgroup **iter)
> +{
> +	*iter = start;
> +}
> +
>  static inline int
>  mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg)
>  {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bf5ab87..edcd55a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -313,7 +313,7 @@ static bool move_file(void)
>  }
>
>  /*
> - * Maximum loops in mem_cgroup_hierarchical_reclaim(), used for soft
> + * Maximum loops in mem_cgroup_soft_reclaim(), used for soft
>   * limit reclaim to prevent infinite loops, if they ever occur.
>   */
>  #define	MEM_CGROUP_MAX_RECLAIM_LOOPS		(100)
> @@ -339,16 +339,6 @@ enum charge_type {
>  /* Used for OOM nofiier */
>  #define OOM_CONTROL		(0)
>
> -/*
> - * Reclaim flags for mem_cgroup_hierarchical_reclaim
> - */
> -#define MEM_CGROUP_RECLAIM_NOSWAP_BIT	0x0
> -#define MEM_CGROUP_RECLAIM_NOSWAP	(1 << MEM_CGROUP_RECLAIM_NOSWAP_BIT)
> -#define MEM_CGROUP_RECLAIM_SHRINK_BIT	0x1
> -#define MEM_CGROUP_RECLAIM_SHRINK	(1 << MEM_CGROUP_RECLAIM_SHRINK_BIT)
> -#define MEM_CGROUP_RECLAIM_SOFT_BIT	0x2
> -#define MEM_CGROUP_RECLAIM_SOFT		(1 << MEM_CGROUP_RECLAIM_SOFT_BIT)
> -
>  static void mem_cgroup_get(struct mem_cgroup *mem);
>  static void mem_cgroup_put(struct mem_cgroup *mem);
>  static struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *mem);
> @@ -1381,6 +1371,86 @@ u64 mem_cgroup_get_limit(struct mem_cgroup *memcg)
>  	return min(limit, memsw);
>  }
>
> +void mem_cgroup_hierarchy_walk(struct mem_cgroup *start,
> +			       struct mem_cgroup **iter)
> +{
> +	struct mem_cgroup *mem = *iter;
> +	int id;
> +
> +	if (!start)
> +		start = root_mem_cgroup;
> +	/*
> +	 * Even without hierarchy explicitely enabled in the root
> +	 * memcg, it is the ultimate parent of all memcgs.
> +	 */
> +	if (!(start == root_mem_cgroup || start->use_hierarchy)) {
> +		*iter = start;
> +		return;
> +	}
> +
> +	if (!mem)
> +		id = css_id(&start->css);
> +	else {
> +		id = css_id(&mem->css);
> +		css_put(&mem->css);
> +		mem = NULL;
> +	}
> +
> +	do {
> +		struct cgroup_subsys_state *css;
> +
> +		rcu_read_lock();
> +		css = css_get_next(&mem_cgroup_subsys, id+1, &start->css, &id);
> +		/*
> +		 * The caller must already have a reference to the
> +		 * starting point of this hierarchy walk, do not grab
> +		 * another one.  This way, the loop can be finished
> +		 * when the hierarchy root is returned, without any
> +		 * further cleanup required.
> +		 */
> +		if (css && (css == &start->css || css_tryget(css)))
> +			mem = container_of(css, struct mem_cgroup, css);
> +		rcu_read_unlock();
> +		if (!css)
> +			id = 0;
> +	} while (!mem);
> +
> +	if (mem == root_mem_cgroup)
> +		mem = NULL;
> +
> +	*iter = mem;
> +}
> +
> +static unsigned long mem_cgroup_target_reclaim(struct mem_cgroup *mem,
> +						gfp_t gfp_mask,
> +						bool noswap,
> +						bool shrink)
> +{
> +	unsigned long total = 0;
> +	int loop;
> +
> +	if (mem->memsw_is_minimum)
> +		noswap = true;
> +
> +	for (loop = 0; loop < MEM_CGROUP_MAX_RECLAIM_LOOPS; loop++) {
> +		drain_all_stock_async();
> +		total += try_to_free_mem_cgroup_pages(mem, gfp_mask, noswap,
> +						      get_swappiness(mem));
> +		if (total && shrink)
> +			break;
> +		if (mem_cgroup_margin(mem))
> +			break;
> +		/*
> +		 * If we have not been able to reclaim anything after
> +		 * two reclaim attempts, there may be no reclaimable
> +		 * pages under this hierarchy.
> +		 */
> +		if (loop && !total)
> +			break;
> +	}
> +	return total;
> +}
> +
>  /*
>   * Visit the first child (need not be the first child as per the ordering
>   * of the cgroup list, since we track last_scanned_child) of @mem and use
> @@ -1427,21 +1497,16 @@ mem_cgroup_select_victim(struct mem_cgroup *root_mem)
>   *
>   * We give up and return to the caller when we visit root_mem twice.
>   * (other groups can be removed while we're walking....)
> - *
> - * If shrink==true, for avoiding to free too much, this returns immedieately.
>   */
> -static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> -					   struct zone *zone,
> -					   gfp_t gfp_mask,
> -					   unsigned long reclaim_options)
> +static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_mem,
> +				   struct zone *zone,
> +				   gfp_t gfp_mask)
>  {
>  	struct mem_cgroup *victim;
>  	int ret, total = 0;
>  	int loop = 0;
> -	bool noswap = reclaim_options & MEM_CGROUP_RECLAIM_NOSWAP;
> -	bool shrink = reclaim_options & MEM_CGROUP_RECLAIM_SHRINK;
> -	bool check_soft = reclaim_options & MEM_CGROUP_RECLAIM_SOFT;
>  	unsigned long excess;
> +	bool noswap = false;
>
>  	excess = res_counter_soft_limit_excess(&root_mem->res) >> PAGE_SHIFT;
>
> @@ -1461,7 +1526,7 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
>  				 * anything, it might because there are
>  				 * no reclaimable pages under this hierarchy
>  				 */
> -				if (!check_soft || !total) {
> +				if (!total) {
>  					css_put(&victim->css);
>  					break;
>  				}
> @@ -1484,25 +1549,11 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
>  			continue;
>  		}
>  		/* we use swappiness of local cgroup */
> -		if (check_soft)
> -			ret = mem_cgroup_shrink_node_zone(victim, gfp_mask,
> +		ret = mem_cgroup_shrink_node_zone(victim, gfp_mask,
>  				noswap, get_swappiness(victim), zone);
> -		else
> -			ret = try_to_free_mem_cgroup_pages(victim, gfp_mask,
> -					noswap, get_swappiness(victim));
>  		css_put(&victim->css);
> -		/*
> -		 * At shrinking usage, we can't check we should stop here or
> -		 * reclaim more. It's depends on callers. last_scanned_child
> -		 * will work enough for keeping fairness under tree.
> -		 */
> -		if (shrink)
> -			return ret;
>  		total += ret;
> -		if (check_soft) {
> -			if (!res_counter_soft_limit_excess(&root_mem->res))
> -				return total;
> -		} else if (mem_cgroup_margin(root_mem))
> +		if (!res_counter_soft_limit_excess(&root_mem->res))
>  			return total;
>  	}
>  	return total;
> @@ -1897,7 +1948,7 @@ static int mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
>  	unsigned long csize = nr_pages * PAGE_SIZE;
>  	struct mem_cgroup *mem_over_limit;
>  	struct res_counter *fail_res;
> -	unsigned long flags = 0;
> +	bool noswap = false;
>  	int ret;
>
>  	ret = res_counter_charge(&mem->res, csize, &fail_res);
> @@ -1911,7 +1962,7 @@ static int mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
>
>  		res_counter_uncharge(&mem->res, csize);
>  		mem_over_limit = mem_cgroup_from_res_counter(fail_res, memsw);
> -		flags |= MEM_CGROUP_RECLAIM_NOSWAP;
> +		noswap = true;
>  	} else
>  		mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
>  	/*
> @@ -1927,8 +1978,8 @@ static int mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
>  	if (!(gfp_mask & __GFP_WAIT))
>  		return CHARGE_WOULDBLOCK;
>
> -	ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
> -					      gfp_mask, flags);
> +	ret = mem_cgroup_target_reclaim(mem_over_limit, gfp_mask,
> +					noswap, false);
>  	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
>  		return CHARGE_RETRY;
>  	/*
> @@ -3085,7 +3136,7 @@ void mem_cgroup_end_migration(struct mem_cgroup *mem,
>
>  /*
>   * A call to try to shrink memory usage on charge failure at shmem's swapin.
> - * Calling hierarchical_reclaim is not enough because we should update
> + * Calling target_reclaim is not enough because we should update
>   * last_oom_jiffies to prevent pagefault_out_of_memory from invoking global OOM.
>   * Moreover considering hierarchy, we should reclaim from the mem_over_limit,
>   * not from the memcg which this page would be charged to.
> @@ -3167,7 +3218,7 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>  	int enlarge;
>
>  	/*
> -	 * For keeping hierarchical_reclaim simple, how long we should retry
> +	 * For keeping target_reclaim simple, how long we should retry
>  	 * is depends on callers. We set our retry-count to be function
>  	 * of # of children which we should visit in this loop.
>  	 */
> @@ -3210,8 +3261,7 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>  		if (!ret)
>  			break;
>
> -		mem_cgroup_hierarchical_reclaim(memcg, NULL, GFP_KERNEL,
> -						MEM_CGROUP_RECLAIM_SHRINK);
> +		mem_cgroup_target_reclaim(memcg, GFP_KERNEL, false, false);
>  		curusage = res_counter_read_u64(&memcg->res, RES_USAGE);
>  		/* Usage is reduced ? */
>  		if (curusage >= oldusage)
> @@ -3269,9 +3319,7 @@ static int mem_cgroup_resize_memsw_limit(struct mem_cgroup *memcg,
>  		if (!ret)
>  			break;
>
> -		mem_cgroup_hierarchical_reclaim(memcg, NULL, GFP_KERNEL,
> -						MEM_CGROUP_RECLAIM_NOSWAP |
> -						MEM_CGROUP_RECLAIM_SHRINK);
> +		mem_cgroup_target_reclaim(memcg, GFP_KERNEL, true, false);
>  		curusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
>  		/* Usage is reduced ? */
>  		if (curusage >= oldusage)
> @@ -3311,9 +3359,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
>  		if (!mz)
>  			break;
>
> -		reclaimed = mem_cgroup_hierarchical_reclaim(mz->mem, zone,
> -							    gfp_mask,
> -							    MEM_CGROUP_RECLAIM_SOFT);
> +		reclaimed = mem_cgroup_soft_reclaim(mz->mem, zone, gfp_mask);
>  		nr_reclaimed += reclaimed;
>  		spin_lock(&mctz->lock);
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ceeb2a5..e2a3647 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1900,8 +1900,8 @@ static inline bool should_continue_reclaim(struct zone *zone,
>  /*
>   * This is a basic per-zone page freer.  Used by both kswapd and direct reclaim.
>   */
> -static void shrink_zone(int priority, struct zone *zone,
> -			struct scan_control *sc)
> +static void do_shrink_zone(int priority, struct zone *zone,
> +			   struct scan_control *sc)
>  {
>  	unsigned long nr[NR_LRU_LISTS];
>  	unsigned long nr_to_scan;
> @@ -1914,8 +1914,6 @@ restart:
>  	nr_scanned = sc->nr_scanned;
>  	get_scan_count(zone, sc, nr, priority);
>
> -	sc->current_memcg = sc->memcg;
> -
>  	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
>  					nr[LRU_INACTIVE_FILE]) {
>  		for_each_evictable_lru(l) {
> @@ -1954,6 +1952,19 @@ restart:
>  		goto restart;
>
>  	throttle_vm_writeout(sc->gfp_mask);
> +}
> +
> +static void shrink_zone(int priority, struct zone *zone,
> +			struct scan_control *sc)
> +{
> +	struct mem_cgroup *root = sc->memcg;
> +	struct mem_cgroup *mem = NULL;
> +
> +	do {
> +		mem_cgroup_hierarchy_walk(root, &mem);
> +		sc->current_memcg = mem;
> +		do_shrink_zone(priority, zone, sc);
> +	} while (mem != root);
>
>  	/* For good measure, noone higher up the stack should look at it */
>  	sc->current_memcg = NULL;
> @@ -2190,7 +2201,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
>  	 * will pick up pages from other mem cgroup's as well. We hack
>  	 * the priority and make it zero.
>  	 */
> -	shrink_zone(0, zone, &sc);
> +	do_shrink_zone(0, zone, &sc);
>
>  	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
>
> --
> 1.7.5.1
>
>

--002354470aa86ecd7904a319120c--
