From: Ying Han
Date: Tue, 26 Apr 2011 16:19:41 -0700
Subject: Re: [PATCH 7/7] memcg watermark reclaim workqueue.
To: KAMEZAWA Hiroyuki
Cc: linux-mm@kvack.org, kosaki.motohiro@jp.fujitsu.com, balbir@linux.vnet.ibm.com,
 nishimura@mxp.nes.nec.co.jp, akpm@linux-foundation.org, Johannes Weiner,
 minchan.kim@gmail.com, Michal Hocko

On Mon, Apr 25, 2011 at 2:42 AM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@jp.fujitsu.com> wrote:
> By default the per-memcg background reclaim is disabled when the
> limit_in_bytes is set to the maximum. kswapd_run() is called when the memcg
> is being resized, and kswapd_stop() is called when the memcg is being
> deleted.
>
> The per-memcg kswapd is woken up based on the usage and low_wmark, which is
> checked once per 1024 increments per cpu. The memcg's kswapd is woken up if
> the usage is larger than the low_wmark.
>
> At each iteration of work, the work frees at most 2048 pages and then
> switches to the next work in round-robin fashion. If the memcg seems
> congested, it adds a delay before its next run.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ---
>  include/linux/memcontrol.h |    2 -
>  mm/memcontrol.c            |   86 ++++++++++++++++++++++++++++++++++++++++++
>  mm/vmscan.c                |   23 +++++++-----
>  3 files changed, 102 insertions(+), 9 deletions(-)
>
> Index: memcg/mm/memcontrol.c
> ===================================================================
> --- memcg.orig/mm/memcontrol.c
> +++ memcg/mm/memcontrol.c
> @@ -111,10 +111,12 @@ enum mem_cgroup_events_index {
>  enum mem_cgroup_events_target {
>  	MEM_CGROUP_TARGET_THRESH,
>  	MEM_CGROUP_TARGET_SOFTLIMIT,
> +	MEM_CGROUP_WMARK_EVENTS_THRESH,
>  	MEM_CGROUP_NTARGETS,
>  };
>  #define THRESHOLDS_EVENTS_TARGET (128)
>  #define SOFTLIMIT_EVENTS_TARGET (1024)
> +#define WMARK_EVENTS_TARGET (1024)
>
>  struct mem_cgroup_stat_cpu {
>  	long count[MEM_CGROUP_STAT_NSTATS];
> @@ -267,6 +269,11 @@ struct mem_cgroup {
>  	struct list_head oom_notify;
>
>  	/*
> +	 * For high/low watermark.
> +	 */
> +	bool			bgreclaim_resched;
> +	struct delayed_work	bgreclaim_work;
> +	/*
>  	 * Should we move charges of a task when a task is moved into this
>  	 * mem_cgroup ? And what type of charges should we move ?
>  	 */
> @@ -374,6 +381,8 @@ static void mem_cgroup_put(struct mem_cg
>  static struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *mem);
>  static void drain_all_stock_async(void);
>
> +static void wake_memcg_kswapd(struct mem_cgroup *mem);
> +
>  static struct mem_cgroup_per_zone *
>  mem_cgroup_zoneinfo(struct mem_cgroup *mem, int nid, int zid)
>  {
> @@ -552,6 +561,12 @@ mem_cgroup_largest_soft_limit_node(struc
>  	return mz;
>  }
>
> +static void mem_cgroup_check_wmark(struct mem_cgroup *mem)
> +{
> +	if (!mem_cgroup_watermark_ok(mem, CHARGE_WMARK_LOW))
> +		wake_memcg_kswapd(mem);
> +}
> +
>  /*
>   * Implementation Note: reading percpu statistics for memcg.
>   *
> @@ -702,6 +717,9 @@ static void __mem_cgroup_target_update(s
>  	case MEM_CGROUP_TARGET_SOFTLIMIT:
>  		next = val + SOFTLIMIT_EVENTS_TARGET;
>  		break;
> +	case MEM_CGROUP_WMARK_EVENTS_THRESH:
> +		next = val + WMARK_EVENTS_TARGET;
> +		break;
>  	default:
>  		return;
>  	}
> @@ -725,6 +743,10 @@ static void memcg_check_events(struct me
>  			__mem_cgroup_target_update(mem,
>  				MEM_CGROUP_TARGET_SOFTLIMIT);
>  		}
> +		if (unlikely(__memcg_event_check(mem,
> +			MEM_CGROUP_WMARK_EVENTS_THRESH))){
> +			mem_cgroup_check_wmark(mem);
> +		}
>  	}
>  }
>
> @@ -3661,6 +3683,67 @@ unsigned long mem_cgroup_soft_limit_recl
>  	return nr_reclaimed;
>  }
>
> +struct workqueue_struct *memcg_bgreclaimq;
> +
> +static int memcg_bgreclaim_init(void)
> +{
> +	/*
> +	 * use UNBOUND workqueue because we traverse nodes (no locality) and
> +	 * the work is cpu-intensive.
> +	 */
> +	memcg_bgreclaimq = alloc_workqueue("memcg",
> +			WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_FREEZABLE, 0);
> +	return 0;
> +}

I read the workqueue documentation. WQ_UNBOUND supports a maximum of 512
execution contexts per CPU -- does "execution context" mean a thread?

I think I understand the motivation for the flag: it gives us more
concurrency for the background-reclaim work items. My question is about the
workqueue scheduling mechanism. If we can queue an item anywhere as long as
it is inserted into the queue, is there a mechanism for load balancing like
the system scheduler has? The scenario I am thinking of is one CPU ending up
with 512 work items while another has only 1.

I don't think this is a directly related issue for this patch; I just hope
the workqueue mechanism already supports something like that for load
balancing.
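To make the question concrete, here is a minimal sketch (not part of the
patch; the function name, variable name, and the value 4 are only
illustrative) of how I understand the knob. My assumption is that the third
argument to alloc_workqueue() caps how many work items of this queue may
execute concurrently, with 0 meaning "use the default limit", while the
worker threads themselves come from shared pools that the normal CPU
scheduler balances -- please correct me if that is wrong.

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *memcg_bgreclaimq_sketch;

/* Hypothetical variant of memcg_bgreclaim_init(), for illustration only. */
static int __init memcg_bgreclaim_init_sketch(void)
{
	/*
	 * Pass an explicit max_active (here 4) instead of 0 so that at
	 * most four background-reclaim work items run at a time, no
	 * matter how many memcgs have queued work.
	 */
	memcg_bgreclaimq_sketch = alloc_workqueue("memcg",
			WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_FREEZABLE, 4);
	if (!memcg_bgreclaimq_sketch)
		return -ENOMEM;
	return 0;
}
module_init(memcg_bgreclaim_init_sketch);

Again, just a sketch to make the question concrete; I am not suggesting the
patch needs this.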
--Ying

> +module_init(memcg_bgreclaim_init);
> +
> +static void memcg_bgreclaim(struct work_struct *work)
> +{
> +	struct delayed_work *dw = to_delayed_work(work);
> +	struct mem_cgroup *mem =
> +		container_of(dw, struct mem_cgroup, bgreclaim_work);
> +	int delay = 0;
> +	unsigned long long required, usage, hiwat;
> +
> +	hiwat = res_counter_read_u64(&mem->res, RES_HIGH_WMARK_LIMIT);
> +	usage = res_counter_read_u64(&mem->res, RES_USAGE);
> +	required = usage - hiwat;
> +	if (required >= 0) {
> +		required = ((usage - hiwat) >> PAGE_SHIFT) + 1;
> +		delay = shrink_mem_cgroup(mem, (long)required);
> +	}
> +	if (!mem->bgreclaim_resched ||
> +		mem_cgroup_watermark_ok(mem, CHARGE_WMARK_HIGH)) {
> +		cgroup_release_and_wakeup_rmdir(&mem->css);
> +		return;
> +	}
> +	/* need reschedule */
> +	if (!queue_delayed_work(memcg_bgreclaimq, &mem->bgreclaim_work, delay))
> +		cgroup_release_and_wakeup_rmdir(&mem->css);
> +}
> +
> +static void wake_memcg_kswapd(struct mem_cgroup *mem)
> +{
> +	if (delayed_work_pending(&mem->bgreclaim_work))
> +		return;
> +	cgroup_exclude_rmdir(&mem->css);
> +	if (!queue_delayed_work(memcg_bgreclaimq, &mem->bgreclaim_work, 0))
> +		cgroup_release_and_wakeup_rmdir(&mem->css);
> +	return;
> +}
> +
> +static void stop_memcg_kswapd(struct mem_cgroup *mem)
> +{
> +	/*
> +	 * at destroy(), there is no task and we don't need to take care of
> +	 * new bgreclaim work queued. But we need to prevent it from reschedule
> +	 * use bgreclaim_resched to tell no more reschedule.
> +	 */
> +	mem->bgreclaim_resched = false;
> +	flush_delayed_work(&mem->bgreclaim_work);
> +	mem->bgreclaim_resched = true;
> +}
> +
>  /*
>   * This routine traverse page_cgroup in given list and drop them all.
>   * *And* this routine doesn't reclaim page itself, just removes page_cgroup.
> @@ -3742,6 +3825,7 @@ move_account:
>  		ret = -EBUSY;
>  		if (cgroup_task_count(cgrp) || !list_empty(&cgrp->children))
>  			goto out;
> +		stop_memcg_kswapd(mem);
>  		ret = -EINTR;
>  		if (signal_pending(current))
>  			goto out;
> @@ -4804,6 +4888,8 @@ static struct mem_cgroup *mem_cgroup_all
>  	if (!mem->stat)
>  		goto out_free;
>  	spin_lock_init(&mem->pcp_counter_lock);
> +	INIT_DELAYED_WORK(&mem->bgreclaim_work, memcg_bgreclaim);
> +	mem->bgreclaim_resched = true;
>  	return mem;
>
>  out_free:
> Index: memcg/include/linux/memcontrol.h
> ===================================================================
> --- memcg.orig/include/linux/memcontrol.h
> +++ memcg/include/linux/memcontrol.h
> @@ -89,7 +89,7 @@ extern int mem_cgroup_last_scanned_node(
>  extern int mem_cgroup_select_victim_node(struct mem_cgroup *mem,
>  					const nodemask_t *nodes);
>
> -unsigned long shrink_mem_cgroup(struct mem_cgroup *mem);
> +int shrink_mem_cgroup(struct mem_cgroup *mem, long required);
>
>  static inline
>  int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
> Index: memcg/mm/vmscan.c
> ===================================================================
> --- memcg.orig/mm/vmscan.c
> +++ memcg/mm/vmscan.c
> @@ -2373,20 +2373,19 @@ shrink_memcg_node(int nid, int priority,
>  /*
>   * Per cgroup background reclaim.
>   */
> -unsigned long shrink_mem_cgroup(struct mem_cgroup *mem)
> +int shrink_mem_cgroup(struct mem_cgroup *mem, long required)
>  {
> -	int nid, priority, next_prio;
> +	int nid, priority, next_prio, delay;
>  	nodemask_t nodes;
>  	unsigned long total_scanned;
>  	struct scan_control sc = {
>  		.gfp_mask = GFP_HIGHUSER_MOVABLE,
>  		.may_unmap = 1,
>  		.may_swap = 1,
> -		.nr_to_reclaim = SWAP_CLUSTER_MAX,
>  		.order = 0,
>  		.mem_cgroup = mem,
>  	};
> -
> +	/* writepage will be set later per zone */
>  	sc.may_writepage = 0;
>  	sc.nr_reclaimed = 0;
>  	total_scanned = 0;
> @@ -2400,9 +2399,12 @@ unsigned long shrink_mem_cgroup(struct m
>  	 * Now, we scan MEMCG_BGRECLAIM_SCAN_LIMIT pages per scan.
>  	 * We use static priority 0.
>  	 */
> +	sc.nr_to_reclaim = min(required, (long)MEMCG_BGSCAN_LIMIT/2);
>  	next_prio = min(SWAP_CLUSTER_MAX * num_node_state(N_HIGH_MEMORY),
>  			MEMCG_BGSCAN_LIMIT/8);
>  	priority = DEF_PRIORITY;
> +	/* delay for next work at congestion */
> +	delay = HZ/10;
>  	while ((total_scanned < MEMCG_BGSCAN_LIMIT) &&
>  		!nodes_empty(nodes) &&
>  		(sc.nr_to_reclaim > sc.nr_reclaimed)) {
> @@ -2423,12 +2425,17 @@ unsigned long shrink_mem_cgroup(struct m
>  			priority--;
>  			next_prio <<= 1;
>  		}
> -		if (sc.nr_scanned &&
> -			total_scanned > sc.nr_reclaimed * 2)
> -			congestion_wait(WRITE, HZ/10);
> +		/* give up early ? */
> +		if (total_scanned > MEMCG_BGSCAN_LIMIT/8 &&
> +			total_scanned > sc.nr_reclaimed * 4)
> +			goto out;
>  	}
> +	/* We scanned enough...If we reclaimed half of requested, no delay */
> +	if (sc.nr_reclaimed > sc.nr_to_reclaim/2)
> +		delay = 0;
> +out:
>  	current->flags &= ~PF_SWAPWRITE;
> -	return sc.nr_reclaimed;
> +	return delay;
>  }
>  #endif
>
