From: Ying Han <yinghan@google.com>
Date: Tue, 26 Apr 2011 17:03:26 -0700
Subject: Re: [PATCH V2 1/2] change the shrink_slab by passing shrink_control
In-Reply-To: <20110426095524.F348.A69D9226@jp.fujitsu.com>
References: <1303752134-4856-2-git-send-email-yinghan@google.com>
 <20110426094356.F341.A69D9226@jp.fujitsu.com>
 <20110426095524.F348.A69D9226@jp.fujitsu.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin, Minchan Kim, Daisuke Nishimura, Balbir Singh, Tejun Heo,
 Pavel Emelyanov, KAMEZAWA Hiroyuki, Andrew Morton, Li Zefan, Mel Gorman,
 Rik van Riel, Johannes Weiner, Hugh Dickins, Michal Hocko, Dave Hansen,
 Zhu Yanhai, linux-mm@kvack.org

On Mon, Apr 25, 2011 at 5:53 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:

> > > This patch consolidates existing parameters to shrink_slab() to
> > > a new shrink_control struct. This is needed later to pass the same
> > > struct to shrinkers.
> > >
> > > changelog v2..v1:
> > > 1. define a new struct shrink_control and only pass some values down
> > > to the shrinker instead of the scan_control.
> > >
> > > Signed-off-by: Ying Han <yinghan@google.com>
> > > ---
> > >  fs/drop_caches.c   |    6 +++++-
> > >  include/linux/mm.h |   13 +++++++++++--
> > >  mm/vmscan.c        |   30 ++++++++++++++++++++++--------
> > >  3 files changed, 38 insertions(+), 11 deletions(-)
> >
> > Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>
> Sigh. No. This patch seems premature.
>
> > This patch consolidates existing parameters to shrink_slab() to
> > a new shrink_control struct. This is needed later to pass the same
> > struct to shrinkers.
> >
> > changelog v2..v1:
> > 1. define a new struct shrink_control and only pass some values down
> > to the shrinker instead of the scan_control.
> >
> > Signed-off-by: Ying Han <yinghan@google.com>
> > ---
> >  fs/drop_caches.c   |    6 +++++-
> >  include/linux/mm.h |   13 +++++++++++--
> >  mm/vmscan.c        |   30 ++++++++++++++++++++++--------
> >  3 files changed, 38 insertions(+), 11 deletions(-)
> >
> > diff --git a/fs/drop_caches.c b/fs/drop_caches.c
> > index 816f88e..c671290 100644
> > --- a/fs/drop_caches.c
> > +++ b/fs/drop_caches.c
> > @@ -36,9 +36,13 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
> >  static void drop_slab(void)
> >  {
> >  	int nr_objects;
> > +	struct shrink_control shrink = {
> > +		.gfp_mask = GFP_KERNEL,
> > +		.nr_scanned = 1000,
> > +	};
> >
> >  	do {
> > -		nr_objects = shrink_slab(1000, GFP_KERNEL, 1000);
> > +		nr_objects = shrink_slab(&shrink, 1000);
> >  	} while (nr_objects > 10);
> >  }
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 0716517..7a2f657 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1131,6 +1131,15 @@ static inline void sync_mm_rss(struct task_struct *task, struct mm_struct *mm)
> >  #endif
> >
> >  /*
> > + * This struct is used to pass information from page reclaim to the shrinkers.
> > + * We consolidate the values for easier extension later.
> > + */
> > +struct shrink_control {
> > +	unsigned long nr_scanned;
>
> nr_to_scan is better; sc.nr_scanned means how many pages have
> _already_ been scanned.
Ok, the name is changed.

> e.g.
>	scan_control {
>	(snip)
>		/* Number of pages freed so far during a call to shrink_zones() */
>		unsigned long nr_reclaimed;
>
>		/* How many pages shrink_list() should reclaim */
>		unsigned long nr_to_reclaim;
>
> > +	gfp_t gfp_mask;
> > +};
> > +
> > +/*
> >   * A callback you can register to apply pressure to ageable caches.
> >   *
> >   * 'shrink' is passed a count 'nr_to_scan' and a 'gfpmask'.  It should
> > @@ -1601,8 +1610,8 @@ int in_gate_area_no_task(unsigned long addr);
> >
> >  int drop_caches_sysctl_handler(struct ctl_table *, int,
> >  			void __user *, size_t *, loff_t *);
> > -unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
> > -			unsigned long lru_pages);
> > +unsigned long shrink_slab(struct shrink_control *shrink,
> > +			unsigned long lru_pages);
> >
> >  #ifndef CONFIG_MMU
> >  #define randomize_va_space 0
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 060e4c1..40edf73 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -220,11 +220,13 @@ EXPORT_SYMBOL(unregister_shrinker);
> >   *
> >   * Returns the number of slab objects which we shrunk.
> >   */
> > -unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
> > -			unsigned long lru_pages)
> > +unsigned long shrink_slab(struct shrink_control *shrink,
> > +			unsigned long lru_pages)
> >  {
> >  	struct shrinker *shrinker;
> >  	unsigned long ret = 0;
> > +	unsigned long scanned = shrink->nr_scanned;
> > +	gfp_t gfp_mask = shrink->gfp_mask;
> >
> >  	if (scanned == 0)
> >  		scanned = SWAP_CLUSTER_MAX;
> > @@ -2032,7 +2034,8 @@ static bool all_unreclaimable(struct zonelist *zonelist,
> >   *		else, the number of pages reclaimed
> >   */
> >  static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> > -					struct scan_control *sc)
> > +					struct scan_control *sc,
> > +					struct shrink_control *shrink)
> >  {
>
> Worthless argument addition. gfp_mask can be obtained from scan_control,
> and .nr_scanned is calculated in this function.

Changed.
> >  	int priority;
> >  	unsigned long total_scanned = 0;
> > @@ -2066,7 +2069,8 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> >  				lru_pages += zone_reclaimable_pages(zone);
> >  			}
> >
> > -			shrink_slab(sc->nr_scanned, sc->gfp_mask, lru_pages);
> > +			shrink->nr_scanned = sc->nr_scanned;
> > +			shrink_slab(shrink, lru_pages);
> >  			if (reclaim_state) {
> >  				sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> >  				reclaim_state->reclaimed_slab = 0;
> > @@ -2130,12 +2134,15 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> >  		.mem_cgroup = NULL,
> >  		.nodemask = nodemask,
> >  	};
> > +	struct shrink_control shrink = {
> > +		.gfp_mask = sc.gfp_mask,
> > +	};
> >
> >  	trace_mm_vmscan_direct_reclaim_begin(order,
> >  				sc.may_writepage,
> >  				gfp_mask);
> >
> > -	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
> > +	nr_reclaimed = do_try_to_free_pages(zonelist, &sc, &shrink);
> >
> >  	trace_mm_vmscan_direct_reclaim_end(nr_reclaimed);
> >
> > @@ -2333,6 +2340,9 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> >  		.order = order,
> >  		.mem_cgroup = NULL,
> >  	};
> > +	struct shrink_control shrink = {
> > +		.gfp_mask = sc.gfp_mask,
> > +	};
> >  loop_again:
> >  	total_scanned = 0;
> >  	sc.nr_reclaimed = 0;
> > @@ -2432,8 +2442,8 @@ loop_again:
> >  						end_zone, 0))
> >  				shrink_zone(priority, zone, &sc);
> >  			reclaim_state->reclaimed_slab = 0;
> > -			nr_slab = shrink_slab(sc.nr_scanned, GFP_KERNEL,
> > -						lru_pages);
> > +			shrink.nr_scanned = sc.nr_scanned;
> > +			nr_slab = shrink_slab(&shrink, lru_pages);
> >  			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
> >  			total_scanned += sc.nr_scanned;
> >
> > @@ -2969,6 +2979,9 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> >  		.swappiness = vm_swappiness,
> >  		.order = order,
> >  	};
> > +	struct shrink_control shrink = {
> > +		.gfp_mask = sc.gfp_mask,
> > +	};
> >  	unsigned long nr_slab_pages0, nr_slab_pages1;
> >
> >  	cond_resched();
> > @@ -2995,6 +3008,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> >  	}
> >
> >  	nr_slab_pages0 = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> > +	shrink.nr_scanned = sc.nr_scanned;
> >  	if (nr_slab_pages0 > zone->min_slab_pages) {
>
> Strange; this assignment should be moved inside this if block.

Changed.

> >  		/*
> >  		 * shrink_slab() does not currently allow us to determine how
> > @@ -3010,7 +3024,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> >  			unsigned long lru_pages = zone_reclaimable_pages(zone);
> >
> >  			/* No reclaimable slab or very low memory pressure */
> > -			if (!shrink_slab(sc.nr_scanned, gfp_mask, lru_pages))
> > +			if (!shrink_slab(&shrink, lru_pages))
> >  				break;
> >
> >  			/* Freed enough memory */
> > --
> > 1.7.3.1
> >
> > --
> > To unsubscribe, send a message with 'unsubscribe linux-mm' in
> > the body to majordomo@kvack.org.  For more info on Linux MM,
> > see: http://www.linux-mm.org/ .
> > Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> > Don't email: email@kvack.org