From: Dave Chinner <david@fromorbit.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Glauber Costa <glommer@openvz.org>,
	linux-fsdevel@vger.kernel.org, Mel Gorman <mgorman@suse.de>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	kamezawa.hiroyu@jp.fujitsu.com, Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	hughd@google.com, Greg Thelen <gthelen@google.com>,
	Dave Chinner <dchinner@redhat.com>
Subject: Re: [PATCH v10 13/35] vmscan: per-node deferred work
Date: Thu, 6 Jun 2013 13:37:42 +1000
Message-ID: <20130606033742.GS29338@dastard>
In-Reply-To: <20130605160815.fb69f7d4d1736455727fc669@linux-foundation.org>

On Wed, Jun 05, 2013 at 04:08:15PM -0700, Andrew Morton wrote:
> On Mon,  3 Jun 2013 23:29:42 +0400 Glauber Costa <glommer@openvz.org> wrote:
> 
> > We already keep per-node LRU lists for objects being shrunk, but the
> > work that is deferred from one run to another is kept global. This
> > creates an impedance problem, where upon node pressure, work deferred
> > will accumulate and end up being flushed in other nodes.
> 
> This changelog would be more useful if it had more specificity.  Where
> do we keep these per-node LRU lists (names of variables?).

In the per-node LRU lists the shrinker walks ;)

> Where do we
> keep the global data? 

In the struct shrinker

> In what function does this other-node flushing
> happen?

Any shrinker that is run on a different node.

> Generally so that readers can go and look at the data structures and
> functions which you're talking about.
> 
> > In large machines, many nodes can accumulate at the same time, all
> > adding to the global counter.
> 
> What global counter?

shrinker->nr

> >  As we accumulate more and more, we start
> > to ask for the caches to flush even bigger numbers.
> 
> Where does this happen?

The shrinker scan loop ;)
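
To be concrete: the global counter is the deferred-work field in the
struct shrinker (shrinker->nr in the old code; this patch replaces it
with a per-node array). A minimal sketch of both shapes, with field
names approximating the real code rather than quoting it:

	/* Sketch only - field names approximate, not the literal patch. */
	struct shrinker {
		long (*count_objects)(struct shrinker *,
				      struct shrink_control *sc);
		long (*scan_objects)(struct shrinker *,
				     struct shrink_control *sc);
		int seeks;	/* seeks to recreate an object */
		long batch;	/* reclaim batch size, 0 = default */
		struct list_head list;

		/* old: one global deferred count shared by all nodes */
		atomic_long_t nr_in_batch;

		/* new: one deferred count per NUMA node */
		atomic_long_t *nr_deferred;
	};

Each shrink_slab() pass claims that count and adds its own
proportional delta to it - that's the scan loop referred to above.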

> > @@ -186,6 +208,116 @@ static inline int do_shrinker_shrink(struct shrinker *shrinker,
> >  }
> >  
> >  #define SHRINK_BATCH 128
> > +
> > +static unsigned long
> > +shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
> > +		 unsigned long nr_pages_scanned, unsigned long lru_pages,
> > +		 atomic_long_t *deferred)
> > +{
> > +	unsigned long freed = 0;
> > +	unsigned long long delta;
> > +	long total_scan;
> > +	long max_pass;
> > +	long nr;
> > +	long new_nr;
> > +	long batch_size = shrinker->batch ? shrinker->batch
> > +					  : SHRINK_BATCH;
> > +
> > +	if (shrinker->scan_objects) {
> > +		max_pass = shrinker->count_objects(shrinker, shrinkctl);
> > +		WARN_ON(max_pass < 0);
> > +	} else
> > +		max_pass = do_shrinker_shrink(shrinker, shrinkctl, 0);
> > +	if (max_pass <= 0)
> > +		return 0;
> > +
> > +	/*
> > +	 * copy the current shrinker scan count into a local variable
> > +	 * and zero it so that other concurrent shrinker invocations
> > +	 * don't also do this scanning work.
> > +	 */
> > +	nr = atomic_long_xchg(deferred, 0);
> 
> This comment seems wrong.  It implies that "deferred" refers to "the
> current shrinker scan count".  But how are these two the same thing?  A
> "scan count" would refer to the number of objects to be scanned (or
> which were scanned - it's unclear).  Whereas "deferred" would refer to
> the number of those to-be-scanned objects which we didn't process and
> is hence less than or equal to the "scan count".
> 
> It's all very foggy :(  This whole concept of deferral needs more
> explanation, please.

You wrote the shrinker deferral code way back in 2.5.42 (IIRC), so
maybe you can explain it to us? :)
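
But to spell it out: "deferred" is scan work a previous shrinker call
was asked to do but couldn't complete (typically because a GFP_NOFS
caller can't take filesystem locks), carried over so that a later call
that *can* make progress picks it up. A rough sketch of the cycle, not
the literal patch code:

	/* claim all previously deferred work for this call */
	nr = atomic_long_xchg(deferred, 0);
	total_scan = nr + delta;	/* plus this call's share */

	while (total_scan >= batch_size) {
		ret = do_shrinker_shrink(shrinker, shrinkctl, batch_size);
		if (ret == -1)		/* e.g. GFP_NOFS, no work possible */
			break;
		freed += ret;
		total_scan -= batch_size;
		cond_resched();
	}

	/* whatever we didn't scan is deferred to the next caller */
	atomic_long_add(total_scan, deferred);

So the "scan count" and the "deferred count" are the same number at
different points in the cycle: unscanned work becomes deferral.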

> 
> > +	total_scan = nr;
> > +	delta = (4 * nr_pages_scanned) / shrinker->seeks;
> > +	delta *= max_pass;
> > +	do_div(delta, lru_pages + 1);
> > +	total_scan += delta;
> > +	if (total_scan < 0) {
> > +		printk(KERN_ERR
> > +		"shrink_slab: %pF negative objects to delete nr=%ld\n",
> > +		       shrinker->shrink, total_scan);
> > +		total_scan = max_pass;
> > +	}
> > +
> > +	/*
> > +	 * We need to avoid excessive windup on filesystem shrinkers
> > +	 * due to large numbers of GFP_NOFS allocations causing the
> > +	 * shrinkers to return -1 all the time. This results in a large
> > +	 * nr being built up so when a shrink that can do some work
> > +	 * comes along it empties the entire cache due to nr >>>
> > +	 * max_pass.  This is bad for sustaining a working set in
> > +	 * memory.
> > +	 *
> > +	 * Hence only allow the shrinker to scan the entire cache when
> > +	 * a large delta change is calculated directly.
> > +	 */
> 
> That was an important comment.  So the whole problem we're tackling
> here is fs shrinkers baling out in GFP_NOFS allocations?

commit 3567b59aa80ac4417002bf58e35dce5c777d4164
Author: Dave Chinner <dchinner@redhat.com>
Date:   Fri Jul 8 14:14:36 2011 +1000

    vmscan: reduce wind up shrinker->nr when shrinker can't do work
    
    When a shrinker returns -1 to shrink_slab() to indicate it cannot do
    any work given the current memory reclaim requirements, it adds the
    entire total_scan count to shrinker->nr. The idea behind this is that
    when the shrinker is next called and can do work, it will do the work
    of the previously aborted shrinker call as well.
    
    However, if a filesystem is doing lots of allocation with GFP_NOFS
    set, then we get many, many more aborts from the shrinkers than we
    do successful calls. The result is that shrinker->nr winds up to
    its maximum permissible value (twice the current cache size) and
    then when the next shrinker call that can do work is issued, it
    has enough scan count built up to free the entire cache twice over.
    
    This manifests itself in the cache going from full to empty in a
    matter of seconds, even when only a small part of the cache needs
    to be emptied to free sufficient memory.
    
    Under metadata intensive workloads on ext4 and XFS, I'm seeing the
    VFS caches increase memory consumption up to 75% of memory (no page
    cache pressure) over a period of 30-60s, and then the shrinker
    empties them down to zero in the space of 2-3s. This cycle repeats
    over and over again, with the shrinker completely trashing the inode
    and dentry caches every minute or so for as long as the workload
    continues.
    
    This behaviour was made obvious by the shrink_slab tracepoints added
    earlier in the series, and made worse by the patch that corrected
    the concurrent accounting of shrinker->nr.
    
    To avoid this problem, stop repeated small increments of the total
    scan value from winding shrinker->nr up to a value that can cause
    the entire cache to be freed. We still need to allow it to wind up,
    so use the delta as the "large scan" threshold check - if the delta
    is more than a quarter of the entire cache size, then it is a large
    scan and allowed to cause lots of windup because we are clearly
    needing to free lots of memory.
    
    If it isn't a large scan then limit the total scan to half the size
    of the cache so that windup never increases to consume the whole
    cache. Reducing the total scan limit further does not allow enough
    wind-up to maintain the current levels of performance, whilst a
    higher threshold does not prevent the windup from freeing the entire
    cache under sustained workloads.
    
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
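
In code form, the limiting that commit describes comes down to a
couple of clamps in shrink_slab(), paraphrased from memory rather
than quoted verbatim:

	/*
	 * Only a large delta (more than a quarter of the cache) is
	 * allowed to wind total_scan up far enough to empty the whole
	 * cache; repeated small deferrals are capped at half of it.
	 */
	if (delta < max_pass / 4)
		total_scan = min(total_scan, max_pass / 2);

	/* never wind up past twice the cache size */
	if (total_scan > max_pass * 2)
		total_scan = max_pass * 2;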



-- 
Dave Chinner
david@fromorbit.com
