linux-fsdevel.vger.kernel.org archive mirror
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Glauber Costa <glommer@parallels.com>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	containers@lists.linux-foundation.org,
	Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	kamezawa.hiroyu@jp.fujitsu.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Thelen <gthelen@google.com>,
	hughd@google.com, yinghan@google.com,
	Theodore Ts'o <tytso@mit.edu>, Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH v2 02/28] vmscan: take at least one pass with shrinkers
Date: Wed, 10 Apr 2013 11:51:16 +0900	[thread overview]
Message-ID: <20130410025115.GA5872@lge.com> (raw)
In-Reply-To: <20130409123008.GM17758@dastard>

Hello, Dave.

On Tue, Apr 09, 2013 at 10:30:08PM +1000, Dave Chinner wrote:
> On Tue, Apr 09, 2013 at 11:05:05AM +0900, Joonsoo Kim wrote:
> > On Tue, Apr 09, 2013 at 11:29:31AM +1000, Dave Chinner wrote:
> > > On Tue, Apr 09, 2013 at 09:55:47AM +0900, Joonsoo Kim wrote:
> > > > lowmemkiller makes spare memory via killing a task.
> > > > 
> > > > Below is code from lowmem_shrink() in lowmemorykiller.c
> > > > 
> > > >         for (i = 0; i < array_size; i++) {
> > > >                 if (other_free < lowmem_minfree[i] &&
> > > >                     other_file < lowmem_minfree[i]) {
> > > >                         min_score_adj = lowmem_adj[i];
> > > >                         break;
> > > >                 }   
> > > >         } 
> > > 
> > > I don't think you understand what the current lowmemkiller shrinker
> > > hackery actually does.
> > > 
> > >         rem = global_page_state(NR_ACTIVE_ANON) +
> > >                 global_page_state(NR_ACTIVE_FILE) +
> > >                 global_page_state(NR_INACTIVE_ANON) +
> > >                 global_page_state(NR_INACTIVE_FILE);
> > >         if (sc->nr_to_scan <= 0 || min_score_adj == OOM_SCORE_ADJ_MAX + 1) {
> > >                 lowmem_print(5, "lowmem_shrink %lu, %x, return %d\n",
> > >                              sc->nr_to_scan, sc->gfp_mask, rem);
> > >                 return rem;
> > >         }
> > > 
> > > So, when nr_to_scan == 0 (i.e. the count phase), the shrinker is
> > > going to return a count of active/inactive pages in the cache. That
> > > is almost always going to be non-zero, and almost always be > 1000
> > > because of the minimum working set needed to run the system.
> > > Even after applying the seek count adjustment, total_scan is almost
> > > always going to be larger than the shrinker default batch size of
> > > 128, and that means this shrinker will almost always run at least
> > > once per shrink_slab() call.
> > 
> > I don't think so.
> > Yes, lowmem_shrink() returns the number of (in)active lru pages
> > when nr_to_scan is 0. And in shrink_slab(), we divide it by lru_pages.
> > lru_pages can vary depending on where shrink_slab() is called, so
> > perhaps this logic keeps total_scan below 128.
> 
> "perhaps"
> 
> 
> There is no "perhaps" here - there is *zero* guarantee of the
> behaviour you are claiming the lowmem killer shrinker is dependent
> on with the existing shrinker infrastructure. So, lets say we have:
> 
> 	nr_pages_scanned = 1000
> 	lru_pages = 100,000
> 
> Your shrinker is going to return 100,000 when nr_to_scan = 0. So,
> we have:
> 
> 	batch_size = SHRINK_BATCH = 128
> 	max_pass = 100,000
> 
> 	total_scan = shrinker->nr_in_batch = 0
> 	delta = 4 * 1000 / 32 = 125
> 	delta = 125 * 100,000 = 12,500,000
> 	delta = 12,500,000 / 100,001 = 124
> 	total_scan += delta = 124
> 
> Assuming the LRU page count does not change(*), nr_pages_scanned is
> irrelevant and delta always comes in just below the batch size, so the
> shrinker is not called. The remainder is then:
> 
> 	shrinker->nr_in_batch += total_scan = 124
> 
> (*) the lru page count will change, because reclaim and shrinkers
> run concurrently, and so we can't even make a simple contrived case
> where delta is consistently < batch_size here.
> 
> Anyway, the next time the shrinker is entered, we start with:
> 
> 	total_scan = shrinker->nr_in_batch = 124
> 	.....
> 	total_scan += delta = 248
> 
> 	<shrink once, total_scan -= batch_size = 120>
> 
> 	shrinker->nr_in_batch += total_scan = 120
> 
> And so on for all the subsequent shrink_slab calls....
> 
> IOWs, this algorithm effectively causes the shrinker to be called on
> roughly 31 out of every 32 shrink_slab() calls in this arbitrary
> scenario. It does not behave as you are assuming it to, and as such
> any code based on those assumptions is broken....

Thanks for the good example. I got your point :)
But my concern is not entirely resolved, because this is not a problem
just for the lowmem killer; I can think of a counter example, and other
drivers can suffer from this change.

I looked at the code for the "huge_zero_page_shrinker".
It returns HPAGE_PMD_NR if there is a shrinkable object.

Let me borrow your example for this case.

 	nr_pages_scanned = 1,000
 	lru_pages = 100,000
 	batch_size = SHRINK_BATCH = 128
 	max_pass = 512 (HPAGE_PMD_NR)
 
 	total_scan = shrinker->nr_in_batch = 0
 	delta = 4 * 1,000 / 2 = 2,000
 	delta = 2,000 * 512 = 1,024,000
 	delta = 1,024,000 / 100,001 = 10
 	total_scan += delta = 10

As you can see, before this patch, do_shrinker_shrink() for the
"huge_zero_page_shrinker" is not called until shrink_slab() has been
called about 13 times. The *frequency* with which we actually call
do_shrinker_shrink() is therefore very different from before: with this
patch, we call do_shrinker_shrink() for the "huge_zero_page_shrinker"
roughly 12 times more often than before. Can we be confident that there
will be no problem?

This is why I worry about this change.
Am I worried too much? :)

Let me show another scenario I have in mind for the lowmem killer.

In reality, 'nr_pages_scanned' reflects sc->priority.
You can see this in get_scan_count() in vmscan.c:

	size = get_lru_size(lruvec, lru);
	scan = size >> sc->priority;

So, let me re-construct your example with the above assumption.

If sc->priority is DEF_PRIORITY (12):

 	nr_pages_scanned = 24 (100,000 >> 12)
 	lru_pages = 100,000
 	batch_size = SHRINK_BATCH = 128
 	max_pass = 100,000
 
 	total_scan = shrinker->nr_in_batch = 0
 	delta = 4 * 24 / 32 = 3
 	delta = 3 * 100,000 = 300,000
 	delta = 300,000 / 100,001 = 2
 	total_scan += delta = 2

So, do_shrinker_shrink() is not called for the lowmem killer until
shrink_slab() has been called about 64 times when sc->priority is
DEF_PRIORITY. So, AFAICT, if we don't have too much trouble reclaiming
memory, it will not be triggered frequently.

I like this patchset, and I think the shrink_slab interface should be
re-worked. What I want to say is just that this patch is not a trivial
change, and users should be notified so that they can test it.
To say it again, I don't want to become a blocker for this patchset :)

Please let me know what I am missing.

Thanks.

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 


Thread overview: 95+ messages
2013-03-29  9:13 [PATCH v2 00/28] memcg-aware slab shrinking Glauber Costa
     [not found] ` <1364548450-28254-1-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-03-29  9:13   ` [PATCH v2 01/28] super: fix calculation of shrinkable objects for small numbers Glauber Costa
     [not found]     ` <1364548450-28254-2-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-01  7:16       ` Kamezawa Hiroyuki
2013-03-29  9:13   ` [PATCH v2 02/28] vmscan: take at least one pass with shrinkers Glauber Costa
2013-04-01  7:26     ` Kamezawa Hiroyuki
2013-04-01  8:10       ` Glauber Costa
     [not found]         ` <515940E4.8050704-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-10  5:09           ` Ric Mason
2013-04-10  7:32             ` Glauber Costa
2013-04-10  9:19             ` Dave Chinner
2013-04-08  8:42     ` Joonsoo Kim
2013-04-08  8:47       ` Glauber Costa
2013-04-08  9:01         ` Joonsoo Kim
2013-04-08  9:05           ` Glauber Costa
2013-04-09  0:55             ` Joonsoo Kim
     [not found]               ` <20130409005547.GC21654-Hm3cg6mZ9cc@public.gmane.org>
2013-04-09  1:29                 ` Dave Chinner
2013-04-09  2:05                   ` Joonsoo Kim
2013-04-09  7:43                     ` Glauber Costa
     [not found]                       ` <5163C6A5.5050307-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-09  9:08                         ` Joonsoo Kim
     [not found]                     ` <20130409020505.GA4218-Hm3cg6mZ9cc@public.gmane.org>
2013-04-09 12:30                       ` Dave Chinner
2013-04-10  2:51                         ` Joonsoo Kim [this message]
2013-04-10  7:30                           ` Glauber Costa
     [not found]                             ` <51651518.4010007-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-10  8:19                               ` Joonsoo Kim
     [not found]                           ` <20130410025115.GA5872-Hm3cg6mZ9cc@public.gmane.org>
2013-04-10  8:46                             ` Wanpeng Li
2013-04-10  8:46                           ` Wanpeng Li
     [not found]                           ` <20130410084606.GA10235@hacker.(null)>
2013-04-10 10:07                             ` Dave Chinner
2013-04-10 14:03                               ` JoonSoo Kim
     [not found]                                 ` <CAAmzW4OMyZ=nVbHK_AiifPK5LVxvhOQUXmsD5NGfo33CBjf=eA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-04-11  0:41                                   ` Dave Chinner
2013-04-11  7:27                                     ` Wanpeng Li
2013-04-11  7:27                                     ` Wanpeng Li
     [not found]                                     ` <20130411072729.GA3605@hacker.(null)>
2013-04-11  9:25                                       ` Dave Chinner
2013-03-29  9:13 ` [PATCH v2 03/28] dcache: convert dentry_stat.nr_unused to per-cpu counters Glauber Costa
2013-04-05  1:09   ` Greg Thelen
     [not found]     ` <xr93r4ipkcl0.fsf-aSPv4SP+Du0KgorLzL7FmE7CuiCeIGUxQQ4Iyu8u01E@public.gmane.org>
2013-04-05  1:15       ` Dave Chinner
2013-04-08  9:14         ` Glauber Costa
2013-04-08 13:18           ` Glauber Costa
     [not found]           ` <51628A88.2090002-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-08 23:26             ` Dave Chinner
2013-04-09  8:02               ` Glauber Costa
     [not found]                 ` <5163CB0D.1040000-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-09 12:47                   ` Dave Chinner
2013-03-29  9:13 ` [PATCH v2 04/28] dentry: move to per-sb LRU locks Glauber Costa
2013-03-29  9:13 ` [PATCH v2 05/28] dcache: remove dentries from LRU before putting on dispose list Glauber Costa
     [not found]   ` <1364548450-28254-6-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-03  6:51     ` Sha Zhengju
2013-04-03  8:55       ` Glauber Costa
2013-04-04  6:19       ` Dave Chinner
2013-04-04  6:56         ` Glauber Costa
2013-03-29  9:13 ` [PATCH v2 06/28] mm: new shrinker API Glauber Costa
2013-04-05  1:09   ` Greg Thelen
2013-03-29  9:13 ` [PATCH v2 07/28] shrinker: convert superblock shrinkers to new API Glauber Costa
2013-03-29  9:13 ` [PATCH v2 08/28] list: add a new LRU list type Glauber Costa
2013-04-04 21:53   ` Greg Thelen
2013-04-05  1:20     ` Dave Chinner
2013-04-05  8:01       ` Glauber Costa
2013-04-06  0:04         ` Dave Chinner
2013-03-29  9:13 ` [PATCH v2 09/28] inode: convert inode lru list to generic lru list code Glauber Costa
2013-03-29  9:13 ` [PATCH v2 10/28] dcache: convert to use new lru list infrastructure Glauber Costa
2013-04-08 13:14   ` Glauber Costa
     [not found]     ` <5162C2C4.7010807-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-08 23:28       ` Dave Chinner
2013-03-29  9:13 ` [PATCH v2 11/28] list_lru: per-node " Glauber Costa
2013-03-29  9:13 ` [PATCH v2 12/28] shrinker: add node awareness Glauber Costa
2013-03-29  9:13 ` [PATCH v2 13/28] fs: convert inode and dentry shrinking to be node aware Glauber Costa
2013-03-29  9:13 ` [PATCH v2 14/28] xfs: convert buftarg LRU to generic code Glauber Costa
2013-03-29  9:13 ` [PATCH v2 15/28] xfs: convert dquot cache lru to list_lru Glauber Costa
2013-03-29  9:13 ` [PATCH v2 16/28] fs: convert fs shrinkers to new scan/count API Glauber Costa
2013-03-29  9:13 ` [PATCH v2 17/28] drivers: convert shrinkers to new count/scan API Glauber Costa
2013-03-29  9:14 ` [PATCH v2 18/28] shrinker: convert remaining shrinkers to " Glauber Costa
2013-03-29  9:14 ` [PATCH v2 19/28] hugepage: convert huge zero page shrinker to new shrinker API Glauber Costa
2013-03-29  9:14 ` [PATCH v2 20/28] shrinker: Kill old ->shrink API Glauber Costa
2013-03-29  9:14 ` [PATCH v2 21/28] vmscan: also shrink slab in memcg pressure Glauber Costa
2013-04-01  7:46   ` Kamezawa Hiroyuki
2013-04-01  8:51     ` Glauber Costa
     [not found]   ` <1364548450-28254-22-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-03 10:11     ` Sha Zhengju
2013-04-03 10:43       ` Glauber Costa
2013-04-04  9:35         ` Sha Zhengju
2013-04-05  8:25           ` Glauber Costa
2013-03-29  9:14 ` [PATCH v2 22/28] memcg,list_lru: duplicate LRUs upon kmemcg creation Glauber Costa
     [not found]   ` <1364548450-28254-23-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-01  8:05     ` [PATCH v2 22/28] memcg, list_lru: " Kamezawa Hiroyuki
2013-04-01  8:22       ` [PATCH v2 22/28] memcg,list_lru: " Glauber Costa
2013-03-29  9:14 ` [PATCH v2 23/28] lru: add an element to a memcg list Glauber Costa
2013-04-01  8:18   ` Kamezawa Hiroyuki
2013-04-01  8:29     ` Glauber Costa
2013-03-29  9:14 ` [PATCH v2 24/28] list_lru: also include memcg lists in counts and scans Glauber Costa
2013-03-29  9:14 ` [PATCH v2 25/28] list_lru: per-memcg walks Glauber Costa
2013-03-29  9:14 ` [PATCH v2 26/28] memcg: per-memcg kmem shrinking Glauber Costa
2013-04-01  8:31   ` Kamezawa Hiroyuki
2013-04-01  8:48     ` Glauber Costa
     [not found]       ` <515949EB.7020400-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-01  9:01         ` Kamezawa Hiroyuki
2013-04-01  9:14           ` Glauber Costa
2013-04-01  9:35           ` Kamezawa Hiroyuki
2013-03-29  9:14 ` [PATCH v2 27/28] list_lru: reclaim proportionaly between memcgs and nodes Glauber Costa
2013-03-29  9:14 ` [PATCH v2 28/28] super: targeted memcg reclaim Glauber Costa
2013-04-01 12:38 ` [PATCH v2 00/28] memcg-aware slab shrinking Serge Hallyn
2013-04-01 12:45   ` Glauber Costa
     [not found]     ` <51598168.4050404-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
2013-04-01 14:12       ` Serge Hallyn
2013-04-08  8:11         ` Glauber Costa
2013-04-02  4:58   ` Dave Chinner
2013-04-02  7:55     ` Glauber Costa
