From: Minchan Kim <minchan.kim@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
	Balbir Singh <balbir@linux.vnet.ibm.com>,
	Ying Han <yinghan@google.com>, Michal Hocko <mhocko@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Rik van Riel <riel@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Mel Gorman <mgorman@suse.de>, Greg Thelen <gthelen@google.com>,
	Michel Lespinasse <walken@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 2/8] mm: memcg-aware global reclaim
Date: Fri, 10 Jun 2011 09:48:38 +0900	[thread overview]
Message-ID: <BANLkTinwCFgocsPOvutV-s4Z33-+YFRJfw@mail.gmail.com> (raw)
In-Reply-To: <20110610003407.GA27964@cmpxchg.org>

On Fri, Jun 10, 2011 at 9:34 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> On Fri, Jun 10, 2011 at 08:47:55AM +0900, Minchan Kim wrote:
>> On Fri, Jun 10, 2011 at 8:41 AM, Minchan Kim <minchan.kim@gmail.com> wrote:
>> > On Fri, Jun 10, 2011 at 2:23 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
>> >> On Fri, Jun 10, 2011 at 12:48:39AM +0900, Minchan Kim wrote:
>> >>> On Wed, Jun 01, 2011 at 08:25:13AM +0200, Johannes Weiner wrote:
>> >>> > When a memcg hits its hard limit, hierarchical target reclaim is
>> >>> > invoked, which goes through all contributing memcgs in the hierarchy
>> >>> > below the offending memcg and reclaims from the respective per-memcg
>> >>> > lru lists.  This distributes pressure fairly among all involved
>> >>> > memcgs, and pages are aged with respect to their list buddies.
>> >>> >
>> >>> > When global memory pressure arises, however, all this is dropped
>> >>> > overboard.  Pages are reclaimed based on global lru lists that have
>> >>> > nothing to do with container-internal age, and some memcgs may be
>> >>> > reclaimed from much more than others.
>> >>> >
>> >>> > This patch makes traditional global reclaim consider container
>> >>> > boundaries and no longer scan the global lru lists.  For each zone
>> >>> > scanned, the memcg hierarchy is walked and pages are reclaimed from
>> >>> > the per-memcg lru lists of the respective zone.  For now, the
>> >>> > hierarchy walk is bounded to one full round-trip through the
>> >>> > hierarchy, or until the number of reclaimed pages reaches the overall
>> >>> > reclaim target, whichever comes first.
>> >>> >
>> >>> > Conceptually, global memory pressure is then treated as if the root
>> >>> > memcg had hit its limit.  Since all existing memcgs contribute to the
>> >>> > usage of the root memcg, global reclaim is nothing more than target
>> >>> > reclaim starting from the root memcg.  The code is mostly the same for
>> >>> > both cases, except for a few heuristics and statistics that do not
>> >>> > always apply.  They are distinguished by a newly introduced
>> >>> > global_reclaim() primitive.
>> >>> >
>> >>> > One implication of this change is that pages have to be linked to the
>> >>> > lru lists of the root memcg again, which could be optimized away with
>> >>> > the old scheme.  The costs are not measurable, though, even with
>> >>> > worst-case microbenchmarks.
>> >>> >
>> >>> > As global reclaim no longer relies on global lru lists, this change is
>> >>> > also in preparation to remove those completely.
>> >>
>> >> [cut diff]
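
To make the described walk concrete, here is a minimal, self-contained C
sketch of the per-zone hierarchy round-trip.  The struct layout and the
helper names are illustrative only, not the symbols the patch actually
introduces:

struct mem_cgroup;
struct zone;

struct scan_control {
        unsigned long nr_to_reclaim;    /* overall reclaim target */
        unsigned long nr_reclaimed;     /* progress so far */
        struct mem_cgroup *target;      /* memcg that hit its limit, or the
                                           root memcg for global reclaim */
};

/* Illustrative helpers, not the real kernel API. */
struct mem_cgroup *hierarchy_next(struct mem_cgroup *root,
                                  struct mem_cgroup *prev);
unsigned long shrink_memcg_zone(int priority, struct zone *zone,
                                struct mem_cgroup *memcg,
                                struct scan_control *sc);

/*
 * One pass over a zone: visit each memcg in the hierarchy below
 * sc->target at most once, reclaim from its per-memcg lru lists in this
 * zone, and stop early once the overall reclaim target is met.
 */
static void shrink_zone_hierarchy(int priority, struct zone *zone,
                                  struct scan_control *sc)
{
        struct mem_cgroup *memcg = NULL;

        while ((memcg = hierarchy_next(sc->target, memcg))) {
                sc->nr_reclaimed +=
                        shrink_memcg_zone(priority, zone, memcg, sc);
                if (sc->nr_reclaimed >= sc->nr_to_reclaim)
                        break;
        }
}
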
>> >>
>> >>> I haven't looked at the whole series yet, and you might change the
>> >>> logic in later patches. If I understand this patch right, it reclaims
>> >>> from all memcgs in round-robin fashion when global memory pressure
>> >>> happens.
>> >>>
>> >>> Let's consider a case where the memcg sizes are unbalanced.
>> >>>
>> >>> If A-memcg has lots of LRU pages, its scan count for reclaim is bigger,
>> >>> so the chance of reclaiming pages from it is higher.
>> >>> When we reclaim from A-memcg, we can easily reclaim the number of pages
>> >>> we want and break out. The next reclaim happens some time later and
>> >>> continues from B-memcg, the memcg after the A-memcg we reclaimed from
>> >>> successfully before. But unfortunately B-memcg has a small LRU, so its
>> >>> scan count is small, and the small memcg's LRU ages faster than the
>> >>> bigger memcg's. That means a small memcg's working set can be evicted
>> >>> more easily than a big memcg's.
>> >>> My point is that we should not simply move on to the next memcg;
>> >>> we have to consider the memcg's LRU size.
>> >>
>> >> I may be missing something, but you said yourself that B has a smaller
>> >> scan count than A, so the aging speed should be proportional to their
>> >> respective sizes.
>> >>
>> >> The number of pages scanned per iteration is essentially
>> >>
>> >>        number of lru pages in memcg-zone >> priority
>> >>
>> >> so we scan relatively more pages from B than from A each round.
>> >>
>> >> It's the exact same logic we have been applying traditionally to
>> >> distribute pressure fairly among zones to equalize their aging speed.
>> >>
>> >> Is that what you meant or are we talking past each other?
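
As a quick worked example of that formula (the helper name here is made
up, and DEF_PRIORITY is 12 in the kernel):

/*
 * Per-iteration scan target for one memcg-zone lru, per the formula
 * quoted above.  At the default priority of 12:
 *   1,000,000 lru pages >> 12 = ~244 pages scanned per round
 *      10,000 lru pages >> 12 = ~2 pages scanned per round
 * so scan pressure stays roughly proportional to lru size.
 */
static unsigned long scan_target(unsigned long lru_pages, int priority)
{
        return lru_pages >> priority;
}
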
>> >
>> > True, if we can reclaim pages easily (i.e., at default priority) in all
>> > memcgs. But let's think about it.
>> > Normally the direct reclaim path reclaims only SWAP_CLUSTER_MAX pages.
>> > If we have a small memcg, its scan window is smaller, so reclaiming at
>> > a given priority is harder than in a bigger memcg. That means the
>> > priority is raised more easily for a small memcg, and under global
>> > memory pressure it might even trigger lumpy reclaim or compaction,
>> > which can churn the whole LRU order. :(
>> > Of course, we have the bailout routine, so we might keep such unfair
>> > aging effects small, but it's not the same as the old behavior (i.e., a
>> > single LRU list with globally fair aging as the priority is raised).
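
To put rough numbers on that worry (assuming SWAP_CLUSTER_MAX is 32 and
the lru-size >> priority scan window from above), take a memcg-zone lru
of 10,000 pages:

        priority 12: 10000 >> 12 =  2 pages scanned per round
        priority  9: 10000 >>  9 = 19
        priority  8: 10000 >>  8 = 39  <- first window that covers one
                                          SWAP_CLUSTER_MAX batch

A big lru of, say, a million pages covers the batch already at the
default priority, so only the small memcg has to be pushed to more
aggressive priorities to make the same progress.
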
>>
>> To make it fair, how about moving on to a different memcg before raising
>> the priority?
>> That would keep the aging speed fair, although it could cause high
>> contention on the lru_lock. :(
>
> Actually, the way you describe it is how it used to work for limit
> reclaim before my patches.  It would select one memcg, then reclaim
> with increasing priority until SWAP_CLUSTER_MAX pages were reclaimed.
>
>        memcg = select_victim()
>        for each prio:
>          for each zone:
>            shrink_zone(prio, zone, sc = { .mem_cgroup = memcg })
>
> What it's supposed to do with my patches is scan all memcgs in the
> hierarchy at the same priority.  If it hasn't made progress, it will
> increase the priority and iterate again over the hierarchy.
>
>        for each prio:
>          for each zone:
>            for each memcg:
>              do_shrink_zone(prio, zone, sc = { .mem_cgroup = memcg })
>
>
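
Spelled out as a hedged C sketch (reusing the hypothetical
shrink_zone_hierarchy() from the sketch further up, with DEF_PRIORITY of
12 as in the kernel), the point is that the priority loop is outermost,
so every memcg is scanned at the same priority before the pressure is
raised:

#define DEF_PRIORITY 12

static void global_reclaim_sketch(struct zone **zones, int nr_zones,
                                  struct scan_control *sc)
{
        int priority, i;

        /* Numerically lower priority means a larger scan window,
         * i.e. "raising" the reclaim pressure. */
        for (priority = DEF_PRIORITY; priority >= 0; priority--) {
                for (i = 0; i < nr_zones; i++)
                        shrink_zone_hierarchy(priority, zones[i], sc);
                /* Escalate only if this whole round, across all zones
                 * and all memcgs, did not make enough progress. */
                if (sc->nr_reclaimed >= sc->nr_to_reclaim)
                        return;
        }
}
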

Right you are. I got confused with the old behavior, which wasn't good.
Your approach is very desirable to me and my concern disappears.
Thanks, Hannes.

-- 
Kind regards,
Minchan Kim


Thread overview: 110+ messages
2011-06-01  6:25 [patch 0/8] mm: memcg naturalization -rc2 Johannes Weiner
2011-06-01  6:25 ` [patch 1/8] memcg: remove unused retry signal from reclaim Johannes Weiner
2011-06-01  6:25 ` [patch 2/8] mm: memcg-aware global reclaim Johannes Weiner
2011-06-02 13:59   ` Hiroyuki Kamezawa
2011-06-02 15:01     ` Johannes Weiner
2011-06-02 16:14       ` Hiroyuki Kamezawa
2011-06-02 17:29         ` Johannes Weiner
2011-06-09 14:01           ` Michal Hocko
2011-06-07 12:25   ` Christoph Hellwig
2011-06-08  9:30     ` Johannes Weiner
2011-06-09  9:26       ` Christoph Hellwig
2011-06-09 16:57         ` Johannes Weiner
2011-06-09 13:12   ` Michal Hocko
2011-06-09 13:45     ` Johannes Weiner
2011-06-09 15:48   ` Minchan Kim
2011-06-09 17:23     ` Johannes Weiner
2011-06-09 23:41       ` Minchan Kim
2011-06-09 23:47         ` Minchan Kim
2011-06-10  0:34           ` Johannes Weiner
2011-06-10  0:48             ` Minchan Kim [this message]
2011-08-11 20:39   ` Ying Han
2011-08-11 21:09     ` Johannes Weiner
2011-08-29  7:15       ` Ying Han
2011-08-29  7:22         ` Ying Han
2011-08-29  7:57           ` Johannes Weiner
2011-08-30  6:08             ` Ying Han
2011-08-29 19:04           ` Johannes Weiner
2011-08-29 20:36             ` Ying Han
2011-08-29 21:05               ` Johannes Weiner
2011-08-30  7:07                 ` Ying Han
2011-08-30 15:14                   ` Johannes Weiner
2011-08-31 22:58                     ` Ying Han
2011-09-21  8:44                       ` Johannes Weiner
2011-08-29  8:07         ` Johannes Weiner
2011-06-01  6:25 ` [patch 3/8] memcg: reclaim statistics Johannes Weiner
2011-06-01  6:25 ` [patch 4/8] memcg: rework soft limit reclaim Johannes Weiner
2011-06-02  5:37   ` Ying Han
2011-06-02 21:55   ` Ying Han
2011-06-03  5:25     ` Ying Han
2011-06-09 15:00       ` Michal Hocko
2011-06-10  7:36         ` Michal Hocko
2011-06-15 22:57           ` Ying Han
2011-06-16  0:33             ` Ying Han
2011-06-16 11:45             ` Michal Hocko
2011-06-15 22:48         ` Ying Han
2011-06-16 11:41           ` Michal Hocko
2011-06-01  6:25 ` [patch 5/8] memcg: remove unused soft limit code Johannes Weiner
2011-06-13  9:26   ` Michal Hocko
2011-06-01  6:25 ` [patch 6/8] vmscan: change zone_nr_lru_pages to take memcg instead of scan control Johannes Weiner
2011-06-02 13:30   ` Hiroyuki Kamezawa
2011-06-02 14:28     ` Johannes Weiner
2011-06-13  9:29   ` Michal Hocko
2011-06-01  6:25 ` [patch 7/8] vmscan: memcg-aware unevictable page rescue scanner Johannes Weiner
2011-06-02 13:27   ` Hiroyuki Kamezawa
2011-06-02 14:27     ` Johannes Weiner
2011-06-02 21:02     ` Ying Han
2011-06-02 22:01       ` Hiroyuki Kamezawa
2011-06-02 22:19         ` Johannes Weiner
2011-06-02 23:15           ` Hiroyuki Kamezawa
2011-06-03  5:08           ` Ying Han
2011-06-13  9:42   ` Michal Hocko
2011-06-13 10:30     ` Johannes Weiner
2011-06-13 11:18       ` Michal Hocko
2011-07-19 22:47   ` Ying Han
2011-07-20  0:36     ` Johannes Weiner
2011-08-29  7:28       ` Ying Han
2011-08-29  7:59         ` Johannes Weiner
2011-06-01  6:25 ` [patch 8/8] mm: make per-memcg lru lists exclusive Johannes Weiner
2011-06-02 13:16   ` Hiroyuki Kamezawa
2011-06-02 14:24     ` Johannes Weiner
2011-06-02 15:54       ` Hiroyuki Kamezawa
2011-06-02 17:57         ` Johannes Weiner
2011-06-08 15:04           ` Michal Hocko
2011-06-07 12:42   ` Christoph Hellwig
2011-06-08  8:54     ` Johannes Weiner
2011-06-09  9:23       ` Christoph Hellwig
2011-08-11 20:33   ` Ying Han
2011-08-12  8:34     ` Johannes Weiner
2011-08-12 17:08       ` Ying Han
2011-08-12 19:17         ` Johannes Weiner
2011-08-15  3:01           ` Ying Han
2011-08-15  1:34       ` Ying Han
2011-08-15  9:39         ` Johannes Weiner
2011-06-01 23:52 ` [patch 0/8] mm: memcg naturalization -rc2 Hiroyuki Kamezawa
2011-06-02  0:35   ` Greg Thelen
2011-06-09  1:13     ` Rik van Riel
2011-06-02  4:05   ` Ying Han
2011-06-02  7:50     ` Johannes Weiner
2011-06-02 15:51       ` Ying Han
2011-06-02 17:51         ` Johannes Weiner
2011-06-08  3:45           ` Ying Han
2011-06-08  3:53           ` Ying Han
2011-06-08 15:32             ` Johannes Weiner
2011-06-09  3:52               ` Ying Han
2011-06-09  8:35                 ` Johannes Weiner
2011-06-09 17:36                   ` Ying Han
2011-06-09 18:36                     ` Johannes Weiner
2011-06-09 21:38                       ` Ying Han
2011-06-09 22:30                       ` Ying Han
2011-06-09 23:31                         ` Johannes Weiner
2011-06-10  0:17                           ` Ying Han
2011-06-02  7:33   ` Johannes Weiner
2011-06-02  9:06     ` Hiroyuki Kamezawa
2011-06-02 10:00       ` Johannes Weiner
2011-06-02 12:59         ` Hiroyuki Kamezawa
2011-06-09  1:15           ` Rik van Riel
2011-06-09  8:43             ` Johannes Weiner
2011-06-09  9:31               ` Christoph Hellwig
2011-06-13  9:47 ` Michal Hocko
2011-06-13 10:35   ` Johannes Weiner
