From: Glauber Costa <glommer@openvz.org>
To: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Thelen <gthelen@google.com>,
	kamezawa.hiroyu@jp.fujitsu.com, Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>
Subject: [PATCH v4 00/31] kmemcg shrinkers
Date: Sat, 27 Apr 2013 03:18:56 +0400
Message-ID: <1367018367-11278-1-git-send-email-glommer@openvz.org>

Hi,

This patchset implements targeted shrinking for memcg when kmem limits are
present. So far, we have been accounting kernel objects but simply failing
allocations when short of memory. This is because our only option would be
to call the global shrinker, depleting objects from all caches and breaking
isolation.

The main idea is to associate per-memcg lists with each of the LRUs. The
main LRU still provides a single entry point; when adding or removing an
element, we use the page information to figure out which memcg the object
belongs to and relay it to the right list.
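
To make the routing concrete, here is a minimal sketch of the idea. This is
hypothetical, simplified code: memcg_from_kmem_page() and memcg_cache_id()
are illustrative stand-ins, and the single-lock layout omits the per-node
splitting the real list_lru code uses.

#include <linux/list.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

/* Sketch only: a front-end LRU that fans out to per-memcg lists. */
struct sketch_lru {
	spinlock_t		lock;
	struct list_head	global;		/* root / non-memcg objects  */
	struct list_head	*memcg_lists;	/* indexed by memcg cache id */
};

/* Route an object to its list based on the page backing it. */
static struct list_head *sketch_list_of(struct sketch_lru *lru, void *obj)
{
	struct page *page = virt_to_page(obj);
	struct mem_cgroup *memcg = memcg_from_kmem_page(page); /* stand-in */

	if (!memcg)
		return &lru->global;
	return &lru->memcg_lists[memcg_cache_id(memcg)];
}

/* Single entry point: callers never see which list the object lands on. */
static int sketch_lru_add(struct sketch_lru *lru, void *obj,
			  struct list_head *item)
{
	int added = 0;

	spin_lock(&lru->lock);
	if (list_empty(item)) {
		list_add_tail(item, sketch_list_of(lru, obj));
		added = 1;
	}
	spin_unlock(&lru->lock);
	return added;
}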

Last time, Andrew showed intent to merge this around -rc1. Since we're not
there yet, I don't expect this version to be merged. But I believe it is
quite close to completion and would greatly benefit from comments before I
send a final version around Andrew's desired timeframe. (Andrew, please
advise about your target for this.)

Base work:
==========

Please note that this builds upon the recent work from Dave Chinner that
sanitizes the LRU shrinking API and makes the shrinkers node aware. Node
awareness is not *strictly* needed for my work, but I still perceive it
as an advantage. The API unification is a major need, and I build upon it
heavily: it allows us to manipulate the LRUs easily, without knowledge of
the underlying objects. This time, I am including that work here as a
baseline.
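
For reference, the unified API replaces the old ->shrink() callback with a
count/scan pair. Below is a hedged sketch of what a converted, node-aware
shrinker looks like under this series; the demo_* helpers and the lock are
stand-ins for a subsystem's own bookkeeping, and details may still shift
before merge.

#include <linux/mutex.h>
#include <linux/shrinker.h>

static DEFINE_MUTEX(demo_lock);

/* Stand-ins for a subsystem's own cache bookkeeping. */
static unsigned long demo_nr_cached(int nid);
static unsigned long demo_reclaim(int nid, unsigned long nr);

static unsigned long demo_count_objects(struct shrinker *shrink,
					struct shrink_control *sc)
{
	/* Cheap, non-blocking estimate of freeable objects on node sc->nid. */
	return demo_nr_cached(sc->nid);
}

static unsigned long demo_scan_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	unsigned long freed;

	if (!mutex_trylock(&demo_lock))
		return SHRINK_STOP;	/* tell vmscan no progress is possible */

	/* Free up to sc->nr_to_scan objects; report how many were freed. */
	freed = demo_reclaim(sc->nid, sc->nr_to_scan);
	mutex_unlock(&demo_lock);
	return freed;
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count_objects,
	.scan_objects	= demo_scan_objects,
	.seeks		= DEFAULT_SEEKS,
	.flags		= SHRINKER_NUMA_AWARE,
};

/* Registration itself is unchanged: register_shrinker(&demo_shrinker); */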

Main changes from v3:
* Merged suggestions from the mailing list.
* Removed the memcg-walking code from the LRU. vmscan now drives all the
  hierarchy decisions, which makes more sense.
* Lazily free the old memcg arrays (they now need to be saved in struct
  list_lru): since freeing requires synchronize_rcu(), calling it for every
  LRU can become expensive. (See the sketch after this list.)
* Moved the dead memcg shrinker to vmpressure. Already independently sent
  to linux-mm for review.
* Changed the locking convention for LRU_RETRY: it now needs to return with
  the lock held, which silences warnings about possible lock imbalance
  (although the previous code was correct).
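
To illustrate the lazy-free point above: waiting synchronously for a grace
period once per LRU does not scale. What follows is a hedged sketch of the
general technique; it uses call_rcu() for asynchronous freeing, whereas the
patches themselves stash the old array in struct list_lru and batch the
grace periods, but the motivation is the same. All names are illustrative.

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct memcg_lists_array {
	struct rcu_head		rcu;
	struct list_head	lists[];	/* one list per memcg id */
};

struct sketch_lru_head {
	spinlock_t			lock;
	struct memcg_lists_array __rcu	*arrays;
};

static void memcg_lists_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct memcg_lists_array, rcu));
}

/* Grow the per-memcg array without blocking for a grace period. */
static int sketch_grow_array(struct sketch_lru_head *lru, int new_size)
{
	struct memcg_lists_array *old, *new;
	int i;

	new = kmalloc(sizeof(*new) + new_size * sizeof(new->lists[0]),
		      GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	for (i = 0; i < new_size; i++)
		INIT_LIST_HEAD(&new->lists[i]);

	spin_lock(&lru->lock);
	old = rcu_dereference_protected(lru->arrays,
					lockdep_is_held(&lru->lock));
	/* ... splice the entries of the old lists into the new ones ... */
	rcu_assign_pointer(lru->arrays, new);
	spin_unlock(&lru->lock);

	if (old)
		call_rcu(&old->rcu, memcg_lists_free_rcu); /* no blocking wait */
	return 0;
}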

Main changes from v2:
* Shrink dead memcgs when global pressure kicks in, using the new LRU API.
* Bugfixes and comments from the mailing list.
* Proper hierarchy-aware walk in shrink_slab.

Main changes from v1:
* Merged comments from the mailing list.
* Reworked the lru-memcg API.
* Effective proportional shrinking.
* Sanitized locking on the memcg side.
* Bill user memory first when kmem == umem.
* Various bugfixes.

Numbers (not updated since last time):
======================================

I've run kernbench with 2GB setups and 3 different kernels. All of them are
capable of cgroup kmem accounting, but the first two cannot shrink it.

Kernels
-------
base:    the current -mm
davelru: that + Dave's patches applied
fulllru: that + my patches applied

I ran all of them in a first-level cgroup. Please note that the first two
kernels are not capable of shrinking metadata, so I had to select a size
that keeps the workload under relatively constant pressure without that
pressure coming exclusively from kernel memory. 2GB did the job. This is
a 2-node, 24-way machine.

Results:
--------

Base:
Average Optimal load -j 24 Run (std deviation):
Elapsed Time 415.988 (8.37909)
User Time 4142 (759.964)
System Time 418.483 (62.0377)
Percent CPU 1030.7 (267.462)
Context Switches 391509 (268361)
Sleeps 738483 (149934)

Dave:
Average Optimal load -j 24 Run (std deviation):
Elapsed Time 424.486 (16.7365) (+2% vs base)
User Time 4146.8 (764.012) (+0.84% vs base)
System Time 419.24 (62.4507) (+0.18% vs base)
Percent CPU 1012.1 (264.558) (-1.8% vs base)
Context Switches 393363 (268899) (+0.47% vs base)
Sleeps 739905 (147344) (+0.19% vs base)


Full:
Average Optimal load -j 24 Run (std deviation):
Elapsed Time 456.644 (15.3567) (+9.7% vs base)
User Time 4036.3 (645.261) (-2.5% vs base)
System Time 438.134 (82.251) (+4.7% vs base)
Percent CPU 973 (168.581) (-5.6% vs base)
Context Switches 350796 (229700) (-10% vs base)
Sleeps 728156 (138808) (-1.4% vs base)

Discussion
----------

First-level analysis: all figures fall within one standard deviation,
except for the Full LRU wall time, which still falls within two standard
deviations. On the other hand, the Full LRU kernel leads to better CPU
utilization and greater efficiency.

Details: the reclaim patterns in the three kernels are expected to differ.
User memory will always be the main driver, but under pressure the first
two kernels will shrink it while keeping the metadata intact. This should
lead to smaller system-time figures at the expense of bigger user-time
figures, since user pages will be evicted more often. This is consistent
with the figures I've found.

Full LRU kernels show 2.5% better user-time utilization, with 5.6% less
CPU consumed and 10% fewer context switches.

This comes at the expense of 4.7% more system time: because we will have
to bring more dentry and inode objects back into the caches, we stress the
slab code more.

Because this benchmark stresses a lot of metadata, it is expected that this
increase affects the final wall time proportionally. Note that the mere
introduction of the LRU code (Dave's kernel) does not move the wall time
outside the standard deviation; shrinking those objects, however, leads to
bigger wall times. This is within expectations: no one would argue that the
right kernel behavior, in all cases, is to keep metadata in memory at the
expense of user memory (and even if we should, we should do it the same way
for the cgroups).

My final conclusion is that, performance-wise, the work is sound and
operates within expectations.


Dave Chinner (17):
  dcache: convert dentry_stat.nr_unused to per-cpu counters
  dentry: move to per-sb LRU locks
  dcache: remove dentries from LRU before putting on dispose list
  mm: new shrinker API
  shrinker: convert superblock shrinkers to new API
  list: add a new LRU list type
  inode: convert inode lru list to generic lru list code.
  dcache: convert to use new lru list infrastructure
  list_lru: per-node list infrastructure
  shrinker: add node awareness
  fs: convert inode and dentry shrinking to be node aware
  xfs: convert buftarg LRU to generic code
  xfs: convert dquot cache lru to list_lru
  fs: convert fs shrinkers to new scan/count API
  drivers: convert shrinkers to new count/scan API
  shrinker: convert remaining shrinkers to count/scan API
  shrinker: Kill old ->shrink API.

Glauber Costa (14):
  super: fix calculation of shrinkable objects for small numbers
  vmscan: take at least one pass with shrinkers
  hugepage: convert huge zero page shrinker to new shrinker API
  vmscan: also shrink slab in memcg pressure
  memcg,list_lru: duplicate LRUs upon kmemcg creation
  lru: add an element to a memcg list
  list_lru: per-memcg walks
  memcg: per-memcg kmem shrinking
  memcg: scan cache objects hierarchically
  super: targeted memcg reclaim
  memcg: move initialization to memcg creation
  vmpressure: in-kernel notifications
  memcg: reap dead memcgs upon global memory pressure.
  memcg: debugging facility to access dangling memcgs

 Documentation/cgroups/memory.txt           |  16 +
 arch/x86/kvm/mmu.c                         |  28 +-
 drivers/gpu/drm/i915/i915_dma.c            |   4 +-
 drivers/gpu/drm/i915/i915_drv.h            |   2 +-
 drivers/gpu/drm/i915/i915_gem.c            |  69 +++-
 drivers/gpu/drm/i915/i915_gem_evict.c      |  10 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   2 +-
 drivers/gpu/drm/ttm/ttm_page_alloc.c       |  48 ++-
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c   |  55 ++-
 drivers/md/bcache/btree.c                  |  30 +-
 drivers/md/bcache/sysfs.c                  |   2 +-
 drivers/md/dm-bufio.c                      |  65 +--
 drivers/staging/android/ashmem.c           |  46 ++-
 drivers/staging/android/lowmemorykiller.c  |  40 +-
 drivers/staging/zcache/zcache-main.c       |  29 +-
 fs/dcache.c                                | 233 ++++++-----
 fs/drop_caches.c                           |   1 +
 fs/ext4/extents_status.c                   |  30 +-
 fs/gfs2/glock.c                            |  30 +-
 fs/gfs2/main.c                             |   3 +-
 fs/gfs2/quota.c                            |  14 +-
 fs/gfs2/quota.h                            |   4 +-
 fs/inode.c                                 | 175 ++++----
 fs/internal.h                              |   5 +
 fs/mbcache.c                               |  53 +--
 fs/nfs/dir.c                               |  20 +-
 fs/nfs/internal.h                          |   4 +-
 fs/nfs/super.c                             |   3 +-
 fs/nfsd/nfscache.c                         |  31 +-
 fs/quota/dquot.c                           |  39 +-
 fs/super.c                                 | 107 +++--
 fs/ubifs/shrinker.c                        |  20 +-
 fs/ubifs/super.c                           |   3 +-
 fs/ubifs/ubifs.h                           |   3 +-
 fs/xfs/xfs_buf.c                           | 169 ++++----
 fs/xfs/xfs_buf.h                           |   5 +-
 fs/xfs/xfs_dquot.c                         |   7 +-
 fs/xfs/xfs_icache.c                        |   4 +-
 fs/xfs/xfs_icache.h                        |   2 +-
 fs/xfs/xfs_qm.c                            | 275 ++++++-------
 fs/xfs/xfs_qm.h                            |   4 +-
 fs/xfs/xfs_super.c                         |  12 +-
 include/linux/dcache.h                     |   4 +
 include/linux/fs.h                         |  25 +-
 include/linux/list_lru.h                   | 132 +++++++
 include/linux/memcontrol.h                 |  45 +++
 include/linux/shrinker.h                   |  45 ++-
 include/linux/swap.h                       |   2 +
 include/linux/vmpressure.h                 |   6 +
 include/trace/events/vmscan.h              |   4 +-
 init/Kconfig                               |  17 +
 lib/Makefile                               |   2 +-
 lib/list_lru.c                             | 430 ++++++++++++++++++++
 mm/huge_memory.c                           |  17 +-
 mm/memcontrol.c                            | 614 ++++++++++++++++++++++++++---
 mm/memory-failure.c                        |   2 +
 mm/slab_common.c                           |   1 -
 mm/vmpressure.c                            |  52 ++-
 mm/vmscan.c                                | 319 ++++++++++-----
 net/sunrpc/auth.c                          |  45 ++-
 60 files changed, 2528 insertions(+), 936 deletions(-)
 create mode 100644 include/linux/list_lru.h
 create mode 100644 lib/list_lru.c

-- 
1.8.1.4
