From: Dave Chinner <david@fromorbit.com>
To: Glauber Costa <glommer@openvz.org>
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Thelen <gthelen@google.com>,
	kamezawa.hiroyu@jp.fujitsu.com, Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v6 00/31] kmemcg shrinkers
Date: Mon, 13 May 2013 17:14:00 +1000
Message-ID: <20130513071359.GM32675@dastard>
In-Reply-To: <1368382432-25462-1-git-send-email-glommer@openvz.org>

On Sun, May 12, 2013 at 10:13:21PM +0400, Glauber Costa wrote:
> Initial notes:
> ==============
> 
> Mel, Dave, this is the last round of fixes I have for the series. The fixes are
> few, and I was mostly interested in getting this out based on an up2date tree
> so Dave can verify it. This should apply fine ontop of Friday's linux-next.
> Alternatively, please grab from branch "kmemcg-lru-shrinker" at:
> 
> 	git://git.kernel.org/pub/scm/linux/kernel/git/glommer/memcg.git
> 
> Main changes from *v5:
> * Rebased to linux-next, and fix the conflicts with the dcache.
> * Make sure LRU_RETRY only retry once
> * Prevent the bcache shrinker from scanning the caches when disabled (by
>   returning 0 in the count function)
> * Fix i915 return code when mutex cannot be acquired.
> * Only scan less-than-batch objects in memcg scenarios

Ok, this is behaving a *lot* better than v5 in terms of initial
balance and sustained behaviour under pure inode/dentry pressure
workloads. The previous version was all over the place, not to
mention unstable and prone to unrelated lockups in the block layer.

However, I'm not sure that the LRUness of reclaim is working
correctly at this point. When I switch from a write only workload to
a read-only workload (i.e. fsmark finishes and find starts), I see
this:

 OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
1923037 1807201  93%    1.12K  68729       28   2199328K xfs_inode
1914624 490812  25%    0.22K  53184       36    425472K xfs_ili
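
For reference, these snapshots are slabtop output. An equivalent
one-shot view, sorted by cache size, can be grabbed with something
like this (an illustrative command, not part of my test harness):

  $ sudo slabtop -o -s c | head -10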

Note the xfs_ili slab capacity - there are half a million objects
still in the cache, and they are only present on *dirty* inodes.
Now, the read-only workload is iterating through a cold-cache lookup
workload of 50 million inodes - at roughly 150,000/s. It's a
touch-once workload, so it should be turning the cache over
completely every 10 seconds or so (~1.9 million cached inodes at
~150,000 lookups/s). However, in the time it's taken for me to
explain this:

 OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
1954493 1764661  90%    1.12K  69831       28   2234592K xfs_inode
1643868 281962  17%    0.22K  45663       36    365304K xfs_ili   

Only 200k xfs_ili objects have been freed, so they are being
reclaimed at roughly 5k/s. Given the read-only nature of this
workload, they should be gone from the cache within a few seconds.
Another indication of problems here is the level of internal
fragmentation of the xfs_ili slab. They should cycle out of the
cache in LRU order, just like inodes - the modify workload is a
"touch once" workload as well, so there should be no internal
fragmentation of the slab cache.
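
To put a rough number on the reclaim rate rather than eyeballing
successive snapshots, something along these lines will sample the
xfs_ili object count from /proc/slabinfo once a second and print the
delta. This is just an illustrative sketch, not one of my test
scripts, and it needs root to read /proc/slabinfo:

#!/bin/bash
# Sample the xfs_ili slab once a second and print the net change in
# total object count, i.e. the allocation/reclaim rate.
prev=
while sleep 1; do
        # /proc/slabinfo data lines: <name> <active_objs> <num_objs> ...
        cur=$(awk '$1 == "xfs_ili" { print $3 }' /proc/slabinfo)
        [ -n "$prev" ] && echo "xfs_ili: $cur objects, $((cur - prev))/s"
        prev=$cur
done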

The stats I have of cache residency during the read-only part of
the workload look really bad. No steady state is reached, while on
3.9 a perfect steady state is reached within seconds and maintained
until the workload changes. Part way through the read-only workload,
this happened:

[  562.673080] sh (5007): dropped kernel caches: 3
[  629.617303] lowmemorykiller: send sigkill to 3953 (winbindd), adj 0, size 195
[  629.625499] lowmemorykiller: send sigkill to 3439 (pmcd), adj 0, size 158

And when the read-only workload finishes its walk, I then start
another "touch once" workload that removes all the files. That
triggered:

[ 1002.183604] lowmemorykiller: send sigkill to 5827 (winbindd), adj 0, size 246
[ 1002.187822] lowmemorykiller: send sigkill to 3904 (winbindd), adj 0, size 134

Yeah, that stupid broken low memory killer is now kicking in,
killing random processes - last run it killed two of the rm
processes doing work; this time it killed winbindd and the PCP
collection daemon that I use for remote stats monitoring.

So, yeah, there's still some broken stuff in this patchset that
needs fixing.  The script that I'm running to trigger these problems
is pretty basic - it's the same workload I've been using for the
past 3 years for measuring metadata performance of filesystems:
create ~50 million zero-length files with fs_mark across 8
directories, walk them all with a cold cache, then remove them all
in parallel:

$ cat ./fsmark-50-test-xfs.sh 
#!/bin/bash

sudo umount /mnt/scratch > /dev/null 2>&1
sudo mkfs.xfs -f $@ -l size=131072b,sunit=8 /dev/vdc
sudo mount -o nobarrier,logbsize=256k /dev/vdc /mnt/scratch
sudo chmod 777 /mnt/scratch
cd /home/dave/src/fs_mark-3.3/
time ./fs_mark  -D  10000  -S0  -n  100000  -s  0  -L  63 \
        -d  /mnt/scratch/0  -d  /mnt/scratch/1 \
        -d  /mnt/scratch/2  -d  /mnt/scratch/3 \
        -d  /mnt/scratch/4  -d  /mnt/scratch/5 \
        -d  /mnt/scratch/6  -d  /mnt/scratch/7 \
        | tee >(stats --trim-outliers | tail -1 1>&2)
sync
sleep 30
sync

echo walking files
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
time (
        for d in /mnt/scratch/[0-9]* ; do

                for i in $d/*; do
                        (
                                echo $i
                                find $i -ctime 1 > /dev/null
                        ) > /dev/null 2>&1
                done &
        done
        wait
)

echo removing files
for f in /mnt/scratch/* ; do time rm -rf $f &  done
wait
$

It's running on an 8p, 4GB RAM, 4-node fake numa virtual machine
with a 100TB sparse image file being used for the test filesystem.
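
For anyone wanting to reproduce a similar setup, a rough sketch of
the VM follows. The image path, NUMA layout and qemu flags below are
illustrative assumptions rather than my exact configuration (boot
disk, console and the rest of the invocation are omitted); the
sparse image just needs to show up in the guest as a virtio disk
(/dev/vd*) for the script above to point at:

# 100TB sparse backing file for the scratch filesystem
truncate -s 100T /data/scratch.img

# 8 CPUs, 4GB RAM split across 4 fake NUMA nodes, image as virtio disk
qemu-system-x86_64 -enable-kvm -smp 8 -m 4096 \
        -numa node,cpus=0-1,mem=1024 -numa node,cpus=2-3,mem=1024 \
        -numa node,cpus=4-5,mem=1024 -numa node,cpus=6-7,mem=1024 \
        -drive file=/data/scratch.img,format=raw,if=virtio,cache=none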

I'll spend some time over the next few days trying to work out what
is causing these issues....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
