linux-fsdevel.vger.kernel.org archive mirror
From: Christoph Hellwig <hch@infradead.org>
To: Wu Fengguang <fengguang.wu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	stable@kernel.org, Rik van Riel <riel@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Mel Gorman <mel@csn.ul.ie>, Christoph Hellwig <hch@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Dave Chinner <david@fromorbit.com>,
	Chris Mason <chris.mason@oracle.com>,
	Nick Piggin <npiggin@suse.de>,
	Johannes Weiner <hannes@cmpxchg.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Minchan Kim <minchan.kim@gmail.com>, Andreas Mohr <andi@lisas.de>,
	Bill Davidsen <davidsen@tmr.com>,
	Ben Gamari <bgamari.foss@gmail.com>
Subject: Re: [PATCH] vmscan: raise the bar to PAGEOUT_IO_SYNC stalls
Date: Sat, 31 Jul 2010 13:33:28 -0400	[thread overview]
Message-ID: <20100731173328.GA21072@infradead.org> (raw)
In-Reply-To: <20100731161358.GA5147@localhost>

On Sun, Aug 01, 2010 at 12:13:58AM +0800, Wu Fengguang wrote:
> FYI I did some memory stress test and find there are much more order-1
> (and higher) users than fork(). This means lots of running applications
> may stall on direct reclaim.
> 
> Basically all of these slab caches will do high order allocations:

It looks much, much worse on my system.  Basically all inode structures,
and also tons of frequently allocated xfs structures, fall into this
category.  None of them is actually anywhere near the size of a page,
which makes me wonder why we do such high order allocations:

slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nfsd4_stateowners      0      0    424   19    2 : tunables    0    0    0 : slabdata      0      0      0
kvm_vcpu               0      0  10400    3    8 : tunables    0    0    0 : slabdata      0      0      0
kmalloc_dma-512       32     32    512   16    2 : tunables    0    0    0 : slabdata      2      2      0
mqueue_inode_cache     18     18    896   18    4 : tunables    0    0    0 : slabdata      1      1      0
xfs_inode         279008 279008   1024   16    4 : tunables    0    0    0 : slabdata  17438  17438      0
xfs_efi_item          44     44    360   22    2 : tunables    0    0    0 : slabdata      2      2      0
xfs_efd_item          44     44    368   22    2 : tunables    0    0    0 : slabdata      2      2      0
xfs_trans             40     40    800   20    4 : tunables    0    0    0 : slabdata      2      2      0
xfs_da_state          32     32    488   16    2 : tunables    0    0    0 : slabdata      2      2      0
nfs_inode_cache        0      0   1016   16    4 : tunables    0    0    0 : slabdata      0      0      0
isofs_inode_cache      0      0    632   25    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache        0      0    664   12    2 : tunables    0    0    0 : slabdata      0      0      0
hugetlbfs_inode_cache     14     14    584   14    2 : tunables    0    0    0 : slabdata      1      1      0
ext4_inode_cache       0      0    968   16    4 : tunables    0    0    0 : slabdata      0      0      0
ext2_inode_cache      21     21    776   21    4 : tunables    0    0    0 : slabdata      1      1      0
ext3_inode_cache       0      0    800   20    4 : tunables    0    0    0 : slabdata      0      0      0
rpc_inode_cache       19     19    832   19    4 : tunables    0    0    0 : slabdata      1      1      0
UDP-Lite               0      0    768   21    4 : tunables    0    0    0 : slabdata      0      0      0
ip_dst_cache         170    378    384   21    2 : tunables    0    0    0 : slabdata     18     18      0
RAW                   63     63    768   21    4 : tunables    0    0    0 : slabdata      3      3      0
UDP                   52     84    768   21    4 : tunables    0    0    0 : slabdata      4      4      0
TCP                   60    100   1600   20    8 : tunables    0    0    0 : slabdata      5      5      0
blkdev_queue          42     42   2216   14    8 : tunables    0    0    0 : slabdata      3      3      0
sock_inode_cache     650    713    704   23    4 : tunables    0    0    0 : slabdata     31     31      0
skbuff_fclone_cache     36     36    448   18    2 : tunables    0    0    0 : slabdata      2      2      0
shmem_inode_cache   3620   3948    776   21    4 : tunables    0    0    0 : slabdata    188    188      0
proc_inode_cache    1818   1875    632   25    4 : tunables    0    0    0 : slabdata     75     75      0
bdev_cache            57     57    832   19    4 : tunables    0    0    0 : slabdata      3      3      0
inode_cache         7934   7938    584   14    2 : tunables    0    0    0 : slabdata    567    567      0
files_cache          689    713    704   23    4 : tunables    0    0    0 : slabdata     31     31      0
signal_cache         301    342    896   18    4 : tunables    0    0    0 : slabdata     19     19      0
sighand_cache        192    210   2112   15    8 : tunables    0    0    0 : slabdata     14     14      0
task_struct          311    325   5616    5    8 : tunables    0    0    0 : slabdata     65     65      0
idr_layer_cache      578    585    544   15    2 : tunables    0    0    0 : slabdata     39     39      0
radix_tree_node    74738  74802    560   14    2 : tunables    0    0    0 : slabdata   5343   5343      0
kmalloc-8192          29     32   8192    4    8 : tunables    0    0    0 : slabdata      8      8      0
kmalloc-4096         194    208   4096    8    8 : tunables    0    0    0 : slabdata     26     26      0
kmalloc-2048         310    352   2048   16    8 : tunables    0    0    0 : slabdata     22     22      0
kmalloc-1024        1607   1616   1024   16    4 : tunables    0    0    0 : slabdata    101    101      0
kmalloc-512          484    512    512   16    2 : tunables    0    0    0 : slabdata     32     32      0
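The allocation order for each cache can be read off the pagesperslab column above (2 pages = order-1, 4 = order-2, 8 = order-3). As a minimal sketch (assuming the slabinfo version 2.1 format shown above, and a hypothetical helper name), this flags every cache whose slabs need order-1 or higher pages:

```python
import math

def high_order_caches(slabinfo_text, min_pages=2):
    """Parse slabinfo 2.1 output; return (name, objsize, pagesperslab,
    order) for each cache whose slabs span more than one page."""
    results = []
    for line in slabinfo_text.splitlines():
        # Skip the version line, the column-header comment, and blanks.
        if line.startswith(("slabinfo", "#")) or not line.strip():
            continue
        fields = line.split()
        # Columns: name active_objs num_objs objsize objperslab pagesperslab ...
        name, objsize, pagesperslab = fields[0], int(fields[3]), int(fields[5])
        if pagesperslab >= min_pages:
            order = int(math.log2(pagesperslab))
            results.append((name, objsize, pagesperslab, order))
    return results

# Two lines taken from the dump above as sample input.
sample = """\
xfs_inode         279008 279008   1024   16    4 : tunables    0    0    0 : slabdata  17438  17438      0
kmalloc-512          484    512    512   16    2 : tunables    0    0    0 : slabdata     32     32      0
"""
for name, objsize, pages, order in high_order_caches(sample):
    print(f"{name}: {objsize}-byte objects, {pages} pages/slab (order-{order})")
```

On a live system the same function can be fed the contents of /proc/slabinfo directly; here xfs_inode (4 pages/slab) is an order-2 allocation even though its objects are only 1024 bytes.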


Thread overview: 35+ messages
2010-07-28  7:17 [PATCH] vmscan: raise the bar to PAGEOUT_IO_SYNC stalls Wu Fengguang
2010-07-28  7:49 ` Minchan Kim
2010-07-28  8:46   ` [PATCH] vmscan: remove wait_on_page_writeback() from pageout() Wu Fengguang
2010-07-28  9:10     ` Mel Gorman
2010-07-28  9:30       ` Wu Fengguang
2010-07-28  9:45         ` Mel Gorman
2010-07-28  9:43       ` KOSAKI Motohiro
2010-07-28  9:50         ` Mel Gorman
2010-07-28  9:59           ` KOSAKI Motohiro
2010-08-01  5:27             ` Wu Fengguang
2010-08-01  5:49               ` Wu Fengguang
2010-08-01  8:32               ` KOSAKI Motohiro
2010-08-01  8:35                 ` Wu Fengguang
2010-08-01  8:40                   ` KOSAKI Motohiro
2010-08-01  5:17         ` Wu Fengguang
2010-07-28 16:29     ` Minchan Kim
2010-07-28 11:40 ` Why PAGEOUT_IO_SYNC stalls for a long time KOSAKI Motohiro
2010-07-28 13:10   ` Mel Gorman
2010-07-29 10:34     ` KOSAKI Motohiro
2010-07-29 14:24       ` Mel Gorman
2010-07-30  4:54         ` KOSAKI Motohiro
2010-07-30 10:30           ` Mel Gorman
2010-08-01  8:47             ` KOSAKI Motohiro
2010-08-04 11:10               ` Mel Gorman
2010-08-05  6:20                 ` KOSAKI Motohiro
2010-08-05  8:09                   ` Andreas Mohr
2010-07-28 17:30   ` Andrew Morton
2010-07-29  1:01     ` KOSAKI Motohiro
2010-07-30 13:17 ` [PATCH] vmscan: raise the bar to PAGEOUT_IO_SYNC stalls Andrea Arcangeli
2010-07-30 13:31   ` Mel Gorman
2010-07-31 16:13 ` Wu Fengguang
2010-07-31 17:33   ` Christoph Hellwig [this message]
2010-07-31 17:55     ` Pekka Enberg
2010-07-31 17:59       ` Christoph Hellwig
2010-07-31 18:09         ` Pekka Enberg
