From: Wu Fengguang
Subject: [PATCH 45/47] nfs: adapt congestion threshold to dirty threshold
Date: Mon, 13 Dec 2010 14:43:34 +0800
Message-ID: <20101213064842.559030592@intel.com>
References: <20101213064249.648862451@intel.com>
To: Andrew Morton
Cc: Jan Kara, Trond Myklebust, Wu Fengguang, Christoph Hellwig, Dave Chinner,
 Theodore Ts'o, Chris Mason, Peter Zijlstra, Mel Gorman, Rik van Riel,
 KOSAKI Motohiro, Greg Thelen, Minchan Kim, linux-mm,
 linux-fsdevel@vger.kernel.org, LKML

nfs_congestion_kb controls the maximum number of allowed writeback and
in-commit pages. It is not reasonable for those pages to outnumber the
dirty and to-commit pages, so each of them should take no more than 1/4
of the dirty threshold.

However, nfs_init_writepagecache() runs at boot time, when dirty_thresh
is much higher than the effective dirty limit seen later, after user
space has consumed lots of memory. So use 1/8 instead.

We could update nfs_congestion_kb whenever the global dirty limit
changes at runtime, but keep it simple for now.

CC: Trond Myklebust
Signed-off-by: Wu Fengguang
---
 fs/nfs/write.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- linux-next.orig/fs/nfs/write.c	2010-12-08 22:44:37.000000000 +0800
+++ linux-next/fs/nfs/write.c	2010-12-08 22:44:37.000000000 +0800
@@ -1698,6 +1698,9 @@ out:
 
 int __init nfs_init_writepagecache(void)
 {
+	unsigned long background_thresh;
+	unsigned long dirty_thresh;
+
 	nfs_wdata_cachep = kmem_cache_create("nfs_write_data",
 					     sizeof(struct nfs_write_data),
 					     0, SLAB_HWCACHE_ALIGN,
@@ -1735,6 +1738,16 @@ int __init nfs_init_writepagecache(void)
 	if (nfs_congestion_kb > 256*1024)
 		nfs_congestion_kb = 256*1024;
 
+	/*
+	 * Limit to 1/8 dirty threshold, so that writeback+in_commit pages
+	 * won't outnumber dirty+to_commit pages.
+	 */
+	global_dirty_limits(&background_thresh, &dirty_thresh);
+	dirty_thresh <<= PAGE_SHIFT - 10;
+
+	if (nfs_congestion_kb > dirty_thresh / 8)
+		nfs_congestion_kb = dirty_thresh / 8;
+
 	return 0;
 }
 
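
As an aside, a minimal userspace sketch of the clamping arithmetic above;
the dirty threshold (50000 pages) and PAGE_SHIFT (12, i.e. 4 KB pages) are
illustrative assumptions, not values taken from the kernel at boot:

#include <stdio.h>

/* Illustrative stand-ins; the real values come from global_dirty_limits()
 * and the architecture's page size. */
#define PAGE_SHIFT	12			/* assumed 4 KB pages */

int main(void)
{
	unsigned long dirty_thresh = 50000;		/* pages, assumed */
	unsigned long nfs_congestion_kb = 256 * 1024;	/* prior 256 MB cap */

	/* Convert the dirty threshold from pages to kilobytes. */
	dirty_thresh <<= PAGE_SHIFT - 10;

	/* Clamp to 1/8 of the dirty threshold, as the patch does. */
	if (nfs_congestion_kb > dirty_thresh / 8)
		nfs_congestion_kb = dirty_thresh / 8;

	printf("dirty_thresh = %lu KB, nfs_congestion_kb = %lu KB\n",
	       dirty_thresh, nfs_congestion_kb);
	return 0;
}

With these assumed numbers, dirty_thresh is 200000 KB, so the 256 MB
default cap is reduced to 25000 KB, i.e. 1/8 of the dirty threshold.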