From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from psmtp.com (na3sys010amx180.postini.com [74.125.245.180])
	by kanga.kvack.org (Postfix) with SMTP id ED5BA6B002B
	for ; Thu, 8 Nov 2012 21:36:42 -0500 (EST)
Date: Fri, 9 Nov 2012 10:36:38 +0800
From: Fengguang Wu
Subject: Re: [PATCHv2] mm: Fix calculation of dirtyable memory
Message-ID: <20121109023638.GA11105@localhost>
References: <1352422353-11229-1-git-send-email-sonnyrao@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1352422353-11229-1-git-send-email-sonnyrao@chromium.org>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Sonny Rao
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Andrew Morton,
	Michal Hocko, linux-mm@kvack.org, Mandeep Singh Baines,
	Johannes Weiner, Olof Johansson, Will Drewry, Kees Cook,
	Aaron Durbin, Puneet Kumar

On Thu, Nov 08, 2012 at 04:52:33PM -0800, Sonny Rao wrote:
> The system uses global_dirtyable_memory() to calculate the
> number of dirtyable pages, i.e. pages that can be allocated
> to the page cache. A bug causes an underflow, making the
> page count look like a huge unsigned number. This in turn
> confuses the dirty writeback throttling into aggressively
> writing back pages as they become dirty (usually 1 page at
> a time).
>
> The fix is to ensure there is no underflow while doing the math.

Good catch, thanks!
> Signed-off-by: Sonny Rao
> Signed-off-by: Puneet Kumar
> ---
> v2: added akpm's suggestion to make the highmem calculation better
>  mm/page-writeback.c | 17 +++++++++++++++--
>  1 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 830893b..ce62442 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -201,6 +201,18 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
>  			zone_reclaimable_pages(z) - z->dirty_balance_reserve;
>  	}
>  	/*
> +	 * Unreclaimable memory (kernel memory or anonymous memory
> +	 * without swap) can bring down the dirtyable pages below
> +	 * the zone's dirty balance reserve and the above calculation
> +	 * will underflow. However we still want to add in nodes
> +	 * which are below threshold (negative values) to get a more
> +	 * accurate calculation but make sure that the total never
> +	 * underflows.
> +	 */
> +	if ((long)x < 0)
> +		x = 0;
> +
> +	/*
>  	 * Make sure that the number of highmem pages is never larger
>  	 * than the number of the total dirtyable memory. This can only
>  	 * occur in very strange VM situations but we want to make sure
> @@ -222,8 +234,9 @@ static unsigned long global_dirtyable_memory(void)
>  {
>  	unsigned long x;
>
> -	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
> -	    dirty_balance_reserve;
> +	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
> +	if (x >= dirty_balance_reserve)
> +		x -= dirty_balance_reserve;

That can be converted to "if ((long)x < 0) x = 0;", too.

And I suspect zone_dirtyable_memory() needs a similar fix, too.

Thanks,
Fengguang

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org