From: Kamezawa Hiroyuki
Subject: Re: [PATCH v2 02/28] vmscan: take at least one pass with shrinkers
Date: Mon, 01 Apr 2013 16:26:45 +0900
Message-ID: <515936B5.8070501@jp.fujitsu.com>
To: Glauber Costa
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 containers@lists.linux-foundation.org, Michal Hocko, Johannes Weiner,
 Andrew Morton, Dave Shrinnker, Greg Thelen, hughd@google.com,
 yinghan@google.com, Theodore Ts'o, Al Viro

(2013/03/29 18:13), Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If this is the
> case, it is better to shrink those and open space for the new workload
> than to keep them and fail the new allocations.
>
> More specifically, this happens because we encode this in a loop with
> the condition: "while (total_scan >= batch_size)". So if we are in such
> a case, we'll not even enter the loop.
>
> This patch turns it into a do {} while () loop, which guarantees that
> we scan at least once, while keeping the behaviour exactly the same
> for the cases in which total_scan > batch_size.
>
> Signed-off-by: Glauber Costa
> Reviewed-by: Dave Chinner
> Reviewed-by: Carlos Maiolino
> CC: "Theodore Ts'o"
> CC: Al Viro
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>

Doesn't this break

==
	/*
	 * copy the current shrinker scan count into a local variable
	 * and zero it so that other concurrent shrinker invocations
	 * don't also do this scanning work.
	 */
	nr = atomic_long_xchg(&shrinker->nr_in_batch, 0);
==

this xchg magic?
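To make the concern concrete, here is a minimal userspace model of that
grab-and-restore accounting, with C11 atomics standing in for the kernel's
atomic_long_* API (shrink_one() and the numbers are made up for
illustration):

==
#include <stdatomic.h>
#include <stdio.h>

static atomic_long nr_in_batch;	/* deferred scan count */

static long shrink_one(long batch_size, long total_scan)
{
	/*
	 * Grab the accumulated count and zero it so that another
	 * concurrent caller does not pick up the same deferred work.
	 */
	long nr = atomic_exchange(&nr_in_batch, 0);

	total_scan += nr;

	/* The while () loop only consumes whole batches... */
	long scanned = (total_scan / batch_size) * batch_size;
	total_scan -= scanned;

	/* ...and the unused remainder is put back for the next caller. */
	atomic_fetch_add(&nr_in_batch, total_scan);
	return scanned;
}

int main(void)
{
	/* Fewer objects than one batch: nothing scanned, all deferred. */
	printf("scanned:  %ld\n", shrink_one(128, 100));
	printf("deferred: %ld\n", atomic_load(&nr_in_batch));
	return 0;
}
==

With the while () form a small total_scan is entirely deferred and put
back; with a forced first pass it is consumed instead.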
Thanks,
-Kame

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 88c5fed..fc6d45a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
> 					nr_pages_scanned, lru_pages,
> 					max_pass, delta, total_scan);
>
> -	while (total_scan >= batch_size) {
> +	do {
> 		int nr_before;
>
> 		nr_before = do_shrinker_shrink(shrinker, shrink, 0);
> @@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
> 		total_scan -= batch_size;
>
> 		cond_resched();
> -	}
> +	} while (total_scan >= batch_size);
>
> 	/*
> 	 * move the unused scan count back into the shrinker in a
>
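FWIW, the behavioural difference the patch is after is easy to see in a
standalone toy (plain C; the batch and scan counts are made up):

==
#include <stdio.h>

int main(void)
{
	const long batch_size = 128, total_scan = 100;
	long n, passes;

	/* Old behaviour: never entered when total_scan < batch_size. */
	passes = 0;
	for (n = total_scan; n >= batch_size; n -= batch_size)
		passes++;
	printf("while ():    %ld passes\n", passes);	/* 0 */

	/* New behaviour: always takes at least one pass. */
	passes = 0;
	n = total_scan;
	do {
		passes++;
		n -= batch_size;
	} while (n >= batch_size);
	printf("do {} while: %ld passes\n", passes);	/* 1 */

	return 0;
}
==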