From mboxrd@z Thu Jan 1 00:00:00 1970
From: Glauber Costa
Subject: [PATCH v3 02/32] vmscan: take at least one pass with shrinkers
Date: Mon, 8 Apr 2013 18:00:29 +0400
Message-ID: <1365429659-22108-3-git-send-email-glommer@parallels.com>
References: <1365429659-22108-1-git-send-email-glommer@parallels.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1365429659-22108-1-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
Sender: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Errors-To: containers-bounces-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
To: linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org
Cc: Theodore Ts'o, hughd-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org, containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org, Dave Shrinnker, Michal Hocko, Al Viro, Johannes Weiner, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Andrew Morton

In very low free kernel memory situations, it may be the case that we
have fewer objects to free than our initial batch size. If this is the
case, it is better to shrink those, and open space for the new workload,
than to keep them and fail the new allocations.

More specifically, this happens because we encode this in a loop with
the condition "while (total_scan >= batch_size)". So if we are in such
a case, we'll not even enter the loop.

This patch turns it into a do {} while () loop, which guarantees that
we scan at least once, while keeping the behaviour exactly the same for
the cases in which total_scan >= batch_size.
Signed-off-by: Glauber Costa
Reviewed-by: Dave Chinner
Reviewed-by: Carlos Maiolino
CC: "Theodore Ts'o"
CC: Al Viro
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 88c5fed..fc6d45a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 					nr_pages_scanned, lru_pages,
 					max_pass, delta, total_scan);
 
-		while (total_scan >= batch_size) {
+		do {
 			int nr_before;
 
 			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
@@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 			total_scan -= batch_size;
 
 			cond_resched();
-		}
+		} while (total_scan >= batch_size);
 
 		/*
 		 * move the unused scan count back into the shrinker in a
-- 
1.8.1.4