Date: Mon, 23 May 2011 13:03:03 -0700
From: Andrew Morton
To: Mel Gorman
Cc: James Bottomley, Colin King, Raghavendra D Prabhu, Jan Kara,
	Chris Mason, Christoph Lameter, Pekka Enberg, Rik van Riel,
	Johannes Weiner, Minchan Kim, linux-fsdevel, linux-mm,
	linux-kernel, linux-ext4, stable
Subject: Re: [PATCH 2/2] mm: vmscan: Correctly check if reclaimer should schedule during shrink_slab
Message-Id: <20110523130303.6b7dad1c.akpm@linux-foundation.org>
In-Reply-To: <1306144435-2516-3-git-send-email-mgorman@suse.de>
References: <1306144435-2516-1-git-send-email-mgorman@suse.de>
	<1306144435-2516-3-git-send-email-mgorman@suse.de>

On Mon, 23 May 2011 10:53:55 +0100 Mel Gorman wrote:

> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying. It is expected that this is due to
> kswapd missing every cond_resched() point because:
>
> shrink_page_list() calls cond_resched() if inactive pages were isolated,
> which in turn may not happen if all_unreclaimable is set in
> shrink_zones(). If, for whatever reason, all_unreclaimable is
> set on all zones, we can miss calling cond_resched().
>
> balance_pgdat() only calls cond_resched() if the zones are not
> balanced. For a high-order allocation that is balanced, it
> checks order-0 again. During that window, order-0 might have
> become unbalanced, so it loops again for order-0 and returns
> to kswapd() that it was reclaiming for order-0. It can then
> find that a caller has rewoken kswapd for a high-order allocation
> and re-enter balance_pgdat() without ever calling cond_resched().
>
> shrink_slab() only calls cond_resched() if we are reclaiming slab
> pages. If there are a large number of direct reclaimers, the
> shrinker_rwsem can be contended and prevent kswapd from calling
> cond_resched().
>
> This patch modifies the shrink_slab() case. If the semaphore is
> contended, the caller will still check cond_resched(). After each
> successful call into a shrinker, the check for cond_resched() remains
> in case one shrinker is particularly slow.

So CONFIG_PREEMPT=y kernels don't exhibit this problem?

I'm still unconvinced that we know what's going on here. What's kswapd
*doing* with all those cycles?

And if kswapd is now scheduling away, who is doing that work instead?
Direct reclaim?
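
For anyone reading along, a rough sketch of the flow the changelog above
describes (an approximation of the 2.6.39-era shrink_slab(); the parameter
list and the shrinker-walk body are simplified, so this is not the actual
diff):

unsigned long shrink_slab(unsigned long scanned, gfp_t gfp_mask,
			  unsigned long lru_pages)
{
	struct shrinker *shrinker;
	unsigned long ret = 0;

	if (!down_read_trylock(&shrinker_rwsem)) {
		/* Assume we'll be able to shrink next time */
		ret = 1;
		goto out;
	}

	list_for_each_entry(shrinker, &shrinker_list, list) {
		/* ... call into the shrinker and free slab objects ... */

		/* existing check, kept in case one shrinker is slow */
		cond_resched();
	}

	up_read(&shrinker_rwsem);
out:
	/*
	 * Per the changelog: reached even when shrinker_rwsem was
	 * contended, so kswapd still gets a chance to schedule.
	 */
	cond_resched();
	return ret;
}

i.e. the cond_resched() after the out: label runs whether or not the
trylock succeeded, which is the case the patch is said to address.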