From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, alan@lxorguk.ukuu.org.uk, Satoru Moriya,
	Minchan Kim, Rik van Riel, Jerome Marchand, Andrew Morton,
	Linus Torvalds
Subject: [ 203/218] mm: avoid swapping out with swappiness==0
Date: Fri, 28 Sep 2012 13:17:00 -0700
Message-Id: <20120928201524.875299290@linuxfoundation.org>
In-Reply-To: <20120928201501.208384923@linuxfoundation.org>
References: <20120928201501.208384923@linuxfoundation.org>

3.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Satoru Moriya

commit fe35004fbf9eaf67482b074a2e032abb9c89b1dd upstream.

Sometimes we'd like to avoid swapping out anonymous memory.  In
particular, we'd like to avoid swapping out the pages of important
processes or process groups while there is a reasonable amount of
pagecache in RAM, so that we can satisfy our customers' requirements.

On the other hand, we can control how aggressively the kernel swaps
memory pages via /proc/sys/vm/swappiness for global reclaim and via
/sys/fs/cgroup/memory/memory.swappiness for each memcg.  But with the
current reclaim implementation, the kernel may swap out anonymous
pages even if we set swappiness==0 and there is pagecache in RAM.

This patch changes the behavior with swappiness==0.  If we set
swappiness==0, the kernel does not swap out at all; for global
reclaim, swapping only resumes once the amount of free pages plus
file-backed pages in a zone has been reduced to something very small
(nr_free + nr_filebacked < high watermark).

Signed-off-by: Satoru Moriya
Acked-by: Minchan Kim
Reviewed-by: Rik van Riel
Acked-by: Jerome Marchand
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/vmscan.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1983,10 +1983,10 @@ static void get_scan_count(struct mem_cg
 	 * proportional to the fraction of recently scanned pages on
 	 * each list that were recently referenced and in active use.
 	 */
-	ap = (anon_prio + 1) * (reclaim_stat->recent_scanned[0] + 1);
+	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
 	ap /= reclaim_stat->recent_rotated[0] + 1;
 
-	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
+	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
 	spin_unlock_irq(&mz->zone->lru_lock);
 
@@ -1999,7 +1999,7 @@ out:
 		unsigned long scan;
 
 		scan = zone_nr_lru_pages(mz, lru);
-		if (priority || noswap) {
+		if (priority || noswap || !vmscan_swappiness(mz, sc)) {
 			scan >>= priority;
 			if (!scan && force_scan)
 				scan = SWAP_CLUSTER_MAX;
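
To illustrate the effect of dropping the "+ 1", here is a minimal
standalone sketch (plain userspace C, not kernel code) that replays
the get_scan_count() weighting arithmetic before and after the patch.
The recent_scanned/recent_rotated values are invented for
illustration; only the formulas mirror mm/vmscan.c.

	#include <stdio.h>

	int main(void)
	{
		unsigned long anon_prio = 0;	/* vmscan_swappiness() == 0 */
		unsigned long file_prio = 200;	/* 200 - swappiness */
		/* hypothetical reclaim statistics */
		unsigned long recent_scanned[2] = { 1000, 1000 };
		unsigned long recent_rotated[2] = { 100, 100 };

		/* Old formula: the "+ 1" keeps ap nonzero even at
		 * swappiness==0, so anon pages still get scan pressure. */
		unsigned long ap_old = (anon_prio + 1) * (recent_scanned[0] + 1);
		ap_old /= recent_rotated[0] + 1;
		unsigned long fp_old = (file_prio + 1) * (recent_scanned[1] + 1);
		fp_old /= recent_rotated[1] + 1;

		/* New formula: ap collapses to exactly 0 at swappiness==0. */
		unsigned long ap_new = anon_prio * (recent_scanned[0] + 1);
		ap_new /= recent_rotated[0] + 1;
		unsigned long fp_new = file_prio * (recent_scanned[1] + 1);
		fp_new /= recent_rotated[1] + 1;

		printf("old: ap=%lu fp=%lu\n", ap_old, fp_old);	/* ap=9 fp=1992 */
		printf("new: ap=%lu fp=%lu\n", ap_new, fp_new);	/* ap=0 fp=1982 */
		return 0;
	}

With the old formula the anon lists still receive a small share of the
scan budget (ap=9 out of ap+fp=2001 in this example), so anonymous
pages could be swapped out despite swappiness==0.  The second hunk's
added !vmscan_swappiness() check complements this: it forces the zero
fraction to be applied even at priority 0, where the old code would
have scanned the full LRU size unconditionally.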