From: Michal Hocko
To: Andrew Morton
Cc: Linus Torvalds, Johannes Weiner, Mel Gorman, David Rientjes,
	Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki, LKML,
	Michal Hocko
Subject: [PATCH 5/3] mm, vmscan: make zone_reclaimable_pages more precise
Date: Thu, 28 Jan 2016 22:19:39 +0100
Message-Id: <1454015979-9985-1-git-send-email-mhocko@kernel.org>
X-Mailer: git-send-email 2.7.0.rc3
In-Reply-To: <1450203586-10959-1-git-send-email-mhocko@kernel.org>
References: <1450203586-10959-1-git-send-email-mhocko@kernel.org>

From: Michal Hocko

zone_reclaimable_pages is used by should_reclaim_retry to calculate the
target for the watermark check, so precise numbers are important for
the correct decision. zone_reclaimable_pages currently relies on
zone_page_state, which can return stale data because per-cpu diffs
might not have been synced yet (the last vmstat_update might have run
up to 1s in the past).

Use zone_page_state_snapshot in zone_reclaimable_pages instead. None of
the current callers is in a hot path where getting the precise value
(which involves per-cpu iteration) would cause an unreasonable
overhead.

Suggested-by: David Rientjes
Signed-off-by: Michal Hocko
---
 mm/vmscan.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 489212252cd6..9145e3f89eab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -196,21 +196,21 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
-	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
-	     zone_page_state(zone, NR_INACTIVE_FILE) +
-	     zone_page_state(zone, NR_ISOLATED_FILE);
+	nr = zone_page_state_snapshot(zone, NR_ACTIVE_FILE) +
+	     zone_page_state_snapshot(zone, NR_INACTIVE_FILE) +
+	     zone_page_state_snapshot(zone, NR_ISOLATED_FILE);
 
 	if (get_nr_swap_pages() > 0)
-		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
-		      zone_page_state(zone, NR_INACTIVE_ANON) +
-		      zone_page_state(zone, NR_ISOLATED_ANON);
+		nr += zone_page_state_snapshot(zone, NR_ACTIVE_ANON) +
+		      zone_page_state_snapshot(zone, NR_INACTIVE_ANON) +
+		      zone_page_state_snapshot(zone, NR_ISOLATED_ANON);
 
 	return nr;
 }
 
 bool zone_reclaimable(struct zone *zone)
 {
-	return zone_page_state(zone, NR_PAGES_SCANNED) <
+	return zone_page_state_snapshot(zone, NR_PAGES_SCANNED) <
 		zone_reclaimable_pages(zone) * 6;
 }
 
-- 
2.7.0.rc3
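
For reference, the whole difference between the two accessors is whether
the pending per-cpu deltas are folded into the global counter. Below is a
trimmed sketch modeled on include/linux/vmstat.h from kernels of that era;
it is illustrative context only, not part of this patch, and the comments
are editorial:

/*
 * Cheap reader: only looks at the global atomic counter. Per-cpu
 * deltas accumulated since the last vmstat_update are not visible,
 * so the value can be stale by up to one sync interval.
 */
static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}

/*
 * Precise(r) reader: starts from the global counter and then walks
 * every online CPU to add in its not-yet-folded vm_stat_diff. There
 * is still no synchronization against concurrent updates, so the
 * result is a best-effort snapshot rather than an exact value.
 */
static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);

#ifdef CONFIG_SMP
	int cpu;
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}

The snapshot variant has to iterate over all online CPUs, which is why the
changelog points out that none of the current callers sits in a hot path;
the plain zone_page_state stays O(1) and remains preferable wherever
staleness of up to one vmstat_update interval is tolerable.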