From mboxrd@z Thu Jan 1 00:00:00 1970
Reply-To: "Hillf Danton" <hillf.zj@alibaba-inc.com>
From: "Hillf Danton" <hillf.zj@alibaba-inc.com>
To: "'Michal Hocko'", "'Andrew Morton'"
Cc: "'Linus Torvalds'", "'Johannes Weiner'", "'Mel Gorman'", "'David Rientjes'", "'Tetsuo Handa'", "'KAMEZAWA Hiroyuki'", "'LKML'", "'Michal Hocko'"
References: <1450203586-10959-1-git-send-email-mhocko@kernel.org> <1454015979-9985-1-git-send-email-mhocko@kernel.org>
In-Reply-To: <1454015979-9985-1-git-send-email-mhocko@kernel.org>
Subject: Re: [PATCH 5/3] mm, vmscan: make zone_reclaimable_pages more precise
Date: Fri, 29 Jan 2016 11:41:53 +0800
Message-ID: <05f101d15a46$fef53b70$fcdfb250$@alibaba-inc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> From: Michal Hocko
>
> zone_reclaimable_pages is used in should_reclaim_retry which uses it to
> calculate the target for the watermark check. This means that precise
> numbers are important for the correct decision. zone_reclaimable_pages
> uses zone_page_state which can contain stale data with per-cpu diffs
> not synced yet (the last vmstat_update might have run 1s in the past).
>
> Use zone_page_state_snapshot in zone_reclaimable_pages instead.
> None
> of the current callers is in a hot path where getting the precise value
> (which involves per-cpu iteration) would cause an unreasonable overhead.
>
> Suggested-by: David Rientjes
> Signed-off-by: Michal Hocko
> ---

Acked-by: Hillf Danton

>  mm/vmscan.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 489212252cd6..9145e3f89eab 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -196,21 +196,21 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>  {
>  	unsigned long nr;
>
> -	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
> -		zone_page_state(zone, NR_INACTIVE_FILE) +
> -		zone_page_state(zone, NR_ISOLATED_FILE);
> +	nr = zone_page_state_snapshot(zone, NR_ACTIVE_FILE) +
> +		zone_page_state_snapshot(zone, NR_INACTIVE_FILE) +
> +		zone_page_state_snapshot(zone, NR_ISOLATED_FILE);
>
>  	if (get_nr_swap_pages() > 0)
> -		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
> -			zone_page_state(zone, NR_INACTIVE_ANON) +
> -			zone_page_state(zone, NR_ISOLATED_ANON);
> +		nr += zone_page_state_snapshot(zone, NR_ACTIVE_ANON) +
> +			zone_page_state_snapshot(zone, NR_INACTIVE_ANON) +
> +			zone_page_state_snapshot(zone, NR_ISOLATED_ANON);
>
>  	return nr;
>  }
>
>  bool zone_reclaimable(struct zone *zone)
>  {
> -	return zone_page_state(zone, NR_PAGES_SCANNED) <
> +	return zone_page_state_snapshot(zone, NR_PAGES_SCANNED) <
>  		zone_reclaimable_pages(zone) * 6;
>  }
>
> --
> 2.7.0.rc3