Date: Sun, 29 May 2016 23:25:40 +0200
From: Oleg Nesterov
To: Michal Hocko
Cc: Andrew Morton, Andrea Arcangeli, Mel Gorman,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: zone_reclaimable() leads to livelock in __alloc_pages_slowpath()
Message-ID: <20160529212540.GA15180@redhat.com>
References: <20160520202817.GA22201@redhat.com>
	<20160523072904.GC2278@dhcp22.suse.cz>
	<20160523151419.GA8284@redhat.com>
	<20160524071619.GB8259@dhcp22.suse.cz>
	<20160524224341.GA11961@redhat.com>
	<20160525120957.GH20132@dhcp22.suse.cz>
In-Reply-To: <20160525120957.GH20132@dhcp22.suse.cz>

Sorry for the delay,

On 05/25, Michal Hocko wrote:
>
> On Wed 25-05-16 00:43:41, Oleg Nesterov wrote:
> >
> > But. It _seems to me_ that the kernel "leaks" some pages in the
> > LRU_INACTIVE_FILE list because inactive_file_is_low() returns the wrong
> > value. And do not even ask me why I think so, unlikely I will be able to
> > explain ;) to remind, I never tried to read vmscan.c before.

No, this is not because of inactive_file_is_low(), but

> > But. if I change lruvec_lru_size()
> >
> >	-	return zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru);
> >	+	return zone_page_state_snapshot(lruvec_zone(lruvec), NR_LRU_BASE + lru);
> >
> > the problem goes away too.

Yes,

> This is a bit surprising but my testing shows that the result shouldn't
> make much difference. I can see some discrepancies between lru_vec size
> and zone_reclaimable_pages but they are too small to actually matter.

Yes, the difference is small, but it does matter.

I do not pretend I understand all of this, but finally it seems I understand
what is going on on my system when it hangs. At least, I understand why the
change in lruvec_lru_size() or calculate_normal_threshold() makes a
difference. This single change in get_scan_count(), under the
for_each_evictable_lru() loop,

	-	size = lruvec_lru_size(lruvec, lru);
	+	size = zone_page_state_snapshot(lruvec_zone(lruvec), NR_LRU_BASE + lru);

fixes the problem too.

Without this change shrink*() continues to scan the LRU_ACTIVE_FILE list
while it is empty. LRU_INACTIVE_FILE is not empty (it has just a few pages),
but we do not even try to scan it, because lruvec_lru_size() returns zero.
Then later we recheck zone_reclaimable(), and it does notice the
INACTIVE_FILE counter because it uses the _snapshot variant; this leads to
the livelock.

I guess this doesn't really matter, but in my particular case these
ACTIVE/INACTIVE counters were screwed up by the recent
putback_inactive_pages() logic. The pages we "leak" in the INACTIVE list
were recently moved from the ACTIVE to the INACTIVE list, and this updated
only the per-cpu ->vm_stat_diff[] counters, so the "non snapshot"
lruvec_lru_size() in get_scan_count() sees the "old" numbers.
I even added more printk's, and yes, when the system hangs I have something
like, say,

	->vm_stat[ACTIVE]        = NR;	// small number
	->vm_stat_diff[ACTIVE]   = -NR;	// so it is actually zero, but
					// get_scan_count() sees NR

	->vm_stat[INACTIVE]      = 0;	// this is what get_scan_count() sees
	->vm_stat_diff[INACTIVE] = NR;	// and this is what zone_reclaimable() sees

Oleg.