From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5925784E.802@huawei.com>
Date: Wed, 24 May 2017 20:10:54 +0800
From: Xishi Qiu
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Vlastimil Babka
CC: Yisheng Xie , Kefeng Wang , , , zhongjiang
Subject: Re: [Question] Mlocked count will not be decreased
References: <85591559-2a99-f46b-7a5a-bc7affb53285@huawei.com> <93f1b063-6288-d109-117d-d3c1cf152a8e@suse.cz> <5925709F.1030105@huawei.com>
In-Reply-To:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017/5/24 19:52, Vlastimil Babka wrote:
> On 05/24/2017 01:38 PM, Xishi Qiu wrote:
>>>
>>> Race condition with what? Who else would isolate our pages?
>>>
>>
>> Hi Vlastimil,
>>
>> I found the root cause: if the page is not cached in the current CPU's
>> pagevec, lru_add_drain() will not push it to the LRU. So we should
>> handle the failure case in mlock_vma_page().
>
> Yeah that would explain it.
>
>> follow_page_pte()
>> ...
>> 	if (page->mapping && trylock_page(page)) {
>> 		lru_add_drain();  /* push cached pages to LRU */
>> 		/*
>> 		 * Because we lock page here, and migration is
>> 		 * blocked by the pte's page reference, and we
>> 		 * know the page is still mapped, we don't even
>> 		 * need to check for file-cache page truncation.
>> 		 */
>> 		mlock_vma_page(page);
>> 		unlock_page(page);
>> 	}
>> ...
>>
>> I think we should add yisheng's patch, and also the following change.
>> I think it is better than using lru_add_drain_all().
>
> I agree about yisheng's fix (but v2 didn't address my comments). I don't
> think we should add the hunk below, as that deviates from the rest of
> the design.

Hi Vlastimil,

The rest of the design is that mlock should always succeed here, right?
If we don't handle the failure case, the page will end up on the anon/file
LRU list later when __pagevec_lru_add() is called, but NR_MLOCK has already
been increased, which is wrong, right?

Thanks,
Xishi Qiu

>
> Thanks,
> Vlastimil
>
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index 3d3ee6c..ca2aeb9 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -88,6 +88,11 @@ void mlock_vma_page(struct page *page)
>>  		count_vm_event(UNEVICTABLE_PGMLOCKED);
>>  		if (!isolate_lru_page(page))
>>  			putback_lru_page(page);
>> +		else {
>> +			ClearPageMlocked(page);
>> +			mod_zone_page_state(page_zone(page), NR_MLOCK,
>> +					-hpage_nr_pages(page));
>> +		}
>>  	}
>>  }
>>
>> Thanks,
>> Xishi Qiu
>>