From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5925709F.1030105@huawei.com>
Date: Wed, 24 May 2017 19:38:07 +0800
From: Xishi Qiu
To: Vlastimil Babka
CC: Yisheng Xie, Kefeng Wang, zhongjiang
Subject: Re: [Question] Mlocked count will not be decreased
References: <85591559-2a99-f46b-7a5a-bc7affb53285@huawei.com> <93f1b063-6288-d109-117d-d3c1cf152a8e@suse.cz>
In-Reply-To: <93f1b063-6288-d109-117d-d3c1cf152a8e@suse.cz>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017/5/24 18:32, Vlastimil Babka wrote:
> On 05/24/2017 10:32 AM, Yisheng Xie wrote:
>> Hi Kefeng,
>> Could you please try this patch.
>>
>> Thanks
>> Yisheng Xie
>> -------------
>> From a70ae975756e8e97a28d49117ab25684da631689 Mon Sep 17 00:00:00 2001
>> From: Yisheng Xie
>> Date: Wed, 24 May 2017 16:01:24 +0800
>> Subject: [PATCH] mlock: fix mlock count can not decrease in race condition
>>
>> Kefeng reported that after running the following test, the Mlocked count
>> in /proc/meminfo cannot be decreased:
>> [1] testcase
>> linux:~ # cat test_mlockal
>> grep Mlocked /proc/meminfo
>> for j in `seq 0 10`
>> do
>> 	for i in `seq 4 15`
>> 	do
>> 		./p_mlockall >> log &
>> 	done
>> 	sleep 0.2
>> done
>> sleep 5 # wait some time to let mlock decrease
>> grep Mlocked /proc/meminfo
>>
>> linux:~ # cat p_mlockall.c
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <sys/mman.h>
>>
>> #define SPACE_LEN 4096
>>
>> int main(int argc, char ** argv)
>> {
>> 	int ret;
>> 	void *adr = malloc(SPACE_LEN);
>>
>> 	if (!adr)
>> 		return -1;
>>
>> 	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
>> 	printf("mlockall ret = %d\n", ret);
>>
>> 	ret = munlockall();
>> 	printf("munlockall ret = %d\n", ret);
>>
>> 	free(adr);
>> 	return 0;
>> }
>>
>> In __munlock_pagevec() we ClearPageMlocked(), but page isolation can
>> fail in a race condition, and such pages are not counted into
>> delta_munlocked, which causes the Mlocked count to stay elevated.
>
> Race condition with what? Who else would isolate our pages?
>

Hi Vlastimil,

I found the root cause: if the page is not cached in the current CPU's pagevec, lru_add_drain() will not push it to the LRU, so the subsequent isolate_lru_page() in mlock_vma_page() can fail. We should handle that failure case in mlock_vma_page().

follow_page_pte()
	...
	if (page->mapping && trylock_page(page)) {
		lru_add_drain();	/* push cached pages to LRU */
		/*
		 * Because we lock page here, and migration is
		 * blocked by the pte's page reference, and we
		 * know the page is still mapped, we don't even
		 * need to check for file-cache page truncation.
		 */
		mlock_vma_page(page);
		unlock_page(page);
	}
	...

I think we should apply Yisheng's patch, and also add the following change. I think this is better than using lru_add_drain_all().
diff --git a/mm/mlock.c b/mm/mlock.c
index 3d3ee6c..ca2aeb9 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -88,6 +88,11 @@ void mlock_vma_page(struct page *page)
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 		if (!isolate_lru_page(page))
 			putback_lru_page(page);
+		else {
+			ClearPageMlocked(page);
+			mod_zone_page_state(page_zone(page), NR_MLOCK,
+					-hpage_nr_pages(page));
+		}
 	}
 }

Thanks,
Xishi Qiu