Message-ID: <57E8F5CE.908@huawei.com>
Date: Mon, 26 Sep 2016 18:17:50 +0800
From: Xishi Qiu
Subject: Re: [RFC] mm: a question about high-order check in __zone_watermark_ok()
In-Reply-To: <20160926094333.GD28550@dhcp22.suse.cz>
To: Michal Hocko
Cc: Mel Gorman, Johannes Weiner, Vlastimil Babka, LKML, Linux MM, Yisheng Xie

On 2016/9/26 17:43, Michal Hocko wrote:
> On Mon 26-09-16 17:16:54, Xishi Qiu wrote:
>> On 2016/9/26 16:58, Michal Hocko wrote:
>>
>>> On Mon 26-09-16 16:47:57, Xishi Qiu wrote:
>>>> commit 97a16fc82a7c5b0cfce95c05dfb9561e306ca1b1
>>>> (mm, page_alloc: only enforce watermarks for order-0 allocations)
>>>> rewrote the high-order check in __zone_watermark_ok(), and I think it
>>>> quietly fixed a bug. Please see the following.
>>>>
>>>> Before this patch, the high-order check was:
>>>>
>>>> __zone_watermark_ok()
>>>> 	...
>>>> 	for (o = 0; o < order; o++) {
>>>> 		/* At the next order, this order's pages become unavailable */
>>>> 		free_pages -= z->free_area[o].nr_free << o;
>>>>
>>>> 		/* Require fewer higher order pages to be free */
>>>> 		min >>= 1;
>>>>
>>>> 		if (free_pages <= min)
>>>> 			return false;
>>>> 	}
>>>> 	...
>>>>
>>>> If we have CMA memory and we allocate a high-order movable page, the
>>>> check is correct.
>>>>
>>>> But if we allocate a high-order unmovable page (e.g. a kernel stack in
>>>> dup_task_struct()), and there are a lot of high-order CMA pages but few
>>>> high-order unmovable pages, the check still returns *true*, yet the
>>>> allocation will ultimately *fail*, because we cannot fall back from
>>>> MIGRATE_UNMOVABLE to MIGRATE_CMA, right?
>>>
>>> AFAIR the CMA wmark check was always tricky and the above commit has
>>> made the situation at least a bit clearer. Anyway, IIRC
>>>
>>> #ifdef CONFIG_CMA
>>> 	/* If allocation can't use CMA areas don't use free CMA pages */
>>> 	if (!(alloc_flags & ALLOC_CMA))
>>> 		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
>>> #endif
>>>
>>> 	if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
>>> 		return false;
>>>
>>> should reduce the problem, because a lot of CMA pages should just put us
>>> below the wmark + reserve boundary.
>>
>> Hi Michal,
>>
>> If we have many high-order CMA pages, and the remaining pages
>> (unmovable/movable/reclaimable) are also enough but fragmented, that
>> triggers the problem: when we allocate a high-order unmovable page, the
>> watermark check returns *true*, but the allocation will *fail*, right?
>
> As Vlastimil has written, there were known issues with the wmark checks
> and high-order requests.

Shall we backport it to stable?

Thanks,
Xishi Qiu
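[Editor's note: for reference, this is roughly the high-order check after
commit 97a16fc82a7c5b0cfce95c05dfb9561e306ca1b1, reconstructed as a sketch
from mainline of that era (around v4.4); exact details such as the
alloc_harder flag may differ from any given tree:]

	/* If this is an order-0 request then the watermark is fine */
	if (!order)
		return true;

	/* For a high-order request, check at least one suitable page is free */
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];
		int mt;

		if (!area->nr_free)
			continue;

		/* Atomic/harder requests may take a page of any migratetype */
		if (alloc_harder)
			return true;

		/* A page on any regular (non-CMA) free list will do */
		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (!list_empty(&area->free_list[mt]))
				return true;
		}

#ifdef CONFIG_CMA
		/* CMA free lists only satisfy requests allowed to use CMA */
		if ((alloc_flags & ALLOC_CMA) &&
		    !list_empty(&area->free_list[MIGRATE_CMA])) {
			return true;
		}
#endif
	}
	return false;

[Because this version walks the free lists per migratetype, a zone whose
high-order free pages are all MIGRATE_CMA no longer passes the check for an
unmovable request, which is the "quiet fix" discussed above.]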