From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jaewon Kim,
    GyeongHwan Hong, Mel Gorman, Minchan Kim, Baoquan He,
    Vlastimil Babka, Johannes Weiner, Michal Hocko, Yong-Taek Lee,
    stable@vger.kernel.org, Andrew Morton
Subject: [PATCH 5.15 64/69] page_alloc: fix invalid watermark check on a negative value
Date: Mon, 1 Aug 2022 13:47:28 +0200
Message-Id: <20220801114137.047337783@linuxfoundation.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220801114134.468284027@linuxfoundation.org>
References: <20220801114134.468284027@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Jaewon Kim

commit 9282012fc0aa248b77a69f5eb802b67c5a16bb13 upstream.

There was a report that a task was stuck waiting in throttle_direct_reclaim,
with pgscan_direct_throttle in vmstat steadily increasing.

This is a bug where zone_watermark_fast() returns true even when free pages
are very low. Commit f27ce0e14088 ("page_alloc: consider highatomic reserve
in watermark fast") changed the fast watermark check to take the highatomic
reserve into account, but it did not handle the negative case that can occur
when the reserved_highatomic pageblock count is larger than the actual free
pages.

If the watermark is treated as OK on such a negative value, order-0
allocation contexts will keep consuming free pages without entering direct
reclaim, until free memory is depleted apart from the highatomic reserve.
Allocating contexts then stall in throttle_direct_reclaim. This symptom is
easy to hit on a system where the min watermark is low and other reclaimers
such as kswapd do not free pages quickly.

Handle the negative case by clamping with min().
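
For illustration only, here is a minimal userspace sketch of the failure mode
described above. The values are invented, and the unsigned-comparison reading
is an interpretation of how a negative intermediate result can slip past the
order-0 fast check, assuming the usual prototype where mark is an unsigned
long: the sum mark + lowmem_reserve is unsigned, so a negative signed value
is promoted to a huge unsigned value and the check wrongly succeeds.

/*
 * Standalone illustration, not kernel code. Names mirror the patch
 * (free_pages, reserved, mark) but the numbers are made up.
 */
#include <stdio.h>

int main(void)
{
	long free_pages = 128;        /* very little actually free        */
	long reserved = 1024;         /* overestimated highatomic reserve */
	unsigned long mark = 256;     /* watermark the zone must exceed   */

	long fast_free = free_pages - reserved;   /* -896 */

	/* signed -896 is converted to unsigned long for the comparison */
	if (fast_free > mark)
		printf("fast check passes despite almost no free pages\n");

	return 0;
}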
Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com
Fixes: f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast")
Signed-off-by: Jaewon Kim
Reported-by: GyeongHwan Hong
Acked-by: Mel Gorman
Cc: Minchan Kim
Cc: Baoquan He
Cc: Vlastimil Babka
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Yong-Taek Lee
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/page_alloc.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3928,11 +3928,15 @@ static inline bool zone_watermark_fast(s
 	 * need to be calculated.
 	 */
 	if (!order) {
-		long fast_free;
+		long usable_free;
+		long reserved;
 
-		fast_free = free_pages;
-		fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags);
-		if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
+		usable_free = free_pages;
+		reserved = __zone_watermark_unusable_free(z, 0, alloc_flags);
+
+		/* reserved may over estimate high-atomic reserves. */
+		usable_free -= min(usable_free, reserved);
+		if (usable_free > mark + z->lowmem_reserve[highest_zoneidx])
 			return true;
 	}
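
As a quick sanity check of the patched path, a hedged userspace sketch (the
kernel's min() replaced by a local macro, same invented numbers as above)
showing that clamping with min() keeps usable_free at zero, so the fast check
fails and the caller falls back to the full __zone_watermark_ok() path instead
of wrongly reporting the watermark as met.

#include <stdio.h>

/* stand-in for the kernel's min() macro, for this illustration only */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	long free_pages = 128;
	long reserved = 1024;
	unsigned long mark = 256;

	long usable_free = free_pages;
	usable_free -= MIN(usable_free, reserved);   /* clamps at 0 */

	if (usable_free > mark)
		printf("fast path: watermark OK\n");
	else
		printf("fast path: not OK, fall back to the full check\n");

	return 0;
}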