Date: Tue, 6 Aug 2024 10:52:35 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Kirill A. Shutemov"
Cc: Andrew Morton, "Borislav Petkov (AMD)", Mel Gorman, Vlastimil Babka,
	Tom Lendacky, "Matthew Wilcox (Oracle)", David Hildenbrand,
	Johannes Weiner, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/8] mm: Accept memory in __alloc_pages_bulk().
References: <20240805145940.2911011-1-kirill.shutemov@linux.intel.com>
	<20240805145940.2911011-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20240805145940.2911011-3-kirill.shutemov@linux.intel.com>

On Mon, Aug 05, 2024 at 05:59:34PM +0300, Kirill A. Shutemov wrote:
> Currently, the kernel only accepts memory in get_page_from_freelist(),
> but there is another path that directly takes pages from free lists -
> __alloc_pages_bulk(). This function can consume all accepted memory and
> will resort to __alloc_pages_noprof() if necessary.
> 
> Conditionally accept memory in __alloc_pages_bulk().
> 
> The same issue may arise due to deferred page initialization. Kick the
> deferred initialization machinery before abandoning the zone, as the
> kernel does in get_page_from_freelist().
> 
> Signed-off-by: Kirill A. Shutemov

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
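
For readers who want a picture of the path this patch covers, below is a
minimal, hypothetical caller-side sketch (not part of the patch), assuming
the alloc_pages_bulk_array() wrapper as it exists in kernels of this era;
the function name fill_page_array() is made up for illustration. It grabs a
batch of pages straight off the free lists and falls back to single-page
allocations for whatever the bulk path could not provide:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical caller of the bulk allocator, for illustration only. */
static int fill_page_array(struct page **pages, unsigned long nr)
{
	unsigned long populated, i;

	/* Bulk path: takes pages directly off the per-zone free lists. */
	populated = alloc_pages_bulk_array(GFP_KERNEL, nr, pages);
	if (populated == nr)
		return 0;

	/* Fall back to single-page allocations for the missing entries. */
	for (i = 0; i < nr; i++) {
		if (pages[i])
			continue;
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_free;
	}
	return 0;

err_free:
	/* Undo what we allocated so far before reporting failure. */
	while (i--)
		if (pages[i])
			__free_page(pages[i]);
	return -ENOMEM;
}
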
> ---
>  mm/page_alloc.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index aa9b1eaa638c..90a1f01d5996 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4576,12 +4576,25 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  			goto failed;
>  		}
>  
> +		cond_accept_memory(zone, 0);
> +retry_this_zone:
>  		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
>  		if (zone_watermark_fast(zone, 0, mark,
>  				zonelist_zone_idx(ac.preferred_zoneref),
>  				alloc_flags, gfp)) {
>  			break;
>  		}
> +
> +		if (cond_accept_memory(zone, 0))
> +			goto retry_this_zone;
> +
> +#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> +		/* Try again if zone has deferred pages */
> +		if (deferred_pages_enabled()) {
> +			if (_deferred_grow_zone(zone, 0))
> +				goto retry_this_zone;
> +		}
> +#endif
>  	}
>  
>  	/*
> -- 
> 2.43.0
> 
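
For anyone reading the hunk without the surrounding code, here is a rough,
condensed paraphrase of where it sits inside alloc_pages_bulk_noprof() once
applied. This is a simplified sketch reconstructed around the quoted hunk,
not the exact kernel source: the accept/deferred-init retries happen while a
zone is being selected, and only if no zone can satisfy the whole batch does
the function fall back to a single __alloc_pages_noprof() call, as the
commit message says:

/* Simplified sketch of the zone-selection part of alloc_pages_bulk_noprof(). */
z = ac.preferred_zoneref;
for_next_zone_zonelist_nodemask(zone, z, ac.highest_zoneidx, ac.nodemask) {
	/* ... cpuset and locality checks elided ... */

	cond_accept_memory(zone, 0);		/* new: accept before the check */
retry_this_zone:
	mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
	if (zone_watermark_fast(zone, 0, mark,
			zonelist_zone_idx(ac.preferred_zoneref),
			alloc_flags, gfp))
		break;				/* this zone can serve the whole batch */

	if (cond_accept_memory(zone, 0))	/* new: accept more, then re-check */
		goto retry_this_zone;

#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
	if (deferred_pages_enabled() &&		/* new: initialize deferred pages */
	    _deferred_grow_zone(zone, 0))
		goto retry_this_zone;
#endif
}

if (unlikely(!zone))
	goto failed;		/* no suitable zone: take the single-page slow path */

/* ... allocate nr_pages from the chosen zone's pcp lists ... */

failed:
	/* "resort to __alloc_pages_noprof() if necessary" */
	page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);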