From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Apr 2023 14:12:27 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, Kaiyang Zhao, Vlastimil Babka, David Rientjes,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [RFC PATCH 08/26] mm: page_alloc: claim blocks during compaction capturing
Message-ID: <20230421131227.k2afmhb6kejdbhui@techsingularity.net>
References: <20230418191313.268131-1-hannes@cmpxchg.org>
 <20230418191313.268131-9-hannes@cmpxchg.org>
In-Reply-To: <20230418191313.268131-9-hannes@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
On Tue, Apr 18, 2023 at 03:12:55PM -0400, Johannes Weiner wrote:
> When capturing a whole block, update the migratetype accordingly. For
> example, a THP allocation might capture an unmovable block. If the THP
> gets split and partially freed later, the remainder should group up
> with movable allocations.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  mm/internal.h   |  1 +
>  mm/page_alloc.c | 42 ++++++++++++++++++++++++------------------
>  2 files changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 024affd4e4b5..39f65a463631 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -432,6 +432,7 @@ struct compact_control {
>   */
>  struct capture_control {
>  	struct compact_control *cc;
> +	int migratetype;
>  	struct page *page;
>  };
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4d20513c83be..8e5996f8b4b4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -615,6 +615,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
>  				page_to_pfn(page), MIGRATETYPE_MASK);
>  }
>
> +static void change_pageblock_range(struct page *pageblock_page,
> +				   int start_order, int migratetype)
> +{
> +	int nr_pageblocks = 1 << (start_order - pageblock_order);
> +
> +	while (nr_pageblocks--) {
> +		set_pageblock_migratetype(pageblock_page, migratetype);
> +		pageblock_page += pageblock_nr_pages;
> +	}
> +}
> +
>  #ifdef CONFIG_DEBUG_VM
>  static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  {
> @@ -962,14 +973,19 @@ compaction_capture(struct capture_control *capc, struct page *page,
>  	    is_migrate_isolate(migratetype))
>  		return false;
>
> -	/*
> -	 * Do not let lower order allocations pollute a movable pageblock.
> -	 * This might let an unmovable request use a reclaimable pageblock
> -	 * and vice-versa but no more than normal fallback logic which can
> -	 * have trouble finding a high-order free page.
> -	 */
> -	if (order < pageblock_order && migratetype == MIGRATE_MOVABLE)
> +	if (order >= pageblock_order) {
> +		migratetype = capc->migratetype;
> +		change_pageblock_range(page, order, migratetype);
> +	} else if (migratetype == MIGRATE_MOVABLE) {
> +		/*
> +		 * Do not let lower order allocations pollute a
> +		 * movable pageblock. This might let an unmovable
> +		 * request use a reclaimable pageblock and vice-versa
> +		 * but no more than normal fallback logic which can
> +		 * have trouble finding a high-order free page.
> +		 */
>  		return false;
> +	}
>

For capturing pageblock order or larger, why not unconditionally make
the block MOVABLE? Even if it's a zero page allocation, it would be nice
to keep the pageblock for movable pages after the split as long as
possible.

-- 
Mel Gorman
SUSE Labs