Subject: Re: [PATCH 7/9] mm: Free up PG_slab
From: Miaohe Lin
To: "Matthew Wilcox (Oracle)"
CC: David Hildenbrand, Vlastimil Babka, Muchun Song, Oscar Salvador, Andrew Morton
Date: Fri, 22 Mar 2024 17:20:03 +0800
References: <20240321142448.1645400-1-willy@infradead.org> <20240321142448.1645400-8-willy@infradead.org>
In-Reply-To: <20240321142448.1645400-8-willy@infradead.org>
On 2024/3/21 22:24, Matthew Wilcox (Oracle) wrote:
> Reclaim the Slab page flag by using a spare bit in PageType.  We are
> perennially short of page flags for various purposes, and now that
> the original SLAB allocator has been retired, SLUB does not use the
> mapcount/page_type field.  This lets us remove a number of special cases
> for ignoring mapcount on Slab pages.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/linux/page-flags.h     | 21 +++++++++++++++++----
>  include/trace/events/mmflags.h |  2 +-
>  mm/memory-failure.c            |  9 ---------
>  mm/slab.h                      |  2 +-
>  4 files changed, 19 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 94eb8a11a321..73e0b17c7728 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -109,7 +109,6 @@ enum pageflags {
>  	PG_active,
>  	PG_workingset,
>  	PG_error,
> -	PG_slab,
>  	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use*/
>  	PG_arch_1,
>  	PG_reserved,
> @@ -524,7 +523,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
>  	TESTCLEARFLAG(Active, active, PF_HEAD)
>  PAGEFLAG(Workingset, workingset, PF_HEAD)
>  	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
> -__PAGEFLAG(Slab, slab, PF_NO_TAIL)
>  PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	/* Used by some filesystems */
>
>  /* Xen */
> @@ -931,7 +929,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
>  #endif
>
>  /*
> - * For pages that are never mapped to userspace (and aren't PageSlab),
> + * For pages that are never mapped to userspace,
>   * page_type may be used.  Because it is initialised to -1, we invert the
>   * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
>   * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
> @@ -947,6 +945,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
>  #define PG_table	0x00000200
>  #define PG_guard	0x00000400
>  #define PG_hugetlb	0x00000800
> +#define PG_slab		0x00001000
>
>  #define PageType(page, flag)						\
>  	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> @@ -1041,6 +1040,20 @@ PAGE_TYPE_OPS(Table, table, pgtable)
>   */
>  PAGE_TYPE_OPS(Guard, guard, guard)
>
> +FOLIO_TYPE_OPS(slab, slab)
> +
> +/**
> + * PageSlab - Determine if the page belongs to the slab allocator
> + * @page: The page to test.
> + *
> + * Context: Any context.
> + * Return: True for slab pages, false for any other kind of page.
> + */
> +static inline bool PageSlab(const struct page *page)
> +{
> +	return folio_test_slab(page_folio(page));
> +}
> +
>  #ifdef CONFIG_HUGETLB_PAGE
>  FOLIO_TYPE_OPS(hugetlb, hugetlb)
>  #else
> @@ -1121,7 +1134,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
>  	(1UL << PG_lru		| 1UL << PG_locked	| \
>  	 1UL << PG_private	| 1UL << PG_private_2	| \
>  	 1UL << PG_writeback	| 1UL << PG_reserved	| \
> -	 1UL << PG_slab		| 1UL << PG_active	| \
> +	 1UL << PG_active	| \
>  	 1UL << PG_unevictable	| __PG_MLOCKED | LRU_GEN_MASK)
>
>  /*
> diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> index d55e53ac91bd..e46d6e82765e 100644
> --- a/include/trace/events/mmflags.h
> +++ b/include/trace/events/mmflags.h
> @@ -107,7 +107,6 @@
>  	DEF_PAGEFLAG_NAME(lru),					\
>  	DEF_PAGEFLAG_NAME(active),				\
>  	DEF_PAGEFLAG_NAME(workingset),				\
> -	DEF_PAGEFLAG_NAME(slab),				\
>  	DEF_PAGEFLAG_NAME(owner_priv_1),			\
>  	DEF_PAGEFLAG_NAME(arch_1),				\
>  	DEF_PAGEFLAG_NAME(reserved),				\
> @@ -135,6 +134,7 @@ IF_HAVE_PG_ARCH_X(arch_3)
>  #define DEF_PAGETYPE_NAME(_name) { PG_##_name, __stringify(_name) }
>
>  #define __def_pagetype_names						\
> +	DEF_PAGETYPE_NAME(slab),					\
>  	DEF_PAGETYPE_NAME(hugetlb),					\
>  	DEF_PAGETYPE_NAME(offline),					\
>  	DEF_PAGETYPE_NAME(guard),					\
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 9349948f1abf..1cb41ba7870c 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1239,7 +1239,6 @@ static int me_huge_page(struct page_state *ps, struct page *p)
>  #define mlock	(1UL << PG_mlocked)
>  #define lru	(1UL << PG_lru)
>  #define head	(1UL << PG_head)
> -#define slab	(1UL << PG_slab)
>  #define reserved	(1UL << PG_reserved)
>
>  static struct page_state error_states[] = {
> @@ -1249,13 +1248,6 @@ static struct page_state error_states[] = {
>  	 * PG_buddy pages only make a small fraction of all free pages.
>  	 */
>
> -	/*
> -	 * Could in theory check if slab page is free or if we can drop
> -	 * currently unused objects without touching them. But just
> -	 * treat it as standard kernel for now.
> -	 */
> -	{ slab, slab, MF_MSG_SLAB, me_kernel },

Would it be better to leave the above slab case here to catch possible
unhandled obscure races with slab? Though it looks like a slab page
shouldn't reach here.

Thanks.