From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	virtualization@lists.linux.dev, linux-fsdevel@vger.kernel.org,
	David Hildenbrand, Andrew Morton, Jonathan Corbet, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
	Arnd Bergmann, Greg Kroah-Hartman, "Michael S. Tsirkin", Jason Wang,
	Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
	Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
	"Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
	Michal Hocko, "Matthew Wilcox (Oracle)", Minchan Kim, Sergey Senozhatsky,
	Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard, Peter Xu,
	Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
	Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
Subject: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
Date: Mon, 30 Jun 2025 15:00:01 +0200
Message-ID: <20250630130011.330477-21-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250630130011.330477-1-david@redhat.com>
References: <20250630130011.330477-1-david@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead, let's use a page flag. As the page flag can result in
false-positives, glue it to the page types for which we support/implement
movable_ops page migration.

The flag reused by PageMovableOps() might be used by other pages, so
warning in page_has_movable_ops() whenever the flag is found set could
result in false-positive warnings.
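Not part of the patch, just an illustration of the reasoning above: the
following standalone, userspace-only C sketch models why the new helper has
to pair the flag test with a page-type check. "struct page_model" and
"model_page_has_movable_ops()" are made-up stand-ins for the real struct
page, PG_movable_ops and the Offline/Zsmalloc page types, not kernel APIs;
the reused bit may legitimately be set on unrelated pages, so the flag alone
cannot be trusted.

	/* Illustrative userspace model only, not kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	struct page_model {
		bool movable_ops;	/* models PG_movable_ops (a reused bit) */
		bool offline;		/* models PageOffline(), e.g. balloon pages */
		bool zsmalloc;		/* models PageZsmalloc() */
	};

	/* Mirrors the logic of the new page_has_movable_ops() helper: the flag
	 * only counts for page types that implement movable_ops migration. */
	static bool model_page_has_movable_ops(const struct page_model *page)
	{
		return page->movable_ops && (page->offline || page->zsmalloc);
	}

	int main(void)
	{
		struct page_model balloon = { .movable_ops = true, .offline = true };
		struct page_model other = { .movable_ops = true };	/* bit reused elsewhere */

		printf("balloon page: %d\n", model_page_has_movable_ops(&balloon));	/* 1 */
		printf("other page:   %d\n", model_page_has_movable_ops(&other));	/* 0 */
		return 0;
	}

Compiled and run, the second case prints 0 even though the shared bit is
set, which is exactly the false positive that page_has_movable_ops() guards
against.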
Reviewed-by: Zi Yan
Signed-off-by: David Hildenbrand
---
 include/linux/balloon_compaction.h |  2 +-
 include/linux/migrate.h            |  8 -----
 include/linux/page-flags.h         | 52 ++++++++++++++++++++++++------
 mm/compaction.c                    |  6 ----
 mm/zpdesc.h                        |  2 +-
 5 files changed, 44 insertions(+), 26 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index a8a1706cc56f3..b222b0737c466 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
 				       struct page *page)
 {
 	__SetPageOffline(page);
-	__SetPageMovable(page);
+	SetPageMovableOps(page);
 	set_page_private(page, (unsigned long)balloon);
 	list_add(&page->lru, &balloon->pages);
 }
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6aece3f3c8be8..acadd41e0b5cf 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 #endif /* CONFIG_MIGRATION */
 
-#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page);
-#else
-static inline void __SetPageMovable(struct page *page)
-{
-}
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c27ebb689e3c..016a6e6fa428a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -170,6 +170,11 @@ enum pageflags {
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
 
+#ifdef CONFIG_MIGRATION
+	/* this is a movable_ops page (for selected typed pages only) */
+	PG_movable_ops = PG_uptodate,
+#endif
+
 	/* Only valid for buddy pages. Used to track pages that are reported */
 	PG_reported = PG_uptodate,
 
@@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
  * bit; and then folio->mapping points, not to an anon_vma, but to a private
  * structure which KSM associates with that merged page. See ksm.h.
  *
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
- * page and then folio->mapping points to a struct movable_operations.
- *
  * Please note that, confusingly, "folio_mapping" refers to the inode
  * address_space which maps the folio from disk; whereas "folio_mapped"
  * refers to user virtual address space into which the folio is mapped.
@@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
 {
 	return folio_test_anon(page_folio(page));
 }
-
-static __always_inline bool page_has_movable_ops(const struct page *page)
-{
-	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
-		PAGE_MAPPING_MOVABLE;
-}
-
 #ifdef CONFIG_KSM
 /*
  * A KSM page is one of those write-protected "shared pages" or "merged pages"
@@ -1133,6 +1128,43 @@ bool is_free_buddy_page(const struct page *page);
 
 PAGEFLAG(Isolated, isolated, PF_ANY);
 
+#ifdef CONFIG_MIGRATION
+/*
+ * This page is migratable through movable_ops (for selected typed pages
+ * only).
+ *
+ * Page migration of such pages might fail, for example, if the page is
+ * already isolated by somebody else, or if the page is about to get freed.
+ *
+ * While a subsystem might set selected typed pages that support page migration
+ * as being movable through movable_ops, it must never clear this flag.
+ *
+ * This flag is only cleared when the page is freed back to the buddy.
+ *
+ * Only selected page types support this flag (see page_movable_ops()) and
+ * the flag might be used in other contexts for other pages. Always use
+ * page_has_movable_ops() instead.
+ */
+PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+#else
+PAGEFLAG_FALSE(MovableOps, movable_ops);
+#endif
+
+/**
+ * page_has_movable_ops - test for a movable_ops page
+ * @page: The page to test.
+ *
+ * Test whether this is a movable_ops page. Such pages will stay that
+ * way until freed.
+ *
+ * Returns true if this is a movable_ops page, otherwise false.
+ */
+static inline bool page_has_movable_ops(const struct page *page)
+{
+	return PageMovableOps(page) &&
+		(PageOffline(page) || PageZsmalloc(page));
+}
+
 static __always_inline int PageAnonExclusive(const struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 348eb754cb227..349f4ea0ec3e5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
 }
 
 #ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
-}
-EXPORT_SYMBOL(__SetPageMovable);
 
 /* Do not skip compaction more than 64 times */
 #define COMPACT_MAX_DEFER_SHIFT 6
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 6855d9e2732d8..25bf5ea0beb83 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
 
 static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
 {
-	__SetPageMovable(zpdesc_page(zpdesc));
+	SetPageMovableOps(zpdesc_page(zpdesc));
 }
 
 static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
-- 
2.49.0