From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
	Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
	Arnd Bergmann, Greg Kroah-Hartman, "Michael S. Tsirkin",
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Alexander Viro,
	Christian Brauner, Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn,
	Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
	Alistair Popple, Lorenzo Stoakes, "Liam R. Howlett",
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
	Michal Hocko, "Matthew Wilcox (Oracle)", Minchan Kim,
	Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
	Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
	Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
	Harry Yoo, Qi Zheng, Shakeel Butt
Subject: [PATCH RFC 20/29] mm: convert "movable" flag in page->mapping to a page flag
Date: Wed, 18 Jun 2025 19:40:03 +0200
Message-ID: <20250618174014.1168640-21-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250618174014.1168640-1-david@redhat.com>
References: <20250618174014.1168640-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead, let's use a page flag. As the page flag can result in false
positives, glue it to the page types for which we support/implement
movable_ops page migration.
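
To make the false-positive concern concrete: PG_movable_ops reuses the
PG_uptodate bit (see the page-flags.h hunk below), so the raw bit test
alone would match any ordinary uptodate pagecache page. A sketch of the
reasoning behind the page_has_movable_ops() helper this patch adds,
with the aliasing spelled out in comments:

	/*
	 * Sketch (mirrors the helper added below): PG_movable_ops
	 * aliases PG_uptodate, so the raw bit test would yield false
	 * positives on uptodate pagecache pages. Restricting the test
	 * to page types that actually implement movable_ops page
	 * migration filters those out.
	 */
	static inline bool page_has_movable_ops(const struct page *page)
	{
		return PageMovableOps(page) &&	/* bit may be aliased ... */
		       (PageOffline(page) ||	/* ... balloon and friends */
			PageZsmalloc(page));	/* ... zsmalloc */
	}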
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/balloon_compaction.h |  2 +-
 include/linux/migrate.h            |  8 -----
 include/linux/page-flags.h         | 52 ++++++++++++++++++++++++------
 mm/compaction.c                    |  6 ----
 mm/zpdesc.h                        |  2 +-
 5 files changed, 44 insertions(+), 26 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index a8a1706cc56f3..b222b0737c466 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
 				       struct page *page)
 {
 	__SetPageOffline(page);
-	__SetPageMovable(page);
+	SetPageMovableOps(page);
 	set_page_private(page, (unsigned long)balloon);
 	list_add(&page->lru, &balloon->pages);
 }
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 64639dab92b91..314cbeba1bb91 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 #endif /* CONFIG_MIGRATION */
 
-#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page);
-#else
-static inline void __SetPageMovable(struct page *page)
-{
-}
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c27ebb689e3c..016a6e6fa428a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -170,6 +170,11 @@ enum pageflags {
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
 
+#ifdef CONFIG_MIGRATION
+	/* this is a movable_ops page (for selected typed pages only) */
+	PG_movable_ops = PG_uptodate,
+#endif
+
 	/* Only valid for buddy pages. Used to track pages that are reported */
 	PG_reported = PG_uptodate,
 
@@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
  * bit; and then folio->mapping points, not to an anon_vma, but to a private
  * structure which KSM associates with that merged page. See ksm.h.
  *
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
- * page and then folio->mapping points to a struct movable_operations.
- *
  * Please note that, confusingly, "folio_mapping" refers to the inode
  * address_space which maps the folio from disk; whereas "folio_mapped"
  * refers to user virtual address space into which the folio is mapped.
@@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
 {
 	return folio_test_anon(page_folio(page));
 }
-
-static __always_inline bool page_has_movable_ops(const struct page *page)
-{
-	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
-			PAGE_MAPPING_MOVABLE;
-}
-
 #ifdef CONFIG_KSM
 /*
  * A KSM page is one of those write-protected "shared pages" or "merged pages"
@@ -1133,6 +1128,43 @@ bool is_free_buddy_page(const struct page *page);
 
 PAGEFLAG(Isolated, isolated, PF_ANY);
 
+#ifdef CONFIG_MIGRATION
+/*
+ * This page is migratable through movable_ops (for selected typed pages
+ * only).
+ *
+ * Page migration of such pages might fail, for example, if the page is
+ * already isolated by somebody else, or if the page is about to get freed.
+ *
+ * While a subsystem might set selected typed pages that support page migration
+ * as being movable through movable_ops, it must never clear this flag.
+ *
+ * This flag is only cleared when the page is freed back to the buddy.
+ *
+ * Only selected page types support this flag (see page_movable_ops()) and
+ * the flag might be used in other contexts for other pages. Always use
+ * page_has_movable_ops() instead.
+ */
+PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+#else
+PAGEFLAG_FALSE(MovableOps, movable_ops);
+#endif
+
+/**
+ * page_has_movable_ops - test for a movable_ops page
+ * @page: The page to test.
+ *
+ * Test whether this is a movable_ops page. Such pages will stay that
+ * way until freed.
+ *
+ * Returns true if this is a movable_ops page, otherwise false.
+ */
+static inline bool page_has_movable_ops(const struct page *page)
+{
+	return PageMovableOps(page) &&
+		(PageOffline(page) || PageZsmalloc(page));
+}
+
 static __always_inline int PageAnonExclusive(const struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
diff --git a/mm/compaction.c b/mm/compaction.c
index a10f23df9396e..86d671a520e91 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
 }
 
 #ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
-}
-EXPORT_SYMBOL(__SetPageMovable);
 
 /* Do not skip compaction more than 64 times */
 #define COMPACT_MAX_DEFER_SHIFT 6
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 6855d9e2732d8..25bf5ea0beb83 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
 
 static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
 {
-	__SetPageMovable(zpdesc_page(zpdesc));
+	SetPageMovableOps(zpdesc_page(zpdesc));
 }
 
 static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
-- 
2.49.0
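
For context, a short hedged sketch of how an isolation path might
consume the new helper. The function name below is hypothetical and for
illustration only; the real callers live in mm/compaction.c and
mm/migrate.c and additionally hold the page lock, among other checks:

	/* Hypothetical illustration, not part of the patch: */
	static bool movable_ops_page_isolatable(struct page *page)
	{
		/* Type-guarded flag: no PG_uptodate false positives. */
		if (!page_has_movable_ops(page))
			return false;
		/* Migration may still fail, e.g. if already isolated. */
		return !PageIsolated(page);
	}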