From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: David Hildenbrand, Zi Yan, Harry Yoo, Lorenzo Stoakes, Alistair Popple,
 Al Viro, Arnd Bergmann, Brendan Jackman, Byungchul Park, Chengming Zhou,
 Christian Brauner, Christophe Leroy, Eugenio Pérez, Greg Kroah-Hartman,
 Gregory Price, "Huang, Ying", Jan Kara, Jason Gunthorpe, Jason Wang,
 Jerrin Shaji George, Johannes Weiner, John Hubbard, Jonathan Corbet,
 Joshua Hahn, Liam Howlett, Madhavan Srinivasan, Mathew Brost,
 "Matthew Wilcox (Oracle)", Miaohe Lin, Michael Ellerman,
 "Michael S. Tsirkin", Michal Hocko, Mike Rapoport, Minchan Kim,
 Naoya Horiguchi, Nicholas Piggin, Oscar Salvador, Peter Xu, Qi Zheng,
 Rakie Kim, Rik van Riel, Sergey Senozhatsky, Shakeel Butt,
 Suren Baghdasaryan, Vlastimil Babka, Xuan Zhuo, xu xin, Andrew Morton,
 Sasha Levin
Subject: [PATCH 6.12.y 2/3] mm/migrate: move movable_ops page handling out of move_to_new_folio()
Date: Tue, 28 Apr 2026 11:24:11 -0400
Message-ID: <20260428152412.3034119-2-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260428152412.3034119-1-sashal@kernel.org>
References: <2026042723-large-sedan-f63a@gregkh>
 <20260428152412.3034119-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: David Hildenbrand

[ Upstream commit be4a3e9c185264e9ad0fe02c1c5d81b8386bd50c ]

Let's move that handling directly into migrate_folio_move(), so we can
simplify move_to_new_folio(). While at it, fixup the documentation a bit.

Note that unmap_and_move_huge_page() does not care, because it only
deals with actual folios
(we only support migration of individual movable_ops pages)

Link: https://lkml.kernel.org/r/20250704102524.326966-12-david@redhat.com
Signed-off-by: David Hildenbrand
Reviewed-by: Zi Yan
Reviewed-by: Harry Yoo
Reviewed-by: Lorenzo Stoakes
Cc: Alistair Popple
Cc: Al Viro
Cc: Arnd Bergmann
Cc: Brendan Jackman
Cc: Byungchul Park
Cc: Chengming Zhou
Cc: Christian Brauner
Cc: Christophe Leroy
Cc: Eugenio Pérez
Cc: Greg Kroah-Hartman
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Jan Kara
Cc: Jason Gunthorpe
Cc: Jason Wang
Cc: Jerrin Shaji George
Cc: Johannes Weiner
Cc: John Hubbard
Cc: Jonathan Corbet
Cc: Joshua Hahn
Cc: Liam Howlett
Cc: Madhavan Srinivasan
Cc: Mathew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Miaohe Lin
Cc: Michael Ellerman
Cc: "Michael S. Tsirkin"
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Minchan Kim
Cc: Naoya Horiguchi
Cc: Nicholas Piggin
Cc: Oscar Salvador
Cc: Peter Xu
Cc: Qi Zheng
Cc: Rakie Kim
Cc: Rik van Riel
Cc: Sergey Senozhatsky
Cc: Shakeel Butt
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Xuan Zhuo
Cc: xu xin
Signed-off-by: Andrew Morton
Stable-dep-of: a2e0c0668a34 ("mm: migrate: requeue destination folio on deferred split queue")
Signed-off-by: Sasha Levin
---
 mm/migrate.c | 63 +++++++++++++++++++++++++---------------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 44f00fac7a33e..d541612c7377d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1047,11 +1047,12 @@ static int fallback_migrate_folio(struct address_space *mapping,
 }
 
 /*
- * Move a page to a newly allocated page
- * The page is locked and all ptes have been successfully removed.
+ * Move a src folio to a newly allocated dst folio.
  *
- * The new page will have replaced the old page if this function
- * is successful.
+ * The src and dst folios are locked and the src folios was unmapped from
+ * the page tables.
+ *
+ * On success, the src folio was replaced by the dst folio.
  *
  * Return value:
  *   < 0 - error code
@@ -1060,34 +1061,30 @@ static int fallback_migrate_folio(struct address_space *mapping,
 static int move_to_new_folio(struct folio *dst, struct folio *src,
 				enum migrate_mode mode)
 {
+	struct address_space *mapping = folio_mapping(src);
 	int rc = -EAGAIN;
-	bool is_lru = !__folio_test_movable(src);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
 	VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
 
-	if (likely(is_lru)) {
-		struct address_space *mapping = folio_mapping(src);
-
-		if (!mapping)
-			rc = migrate_folio(mapping, dst, src, mode);
-		else if (mapping_inaccessible(mapping))
-			rc = -EOPNOTSUPP;
-		else if (mapping->a_ops->migrate_folio)
-			/*
-			 * Most folios have a mapping and most filesystems
-			 * provide a migrate_folio callback. Anonymous folios
-			 * are part of swap space which also has its own
-			 * migrate_folio callback. This is the most common path
-			 * for page migration.
-			 */
-			rc = mapping->a_ops->migrate_folio(mapping, dst, src,
-								mode);
-		else
-			rc = fallback_migrate_folio(mapping, dst, src, mode);
+	if (!mapping)
+		rc = migrate_folio(mapping, dst, src, mode);
+	else if (mapping_inaccessible(mapping))
+		rc = -EOPNOTSUPP;
+	else if (mapping->a_ops->migrate_folio)
+		/*
+		 * Most folios have a mapping and most filesystems
+		 * provide a migrate_folio callback. Anonymous folios
+		 * are part of swap space which also has its own
+		 * migrate_folio callback. This is the most common path
+		 * for page migration.
+		 */
+		rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+							mode);
+	else
+		rc = fallback_migrate_folio(mapping, dst, src, mode);
 
-		if (rc != MIGRATEPAGE_SUCCESS)
-			goto out;
+	if (rc == MIGRATEPAGE_SUCCESS) {
 		/*
 		 * For pagecache folios, src->mapping must be cleared before src
 		 * is freed. Anonymous folios must stay anonymous until freed.
@@ -1097,10 +1094,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 
 		if (likely(!folio_is_zone_device(dst)))
 			flush_dcache_folio(dst);
-	} else {
-		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
 	}
-out:
 	return rc;
 }
 
@@ -1351,20 +1345,23 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	int rc;
 	int old_page_state = 0;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
 	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
+	if (unlikely(__folio_test_movable(src))) {
+		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
+		if (rc)
+			goto out;
+		goto out_unlock_both;
+	}
+
 	rc = move_to_new_folio(dst, src, mode);
 	if (rc)
 		goto out;
 
-	if (unlikely(!is_lru))
-		goto out_unlock_both;
-
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
 	 * turns out to be an mlocked page, remove_migration_ptes() will
-- 
2.53.0