From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To:
stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, David Hildenbrand,
    Zi Yan, Harry Yoo, Lorenzo Stoakes, Alistair Popple, Al Viro,
    Arnd Bergmann, Brendan Jackman, Byungchul Park, Chengming Zhou,
    Christian Brauner, Christophe Leroy, Eugenio Pérez, Gregory Price,
    "Huang, Ying", Jan Kara, Jason Gunthorpe, Jason Wang,
    Jerrin Shaji George, Johannes Weiner, John Hubbard, Jonathan Corbet,
    Joshua Hahn, Liam Howlett, Madhavan Srinivasan, Mathew Brost,
    "Matthew Wilcox (Oracle)", Miaohe Lin, Michael Ellerman,
    "Michael S. Tsirkin", Michal Hocko, Mike Rapoport, Minchan Kim,
    Naoya Horiguchi, Nicholas Piggin, Oscar Salvador, Peter Xu, Qi Zheng,
    Rakie Kim, Rik van Riel, Sergey Senozhatsky, Shakeel Butt,
    Suren Baghdasaryan, Vlastimil Babka, Xuan Zhuo, xu xin,
    Andrew Morton, Sasha Levin
Subject: [PATCH 6.12 185/215] mm/migrate: move movable_ops page handling out of move_to_new_folio()
Date: Mon, 4 May 2026 15:53:24 +0200
Message-ID: <20260504135137.021433763@linuxfoundation.org>
X-Mailer: git-send-email 2.54.0
In-Reply-To: <20260504135130.169210693@linuxfoundation.org>
References: <20260504135130.169210693@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Mailing-List: patches@lists.linux.dev
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

6.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: David Hildenbrand

[ Upstream commit be4a3e9c185264e9ad0fe02c1c5d81b8386bd50c ]

Let's move that handling directly into migrate_folio_move(), so we can
simplify move_to_new_folio().

While at it, fixup the documentation a bit.

Note that unmap_and_move_huge_page() does not care, because it only
deals with actual folios
(we only support migration of individual movable_ops pages)

Link: https://lkml.kernel.org/r/20250704102524.326966-12-david@redhat.com
Signed-off-by: David Hildenbrand
Reviewed-by: Zi Yan
Reviewed-by: Harry Yoo
Reviewed-by: Lorenzo Stoakes
Cc: Alistair Popple
Cc: Al Viro
Cc: Arnd Bergmann
Cc: Brendan Jackman
Cc: Byungchul Park
Cc: Chengming Zhou
Cc: Christian Brauner
Cc: Christophe Leroy
Cc: Eugenio Pérez
Cc: Greg Kroah-Hartman
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Jan Kara
Cc: Jason Gunthorpe
Cc: Jason Wang
Cc: Jerrin Shaji George
Cc: Johannes Weiner
Cc: John Hubbard
Cc: Jonathan Corbet
Cc: Joshua Hahn
Cc: Liam Howlett
Cc: Madhavan Srinivasan
Cc: Mathew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Miaohe Lin
Cc: Michael Ellerman
Cc: "Michael S. Tsirkin"
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Minchan Kim
Cc: Naoya Horiguchi
Cc: Nicholas Piggin
Cc: Oscar Salvador
Cc: Peter Xu
Cc: Qi Zheng
Cc: Rakie Kim
Cc: Rik van Riel
Cc: Sergey Senozhatsky
Cc: Shakeel Butt
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: Xuan Zhuo
Cc: xu xin
Signed-off-by: Andrew Morton
Stable-dep-of: a2e0c0668a34 ("mm: migrate: requeue destination folio on deferred split queue")
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 mm/migrate.c | 63 ++++++++++++++++++++++++++++-------------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1047,11 +1047,12 @@ static int fallback_migrate_folio(struct
 }
 
 /*
- * Move a page to a newly allocated page
- * The page is locked and all ptes have been successfully removed.
+ * Move a src folio to a newly allocated dst folio.
  *
- * The new page will have replaced the old page if this function
- * is successful.
+ * The src and dst folios are locked and the src folio was unmapped from
+ * the page tables.
+ *
+ * On success, the src folio was replaced by the dst folio.
  *
  * Return value:
  *   < 0 - error code
@@ -1060,34 +1061,30 @@ static int fallback_migrate_folio(struct
 static int move_to_new_folio(struct folio *dst, struct folio *src,
 				enum migrate_mode mode)
 {
+	struct address_space *mapping = folio_mapping(src);
 	int rc = -EAGAIN;
-	bool is_lru = !__folio_test_movable(src);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
 	VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
 
-	if (likely(is_lru)) {
-		struct address_space *mapping = folio_mapping(src);
-
-		if (!mapping)
-			rc = migrate_folio(mapping, dst, src, mode);
-		else if (mapping_inaccessible(mapping))
-			rc = -EOPNOTSUPP;
-		else if (mapping->a_ops->migrate_folio)
-			/*
-			 * Most folios have a mapping and most filesystems
-			 * provide a migrate_folio callback. Anonymous folios
-			 * are part of swap space which also has its own
-			 * migrate_folio callback. This is the most common path
-			 * for page migration.
-			 */
-			rc = mapping->a_ops->migrate_folio(mapping, dst, src,
-								mode);
-		else
-			rc = fallback_migrate_folio(mapping, dst, src, mode);
+	if (!mapping)
+		rc = migrate_folio(mapping, dst, src, mode);
+	else if (mapping_inaccessible(mapping))
+		rc = -EOPNOTSUPP;
+	else if (mapping->a_ops->migrate_folio)
+		/*
+		 * Most folios have a mapping and most filesystems
+		 * provide a migrate_folio callback. Anonymous folios
+		 * are part of swap space which also has its own
+		 * migrate_folio callback. This is the most common path
+		 * for page migration.
+		 */
+		rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+							mode);
+	else
+		rc = fallback_migrate_folio(mapping, dst, src, mode);
 
-	if (rc != MIGRATEPAGE_SUCCESS)
-		goto out;
+	if (rc == MIGRATEPAGE_SUCCESS) {
 		/*
 		 * For pagecache folios, src->mapping must be cleared before src
 		 * is freed. Anonymous folios must stay anonymous until freed.
@@ -1097,10 +1094,7 @@ static int move_to_new_folio(struct foli
 
 		if (likely(!folio_is_zone_device(dst)))
 			flush_dcache_folio(dst);
-	} else {
-		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
 	}
-out:
 	return rc;
 }
 
@@ -1351,20 +1345,23 @@ static int migrate_folio_move(free_folio
 	int rc;
 	int old_page_state = 0;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
 	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
+	if (unlikely(__folio_test_movable(src))) {
+		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
+		if (rc)
+			goto out;
+		goto out_unlock_both;
+	}
+
 	rc = move_to_new_folio(dst, src, mode);
 	if (rc)
 		goto out;
 
-	if (unlikely(!is_lru))
-		goto out_unlock_both;
-
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
	 * turns out to be an mlocked page, remove_migration_ptes() will