From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
	Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
	Arnd Bergmann, Greg Kroah-Hartman, "Michael S. Tsirkin",
	Jason Wang, Xuan Zhuo, Eugenio Pérez, Alexander Viro,
	Christian Brauner, Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn,
	Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
	Alistair Popple, Lorenzo Stoakes, "Liam R. Howlett",
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	"Matthew Wilcox (Oracle)", Minchan Kim, Sergey Senozhatsky,
	Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
	Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
	Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
Subject: [PATCH RFC 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio()
Date: Wed, 18 Jun 2025 19:39:54 +0200
Message-ID: <20250618174014.1168640-12-david@redhat.com>
In-Reply-To: <20250618174014.1168640-1-david@redhat.com>
References: <20250618174014.1168640-1-david@redhat.com>
MIME-Version: 1.0

Let's move that handling directly into migrate_folio_move(), so we can
simplify move_to_new_folio(). While at it, fixup the documentation a
bit.

Note that unmap_and_move_huge_page() does not care, because it only
deals with actual folios (we only support migration of individual
movable_ops pages).

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/migrate.c | 61 ++++++++++++++++++++++++----------------------
 1 file changed, 28 insertions(+), 33 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 456e41dad83a2..db807f9bbf975 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1024,11 +1024,12 @@ static int fallback_migrate_folio(struct address_space *mapping,
 }
 
 /*
- * Move a page to a newly allocated page
- * The page is locked and all ptes have been successfully removed.
+ * Move a src folio to a newly allocated dst folio.
  *
- * The new page will have replaced the old page if this function
- * is successful.
+ * The src and dst folios are locked and the src folio was unmapped from
+ * the page tables.
+ *
+ * On success, the src folio was replaced by the dst folio.
  *
  * Return value:
  *   < 0 - error code
@@ -1037,34 +1038,30 @@ static int fallback_migrate_folio(struct address_space *mapping,
 static int move_to_new_folio(struct folio *dst, struct folio *src,
 				enum migrate_mode mode)
 {
+	struct address_space *mapping = folio_mapping(src);
 	int rc = -EAGAIN;
-	bool is_lru = !__folio_test_movable(src);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
 	VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
 
-	if (likely(is_lru)) {
-		struct address_space *mapping = folio_mapping(src);
-
-		if (!mapping)
-			rc = migrate_folio(mapping, dst, src, mode);
-		else if (mapping_inaccessible(mapping))
-			rc = -EOPNOTSUPP;
-		else if (mapping->a_ops->migrate_folio)
-			/*
-			 * Most folios have a mapping and most filesystems
-			 * provide a migrate_folio callback. Anonymous folios
-			 * are part of swap space which also has its own
-			 * migrate_folio callback. This is the most common path
-			 * for page migration.
-			 */
-			rc = mapping->a_ops->migrate_folio(mapping, dst, src,
-								mode);
-		else
-			rc = fallback_migrate_folio(mapping, dst, src, mode);
+	if (!mapping)
+		rc = migrate_folio(mapping, dst, src, mode);
+	else if (mapping_inaccessible(mapping))
+		rc = -EOPNOTSUPP;
+	else if (mapping->a_ops->migrate_folio)
+		/*
+		 * Most folios have a mapping and most filesystems
+		 * provide a migrate_folio callback. Anonymous folios
+		 * are part of swap space which also has its own
+		 * migrate_folio callback. This is the most common path
+		 * for page migration.
+		 */
+		rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+							mode);
+	else
+		rc = fallback_migrate_folio(mapping, dst, src, mode);
 
-		if (rc != MIGRATEPAGE_SUCCESS)
-			goto out;
+	if (rc == MIGRATEPAGE_SUCCESS) {
 		/*
 		 * For pagecache folios, src->mapping must be cleared before src
 		 * is freed. Anonymous folios must stay anonymous until freed.
@@ -1074,10 +1071,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 
 		if (likely(!folio_is_zone_device(dst)))
 			flush_dcache_folio(dst);
-	} else {
-		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
 	}
-out:
 	return rc;
 }
 
@@ -1328,20 +1322,21 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	int rc;
 	int old_page_state = 0;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
 	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
+	if (unlikely(__folio_test_movable(src))) {
+		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
+		goto out_unlock_both;
+	}
+
 	rc = move_to_new_folio(dst, src, mode);
 	if (rc)
 		goto out;
 
-	if (unlikely(!is_lru))
-		goto out_unlock_both;
-
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
 	 * turns out to be an mlocked page, remove_migration_ptes() will
-- 
2.49.0