From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    virtualization@lists.linux.dev, linux-fsdevel@vger.kernel.org,
    David Hildenbrand, Andrew Morton, Jonathan Corbet, Madhavan Srinivasan,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
    Arnd Bergmann, Greg Kroah-Hartman, "Michael S. Tsirkin", Jason Wang,
    Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
    Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
    Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
    "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
    Michal Hocko, "Matthew Wilcox (Oracle)", Minchan Kim, Sergey Senozhatsky,
    Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard, Peter Xu,
    Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi, Oscar Salvador,
    Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
Subject: [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
Date: Mon, 30 Jun 2025 14:59:48 +0200
Message-ID: <20250630130011.330477-8-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250630130011.330477-1-david@redhat.com>
References: <20250630130011.330477-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

... and start moving back to per-page things that will absolutely not be
folio things in the future.

Add documentation and a comment that the remaining folio stuff (lock,
refcount) will have to be reworked as well.

While at it, convert the VM_BUG_ON() into a WARN_ON_ONCE() and handle it
gracefully (relevant with further changes), and convert a WARN_ON_ONCE()
into a VM_WARN_ON_ONCE_PAGE().

Note that we will leave anything that needs a rework (lock, refcount,
->lru) to be using folios for now: that perfectly highlights the
problematic bits.
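For a quick picture of the new calling convention, here is a minimal
caller-side sketch (not part of the patch; the wrapper name
try_isolate_for_migration() and its parameters are made up for
illustration), modelled on the mm/compaction.c hunk below. With the
VM_BUG_ON() turned into a WARN_ON_ONCE(), a page without
movable_operations now simply fails isolation and the caller skips it
like any other isolation failure:

#include <linux/list.h>
#include <linux/migrate.h>
#include <linux/mm.h>

/*
 * Illustrative only: isolate_movable_ops_page() returns false if the page
 * is not a movable_ops page, is already isolated, or was just released by
 * its owner; callers treat that as an ordinary isolation failure. The
 * ->lru handling below is still folio-based, as noted above.
 */
static bool try_isolate_for_migration(struct page *page, isolate_mode_t mode,
				      struct list_head *migratepages)
{
	struct folio *folio;

	if (!isolate_movable_ops_page(page, mode))
		return false;

	folio = page_folio(page);
	list_add(&folio->lru, migratepages);
	return true;
}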
Reviewed-by: Zi Yan
Reviewed-by: Harry Yoo
Signed-off-by: David Hildenbrand
---
 include/linux/migrate.h |  4 ++--
 mm/compaction.c         |  2 +-
 mm/migrate.c            | 39 +++++++++++++++++++++++++++++----------
 3 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index aaa2114498d6d..c0ec7422837bd 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -69,7 +69,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
 		  unsigned long private, enum migrate_mode mode, int reason,
 		  unsigned int *ret_succeeded);
 struct folio *alloc_migration_target(struct folio *src, unsigned long private);
-bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode);
 bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -90,7 +90,7 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
 static inline struct folio *alloc_migration_target(struct folio *src,
 		unsigned long private)
 	{ return NULL; }
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+static inline bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
 	{ return false; }
 static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 	{ return false; }
diff --git a/mm/compaction.c b/mm/compaction.c
index 3925cb61dbb8f..17455c5a4be05 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1093,7 +1093,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				locked = NULL;
 			}
 
-			if (isolate_movable_page(page, mode)) {
+			if (isolate_movable_ops_page(page, mode)) {
 				folio = page_folio(page);
 				goto isolate_success;
 			}
diff --git a/mm/migrate.c b/mm/migrate.c
index 767f503f08758..d4b4a7eefb6bd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,8 +51,26 @@
 #include "internal.h"
 #include "swap.h"
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+/**
+ * isolate_movable_ops_page - isolate a movable_ops page for migration
+ * @page: The page.
+ * @mode: The isolation mode.
+ *
+ * Try to isolate a movable_ops page for migration. Will fail if the page is
+ * not a movable_ops page, if the page is already isolated for migration
+ * or if the page was just released by its owner.
+ *
+ * Once isolated, the page cannot get freed until it is either putback
+ * or migrated.
+ *
+ * Returns true if isolation succeeded, otherwise false.
+ */
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
 {
+	/*
+	 * TODO: these pages will not be folios in the future. All
+	 * folio dependencies will have to be removed.
+	 */
 	struct folio *folio = folio_get_nontail_page(page);
 	const struct movable_operations *mops;
 
@@ -73,7 +91,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * we use non-atomic bitops on newly allocated page flags so
 	 * unconditionally grabbing the lock ruins page's owner side.
 	 */
-	if (unlikely(!__folio_test_movable(folio)))
+	if (unlikely(!__PageMovable(page)))
 		goto out_putfolio;
 
 	/*
@@ -90,18 +108,19 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
 
-	if (!folio_test_movable(folio) || folio_test_isolated(folio))
+	if (!PageMovable(page) || PageIsolated(page))
 		goto out_no_isolated;
 
-	mops = folio_movable_ops(folio);
-	VM_BUG_ON_FOLIO(!mops, folio);
+	mops = page_movable_ops(page);
+	if (WARN_ON_ONCE(!mops))
+		goto out_no_isolated;
 
-	if (!mops->isolate_page(&folio->page, mode))
+	if (!mops->isolate_page(page, mode))
 		goto out_no_isolated;
 
 	/* Driver shouldn't use the isolated flag */
-	WARN_ON_ONCE(folio_test_isolated(folio));
-	folio_set_isolated(folio);
+	VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
+	SetPageIsolated(page);
 	folio_unlock(folio);
 
 	return true;
@@ -175,8 +194,8 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 	if (lru)
 		isolated = folio_isolate_lru(folio);
 	else
-		isolated = isolate_movable_page(&folio->page,
-						ISOLATE_UNEVICTABLE);
+		isolated = isolate_movable_ops_page(&folio->page,
+						    ISOLATE_UNEVICTABLE);
 	if (!isolated)
 		return false;
 
-- 
2.49.0