From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, david@kernel.org,
willy@infradead.org, surenb@google.com, hannes@cmpxchg.org,
ljs@kernel.org, ziy@nvidia.com, usama.arif@linux.dev,
Rik van Riel <riel@meta.com>, Chris Mason <clm@fb.com>,
David Sterba <dsterba@suse.com>,
linux-btrfs@vger.kernel.org, Rik van Riel <riel@surriel.com>
Subject: [RFC PATCH 41/45] btrfs: allocate eb-attached btree pages as movable
Date: Thu, 30 Apr 2026 16:21:10 -0400
Message-ID: <20260430202233.111010-42-riel@surriel.com>
In-Reply-To: <20260430202233.111010-1-riel@surriel.com>
From: Rik van Riel <riel@meta.com>
Extent buffer pages allocated by alloc_extent_buffer() are attached to
btree_inode->i_mapping (the buffer_tree path), reach the LRU, and are
served by the btree_migrate_folio aops in fs/btrfs/disk-io.c. They are
migratable in practice once their owning extent buffer hits refs == 1,
which happens naturally as tree roots rotate. The buddy allocator
classifies them by GFP, however, and bare GFP_NOFS lands them in
MIGRATE_UNMOVABLE pageblocks. The result: every btree_inode page we
read in pins an unmovable pageblock from the superpageblock allocator's
perspective, even though the page itself can be moved.
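
As a rough illustration of the classification in question (illustrative
only; gfp_migratetype() is the real helper in include/linux/gfp.h, and
the two lines below show how the masks used before and after this patch
resolve today):

	int mt_before = gfp_migratetype(GFP_NOFS);                  /* MIGRATE_UNMOVABLE */
	int mt_after  = gfp_migratetype(GFP_NOFS | __GFP_MOVABLE);  /* MIGRATE_MOVABLE   */
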
Add __GFP_MOVABLE to that one allocation site (alloc_extent_buffer's
call to alloc_eb_folio_array). Plumb the flag through
alloc_eb_folio_array → btrfs_alloc_page_array as a `gfp_t extra_gfp`
parameter. All other call sites pass 0.
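
Condensed from the diff below, the resulting call pattern is roughly
(the masks in the comments are what each call expands to):

	/* alloc_extent_buffer(): folios get attached to btree_inode->i_mapping */
	ret = alloc_eb_folio_array(eb, true, true);
			/* -> GFP_NOFS | __GFP_NOFAIL | __GFP_MOVABLE */

	/* dummy / cloned ebs (EXTENT_BUFFER_UNMAPPED): */
	ret = alloc_eb_folio_array(eb, false, false);            /* -> GFP_NOFS */

	/* every other btrfs_alloc_page_array()/_folio_array() caller: */
	ret = btrfs_alloc_page_array(nr_pages, pages, false, 0); /* -> GFP_NOFS */
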
Three categories of caller stay on bare GFP_NOFS, deliberately:
- alloc_dummy_extent_buffer / btrfs_clone_extent_buffer: the
resulting eb is EXTENT_BUFFER_UNMAPPED, folio->mapping stays NULL,
and the folios never enter the LRU and never get the migrate_folio
aops. Tagging them __GFP_MOVABLE would violate the page allocator's
migrability contract and defeat compaction in MOVABLE pageblocks,
where isolate_migratepages_block skips non-LRU, non-movable_ops
pages outright (see the sketch after this list).
- btrfs_alloc_page_array callers in fs/btrfs/raid56.c (stripe
pages), fs/btrfs/inode.c (encoded reads), fs/btrfs/ioctl.c (uring
encoded reads), fs/btrfs/relocation.c (relocation buffers): same
contract violation. raid56 stripe_pages additionally persist in
the stripe cache (RBIO_CACHE_SIZE=1024) well beyond a single I/O,
so they are not transient enough to wave the contract away.
- btrfs_alloc_folio_array caller in fs/btrfs/scrub.c (stripe
folios): same — stripe->folios[] are private buffers freed via
folio_put in release_scrub_stripe.
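
To make that contract concrete, here is a deliberately simplified,
hypothetical sketch of the test compaction effectively applies when it
scans a MOVABLE pageblock. The helper name and structure are
illustrative, not copied from isolate_migratepages_block();
page_has_movable_ops() and PageLRU() are used purely to stand for the
two ways a page can legitimately be migrated:

	/* Hypothetical sketch, not mainline code. */
	static bool can_migrate_for_compaction(struct page *page)
	{
		/* movable_ops pages (balloon, zsmalloc, ...) migrate via callbacks */
		if (page_has_movable_ops(page))
			return true;

		/* LRU pages migrate via their mapping's migrate_folio aops */
		if (PageLRU(page))
			return true;

		/*
		 * Everything else (raid56 stripe pages, scrub stripe folios,
		 * UNMAPPED extent buffers, ...) is skipped: it sits in a
		 * MOVABLE pageblock but can never be moved out of it.
		 */
		return false;
	}
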
This change targets the dominant fragmentation source observed on the
superpageblock v18 series: ~28 GB of btree_inode pages parked across
many tainted superpageblocks on a 247 GB devvm with btrfs root,
preventing 1 GiB hugepage allocation from those regions. Because a
single unmovable page is enough to taint a whole 1 GiB superpageblock,
btree metadata spread thinly across the machine blocks far more memory
than its own footprint. With the movable hint, those pages now land in
MOVABLE pageblocks, where the existing background defragger drains them
through the standard PB_has_movable gate with no LRU-sample fallback
needed.
Suggested-by: Rik van Riel <riel@meta.com>
Cc: Chris Mason <clm@fb.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Signed-off-by: Rik van Riel <riel@surriel.com>
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
fs/btrfs/extent_io.c | 69 ++++++++++++++++++++++++++++++-------------
fs/btrfs/extent_io.h | 4 +--
fs/btrfs/inode.c | 2 +-
fs/btrfs/ioctl.c | 2 +-
fs/btrfs/raid56.c | 6 ++--
fs/btrfs/relocation.c | 2 +-
fs/btrfs/scrub.c | 3 +-
7 files changed, 59 insertions(+), 29 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5f97a3d2a8d7..7e28e4a876a0 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -626,24 +626,33 @@ static void end_bbio_data_read(struct btrfs_bio *bbio)
}
/*
- * Populate every free slot in a provided array with folios using GFP_NOFS.
+ * Populate every free slot in a provided array with folios using
+ * GFP_NOFS plus optional caller-supplied flags.
*
- * @nr_folios: number of folios to allocate
- * @order: the order of the folios to be allocated
- * @folio_array: the array to fill with folios; any existing non-NULL entries in
- * the array will be skipped
+ * @nr_folios: number of folios to allocate
+ * @order: folio order
+ * @folio_array: array to fill with folios; non-NULL entries are skipped
+ * @extra_gfp: extra GFP flags OR'd into GFP_NOFS. The only value used
+ * today is __GFP_MOVABLE, which the extent-buffer real-mapping
+ * path (alloc_extent_buffer) passes when the resulting folios
+ * will be attached to btree_inode->i_mapping (added to LRU,
+ * served by the btree_migrate_folio aops). Pass 0 for
+ * everything else; folios allocated by other callers stay in
+ * driver-owned arrays, never reach LRU and never register
+ * movable_ops, so they cannot satisfy the __GFP_MOVABLE
+ * migrability contract.
*
* Return: 0 if all folios were able to be allocated;
* -ENOMEM otherwise, the partially allocated folios would be freed and
* the array slots zeroed
*/
int btrfs_alloc_folio_array(unsigned int nr_folios, unsigned int order,
- struct folio **folio_array)
+ struct folio **folio_array, gfp_t extra_gfp)
{
for (int i = 0; i < nr_folios; i++) {
if (folio_array[i])
continue;
- folio_array[i] = folio_alloc(GFP_NOFS, order);
+ folio_array[i] = folio_alloc(GFP_NOFS | extra_gfp, order);
if (!folio_array[i])
goto error;
}
@@ -658,21 +667,27 @@ int btrfs_alloc_folio_array(unsigned int nr_folios, unsigned int order,
}
/*
- * Populate every free slot in a provided array with pages, using GFP_NOFS.
+ * Populate every free slot in a provided array with pages, using GFP_NOFS
+ * plus optional caller-supplied flags.
*
- * @nr_pages: number of pages to allocate
- * @page_array: the array to fill with pages; any existing non-null entries in
- * the array will be skipped
- * @nofail: whether using __GFP_NOFAIL flag
+ * @nr_pages: number of pages to allocate
+ * @page_array: array to fill; non-NULL entries are skipped
+ * @nofail: whether to use __GFP_NOFAIL
+ * @extra_gfp: extra GFP flags OR'd into the base mask. The only value used
+ * today is __GFP_MOVABLE, which the extent-buffer real-mapping
+ * path passes when the resulting pages will be attached to
+ * btree_inode->i_mapping. See btrfs_alloc_folio_array() for
+ * the full migrability rationale.
*
* Return: 0 if all pages were able to be allocated;
* -ENOMEM otherwise, the partially allocated pages would be freed and
* the array slots zeroed
*/
int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
- bool nofail)
+ bool nofail, gfp_t extra_gfp)
{
- const gfp_t gfp = nofail ? (GFP_NOFS | __GFP_NOFAIL) : GFP_NOFS;
+ const gfp_t gfp = (nofail ? (GFP_NOFS | __GFP_NOFAIL) : GFP_NOFS) |
+ extra_gfp;
unsigned int allocated;
for (allocated = 0; allocated < nr_pages;) {
@@ -695,14 +710,23 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
* Populate needed folios for the extent buffer.
*
* For now, the folios populated are always in order 0 (aka, single page).
+ *
+ * @movable: pass true only when the resulting pages will be attached to
+ * btree_inode->i_mapping (the alloc_extent_buffer real path).
+ * Cloned/dummy extent buffers (EXTENT_BUFFER_UNMAPPED) leave
+ * folio->mapping NULL, never enter the LRU, and never get the
+ * btree_migrate_folio aops, so __GFP_MOVABLE would violate the
+ * page-allocator's migrability contract for them.
*/
-static int alloc_eb_folio_array(struct extent_buffer *eb, bool nofail)
+static int alloc_eb_folio_array(struct extent_buffer *eb, bool nofail,
+ bool movable)
{
struct page *page_array[INLINE_EXTENT_BUFFER_PAGES] = { 0 };
int num_pages = num_extent_pages(eb);
int ret;
- ret = btrfs_alloc_page_array(num_pages, page_array, nofail);
+ ret = btrfs_alloc_page_array(num_pages, page_array, nofail,
+ movable ? __GFP_MOVABLE : 0);
if (ret < 0)
return ret;
@@ -3067,7 +3091,7 @@ struct extent_buffer *btrfs_clone_extent_buffer(const struct extent_buffer *src)
*/
set_bit(EXTENT_BUFFER_UNMAPPED, &new->bflags);
- ret = alloc_eb_folio_array(new, false);
+ ret = alloc_eb_folio_array(new, false, false);
if (ret)
goto release_eb;
@@ -3108,7 +3132,7 @@ struct extent_buffer *alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
if (!eb)
return NULL;
- ret = alloc_eb_folio_array(eb, false);
+ ret = alloc_eb_folio_array(eb, false, false);
if (ret)
goto release_eb;
@@ -3461,8 +3485,13 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
}
reallocate:
- /* Allocate all pages first. */
- ret = alloc_eb_folio_array(eb, true);
+ /*
+ * Allocate all pages first. These will be attached to
+ * btree_inode->i_mapping below (added to LRU, served by
+ * btree_migrate_folio), so request __GFP_MOVABLE so the
+ * page allocator places them in MOVABLE pageblocks.
+ */
+ ret = alloc_eb_folio_array(eb, true, true);
if (ret < 0) {
btrfs_free_folio_state(prealloc);
goto out;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 8d05f1a58b7c..1a3631abe989 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -363,9 +363,9 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
struct extent_buffer *buf);
int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
- bool nofail);
+ bool nofail, gfp_t extra_gfp);
int btrfs_alloc_folio_array(unsigned int nr_folios, unsigned int order,
- struct folio **folio_array);
+ struct folio **folio_array, gfp_t extra_gfp);
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
bool find_lock_delalloc_range(struct inode *inode,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a6da98435ef7..fcd662203608 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -9645,7 +9645,7 @@ ssize_t btrfs_encoded_read_regular(struct kiocb *iocb, struct iov_iter *iter,
pages = kzalloc_objs(struct page *, nr_pages, GFP_NOFS);
if (!pages)
return -ENOMEM;
- ret = btrfs_alloc_page_array(nr_pages, pages, false);
+ ret = btrfs_alloc_page_array(nr_pages, pages, false, 0);
if (ret) {
ret = -ENOMEM;
goto out;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index b805dd9227ef..eaf9508fcf1f 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4614,7 +4614,7 @@ static int btrfs_uring_read_extent(struct kiocb *iocb, struct iov_iter *iter,
pages = kzalloc_objs(struct page *, nr_pages, GFP_NOFS);
if (!pages)
return -ENOMEM;
- ret = btrfs_alloc_page_array(nr_pages, pages, 0);
+ ret = btrfs_alloc_page_array(nr_pages, pages, 0, 0);
if (ret) {
ret = -ENOMEM;
goto out_fail;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index b4511f560e92..d8531a901535 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1143,7 +1143,7 @@ static int alloc_rbio_pages(struct btrfs_raid_bio *rbio)
{
int ret;
- ret = btrfs_alloc_page_array(rbio->nr_pages, rbio->stripe_pages, false);
+ ret = btrfs_alloc_page_array(rbio->nr_pages, rbio->stripe_pages, false, 0);
if (ret < 0)
return ret;
/* Mapping all sectors */
@@ -1158,7 +1158,7 @@ static int alloc_rbio_parity_pages(struct btrfs_raid_bio *rbio)
int ret;
ret = btrfs_alloc_page_array(rbio->nr_pages - data_pages,
- rbio->stripe_pages + data_pages, false);
+ rbio->stripe_pages + data_pages, false, 0);
if (ret < 0)
return ret;
@@ -1756,7 +1756,7 @@ static int alloc_rbio_data_pages(struct btrfs_raid_bio *rbio)
const int data_pages = rbio->nr_data * rbio->stripe_npages;
int ret;
- ret = btrfs_alloc_page_array(data_pages, rbio->stripe_pages, false);
+ ret = btrfs_alloc_page_array(data_pages, rbio->stripe_pages, false, 0);
if (ret < 0)
return ret;
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b2343aed7a5d..814e48003015 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4046,7 +4046,7 @@ static int copy_remapped_data(struct btrfs_fs_info *fs_info, u64 old_addr,
if (!pages)
return -ENOMEM;
- ret = btrfs_alloc_page_array(nr_pages, pages, 0);
+ ret = btrfs_alloc_page_array(nr_pages, pages, 0, 0);
if (ret) {
ret = -ENOMEM;
goto end;
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index bc94bbc00772..23f6f780eab6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -369,7 +369,8 @@ static int init_scrub_stripe(struct btrfs_fs_info *fs_info,
ASSERT(BTRFS_STRIPE_LEN >> min_folio_shift <= SCRUB_STRIPE_MAX_FOLIOS);
ret = btrfs_alloc_folio_array(BTRFS_STRIPE_LEN >> min_folio_shift,
- fs_info->block_min_order, stripe->folios);
+ fs_info->block_min_order, stripe->folios,
+ 0);
if (ret < 0)
goto error;
--
2.52.0
Thread overview: 48+ messages
2026-04-30 20:20 [00/45 RFC PATCH] 1GB superpageblock memory allocation Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 01/45] mm: page_alloc: replace pageblock_flags bitmap with struct pageblock_data Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 02/45] mm: page_alloc: per-cpu pageblock buddy allocator Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 03/45] mm: page_alloc: use trylock for PCP lock in free path to avoid lock inversion Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 04/45] mm: mm_init: fix zone assignment for pages in unavailable ranges Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 05/45] mm: vmstat: restore per-migratetype free counts in /proc/pagetypeinfo Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 06/45] mm: page_alloc: remove watermark boost mechanism Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 07/45] mm: page_alloc: async evacuation of stolen movable pageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 08/45] mm: page_alloc: track actual page contents in pageblock flags Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 09/45] mm: page_alloc: introduce superpageblock metadata for 1GB anti-fragmentation Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 10/45] mm: page_alloc: support superpageblock resize for memory hotplug Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 11/45] mm: page_alloc: add superpageblock fullness lists for allocation steering Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 12/45] mm: page_alloc: steer pageblock stealing to tainted superpageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 13/45] mm: page_alloc: steer movable allocations to fullest clean superpageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 14/45] mm: page_alloc: extract claim_whole_block from try_to_claim_block Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 15/45] mm: page_alloc: add per-superpageblock free lists Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 16/45] mm: page_alloc: add background superpageblock defragmentation worker Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 17/45] mm: page_alloc: add within-superpageblock compaction for clean superpageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 18/45] mm: page_alloc: superpageblock-aware contiguous and higher order allocation Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 19/45] mm: page_alloc: prevent atomic allocations from tainting clean SPBs Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 20/45] mm: page_alloc: aggressively pack non-movable allocations in tainted SPBs on large systems Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 21/45] mm: page_alloc: prefer reclaim over tainting clean superpageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 22/45] mm: page_alloc: adopt partial pageblocks from tainted superpageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 23/45] mm: page_alloc: add CONFIG_DEBUG_VM sanity checks for SPB counters Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 24/45] mm: page_alloc: targeted evacuation and dynamic reserves for tainted SPBs Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 25/45] mm: page_alloc: skip pageblock compatibility threshold in " Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 26/45] mm: page_alloc: prevent UNMOVABLE/RECLAIMABLE mixing in pageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 27/45] mm: trigger deferred SPB evacuation when atomic allocs would taint a clean SPB Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 28/45] mm: page_alloc: keep PCP refill in tainted SPBs across owned pageblocks Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 29/45] mm: page_alloc: refuse fragmenting fallback for callers with cheap fallback Rik van Riel
2026-04-30 20:20 ` [RFC PATCH 30/45] mm: page_alloc: drive slab shrink from SPB anti-fragmentation pressure Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 31/45] mm: page_alloc: cross-non-movable buddy borrow within tainted SPBs Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 32/45] mm: page_alloc: proactive high-water trigger for SPB slab shrink Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 33/45] mm: page_alloc: refuse to taint clean SPBs for atomic NORETRY callers Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 34/45] mm: page_reporting: walk per-superpageblock free lists Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 35/45] mm: show_mem: collect migratetype letters from per-superpageblock lists Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 36/45] mm: page_alloc: add alloc_flags parameter to __rmqueue_smallest Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 37/45] mm/slub: kvmalloc — add __GFP_NORETRY to large-kmalloc attempt Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 38/45] mm: page_alloc: per-(zone, order, mt) PASS_1 hint cache Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 39/45] mm: debug: prevent infinite recursion in dump_page() with CMA Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 40/45] PM: hibernate: walk per-superpageblock free lists in mark_free_pages Rik van Riel
2026-04-30 20:21 ` Rik van Riel [this message]
2026-04-30 20:21 ` [RFC PATCH 42/45] mm: page_alloc: cross-MOV borrow within tainted SPBs Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 43/45] mm: page_alloc: trigger defrag from allocator hot path on tainted-SPB pressure Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 44/45] mm: page_alloc: SPB tracepoint instrumentation [DROP-FOR-UPSTREAM] Rik van Riel
2026-04-30 20:21 ` [RFC PATCH 45/45] mm: page_alloc: enlarge and unify spb_evacuate_for_order Rik van Riel
2026-05-01 7:14 ` [00/45 RFC PATCH] 1GB superpageblock memory allocation David Hildenbrand (Arm)
2026-05-01 11:58 ` Rik van Riel