Linux Btrfs filesystem development
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 0/4] btrfs: experimental support for huge data folios
Date: Wed, 13 May 2026 14:06:17 +0930	[thread overview]
Message-ID: <cover.1778646753.git.wqu@suse.com> (raw)

[CHANGELOG]
v2:
- Rebased to the latest for-next branch
  There are several conflicts with the removal of locked and ordered
  sub-bitmaps.

  Now huge folios will take full advantage of the reduced sub-bitmap
  size.
  Previously we needed 5x64 bytes for the sub-bitmaps; now it's only
  3x64 bytes, a reduction of 40%.

  Although the increase in on-stack memory usage is still there.

- Minor grammar fixes
  Mostly in the commit message.

RFC->v1:
- Rebased to the latest for-next branch
  This provides a stable baseline that can pass the usual fstests runs,
  and drops the 2K fs block size support.

- Mark the new huge folio support as experimental
  Since large folio support itself has been moved out of experimental
  features, huge folio support needs to be hidden behind the
  experimental flag.

- Rework the blocks per folio limit
  Previously the blocks-per-folio limit was always calculated as
  BTRFS_MAX_FOLIO_SIZE / BTRFS_MIN_BLOCK_SIZE, but the real blocks per
  folio also depends on the fs block size.

  Now introduce a new BTRFS_MAX_BLOCKS_PER_FOLIO macro, which is either
  BITS_PER_LONG (the old one), or 512 (the new experimental one).

  This will allow non-experimental builds to get rid of the enlarged
  bitmap, thus lowering the on-stack memory usage for non-experimental
  builds.

Currently btrfs only supports folios as large as BITS_PER_LONG blocks.
This is an artificial limit introduced to make bitmap operations easier.

Btrfs has two extra bitmaps that live outside the btrfs_folio_state
structure: btrfs_bio_ctrl->submit_bitmap and the @delalloc_bitmap inside
writepage_delalloc().

Limiting the bitmap size to BITS_PER_LONG makes it very easy to handle
the above two bitmaps; we can just use a local unsigned long, with no
need for any memory allocation.

On the other hand, those two external bitmaps are the only things
limiting huge folio support.

The 1st patch will update the comments related to subpage implementation
first.
The 2nd patch will handle the subpage internal operations, mostly to
handle bitmap dumping.
The 3rd patch will prepare btrfs_bio_ctrl::submit_bitmap to be a proper
pointer for the incoming huge folios support.

The final patch will enable huge folio support, using an on-stack
bitmap that can contain 512 bits.
That will support a 2MiB folio size, which is order 9 on systems with
4K pages.

Qu Wenruo (4):
  btrfs: update the out-of-date comments on subpage
  btrfs: prepare subpage operations to support >= BITS_PER_LONG
    sub-bitmaps
  btrfs: migrate btrfs_bio_ctrl::submit_bitmap to support larger bitmaps
  btrfs: introduce support for huge folios

 fs/btrfs/disk-io.c   |  11 ++-
 fs/btrfs/extent_io.c |  71 ++++++++++--------
 fs/btrfs/fs.h        |  16 ++++
 fs/btrfs/subpage.c   | 173 ++++++++++++++++++++++++++-----------------
 fs/btrfs/subpage.h   |   8 +-
 5 files changed, 175 insertions(+), 104 deletions(-)

-- 
2.54.0

