* [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
@ 2026-01-20 0:00 Qu Wenruo
2026-01-20 0:00 ` [PATCH 1/3] btrfs: use folio_iter to handle lzo_decompress_bio() Qu Wenruo
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Qu Wenruo @ 2026-01-20 0:00 UTC (permalink / raw)
To: linux-btrfs
Currently we have compressed_bio::compressed_folios[] allowing us to do
random access to any compressed folio, then we queue all folios in that
array into a real btrfs_bio, and submit that btrfs_bio for read/write.
However there is not really any need for random access to that array.
All compression/decompression does sequential folio access.
Part 1 is some easy and safe conversions of the decompression paths.
Part 2 will handle the compression side, but unfortunately that will
require changes across all the compression paths, thus some extra work.
Until the compression paths are also converted, we still need the
compressed_folios[] array.
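The sequential-advance pattern the series introduces can be sketched in
plain userspace C. Everything below (chunk_iter, get_current_chunk(),
CHUNK_SHIFT) is invented for the sketch, standing in for the kernel's
folio_iter, the new get_current_folio() helper and min_folio_shift:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CHUNK_SHIFT 12			/* 4K chunks, like PAGE_SHIFT */
#define CHUNK_SIZE  (1u << CHUNK_SHIFT)

/* One-way iterator over fixed-size chunks of a flat buffer. */
struct chunk_iter {
	const uint8_t *chunk;		/* currently "mapped" chunk */
	uint32_t index;			/* index of @chunk in the stream */
};

/*
 * Return the chunk covering byte offset @cur_in, advancing the iterator
 * by at most one chunk.  Going backwards or skipping ahead is not
 * supported, mirroring the sequential-only access the series relies on.
 */
static const uint8_t *get_current_chunk(const uint8_t *base,
					struct chunk_iter *it, uint32_t cur_in)
{
	if (cur_in >> CHUNK_SHIFT != it->index) {
		/* Sequential access only: never skip a chunk. */
		assert(cur_in >> CHUNK_SHIFT == it->index + 1);
		it->index++;
		it->chunk = base + ((size_t)it->index << CHUNK_SHIFT);
	}
	return it->chunk;
}
```

Callers just keep bumping the byte offset and the iterator silently
crosses chunk boundaries, which is what the decompression loops do with
@cur_in and the folio_iter.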
Qu Wenruo (3):
btrfs: use folio_iter to handle lzo_decompress_bio()
btrfs: use folio_iter to handle zlib_decompress_bio()
btrfs: use folio_iter to handle zstd_decompress_bio()
fs/btrfs/lzo.c | 48 +++++++++++++++++++++++++++++++++++++++---------
fs/btrfs/zlib.c | 19 ++++++++++++-------
fs/btrfs/zstd.c | 13 +++++++++----
3 files changed, 60 insertions(+), 20 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/3] btrfs: use folio_iter to handle lzo_decompress_bio()
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
@ 2026-01-20 0:00 ` Qu Wenruo
2026-01-20 0:00 ` [PATCH 2/3] btrfs: use folio_iter to handle zlib_decompress_bio() Qu Wenruo
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Qu Wenruo @ 2026-01-20 0:00 UTC (permalink / raw)
To: linux-btrfs
Currently lzo_decompress_bio() is using
compressed_bio->compressed_folios[] array to grab each compressed folio.
This makes the code easy to read, as we only need to maintain a single
iterator, @cur_in, and can grab any folio at random using
@cur_in >> min_folio_shift as an index.
However lzo_decompress_bio() itself only ever advances to the next
folio one step at a time, and compressed_folios[] is just an array of
pointers to the folios of the compressed bio, thus lzo_decompress_bio()
has no real random access requirement.
Replace the compressed_folios[] accesses with a helper,
get_current_folio(), which uses a folio_iter and an external folio
counter to switch to the next folio when needed.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/lzo.c | 48 +++++++++++++++++++++++++++++++++++++++---------
1 file changed, 39 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 4758f66da449..83c106ca1c14 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -310,23 +310,46 @@ int lzo_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
return ret;
}
+static struct folio *get_current_folio(struct compressed_bio *cb,
+ struct folio_iter *fi,
+ u32 *cur_folio_index,
+ u32 cur_in)
+{
+ struct btrfs_fs_info *fs_info = cb_to_fs_info(cb);
+ const u32 min_folio_shift = PAGE_SHIFT + fs_info->block_min_order;
+
+ ASSERT(cur_folio_index);
+
+ /* Need to switch to the next folio. */
+ if (cur_in >> min_folio_shift != *cur_folio_index) {
+ /* We can only do the switch one folio at a time. */
+ ASSERT(cur_in >> min_folio_shift == *cur_folio_index + 1);
+
+ bio_next_folio(fi, &cb->bbio.bio);
+ (*cur_folio_index)++;
+ }
+ return fi->folio;
+}
+
/*
* Copy the compressed segment payload into @dest.
*
* For the payload there will be no padding, just need to do page switching.
*/
static void copy_compressed_segment(struct compressed_bio *cb,
+ struct folio_iter *fi,
+ u32 *cur_folio_index,
char *dest, u32 len, u32 *cur_in)
{
- struct btrfs_fs_info *fs_info = cb_to_fs_info(cb);
- const u32 min_folio_shift = PAGE_SHIFT + fs_info->block_min_order;
u32 orig_in = *cur_in;
while (*cur_in < orig_in + len) {
- struct folio *cur_folio = cb->compressed_folios[*cur_in >> min_folio_shift];
- u32 copy_len = min_t(u32, orig_in + len - *cur_in,
- folio_size(cur_folio) - offset_in_folio(cur_folio, *cur_in));
+ struct folio *cur_folio = get_current_folio(cb, fi, cur_folio_index, *cur_in);
+ u32 copy_len;
+ ASSERT(cur_folio);
+ copy_len = min_t(u32, orig_in + len - *cur_in,
+ folio_size(cur_folio) - offset_in_folio(cur_folio, *cur_in));
ASSERT(copy_len);
memcpy_from_folio(dest + *cur_in - orig_in, cur_folio,
@@ -341,7 +364,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
struct workspace *workspace = list_entry(ws, struct workspace, list);
const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info;
const u32 sectorsize = fs_info->sectorsize;
- const u32 min_folio_shift = PAGE_SHIFT + fs_info->block_min_order;
+ struct folio_iter fi;
char *kaddr;
int ret;
/* Compressed data length, can be unaligned */
@@ -350,8 +373,14 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
u32 cur_in = 0;
/* Bytes decompressed so far */
u32 cur_out = 0;
+ /* The current folio index number inside the bio. */
+ u32 cur_folio_index = 0;
- kaddr = kmap_local_folio(cb->compressed_folios[0], 0);
+ bio_first_folio(&fi, &cb->bbio.bio, 0);
+ /* There must be a compressed folio, and it must match the sectorsize. */
+ ASSERT(fi.folio);
+ ASSERT(folio_size(fi.folio) == sectorsize);
+ kaddr = kmap_local_folio(fi.folio, 0);
len_in = read_compress_length(kaddr);
kunmap_local(kaddr);
cur_in += LZO_LEN;
@@ -388,7 +417,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
*/
ASSERT(cur_in / sectorsize ==
(cur_in + LZO_LEN - 1) / sectorsize);
- cur_folio = cb->compressed_folios[cur_in >> min_folio_shift];
+ cur_folio = get_current_folio(cb, &fi, &cur_folio_index, cur_in);
ASSERT(cur_folio);
kaddr = kmap_local_folio(cur_folio, 0);
seg_len = read_compress_length(kaddr + offset_in_folio(cur_folio, cur_in));
@@ -410,7 +439,8 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
}
/* Copy the compressed segment payload into workspace */
- copy_compressed_segment(cb, workspace->cbuf, seg_len, &cur_in);
+ copy_compressed_segment(cb, &fi, &cur_folio_index, workspace->cbuf,
+ seg_len, &cur_in);
/* Decompress the data */
ret = lzo1x_decompress_safe(workspace->cbuf, seg_len,
--
2.52.0
* [PATCH 2/3] btrfs: use folio_iter to handle zlib_decompress_bio()
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
2026-01-20 0:00 ` [PATCH 1/3] btrfs: use folio_iter to handle lzo_decompress_bio() Qu Wenruo
@ 2026-01-20 0:00 ` Qu Wenruo
2026-01-20 0:00 ` [PATCH 3/3] btrfs: use folio_iter to handle zstd_decompress_bio() Qu Wenruo
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Qu Wenruo @ 2026-01-20 0:00 UTC (permalink / raw)
To: linux-btrfs
Currently zlib_decompress_bio() uses the
compressed_bio->compressed_folios[] array to grab each compressed folio.
However cb->compressed_folios[] is just an array of pointers to the
folios of the compressed bio, meaning we can replace it by grabbing the
folios directly from the compressed bio.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/zlib.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 10ed48d4a846..6871476e6ebf 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -338,18 +338,22 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
{
struct btrfs_fs_info *fs_info = cb_to_fs_info(cb);
struct workspace *workspace = list_entry(ws, struct workspace, list);
+ struct folio_iter fi;
const u32 min_folio_size = btrfs_min_folio_size(fs_info);
int ret = 0, ret2;
int wbits = MAX_WBITS;
char *data_in;
size_t total_out = 0;
- unsigned long folio_in_index = 0;
size_t srclen = cb->compressed_len;
- unsigned long total_folios_in = DIV_ROUND_UP(srclen, min_folio_size);
unsigned long buf_start;
- struct folio **folios_in = cb->compressed_folios;
- data_in = kmap_local_folio(folios_in[folio_in_index], 0);
+ bio_first_folio(&fi, &cb->bbio.bio, 0);
+
+ /* We must have at least one folio here, and it must have the correct size. */
+ ASSERT(fi.folio);
+ ASSERT(folio_size(fi.folio) == min_folio_size);
+
+ data_in = kmap_local_folio(fi.folio, 0);
workspace->strm.next_in = data_in;
workspace->strm.avail_in = min_t(size_t, srclen, min_folio_size);
workspace->strm.total_in = 0;
@@ -404,12 +408,13 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
if (workspace->strm.avail_in == 0) {
unsigned long tmp;
kunmap_local(data_in);
- folio_in_index++;
- if (folio_in_index >= total_folios_in) {
+ bio_next_folio(&fi, &cb->bbio.bio);
+ if (!fi.folio) {
data_in = NULL;
break;
}
- data_in = kmap_local_folio(folios_in[folio_in_index], 0);
+ ASSERT(folio_size(fi.folio) == min_folio_size);
+ data_in = kmap_local_folio(fi.folio, 0);
workspace->strm.next_in = data_in;
tmp = srclen - workspace->strm.total_in;
workspace->strm.avail_in = min(tmp, min_folio_size);
--
2.52.0
* [PATCH 3/3] btrfs: use folio_iter to handle zstd_decompress_bio()
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
2026-01-20 0:00 ` [PATCH 1/3] btrfs: use folio_iter to handle lzo_decompress_bio() Qu Wenruo
2026-01-20 0:00 ` [PATCH 2/3] btrfs: use folio_iter to handle zlib_decompress_bio() Qu Wenruo
@ 2026-01-20 0:00 ` Qu Wenruo
2026-01-20 17:29 ` [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 David Sterba
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Qu Wenruo @ 2026-01-20 0:00 UTC (permalink / raw)
To: linux-btrfs
Currently zstd_decompress_bio() uses the
compressed_bio->compressed_folios[] array to grab each compressed folio.
However cb->compressed_folios[] is just an array of pointers to the
folios of the compressed bio, meaning we can replace it by grabbing the
folios directly from the compressed bio.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/zstd.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index c9cddcfa337b..737bc49652b0 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -589,7 +589,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
{
struct btrfs_fs_info *fs_info = cb_to_fs_info(cb);
struct workspace *workspace = list_entry(ws, struct workspace, list);
- struct folio **folios_in = cb->compressed_folios;
+ struct folio_iter fi;
size_t srclen = cb->compressed_len;
zstd_dstream *stream;
int ret = 0;
@@ -612,7 +612,11 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
goto done;
}
- workspace->in_buf.src = kmap_local_folio(folios_in[folio_in_index], 0);
+ bio_first_folio(&fi, &cb->bbio.bio, 0);
+ ASSERT(fi.folio);
+ ASSERT(folio_size(fi.folio) == blocksize);
+
+ workspace->in_buf.src = kmap_local_folio(fi.folio, 0);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, srclen, min_folio_size);
@@ -660,8 +664,9 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
goto done;
}
srclen -= min_folio_size;
- workspace->in_buf.src =
- kmap_local_folio(folios_in[folio_in_index], 0);
+ bio_next_folio(&fi, &cb->bbio.bio);
+ ASSERT(fi.folio);
+ workspace->in_buf.src = kmap_local_folio(fi.folio, 0);
workspace->in_buf.pos = 0;
workspace->in_buf.size = min_t(size_t, srclen, min_folio_size);
}
--
2.52.0
* Re: [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
` (2 preceding siblings ...)
2026-01-20 0:00 ` [PATCH 3/3] btrfs: use folio_iter to handle zstd_decompress_bio() Qu Wenruo
@ 2026-01-20 17:29 ` David Sterba
2026-01-20 20:41 ` Qu Wenruo
2026-01-21 3:47 ` David Sterba
2026-01-24 21:48 ` Qu Wenruo
5 siblings, 1 reply; 9+ messages in thread
From: David Sterba @ 2026-01-20 17:29 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Tue, Jan 20, 2026 at 10:30:07AM +1030, Qu Wenruo wrote:
> Currently we have compressed_bio::compressed_folios[] allowing us to do
> random access to any compressed folio, then we queue all folios in that
> array into a real btrfs_bio, and submit that btrfs_bio for read/write.
>
> However there is not really any need to do random access of that array.
>
> All compression/decompression is doing sequential folio access.
>
> Part 1 is some easy and safe conversions of the decompression paths.
>
> Part 2 will handle the compression side, but unfortunately that will
> require changes across all the compression paths, thus some extra
> work.
>
> Until the compression paths are also converted, we still need the
> compressed_folios[] array.
>
> Qu Wenruo (3):
> btrfs: use folio_iter to handle lzo_decompress_bio()
> btrfs: use folio_iter to handle zlib_decompress_bio()
> btrfs: use folio_iter to handle zstd_decompress_bio()
The change makes sense, however there are some low level effects that
are not desirable in the compression callbacks as they're deep in the IO
path. Using the folio iterator on stack adds 40 bytes for lzo (144 -> 184),
and for zstd it's +24 (120 -> 144). This can be fixed by moving the
iterator to the workspace as we're not using the full slab bucket size
for either (lzo workspace is 40, zstd is 160).
The code size increases by ~2800 bytes due to specialized cold versions
of the decompression callbacks, but other than the size increase it's
acceptable.
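Sketched out, the suggestion is to embed the iterator in the
per-algorithm workspace so that it no longer lives on the decompress
stack; the field names below are illustrative only, not the actual
btrfs struct definitions:

```c
/* Illustrative sketch, not the real btrfs workspace layout. */
struct workspace {
	void *mem;			/* algorithm working memory */
	void *buf;			/* compressed segment buffer */
	struct folio_iter fi;		/* moved off the decompress stack */
	struct list_head list;
};
```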
* Re: [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
2026-01-20 17:29 ` [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 David Sterba
@ 2026-01-20 20:41 ` Qu Wenruo
2026-01-21 3:47 ` David Sterba
0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2026-01-20 20:41 UTC (permalink / raw)
To: dsterba; +Cc: linux-btrfs
On 2026/1/21 03:59, David Sterba wrote:
> On Tue, Jan 20, 2026 at 10:30:07AM +1030, Qu Wenruo wrote:
>> Currently we have compressed_bio::compressed_folios[] allowing us to do
>> random access to any compressed folio, then we queue all folios in that
>> array into a real btrfs_bio, and submit that btrfs_bio for read/write.
>>
>> However there is not really any need to do random access of that array.
>>
>> All compression/decompression is doing sequential folio access.
>>
>> Part 1 is some easy and safe conversions of the decompression paths.
>>
>> Part 2 will handle the compression side, but unfortunately that will
>> require changes across all the compression paths, thus some extra
>> work.
>>
>> Until the compression paths are also converted, we still need the
>> compressed_folios[] array.
>>
>> Qu Wenruo (3):
>> btrfs: use folio_iter to handle lzo_decompress_bio()
>> btrfs: use folio_iter to handle zlib_decompress_bio()
>> btrfs: use folio_iter to handle zstd_decompress_bio()
>
> The change makes sense, however there are some low level effects that
> are not desirable in the compression callbacks as they're deep in the IO
> path. Using the folio iterator on stack adds 40 bytes for lzo (144 -> 184),
> and for zstd it's +24 (120 -> 144). This can be fixed by moving the
> iterator to the workspace as we're not using the full slab bucket size
> for either (lzo workspace is 40, zstd is 160).
Although we're never going to be that deep in the IO path.
Since commit 4591c3ef751d ("btrfs: make sure all btrfs_bio::end_io are
called in task context"), all btrfs_bio::end_io() callbacks are called
from a workqueue.
So end_bbio_compressed_read() now runs inside a workqueue with a pretty
shallow stack, thus the stack usage increase should still be fine.
Thanks,
Qu
>
> The code size increases by ~2800 bytes due to specialized cold versions
> of the decompression callbacks, but other than the size increase it's
> acceptable.
* Re: [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
2026-01-20 20:41 ` Qu Wenruo
@ 2026-01-21 3:47 ` David Sterba
0 siblings, 0 replies; 9+ messages in thread
From: David Sterba @ 2026-01-21 3:47 UTC (permalink / raw)
To: Qu Wenruo; +Cc: dsterba, linux-btrfs
On Wed, Jan 21, 2026 at 07:11:18AM +1030, Qu Wenruo wrote:
>
>
> On 2026/1/21 03:59, David Sterba wrote:
> > On Tue, Jan 20, 2026 at 10:30:07AM +1030, Qu Wenruo wrote:
> >> Currently we have compressed_bio::compressed_folios[] allowing us to do
> >> random access to any compressed folio, then we queue all folios in that
> >> array into a real btrfs_bio, and submit that btrfs_bio for read/write.
> >>
> >> However there is not really any need to do random access of that array.
> >>
> >> All compression/decompression is doing sequential folio access.
> >>
> >> Part 1 is some easy and safe conversions of the decompression paths.
> >>
> >> Part 2 will handle the compression side, but unfortunately that will
> >> require changes across all the compression paths, thus some extra
> >> work.
> >>
> >> Until the compression paths are also converted, we still need the
> >> compressed_folios[] array.
> >>
> >> Qu Wenruo (3):
> >> btrfs: use folio_iter to handle lzo_decompress_bio()
> >> btrfs: use folio_iter to handle zlib_decompress_bio()
> >> btrfs: use folio_iter to handle zstd_decompress_bio()
> >
> > The change makes sense, however there are some low level effects that
> > are not desirable in the compression callbacks as they're deep in the IO
> > path. Using the folio iterator on stack adds 40 bytes for lzo (144 -> 184),
> > and for zstd it's +24 (120 -> 144). This can be fixed by moving the
> > iterator to the workspace as we're not using the full slab bucket size
> > for either (lzo workspace is 40, zstd is 160).
>
> Although we're never going to be that deep in the IO path.
>
> Since commit 4591c3ef751d ("btrfs: make sure all btrfs_bio::end_io are
> called in task context"), all btrfs_bio::end_io() callbacks are called
> from a workqueue.
>
> So end_bbio_compressed_read() now runs inside a workqueue with a pretty
> shallow stack, thus the stack usage increase should still be fine.
Right, it's not necessary after the workqueues, I need to get used to
it.
* Re: [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
` (3 preceding siblings ...)
2026-01-20 17:29 ` [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 David Sterba
@ 2026-01-21 3:47 ` David Sterba
2026-01-24 21:48 ` Qu Wenruo
5 siblings, 0 replies; 9+ messages in thread
From: David Sterba @ 2026-01-21 3:47 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Tue, Jan 20, 2026 at 10:30:07AM +1030, Qu Wenruo wrote:
> Currently we have compressed_bio::compressed_folios[] allowing us to do
> random access to any compressed folio, then we queue all folios in that
> array into a real btrfs_bio, and submit that btrfs_bio for read/write.
>
> However there is not really any need to do random access of that array.
>
> All compression/decompression is doing sequential folio access.
>
> Part 1 is some easy and safe conversions of the decompression paths.
>
> Part 2 will handle the compression side, but unfortunately that will
> require changes across all the compression paths, thus some extra
> work.
>
> Until the compression paths are also converted, we still need the
> compressed_folios[] array.
>
> Qu Wenruo (3):
> btrfs: use folio_iter to handle lzo_decompress_bio()
> btrfs: use folio_iter to handle zlib_decompress_bio()
> btrfs: use folio_iter to handle zstd_decompress_bio()
Reviewed-by: David Sterba <dsterba@suse.com>
* Re: [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1
2026-01-20 0:00 [PATCH 0/3] btrfs: get rid of compressed_bio::compressed_folios[] part 1 Qu Wenruo
` (4 preceding siblings ...)
2026-01-21 3:47 ` David Sterba
@ 2026-01-24 21:48 ` Qu Wenruo
5 siblings, 0 replies; 9+ messages in thread
From: Qu Wenruo @ 2026-01-24 21:48 UTC (permalink / raw)
To: linux-btrfs
On 2026/1/20 10:30, Qu Wenruo wrote:
> Currently we have compressed_bio::compressed_folios[] allowing us to do
> random access to any compressed folio, then we queue all folios in that
> array into a real btrfs_bio, and submit that btrfs_bio for read/write.
>
> However there is not really any need to do random access of that array.
>
> All compression/decompression is doing sequential folio access.
Minor update in the for-next branch.
Replace the following pattern:
bio_first_folio(fi, bio, 0);
ASSERT(fi.folio);
With
bio_first_folio(fi, bio, 0);
if (unlikely(!fi.folio))
return -EINVAL;
And for the zstd one, move the bio_first_folio() call and the check to
the beginning of the function.
This is to avoid a compiler warning about uninitialized access to
folio_iter members: if the bio is empty, bio_first_folio() only
initializes fi.folio to NULL without touching the remaining members.
Thanks,
Qu
>
> Part 1 is some easy and safe conversions of the decompression paths.
>
> Part 2 will handle the compression side, but unfortunately that will
> require changes across all the compression paths, thus some extra
> work.
>
> Until the compression paths are also converted, we still need the
> compressed_folios[] array.
>
> Qu Wenruo (3):
> btrfs: use folio_iter to handle lzo_decompress_bio()
> btrfs: use folio_iter to handle zlib_decompress_bio()
> btrfs: use folio_iter to handle zstd_decompress_bio()
>
> fs/btrfs/lzo.c | 48 +++++++++++++++++++++++++++++++++++++++---------
> fs/btrfs/zlib.c | 19 ++++++++++++-------
> fs/btrfs/zstd.c | 13 +++++++++----
> 3 files changed, 60 insertions(+), 20 deletions(-)
>
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox