* [PATCH] btrfs: extract the max compression chunk size into a macro
@ 2026-02-27 2:45 Qu Wenruo
2026-02-27 9:38 ` Filipe Manana
0 siblings, 1 reply; 2+ messages in thread
From: Qu Wenruo @ 2026-02-27 2:45 UTC (permalink / raw)
To: linux-btrfs
We have two locations using an open-coded 512K value as the async chunk
size.
For compression there is not only a maximum size that a single compressed
extent can represent (128K), but also a maximum size for an async chunk (512K).
Although we have a macro for the maximum compressed extent size, we do
not have any macro for the async chunk size.
Add such a macro and replace the two open-coded SZ_512K usages.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/compression.h | 3 +++
fs/btrfs/inode.c | 4 ++--
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 84600b284e1e..973530e9ce6c 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -36,6 +36,9 @@ struct btrfs_ordered_extent;
#define BTRFS_MAX_COMPRESSED_PAGES (BTRFS_MAX_COMPRESSED / PAGE_SIZE)
static_assert((BTRFS_MAX_COMPRESSED % PAGE_SIZE) == 0);
+/* The max size for a single worker to compress. */
+#define BTRFS_COMPRESSION_CHUNK_SIZE (SZ_512K)
+
/* Maximum size of data before compression */
#define BTRFS_MAX_UNCOMPRESSED (SZ_128K)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9148ec4a1d19..acfef903ac8b 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1587,7 +1587,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
struct async_cow *ctx;
struct async_chunk *async_chunk;
unsigned long nr_pages;
- u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
+ u64 num_chunks = DIV_ROUND_UP(end - start, BTRFS_COMPRESSION_CHUNK_SIZE);
int i;
unsigned nofs_flag;
const blk_opf_t write_flags = wbc_to_write_flags(wbc);
@@ -1604,7 +1604,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
atomic_set(&ctx->num_chunks, num_chunks);
for (i = 0; i < num_chunks; i++) {
- u64 cur_end = min(end, start + SZ_512K - 1);
+ u64 cur_end = min(end, start + BTRFS_COMPRESSION_CHUNK_SIZE - 1);
/*
* igrab is called higher up in the call chain, take only the
--
2.53.0
* Re: [PATCH] btrfs: extract the max compression chunk size into a macro
2026-02-27 2:45 [PATCH] btrfs: extract the max compression chunk size into a macro Qu Wenruo
@ 2026-02-27 9:38 ` Filipe Manana
0 siblings, 0 replies; 2+ messages in thread
From: Filipe Manana @ 2026-02-27 9:38 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Fri, Feb 27, 2026 at 2:46 AM Qu Wenruo <wqu@suse.com> wrote:
>
> We have two locations using open-coded 512K size, as the async chunk
> size.
>
> For compression we have not only the max size a compressed extent can
> represent (128K), but also how large an async chunk can be (512K).
>
> Although we have a macro for the maximum compressed extent size, we do
> not have any macro for the async chunk size.
>
> Add such macro and replace the two open-coded SZ_512K.
Missing an "a" between such and macro.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Thanks.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/compression.h | 3 +++
> fs/btrfs/inode.c | 4 ++--
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index 84600b284e1e..973530e9ce6c 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -36,6 +36,9 @@ struct btrfs_ordered_extent;
> #define BTRFS_MAX_COMPRESSED_PAGES (BTRFS_MAX_COMPRESSED / PAGE_SIZE)
> static_assert((BTRFS_MAX_COMPRESSED % PAGE_SIZE) == 0);
>
> +/* The max size for a single worker to compress. */
> +#define BTRFS_COMPRESSION_CHUNK_SIZE (SZ_512K)
> +
> /* Maximum size of data before compression */
> #define BTRFS_MAX_UNCOMPRESSED (SZ_128K)
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 9148ec4a1d19..acfef903ac8b 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -1587,7 +1587,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
> struct async_cow *ctx;
> struct async_chunk *async_chunk;
> unsigned long nr_pages;
> - u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
> + u64 num_chunks = DIV_ROUND_UP(end - start, BTRFS_COMPRESSION_CHUNK_SIZE);
> int i;
> unsigned nofs_flag;
> const blk_opf_t write_flags = wbc_to_write_flags(wbc);
> @@ -1604,7 +1604,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
> atomic_set(&ctx->num_chunks, num_chunks);
>
> for (i = 0; i < num_chunks; i++) {
> - u64 cur_end = min(end, start + SZ_512K - 1);
> + u64 cur_end = min(end, start + BTRFS_COMPRESSION_CHUNK_SIZE - 1);
>
> /*
> * igrab is called higher up in the call chain, take only the
> --
> 2.53.0
>
>