public inbox for linux-btrfs@vger.kernel.org
From: Filipe Manana <fdmanana@kernel.org>
To: Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: extract the max compression chunk size into a macro
Date: Fri, 27 Feb 2026 09:38:42 +0000	[thread overview]
Message-ID: <CAL3q7H50icvNtDqWdZe4yxh4Y9cXdSyNH=RWbvWTBPeR1fnNjQ@mail.gmail.com> (raw)
In-Reply-To: <fdeb2bf487d20620a0823d30da0b97f9b25dc5a1.1772160339.git.wqu@suse.com>

On Fri, Feb 27, 2026 at 2:46 AM Qu Wenruo <wqu@suse.com> wrote:
>
> We have two locations using open-coded 512K size, as the async chunk
> size.
>
> For compression we have not only the max size a compressed extent can
> represent (128K), but also how large an async chunk can be (512K).
>
> Although we have a macro for the maximum compressed extent size, we do
> not have any macro for the async chunk size.
>
> Add such macro and replace the two open-coded SZ_512K.

Missing an "a" between "such" and "macro".

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Thanks.

>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/compression.h | 3 +++
>  fs/btrfs/inode.c       | 4 ++--
>  2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index 84600b284e1e..973530e9ce6c 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -36,6 +36,9 @@ struct btrfs_ordered_extent;
>  #define BTRFS_MAX_COMPRESSED_PAGES     (BTRFS_MAX_COMPRESSED / PAGE_SIZE)
>  static_assert((BTRFS_MAX_COMPRESSED % PAGE_SIZE) == 0);
>
> +/* The max size for a single worker to compress. */
> +#define BTRFS_COMPRESSION_CHUNK_SIZE   (SZ_512K)
> +
>  /* Maximum size of data before compression */
>  #define BTRFS_MAX_UNCOMPRESSED         (SZ_128K)
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 9148ec4a1d19..acfef903ac8b 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -1587,7 +1587,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
>         struct async_cow *ctx;
>         struct async_chunk *async_chunk;
>         unsigned long nr_pages;
> -       u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
> +       u64 num_chunks = DIV_ROUND_UP(end - start, BTRFS_COMPRESSION_CHUNK_SIZE);
>         int i;
>         unsigned nofs_flag;
>         const blk_opf_t write_flags = wbc_to_write_flags(wbc);
> @@ -1604,7 +1604,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
>         atomic_set(&ctx->num_chunks, num_chunks);
>
>         for (i = 0; i < num_chunks; i++) {
> -               u64 cur_end = min(end, start + SZ_512K - 1);
> +               u64 cur_end = min(end, start + BTRFS_COMPRESSION_CHUNK_SIZE - 1);
>
>                 /*
>                  * igrab is called higher up in the call chain, take only the
> --
> 2.53.0
>
>

Thread overview:
2026-02-27  2:45 [PATCH] btrfs: extract the max compression chunk size into a macro Qu Wenruo
2026-02-27  9:38 ` Filipe Manana [this message]