* [PATCH 01/12] btrfs: remove duplicate calculation of eb offset in btrfs_bin_search()
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-07 19:34 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 02/12] btrfs: unify types for binary search variables David Sterba
` (10 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
In the main search loop the variable 'oil' (offset in folio) is set
twice, the second time redundantly when the key fits completely within
the contiguous range. We can remove the duplicate, and while it's just a
simple calculation, the binary search loop is executed many times so
micro-optimizations add up.
The code size is reduced by 64 bytes on the release config, the loop is
reorganized a bit and is a few instructions shorter.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/ctree.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 7267b250266579..1eef80c2108331 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -776,7 +776,6 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
const unsigned long idx = get_eb_folio_index(eb, offset);
char *kaddr = folio_address(eb->folios[idx]);
- oil = get_eb_offset_in_folio(eb, offset);
tmp = (struct btrfs_disk_key *)(kaddr + oil);
} else {
read_extent_buffer(eb, &unaligned, offset, key_size);
--
2.51.1
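As a rough sketch of the pattern the patch cleans up (hypothetical code with illustrative constants, not the btrfs on-disk format or the actual btrfs_bin_search() body): the offset within the backing page is computed once, then reused in both the boundary test and the fast path, instead of being recomputed inside the branch.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the eb folio helpers; PAGE_SZ and
 * KEY_SIZE are illustrative values only. */
#define PAGE_SZ 4096u
#define KEY_SIZE 17u

/* Offset of the byte at 'offset' within its backing page. */
static size_t offset_in_page_sz(size_t offset)
{
	return offset & (PAGE_SZ - 1);
}

/* Compute the offset-in-page once and reuse it, rather than deriving
 * it a second time inside the fast-path branch. */
static const void *key_ptr(const uint8_t *base, size_t offset, uint8_t *scratch)
{
	size_t oif = offset_in_page_sz(offset);	/* single calculation */

	if (oif + KEY_SIZE <= PAGE_SZ)
		return base + offset;		/* key fully inside one page */

	/* Key straddles a page boundary: copy into an aligned buffer. */
	memcpy(scratch, base + offset, KEY_SIZE);
	return scratch;
}
```

In a loop executed on every level of every tree walk, removing even one such redundant calculation is measurable in code size.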
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 01/12] btrfs: remove duplicate calculation of eb offset in btrfs_bin_search()
2026-01-06 16:20 ` [PATCH 01/12] btrfs: remove duplicate calculation of eb offset in btrfs_bin_search() David Sterba
@ 2026-01-07 19:34 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-07 19:34 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:24PM +0100, David Sterba wrote:
> In the main search loop the variable 'oil' (offset in folio) is set
> twice, the second time redundantly when the key fits completely within
> the contiguous range. We can remove the duplicate, and while it's just
> a simple calculation, the binary search loop is executed many times so
> micro-optimizations add up.
>
> The code size is reduced by 64 bytes on the release config, the loop
> is reorganized a bit and is a few instructions shorter.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
Reviewed-by: Boris Burkov <boris@bur.io>
> ---
> fs/btrfs/ctree.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 7267b250266579..1eef80c2108331 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -776,7 +776,6 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
> const unsigned long idx = get_eb_folio_index(eb, offset);
> char *kaddr = folio_address(eb->folios[idx]);
>
> - oil = get_eb_offset_in_folio(eb, offset);
> tmp = (struct btrfs_disk_key *)(kaddr + oil);
> } else {
> read_extent_buffer(eb, &unaligned, offset, key_size);
> --
> 2.51.1
>
* [PATCH 02/12] btrfs: unify types for binary search variables
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
2026-01-06 16:20 ` [PATCH 01/12] btrfs: remove duplicate calculation of eb offset in btrfs_bin_search() David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-07 19:33 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 03/12] btrfs: rename local variable for offset in folio David Sterba
` (9 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The variables calculating where to jump next use mixed types, which
requires some conversions at the instruction level. Using 'u32'
removes one 'movslq' instruction, making the main loop shorter.
This complements the type conversion done in a724f313f84beb ("btrfs: do
unsigned integer division in the extent buffer binary search loop").
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/ctree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 1eef80c2108331..0a7ee47aa8aaab 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -766,7 +766,7 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
unsigned long offset;
struct btrfs_disk_key *tmp;
struct btrfs_disk_key unaligned;
- int mid;
+ u32 mid;
mid = (low + high) / 2;
offset = p + mid * item_size;
--
2.51.1
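As a rough illustration of the type effect (a hypothetical, simplified loop, not the actual btrfs_bin_search() body): when every index variable shares one unsigned 32-bit type, a signed 'int mid' no longer has to be sign-extended (movslq on x86_64) before it can take part in 64-bit address arithmetic; a u32 zero-extends for free, and the unsigned division by 2 compiles to a plain shift.

```c
#include <stdint.h>

typedef uint32_t u32;

/* Sketch: binary search over an array of equal items, all index
 * variables the same unsigned 32-bit type. */
static int bin_search_u32(const u32 *items, u32 nritems, u32 key, u32 *slot)
{
	u32 low = 0;
	u32 high = nritems;

	while (low < high) {
		u32 mid = (low + high) / 2;	/* unsigned: shift, no sign fixup */

		if (items[mid] < key) {
			low = mid + 1;
		} else if (items[mid] > key) {
			high = mid;
		} else {
			*slot = mid;
			return 0;	/* exact match */
		}
	}
	*slot = low;			/* insertion point */
	return 1;
}
```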
* Re: [PATCH 02/12] btrfs: unify types for binary search variables
2026-01-06 16:20 ` [PATCH 02/12] btrfs: unify types for binary search variables David Sterba
@ 2026-01-07 19:33 ` Boris Burkov
2026-01-08 20:45 ` David Sterba
0 siblings, 1 reply; 25+ messages in thread
From: Boris Burkov @ 2026-01-07 19:33 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:25PM +0100, David Sterba wrote:
> The variables calculating where to jump next use mixed types, which
> requires some conversions at the instruction level. Using 'u32'
> removes one 'movslq' instruction, making the main loop shorter.
>
> This complements the type conversion done in a724f313f84beb ("btrfs: do
> unsigned integer division in the extent buffer binary search loop").
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/ctree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 1eef80c2108331..0a7ee47aa8aaab 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -766,7 +766,7 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
Does converting first_slot to u32 also add a movslq?
Also, why the "q"? Isn't everything 32-bit sized here? Surprised to see
it go to quad.
> unsigned long offset;
> struct btrfs_disk_key *tmp;
> struct btrfs_disk_key unaligned;
> - int mid;
> + u32 mid;
Shouldn't it be unsigned long long to theoretically avoid overflow? Not
really sure why we are storing "nritems" as a u32, I doubt we ever want
4 billion items in a leaf :)
If there was a format-enforceable limit, I suppose we could make
low/high smaller unsigned types?
>
> mid = (low + high) / 2;
> offset = p + mid * item_size;
> --
> 2.51.1
>
* Re: [PATCH 02/12] btrfs: unify types for binary search variables
2026-01-07 19:33 ` Boris Burkov
@ 2026-01-08 20:45 ` David Sterba
0 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-08 20:45 UTC (permalink / raw)
To: Boris Burkov; +Cc: David Sterba, linux-btrfs
On Wed, Jan 07, 2026 at 11:33:52AM -0800, Boris Burkov wrote:
> On Tue, Jan 06, 2026 at 05:20:25PM +0100, David Sterba wrote:
> > The variables calculating where to jump next use mixed types, which
> > requires some conversions at the instruction level. Using 'u32'
> > removes one 'movslq' instruction, making the main loop shorter.
> >
> > This complements the type conversion done in a724f313f84beb ("btrfs: do
> > unsigned integer division in the extent buffer binary search loop").
> >
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> > fs/btrfs/ctree.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> > index 1eef80c2108331..0a7ee47aa8aaab 100644
> > --- a/fs/btrfs/ctree.c
> > +++ b/fs/btrfs/ctree.c
> > @@ -766,7 +766,7 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
>
> Does converting first_slot to u32 also add a movslq?
No, still a plain mov, but there's some other effect that removes 2
reloads of the 2nd parameter to some other temporary register. I did not
expect that; there's more instruction-level tuning ahead.
> Also, why the "q"? Isn't everything 32 bit sized here? surprised to see
> it go to quad.
>
> > unsigned long offset;
> > struct btrfs_disk_key *tmp;
> > struct btrfs_disk_key unaligned;
> > - int mid;
> > + u32 mid;
>
> shouldn't it be unsigned long long to theoretically avoid overflow? Not
> really sure why we are storing "nritems" as a u32, I doubt we ever want
> 4 billion items in a leaf :)
I don't think overflow can happen as the bounds are checked in many
places before it gets to the bin search. The type width
(btrfs_header::nritems) could be u16 and still cover the 64K leaves, as
the item size is >= 4 for this to work.
> If there was a format enforcable limit, I suppose we could make the
> low/high smaller unsigned types?
For the format it could be smaller in case we'd want to squeeze every
byte out of it, but that decision belongs to the time when the format is
designed, and that's not now. At that time one does not know what will
be needed, so the types are chosen to be big enough, while today we know
well how it's used but can't go back and change the format.
The header size is 101 bytes; what could be reduced in size is flags and
nritems, potentially reducing it by 8 bytes to 93 (92%). This is per
node/leaf, and there are thousands per filesystem. On the biggest one I
have next to me (20T) there are about 1.2m nodes and the metadata size
is 21G. The savings translate to something like 160MiB. So yes, it's not
zero.
For the calculations u32 is more convenient as it matches the native
register size on many architectures.
* [PATCH 03/12] btrfs: rename local variable for offset in folio
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
2026-01-06 16:20 ` [PATCH 01/12] btrfs: remove duplicate calculation of eb offset in btrfs_bin_search() David Sterba
2026-01-06 16:20 ` [PATCH 02/12] btrfs: unify types for binary search variables David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-07 19:34 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 04/12] btrfs: read eb folio index right before loops David Sterba
` (8 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
Use the proper abbreviation of 'offset in folio' in the variable name,
the same as we have in accessors.c.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/ctree.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 0a7ee47aa8aaab..b959d62f015ef5 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -762,7 +762,7 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
while (low < high) {
const int unit_size = eb->folio_size;
- unsigned long oil;
+ unsigned long oif;
unsigned long offset;
struct btrfs_disk_key *tmp;
struct btrfs_disk_key unaligned;
@@ -770,13 +770,13 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
mid = (low + high) / 2;
offset = p + mid * item_size;
- oil = get_eb_offset_in_folio(eb, offset);
+ oif = get_eb_offset_in_folio(eb, offset);
- if (oil + key_size <= unit_size) {
+ if (oif + key_size <= unit_size) {
const unsigned long idx = get_eb_folio_index(eb, offset);
char *kaddr = folio_address(eb->folios[idx]);
- tmp = (struct btrfs_disk_key *)(kaddr + oil);
+ tmp = (struct btrfs_disk_key *)(kaddr + oif);
} else {
read_extent_buffer(eb, &unaligned, offset, key_size);
tmp = &unaligned;
--
2.51.1
* Re: [PATCH 03/12] btrfs: rename local variable for offset in folio
2026-01-06 16:20 ` [PATCH 03/12] btrfs: rename local variable for offset in folio David Sterba
@ 2026-01-07 19:34 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-07 19:34 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:26PM +0100, David Sterba wrote:
> Use the proper abbreviation of 'offset in folio' in the variable name,
> the same as we have in accessors.c.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
Reviewed-by: Boris Burkov <boris@bur.io>
> ---
> fs/btrfs/ctree.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
> index 0a7ee47aa8aaab..b959d62f015ef5 100644
> --- a/fs/btrfs/ctree.c
> +++ b/fs/btrfs/ctree.c
> @@ -762,7 +762,7 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
>
> while (low < high) {
> const int unit_size = eb->folio_size;
> - unsigned long oil;
> + unsigned long oif;
> unsigned long offset;
> struct btrfs_disk_key *tmp;
> struct btrfs_disk_key unaligned;
> @@ -770,13 +770,13 @@ int btrfs_bin_search(const struct extent_buffer *eb, int first_slot,
>
> mid = (low + high) / 2;
> offset = p + mid * item_size;
> - oil = get_eb_offset_in_folio(eb, offset);
> + oif = get_eb_offset_in_folio(eb, offset);
>
> - if (oil + key_size <= unit_size) {
> + if (oif + key_size <= unit_size) {
> const unsigned long idx = get_eb_folio_index(eb, offset);
> char *kaddr = folio_address(eb->folios[idx]);
>
> - tmp = (struct btrfs_disk_key *)(kaddr + oil);
> + tmp = (struct btrfs_disk_key *)(kaddr + oif);
> } else {
> read_extent_buffer(eb, &unaligned, offset, key_size);
> tmp = &unaligned;
> --
> 2.51.1
>
* [PATCH 04/12] btrfs: read eb folio index right before loops
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (2 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 03/12] btrfs: rename local variable for offset in folio David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-07 22:01 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 05/12] btrfs: use common eb range validation in read_extent_buffer_to_user_nofault() David Sterba
` (7 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
There are generic helpers to access extent buffer folio data of any
length, potentially iterating over a few folios. This is the slow path;
either we use the type-based accessors, or the eb folio allocation is
contiguous and we can use the memcpy/memcmp helpers.
The initialization of 'i' is done at the beginning though it may not be
needed. Move it right before the folio loop; this has a minor effect on
the generated code in __write_extent_buffer().
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/extent_io.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index bbc55873cb1678..97cf1ad91e5780 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3960,7 +3960,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
size_t cur;
size_t offset;
char *dst = (char *)dstv;
- unsigned long i = get_eb_folio_index(eb, start);
+ unsigned long i;
if (check_eb_range(eb, start, len)) {
/*
@@ -3977,7 +3977,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
}
offset = get_eb_offset_in_folio(eb, start);
-
+ i = get_eb_folio_index(eb, start);
while (len > 0) {
char *kaddr;
@@ -4000,7 +4000,7 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
size_t cur;
size_t offset;
char __user *dst = (char __user *)dstv;
- unsigned long i = get_eb_folio_index(eb, start);
+ unsigned long i;
int ret = 0;
WARN_ON(start > eb->len);
@@ -4013,7 +4013,7 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
}
offset = get_eb_offset_in_folio(eb, start);
-
+ i = get_eb_folio_index(eb, start);
while (len > 0) {
char *kaddr;
@@ -4041,7 +4041,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
size_t offset;
char *kaddr;
char *ptr = (char *)ptrv;
- unsigned long i = get_eb_folio_index(eb, start);
+ unsigned long i;
int ret = 0;
if (check_eb_range(eb, start, len))
@@ -4051,7 +4051,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
return memcmp(ptrv, eb->addr + start, len);
offset = get_eb_offset_in_folio(eb, start);
-
+ i = get_eb_folio_index(eb, start);
while (len > 0) {
cur = min(len, unit_size - offset);
kaddr = folio_address(eb->folios[i]);
@@ -4111,7 +4111,7 @@ static void __write_extent_buffer(const struct extent_buffer *eb,
size_t offset;
char *kaddr;
const char *src = (const char *)srcv;
- unsigned long i = get_eb_folio_index(eb, start);
+ unsigned long i;
/* For unmapped (dummy) ebs, no need to check their uptodate status. */
const bool check_uptodate = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
@@ -4127,7 +4127,7 @@ static void __write_extent_buffer(const struct extent_buffer *eb,
}
offset = get_eb_offset_in_folio(eb, start);
-
+ i = get_eb_folio_index(eb, start);
while (len > 0) {
if (check_uptodate)
assert_eb_folio_uptodate(eb, i);
@@ -4213,7 +4213,7 @@ void copy_extent_buffer(const struct extent_buffer *dst,
size_t cur;
size_t offset;
char *kaddr;
- unsigned long i = get_eb_folio_index(dst, dst_offset);
+ unsigned long i;
if (check_eb_range(dst, dst_offset, len) ||
check_eb_range(src, src_offset, len))
@@ -4223,6 +4223,7 @@ void copy_extent_buffer(const struct extent_buffer *dst,
offset = get_eb_offset_in_folio(dst, dst_offset);
+ i = get_eb_folio_index(dst, dst_offset);
while (len > 0) {
assert_eb_folio_uptodate(dst, i);
--
2.51.1
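A minimal sketch of the pattern, with a hypothetical miniature of an extent buffer (the struct, sizes and names are illustrative, not the btrfs types): the unit index 'i' is derived only right before the slow-path loop, so the contiguous fast path returns before the calculation is ever done, mirroring the move of get_eb_folio_index().

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define UNIT_SZ 4096u

/* Hypothetical miniature of an extent buffer: either a contiguous
 * mapping in 'addr', or an array of per-unit pointers. */
struct mini_eb {
	uint8_t *addr;
	uint8_t *units[4];
};

static void read_mini_eb(const struct mini_eb *eb, void *dstv, size_t start,
			 size_t len)
{
	uint8_t *dst = dstv;
	size_t offset, i;

	if (eb->addr) {			/* fast path: one memcpy, no index math */
		memcpy(dst, eb->addr + start, len);
		return;
	}

	offset = start & (UNIT_SZ - 1);
	i = start / UNIT_SZ;		/* computed right before the loop */
	while (len > 0) {
		size_t cur = len < UNIT_SZ - offset ? len : UNIT_SZ - offset;

		memcpy(dst, eb->units[i] + offset, cur);
		dst += cur;
		len -= cur;
		offset = 0;
		i++;
	}
}
```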
* Re: [PATCH 04/12] btrfs: read eb folio index right before loops
2026-01-06 16:20 ` [PATCH 04/12] btrfs: read eb folio index right before loops David Sterba
@ 2026-01-07 22:01 ` Boris Burkov
2026-01-08 21:04 ` David Sterba
0 siblings, 1 reply; 25+ messages in thread
From: Boris Burkov @ 2026-01-07 22:01 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:27PM +0100, David Sterba wrote:
> There are generic helpers to access extent buffer folio data of any
> length, potentially iterating over a few folios. This is the slow path;
> either we use the type-based accessors, or the eb folio allocation is
> contiguous and we can use the memcpy/memcmp helpers.
>
> The initialization of 'i' is done at the beginning though it may not be
> needed. Move it right before the folio loop; this has a minor effect on
> the generated code in __write_extent_buffer().
This seems fine, but also pointless. One right shift that the compiler
*can* move is causing us real pain? How often are we reading/writing
extent_buffers? Should I be constantly worried about any arithmetic I
ever do a hair early just to make code easier to read?
Anyway, if you do think it's worth it, you can add
Reviewed-by: Boris Burkov <boris@bur.io>
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/extent_io.c | 19 ++++++++++---------
> 1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index bbc55873cb1678..97cf1ad91e5780 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3960,7 +3960,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
> size_t cur;
> size_t offset;
> char *dst = (char *)dstv;
> - unsigned long i = get_eb_folio_index(eb, start);
> + unsigned long i;
>
> if (check_eb_range(eb, start, len)) {
> /*
> @@ -3977,7 +3977,7 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
> }
>
> offset = get_eb_offset_in_folio(eb, start);
> -
> + i = get_eb_folio_index(eb, start);
> while (len > 0) {
> char *kaddr;
>
> @@ -4000,7 +4000,7 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
> size_t cur;
> size_t offset;
> char __user *dst = (char __user *)dstv;
> - unsigned long i = get_eb_folio_index(eb, start);
> + unsigned long i;
> int ret = 0;
>
> WARN_ON(start > eb->len);
> @@ -4013,7 +4013,7 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
> }
>
> offset = get_eb_offset_in_folio(eb, start);
> -
> + i = get_eb_folio_index(eb, start);
> while (len > 0) {
> char *kaddr;
>
> @@ -4041,7 +4041,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
> size_t offset;
> char *kaddr;
> char *ptr = (char *)ptrv;
> - unsigned long i = get_eb_folio_index(eb, start);
> + unsigned long i;
> int ret = 0;
>
> if (check_eb_range(eb, start, len))
> @@ -4051,7 +4051,7 @@ int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
> return memcmp(ptrv, eb->addr + start, len);
>
> offset = get_eb_offset_in_folio(eb, start);
> -
> + i = get_eb_folio_index(eb, start);
> while (len > 0) {
> cur = min(len, unit_size - offset);
> kaddr = folio_address(eb->folios[i]);
> @@ -4111,7 +4111,7 @@ static void __write_extent_buffer(const struct extent_buffer *eb,
> size_t offset;
> char *kaddr;
> const char *src = (const char *)srcv;
> - unsigned long i = get_eb_folio_index(eb, start);
> + unsigned long i;
> /* For unmapped (dummy) ebs, no need to check their uptodate status. */
> const bool check_uptodate = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
>
> @@ -4127,7 +4127,7 @@ static void __write_extent_buffer(const struct extent_buffer *eb,
> }
>
> offset = get_eb_offset_in_folio(eb, start);
> -
> + i = get_eb_folio_index(eb, start);
> while (len > 0) {
> if (check_uptodate)
> assert_eb_folio_uptodate(eb, i);
> @@ -4213,7 +4213,7 @@ void copy_extent_buffer(const struct extent_buffer *dst,
> size_t cur;
> size_t offset;
> char *kaddr;
> - unsigned long i = get_eb_folio_index(dst, dst_offset);
> + unsigned long i;
>
> if (check_eb_range(dst, dst_offset, len) ||
> check_eb_range(src, src_offset, len))
> @@ -4223,6 +4223,7 @@ void copy_extent_buffer(const struct extent_buffer *dst,
>
> offset = get_eb_offset_in_folio(dst, dst_offset);
>
> + i = get_eb_folio_index(dst, dst_offset);
> while (len > 0) {
> assert_eb_folio_uptodate(dst, i);
>
> --
> 2.51.1
>
* Re: [PATCH 04/12] btrfs: read eb folio index right before loops
2026-01-07 22:01 ` Boris Burkov
@ 2026-01-08 21:04 ` David Sterba
0 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-08 21:04 UTC (permalink / raw)
To: Boris Burkov; +Cc: David Sterba, linux-btrfs
On Wed, Jan 07, 2026 at 02:01:11PM -0800, Boris Burkov wrote:
> On Tue, Jan 06, 2026 at 05:20:27PM +0100, David Sterba wrote:
> > There are generic helpers to access extent buffer folio data of any
> > length, potentially iterating over a few folios. This is the slow path;
> > either we use the type-based accessors, or the eb folio allocation is
> > contiguous and we can use the memcpy/memcmp helpers.
> >
> > The initialization of 'i' is done at the beginning though it may not be
> > needed. Move it right before the folio loop; this has a minor effect on
> > the generated code in __write_extent_buffer().
>
> This seems fine, but also pointless. One right shift that the compiler
> *can* move is causing us real pain?
The compiler can move it but it does not, probably because there's a
memory access. I'm not saying it's causing pain, but that there's a fast
path when eb->addr is valid, in which case get_eb_folio_index() is
simply not done. The fast path will also apply to large folios, as that
means the address range covering the extent buffer pages is contiguous.
> How often are we reading/writing
> extent_buffers?
For metadata-heavy operations, all the time, often enough that extent
buffer access shows up at the top of perf top profiles. Not the function
read_extent_buffer() itself, as it's for unbounded-length reads from the
extent buffer, but otherwise it's accessing the shared structures.
> Should I be constantly worried about any arithmetic I
> ever do a hair early just to make code easier to read?
No, you don't have to worry about that (too much). I assume everybody
writing C is somewhat aware of the various low-level effects of the
code, but first it should work, then we go optimize it. Doing separate
optimization passes works better as patterns can be spotted, hot
functions identified, etc. We're almost out of possible algorithmic
improvements in the extent buffer realm, so instruction level it is.
> Anyway, if you do think it's worth it, you can add
>
> Reviewed-by: Boris Burkov <boris@bur.io>
Thanks. Unfortunately I would be bothered by the code left around
because I cannot unsee it. In the long run, chipping away anything that
does not need to be there adds up, so we end up with fast code.
* [PATCH 05/12] btrfs: use common eb range validation in read_extent_buffer_to_user_nofault()
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (3 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 04/12] btrfs: read eb folio index right before loops David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-08 18:32 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 06/12] btrfs: lzo: inline read/write length helpers David Sterba
` (6 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The extent buffer access is checked in other helpers by
check_eb_range(), which validates the requested start and length against
the extent buffer. While this almost never fails, we should still handle
it as an error and not just warn.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/extent_io.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 97cf1ad91e5780..897ea52167c612 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4003,8 +4003,8 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
unsigned long i;
int ret = 0;
- WARN_ON(start > eb->len);
- WARN_ON(start + len > eb->start + eb->len);
+ if (check_eb_range(eb, start, len))
+ return -EINVAL;
if (eb->addr) {
if (copy_to_user_nofault(dstv, eb->addr + start, len))
--
2.51.1
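A minimal sketch of a check_eb_range()-style validation under illustrative assumptions (the helper name and signature here are hypothetical, not the btrfs implementation): reject a start/length pair that falls outside [0, buf_len) and let the caller turn the failure into an error (-EINVAL in the patch) rather than only warning. The subtraction form avoids the overflow that a naive 'start + len > buf_len' test would hit.

```c
#include <stdint.h>

/* Return 0 if [start, start + len) lies within a buffer of buf_len
 * bytes, -1 otherwise.  Testing 'len > buf_len - start' (after
 * checking start <= buf_len) never wraps, unlike 'start + len'. */
static int check_range(uint32_t buf_len, uint32_t start, uint32_t len)
{
	if (start > buf_len || len > buf_len - start)
		return -1;	/* caller maps this to an error code */
	return 0;
}
```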
* Re: [PATCH 05/12] btrfs: use common eb range validation in read_extent_buffer_to_user_nofault()
2026-01-06 16:20 ` [PATCH 05/12] btrfs: use common eb range validation in read_extent_buffer_to_user_nofault() David Sterba
@ 2026-01-08 18:32 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-08 18:32 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:28PM +0100, David Sterba wrote:
> The extent buffer access is checked in other helpers by
> check_eb_range(), which validates the requested start and length against
> the extent buffer. While this almost never fails, we should still handle
> it as an error and not just warn.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
Reviewed-by: Boris Burkov <boris@bur.io>
> ---
> fs/btrfs/extent_io.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 97cf1ad91e5780..897ea52167c612 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -4003,8 +4003,8 @@ int read_extent_buffer_to_user_nofault(const struct extent_buffer *eb,
> unsigned long i;
> int ret = 0;
>
> - WARN_ON(start > eb->len);
> - WARN_ON(start + len > eb->start + eb->len);
> + if (check_eb_range(eb, start, len))
> + return -EINVAL;
>
> if (eb->addr) {
> if (copy_to_user_nofault(dstv, eb->addr + start, len))
> --
> 2.51.1
>
* [PATCH 06/12] btrfs: lzo: inline read/write length helpers
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (4 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 05/12] btrfs: use common eb range validation in read_extent_buffer_to_user_nofault() David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-06 16:20 ` [PATCH 07/12] btrfs: zlib: drop redundant folio address variable David Sterba
` (5 subsequent siblings)
11 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The LZO_LEN read/write helpers are supposed to be trivial, and they
merely duplicate the put/get unaligned helpers, so use those directly.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/lzo.c | 28 ++++++----------------------
1 file changed, 6 insertions(+), 22 deletions(-)
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 4758f66da449c0..e2eeee708c7f90 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -106,22 +106,6 @@ struct list_head *lzo_alloc_workspace(struct btrfs_fs_info *fs_info)
return ERR_PTR(-ENOMEM);
}
-static inline void write_compress_length(char *buf, size_t len)
-{
- __le32 dlen;
-
- dlen = cpu_to_le32(len);
- memcpy(buf, &dlen, LZO_LEN);
-}
-
-static inline size_t read_compress_length(const char *buf)
-{
- __le32 dlen;
-
- memcpy(&dlen, buf, LZO_LEN);
- return le32_to_cpu(dlen);
-}
-
/*
* Will do:
*
@@ -165,7 +149,7 @@ static int copy_compressed_data_to_page(struct btrfs_fs_info *fs_info,
}
kaddr = kmap_local_folio(cur_folio, offset_in_folio(cur_folio, *cur_out));
- write_compress_length(kaddr, compressed_size);
+ put_unaligned_le32(compressed_size, kaddr);
*cur_out += LZO_LEN;
orig_out = *cur_out;
@@ -297,7 +281,7 @@ int lzo_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
/* Store the size of all chunks of compressed data */
sizes_ptr = kmap_local_folio(folios[0], 0);
- write_compress_length(sizes_ptr, cur_out);
+ put_unaligned_le32(cur_out, sizes_ptr);
kunmap_local(sizes_ptr);
ret = 0;
@@ -352,7 +336,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
u32 cur_out = 0;
kaddr = kmap_local_folio(cb->compressed_folios[0], 0);
- len_in = read_compress_length(kaddr);
+ len_in = get_unaligned_le32(kaddr);
kunmap_local(kaddr);
cur_in += LZO_LEN;
@@ -391,7 +375,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
cur_folio = cb->compressed_folios[cur_in >> min_folio_shift];
ASSERT(cur_folio);
kaddr = kmap_local_folio(cur_folio, 0);
- seg_len = read_compress_length(kaddr + offset_in_folio(cur_folio, cur_in));
+ seg_len = get_unaligned_le32(kaddr + offset_in_folio(cur_folio, cur_in));
kunmap_local(kaddr);
cur_in += LZO_LEN;
@@ -461,12 +445,12 @@ int lzo_decompress(struct list_head *ws, const u8 *data_in,
if (unlikely(srclen < LZO_LEN || srclen > max_segment_len + LZO_LEN * 2))
return -EUCLEAN;
- in_len = read_compress_length(data_in);
+ in_len = get_unaligned_le32(data_in);
if (unlikely(in_len != srclen))
return -EUCLEAN;
data_in += LZO_LEN;
- in_len = read_compress_length(data_in);
+ in_len = get_unaligned_le32(data_in);
if (unlikely(in_len != srclen - LZO_LEN * 2)) {
ret = -EUCLEAN;
goto out;
--
2.51.1
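For reference, portable sketches of what the kernel's get_unaligned_le32()/put_unaligned_le32() provide (the function names below are illustrative stand-ins, not the kernel implementations): assembling the bytes explicitly makes the little-endian layout independent of host endianness and of pointer alignment, which is exactly what the removed open-coded wrappers were reimplementing.

```c
#include <stdint.h>

/* Store 'val' at a possibly unaligned address in little-endian order. */
static void put_le32(void *p, uint32_t val)
{
	uint8_t *b = p;

	b[0] = val & 0xff;
	b[1] = (val >> 8) & 0xff;
	b[2] = (val >> 16) & 0xff;
	b[3] = (val >> 24) & 0xff;
}

/* Load a little-endian u32 from a possibly unaligned address. */
static uint32_t get_le32(const void *p)
{
	const uint8_t *b = p;

	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```

On architectures that permit unaligned access, compilers typically collapse both functions to a single load/store.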
* [PATCH 07/12] btrfs: zlib: drop redundant folio address variable
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (5 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 06/12] btrfs: lzo: inline read/write length helpers David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-06 16:20 ` [PATCH 08/12] btrfs: zlib: don't cache sectorsize in a local variable David Sterba
` (4 subsequent siblings)
11 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
We cache the current output folio address, but it's not really
necessary as we store it in the variable only to pass it to the stream
context. We can read the folio address directly.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zlib.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 6caba8be7c845c..d1a680da26ba53 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -155,7 +155,6 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
const u32 min_folio_size = btrfs_min_folio_size(fs_info);
int ret;
char *data_in = NULL;
- char *cfolio_out;
int nr_folios = 0;
struct folio *in_folio = NULL;
struct folio *out_folio = NULL;
@@ -186,13 +185,12 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
ret = -ENOMEM;
goto out;
}
- cfolio_out = folio_address(out_folio);
folios[0] = out_folio;
nr_folios = 1;
workspace->strm.next_in = workspace->buf;
workspace->strm.avail_in = 0;
- workspace->strm.next_out = cfolio_out;
+ workspace->strm.next_out = folio_address(out_folio);
workspace->strm.avail_out = min_folio_size;
while (workspace->strm.total_in < len) {
@@ -270,11 +268,10 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
ret = -ENOMEM;
goto out;
}
- cfolio_out = folio_address(out_folio);
folios[nr_folios] = out_folio;
nr_folios++;
workspace->strm.avail_out = min_folio_size;
- workspace->strm.next_out = cfolio_out;
+ workspace->strm.next_out = folio_address(out_folio);
}
/* we're all done */
if (workspace->strm.total_in >= len)
@@ -306,11 +303,10 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
ret = -ENOMEM;
goto out;
}
- cfolio_out = folio_address(out_folio);
folios[nr_folios] = out_folio;
nr_folios++;
workspace->strm.avail_out = min_folio_size;
- workspace->strm.next_out = cfolio_out;
+ workspace->strm.next_out = folio_address(out_folio);
}
}
zlib_deflateEnd(&workspace->strm);
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 08/12] btrfs: zlib: don't cache sectorsize in a local variable
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (6 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 07/12] btrfs: zlib: drop redundant folio address variable David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-06 16:20 ` [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios() David Sterba
` (3 subsequent siblings)
11 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The sectorsize is used once or at most twice in the callbacks, so there
is no need to cache it on the stack.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zlib.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index d1a680da26ba53..bb4a9f70714682 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -71,7 +71,6 @@ static bool need_special_buffer(struct btrfs_fs_info *fs_info)
struct list_head *zlib_alloc_workspace(struct btrfs_fs_info *fs_info, unsigned int level)
{
- const u32 blocksize = fs_info->sectorsize;
struct workspace *workspace;
int workspacesize;
@@ -91,8 +90,8 @@ struct list_head *zlib_alloc_workspace(struct btrfs_fs_info *fs_info, unsigned i
workspace->buf_size = ZLIB_DFLTCC_BUF_SIZE;
}
if (!workspace->buf) {
- workspace->buf = kmalloc(blocksize, GFP_KERNEL);
- workspace->buf_size = blocksize;
+ workspace->buf = kmalloc(fs_info->sectorsize, GFP_KERNEL);
+ workspace->buf_size = fs_info->sectorsize;
}
if (!workspace->strm.workspace || !workspace->buf)
goto fail;
@@ -161,7 +160,6 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
unsigned long len = *total_out;
unsigned long nr_dest_folios = *out_folios;
const unsigned long max_out = nr_dest_folios << min_folio_shift;
- const u32 blocksize = fs_info->sectorsize;
const u64 orig_end = start + len;
*out_folios = 0;
@@ -248,7 +246,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
}
/* we're making it bigger, give up */
- if (workspace->strm.total_in > blocksize * 2 &&
+ if (workspace->strm.total_in > fs_info->sectorsize * 2 &&
workspace->strm.total_in <
workspace->strm.total_out) {
ret = -E2BIG;
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios()
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (7 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 08/12] btrfs: zlib: don't cache sectorsize in a local variable David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-08 18:37 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 10/12] btrfs: zstd: reuse total in and out parameters for calculations David Sterba
` (2 subsequent siblings)
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The value of *out_folios does not change and nr_dest_folios is only a
local copy, we can remove it. This saves 8 bytes of stack.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zlib.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index bb4a9f70714682..fa35513267ae42 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -158,8 +158,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
struct folio *in_folio = NULL;
struct folio *out_folio = NULL;
unsigned long len = *total_out;
- unsigned long nr_dest_folios = *out_folios;
- const unsigned long max_out = nr_dest_folios << min_folio_shift;
+ const unsigned long max_out = *out_folios << min_folio_shift;
const u64 orig_end = start + len;
*out_folios = 0;
@@ -257,7 +256,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
* the stream end if required
*/
if (workspace->strm.avail_out == 0) {
- if (nr_folios == nr_dest_folios) {
+ if (nr_folios == *out_folios) {
ret = -E2BIG;
goto out;
}
@@ -292,7 +291,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
goto out;
} else if (workspace->strm.avail_out == 0) {
/* Get another folio for the stream end. */
- if (nr_folios == nr_dest_folios) {
+ if (nr_folios == *out_folios) {
ret = -E2BIG;
goto out;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios()
2026-01-06 16:20 ` [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios() David Sterba
@ 2026-01-08 18:37 ` Boris Burkov
2026-01-08 21:14 ` David Sterba
0 siblings, 1 reply; 25+ messages in thread
From: Boris Burkov @ 2026-01-08 18:37 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:32PM +0100, David Sterba wrote:
> The value of *out_folios does not change and nr_dest_folios is only a
> local copy, we can remove it. This saves 8 bytes of stack.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/zlib.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
> index bb4a9f70714682..fa35513267ae42 100644
> --- a/fs/btrfs/zlib.c
> +++ b/fs/btrfs/zlib.c
> @@ -158,8 +158,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> struct folio *in_folio = NULL;
> struct folio *out_folio = NULL;
> unsigned long len = *total_out;
> - unsigned long nr_dest_folios = *out_folios;
> - const unsigned long max_out = nr_dest_folios << min_folio_shift;
> + const unsigned long max_out = *out_folios << min_folio_shift;
> const u64 orig_end = start + len;
>
> *out_folios = 0;
I may be missing something, but it looks like it does change here?
Then it only gets set to nr_folios at the out: label. So in the other
two uses of nr_dest_folios, *out_folios is wrong.
> @@ -257,7 +256,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> * the stream end if required
> */
> if (workspace->strm.avail_out == 0) {
> - if (nr_folios == nr_dest_folios) {
> + if (nr_folios == *out_folios) {
> ret = -E2BIG;
> goto out;
> }
> @@ -292,7 +291,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> goto out;
> } else if (workspace->strm.avail_out == 0) {
> /* Get another folio for the stream end. */
> - if (nr_folios == nr_dest_folios) {
> + if (nr_folios == *out_folios) {
> ret = -E2BIG;
> goto out;
> }
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios()
2026-01-08 18:37 ` Boris Burkov
@ 2026-01-08 21:14 ` David Sterba
0 siblings, 0 replies; 25+ messages in thread
From: David Sterba @ 2026-01-08 21:14 UTC (permalink / raw)
To: Boris Burkov; +Cc: David Sterba, linux-btrfs
On Thu, Jan 08, 2026 at 10:37:25AM -0800, Boris Burkov wrote:
> On Tue, Jan 06, 2026 at 05:20:32PM +0100, David Sterba wrote:
> > The value of *out_folios does not change and nr_dest_folios is only a
> > local copy, we can remove it. This saves 8 bytes of stack.
> >
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> > fs/btrfs/zlib.c | 7 +++----
> > 1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
> > index bb4a9f70714682..fa35513267ae42 100644
> > --- a/fs/btrfs/zlib.c
> > +++ b/fs/btrfs/zlib.c
> > @@ -158,8 +158,7 @@ int zlib_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> > struct folio *in_folio = NULL;
> > struct folio *out_folio = NULL;
> > unsigned long len = *total_out;
> > - unsigned long nr_dest_folios = *out_folios;
> > - const unsigned long max_out = nr_dest_folios << min_folio_shift;
> > + const unsigned long max_out = *out_folios << min_folio_shift;
> > const u64 orig_end = start + len;
> >
> > *out_folios = 0;
>
> I may be missing something, but it looks like it does change here?
> Then it only gets set to nr_folios at the out: label. So in the other
> two uses of nr_dest_folios, *out_folios is wrong.
You're right, the "*out_folios = 0" would have to be removed as well.
The wording was incorrect; what I meant is that it was not changed until
the out label, as you say. The nr_folios and valid entries in folios are
always in sync (regarding the code flow) so removing the out_folios
initialization should be sufficient.
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 10/12] btrfs: zstd: reuse total in and out parameters for calculations
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (8 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 09/12] btrfs: zlib: remove local variable nr_dest_folios in zlib_compress_folios() David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-08 18:42 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 11/12] btrfs: zstd: don't cache sectorsize in a local variable David Sterba
2026-01-06 16:20 ` [PATCH 12/12] btrfs: zstd: remove local variable nr_dest_folios in zstd_compress_folios() David Sterba
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
Reduce the stack consumption which is 240 bytes on release config by 16
bytes. The local variables are not adding anything on top of the
parameters. As a calling convention if the compression helper returns an
error the parameters are considered invalid.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zstd.c | 26 +++++++++++---------------
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index c9cddcfa337b91..4edc5f6f63a110 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -408,8 +408,6 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
int nr_folios = 0;
struct folio *in_folio = NULL; /* The current folio to read. */
struct folio *out_folio = NULL; /* The current folio to write to. */
- unsigned long tot_in = 0;
- unsigned long tot_out = 0;
unsigned long len = *total_out;
const unsigned long nr_dest_folios = *out_folios;
const u64 orig_end = start + len;
@@ -471,23 +469,23 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
}
/* Check to see if we are making it bigger */
- if (tot_in + workspace->in_buf.pos > blocksize * 2 &&
- tot_in + workspace->in_buf.pos <
- tot_out + workspace->out_buf.pos) {
+ if (*total_in + workspace->in_buf.pos > blocksize * 2 &&
+ *total_in + workspace->in_buf.pos <
+ *total_out + workspace->out_buf.pos) {
ret = -E2BIG;
goto out;
}
/* We've reached the end of our output range */
if (workspace->out_buf.pos >= max_out) {
- tot_out += workspace->out_buf.pos;
+ *total_out += workspace->out_buf.pos;
ret = -E2BIG;
goto out;
}
/* Check if we need more output space */
if (workspace->out_buf.pos == workspace->out_buf.size) {
- tot_out += min_folio_size;
+ *total_out += min_folio_size;
max_out -= min_folio_size;
if (nr_folios == nr_dest_folios) {
ret = -E2BIG;
@@ -506,13 +504,13 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
/* We've reached the end of the input */
if (workspace->in_buf.pos >= len) {
- tot_in += workspace->in_buf.pos;
+ *total_in += workspace->in_buf.pos;
break;
}
/* Check if we need more input */
if (workspace->in_buf.pos == workspace->in_buf.size) {
- tot_in += workspace->in_buf.size;
+ *total_in += workspace->in_buf.size;
kunmap_local(workspace->in_buf.src);
workspace->in_buf.src = NULL;
folio_put(in_folio);
@@ -542,16 +540,16 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
goto out;
}
if (ret2 == 0) {
- tot_out += workspace->out_buf.pos;
+ *total_out += workspace->out_buf.pos;
break;
}
if (workspace->out_buf.pos >= max_out) {
- tot_out += workspace->out_buf.pos;
+ *total_out += workspace->out_buf.pos;
ret = -E2BIG;
goto out;
}
- tot_out += min_folio_size;
+ *total_out += min_folio_size;
max_out -= min_folio_size;
if (nr_folios == nr_dest_folios) {
ret = -E2BIG;
@@ -568,14 +566,12 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
workspace->out_buf.size = min_t(size_t, max_out, min_folio_size);
}
- if (tot_out >= tot_in) {
+ if (*total_out >= *total_in) {
ret = -E2BIG;
goto out;
}
ret = 0;
- *total_in = tot_in;
- *total_out = tot_out;
out:
*out_folios = nr_folios;
if (workspace->in_buf.src) {
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 10/12] btrfs: zstd: reuse total in and out parameters for calculations
2026-01-06 16:20 ` [PATCH 10/12] btrfs: zstd: reuse total in and out parameters for calculations David Sterba
@ 2026-01-08 18:42 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-08 18:42 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:33PM +0100, David Sterba wrote:
> Reduce the stack consumption which is 240 bytes on release config by 16
> bytes. The local variables are not adding anything on top of the
> parameters. As a calling convention if the compression helper returns an
> error the parameters are considered invalid.
I don't think that's technically true, as btrfs_compress_folios() does
an ASSERT(*total_in < orig_len) unconditionally.
However, I don't think it makes your change invalid.
Reviewed-by: Boris Burkov <boris@bur.io>
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/zstd.c | 26 +++++++++++---------------
> 1 file changed, 11 insertions(+), 15 deletions(-)
>
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index c9cddcfa337b91..4edc5f6f63a110 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -408,8 +408,6 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> int nr_folios = 0;
> struct folio *in_folio = NULL; /* The current folio to read. */
> struct folio *out_folio = NULL; /* The current folio to write to. */
> - unsigned long tot_in = 0;
> - unsigned long tot_out = 0;
> unsigned long len = *total_out;
> const unsigned long nr_dest_folios = *out_folios;
> const u64 orig_end = start + len;
> @@ -471,23 +469,23 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> }
>
> /* Check to see if we are making it bigger */
> - if (tot_in + workspace->in_buf.pos > blocksize * 2 &&
> - tot_in + workspace->in_buf.pos <
> - tot_out + workspace->out_buf.pos) {
> + if (*total_in + workspace->in_buf.pos > blocksize * 2 &&
> + *total_in + workspace->in_buf.pos <
> + *total_out + workspace->out_buf.pos) {
> ret = -E2BIG;
> goto out;
> }
>
> /* We've reached the end of our output range */
> if (workspace->out_buf.pos >= max_out) {
> - tot_out += workspace->out_buf.pos;
> + *total_out += workspace->out_buf.pos;
> ret = -E2BIG;
> goto out;
> }
>
> /* Check if we need more output space */
> if (workspace->out_buf.pos == workspace->out_buf.size) {
> - tot_out += min_folio_size;
> + *total_out += min_folio_size;
> max_out -= min_folio_size;
> if (nr_folios == nr_dest_folios) {
> ret = -E2BIG;
> @@ -506,13 +504,13 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
>
> /* We've reached the end of the input */
> if (workspace->in_buf.pos >= len) {
> - tot_in += workspace->in_buf.pos;
> + *total_in += workspace->in_buf.pos;
> break;
> }
>
> /* Check if we need more input */
> if (workspace->in_buf.pos == workspace->in_buf.size) {
> - tot_in += workspace->in_buf.size;
> + *total_in += workspace->in_buf.size;
> kunmap_local(workspace->in_buf.src);
> workspace->in_buf.src = NULL;
> folio_put(in_folio);
> @@ -542,16 +540,16 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> goto out;
> }
> if (ret2 == 0) {
> - tot_out += workspace->out_buf.pos;
> + *total_out += workspace->out_buf.pos;
> break;
> }
> if (workspace->out_buf.pos >= max_out) {
> - tot_out += workspace->out_buf.pos;
> + *total_out += workspace->out_buf.pos;
> ret = -E2BIG;
> goto out;
> }
>
> - tot_out += min_folio_size;
> + *total_out += min_folio_size;
> max_out -= min_folio_size;
> if (nr_folios == nr_dest_folios) {
> ret = -E2BIG;
> @@ -568,14 +566,12 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> workspace->out_buf.size = min_t(size_t, max_out, min_folio_size);
> }
>
> - if (tot_out >= tot_in) {
> + if (*total_out >= *total_in) {
> ret = -E2BIG;
> goto out;
> }
>
> ret = 0;
> - *total_in = tot_in;
> - *total_out = tot_out;
> out:
> *out_folios = nr_folios;
> if (workspace->in_buf.src) {
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 11/12] btrfs: zstd: don't cache sectorsize in a local variable
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (9 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 10/12] btrfs: zstd: reuse total in and out parameters for calculations David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-08 18:53 ` Boris Burkov
2026-01-06 16:20 ` [PATCH 12/12] btrfs: zstd: remove local variable nr_dest_folios in zstd_compress_folios() David Sterba
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The sectorsize is used once or at most twice in the callbacks, no need
to cache it on stack. Minor effect on zstd_compress_folios() where it
saves 8 bytes of stack.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zstd.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 4edc5f6f63a110..75294302fe0530 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -370,7 +370,6 @@ void zstd_free_workspace(struct list_head *ws)
struct list_head *zstd_alloc_workspace(struct btrfs_fs_info *fs_info, int level)
{
- const u32 blocksize = fs_info->sectorsize;
struct workspace *workspace;
workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
@@ -383,7 +382,7 @@ struct list_head *zstd_alloc_workspace(struct btrfs_fs_info *fs_info, int level)
workspace->req_level = level;
workspace->last_used = jiffies;
workspace->mem = kvmalloc(workspace->size, GFP_KERNEL | __GFP_NOWARN);
- workspace->buf = kmalloc(blocksize, GFP_KERNEL);
+ workspace->buf = kmalloc(fs_info->sectorsize, GFP_KERNEL);
if (!workspace->mem || !workspace->buf)
goto fail;
@@ -411,7 +410,6 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
unsigned long len = *total_out;
const unsigned long nr_dest_folios = *out_folios;
const u64 orig_end = start + len;
- const u32 blocksize = fs_info->sectorsize;
const u32 min_folio_size = btrfs_min_folio_size(fs_info);
unsigned long max_out = nr_dest_folios * min_folio_size;
unsigned int cur_len;
@@ -469,7 +467,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
}
/* Check to see if we are making it bigger */
- if (*total_in + workspace->in_buf.pos > blocksize * 2 &&
+ if (*total_in + workspace->in_buf.pos > fs_info->sectorsize * 2 &&
*total_in + workspace->in_buf.pos <
*total_out + workspace->out_buf.pos) {
ret = -E2BIG;
@@ -589,7 +587,6 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
size_t srclen = cb->compressed_len;
zstd_dstream *stream;
int ret = 0;
- const u32 blocksize = fs_info->sectorsize;
const unsigned int min_folio_size = btrfs_min_folio_size(fs_info);
unsigned long folio_in_index = 0;
unsigned long total_folios_in = DIV_ROUND_UP(srclen, min_folio_size);
@@ -614,7 +611,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
workspace->out_buf.dst = workspace->buf;
workspace->out_buf.pos = 0;
- workspace->out_buf.size = blocksize;
+ workspace->out_buf.size = fs_info->sectorsize;
while (1) {
size_t ret2;
@@ -675,7 +672,6 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in,
{
struct workspace *workspace = list_entry(ws, struct workspace, list);
struct btrfs_fs_info *fs_info = btrfs_sb(folio_inode(dest_folio)->i_sb);
- const u32 sectorsize = fs_info->sectorsize;
zstd_dstream *stream;
int ret = 0;
unsigned long to_copy = 0;
@@ -699,7 +695,7 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in,
workspace->out_buf.dst = workspace->buf;
workspace->out_buf.pos = 0;
- workspace->out_buf.size = sectorsize;
+ workspace->out_buf.size = fs_info->sectorsize;
/*
* Since both input and output buffers should not exceed one sector,
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 11/12] btrfs: zstd: don't cache sectorsize in a local variable
2026-01-06 16:20 ` [PATCH 11/12] btrfs: zstd: don't cache sectorsize in a local variable David Sterba
@ 2026-01-08 18:53 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-08 18:53 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:34PM +0100, David Sterba wrote:
> The sectorsize is used once or at most twice in the callbacks, no need
> to cache it on stack. Minor effect on zstd_compress_folios() where it
> saves 8 bytes of stack.
>
Reviewed-by: Boris Burkov <boris@bur.io>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/zstd.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index 4edc5f6f63a110..75294302fe0530 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -370,7 +370,6 @@ void zstd_free_workspace(struct list_head *ws)
>
> struct list_head *zstd_alloc_workspace(struct btrfs_fs_info *fs_info, int level)
> {
> - const u32 blocksize = fs_info->sectorsize;
> struct workspace *workspace;
>
> workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
> @@ -383,7 +382,7 @@ struct list_head *zstd_alloc_workspace(struct btrfs_fs_info *fs_info, int level)
> workspace->req_level = level;
> workspace->last_used = jiffies;
> workspace->mem = kvmalloc(workspace->size, GFP_KERNEL | __GFP_NOWARN);
> - workspace->buf = kmalloc(blocksize, GFP_KERNEL);
> + workspace->buf = kmalloc(fs_info->sectorsize, GFP_KERNEL);
> if (!workspace->mem || !workspace->buf)
> goto fail;
>
> @@ -411,7 +410,6 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> unsigned long len = *total_out;
> const unsigned long nr_dest_folios = *out_folios;
> const u64 orig_end = start + len;
> - const u32 blocksize = fs_info->sectorsize;
> const u32 min_folio_size = btrfs_min_folio_size(fs_info);
> unsigned long max_out = nr_dest_folios * min_folio_size;
> unsigned int cur_len;
> @@ -469,7 +467,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> }
>
> /* Check to see if we are making it bigger */
> - if (*total_in + workspace->in_buf.pos > blocksize * 2 &&
> + if (*total_in + workspace->in_buf.pos > fs_info->sectorsize * 2 &&
> *total_in + workspace->in_buf.pos <
> *total_out + workspace->out_buf.pos) {
> ret = -E2BIG;
> @@ -589,7 +587,6 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
> size_t srclen = cb->compressed_len;
> zstd_dstream *stream;
> int ret = 0;
> - const u32 blocksize = fs_info->sectorsize;
> const unsigned int min_folio_size = btrfs_min_folio_size(fs_info);
> unsigned long folio_in_index = 0;
> unsigned long total_folios_in = DIV_ROUND_UP(srclen, min_folio_size);
> @@ -614,7 +611,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
>
> workspace->out_buf.dst = workspace->buf;
> workspace->out_buf.pos = 0;
> - workspace->out_buf.size = blocksize;
> + workspace->out_buf.size = fs_info->sectorsize;
>
> while (1) {
> size_t ret2;
> @@ -675,7 +672,6 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in,
> {
> struct workspace *workspace = list_entry(ws, struct workspace, list);
> struct btrfs_fs_info *fs_info = btrfs_sb(folio_inode(dest_folio)->i_sb);
> - const u32 sectorsize = fs_info->sectorsize;
> zstd_dstream *stream;
> int ret = 0;
> unsigned long to_copy = 0;
> @@ -699,7 +695,7 @@ int zstd_decompress(struct list_head *ws, const u8 *data_in,
>
> workspace->out_buf.dst = workspace->buf;
> workspace->out_buf.pos = 0;
> - workspace->out_buf.size = sectorsize;
> + workspace->out_buf.size = fs_info->sectorsize;
>
> /*
> * Since both input and output buffers should not exceed one sector,
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 12/12] btrfs: zstd: remove local variable nr_dest_folios in zstd_compress_folios()
2026-01-06 16:20 [PATCH 00/12] Short cleanups David Sterba
` (10 preceding siblings ...)
2026-01-06 16:20 ` [PATCH 11/12] btrfs: zstd: don't cache sectorsize in a local variable David Sterba
@ 2026-01-06 16:20 ` David Sterba
2026-01-08 18:44 ` Boris Burkov
11 siblings, 1 reply; 25+ messages in thread
From: David Sterba @ 2026-01-06 16:20 UTC (permalink / raw)
To: linux-btrfs; +Cc: David Sterba
The value of *out_folios does not change and nr_dest_folios is only a
local copy, we can remove it. This saves 8 bytes of stack.
Signed-off-by: David Sterba <dsterba@suse.com>
---
fs/btrfs/zstd.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 75294302fe0530..40cc2a479be63e 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -408,10 +408,9 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
struct folio *in_folio = NULL; /* The current folio to read. */
struct folio *out_folio = NULL; /* The current folio to write to. */
unsigned long len = *total_out;
- const unsigned long nr_dest_folios = *out_folios;
const u64 orig_end = start + len;
const u32 min_folio_size = btrfs_min_folio_size(fs_info);
- unsigned long max_out = nr_dest_folios * min_folio_size;
+ unsigned long max_out = *out_folios * min_folio_size;
unsigned int cur_len;
workspace->params = zstd_get_btrfs_parameters(workspace->req_level, len);
@@ -485,7 +484,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
if (workspace->out_buf.pos == workspace->out_buf.size) {
*total_out += min_folio_size;
max_out -= min_folio_size;
- if (nr_folios == nr_dest_folios) {
+ if (nr_folios == *out_folios) {
ret = -E2BIG;
goto out;
}
@@ -549,7 +548,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
*total_out += min_folio_size;
max_out -= min_folio_size;
- if (nr_folios == nr_dest_folios) {
+ if (nr_folios == *out_folios) {
ret = -E2BIG;
goto out;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 12/12] btrfs: zstd: remove local variable nr_dest_folios in zstd_compress_folios()
2026-01-06 16:20 ` [PATCH 12/12] btrfs: zstd: remove local variable nr_dest_folios in zstd_compress_folios() David Sterba
@ 2026-01-08 18:44 ` Boris Burkov
0 siblings, 0 replies; 25+ messages in thread
From: Boris Burkov @ 2026-01-08 18:44 UTC (permalink / raw)
To: David Sterba; +Cc: linux-btrfs
On Tue, Jan 06, 2026 at 05:20:35PM +0100, David Sterba wrote:
> The value of *out_folios does not change and nr_dest_folios is only a
> local copy, we can remove it. This saves 8 bytes of stack.
To my eye, this one has the same bug as the zlib one.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
> fs/btrfs/zstd.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index 75294302fe0530..40cc2a479be63e 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -408,10 +408,9 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> struct folio *in_folio = NULL; /* The current folio to read. */
> struct folio *out_folio = NULL; /* The current folio to write to. */
> unsigned long len = *total_out;
> - const unsigned long nr_dest_folios = *out_folios;
> const u64 orig_end = start + len;
> const u32 min_folio_size = btrfs_min_folio_size(fs_info);
> - unsigned long max_out = nr_dest_folios * min_folio_size;
> + unsigned long max_out = *out_folios * min_folio_size;
> unsigned int cur_len;
>
> workspace->params = zstd_get_btrfs_parameters(workspace->req_level, len);
> @@ -485,7 +484,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
> if (workspace->out_buf.pos == workspace->out_buf.size) {
> *total_out += min_folio_size;
> max_out -= min_folio_size;
> - if (nr_folios == nr_dest_folios) {
> + if (nr_folios == *out_folios) {
> ret = -E2BIG;
> goto out;
> }
> @@ -549,7 +548,7 @@ int zstd_compress_folios(struct list_head *ws, struct btrfs_inode *inode,
>
> *total_out += min_folio_size;
> max_out -= min_folio_size;
> - if (nr_folios == nr_dest_folios) {
> + if (nr_folios == *out_folios) {
> ret = -E2BIG;
> goto out;
> }
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread