* [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
@ 2026-02-02 13:27 Ojaswin Mujoo
2026-02-03 7:46 ` Disha Goel
2026-02-08 17:54 ` Zorro Lang
0 siblings, 2 replies; 7+ messages in thread
From: Ojaswin Mujoo @ 2026-02-02 13:27 UTC (permalink / raw)
To: Zorro Lang, fstests; +Cc: Disha Goel
Hard-coding the donor size to 250M causes the following failure in e4compact:

e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
./common/rc: line 4616: 1583182 Aborted (core dumped)

The issue is that e4compact asserts that the stat.st_size of all files
combined shouldn't be more than that of the donor file. With a 64k
block size, fsstress often creates sparse files whose size is >3G even
though their disk utilization is <100M. Since the donor file is not big
enough, this trips the assertion in e4compact, causing the failure.

Fix this by dynamically calculating the donor file size based on the
sizes of all the files. Also, make some changes to avoid future ENOSPC,
like reducing the -n passed to fsstress to keep the size around 2G and
making sure we use the whole SCRATCH disk instead of 500M.

While we are at it, add some blank lines to make the code a bit more
readable.
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
---
tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
1 file changed, 37 insertions(+), 4 deletions(-)
diff --git a/tests/ext4/307 b/tests/ext4/307
index 1f0e42ca..7e7fb9c8 100755
--- a/tests/ext4/307
+++ b/tests/ext4/307
@@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
# Import common functions.
. ./common/filter
. ./common/defrag
+
+# Sum the stat.st_size of each file to determine the size of the
+# donor file. We use st_size here instead of blocks used because
+# that's what e4compact.c checks against.
+get_bytes_used() {
+ local filelist="$1"
+ local total=0
+ local blksz=$(_get_block_size $SCRATCH_MNT)
+
+ while IFS= read -r f; do
+ echo -n "File: $f: " >> $seqres.full
+ [[ -z "$f" ]] && continue
+
+ if [[ -f "$f" ]]; then
+ local bytes
+ bytes=$(stat -c %s "$f")
+ total=$((total + bytes))
+ echo $bytes >> $seqres.full
+ fi
+ done < "$filelist"
+
+ echo "Total bytes: $total" >> $seqres.full
+ echo $total
+}
+
# Disable all sync operations to get higher load
FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
_workout()
{
+ local blksz=$(_get_block_size $SCRATCH_MNT)
+
echo ""
echo "Run fsstress"
out=$SCRATCH_MNT/fsstress.$$
- args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
+ args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
echo "fsstress $args" >> $seqres.full
_run_fsstress $args
+
find $out -type f > $out.list
cat $out.list | xargs md5sum > $out.md5sum
- usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
+
+ bytes=`get_bytes_used $out.list`
+ echo "Total bytes used: $bytes" >> $seqres.full
+
echo "Allocate donor file"
- $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
+ $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
+
echo "Perform compacting"
cat $out.list | run_check $here/src/e4compact \
-i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
+
echo "Check data"
run_check md5sum -c $out.md5sum
}
@@ -41,7 +74,7 @@ _require_scratch
_require_defrag
_require_xfs_io_command "falloc"
-_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
+_scratch_mkfs >> $seqres.full 2>&1
_scratch_mount
_workout
--
2.52.0
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-02 13:27 [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs Ojaswin Mujoo
@ 2026-02-03 7:46 ` Disha Goel
2026-02-08 17:54 ` Zorro Lang
1 sibling, 0 replies; 7+ messages in thread
From: Disha Goel @ 2026-02-03 7:46 UTC (permalink / raw)
To: Ojaswin Mujoo, Zorro Lang, fstests
On 02/02/26 6:57 pm, Ojaswin Mujoo wrote:
> Hard-coding the donor size to 250M causes the following failure in e4compact:
>
> e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> ./common/rc: line 4616: 1583182 Aborted (core dumped)
>
> The issue is that e4compact asserts that the stat.st_size of all files
> combined shouldn't be more than that of the donor file. With a 64k
> block size, fsstress often creates sparse files whose size is >3G even
> though their disk utilization is <100M. Since the donor file is not big
> enough, this trips the assertion in e4compact, causing the failure.
>
> Fix this by dynamically calculating the donor file size based on the
> sizes of all the files. Also, make some changes to avoid future ENOSPC,
> like reducing the -n passed to fsstress to keep the size around 2G and
> making sure we use the whole SCRATCH disk instead of 500M.
>
> While we are at it, add some blank lines to make the code a bit more
> readable.
>
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Thanks for the patch Ojaswin. Tested successfully on my setup.
Tested-by: Disha Goel <disgoel@linux.ibm.com>
> ---
> tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 37 insertions(+), 4 deletions(-)
>
> diff --git a/tests/ext4/307 b/tests/ext4/307
> index 1f0e42ca..7e7fb9c8 100755
> --- a/tests/ext4/307
> +++ b/tests/ext4/307
> @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> # Import common functions.
> . ./common/filter
> . ./common/defrag
> +
> +# Sum the stat.st_size of each file to determine the size of the
> +# donor file. We use st_size here instead of blocks used because
> +# that's what e4compact.c checks against.
> +get_bytes_used() {
> + local filelist="$1"
> + local total=0
> + local blksz=$(_get_block_size $SCRATCH_MNT)
> +
> + while IFS= read -r f; do
> + echo -n "File: $f: " >> $seqres.full
> + [[ -z "$f" ]] && continue
> +
> + if [[ -f "$f" ]]; then
> + local bytes
> + bytes=$(stat -c %s "$f")
> + total=$((total + bytes))
> + echo $bytes >> $seqres.full
> + fi
> + done < "$filelist"
> +
> + echo "Total bytes: $total" >> $seqres.full
> + echo $total
> +}
> +
> # Disable all sync operations to get higher load
> FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> _workout()
> {
> + local blksz=$(_get_block_size $SCRATCH_MNT)
> +
> echo ""
> echo "Run fsstress"
> out=$SCRATCH_MNT/fsstress.$$
> - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> echo "fsstress $args" >> $seqres.full
> _run_fsstress $args
> +
> find $out -type f > $out.list
> cat $out.list | xargs md5sum > $out.md5sum
> - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> +
> + bytes=`get_bytes_used $out.list`
> + echo "Total bytes used: $bytes" >> $seqres.full
> +
> echo "Allocate donor file"
> - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> +
> echo "Perform compacting"
> cat $out.list | run_check $here/src/e4compact \
> -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> +
> echo "Check data"
> run_check md5sum -c $out.md5sum
> }
> @@ -41,7 +74,7 @@ _require_scratch
> _require_defrag
> _require_xfs_io_command "falloc"
>
> -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> +_scratch_mkfs >> $seqres.full 2>&1
> _scratch_mount
>
> _workout
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-02 13:27 [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs Ojaswin Mujoo
2026-02-03 7:46 ` Disha Goel
@ 2026-02-08 17:54 ` Zorro Lang
2026-02-12 8:52 ` Ojaswin Mujoo
2026-02-24 9:45 ` Ojaswin Mujoo
1 sibling, 2 replies; 7+ messages in thread
From: Zorro Lang @ 2026-02-08 17:54 UTC (permalink / raw)
To: Ojaswin Mujoo; +Cc: fstests, Disha Goel
On Mon, Feb 02, 2026 at 06:57:30PM +0530, Ojaswin Mujoo wrote:
> Hard-coding the donor size to 250M causes the following failure in e4compact:
>
> e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> ./common/rc: line 4616: 1583182 Aborted (core dumped)
>
> The issue is that e4compact asserts that the stat.st_size of all files
> combined shouldn't be more than that of the donor file. With a 64k
> block size, fsstress often creates sparse files whose size is >3G even
> though their disk utilization is <100M. Since the donor file is not big
> enough, this trips the assertion in e4compact, causing the failure.
>
> Fix this by dynamically calculating the donor file size based on the
> sizes of all the files. Also, make some changes to avoid future ENOSPC,
> like reducing the -n passed to fsstress to keep the size around 2G and
> making sure we use the whole SCRATCH disk instead of 500M.
>
> While we are at it, add some blank lines to make the code a bit more
> readable.
>
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> ---
Hi,
Thanks for your patch, but it conflicts with v2026.01.27:

42c2ccaf ("ext4/307: allocate donor file size dynamically")

Both of you were trying to fix the donor file size issue. Please confirm
whether that commit helps you. If you still need further changes, please
rebase your patch on v2026.01.27.
Thanks,
Zorro
> tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 37 insertions(+), 4 deletions(-)
>
> diff --git a/tests/ext4/307 b/tests/ext4/307
> index 1f0e42ca..7e7fb9c8 100755
> --- a/tests/ext4/307
> +++ b/tests/ext4/307
> @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> # Import common functions.
> . ./common/filter
> . ./common/defrag
> +
> +# Sum the stat.st_size of each file to determine the size of the
> +# donor file. We use st_size here instead of blocks used because
> +# that's what e4compact.c checks against.
> +get_bytes_used() {
> + local filelist="$1"
> + local total=0
> + local blksz=$(_get_block_size $SCRATCH_MNT)
> +
> + while IFS= read -r f; do
> + echo -n "File: $f: " >> $seqres.full
> + [[ -z "$f" ]] && continue
> +
> + if [[ -f "$f" ]]; then
> + local bytes
> + bytes=$(stat -c %s "$f")
> + total=$((total + bytes))
> + echo $bytes >> $seqres.full
> + fi
> + done < "$filelist"
> +
> + echo "Total bytes: $total" >> $seqres.full
> + echo $total
> +}
> +
> # Disable all sync operations to get higher load
> FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> _workout()
> {
> + local blksz=$(_get_block_size $SCRATCH_MNT)
> +
> echo ""
> echo "Run fsstress"
> out=$SCRATCH_MNT/fsstress.$$
> - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> echo "fsstress $args" >> $seqres.full
> _run_fsstress $args
> +
> find $out -type f > $out.list
> cat $out.list | xargs md5sum > $out.md5sum
> - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> +
> + bytes=`get_bytes_used $out.list`
> + echo "Total bytes used: $bytes" >> $seqres.full
> +
> echo "Allocate donor file"
> - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> +
> echo "Perform compacting"
> cat $out.list | run_check $here/src/e4compact \
> -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> +
> echo "Check data"
> run_check md5sum -c $out.md5sum
> }
> @@ -41,7 +74,7 @@ _require_scratch
> _require_defrag
> _require_xfs_io_command "falloc"
>
> -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> +_scratch_mkfs >> $seqres.full 2>&1
> _scratch_mount
>
> _workout
> --
> 2.52.0
>
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-08 17:54 ` Zorro Lang
@ 2026-02-12 8:52 ` Ojaswin Mujoo
2026-02-12 18:53 ` Zorro Lang
2026-02-24 9:45 ` Ojaswin Mujoo
1 sibling, 1 reply; 7+ messages in thread
From: Ojaswin Mujoo @ 2026-02-12 8:52 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, Disha Goel
On Mon, Feb 09, 2026 at 01:54:43AM +0800, Zorro Lang wrote:
> On Mon, Feb 02, 2026 at 06:57:30PM +0530, Ojaswin Mujoo wrote:
> > Hard-coding the donor size to 250M causes the following failure in e4compact:
> >
> > e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> > ./common/rc: line 4616: 1583182 Aborted (core dumped)
> >
> > The issue is that e4compact asserts that the stat.st_size of all files
> > combined shouldn't be more than that of the donor file. With a 64k
> > block size, fsstress often creates sparse files whose size is >3G even
> > though their disk utilization is <100M. Since the donor file is not big
> > enough, this trips the assertion in e4compact, causing the failure.
> >
> > Fix this by dynamically calculating the donor file size based on the
> > sizes of all the files. Also, make some changes to avoid future ENOSPC,
> > like reducing the -n passed to fsstress to keep the size around 2G and
> > making sure we use the whole SCRATCH disk instead of 500M.
> >
> > While we are at it, add some blank lines to make the code a bit more
> > readable.
> >
> > Reported-by: Disha Goel <disgoel@linux.ibm.com>
> > Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> > ---
>
> Hi,
>
> Thanks for your patch, but it conflicts with v2026.01.27:
> 42c2ccaf ("ext4/307: allocate donor file size dynamically")
>
> Both of you were trying to fix the donor file size issue. Please confirm
> whether that commit helps you. If you still need further changes, please
> rebase your patch on v2026.01.27.
Hi Zorro,
Thanks for pointing this out. Weird that I didn't see this when I
rebased on master. I'll test the patch and get back to you.
Regards,
ojaswin
>
> Thanks,
> Zorro
>
> > tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 37 insertions(+), 4 deletions(-)
> >
> > diff --git a/tests/ext4/307 b/tests/ext4/307
> > index 1f0e42ca..7e7fb9c8 100755
> > --- a/tests/ext4/307
> > +++ b/tests/ext4/307
> > @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> > # Import common functions.
> > . ./common/filter
> > . ./common/defrag
> > +
> > +# Sum the stat.st_size of each file to determine the size of the
> > +# donor file. We use st_size here instead of blocks used because
> > +# that's what e4compact.c checks against.
> > +get_bytes_used() {
> > + local filelist="$1"
> > + local total=0
> > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > +
> > + while IFS= read -r f; do
> > + echo -n "File: $f: " >> $seqres.full
> > + [[ -z "$f" ]] && continue
> > +
> > + if [[ -f "$f" ]]; then
> > + local bytes
> > + bytes=$(stat -c %s "$f")
> > + total=$((total + bytes))
> > + echo $bytes >> $seqres.full
> > + fi
> > + done < "$filelist"
> > +
> > + echo "Total bytes: $total" >> $seqres.full
> > + echo $total
> > +}
> > +
> > # Disable all sync operations to get higher load
> > FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> > _workout()
> > {
> > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > +
> > echo ""
> > echo "Run fsstress"
> > out=$SCRATCH_MNT/fsstress.$$
> > - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> > + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> > echo "fsstress $args" >> $seqres.full
> > _run_fsstress $args
> > +
> > find $out -type f > $out.list
> > cat $out.list | xargs md5sum > $out.md5sum
> > - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> > +
> > + bytes=`get_bytes_used $out.list`
> > + echo "Total bytes used: $bytes" >> $seqres.full
> > +
> > echo "Allocate donor file"
> > - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > +
> > echo "Perform compacting"
> > cat $out.list | run_check $here/src/e4compact \
> > -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> > +
> > echo "Check data"
> > run_check md5sum -c $out.md5sum
> > }
> > @@ -41,7 +74,7 @@ _require_scratch
> > _require_defrag
> > _require_xfs_io_command "falloc"
> >
> > -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> > +_scratch_mkfs >> $seqres.full 2>&1
> > _scratch_mount
> >
> > _workout
> > --
> > 2.52.0
> >
>
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-12 8:52 ` Ojaswin Mujoo
@ 2026-02-12 18:53 ` Zorro Lang
2026-02-13 9:21 ` Ojaswin Mujoo
0 siblings, 1 reply; 7+ messages in thread
From: Zorro Lang @ 2026-02-12 18:53 UTC (permalink / raw)
To: Ojaswin Mujoo; +Cc: fstests, Disha Goel
On Thu, Feb 12, 2026 at 02:22:33PM +0530, Ojaswin Mujoo wrote:
> On Mon, Feb 09, 2026 at 01:54:43AM +0800, Zorro Lang wrote:
> > On Mon, Feb 02, 2026 at 06:57:30PM +0530, Ojaswin Mujoo wrote:
> > > Hard-coding the donor size to 250M causes the following failure in e4compact:
> > >
> > > e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> > > ./common/rc: line 4616: 1583182 Aborted (core dumped)
> > >
> > > The issue is that e4compact asserts that the stat.st_size of all files
> > > combined shouldn't be more than that of the donor file. With a 64k
> > > block size, fsstress often creates sparse files whose size is >3G even
> > > though their disk utilization is <100M. Since the donor file is not big
> > > enough, this trips the assertion in e4compact, causing the failure.
> > >
> > > Fix this by dynamically calculating the donor file size based on the
> > > sizes of all the files. Also, make some changes to avoid future ENOSPC,
> > > like reducing the -n passed to fsstress to keep the size around 2G and
> > > making sure we use the whole SCRATCH disk instead of 500M.
> > >
> > > While we are at it, add some blank lines to make the code a bit more
> > > readable.
> > >
> > > Reported-by: Disha Goel <disgoel@linux.ibm.com>
> > > Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> > > ---
> >
> > Hi,
> >
> > Thanks for your patch, but it conflicts with v2026.01.27:
> > 42c2ccaf ("ext4/307: allocate donor file size dynamically")
> >
> > Both of you were trying to fix the donor file size issue. Please confirm
> > whether that commit helps you. If you still need further changes, please
> > rebase your patch on v2026.01.27.
>
> Hi Zorro,
>
> > Thanks for pointing this out. Weird that I didn't see this when I
> > rebased on master. I'll test the patch and get back to you.
You can always rebase onto the "for-next" branch to get the latest
version, or check the "patches-in-queue" scratch branch. The master
branch lags behind :)
>
> Regards,
> ojaswin
> >
> > Thanks,
> > Zorro
> >
> > > tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> > > 1 file changed, 37 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tests/ext4/307 b/tests/ext4/307
> > > index 1f0e42ca..7e7fb9c8 100755
> > > --- a/tests/ext4/307
> > > +++ b/tests/ext4/307
> > > @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> > > # Import common functions.
> > > . ./common/filter
> > > . ./common/defrag
> > > +
> > > +# Sum the stat.st_size of each file to determine the size of the
> > > +# donor file. We use st_size here instead of blocks used because
> > > +# that's what e4compact.c checks against.
> > > +get_bytes_used() {
> > > + local filelist="$1"
> > > + local total=0
> > > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > > +
> > > + while IFS= read -r f; do
> > > + echo -n "File: $f: " >> $seqres.full
> > > + [[ -z "$f" ]] && continue
> > > +
> > > + if [[ -f "$f" ]]; then
> > > + local bytes
> > > + bytes=$(stat -c %s "$f")
> > > + total=$((total + bytes))
> > > + echo $bytes >> $seqres.full
> > > + fi
> > > + done < "$filelist"
> > > +
> > > + echo "Total bytes: $total" >> $seqres.full
> > > + echo $total
> > > +}
> > > +
> > > # Disable all sync operations to get higher load
> > > FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> > > _workout()
> > > {
> > > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > > +
> > > echo ""
> > > echo "Run fsstress"
> > > out=$SCRATCH_MNT/fsstress.$$
> > > - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> > > + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> > > echo "fsstress $args" >> $seqres.full
> > > _run_fsstress $args
> > > +
> > > find $out -type f > $out.list
> > > cat $out.list | xargs md5sum > $out.md5sum
> > > - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> > > +
> > > + bytes=`get_bytes_used $out.list`
> > > + echo "Total bytes used: $bytes" >> $seqres.full
> > > +
> > > echo "Allocate donor file"
> > > - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > > + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > > +
> > > echo "Perform compacting"
> > > cat $out.list | run_check $here/src/e4compact \
> > > -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> > > +
> > > echo "Check data"
> > > run_check md5sum -c $out.md5sum
> > > }
> > > @@ -41,7 +74,7 @@ _require_scratch
> > > _require_defrag
> > > _require_xfs_io_command "falloc"
> > >
> > > -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> > > +_scratch_mkfs >> $seqres.full 2>&1
> > > _scratch_mount
> > >
> > > _workout
> > > --
> > > 2.52.0
> > >
> >
>
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-12 18:53 ` Zorro Lang
@ 2026-02-13 9:21 ` Ojaswin Mujoo
0 siblings, 0 replies; 7+ messages in thread
From: Ojaswin Mujoo @ 2026-02-13 9:21 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, Disha Goel
On Fri, Feb 13, 2026 at 02:53:32AM +0800, Zorro Lang wrote:
> On Thu, Feb 12, 2026 at 02:22:33PM +0530, Ojaswin Mujoo wrote:
> > On Mon, Feb 09, 2026 at 01:54:43AM +0800, Zorro Lang wrote:
> > > On Mon, Feb 02, 2026 at 06:57:30PM +0530, Ojaswin Mujoo wrote:
> > > > Hard-coding the donor size to 250M causes the following failure in e4compact:
> > > >
> > > > e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> > > > ./common/rc: line 4616: 1583182 Aborted (core dumped)
> > > >
> > > > The issue is that e4compact asserts that the stat.st_size of all files
> > > > combined shouldn't be more than that of the donor file. With a 64k
> > > > block size, fsstress often creates sparse files whose size is >3G even
> > > > though their disk utilization is <100M. Since the donor file is not big
> > > > enough, this trips the assertion in e4compact, causing the failure.
> > > >
> > > > Fix this by dynamically calculating the donor file size based on the
> > > > sizes of all the files. Also, make some changes to avoid future ENOSPC,
> > > > like reducing the -n passed to fsstress to keep the size around 2G and
> > > > making sure we use the whole SCRATCH disk instead of 500M.
> > > >
> > > > While we are at it, add some blank lines to make the code a bit more
> > > > readable.
> > > >
> > > > Reported-by: Disha Goel <disgoel@linux.ibm.com>
> > > > Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> > > > ---
> > >
> > > Hi,
> > >
> > > Thanks for your patch, but it conflicts with v2026.01.27:
> > > 42c2ccaf ("ext4/307: allocate donor file size dynamically")
> > >
> > > Both of you were trying to fix the donor file size issue. Please confirm
> > > whether that commit helps you. If you still need further changes, please
> > > rebase your patch on v2026.01.27.
> >
> > Hi Zorro,
> >
> > Thanks for pointing this out. Weird that I didn't see this when I
> > rebased on master. I'll test the patch and get back to you.
>
> You can always rebase onto the "for-next" branch to get the latest
> version, or check the "patches-in-queue" scratch branch. The master
> branch lags behind :)
Ahh got it, thanks :)
Regards,
ojaswin
>
> >
> > Regards,
> > ojaswin
> > >
> > > Thanks,
> > > Zorro
> > >
> > > > tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> > > > 1 file changed, 37 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/tests/ext4/307 b/tests/ext4/307
> > > > index 1f0e42ca..7e7fb9c8 100755
> > > > --- a/tests/ext4/307
> > > > +++ b/tests/ext4/307
> > > > @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> > > > # Import common functions.
> > > > . ./common/filter
> > > > . ./common/defrag
> > > > +
> > > > +# Sum the stat.st_size of each file to determine the size of the
> > > > +# donor file. We use st_size here instead of blocks used because
> > > > +# that's what e4compact.c checks against.
> > > > +get_bytes_used() {
> > > > + local filelist="$1"
> > > > + local total=0
> > > > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > > > +
> > > > + while IFS= read -r f; do
> > > > + echo -n "File: $f: " >> $seqres.full
> > > > + [[ -z "$f" ]] && continue
> > > > +
> > > > + if [[ -f "$f" ]]; then
> > > > + local bytes
> > > > + bytes=$(stat -c %s "$f")
> > > > + total=$((total + bytes))
> > > > + echo $bytes >> $seqres.full
> > > > + fi
> > > > + done < "$filelist"
> > > > +
> > > > + echo "Total bytes: $total" >> $seqres.full
> > > > + echo $total
> > > > +}
> > > > +
> > > > # Disable all sync operations to get higher load
> > > > FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> > > > _workout()
> > > > {
> > > > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > > > +
> > > > echo ""
> > > > echo "Run fsstress"
> > > > out=$SCRATCH_MNT/fsstress.$$
> > > > - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> > > > + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> > > > echo "fsstress $args" >> $seqres.full
> > > > _run_fsstress $args
> > > > +
> > > > find $out -type f > $out.list
> > > > cat $out.list | xargs md5sum > $out.md5sum
> > > > - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> > > > +
> > > > + bytes=`get_bytes_used $out.list`
> > > > + echo "Total bytes used: $bytes" >> $seqres.full
> > > > +
> > > > echo "Allocate donor file"
> > > > - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > > > + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > > > +
> > > > echo "Perform compacting"
> > > > cat $out.list | run_check $here/src/e4compact \
> > > > -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> > > > +
> > > > echo "Check data"
> > > > run_check md5sum -c $out.md5sum
> > > > }
> > > > @@ -41,7 +74,7 @@ _require_scratch
> > > > _require_defrag
> > > > _require_xfs_io_command "falloc"
> > > >
> > > > -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> > > > +_scratch_mkfs >> $seqres.full 2>&1
> > > > _scratch_mount
> > > >
> > > > _workout
> > > > --
> > > > 2.52.0
> > > >
> > >
> >
>
* Re: [PATCH] ext4/307: Calculate donor size to avoid failures for 64k bs
2026-02-08 17:54 ` Zorro Lang
2026-02-12 8:52 ` Ojaswin Mujoo
@ 2026-02-24 9:45 ` Ojaswin Mujoo
1 sibling, 0 replies; 7+ messages in thread
From: Ojaswin Mujoo @ 2026-02-24 9:45 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, Disha Goel
On Mon, Feb 09, 2026 at 01:54:43AM +0800, Zorro Lang wrote:
> On Mon, Feb 02, 2026 at 06:57:30PM +0530, Ojaswin Mujoo wrote:
> > Hard-coding the donor size to 250M causes the following failure in e4compact:
> >
> > e4compact.c:68: do_defrag_range: Assertion `donor->length >= len' failed.
> > ./common/rc: line 4616: 1583182 Aborted (core dumped)
> >
> > The issue is that e4compact asserts that the stat.st_size of all files
> > combined shouldn't be more than that of the donor file. With a 64k
> > block size, fsstress often creates sparse files whose size is >3G even
> > though their disk utilization is <100M. Since the donor file is not big
> > enough, this trips the assertion in e4compact, causing the failure.
> >
> > Fix this by dynamically calculating the donor file size based on the
> > sizes of all the files. Also, make some changes to avoid future ENOSPC,
> > like reducing the -n passed to fsstress to keep the size around 2G and
> > making sure we use the whole SCRATCH disk instead of 500M.
> >
> > While we are at it, add some blank lines to make the code a bit more
> > readable.
> >
> > Reported-by: Disha Goel <disgoel@linux.ibm.com>
> > Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
> > ---
>
> Hi,
>
> Thanks for your patch, but it conflicts with v2026.01.27:
> 42c2ccaf ("ext4/307: allocate donor file size dynamically")
>
> Both of you were trying to fix the donor file size issue. Please confirm
> whether that commit helps you. If you still need further changes, please
> rebase your patch on v2026.01.27.
>
> Thanks,
> Zorro
Hi Zorro,
Sorry for the delay, but we tested this again with 64k block size ext4
and the test still fails. The issue is that e4compact.c essentially
asserts:

(sum of stat.st_size of all files) <= donor stat.st_size

But the merged patch uses du -sm, which gives the blocks used instead
of the file size.

Below are the numbers for a 64k block size run:

du -sm <fsstress dir>:
132 // 132M

/* sum of stat.st_size of all files */
find <fsstress dir> -type f -printf '%s\n' | awk '{s+=$1} END {print s}'
5110092115 // ~4G

stat -c %s donor
138412032 // 132M

Since the donor is still smaller than the sum of the sizes of all the
files, e4compact eventually hits an assert(), causing the crash.
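The st_size vs. blocks-used mismatch is easy to reproduce standalone with
a sparse file; a quick illustrative sketch (temporary paths, not taken
from the test itself):

```shell
# Create a fully sparse file: 1G apparent size, no data written.
f=$(mktemp)
truncate -s 1G "$f"

# st_size (%s) reports the apparent size in bytes...
stat -c %s "$f"    # 1073741824

# ...while st_blocks (%b) reports 512-byte units actually allocated,
# which stays near zero for a sparse file.
stat -c %b "$f"

rm -f "$f"
```

du, like %b, reports allocated blocks, which is why du -sm looks tiny
here while the summed st_size is huge.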
The patch shared in this thread takes care of this. However, it also got
me wondering whether the right fix would instead be to change e4compact
to compare blocks used rather than file sizes. I've not looked at the
code closely yet, but I don't see a reason why we should require the
donor to be as big as the sum of the file sizes, when all we need is for
the blocks used by each file to fit in the donor.
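If e4compact were changed that way, the shell side of the test could sum
allocated bytes instead of apparent sizes. A hypothetical counterpart to
the patch's get_bytes_used (get_blocks_used is my name, not something
from the patch or from fstests):

```shell
# Sum the bytes actually allocated (st_blocks * 512) for every file
# named in a list file, one path per line. Unlike summing stat.st_size,
# sparse regions contribute nothing to the total.
get_blocks_used() {
	local filelist="$1"
	local total=0
	local f

	while IFS= read -r f; do
		[ -f "$f" ] || continue
		# %b is st_blocks, in 512-byte units regardless of fs block size
		total=$((total + $(stat -c %b "$f") * 512))
	done < "$filelist"

	echo "$total"
}
```

A donor sized from this total would track real disk usage closely,
though e4compact's assert would still need to be relaxed to match.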
What do you think?
Regards,
ojaswin
>
> > tests/ext4/307 | 41 +++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 37 insertions(+), 4 deletions(-)
> >
> > diff --git a/tests/ext4/307 b/tests/ext4/307
> > index 1f0e42ca..7e7fb9c8 100755
> > --- a/tests/ext4/307
> > +++ b/tests/ext4/307
> > @@ -12,24 +12,57 @@ _begin_fstest auto ioctl rw defrag prealloc
> > # Import common functions.
> > . ./common/filter
> > . ./common/defrag
> > +
> > +# Sum the stat.st_size of each file to determine the size of the
> > +# donor file. We use st_size here instead of blocks used because
> > +# that's what e4compact.c checks against.
> > +get_bytes_used() {
> > + local filelist="$1"
> > + local total=0
> > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > +
> > + while IFS= read -r f; do
> > + echo -n "File: $f: " >> $seqres.full
> > + [[ -z "$f" ]] && continue
> > +
> > + if [[ -f "$f" ]]; then
> > + local bytes
> > + bytes=$(stat -c %s "$f")
> > + total=$((total + bytes))
> > + echo $bytes >> $seqres.full
> > + fi
> > + done < "$filelist"
> > +
> > + echo "Total bytes: $total" >> $seqres.full
> > + echo $total
> > +}
> > +
> > # Disable all sync operations to get higher load
> > FSSTRESS_AVOID="$FSSTRESS_AVOID -ffsync=0 -fsync=0 -ffdatasync=0"
> > _workout()
> > {
> > + local blksz=$(_get_block_size $SCRATCH_MNT)
> > +
> > echo ""
> > echo "Run fsstress"
> > out=$SCRATCH_MNT/fsstress.$$
> > - args=`_scale_fsstress_args -p4 -n999 -f setattr=1 -d $out`
> > + args=`_scale_fsstress_args -p4 -n500 -f setattr=1 -d $out`
> > echo "fsstress $args" >> $seqres.full
> > _run_fsstress $args
> > +
> > find $out -type f > $out.list
> > cat $out.list | xargs md5sum > $out.md5sum
> > - usage=`du -sch $out | tail -n1 | gawk '{ print $1 }'`
> > +
> > + bytes=`get_bytes_used $out.list`
> > + echo "Total bytes used: $bytes" >> $seqres.full
> > +
> > echo "Allocate donor file"
> > - $XFS_IO_PROG -c "falloc 0 250M" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > + $XFS_IO_PROG -c "falloc 0 $bytes" -f $SCRATCH_MNT/donor | _filter_xfs_io
> > +
> > echo "Perform compacting"
> > cat $out.list | run_check $here/src/e4compact \
> > -i -v -f $SCRATCH_MNT/donor >> $seqres.full 2>&1
> > +
> > echo "Check data"
> > run_check md5sum -c $out.md5sum
> > }
> > @@ -41,7 +74,7 @@ _require_scratch
> > _require_defrag
> > _require_xfs_io_command "falloc"
> >
> > -_scratch_mkfs_sized $((512 * 1024 * 1024)) >> $seqres.full 2>&1
> > +_scratch_mkfs >> $seqres.full 2>&1
> > _scratch_mount
> >
> > _workout
> > --
> > 2.52.0
> >
>