* df returns incorrect size of partition due to huge overhead block count in ext4 partition
@ 2022-03-25 6:42 Fariya F
2022-03-25 22:11 ` Theodore Ts'o
0 siblings, 1 reply; 3+ messages in thread
From: Fariya F @ 2022-03-25 6:42 UTC (permalink / raw)
To: linux-ext4, linux-fsdevel
My eMMC partition is ext4 formatted and is about 100 MB in size. The
df -h command lists the size of the partition and the used percentage
as below.
Filesystem Size Used Avail Use% Mounted on
/dev/mmcblk2p4 16Z 16Z 79M 100% /data
For your reference, the values returned by statfs64() are:
statfs64("/data", 88, {f_type="EXT2_SUPER_MAGIC", f_bsize=1024,
f_blocks=18446744073659310077, f_bfree=87628, f_bavail=80460,
f_files=25688, f_ffree=25189, f_fsid={-1446355608, 1063639410},
f_namelen=255, f_frsize=1024, f_flags=4128}) = 0
The output dumpe2fs returns the following
Block count: 102400
Reserved block count: 5120
Overhead blocks: 50343939
As per my kernel (4.9.31) code, f_blocks is computed as the block
count minus the overhead blocks. With the values above the subtraction
goes negative, and since the result is unsigned it wraps around to the
huge value 18446744073659310077.
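The wraparound can be reproduced with a few lines of C (values taken from the dumpe2fs output below; this mirrors the arithmetic, not the actual kernel code):

```c
#include <stdint.h>

/* Mirrors the statfs path described above: f_blocks = block count - overhead.
 * Both values are unsigned 64-bit, so when the cached overhead exceeds the
 * block count, the subtraction wraps around modulo 2^64 instead of going
 * negative. */
static uint64_t statfs_f_blocks(uint64_t block_count, uint64_t overhead)
{
    return block_count - overhead;
}

/* With the dumpe2fs values:
 *   statfs_f_blocks(102400, 50343939) == 18446744073659310077
 * which is exactly the f_blocks value statfs64() returned. */
```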
I have a script which monitors the used percentage of the partition
using the df -h command, and when the used percentage is greater than
70% it deletes files until the percentage comes down. Since df is
reporting 100% usage all the time, all my files get deleted.
My questions are:
a) Where does the overhead block count get set?
b) Why is this value so huge for my partition, and how can I correct
it, considering fsck is also not correcting it?
Please note fsck on this partition doesn't report any issues at all.
I am also able to create files in this partition.
* Re: df returns incorrect size of partition due to huge overhead block count in ext4 partition
2022-03-25 6:42 df returns incorrect size of partition due to huge overhead block count in ext4 partition Fariya F
@ 2022-03-25 22:11 ` Theodore Ts'o
2022-03-28 16:08 ` Fariya F
0 siblings, 1 reply; 3+ messages in thread
From: Theodore Ts'o @ 2022-03-25 22:11 UTC (permalink / raw)
To: Fariya F; +Cc: linux-ext4, linux-fsdevel
On Fri, Mar 25, 2022 at 12:12:30PM +0530, Fariya F wrote:
> The output dumpe2fs returns the following
>
> Block count: 102400
> Reserved block count: 5120
> Overhead blocks: 50343939
Yeah, that value is obviously wrong; I'm not sure how it got
corrupted, but that's the cause of your problem.
> a) Where does overhead blocks get set?
The kernel can calculate the overhead value, but it can be slow for
very large file systems. For that reason, it is cached in the
superblock. So if the s_overhead_clusters is zero, the kernel will
calculate the overhead value, and then update the superblock.
In newer versions of e2fsprogs, mkfs.ext4 / mke2fs will write the
overhead value into the superblock.
> b) Why is this value huge for my partition and how to correct it
> considering fsck is also not correcting this
The simplest way is to run the following command with the file system
unmounted:
debugfs -w -R "set_super_value overhead_clusters 0" /dev/sdXX
Then the next time you mount the file system, the correct value should
get calculated and filled in.
It's a bug that fsck isn't noticing the problem and correcting it.
I'll work on getting that fixed in a future version of e2fsprogs.
My apologies for the inconvenience.
Cheers,
- Ted
* Re: df returns incorrect size of partition due to huge overhead block count in ext4 partition
2022-03-25 22:11 ` Theodore Ts'o
@ 2022-03-28 16:08 ` Fariya F
0 siblings, 0 replies; 3+ messages in thread
From: Fariya F @ 2022-03-28 16:08 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-ext4, linux-fsdevel
Hi Ted,
Thanks for the response. Really appreciate it. Some questions:
a) This issue is observed on one of the customer boards, so a fix is a
must for us, or at least I will need a work-around so that other
customer boards do not face this issue. As I mentioned, my script
relies on the used-percentage output of df -h. On the board that
reports 16Z for the size and used space, the available space is
somehow reported correctly. Should my script rely on available space
rather than the Use% output of df? Would that be a reliable
work-around? Do you see any issue in continuing to use the partition,
or could the overhead block count create a problem somewhere down the
line and cause the partition to misbehave or lead to some sort of data
loss? Data loss would be a concern for us. Please guide.
//* More info on my script: I have a script which monitors the used
percentage of the partition using the df -h command, and when the used
percentage is greater than 70% it deletes files until the percentage
comes down. Since df is reporting 100% usage all the time, all my
files get deleted. *//
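A sketch of the work-around in C, deciding from the absolute number of available bytes via statvfs(3) rather than df's Use% column, which is derived from the corrupted f_blocks (the path and threshold in the usage comment are illustrative assumptions, not values from the thread):

```c
#include <sys/statvfs.h>

/* Work-around sketch: f_bavail was reported correctly even with the bogus
 * overhead, so act on available bytes (f_bavail * f_frsize) instead of a
 * used percentage. Returns 1 when available space is below the threshold,
 * 0 when it is not, and -1 if statvfs() fails. */
static int low_space(const char *path, unsigned long long min_avail_bytes)
{
    struct statvfs st;

    if (statvfs(path, &st) != 0)
        return -1;
    return (unsigned long long)st.f_bavail * st.f_frsize < min_avail_bytes;
}

/* e.g. low_space("/data", 30ULL * 1024 * 1024)
 * -- delete files while less than ~30 MB of the ~100 MB partition is free */
```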
b) Any other suggestions for a work-around so that, even if the
overhead block count is larger than the actual number of blocks on the
partition, I can use the partition reliably? Or would it be better to
wait for the fix in e2fsprogs?
I think that, apart from the fix in the e2fsprogs tool, a kernel fix
is also required: the kernel should check that the overhead block
count is not greater than the actual number of blocks on the
partition.
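A minimal sketch of such a check (illustrative only, not actual kernel code; the function name is made up):

```c
#include <stdint.h>

/* Sketch of the sanity check proposed above: treat an impossibly large
 * cached overhead as "not cached" (zero), which would make the kernel
 * recalculate the overhead on mount instead of trusting the corrupted
 * superblock field. */
static uint64_t sanitize_overhead(uint64_t block_count, uint64_t cached)
{
    if (cached >= block_count)
        return 0;   /* bogus value from the superblock: force recompute */
    return cached;
}
```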
Regards
On Sat, Mar 26, 2022 at 3:41 AM Theodore Ts'o <tytso@mit.edu> wrote:
>
> On Fri, Mar 25, 2022 at 12:12:30PM +0530, Fariya F wrote:
> > The output dumpe2fs returns the following
> >
> > Block count: 102400
> > Reserved block count: 5120
> > Overhead blocks: 50343939
>
> Yeah, that value is obviously wrong; I'm not sure how it got
> corrupted, but that's the cause of your problem.
>
> > a) Where does overhead blocks get set?
>
> The kernel can calculate the overhead value, but it can be slow for
> very large file systems. For that reason, it is cached in the
> superblock. So if the s_overhead_clusters is zero, the kernel will
> calculate the overhead value, and then update the superblock.
>
> In newer versions of e2fsprogs, mkfs.ext4 / mke2fs will write the
> overhead value into the superblock.
>
> > b) Why is this value huge for my partition and how to correct it
> > considering fsck is also not correcting this
>
> The simplest way is to run the following command with the file system
> unmounted:
>
> debugfs -w -R "set_super_value overhead_clusters 0" /dev/sdXX
>
> Then the next time you mount the file system, the correct value should
> get calculated and filled in.
>
> It's a bug that fsck isn't noticing the problem and correcting it.
> I'll work on getting that fixed in a future version of e2fsprogs.
>
> My apologies for the inconvenience.
>
> Cheers,
>
> - Ted