From: Qu Wenruo <wqu@suse.com>
To: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>,
"fstests@vger.kernel.org" <fstests@vger.kernel.org>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
"glaubitz@physik.fu-berlin.de" <glaubitz@physik.fu-berlin.de>,
"frank.li@vivo.com" <frank.li@vivo.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Subject: Re: [RFC] Why generic/073 is generic but not btrfs specific?
Date: Fri, 7 Nov 2025 07:28:19 +1030 [thread overview]
Message-ID: <fb616f30-5a56-4436-8dc7-0d8fe2b4d772@suse.com> (raw)
In-Reply-To: <92ac4eb8cdc47ddc99edeb145e67882259d3aa0e.camel@ibm.com>
On 2025/11/7 07:10, Viacheslav Dubeyko wrote:
> Hello,
>
> Running generic/073 for the case of HFS+ finishes with volume corruption:
>
> sudo ./check generic/073
> FSTYP -- hfsplus
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.17.0-rc1+ #4 SMP PREEMPT_DYNAMIC
> Wed Oct 1 15:02:44 PDT 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> generic/073 _check_generic_filesystem: filesystem on /dev/loop51 is inconsistent
> (see XFSTESTS-2/xfstests-dev/results//generic/073.full for details)
>
> Ran: generic/073
> Failures: generic/073
> Failed 1 of 1 tests
>
> sudo fsck.hfsplus -d /dev/loop51
> ** /dev/loop51
> Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K.
> Executing fsck_hfs (version 540.1-Linux).
> ** Checking non-journaled HFS Plus Volume.
> The volume name is untitled
> ** Checking extents overflow file.
> ** Checking catalog file.
> ** Checking multi-linked files.
> ** Checking catalog hierarchy.
> Invalid directory item count
> (It should be 1 instead of 0)
> ** Checking extended attributes file.
> ** Checking volume bitmap.
> ** Checking volume information.
> Verify Status: VIStat = 0x0000, ABTStat = 0x0000 EBTStat = 0x0000
> CBTStat = 0x0000 CatStat = 0x00004000
> ** Repairing volume.
> ** Rechecking volume.
> ** Checking non-journaled HFS Plus Volume.
> The volume name is untitled
> ** Checking extents overflow file.
> ** Checking catalog file.
> ** Checking multi-linked files.
> ** Checking catalog hierarchy.
> ** Checking extended attributes file.
> ** Checking volume bitmap.
> ** Checking volume information.
> ** The volume untitled was repaired successfully.
>
> Initially, I considered that something was wrong with the HFS+ driver logic. But
> after testing and debugging the issue, I believe that the HFS+ logic is correct.
>
> As far as I can see, generic/073 is checking a btrfs-specific case:
>
> # Test file A fsync after moving one other unrelated file B between directories
> # and fsyncing B's old parent directory before fsyncing the file A. Check that
> # after a crash all the file A data we fsynced is available.
> #
> # This test is motivated by an issue discovered in btrfs which caused the file
> # data to be lost (despite fsync returning success to user space). That btrfs
> # bug was fixed by the following linux kernel patch:
> #
> # Btrfs: fix data loss in the fast fsync path
>
> The test is doing these steps on final phase:
>
> mv $SCRATCH_MNT/testdir_1/bar $SCRATCH_MNT/testdir_2/bar
> $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir_1
> $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
>
> So, we move the file bar from testdir_1 into the testdir_2 folder. This means
> that the HFS+ logic decrements the number of entries in testdir_1 and increments
> the number of entries in testdir_2. Finally, we fsync only testdir_1 and foo, but
> not testdir_2.
If the fs is using a journal, shouldn't the increment on testdir_2
already be journaled? Thus after a power loss, the increment on
testdir_2 (the new dirent for bar) should be replayed, and the end user
should still see that inode.

To me this looks like a bug in the HFS+ logic where something is not
properly journaled (the increment on testdir_2 for bar).

Finally, if you ask an end user whether it is acceptable that, after
moving an inode and fsyncing the old directory, the inode is no longer
reachable, I'm pretty sure no end user will think it is acceptable.
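To make the atomicity point concrete, here is an illustrative-only sketch of the three updates a rename must commit as one transaction. This is NOT the real HFS+ journal format; the record contents are hypothetical:

```c
/* Hypothetical, illustrative-only journal records for
 * "mv testdir_1/bar testdir_2/bar". NOT the real HFS+ journal format;
 * just a sketch of why all three updates must commit together. */
#include <stddef.h>

struct jrec {
	const char *object;
	const char *update;
};

static const struct jrec rename_txn[] = {
	{ "testdir_1", "remove dirent \"bar\"; entry count 1 -> 0" },
	{ "testdir_2", "add dirent \"bar\"; entry count 0 -> 1" },
	{ "bar inode", "new parent ID; ctime update" },
};

/* If any record of the transaction is replayed, all of them must be:
 * replaying only the testdir_1 decrement leaves exactly the
 * "Invalid directory item count" state that fsck.hfsplus reports. */
size_t rename_txn_records(void)
{
	return sizeof(rename_txn) / sizeof(rename_txn[0]);
}
```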
> As a result, this is why fsck.hfsplus detects the volume corruption
> afterwards. As far as I can see, the HFS+ driver behavior is completely
> correct and nothing needs to be fixed in the HFS+ logic here.
Then I guess you may also want to ask why ext4/XFS/Btrfs/f2fs all pass
the test case.
Thanks,
Qu
>
> But what could be the proper solution? Should generic/073 be excluded from the
> HFS/HFS+ xfstests runs? Or, maybe, should generic/073 be btrfs-specific? Am I
> missing something here?
>
> Thanks,
> Slava.