From: "Darrick J. Wong" <djwong@kernel.org>
To: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Cc: "linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
hch <hch@lst.de>
Subject: Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
Date: Sun, 8 Feb 2026 22:07:16 -0800 [thread overview]
Message-ID: <20260209060716.GL1535390@frogsfrogsfrogs> (raw)
In-Reply-To: <aYlHZ4bBQI3Vpb3N@shinmob>
On Mon, Feb 09, 2026 at 02:50:00AM +0000, Shinichiro Kawasaki wrote:
> On Feb 06, 2026 / 09:38, Darrick J. Wong wrote:
> > On Fri, Feb 06, 2026 at 08:40:07AM +0000, Shinichiro Kawasaki wrote:
> > > Hello Darrick,
> > >
> > > Recently, my fstests run with null_blk (8GiB size) as SCRATCH_DEV failed at
> > > xfs/802 [3]. I took a look and observed the following points:
> > >
> > > 1) The xfs_scrub_all command ran, but even though SCRATCH_DEV was mounted,
> > >    it did not scrub SCRATCH_DEV. Hence the failure.
> > > 2) xfs_scrub_all uses the lsblk command to list all mounted xfs filesystems
> > >    [1]. However, lsblk does not report that SCRATCH_DEV is mounted as xfs.
> > > 3) I learned that lsblk refers to the udev database [2], and the udev
> > >    database sometimes fails to update the filesystem information. This is
> > >    the case for null_blk as SCRATCH_DEV on my test nodes.
> >
> > Hrm. I wonder if we're starting xfs_scrub_all too soon after the
> > _scratch_cycle_mount? It's possible that if udev is running slowly,
> > it won't yet have poked blkid to update its cache, in which case lsblk
> > won't show it.
> >
> > If you add _udev_wait after _scratch_cycle_mount, does the "Health
> > status has not been collected" problem go away? I couldn't reproduce
> > this specific problem on my test VMs, but the "udev hasn't caught up and
> > breaks fstests" pattern is very familiar. :/
>
> Unfortunately, no. I made the change below in the test case, but I still see
> the "Health status has not been collected" message.
>
> diff --git a/tests/xfs/802 b/tests/xfs/802
> index fc4767a..77e09f8 100755
> --- a/tests/xfs/802
> +++ b/tests/xfs/802
> @@ -131,6 +131,8 @@ systemctl cat "$new_scruball_svc" >> $seqres.full
> # Cycle mounts to clear all the incore CHECKED bits.
> _scratch_cycle_mount
>
> +_udev_wait $SCRATCH_DEV
> +
> echo "Scrub Everything"
> run_scrub_service "$new_scruball_svc"
>
>
> I also manually mounted the null_blk device with xfs and ran "udevadm settle".
> Even then, lsblk failed to report the fstype for the null_blk device (FYI, I
> use Fedora 43 to recreate the failure).
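
For what it's worth, the stale-cache vs. direct-probe difference can be
checked by hand; a rough sketch (the device path and the little helper
below are illustrative, not anything xfs_scrub_all actually ships):

```shell
#!/bin/sh
# lsblk answers fstype queries from the udev database, which can go
# stale; blkid -p probes the device superblock directly.  On an affected
# system the two commands disagree (device path is illustrative):
#
#   lsblk -n -o FSTYPE /dev/nullb0     # may print nothing
#   blkid -p -o export /dev/nullb0     # prints DEVNAME=..., TYPE=xfs, ...

# Hypothetical helper: pull TYPE= out of blkid's key=value export output.
fstype_of() {
	sed -n 's/^TYPE=//p'
}
```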
Waitaminute, how can you even format xfs on nullblk to run fstests?
Isn't that the bdev that silently discards everything written to it, and
returns zero on reads??
--D
> >
> > > Based on these observations, I think there are two points to improve:
> > >
> > > 1) I found that the "blkid -p" command reports the xfs filesystem on
> > >    null_blk even when lsblk does not report it. I think xfs_scrub_all
> > >    could be modified to use "blkid -p" instead of lsblk to find mounted
> > >    xfs filesystems.
> > > 2) When xfs filesystems other than TEST_DEV or SCRATCH_DEV are mounted on
> > >    the test node, xfs_scrub_all changes their status too. This does not
> > >    sound good to me, since it affects system state outside the block
> > >    devices under test. I think the test case could be improved to check
> > >    that no xfs filesystems other than TEST_DEV or the SCRATCH_DEVs are
> > >    mounted, and to skip itself otherwise.
> >
> > I wonder if a better solution would be to add to xfs_scrub_all a
> > --restrict $SCRATCH_MNT --restrict $TEST_DIR option so that it ignores
> > mounts that aren't under test?
>
> Yes, I agree that would be the better solution, since the test case would not
> need to be skipped even when other xfs filesystems are mounted.
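
A --restrict option doesn't exist yet, but the filtering it would do
could look roughly like this (a sketch; the function name and the
findmnt usage are made up for illustration, not xfs_scrub_all code):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed --restrict behavior: given a list
# of whitelisted directories, keep only the mountpoints that are one of
# those directories or sit underneath one of them.
filter_mounts() {
	while read -r mnt; do
		for root in "$@"; do
			case "$mnt" in
			"$root" | "$root"/*)
				echo "$mnt"
				break
				;;
			esac
		done
	done
}

# Possible usage, feeding it the mounted xfs filesystems:
#   findmnt -t xfs -n -o TARGET | filter_mounts "$TEST_DIR" "$SCRATCH_MNT"
```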
Thread overview: 12+ messages
2026-02-06 8:40 [bug report] xfs/802 failure due to missing fstype report by lsblk Shinichiro Kawasaki
2026-02-06 17:38 ` Darrick J. Wong
2026-02-09 2:50 ` Shinichiro Kawasaki
2026-02-09 6:07 ` Darrick J. Wong [this message]
2026-02-09 6:28 ` hch
2026-02-09 7:54 ` Shinichiro Kawasaki
2026-02-10 2:00 ` Darrick J. Wong
2026-02-10 6:17 ` Darrick J. Wong
2026-02-10 6:19 ` Shinichiro Kawasaki
2026-02-13 22:14 ` Darrick J. Wong
2026-02-14 6:39 ` Shinichiro Kawasaki
2026-02-14 7:39 ` Darrick J. Wong