* [bug report] xfs/802 failure due to missing fstype report by lsblk
@ 2026-02-06 8:40 Shinichiro Kawasaki
2026-02-06 17:38 ` Darrick J. Wong
0 siblings, 1 reply; 12+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-06 8:40 UTC (permalink / raw)
To: Darrick J. Wong, linux-xfs@vger.kernel.org; +Cc: hch
Hello Darrick,
Recently, my fstests run with null_blk (8GiB size) as SCRATCH_DEV failed at
xfs/802 [3]. I took a look and observed the following points:
1) The xfs_scrub_all command itself ran, but even though SCRATCH_DEV is
mounted, it did not scrub SCRATCH_DEV. Hence the failure.
2) xfs_scrub_all uses the lsblk command to list all mounted xfs filesystems [1].
However, lsblk does not report that SCRATCH_DEV is mounted as xfs.
3) I learned that lsblk refers to the udev database [2], and the udev database
sometimes fails to pick up the filesystem information. This is the case for
null_blk as SCRATCH_DEV on my test nodes.
Based on these observations, I think there are two points to improve:
1) I found that the "blkid -p" command reports the xfs filesystem on the
null_blk device even when lsblk does not. I think xfs_scrub_all could be
modified to use "blkid -p" instead of lsblk to find mounted xfs filesystems.
2) When there are xfs filesystems on the test node other than TEST_DEV or
SCRATCH_DEV, xfs_scrub_all changes their status. This does not sound good
to me, since it affects system state outside the test-target block devices.
I think the test case could be improved to check that no xfs filesystems
other than TEST_DEV or the SCRATCH_DEVs are mounted; if there are any, the
test case should be skipped.
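To illustrate the first suggestion: "blkid -p -o export" emits KEY=value
lines that would be easy to consume from xfs_scrub_all's Python code. The
helpers below are a hypothetical sketch under that assumption; the names
parse_blkid_export and probe_fstype are mine, not anything in xfsprogs:

```python
# Hypothetical sketch: probe the fstype of a block device with
# "blkid -p" instead of relying on the udev-backed lsblk output.
import subprocess

def parse_blkid_export(text):
    """Parse "blkid -p -o export" output (KEY=value lines) into a dict."""
    tags = {}
    for line in text.splitlines():
        key, sep, value = line.partition('=')
        if sep:
            tags[key] = value
    return tags

def probe_fstype(device):
    """Read the fstype straight from the device's superblock.

    Needs read access to the device (i.e. root), since -p bypasses
    the blkid cache and the udev database entirely."""
    out = subprocess.run(['blkid', '-p', '-o', 'export', device],
                         capture_output=True, text=True, check=True).stdout
    return parse_blkid_export(out).get('TYPE')
```

probe_fstype('/dev/nullb1') would then return "xfs" regardless of whether
udev ever updated its database for that device.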
At this moment, I don't have time to create patches for the improvements above.
If anyone can work on them, it would be appreciated.
[1] https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/tree/scrub/xfs_scrub_all.py.in#n55
[2] https://unix.stackexchange.com/questions/642598/lsblk-file-system-type-not-appears-from-lsblk#642600
[3] xfs/802 failure console message
xfs/802 - output mismatch (see /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad)
--- tests/xfs/802.out 2026-02-04 20:44:52.254221182 +0900
+++ /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad 2026-02-06 17:04:24.336536185 +0900
@@ -2,4 +2,7 @@
Format and populate
Scrub Scratch FS
Scrub Everything
+Health status has not been collected for this filesystem.
+Please run xfs_scrub(8) to remedy this situation.
+cannot find evidence that /var/kts/scratch was scrubbed
Scrub Done
...
(Run 'diff -u /home/shin/kts/kernel-test-suite/src/xfstests/tests/xfs/802.out /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad' to see the entire diff)
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-06 8:40 [bug report] xfs/802 failure due to missing fstype report by lsblk Shinichiro Kawasaki
@ 2026-02-06 17:38 ` Darrick J. Wong
2026-02-09 2:50 ` Shinichiro Kawasaki
0 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-06 17:38 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-xfs@vger.kernel.org, hch
On Fri, Feb 06, 2026 at 08:40:07AM +0000, Shinichiro Kawasaki wrote:
> Hello Darrick,
>
> Recently, my fstests run for null_blk (8GiB size) as SCRATCH_DEV failed at
> xfs/802 [3]. I took a look and observed following points:
>
> 1) xfs_scrub_all command ran as expected. Even though SCRATCH_DEV is mounted,
> it did not scrub SCRATCH_DEV. Hence the failure.
> 2) xfs_scrub_all uses lsblk command to list all mounted xfs filesystems [1].
> However, lsblk command does not report that SCRATCH_DEV is mounted as xfs.
> 3) I learned that lsblk command refers to udev database [2], and udev database
> sometimes fails to update the filesystem information. This is the case for
> the null_blk as SCRATCH_DEV on my test nodes.
Hrm. I wonder if we're starting xfs_scrub_all too soon after the
_scratch_cycle_mount? It's possible that if udev is running slowly,
it won't yet have poked blkid to update its cache, in which case lsblk
won't show it.
If you add _udev_wait after _scratch_cycle_mount, does the "Health
status has not been collected" problem go away? I couldn't reproduce
this specific problem on my test VMs, but the "udev hasn't caught up and
breaks fstests" pattern is very familiar. :/
> Based on these observations, I think there are two points to improve:
>
> 1) I found "blkid -p" command reports that null_blk is mounted as xfs, even when
> lsblk does not report it. I think xfs_scrub_all can be modified to use
> "blkid -p" instead of lsblk to find out xfs filesystems mounted.
> 2) When there are other xfs filesystems on the test node than TEST_DEV or
> SCRATCH_DEV, xfs_scrub_all changes the status of them. This does not sound
> good to me since it affects system status out of the test targets block
> devices. I think the test case can be improved to check that there is no other
> xfs filesystems mounted other than TEST_DEV or SCRATCH_DEV/s. If not, the
> test case should be skipped.
I wonder if a better solution would be to add to xfs_scrub_all a
--restrict $SCRATCH_MNT --restrict $TEST_DIR option so that it ignores
mounts that aren't under test?
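The --restrict idea might boil down to a path-prefix check over the
mountpoints xfs_scrub_all discovers. This is a hypothetical sketch; the
option and the helper name are illustrative, not an existing
xfs_scrub_all interface:

```python
# Hypothetical sketch of a --restrict filter: only keep filesystems
# mounted at, or nested under, one of the allowed directory trees.
import os

def restrict_mounts(mountpoints, restrict_dirs):
    """Return the mountpoints equal to, or under, a restrict dir."""
    keep = []
    for mnt in mountpoints:
        for rdir in restrict_dirs:
            # commonpath() compares whole path components, so
            # /mnt/test2 is not treated as being under /mnt/test.
            if os.path.commonpath([mnt, rdir]) == rdir:
                keep.append(mnt)
                break
    return keep
```

With --restrict $SCRATCH_MNT --restrict $TEST_DIR, an unrelated mount
such as /srv/data would simply be ignored instead of being scrubbed.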
--D
> At this moment, I don't have time to create patches for the improvements above.
> If anyone can work on them, it will be appreciated.
>
> [1] https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/tree/scrub/xfs_scrub_all.py.in#n55
> [2] https://unix.stackexchange.com/questions/642598/lsblk-file-system-type-not-appears-from-lsblk#642600
>
> [3] xfs/802 failure console message
>
> xfs/802 - output mismatch (see /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad)
> --- tests/xfs/802.out 2026-02-04 20:44:52.254221182 +0900
> +++ /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad 2026-02-06 17:04:24.336536185 +0900
> @@ -2,4 +2,7 @@
> Format and populate
> Scrub Scratch FS
> Scrub Everything
> +Health status has not been collected for this filesystem.
> +Please run xfs_scrub(8) to remedy this situation.
> +cannot find evidence that /var/kts/scratch was scrubbed
> Scrub Done
> ...
> (Run 'diff -u /home/shin/kts/kernel-test-suite/src/xfstests/tests/xfs/802.out /home/shin/kts/kernel-test-suite/src/xfstests/results//xfs/802.out.bad' to see the entire diff)
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-06 17:38 ` Darrick J. Wong
@ 2026-02-09 2:50 ` Shinichiro Kawasaki
2026-02-09 6:07 ` Darrick J. Wong
0 siblings, 1 reply; 12+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-09 2:50 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: linux-xfs@vger.kernel.org, hch
On Feb 06, 2026 / 09:38, Darrick J. Wong wrote:
> On Fri, Feb 06, 2026 at 08:40:07AM +0000, Shinichiro Kawasaki wrote:
> > Hello Darrick,
> >
> > Recently, my fstests run for null_blk (8GiB size) as SCRATCH_DEV failed at
> > xfs/802 [3]. I took a look and observed following points:
> >
> > 1) xfs_scrub_all command ran as expected. Even though SCRATCH_DEV is mounted,
> > it did not scrub SCRATCH_DEV. Hence the failure.
> > 2) xfs_scrub_all uses lsblk command to list all mounted xfs filesystems [1].
> > However, lsblk command does not report that SCRATCH_DEV is mounted as xfs.
> > 3) I learned that lsblk command refers to udev database [2], and udev database
> > sometimes fails to update the filesystem information. This is the case for
> > the null_blk as SCRATCH_DEV on my test nodes.
>
> Hrm. I wonder if we're starting xfs_scrub_all too soon after the
> _scratch_cycle_mount? It's possible that if udev is running slowly,
> it won't yet have poked blkid to update its cache, in which case lsblk
> won't show it.
>
> If you add _udev_wait after _scratch_cycle_mount, does the "Health
> status has not been collected" problem go away? I couldn't reproduce
> this specific problem on my test VMs, but the "udev hasn't caught up and
> breaks fstests" pattern is very familiar. :/
Unfortunately, no. I made the change below in the test case, but I still see
the "Health status has not been collected" message.
diff --git a/tests/xfs/802 b/tests/xfs/802
index fc4767a..77e09f8 100755
--- a/tests/xfs/802
+++ b/tests/xfs/802
@@ -131,6 +131,8 @@ systemctl cat "$new_scruball_svc" >> $seqres.full
# Cycle mounts to clear all the incore CHECKED bits.
_scratch_cycle_mount
+_udev_wait $SCRATCH_DEV
+
echo "Scrub Everything"
run_scrub_service "$new_scruball_svc"
I also manually mounted the null_blk device with xfs and ran "udevadm settle".
Even then, lsblk still failed to report the fstype for the null_blk device
(FYI, I use Fedora 43 to recreate the failure).
>
> > Based on these observations, I think there are two points to improve:
> >
> > 1) I found "blkid -p" command reports that null_blk is mounted as xfs, even when
> > lsblk does not report it. I think xfs_scrub_all can be modified to use
> > "blkid -p" instead of lsblk to find out xfs filesystems mounted.
> > 2) When there are other xfs filesystems on the test node than TEST_DEV or
> > SCRATCH_DEV, xfs_scrub_all changes the status of them. This does not sound
> > good to me since it affects system status out of the test targets block
> > devices. I think the test case can be improved to check that there is no other
> > xfs filesystems mounted other than TEST_DEV or SCRATCH_DEV/s. If not, the
> > test case should be skipped.
>
> I wonder if a better solution would be to add to xfs_scrub_all a
> --restrict $SCRATCH_MNT --restrict $TEST_DIR option so that it ignores
> mounts that aren't under test?
Yes, I agree that it would be the better solution, since the test case would
not be skipped even when other xfs filesystems are mounted.
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-09 2:50 ` Shinichiro Kawasaki
@ 2026-02-09 6:07 ` Darrick J. Wong
2026-02-09 6:28 ` hch
0 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-09 6:07 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-xfs@vger.kernel.org, hch
On Mon, Feb 09, 2026 at 02:50:00AM +0000, Shinichiro Kawasaki wrote:
> On Feb 06, 2026 / 09:38, Darrick J. Wong wrote:
> > On Fri, Feb 06, 2026 at 08:40:07AM +0000, Shinichiro Kawasaki wrote:
> > > Hello Darrick,
> > >
> > > Recently, my fstests run for null_blk (8GiB size) as SCRATCH_DEV failed at
> > > xfs/802 [3]. I took a look and observed following points:
> > >
> > > 1) xfs_scrub_all command ran as expected. Even though SCRATCH_DEV is mounted,
> > > it did not scrub SCRATCH_DEV. Hence the failure.
> > > 2) xfs_scrub_all uses lsblk command to list all mounted xfs filesystems [1].
> > > However, lsblk command does not report that SCRATCH_DEV is mounted as xfs.
> > > 3) I learned that lsblk command refers to udev database [2], and udev database
> > > sometimes fails to update the filesystem information. This is the case for
> > > the null_blk as SCRATCH_DEV on my test nodes.
> >
> > Hrm. I wonder if we're starting xfs_scrub_all too soon after the
> > _scratch_cycle_mount? It's possible that if udev is running slowly,
> > it won't yet have poked blkid to update its cache, in which case lsblk
> > won't show it.
> >
> > If you add _udev_wait after _scratch_cycle_mount, does the "Health
> > status has not been collected" problem go away? I couldn't reproduce
> > this specific problem on my test VMs, but the "udev hasn't caught up and
> > breaks fstests" pattern is very familiar. :/
>
> Unfortunately, no. I made the change below in the test case, but I still see
> the "Health status has not been collected" message.
>
> diff --git a/tests/xfs/802 b/tests/xfs/802
> index fc4767a..77e09f8 100755
> --- a/tests/xfs/802
> +++ b/tests/xfs/802
> @@ -131,6 +131,8 @@ systemctl cat "$new_scruball_svc" >> $seqres.full
> # Cycle mounts to clear all the incore CHECKED bits.
> _scratch_cycle_mount
>
> +_udev_wait $SCRATCH_DEV
> +
> echo "Scrub Everything"
> run_scrub_service "$new_scruball_svc"
>
>
> I also manually mounted the null_blk device with xfs, and ran "udevadm settle".
> Then still lsblk was failing to report fstype for the null_blk device (FYI, I
> use Fedora 43 to recreate the failure).
Waitaminute, how can you even format xfs on nullblk to run fstests?
Isn't that the bdev that silently discards everything written to it, and
returns zero on reads??
--D
> >
> > > Based on these observations, I think there are two points to improve:
> > >
> > > 1) I found "blkid -p" command reports that null_blk is mounted as xfs, even when
> > > lsblk does not report it. I think xfs_scrub_all can be modified to use
> > > "blkid -p" instead of lsblk to find out xfs filesystems mounted.
> > > 2) When there are other xfs filesystems on the test node than TEST_DEV or
> > > SCRATCH_DEV, xfs_scrub_all changes the status of them. This does not sound
> > > good to me since it affects system status out of the test targets block
> > > devices. I think the test case can be improved to check that there is no other
> > > xfs filesystems mounted other than TEST_DEV or SCRATCH_DEV/s. If not, the
> > > test case should be skipped.
> >
> > I wonder if a better solution would be to add to xfs_scrub_all a
> > --restrict $SCRATCH_MNT --restrict $TEST_DIR option so that it ignores
> > mounts that aren't under test?
>
> Yes, I agree that it will be the better solution since the test case will not be
> skipped even when there are other xfs filesystems mounted.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-09 6:07 ` Darrick J. Wong
@ 2026-02-09 6:28 ` hch
2026-02-09 7:54 ` Shinichiro Kawasaki
0 siblings, 1 reply; 12+ messages in thread
From: hch @ 2026-02-09 6:28 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Shinichiro Kawasaki, linux-xfs@vger.kernel.org, hch
On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> Waitaminute, how can you even format xfs on nullblk to run fstests?
> Isn't that the bdev that silently discards everything written to it, and
> returns zero on reads??
nullblk can be used with or without a backing store. In the former
case it will not always return zeroes on reads obviously.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-09 6:28 ` hch
@ 2026-02-09 7:54 ` Shinichiro Kawasaki
2026-02-10 2:00 ` Darrick J. Wong
0 siblings, 1 reply; 12+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-09 7:54 UTC (permalink / raw)
To: hch; +Cc: Darrick J. Wong, linux-xfs@vger.kernel.org
On Feb 09, 2026 / 07:28, hch wrote:
> On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> > Waitaminute, how can you even format xfs on nullblk to run fstests?
> > Isn't that the bdev that silently discards everything written to it, and
> > returns zero on reads??
>
> nullblk can be used with or without a backing store. In the former
> case it will not always return zeroes on reads obviously.
Yes, null_blk has the "memory_backed" parameter. When it is set to 1, data
written to the null_blk device is kept and can be read back. I create two 8GiB
null_blk devices with this memory_backed option enabled, and use them as
TEST_DEV and SCRATCH_DEV for my regular xfs test runs.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-09 7:54 ` Shinichiro Kawasaki
@ 2026-02-10 2:00 ` Darrick J. Wong
2026-02-10 6:17 ` Darrick J. Wong
2026-02-10 6:19 ` Shinichiro Kawasaki
0 siblings, 2 replies; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-10 2:00 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: hch, linux-xfs@vger.kernel.org
On Mon, Feb 09, 2026 at 07:54:38AM +0000, Shinichiro Kawasaki wrote:
> On Feb 09, 2026 / 07:28, hch wrote:
> > On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> > > Waitaminute, how can you even format xfs on nullblk to run fstests?
> > > Isn't that the bdev that silently discards everything written to it, and
> > > returns zero on reads??
> >
> > nullblk can be used with or without a backing store. In the former
> > case it will not always return zeroes on reads obviously.
>
> Yes, null_blk has the "memory_backed" parameter. When 1 is set to this, data
> written to the null_blk device is kept and read back. I create two 8GiB null_blk
> devices enabling this memory_backed option, and use them as TEST_DEV and
> SCRATCH_DEV for the regular xfs test runs.
Huh, ok. Just out of curiosity, does blkid (in cache mode) /ever/ see
the xfs filesystem? I'm wondering if there's a race, a slow utility, or
if whatever builds the blkid cache sees that it's nullblk and ignores
it?
--D
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-10 2:00 ` Darrick J. Wong
@ 2026-02-10 6:17 ` Darrick J. Wong
2026-02-10 6:19 ` Shinichiro Kawasaki
1 sibling, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-10 6:17 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: hch, linux-xfs@vger.kernel.org
On Mon, Feb 09, 2026 at 06:00:40PM -0800, Darrick J. Wong wrote:
> On Mon, Feb 09, 2026 at 07:54:38AM +0000, Shinichiro Kawasaki wrote:
> > On Feb 09, 2026 / 07:28, hch wrote:
> > > On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> > > > Waitaminute, how can you even format xfs on nullblk to run fstests?
> > > > Isn't that the bdev that silently discards everything written to it, and
> > > > returns zero on reads??
> > >
> > > nullblk can be used with or without a backing store. In the former
> > > case it will not always return zeroes on reads obviously.
> >
> > Yes, null_blk has the "memory_backed" parameter. When 1 is set to this, data
> > written to the null_blk device is kept and read back. I create two 8GiB null_blk
> > devices enabling this memory_backed option, and use them as TEST_DEV and
> > SCRATCH_DEV for the regular xfs test runs.
>
> Huh, ok. Just out of curiosity, does blkid (in cache mode) /ever/ see
> the xfs filesystem? I'm wondering if there's a race, a slow utility, or
> if whatever builds the blkid cache sees that it's nullblk and ignores
> it?
Ah, I see. The problem isn't *blkid* failing to see the new xfs
filesystem, it's lsblk failing to see that it has an xfs filesystem:
# udevadm monitor &
[1] 5743
# monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent
# mkfs.xfs -f /dev/nullb0
meta-data=/dev/nullb0 isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=1 metadir=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= rgcount=0 rgsize=0 extents
= zoned=0 start=0 reserved=0
Discarding blocks...Done.
#
<taps foot>
# mkfs.xfs -f /dev/sda
meta-data=/dev/sda isize=512 agcount=4, agsize=314368 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=1 metadir=0
data = bsize=4096 blocks=1257472, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= rgcount=0 rgsize=0 extents
= zoned=0 start=0 reserved=0
Discarding blocks...Done.
# KERNEL[1500.715783] change /devices/pci0000:00/0000:00:06.0/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
UDEV [1500.806556] change /devices/pci0000:00/0000:00:06.0/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
So for some reason nullb0 doesn't generate kernel uevents when mkfs.xfs
closes the block device, like it does for scsi disks. I don't know why
that is, but I'll look at it when I get a chance; it's very late here
now.
--D
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-10 2:00 ` Darrick J. Wong
2026-02-10 6:17 ` Darrick J. Wong
@ 2026-02-10 6:19 ` Shinichiro Kawasaki
2026-02-13 22:14 ` Darrick J. Wong
1 sibling, 1 reply; 12+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-10 6:19 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: hch, linux-xfs@vger.kernel.org
On Feb 09, 2026 / 18:00, Darrick J. Wong wrote:
> On Mon, Feb 09, 2026 at 07:54:38AM +0000, Shinichiro Kawasaki wrote:
> > On Feb 09, 2026 / 07:28, hch wrote:
> > > On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> > > > Waitaminute, how can you even format xfs on nullblk to run fstests?
> > > > Isn't that the bdev that silently discards everything written to it, and
> > > > returns zero on reads??
> > >
> > > nullblk can be used with or without a backing store. In the former
> > > case it will not always return zeroes on reads obviously.
> >
> > Yes, null_blk has the "memory_backed" parameter. When 1 is set to this, data
> > written to the null_blk device is kept and read back. I create two 8GiB null_blk
> > devices enabling this memory_backed option, and use them as TEST_DEV and
> > SCRATCH_DEV for the regular xfs test runs.
>
> Huh, ok. Just out of curiosity, does blkid (in cache mode) /ever/ see
> the xfs filesystem? I'm wondering if there's a race, a slow utility, or
> if whatever builds the blkid cache sees that it's nullblk and ignores
> it?
I tried the experiment below, using /dev/nullb1 formatted as xfs:
# Clear blkid cache
$ sudo rm /run/blkid/blkid.tab
# Call blkid as a normal user; it cannot parse the superblock, so it cannot get the fstype.
$ blkid --match-tag=TYPE /dev/nullb1
# Call blkid with superuser privileges. It can get the fstype, but does not
# cache it, since the --probe option is specified.
$ sudo blkid --probe --match-tag=TYPE /dev/nullb1
/dev/nullb1: TYPE="xfs"
# The normal user still cannot get the fstype, since it is not cached.
$ blkid --match-tag=TYPE /dev/nullb1
# Call blkid as the superuser without the --probe option. It caches the fstype.
$ sudo blkid --match-tag=TYPE /dev/nullb1
/dev/nullb1: TYPE="xfs"
# Now the normal user can get the fstype from the cache.
$ blkid --match-tag=TYPE /dev/nullb1
/dev/nullb1: TYPE="xfs"
Based on this result, my understanding is that blkid caches its superblock
probe results when the --probe (-p) option is not specified. As far as I can
tell from grepping util-linux, this behavior is not special-cased for null_blk.
Anyway, I think blkid with the --probe option is a good fit for fstests usage,
since it directly checks the superblock of the target block devices.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-10 6:19 ` Shinichiro Kawasaki
@ 2026-02-13 22:14 ` Darrick J. Wong
2026-02-14 6:39 ` Shinichiro Kawasaki
0 siblings, 1 reply; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-13 22:14 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: hch, linux-xfs@vger.kernel.org
On Tue, Feb 10, 2026 at 06:19:21AM +0000, Shinichiro Kawasaki wrote:
> On Feb 09, 2026 / 18:00, Darrick J. Wong wrote:
> > On Mon, Feb 09, 2026 at 07:54:38AM +0000, Shinichiro Kawasaki wrote:
> > > On Feb 09, 2026 / 07:28, hch wrote:
> > > > On Sun, Feb 08, 2026 at 10:07:16PM -0800, Darrick J. Wong wrote:
> > > > > Waitaminute, how can you even format xfs on nullblk to run fstests?
> > > > > Isn't that the bdev that silently discards everything written to it, and
> > > > > returns zero on reads??
> > > >
> > > > nullblk can be used with or without a backing store. In the former
> > > > case it will not always return zeroes on reads obviously.
> > >
> > > Yes, null_blk has the "memory_backed" parameter. When 1 is set to this, data
> > > written to the null_blk device is kept and read back. I create two 8GiB null_blk
> > > devices enabling this memory_backed option, and use them as TEST_DEV and
> > > SCRATCH_DEV for the regular xfs test runs.
> >
> > Huh, ok. Just out of curiosity, does blkid (in cache mode) /ever/ see
> > the xfs filesystem? I'm wondering if there's a race, a slow utility, or
> > if whatever builds the blkid cache sees that it's nullblk and ignores
> > it?
>
> I tried the experiment below, using /dev/nullb1 formatted as xfs:
>
> # Clear blkid cache
> $ sudo rm /run/blkid/blkid.tab
>
> # Call blkid, but normal user can not parse superblock, then can not get fstype.
> $ blkid --match-tag=TYPE /dev/nullb1
>
> # Call blkid with superuser privilege. It can get fstype, but does not cache it,
> # since --probe option is specified.
> $ sudo blkid --probe --match-tag=TYPE /dev/nullb1
> /dev/nullb1: TYPE="xfs"
>
> # Still normal user can not get fstype since fstype is not cached.
> $ blkid --match-tag=TYPE /dev/nullb1
>
> # Call blkid as superuser without --probe option. It caches the fstype.
> $sudo blkid --match-tag=TYPE /dev/nullb1
> /dev/nullb1: TYPE="xfs"
>
> # Now normal user can get fstype referring to the cache
> $ blkid --match-tag=TYPE /dev/nullb1
> /dev/nullb1: TYPE="xfs"
>
>
> Based on this result, my understanding is that blkid caches its superblock
> parse results when --probe, or -p option, is not specified. As far as I git
> grep util-linux, this behavior does not change for null_blk.
<sigh> I just spent two hours digging further into why your nullblk
device doesn't show up in the lsblk output.
Let's start by creating a nullblk device and formatting it:
# modprobe null_blk gb=1 memory_backed=1
# mkfs.ext2 -F /dev/nullb0
# mkfs.ext2 -F /dev/sda
# mount /dev/nullb0 /mnt
Now let's query lsblk:
# lsblk -o NAME,KNAME,TYPE,FSTYPE,MOUNTPOINT,UUID
NAME KNAME TYPE FSTYPE MOUNTPOINT UUID
sda sda disk ext2 cca89aa9-2dfd-4609-9f62-8a3c88c2054a
nullb0 nullb0 disk /mnt
For nullb0, lsblk finds the mountpoint, but it can't identify the ext2
filesystem on it. Stracing lsblk, I see that it opens
/run/udev/data/b${major}:${minor} to find out the filesystem type.
# cat /run/udev/data/b252\:0
I:991780315
G:systemd
Q:systemd
V:1
# cat /run/udev/data/b8\:0
S:disk/by-uuid/cca89aa9-2dfd-4609-9f62-8a3c88c2054a
S:disk/by-path/pci-0000:00:06.0-scsi-0:0:0:0
S:disk/by-diskseq/1
S:disk/by-id/scsi-0QEMU_RAMDISK_drive-scsi0-0-0-0
I:592459
E:ID_FS_UUID=cca89aa9-2dfd-4609-9f62-8a3c88c2054a
E:ID_FS_UUID_ENC=cca89aa9-2dfd-4609-9f62-8a3c88c2054a
E:ID_FS_VERSION=1.0
E:ID_FS_BLOCKSIZE=4096
E:ID_FS_LASTBLOCK=2579968
E:ID_FS_SIZE=10567548928
E:ID_FS_TYPE=ext2
E:ID_FS_USAGE=filesystem
E:ID_SCSI=1
E:ID_VENDOR=QEMU
E:ID_VENDOR_ENC=QEMU\x20\x20\x20\x20
E:ID_MODEL=RAMDISK
E:ID_MODEL_ENC=RAMDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20
E:ID_REVISION=2.5+
E:ID_TYPE=disk
E:ID_SERIAL=0QEMU_RAMDISK_drive-scsi0-0-0-0
E:ID_SERIAL_SHORT=drive-scsi0-0-0-0
E:ID_BUS=scsi
E:ID_PATH=pci-0000:00:06.0-scsi-0:0:0:0
E:ID_PATH_TAG=pci-0000_00_06_0-scsi-0_0_0_0
E:UDISKS_AUTO=0
G:systemd
Q:systemd
V:1
As you can see, the udev device database saw that sda has an ext2
filesystem, but recorded almost nothing for nullb0. That's why lsblk
doesn't detect an fstype for nullb0. Why doesn't udev record anything for
nullb0? I suspect it has something to do with this hunk of
60-block.rules:
ACTION!="remove", SUBSYSTEM=="block", \
KERNEL=="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*|rbd*|zram*|ublkb*", \
OPTIONS+="watch"
This causes udev to establish an inotify watch on block devices. When a
bdev is opened for write and closed, udev receives the inotify event and
synthesizes a change uevent. Annoyingly, creating a new rule file with:
ACTION!="remove", SUBSYSTEM=="block", \
KERNEL=="nullb*", \
OPTIONS+="watch"
doesn't fix the problem, and I'm not familiar enough with the set of
udev rule files on a Debian 13 system to make any further diagnoses. If
you're really interested in using nullblk as a ramdisk for this purpose
then I think you should file a bug against systemd to make lsblk work
properly for nullblk.
Note: blkid without the -p looks at /run/blkid/blkid.tab and does not
pay attention to the /run/udev files. I don't know why the two
utilities look at different files.
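As an aside, the udev database entries shown above are line-oriented
"<prefix>:<payload>" records, where "E:" lines carry the NAME=value device
properties that lsblk consumes. A small sketch (my own helper, not a
util-linux or systemd API) shows the extraction:

```python
# Sketch: pull the E: properties out of a /run/udev/data/b<maj>:<min>
# entry; ID_FS_TYPE is what lsblk uses to fill its FSTYPE column.
def parse_udev_db(text):
    props = {}
    for line in text.splitlines():
        if line.startswith('E:'):  # E: lines are NAME=value properties
            key, sep, value = line[2:].partition('=')
            if sep:
                props[key] = value
    return props

# Abbreviated version of the sda entry quoted above.
sda_entry = """S:disk/by-uuid/cca89aa9-2dfd-4609-9f62-8a3c88c2054a
E:ID_FS_UUID=cca89aa9-2dfd-4609-9f62-8a3c88c2054a
E:ID_FS_TYPE=ext2
E:ID_FS_USAGE=filesystem
G:systemd
"""
# The nullb0 entry has no E:ID_FS_* lines at all, which is why lsblk's
# FSTYPE column comes up empty for it.
```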
> Anyway, I think blkid with --probe option is good for fstests usage, since it
> directly checks the superblock of the target block devices.
That's not an attractive option for fixing xfs/802. The test fails
because xfs_scrub is never run against the scratch fs on the nullblk.
The scratch fs is not seen by xfs_scrub_all because lsblk doesn't see a
fstype for nullb0. lsblk doesn't see that because (apparently) udev
doesn't touch nullb0.
The lsblk call is internal to xfs_scrub_all; it needs lsblk's json
output to find all mounted XFS filesystems on the system. blkid doesn't
reveal anything about mount points.
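The enumeration described here can be sketched as follows; the JSON is a
canned example shaped like "lsblk -J" output (older lsblk emits a
"mountpoint" field, newer versions a "mountpoints" array), not real output
from this system:

```python
# Sketch: enumerate mounted XFS filesystems from lsblk JSON output,
# roughly mirroring what xfs_scrub_all does internally. A device whose
# fstype udev never recorded shows up with "fstype": null and is
# silently skipped -- which is exactly how the scratch fs gets lost.
import json

def mounted_xfs(lsblk_json):
    found = []
    def walk(devices):
        for dev in devices:
            if dev.get('fstype') == 'xfs' and dev.get('mountpoint'):
                found.append((dev['name'], dev['mountpoint']))
            walk(dev.get('children', []))  # recurse into partitions etc.
    walk(json.loads(lsblk_json).get('blockdevices', []))
    return found

sample = '''{"blockdevices": [
 {"name": "sda", "fstype": "xfs", "mountpoint": "/mnt"},
 {"name": "nullb0", "fstype": null, "mountpoint": "/var/kts/scratch"}]}'''
```

Feeding the sample through mounted_xfs() finds only sda; nullb0 is
dropped even though it is mounted.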
Yes, we could change xfs_scrub_all to call blkid -p on every block
device for which lsblk doesn't find a fstype but does find a mountpoint,
but at that point I say xfs shouldn't be working around bugs in udev
that concern an ephemeral block device.
--D
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-13 22:14 ` Darrick J. Wong
@ 2026-02-14 6:39 ` Shinichiro Kawasaki
2026-02-14 7:39 ` Darrick J. Wong
0 siblings, 1 reply; 12+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-14 6:39 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: hch, linux-xfs@vger.kernel.org
On Feb 13, 2026 / 14:14, Darrick J. Wong wrote:
[...]
> Why doesn't udev record anything for
> nullb0? I suspect it has something to do with this hunk of
> 60-block.rules:
>
> ACTION!="remove", SUBSYSTEM=="block", \
> KERNEL=="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*|rbd*|zram*|ublkb*", \
> OPTIONS+="watch"
>
> This causes udev to establish an inotify watch on block devices. When a
> bdev is opened for write and closed, udev receives the inotify event and
> synthesizes a change uevent. Annoyingly, creating a new rule file with:
>
> ACTION!="remove", SUBSYSTEM=="block", \
> KERNEL=="nullb*", \
> OPTIONS+="watch"
>
> doesn't fix the problem, and I'm not familiar enough with the set of
> udev rule files on a Debian 13 system to make any further diagnoses. If
> you're really interested in using nullblk as a ramdisk for this purpose
> then I think you should file a bug against systemd to make lsblk work
> properly for nullblk.
Darrick, thank you very much for digging into this and sharing the interesting
findings. Yes, it is really mysterious why null_blk is not handled like other
block devices. This motivated me to look into the udev rules, and I found that
60-persistent-storage.rules does this:
...
KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*|rbd*|zram*|ublkb*", GOTO="persistent_storage_end"
...
# probe filesystem metadata of disks
KERNEL!="sr*|mmcblk[0-9]boot[0-9]", IMPORT{builtin}="blkid"
...
LABEL="persistent_storage_end"
The blkid builtin appears to record the block device attributes in the udev
database. I added one more rule file, as follows, on top of the rule file you
added:
ACTION!="remove", SUBSYSTEM=="block", \
KERNEL=="nullb*", \
IMPORT{builtin}="blkid"
With this change, lsblk now reports that the null_blk device has xfs! I also
confirmed that the test case xfs/802 passes.
> > Anyway, I think blkid with --probe option is good for fstests usage, since it
> > directly checks the superblock of the target block devices.
>
> That's not an attractive option for fixing xfs/802. The test fails
> because xfs_scrub is never run against the scratch fs on the nullblk.
> The scratch fs is not seen by xfs_scrub_all because lsblk doesn't see a
> fstype for nullb0. lsblk doesn't see that because (apparently) udev
> doesn't touch nullb0.
>
> The lsblk call is internal to xfs_scrub_all; it needs lsblk's json
> output to find all mounted XFS filesystems on the system. blkid doesn't
> reveal anything about mount points.
>
> Yes, we could change xfs_scrub_all to call blkid -p on every block
> device for which lsblk doesn't find a fstype but does find a mountpoint,
> but at that point I say xfs shouldn't be working around bugs in udev
> that concern an ephemeral block device.
Thanks for the explanation. My takeaway is that systemd/udev support is a
prerequisite for fstests target block devices. I suggested blkid -p because I
assumed that fstests would be independent of systemd/udev, but that assumption
was wrong.
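As a side note, the difference between the two probing paths can be seen directly. A sketch, assuming /dev/nullb0 carries an XFS filesystem and blkid -p is run as root:

```shell
# lsblk answers from the udev database, so FSTYPE stays empty until udev
# has run its blkid builtin against the device.
lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/nullb0

# blkid --probe bypasses the udev database and reads the superblock
# directly, so it reports TYPE="xfs" even when lsblk shows nothing.
blkid -p /dev/nullb0
```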
My next action is to set up the udev rules for null_blk in my test environments.
Thank you again for your effort.
* Re: [bug report] xfs/802 failure due to missing fstype report by lsblk
2026-02-14 6:39 ` Shinichiro Kawasaki
@ 2026-02-14 7:39 ` Darrick J. Wong
0 siblings, 0 replies; 12+ messages in thread
From: Darrick J. Wong @ 2026-02-14 7:39 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: hch, linux-xfs@vger.kernel.org
On Sat, Feb 14, 2026 at 06:39:57AM +0000, Shinichiro Kawasaki wrote:
> On Feb 13, 2026 / 14:14, Darrick J. Wong wrote:
> [...]
> > Why doesn't udev record anything for
> > nullb0? I suspect it has something to do with this hunk of
> > 60-block.rules:
> >
> > ACTION!="remove", SUBSYSTEM=="block", \
> > KERNEL=="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*|rbd*|zram*|ublkb*", \
> > OPTIONS+="watch"
> >
> > This causes udev to establish an inotify watch on block devices. When a
> > bdev is opened for write and closed, udev receives the inotify event and
> > synthesizes a change uevent. Annoyingly, creating a new rule file with:
> >
> > ACTION!="remove", SUBSYSTEM=="block", \
> > KERNEL=="nullb*", \
> > OPTIONS+="watch"
> >
> > doesn't fix the problem, and I'm not familiar enough with the set of
> > udev rule files on a Debian 13 system to make any further diagnoses. If
> > you're really interested in using nullblk as a ramdisk for this purpose
> > then I think you should file a bug against systemd to make lsblk work
> > properly for nullblk.
>
> Darrick, thank you very much for digging into this and sharing the interesting
> findings. Yes, it is really mysterious why null_blk is not handled like other
> block devices. This motivated me to look into the udev rules, and I found that
> 60-persistent-storage.rules does this:
>
> ...
> KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|ubi*|scm*|pmem*|nbd*|zd*|rbd*|zram*|ublkb*", GOTO="persistent_storage_end"
> ...
> # probe filesystem metadata of disks
> KERNEL!="sr*|mmcblk[0-9]boot[0-9]", IMPORT{builtin}="blkid"
> ...
> LABEL="persistent_storage_end"
>
> The "blkid" builtin appears to record the block device attributes in the udev
> database. I added one more rule file as follows, on top of the rule file you
> added:
>
> ACTION!="remove", SUBSYSTEM=="block", \
> KERNEL=="nullb*", \
> IMPORT{builtin}="blkid"
>
> With this change, lsblk now reports that the null_blk device has xfs! I also
> confirmed that the test case xfs/802 passes.
Excellent!
> > > Anyway, I think blkid with --probe option is good for fstests usage, since it
> > > directly checks the superblock of the target block devices.
> >
> > That's not an attractive option for fixing xfs/802. The test fails
> > because xfs_scrub is never run against the scratch fs on the nullblk.
> > The scratch fs is not seen by xfs_scrub_all because lsblk doesn't see a
> > fstype for nullb0. lsblk doesn't see that because (apparently) udev
> > doesn't touch nullb0.
> >
> > The lsblk call is internal to xfs_scrub_all; it needs lsblk's json
> > output to find all mounted XFS filesystems on the system. blkid doesn't
> > reveal anything about mount points.
> >
> > Yes, we could change xfs_scrub_all to call blkid -p on every block
> > device for which lsblk doesn't find a fstype but does find a mountpoint,
> > but at that point I say xfs shouldn't be working around bugs in udev
> > that concern an ephemeral block device.
>
> Thanks for the explanation. My takeaway is that systemd/udev support is a
> prerequisite for fstests target block devices. I suggested blkid -p because I
> assumed that fstests would be independent of systemd/udev, but that assumption
> was wrong.
>
> My next action is to set up the udev rules for null_blk in my test environments.
> Thank you again for your effort.
If you decide to send a PR to systemd to fix the udev rules upstream,
please cc me if they push back. Thanks for your persistence!
--D
Thread overview: 12+ messages
2026-02-06 8:40 [bug report] xfs/802 failure due to missing fstype report by lsblk Shinichiro Kawasaki
2026-02-06 17:38 ` Darrick J. Wong
2026-02-09 2:50 ` Shinichiro Kawasaki
2026-02-09 6:07 ` Darrick J. Wong
2026-02-09 6:28 ` hch
2026-02-09 7:54 ` Shinichiro Kawasaki
2026-02-10 2:00 ` Darrick J. Wong
2026-02-10 6:17 ` Darrick J. Wong
2026-02-10 6:19 ` Shinichiro Kawasaki
2026-02-13 22:14 ` Darrick J. Wong
2026-02-14 6:39 ` Shinichiro Kawasaki
2026-02-14 7:39 ` Darrick J. Wong