From: "Darrick J. Wong" <djwong@kernel.org>
To: Patrick Fischer <patrick.fischer@siedl.net>
Cc: linux-xfs@vger.kernel.org
Subject: Re: xfs_scrub_all process execution results in a dead lock condition
Date: Thu, 30 Apr 2026 08:50:27 -0700 [thread overview]
Message-ID: <20260430155027.GE7751@frogsfrogsfrogs> (raw)
In-Reply-To: <323580211.1220195.1777554001363.JavaMail.zimbra@siedl.net>
On Thu, Apr 30, 2026 at 03:00:01PM +0200, Patrick Fischer wrote:
> Hello,
> I've encountered a bug in the xfsprogs-dev utilities, specifically
> in the Python script xfs_scrub_all. I checked the master branch and
> saw that the subprocess call there is the same as in kernel version
> 6.13.0-2, where I first stumbled across this issue.
"6.13.0-2" ... Debian Trixie?
> Overview:
> The xfs_scrub_all.service systemd unit (or a manual run) hangs due
> to pipe buffer exhaustion after the script spawns an lsblk
> subprocess.
>
> Steps to reproduce:
> Create a bunch of fake block devices to enlarge the output of lsblk to
> more than 65520 bytes:
> > modprobe scsi_debug max_luns=3 num_tgts=7 add_host=100
/me notes that this wasn't enough to generate more than 60k of lsblk
output. Creating a fake 130k json file and changing cmd to
['cat', '/tmp/garbage'] was sufficient to reproduce the problem,
however.
> Run the command of xfs_scrub_all.service manually:
> > /usr/sbin/xfs_scrub_all --auto-media-scan-interval 1mo
>
>
> xfs_scrub_all is blocked in wait4() on the lsblk subprocess:
> > wait4(2148527,
>
>
> Meanwhile, the lsblk subprocess is blocked writing to fd 1 (stdout):
> > write(1, " {\n "..., 4096
>
>
> Affected Code in /usr/sbin/xfs_scrub_all[1]:
> > 54 cmd=['lsblk', '-o', 'NAME,KNAME,TYPE,FSTYPE,MOUNTPOINT', '-J']
> > 55 result = subprocess.Popen(cmd, stdout=subprocess.PIPE)
> > 56 result.wait()
> > 57 if result.returncode != 0:
> > 58 return fs
>
>
> Actual Results:
> The command above launches an lsblk subprocess whose output exceeds
> 65520 bytes, so xfs_scrub_all waits forever for the child to exit.
Yep, that's a bug.
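
For anyone following along, this is the classic Popen pitfall: calling
wait() on a child whose stdout pipe nobody is draining. A minimal sketch
(the python child and the 128 KiB size are made up for illustration;
Linux pipes default to 64 KiB):

```python
import subprocess
import sys

# Child writes ~128 KiB, more than the default 64 KiB Linux pipe buffer.
child = "import sys; sys.stdout.write('x' * (128 * 1024))"

# Broken pattern (what xfs_scrub_all did): wait()ing before reading
# stdout deadlocks once the child blocks on the full pipe:
#   proc = subprocess.Popen([sys.executable, '-c', child],
#                           stdout=subprocess.PIPE)
#   proc.wait()   # never returns
#
# Working pattern: communicate() drains the pipe while reaping the child.
proc = subprocess.Popen([sys.executable, '-c', child],
                        stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(len(out))
```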
Now that we can assume (demand?) python >= 3.5, I think we can replace
all the string iteration and collection mess in that function with a
simpler call to subprocess.run:
    cmd = ['lsblk', '-o', 'NAME,KNAME,TYPE,FSTYPE,MOUNTPOINT', '-J']
    try:
        # stdout=PIPE + universal_newlines keeps this working on 3.5;
        # check=True raises CalledProcessError on a nonzero exit, so no
        # separate returncode test is needed.
        proc = subprocess.run(cmd, stdout=subprocess.PIPE,
                              universal_newlines=True, check=True)
    except Exception as e:
        print(e)
        return fs

    # The lsblk output had better be in disks-then-partitions order
    bdevdata = json.loads(proc.stdout)
    for bdev in bdevdata['blockdevices']:
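
As a sanity check that subprocess.run() doesn't hit the same pipe
limit, here's a rough sketch using a python child as a stand-in for
lsblk -J (the 5000-entry device list is invented just to push the JSON
well past 64 KiB):

```python
import json
import subprocess
import sys

# Stand-in for lsblk -J: emit a large JSON document on stdout.
payload = json.dumps(
    {'blockdevices': [{'name': 'sd%d' % i} for i in range(5000)]})
child = 'import sys; sys.stdout.write(%r)' % payload

# subprocess.run() reads the whole pipe before reaping the child,
# so output larger than the pipe buffer cannot deadlock it.
proc = subprocess.run([sys.executable, '-c', child],
                      stdout=subprocess.PIPE, universal_newlines=True,
                      check=True)
data = json.loads(proc.stdout)
print(len(data['blockdevices']))
```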
> Expected Results:
> The unit / process should not deadlock.
>
> [1] https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/tree/scrub/xfs_scrub_all.py.in
Thanks for the report, sorry there was a bug. I'll post a fixpatch
soon.
--D
>
> Regards,
> Patrick Fischer
>
> This e-mail was checked by IKARUS mail.security.
>
2026-04-30 13:00 xfs_scrub_all process execution results in a dead lock condition Patrick Fischer
2026-04-30 15:50 ` Darrick J. Wong [this message]