* Too many xfs-conv kworker threads
@ 2025-11-25 19:49 Karim Manaouil
From: Karim Manaouil @ 2025-11-25 19:49 UTC (permalink / raw)
  To: Carlos Maiolino; +Cc: linux-xfs, linux-kernel, Karim Manaouil

Hi folks,

I have four NVMe SSDs in RAID0 with XFS, running an upstream Linux 6.15
kernel at commit e5f0a698b34ed76002dc5cff3804a61c80233a7a. The setup can
achieve 25 GB/s and more than 2M IOPS. The machine is a dual-socket
AMD EPYC 9224 (24 cores per socket).

I am running thpchallenge-fio from mmtests (its purpose is described
in [1]). It is a fio job that inefficiently writes a large number of
64K files; on a system with 128 GiB of RAM, it can create up to 100K
files. A typical fio config looks like this:

[global]
direct=0
ioengine=sync
blocksize=4096
invalidate=0
fallocate=none
create_on_open=1

[writer]
nrfiles=785988
filesize=65536
readwrite=write
numjobs=4
filename_format=$jobnum/workfile.$filenum
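
For reference, I drive this through mmtests, but the job file above can
also be run standalone, e.g. (the mount point is just an example, and
the per-job subdirectories 0-3 must already exist under it):

  $ cd /mnt/xfs && fio thpchallenge.fio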

I noticed that, at some point, top reported around 42650 sleeping
tasks, for example:

Tasks: 42651 total,   1 running, 42650 sleeping,   0 stopped,   0 zombie

This is a test machine running vanilla Debian, from a fresh boot.

After checking, it turned out to be a massive list of xfs-conv
kworkers, something like this (truncated):

  58214 ?        I      0:00 [kworker/47:203-xfs-conv/md127]
  58215 ?        I      0:00 [kworker/47:204-xfs-conv/md127]
  58216 ?        I      0:00 [kworker/47:205-xfs-conv/md127]
  58217 ?        I      0:00 [kworker/47:206-xfs-conv/md127]
  58219 ?        I      0:00 [kworker/12:539-xfs-conv/md127]
  58220 ?        I      0:00 [kworker/12:540-xfs-conv/md127]
  58221 ?        I      0:00 [kworker/12:541-xfs-conv/md127]
  58222 ?        I      0:00 [kworker/12:542-xfs-conv/md127]
  58223 ?        I      0:00 [kworker/12:543-xfs-conv/md127]
  58224 ?        I      0:00 [kworker/12:544-xfs-conv/md127]
  58225 ?        I      0:00 [kworker/12:545-xfs-conv/md127]
  58227 ?        I      0:00 [kworker/38:155-xfs-conv/md127]
  58228 ?        I      0:00 [kworker/38:156-xfs-conv/md127]
  58230 ?        I      0:00 [kworker/38:158-xfs-conv/md127]
  58233 ?        I      0:00 [kworker/38:161-xfs-conv/md127]
  58235 ?        I      0:00 [kworker/8:537-xfs-conv/md127]
  58237 ?        I      0:00 [kworker/8:539-xfs-conv/md127]
  58238 ?        I      0:00 [kworker/8:540-xfs-conv/md127]
  58239 ?        I      0:00 [kworker/8:541-xfs-conv/md127]
  58240 ?        I      0:00 [kworker/8:542-xfs-conv/md127]
  58241 ?        I      0:00 [kworker/8:543-xfs-conv/md127]
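
A quick way to count them, and to break them down per CPU (assuming a
kernel recent enough to report the workqueue name in the kworker comm,
as above):

  $ ps -e -o comm= | grep -c 'xfs-conv/md127'
  $ ps -e -o comm= | grep 'xfs-conv/md127' | cut -d/ -f2 | cut -d: -f1 | sort -n | uniq -c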

It seems like the kernel is creating far too many kworkers on each
CPU. I am not sure whether this has any real effect on performance,
but presumably there is at least some scheduling and memory overhead.
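
For context, my understanding is that xfs-conv is the per-mount
workqueue XFS uses to convert unwritten extents when buffered writeback
I/O completes. Paraphrasing from my reading of
xfs_init_mount_workqueues() in fs/xfs/xfs_super.c (a sketch, not the
literal 6.15 source, so the flags may differ):

	/*
	 * Per-mount workqueue for unwritten extent conversion at
	 * I/O completion; "%s" is the device name, md127 here.
	 * A max_active of 0 selects the default, which IIUC is 256
	 * concurrent work items per CPU for a bound workqueue.
	 */
	mp->m_unwritten_workqueue = alloc_workqueue("xfs-conv/%s",
			WQ_FREEZABLE | WQ_MEM_RECLAIM, 0,
			mp->m_super->s_id);

If that reading is right, the workqueue core is allowed to spawn
hundreds of workers per CPU to keep completions concurrent, and idle
workers linger for a while before being reaped, which would roughly
match the counts above.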

Still: is this a healthy number of kworkers? Is it even needed, when
only four parallel threads are writing to ~80K small files?

[1] https://lwn.net/Articles/770235/
-- 
~karim

Thread overview: 4+ messages
2025-11-25 19:49 Too many xfs-conv kworker threads Karim Manaouil
2025-11-25 22:31 ` Dave Chinner
2025-11-26 13:27   ` Karim Manaouil
2025-11-26 22:33     ` Dave Chinner
