linux-xfs.vger.kernel.org archive mirror
From: bugzilla-daemon@kernel.org
To: linux-xfs@vger.kernel.org
Subject: [Bug 216007] XFS hangs in iowait when extracting large number of files
Date: Mon, 23 May 2022 08:29:14 +0000	[thread overview]
Message-ID: <bug-216007-201763-bROEL8eFC5@https.bugzilla.kernel.org/> (raw)
In-Reply-To: <bug-216007-201763@https.bugzilla.kernel.org/>

https://bugzilla.kernel.org/show_bug.cgi?id=216007

--- Comment #6 from Peter Pavlisko (bugzkernelorg8392@araxon.sk) ---
(In reply to Chris Murphy from comment #2)
> Please see
> https://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> and supplement the bug report with the missing information. Thanks.

certainly!

> kernel version (uname -a)

5.15.32-gentoo-r1

> xfsprogs version (xfs_repair -V)

xfs_repair version 5.14.2

> number of CPUs

1 CPU, 2 cores (Intel Celeron G1610T @ 2.30GHz)

> contents of /proc/meminfo

(captured at the time of the iowait hang)

MemTotal:        3995528 kB
MemFree:           29096 kB
MemAvailable:    3749216 kB
Buffers:           19984 kB
Cached:          3556248 kB
SwapCached:            0 kB
Active:            62888 kB
Inactive:        3560968 kB
Active(anon):        272 kB
Inactive(anon):    47772 kB
Active(file):      62616 kB
Inactive(file):  3513196 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097084 kB
SwapFree:        2097084 kB
Dirty:                28 kB
Writeback:             0 kB
AnonPages:         47628 kB
Mapped:            19540 kB
Shmem:               416 kB
KReclaimable:     199964 kB
Slab:             286472 kB
SReclaimable:     199964 kB
SUnreclaim:        86508 kB
KernelStack:        1984 kB
PageTables:         1648 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     4094848 kB
Committed_AS:     117448 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        2764 kB
VmallocChunk:          0 kB
Percpu:              832 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:       20364 kB
DirectMap2M:     4139008 kB

> contents of /proc/mounts

/dev/root / ext4 rw,noatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,relatime,size=10240k,nr_inodes=498934,mode=755 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run tmpfs rw,nosuid,nodev,size=799220k,nr_inodes=819200,mode=755 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
cgroup_root /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755 0 0
openrc /sys/fs/cgroup/openrc cgroup rw,nosuid,nodev,noexec,relatime,release_agent=/lib/rc/sh/cgroup-release-agent.sh,name=openrc 0 0
none /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
/dev/sdc1 /mnt/test xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

> contents of /proc/partitions

major minor  #blocks  name

   1        0      16384 ram0
   1        1      16384 ram1
   1        2      16384 ram2
   1        3      16384 ram3
   1        4      16384 ram4
   1        5      16384 ram5
   1        6      16384 ram6
   1        7      16384 ram7
   1        8      16384 ram8
   1        9      16384 ram9
   1       10      16384 ram10
   1       11      16384 ram11
   1       12      16384 ram12
   1       13      16384 ram13
   1       14      16384 ram14
   1       15      16384 ram15
   8        0 1953514584 sda
   8        1     131072 sda1
   8        2    2097152 sda2
   8        3 1951285336 sda3
   8       16 1953514584 sdb
   8       17     131072 sdb1
   8       18    2097152 sdb2
   8       19 1951285336 sdb3
   8       32  976762584 sdc
   8       33  976761560 sdc1
  11        0    1048575 sr0
   9        3 1951285248 md3
   9        2    2097088 md2
   9        1     131008 md1

> RAID layout (hardware and/or software)

software RAID1 on system disks (/dev/sda + /dev/sdb, ext4)
no RAID on xfs disk (/dev/sdc, xfs)
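
(The layout can be double-checked with something like the following; device names as listed above:)

# md arrays and their member devices
cat /proc/mdstat

# block device tree with filesystem types and mount points
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT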

> LVM configuration

no LVM

> type of disks you are using

CMR 3.5" SATA disks

/dev/sda and /dev/sdb: WD Red 2TB (WDC WD20EFRX-68EUZN0)
/dev/sdc: Seagate Constellation ES 1TB (MB1000GCEEK)

> write cache status of drives

not sure - how do I find out?
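
(If it helps, the usual way to check this on plain SATA/AHCI disks seems to be hdparm or sysfs; I can run something like the following for the XFS disk and report back:)

# hdparm prints "write-caching =  1 (on)" when the drive's volatile write cache is enabled
hdparm -W /dev/sdc

# the block layer's view: "write back" = volatile cache enabled, "write through" = disabled
cat /sys/block/sdc/queue/write_cache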

> size of BBWC and mode it is running in

no HW RAID (the drive controller is set to AHCI mode)
no battery backup

> xfs_info output on the filesystem in question

meta-data=/dev/sdc1              isize=512    agcount=4, agsize=61047598 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=244190390, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=119233, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

> dmesg output showing all error messages and stack traces

already attached to this ticket

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.


Thread overview: 29+ messages
2022-05-20 11:56 [Bug 216007] New: XFS hangs in iowait when extracting large number of files bugzilla-daemon
2022-05-20 11:56 ` [Bug 216007] " bugzilla-daemon
2022-05-20 20:46 ` bugzilla-daemon
2022-05-20 23:05 ` [Bug 216007] New: " Dave Chinner
2022-05-20 23:05 ` [Bug 216007] " bugzilla-daemon
2022-05-21  5:14 ` bugzilla-daemon
2022-05-21 22:31   ` Dave Chinner
2022-05-21 22:31 ` bugzilla-daemon
2022-05-23  8:29 ` bugzilla-daemon [this message]
2022-05-23  8:31 ` bugzilla-daemon
2022-05-23 10:02 ` bugzilla-daemon
2022-05-23 10:28 ` bugzilla-daemon
2022-05-24  7:54 ` bugzilla-daemon
2022-05-24 10:00 ` bugzilla-daemon
2022-05-24 10:49 ` bugzilla-daemon
2022-05-24 10:52 ` bugzilla-daemon
2022-05-24 10:53 ` bugzilla-daemon
2022-05-24 11:21 ` bugzilla-daemon
2022-05-24 11:48 ` bugzilla-daemon
2022-05-24 11:49 ` bugzilla-daemon
2022-05-25 17:13 ` bugzilla-daemon
2022-05-26  4:04 ` bugzilla-daemon
2022-05-26  8:51 ` bugzilla-daemon
2022-05-26  9:16 ` bugzilla-daemon
2022-05-26 10:26 ` bugzilla-daemon
2022-05-26 10:37 ` bugzilla-daemon
2022-06-04 16:25 ` bugzilla-daemon
2022-06-05  7:51 ` bugzilla-daemon
2022-06-06  7:49 ` bugzilla-daemon
