From: Brian Foster <bfoster@redhat.com>
To: linux-xfs@vger.kernel.org
Subject: Re: [BUG] log I/O completion GPF via xfs/006 and xfs/264 on 5.17.0-rc8
Date: Fri, 18 Mar 2022 12:11:07 -0400 [thread overview]
Message-ID: <YjSvG0wgm6epCa8X@bfoster> (raw)
In-Reply-To: <YjSNTd+U3HBq/Gsv@bfoster>
On Fri, Mar 18, 2022 at 09:46:53AM -0400, Brian Foster wrote:
> Hi,
>
> I'm not sure if this is known and/or fixed already, but it didn't look
> familiar so here is a report. I hit a splat when testing Willy's
> prospective folio bookmark change and it turns out it replicates on
> Linus' current master (551acdc3c3d2). This initially reproduced on
> xfs/264 (mkfs defaults) and I saw a soft lockup warning variant via
> xfs/006, but when I attempted to reproduce the latter a second time I
> hit what looks like the same problem as xfs/264. Both tests seem to
> involve some form of error injection, so possibly the same underlying
> problem. The GPF splat from xfs/264 is below.
>
Darrick pointed out this series [1] on IRC (particularly the final
patch), so I gave it a try. I _think_ it addresses the GPF issue: that
splat was nearly 100% reproducible before, and I didn't see it in a few
iterations with the patches applied. However, once I started a longer
test loop I ran into the aforementioned soft lockup again. A snippet of
that one is below [2]. When this occurs, the task appears to be stuck
indefinitely (i.e., the warning repeats).
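FWIW, the reproducer is nothing more than the failing test run in a
loop until it trips. A minimal sketch, assuming a configured fstests
checkout (i.e., a local.config pointing TEST_DEV/SCRATCH_DEV at
suitable devices; the path below is an example):

  # run xfs/264 repeatedly until it fails or the box locks up
  cd ~/fstests
  while ./check xfs/264; do
          :
  done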
Brian
[1] https://lore.kernel.org/linux-xfs/20220317053907.164160-1-david@fromorbit.com/
[2] Soft lockup warning from xfs/264 with patches from [1] applied:
watchdog: BUG: soft lockup - CPU#52 stuck for 134s! [kworker/52:1H:1881]
Modules linked in: rfkill rpcrdma sunrpc intel_rapl_msr intel_rapl_common rdma_ucm ib_srpt ib_isert iscsi_target_mod i10nm_edac target_core_mod x86_pkg_temp_thermal intel_powerclamp ib_iser coretemp libiscsi scsi_transport_iscsi kvm_intel rdma_cm ib_umad ipmi_ssif ib_ipoib iw_cm ib_cm kvm iTCO_wdt iTCO_vendor_support irqbypass crct10dif_pclmul crc32_pclmul acpi_ipmi mlx5_ib ghash_clmulni_intel bnxt_re ipmi_si rapl intel_cstate ib_uverbs ipmi_devintf mei_me isst_if_mmio isst_if_mbox_pci i2c_i801 nd_pmem ib_core intel_uncore wmi_bmof pcspkr isst_if_common mei i2c_smbus intel_pch_thermal ipmi_msghandler nd_btt dax_pmem acpi_power_meter xfs libcrc32c sd_mod sg mlx5_core lpfc mgag200 i2c_algo_bit drm_shmem_helper nvmet_fc drm_kms_helper nvmet nvme_fc mlxfw nvme_fabrics syscopyarea sysfillrect pci_hyperv_intf sysimgblt fb_sys_fops nvme_core ahci tls t10_pi libahci crc32c_intel psample scsi_transport_fc bnxt_en drm megaraid_sas tg3 libata wmi nfit libnvdimm dm_mirror dm_region_hash
dm_log dm_mod
CPU: 52 PID: 1881 Comm: kworker/52:1H Tainted: G S L 5.17.0-rc8+ #17
Hardware name: Dell Inc. PowerEdge R750/06V45N, BIOS 1.2.4 05/28/2021
Workqueue: xfs-log/dm-5 xlog_ioend_work [xfs]
RIP: 0010:native_queued_spin_lock_slowpath+0x1b0/0x1e0
Code: c1 e9 12 83 e0 03 83 e9 01 48 c1 e0 05 48 63 c9 48 05 40 0d 03 00 48 03 04 cd e0 ba 00 8c 48 89 10 8b 42 08 85 c0 75 09 f3 90 <8b> 42 08 85 c0 74 f7 48 8b 0a 48 85 c9 0f 84 6b ff ff ff 0f 0d 09
RSP: 0018:ff4ed0b360e4bb48 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ff3413f05c684540 RCX: 0000000000001719
RDX: ff34142ebfeb0d40 RSI: ffffffff8bf826f6 RDI: ffffffff8bf54147
RBP: ff34142ebfeb0d40 R08: ff34142ebfeb0a68 R09: 00000000000001bc
R10: 00000000000001d1 R11: 0000000000000abd R12: 0000000000d40000
R13: 0000000000000008 R14: ff3413f04cd84000 R15: ff3413f059404400
FS: 0000000000000000(0000) GS:ff34142ebfe80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9200514f70 CR3: 0000000216c16005 CR4: 0000000000771ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
<TASK>
_raw_spin_lock+0x2c/0x30
xfs_trans_ail_delete+0x2a/0xd0 [xfs]
xfs_buf_item_done+0x22/0x30 [xfs]
xfs_buf_ioend+0x71/0x5e0 [xfs]
xfs_trans_committed_bulk+0x167/0x2c0 [xfs]
? enqueue_entity+0x121/0x4d0
? enqueue_task_fair+0x417/0x530
? resched_curr+0x23/0xc0
? check_preempt_curr+0x3f/0x70
? _raw_spin_unlock_irqrestore+0x1f/0x31
? __wake_up_common_lock+0x87/0xc0
xlog_cil_committed+0x29c/0x2d0 [xfs]
? _raw_spin_unlock_irqrestore+0x1f/0x31
? __wake_up_common_lock+0x87/0xc0
xlog_cil_process_committed+0x69/0x80 [xfs]
xlog_state_shutdown_callbacks+0xce/0xf0 [xfs]
xlog_force_shutdown+0xd0/0x110 [xfs]
xfs_do_force_shutdown+0x5f/0x150 [xfs]
xlog_ioend_work+0x71/0x80 [xfs]
process_one_work+0x1c5/0x390
? process_one_work+0x390/0x390
worker_thread+0x30/0x350
? process_one_work+0x390/0x390
kthread+0xe6/0x110
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x1f/0x30
</TASK>