From: kernel test robot <oliver.sang@intel.com>
To: Yu Kuai <yukuai@fnnas.com>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
<linux-block@vger.kernel.org>, <axboe@kernel.dk>, <tj@kernel.org>,
<nilay@linux.ibm.com>, <ming.lei@redhat.com>, <yukuai@fnnas.com>,
<oliver.sang@intel.com>
Subject: Re: [PATCH v6 07/13] blk-mq-debugfs: warn about possible deadlock
Date: Tue, 30 Dec 2025 14:04:10 +0800 [thread overview]
Message-ID: <202512301342.35385eee-lkp@intel.com> (raw)
In-Reply-To: <20251225103248.1303397-8-yukuai@fnnas.com>
Hello,
kernel test robot noticed "RIP:debugfs_create_files" on:
commit: 492a1c791dd61f6b2abfc86a4a85acf5db1d0e32 ("[PATCH v6 07/13] blk-mq-debugfs: warn about possible deadlock")
url: https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/blk-wbt-factor-out-a-helper-wbt_set_lat/20251225-183443
base: https://git.kernel.org/cgit/linux/kernel/git/axboe/linux.git for-next
patch link: https://lore.kernel.org/all/20251225103248.1303397-8-yukuai@fnnas.com/
patch subject: [PATCH v6 07/13] blk-mq-debugfs: warn about possible deadlock
in testcase: blktests
version: blktests-x86_64-b1b99d1-1_20251223
with the following parameters:
disk: 1SSD
test: nvme-005
nvme_trtype: rdma
config: x86_64-rhel-9.4-func
compiler: gcc-14
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480+ (Sapphire Rapids) with 256G memory
(please refer to attached dmesg/kmsg for entire log/backtrace)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202512301342.35385eee-lkp@intel.com
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251230/202512301342.35385eee-lkp@intel.com
[ 162.300625][ T1400] nvme nvme2: creating 128 I/O queues.
[ 162.876161][ T1400] nvme nvme2: mapped 128/0/0 default/read/poll queues.
[ 162.901778][ T1400] ------------[ cut here ]------------
[ 162.908519][ T1400] WARNING: block/blk-mq-debugfs.c:620 at debugfs_create_files+0xb8/0xe0, CPU#72: kworker/u898:10/1400
[ 162.922316][ T1400] Modules linked in: siw ib_uverbs nvmet_rdma nvmet nvme_auth hkdf nvme_rdma nvme_fabrics rdma_cm iw_cm ib_cm ib_core loop f2fs binfmt_misc intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common intel_ifs i10nm_edac skx_edac_common nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp btrfs blake2b libblake2b xor zstd_compress kvm_intel raid6_pq kvm irqbypass dax_hmem ghash_clmulni_intel ast rapl cxl_acpi snd_pcm pmt_telemetry drm_client_lib spi_nor nvme iaa_crypto qat_4xxx intel_cstate cxl_port mei_me snd_timer pmt_discovery drm_shmem_helper pmt_class intel_sdsi mtd snd intel_qat ipmi_ssif intel_th_gth isst_if_mmio isst_if_mbox_pci i40e cxl_core idxd soundcore intel_th_pci i2c_i801 spi_intel_pci crc8 libie intel_uncore nvme_core einj intel_vsec cdc_ether acpi_power_meter mei drm_kms_helper pcspkr i2c_ismt intel_th wmi spi_intel isst_if_common libie_adminq i2c_smbus idxd_bus authenc ipmi_si acpi_ipmi ipmi_devintf ipmi_msghandler acpi_pad pinctrl_emmitsburg pfr_telemetry
[ 162.922410][ T1400] pfr_update drm fuse nfnetlink
[ 163.034763][ T1400] CPU: 72 UID: 0 PID: 1400 Comm: kworker/u898:10 Tainted: G S 6.19.0-rc1-00238-g492a1c791dd6 #1 PREEMPT(voluntary)
[ 163.050931][ T1400] Tainted: [S]=CPU_OUT_OF_SPEC
[ 163.056656][ T1400] Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS SE5C7411.86B.8118.D04.2206151341 06/15/2022
[ 163.070319][ T1400] Workqueue: nvme-reset-wq nvme_rdma_reset_ctrl_work [nvme_rdma]
[ 163.079319][ T1400] RIP: 0010:debugfs_create_files+0xb8/0xe0
[ 163.086849][ T1400] Code: 89 ef e8 1b 1f d0 ff 48 89 d8 48 c1 e8 03 42 80 3c 20 00 75 23 48 8b 2b 48 85 ed 75 b2 5b 5d 41 5c 41 5d 41 5e c3 cc cc cc cc <0f> 0b e9 5f ff ff ff e8 9c a6 66 ff eb af 48 89 df e8 d2 a6 66 ff
[ 163.109632][ T1400] RSP: 0018:ffa00000136c78c0 EFLAGS: 00010202
[ 163.116916][ T1400] RAX: 0000000000000007 RBX: ffffffff845523a0 RCX: ffffffff845523a0
[ 163.126203][ T1400] RDX: ff110022742fcc00 RSI: ff110020dbcfa400 RDI: 0000000000000001
[ 163.135503][ T1400] RBP: ffa00000136c7958 R08: 0000000000000001 R09: fff3fc00026d8f05
[ 163.144962][ T1400] R10: ffa00000136c782f R11: 00000000ffffffff R12: ff110022742fcc00
[ 163.154244][ T1400] R13: ff110020dbcfa400 R14: ff110022742fcc00 R15: ff110022742fccfe
[ 163.163587][ T1400] FS: 0000000000000000(0000) GS:ff11003fd9cf5000(0000) knlGS:0000000000000000
[ 163.173957][ T1400] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 163.181790][ T1400] CR2: 00005577f0a48e88 CR3: 000000405ca70004 CR4: 0000000000f73ef0
[ 163.191083][ T1400] PKRU: 55555554
[ 163.195396][ T1400] Call Trace:
[ 163.199405][ T1400] <TASK>
[ 163.203027][ T1400] blk_mq_debugfs_register_hctx+0x17a/0x440
[ 163.210760][ T1400] ? kobject_add+0x116/0x180
[ 163.216245][ T1400] ? __pfx_blk_mq_debugfs_register_hctx+0x10/0x10
[ 163.224443][ T1400] ? __pfx_mutex_unlock+0x10/0x10
[ 163.230393][ T1400] ? blk_mq_register_hctx+0x1ea/0x420
[ 163.236887][ T1400] blk_mq_debugfs_register_hctxs+0xe6/0x160
[ 163.243816][ T1400] __blk_mq_update_nr_hw_queues+0x544/0xab0
[ 163.250866][ T1400] ? __pfx___blk_mq_update_nr_hw_queues+0x10/0x10
[ 163.258381][ T1400] ? mutex_lock+0x91/0xf0
[ 163.263590][ T1400] ? __pfx_mutex_lock+0x10/0x10
[ 163.269330][ T1400] ? blk_mq_run_hw_queues+0xe1/0x400
[ 163.275597][ T1400] blk_mq_update_nr_hw_queues+0x35/0x50
[ 163.282091][ T1400] nvme_rdma_configure_io_queues.cold+0x3ff/0x72f [nvme_rdma]
[ 163.290878][ T1400] ? __pfx_nvme_rdma_configure_io_queues+0x10/0x10 [nvme_rdma]
[ 163.299714][ T1400] ? nvme_rdma_configure_admin_queue+0x3d4/0x750 [nvme_rdma]
[ 163.308274][ T1400] nvme_rdma_setup_ctrl+0x252/0x4e0 [nvme_rdma]
[ 163.315608][ T1400] ? nvme_change_ctrl_state+0x1a1/0x2e0 [nvme_core]
[ 163.323275][ T1400] nvme_rdma_reset_ctrl_work+0xa7/0x170 [nvme_rdma]
[ 163.330935][ T1400] process_one_work+0x668/0xec0
[ 163.336719][ T1400] worker_thread+0x629/0x10a0
[ 163.342203][ T1400] ? __pfx_worker_thread+0x10/0x10
[ 163.348169][ T1400] kthread+0x39b/0x750
[ 163.352977][ T1400] ? __pfx_kthread+0x10/0x10
[ 163.358344][ T1400] ? __pfx__raw_spin_lock_irq+0x10/0x10
[ 163.364774][ T1400] ? __pfx_kthread+0x10/0x10
[ 163.370133][ T1400] ? __pfx_kthread+0x10/0x10
[ 163.375530][ T1400] ret_from_fork+0x2aa/0x490
[ 163.380889][ T1400] ? __pfx_ret_from_fork+0x10/0x10
[ 163.386821][ T1400] ? switch_fpu+0x13/0x1a0
[ 163.391971][ T1400] ? __switch_to+0x4cd/0xe70
[ 163.397293][ T1400] ? __pfx_kthread+0x10/0x10
[ 163.402712][ T1400] ret_from_fork_asm+0x1a/0x30
[ 163.408231][ T1400] </TASK>
[ 163.411794][ T1400] ---[ end trace 0000000000000000 ]---
[ 163.447563][ T3933] nvme nvme2: Removing ctrl: NQN "blktests-subsystem-1"
[ 163.471043][ T3458] block nvme2n1: no available path - failing I/O
[ 163.479008][ T3458] block nvme2n1: no available path - failing I/O
[ 163.487171][ T3458] Buffer I/O error on dev nvme2n1, logical block 262142, async page read
[ 164.210460][ T3990] SoftiWARP detached
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki