* [PATCH v3 1/1] scsi: ufs: core: Fix &hwq->cq_lock deadlock issue
@ 2023-04-24 8:03 Alice Chao
From: Alice Chao @ 2023-04-24 8:03 UTC (permalink / raw)
To: Alim Akhtar, Avri Altman, Bart Van Assche, James E.J. Bottomley,
Martin K. Petersen, Matthias Brugger, AngeloGioacchino Del Regno,
Can Guo, Asutosh Das, Stanley Chu, Manivannan Sadhasivam
Cc: peter.wang, chun-hung.wu, alice.chao, powen.kao, naomi.chu,
cc.chou, chaotian.jing, jiajie.hao, tun-yu.yu, eddie.huang,
wsd_upstream, linux-scsi, linux-kernel, linux-arm-kernel,
linux-mediatek
[name:lockdep&]WARNING: inconsistent lock state
[name:lockdep&]--------------------------------
[name:lockdep&]inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
[name:lockdep&]kworker/u16:4/260 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffffff8028444600 (&hwq->cq_lock){?.-.}-{2:2}, at:
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
[name:lockdep&]{IN-HARDIRQ-W} state was registered at:
lock_acquire+0x17c/0x33c
_raw_spin_lock+0x5c/0x7c
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
ufs_mtk_mcq_intr+0x60/0x1bc [ufs_mediatek_mod]
__handle_irq_event_percpu+0x140/0x3ec
handle_irq_event+0x50/0xd8
handle_fasteoi_irq+0x148/0x2b0
generic_handle_domain_irq+0x4c/0x6c
gic_handle_irq+0x58/0x134
call_on_irq_stack+0x40/0x74
do_interrupt_handler+0x84/0xe4
el1_interrupt+0x3c/0x78
<snip>
Possible unsafe locking scenario:
CPU0
----
lock(&hwq->cq_lock);
<Interrupt>
lock(&hwq->cq_lock);
*** DEADLOCK ***
2 locks held by kworker/u16:4/260:
[name:lockdep&]
stack backtrace:
CPU: 7 PID: 260 Comm: kworker/u16:4 Tainted: G S W OE
6.1.17-mainline-android14-2-g277223301adb #1
Workqueue: ufs_eh_wq_0 ufshcd_err_handler
Call trace:
dump_backtrace+0x10c/0x160
show_stack+0x20/0x30
dump_stack_lvl+0x98/0xd8
dump_stack+0x20/0x60
print_usage_bug+0x584/0x76c
mark_lock_irq+0x488/0x510
mark_lock+0x1ec/0x25c
__lock_acquire+0x4d8/0xffc
lock_acquire+0x17c/0x33c
_raw_spin_lock+0x5c/0x7c
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
ufshcd_poll+0x68/0x1b0
ufshcd_transfer_req_compl+0x9c/0xc8
ufshcd_err_handler+0x3bc/0xea0
process_one_work+0x2f4/0x7e8
worker_thread+0x234/0x450
kthread+0x110/0x134
ret_from_fork+0x10/0x20
For background on ufs_mtk_mcq_intr(), see:
https://lore.kernel.org/all/20230328103423.10970-3-powen.kao@mediatek.com/
When ufshcd_err_handler() is executed, the CQ event interrupt can
arrive and spin on the same lock. This can happen in the upstream code
path ufshcd_handle_mcq_cq_events() and also in ufs_mtk_mcq_intr().
Lockdep emits this warning when &hwq->cq_lock is taken in IRQ context
while IRQs are still enabled. Make ufshcd_mcq_poll_cqe_lock() use
spin_lock_irqsave() instead of spin_lock() to resolve the deadlock.
Fixes: ed975065c31c ("scsi: ufs: core: mcq: Add completion support in poll")
Reviewed-by: Can Guo <quic_cang@quicinc.com>
Reviewed-by: Stanley Chu <stanley.chu@mediatek.com>
Signed-off-by: Alice Chao <alice.chao@mediatek.com>
---
Changes since v2:
- Add Reviewed-by tags
---
drivers/ufs/core/ufs-mcq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
index 31df052fbc41..202ff71e1b58 100644
--- a/drivers/ufs/core/ufs-mcq.c
+++ b/drivers/ufs/core/ufs-mcq.c
@@ -299,11 +299,11 @@ EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_nolock);
 unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
 				       struct ufs_hw_queue *hwq)
 {
-	unsigned long completed_reqs;
+	unsigned long completed_reqs, flags;
 
-	spin_lock(&hwq->cq_lock);
+	spin_lock_irqsave(&hwq->cq_lock, flags);
 	completed_reqs = ufshcd_mcq_poll_cqe_nolock(hba, hwq);
-	spin_unlock(&hwq->cq_lock);
+	spin_unlock_irqrestore(&hwq->cq_lock, flags);
 	return completed_reqs;
 }
--
2.18.0
* Re: [PATCH v3 1/1] scsi: ufs: core: Fix &hwq->cq_lock deadlock issue
@ 2023-04-24 8:07 ` AngeloGioacchino Del Regno
From: AngeloGioacchino Del Regno @ 2023-04-24 8:07 UTC (permalink / raw)
To: Alice Chao, Alim Akhtar, Avri Altman, Bart Van Assche,
James E.J. Bottomley, Martin K. Petersen, Matthias Brugger,
Can Guo, Asutosh Das, Stanley Chu, Manivannan Sadhasivam
Cc: peter.wang, chun-hung.wu, powen.kao, naomi.chu, cc.chou,
chaotian.jing, jiajie.hao, tun-yu.yu, eddie.huang, wsd_upstream,
linux-scsi, linux-kernel, linux-arm-kernel, linux-mediatek
On 24/04/23 10:03, Alice Chao wrote:
> [lockdep report and call trace snipped]
>
> For background on ufs_mtk_mcq_intr(), see:
> https://lore.kernel.org/all/20230328103423.10970-3-powen.kao@mediatek.com/
>
> When ufshcd_err_handler() is executed, the CQ event interrupt can
> arrive and spin on the same lock. This can happen in the upstream code
> path ufshcd_handle_mcq_cq_events() and also in ufs_mtk_mcq_intr().
> Lockdep emits this warning when &hwq->cq_lock is taken in IRQ context
> while IRQs are still enabled. Make ufshcd_mcq_poll_cqe_lock() use
> spin_lock_irqsave() instead of spin_lock() to resolve the deadlock.
>
> Fixes: ed975065c31c ("scsi: ufs: core: mcq: Add completion support in poll")
> Reviewed-by: Can Guo <quic_cang@quicinc.com>
> Reviewed-by: Stanley Chu <stanley.chu@mediatek.com>
> Signed-off-by: Alice Chao <alice.chao@mediatek.com>
For readability purposes only - next time, please put the actual
description at the beginning and the log at the end.
Anyway,
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
* Re: [PATCH v3 1/1] scsi: ufs: core: Fix &hwq->cq_lock deadlock issue
@ 2023-04-24 16:49 ` Bart Van Assche
From: Bart Van Assche @ 2023-04-24 16:49 UTC (permalink / raw)
To: Alice Chao, Alim Akhtar, Avri Altman, James E.J. Bottomley,
Martin K. Petersen, Matthias Brugger, AngeloGioacchino Del Regno,
Can Guo, Asutosh Das, Stanley Chu, Manivannan Sadhasivam
Cc: peter.wang, chun-hung.wu, powen.kao, naomi.chu, cc.chou,
chaotian.jing, jiajie.hao, tun-yu.yu, eddie.huang, wsd_upstream,
linux-scsi, linux-kernel, linux-arm-kernel, linux-mediatek
On 4/24/23 01:03, Alice Chao wrote:
> [lockdep report and call trace snipped]
>
> For background on ufs_mtk_mcq_intr(), see:
> https://lore.kernel.org/all/20230328103423.10970-3-powen.kao@mediatek.com/
>
> When ufshcd_err_handler() is executed, the CQ event interrupt can
> arrive and spin on the same lock. This can happen in the upstream code
> path ufshcd_handle_mcq_cq_events() and also in ufs_mtk_mcq_intr().
> Lockdep emits this warning when &hwq->cq_lock is taken in IRQ context
> while IRQs are still enabled. Make ufshcd_mcq_poll_cqe_lock() use
> spin_lock_irqsave() instead of spin_lock() to resolve the deadlock.
For future patches, please make sure that the patch description occurs
before the call traces. Anyway:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
* Re: [PATCH v3 1/1] scsi: ufs: core: Fix &hwq->cq_lock deadlock issue
@ 2023-04-25 3:19 ` Martin K. Petersen
From: Martin K. Petersen @ 2023-04-25 3:19 UTC (permalink / raw)
To: Alice Chao
Cc: Alim Akhtar, Avri Altman, Bart Van Assche, James E.J. Bottomley,
Martin K. Petersen, Matthias Brugger, AngeloGioacchino Del Regno,
Can Guo, Asutosh Das, Stanley Chu, Manivannan Sadhasivam,
peter.wang, chun-hung.wu, powen.kao, naomi.chu, cc.chou,
chaotian.jing, jiajie.hao, tun-yu.yu, eddie.huang, wsd_upstream,
linux-scsi, linux-kernel, linux-arm-kernel, linux-mediatek
Alice,
> When ufshcd_err_handler() is executed, the CQ event interrupt can
> arrive and spin on the same lock. This can happen in the upstream code
> path ufshcd_handle_mcq_cq_events() and also in ufs_mtk_mcq_intr().
> Lockdep emits this warning when &hwq->cq_lock is taken in IRQ context
> while IRQs are still enabled. Make ufshcd_mcq_poll_cqe_lock() use
> spin_lock_irqsave() instead of spin_lock() to resolve the deadlock.
Applied to 6.4/scsi-staging, thanks!
--
Martin K. Petersen Oracle Linux Engineering