From: John Garry <john.garry@huawei.com>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>,
<linux-scsi@vger.kernel.org>,
"Martin K . Petersen" <martin.petersen@oracle.com>,
Xiang Chen <chenxiang66@hisilicon.com>,
"Jason Yan" <yanaijie@huawei.com>
Subject: Re: [PATCH v3 27/31] scsi: pm8001: Cleanup pm8001_queue_command()
Date: Wed, 16 Feb 2022 11:50:19 +0000 [thread overview]
Message-ID: <5a5481af-e975-c6fb-2d48-961769eae551@huawei.com> (raw)
In-Reply-To: <37df3c92-c28e-72d4-76d8-33356829af5a@opensource.wdc.com>
On 16/02/2022 11:42, Damien Le Moal wrote:
>> Hi Damien,
>>
>>> patch 30 cleans up pm8001_task_exec(). This patch is for
>>> pm8001_queue_command(). I preferred to separate to facilitate review.
>>> But if you insist, I can merge these into a much bigger "code cleanup"
>>> patch...
>>>
>> I don't mind really.
>>
>> BTW, on a separate topic, IIRC you said that rmmod hangs for this driver
>> - if so, did you investigate why?
> The problem is gone with the fixes. I suspect it was due to the buggy
> non-data command handling (likely, the flush issued when stopping the
> device on rmmod).
>
> I have not tackled/tried again the QD change failure though.
>
> Preparing v4 now. Will check the QD change.
>
Ok, great.

JFYI, turning on DMA debug sometimes gives the splat below, even after
just an fdisk -l:
[ 45.080945] sas: sas_scsi_find_task: querying task 0x(____ptrval____)
[ 45.087582] pm80xx0:: mpi_ssp_completion 1936:sas IO status 0x3b
[ 45.093681] pm80xx0:: mpi_ssp_completion 1947:SAS Address of IO Failure Drive:5000c50085ff5559
[ 45.102641] pm80xx0:: mpi_ssp_completion 1936:sas IO status 0x3b
[ 45.108739] pm80xx0:: mpi_ssp_completion 1947:SAS Address of IO Failure Drive:5000c50085ff5559
[ 45.117694] pm80xx0:: mpi_ssp_completion 1936:sas IO status 0x3b
[ 45.123792] pm80xx0:: mpi_ssp_completion 1947:SAS Address of IO Failure Drive:5000c50085ff5559
[ 45.132652] pm80xx: rc= -5
[ 45.135370] sas: sas_scsi_find_task: task 0x(____ptrval____) result code -5 not handled
[ 45.143466] sas: task 0x(____ptrval____) is not at LU: I_T recover
[ 45.149741] sas: I_T nexus reset for dev 5000c50085ff5559
[ 47.183916] sas: I_T 5000c50085ff5559 recovered
[ 47.189034] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
[ 47.204168] ------------[ cut here ]------------
[ 47.208829] DMA-API: pm80xx 0000:04:00.0: cacheline tracking EEXIST, overlapping mappings aren't supported
[ 47.218502] WARNING: CPU: 3 PID: 641 at kernel/dma/debug.c:570 add_dma_entry+0x308/0x3f0
[ 47.226607] Modules linked in:
[ 47.229678] CPU: 3 PID: 641 Comm: kworker/3:1H Not tainted 5.17.0-rc1-11918-gd9d909a8c666 #407
[ 47.238298] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.16.01 03/15/2019
[ 47.246829] Workqueue: kblockd blk_mq_run_work_fn
[ 47.251552] pstate: 604000c9 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 47.258522] pc : add_dma_entry+0x308/0x3f0
[ 47.262626] lr : add_dma_entry+0x308/0x3f0
[ 47.266730] sp : ffff80002e5c75f0
[ 47.270049] x29: ffff80002e5c75f0 x28: 0000002880a908c0 x27: ffff80000cc95440
[ 47.277216] x26: ffff80000cc94000 x25: ffff80000cc94e20 x24: ffff00208e4660c8
[ 47.284382] x23: ffff800009d16b40 x22: ffff80000a5b8700 x21: 1ffff00005cb8eca
[ 47.291548] x20: ffff80000caf4c90 x19: ffff0a2009726100 x18: 0000000000000000
[ 47.298713] x17: 70616c7265766f20 x16: 2c54534958454520 x15: 676e696b63617274
[ 47.305879] x14: 1ffff00005cb8df4 x13: 0000000041b58ab3 x12: ffff700005cb8e27
[ 47.313044] x11: 1ffff00005cb8e26 x10: ffff700005cb8e26 x9 : dfff800000000000
[ 47.320210] x8 : ffff80002e5c7137 x7 : 0000000000000001 x6 : 00008ffffa3471da
[ 47.327375] x5 : ffff80002e5c7130 x4 : dfff800000000000 x3 : ffff8000083a1f48
[ 47.334540] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff00208f7ab200
[ 47.341706] Call trace:
[ 47.344157] add_dma_entry+0x308/0x3f0
[ 47.347914] debug_dma_map_sg+0x3ac/0x500
[ 47.351931] __dma_map_sg_attrs+0xac/0x130
[ 47.356037] dma_map_sg_attrs+0x14/0x2c
[ 47.359883] pm8001_task_exec.constprop.0+0x5e0/0x800
[ 47.364945] pm8001_queue_command+0x1c/0x2c
[ 47.369136] sas_queuecommand+0x2c4/0x360
[ 47.373153] scsi_queue_rq+0x810/0x1334
[ 47.377000] blk_mq_dispatch_rq_list+0x340/0xda0
[ 47.381625] __blk_mq_sched_dispatch_requests+0x14c/0x22c
[ 47.387034] blk_mq_sched_dispatch_requests+0x60/0x9c
[ 47.392095] __blk_mq_run_hw_queue+0xc8/0x274
[ 47.396460] blk_mq_run_work_fn+0x30/0x40
[ 47.400476] process_one_work+0x494/0xbac
[ 47.404494] worker_thread+0xac/0x6d0
[ 47.408164] kthread+0x174/0x184
[ 47.411401] ret_from_fork+0x10/0x2
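As an aside, for anyone wanting to reproduce this: the splat comes from the kernel's DMA API debugging infrastructure, which (assuming a self-built kernel) is typically enabled with a config fragment like:

```
CONFIG_DMA_API_DEBUG=y
CONFIG_DMA_API_DEBUG_SG=y
```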
I'll have a look at it. And that is on mainline or the mkp-scsi staging
branch, not with your patchset applied.
Thanks,
John