From: John Garry <john.g.garry@oracle.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
linux-nvme@lists.infradead.org,
"Martin K . Petersen" <martin.petersen@oracle.com>,
linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
Wen Xiong <wenxiong@linux.ibm.com>,
Keith Busch <kbusch@kernel.org>,
Xiang Chen <chenxiang66@hisilicon.com>
Subject: Re: [PATCH V2 5/9] scsi: hisi: take blk_mq_max_nr_hw_queues() into account for calculating io vectors
Date: Thu, 27 Jul 2023 13:36:00 +0100 [thread overview]
Message-ID: <b0ae97fd-583f-794b-a62b-452548ef9afd@oracle.com> (raw)
In-Reply-To: <ZMJcqzeaE5e0BdmK@ovpn-8-16.pek2.redhat.com>
>>
>> I am just saying that we have a fixed number of HW queues (16), each of
>> which may be used for interrupt or polling mode. And since we always
>> allocate the max number of MSI vectors, the number of interrupt queues
>> available will be 16 - nr_poll_queues.
>
> No.
>
> queue_count is fixed at 16, but pci_alloc_irq_vectors_affinity() may
> still return fewer vectors: vectors are a system-wide resource, while
> queue count is a per-device resource.
>
> So when fewer vectors are allocated, you should have been able to use
> more poll queues; unfortunately the current code can't support that.
>
> Even worse, hisi_hba->cq_nvecs can become negative if fewer vectors are returned.
OK, I see what you mean here. I thought that we were only considering the
case where the number of vectors allocated equals the max requested.
Yes, I see how allocating fewer than the max can cause an issue. I am not
sure whether increasing iopoll_q_cnt beyond the driver module parameter
value is the right approach, but obviously we don't want cq_nvecs to
become negative.
>
>>
>>>
>>>
>>>>> So it isn't related to the driver's MSI vector allocation bug, is it?
>>>> My deduction is that this is how it currently "works" for non-zero iopoll
>>>> queues:
>>>> - allocate max MSI of 32, which gives 32 vectors including 16 cq vectors.
>>>> That then gives:
>>>> - cq_nvecs = 16 - iopoll_q_cnt
>>>> - shost->nr_hw_queues = 16
>>>> - 16x MSI cq vectors were spread over all CPUs
>>> It should be that the cq_nvecs vectors are spread over all CPUs, and
>>> the iopoll_q_cnt vectors are spread over all CPUs too.
>>
>> I agree, it should be, but I don't think that it is for HCTX_TYPE_DEFAULT,
>> as below.
>>
>>>
>>> For each queue type, nr_queues of this type are spread over all
>>> CPUs.
>>
>>>> - in hisi_sas_map_queues()
>>>> - HCTX_TYPE_DEFAULT qmap->nr_queues = 16 - iopoll_q_cnt, and for
>>>> blk_mq_pci_map_queues() we set up affinity for 16 - iopoll_q_cnt hw queues.
>>>> This looks broken, as we originally spread 16x vectors over all CPUs, but
>>>> now only set up mappings for (16 - iopoll_q_cnt) vectors, whose affinity
>>>> would cover only a subset of CPUs. And then qmap->mq_map[] for the other
>>>> CPUs is not set at all.
>>> That isn't true, please see my above comment.
>>
>> I am just basing that on what I mentioned above, so please let me know
>> where I am inaccurate.
>
> You said the queue mapping for HCTX_TYPE_DEFAULT is broken, but it isn't.
>
> You said 'we originally spread 16x vectors over all CPUs', which isn't
> true.
Are you talking about the case of allocating fewer than the max requested
vectors, as above?
If we have min_msi = 17, max_msi = 32, affinity_desc = {16, 0}, and we
allocate 32 vectors from pci_alloc_irq_vectors_affinity(), then I would
have thought that the affinity for the 16x cq vectors is spread over all
CPUs. Is that wrong?
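To be concrete about what I would expect, something like the sketch below
is what I have in mind, assuming the affinity spread applies to vectors
16..31 behind the 16 pre vectors. The helper is hypothetical, purely for
illustration, and is not in the driver:

#include <linux/cpumask.h>
#include <linux/pci.h>

/* Hypothetical helper: dump the managed affinity of each CQ vector */
static void sketch_dump_cq_affinity(struct pci_dev *pdev, int nvecs)
{
	int vec;

	for (vec = 16; vec < nvecs; vec++) {
		const struct cpumask *mask = pci_irq_get_affinity(pdev, vec);

		if (mask)
			dev_info(&pdev->dev, "CQ vector %d -> CPUs %*pbl\n",
				 vec, cpumask_pr_args(mask));
	}
}

My expectation was that, taken together, those per-vector masks cover
every online CPU.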
> Again, the '16 - iopoll_q_cnt' vectors are spread over all CPUs, and the
> same is true of the iopoll_q_cnt vectors.
>
> Since both blk_mq_map_queues() and blk_mq_pci_map_queues() spread
> map->nr_queues over all CPUs, there is no spreading over only a subset
> of CPUs.
>
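Understood. For anyone following along, the mapping path I had in mind is
roughly the sketch below (made-up name and simplified arguments, not the
actual hisi_sas_map_queues() code). I can see now that each per-type map
is spread over all CPUs by its own map function, so the DEFAULT map is
not restricted to a subset of CPUs as I claimed:

#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_host.h>

/* Sketch only: per-type hw queue maps, not verbatim driver code */
static void sketch_map_queues(struct Scsi_Host *shost, struct pci_dev *pdev,
			      int cq_nvecs, int iopoll_q_cnt)
{
	struct blk_mq_queue_map *qmap;

	/* interrupt-driven queues follow the CQ vector affinity */
	qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
	qmap->nr_queues = cq_nvecs;
	blk_mq_pci_map_queues(qmap, pdev, 16 /* skip the pre vectors */);

	/* polled queues have no irq, so use the generic CPU spread */
	qmap = &shost->tag_set.map[HCTX_TYPE_POLL];
	qmap->nr_queues = iopoll_q_cnt;
	qmap->queue_offset = cq_nvecs;
	blk_mq_map_queues(qmap);
}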
Thanks,
John