From: "Michael S. Tsirkin" <mst@redhat.com>
To: Daniel Wagner <wagi@kernel.org>
Cc: "Jens Axboe" <axboe@kernel.dk>, "Keith Busch" <kbusch@kernel.org>,
"Christoph Hellwig" <hch@lst.de>,
"Sagi Grimberg" <sagi@grimberg.me>,
"Kashyap Desai" <kashyap.desai@broadcom.com>,
"Sumit Saxena" <sumit.saxena@broadcom.com>,
"Shivasharan S" <shivasharan.srikanteshwara@broadcom.com>,
"Chandrakanth patil" <chandrakanth.patil@broadcom.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
"Nilesh Javali" <njavali@marvell.com>,
GR-QLogic-Storage-Upstream@marvell.com,
"Don Brace" <don.brace@microchip.com>,
"Jason Wang" <jasowang@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Costa Shulyupin" <costa.shul@redhat.com>,
"Juri Lelli" <juri.lelli@redhat.com>,
"Valentin Schneider" <vschneid@redhat.com>,
"Waiman Long" <llong@redhat.com>,
"Ming Lei" <ming.lei@redhat.com>,
"Michal Koutný" <mkoutny@suse.com>,
"Frederic Weisbecker" <frederic@kernel.org>,
"Mel Gorman" <mgorman@suse.de>, "Hannes Reinecke" <hare@suse.de>,
"Sridhar Balaraman" <sbalaraman@parallelwireless.com>,
"brookxu.cn" <brookxu.cn@gmail.com>,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com,
linux-scsi@vger.kernel.org, storagedev@microchip.com,
virtualization@lists.linux.dev
Subject: Re: [PATCH v4 6/9] virtio: blk/scsi: use block layer helpers to calculate num of queues
Date: Thu, 19 Dec 2024 03:31:05 -0500
Message-ID: <20241219032956-mutt-send-email-mst@kernel.org>
In-Reply-To: <20241217-isolcpus-io-queues-v4-6-5d355fbb1e14@kernel.org>
On Tue, Dec 17, 2024 at 07:29:40PM +0100, Daniel Wagner wrote:
> Multiqueue devices should only allocate queues for the housekeeping CPUs
> when isolcpus=managed_irq is set. This avoids disturbing the isolated
> CPUs with OS workload.
>
> Use the helpers which calculate the correct number of queues to use
> when isolcpus is enabled.
>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>
The subject is misleading; one would think it touches only virtio-blk.
It's best to split the changes into one patch per driver.

For the virtio changes:
Acked-by: Michael S. Tsirkin <mst@redhat.com>
> ---
> drivers/block/virtio_blk.c | 5 ++---
> drivers/scsi/megaraid/megaraid_sas_base.c | 3 ++-
> drivers/scsi/virtio_scsi.c | 1 +
> 3 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index ed514ff46dc82acd629ae594cb0fa097bd301a9b..0287ceaaf19972f3a18e81cd2e3252e4d539ba93 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -976,9 +976,8 @@ static int init_vq(struct virtio_blk *vblk)
> return -EINVAL;
> }
>
> - num_vqs = min_t(unsigned int,
> - min_not_zero(num_request_queues, nr_cpu_ids),
> - num_vqs);
> + num_vqs = blk_mq_num_possible_queues(
> + min_not_zero(num_request_queues, num_vqs));
>
> num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
>
> diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
> index 59d385e5a917979ae2f61f5db2c3355b9cab7e08..3ff0978b3acb5baf757fee25d9fccf4971976272 100644
> --- a/drivers/scsi/megaraid/megaraid_sas_base.c
> +++ b/drivers/scsi/megaraid/megaraid_sas_base.c
> @@ -6236,7 +6236,8 @@ static int megasas_init_fw(struct megasas_instance *instance)
> intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ?
> true : false;
> if (intr_coalescing &&
> - (blk_mq_num_online_queues(0) >= MR_HIGH_IOPS_QUEUE_COUNT) &&
> + (blk_mq_num_online_queues(0) >=
> + MR_HIGH_IOPS_QUEUE_COUNT) &&
> (instance->msix_vectors == MEGASAS_MAX_MSIX_QUEUES))
> instance->perf_mode = MR_BALANCED_PERF_MODE;
> else
> diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
> index 60be1a0c61836ba643adcf9ad8d5b68563a86cb1..46ca0b82f57ce2211c7e2817dd40ee34e65bcbf9 100644
> --- a/drivers/scsi/virtio_scsi.c
> +++ b/drivers/scsi/virtio_scsi.c
> @@ -919,6 +919,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
> /* We need to know how many queues before we allocate. */
> num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
> num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);
> + num_queues = blk_mq_num_possible_queues(num_queues);
>
> num_targets = virtscsi_config_get(vdev, max_target) + 1;
>
>
> --
> 2.47.1