From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <wagi@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>, Keith Busch <kbusch@kernel.org>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
Thomas Gleixner <tglx@linutronix.de>,
Costa Shulyupin <costa.shul@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Valentin Schneider <vschneid@redhat.com>,
Waiman Long <llong@redhat.com>,
Frederic Weisbecker <frederic@kernel.org>,
Mel Gorman <mgorman@suse.de>, Hannes Reinecke <hare@suse.de>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com,
linux-scsi@vger.kernel.org, storagedev@microchip.com,
virtualization@lists.linux.dev,
GR-QLogic-Storage-Upstream@marvell.com
Subject: Re: [PATCH v6 8/9] blk-mq: use hk cpus only when isolcpus=io_queue is enabled
Date: Fri, 9 May 2025 10:38:32 +0800 [thread overview]
Message-ID: <aB1qqNDEnHMlpMH_@fedora> (raw)
In-Reply-To: <20250424-isolcpus-io-queues-v6-8-9a53a870ca1f@kernel.org>
On Thu, Apr 24, 2025 at 08:19:47PM +0200, Daniel Wagner wrote:
> When isolcpus=io_queue is enabled all hardware queues should run on
> the housekeeping CPUs only. Thus ignore the affinity mask provided by
> the driver. Also we can't use blk_mq_map_queues because it will map all
> CPUs to the first hctx unless the CPU is the one the hctx has its
> affinity set to, e.g. an 8 CPU setup with isolcpus=io_queue,2-3,6-7:
>
> queue mapping for /dev/nvme0n1
> hctx0: default 2 3 4 6 7
> hctx1: default 5
> hctx2: default 0
> hctx3: default 1
>
> PCI name is 00:05.0: nvme0n1
> irq 57 affinity 0-1 effective 1 is_managed:0 nvme0q0
> irq 58 affinity 4 effective 4 is_managed:1 nvme0q1
> irq 59 affinity 5 effective 5 is_managed:1 nvme0q2
> irq 60 affinity 0 effective 0 is_managed:1 nvme0q3
> irq 61 affinity 1 effective 1 is_managed:1 nvme0q4
>
> whereas with blk_mq_hk_map_queues we get
>
> queue mapping for /dev/nvme0n1
> hctx0: default 2 4
> hctx1: default 3 5
> hctx2: default 0 6
> hctx3: default 1 7
>
> PCI name is 00:05.0: nvme0n1
> irq 56 affinity 0-1 effective 1 is_managed:0 nvme0q0
> irq 61 affinity 4 effective 4 is_managed:1 nvme0q1
> irq 62 affinity 5 effective 5 is_managed:1 nvme0q2
> irq 63 affinity 0 effective 0 is_managed:1 nvme0q3
> irq 64 affinity 1 effective 1 is_managed:1 nvme0q4
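
The difference above can be sketched as a toy model (not the kernel implementation, and `build_hk_map`/`drv_map` are hypothetical names): each housekeeping CPU keeps the queue the driver affinity assigned it, and the isolated CPUs are spread round-robin across the hardware contexts, so no hctx ends up serving only isolated CPUs:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 8
#define NR_HCTX 4

/*
 * Toy model of the hk mapping shown above: housekeeping CPUs keep
 * their driver-assigned hctx, isolated CPUs are distributed
 * round-robin over the hardware contexts.
 */
static void build_hk_map(const bool *isolated, const int *drv_map,
			 int *qmap)
{
	int next = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!isolated[cpu])
			qmap[cpu] = drv_map[cpu];	/* honor driver affinity */
		else
			qmap[cpu] = next++ % NR_HCTX;	/* round-robin isolated CPUs */
	}
}
```

With isolcpus=io_queue,2-3,6-7 and the driver affinities from the example (CPU 4 -> hctx0, 5 -> hctx1, 0 -> hctx2, 1 -> hctx3), this reproduces the hctx0..3 groupings {2,4}, {3,5}, {0,6}, {1,7} shown above.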
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>
> ---
> block/blk-mq-cpumap.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 67 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 6e6b3e989a5676186b5a31296a1b94b7602f1542..2d678d1db2b5196fc2b2ce5678fdb0cb6bad26e0 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -22,8 +22,8 @@ static unsigned int blk_mq_num_queues(const struct cpumask *mask,
> {
> unsigned int num;
>
> - if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
> - mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
> + if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
> + mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
Here both types can be considered when figuring out nr_hw_queues:
if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
else if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
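
A standalone toy model of that fallback chain (mask weights stand in for cpumasks, and `num_queues` is a hypothetical name, not the kernel helper): io_queue wins when enabled, otherwise managed_irq, otherwise the mask passed in, with the result clamped by max_queues as in blk_mq_num_queues():

```c
#include <assert.h>

/* min_not_zero() as in the kernel: the smaller non-zero value */
static unsigned int min_not_zero(unsigned int a, unsigned int b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}

/*
 * Toy model of the suggested fallback: a weight of 0 means the
 * housekeeping type is not enabled.
 */
static unsigned int num_queues(unsigned int mask_weight,
			       unsigned int io_queue_weight,
			       unsigned int managed_irq_weight,
			       unsigned int max_queues)
{
	unsigned int num = mask_weight;

	if (io_queue_weight)
		num = io_queue_weight;		/* isolcpus=io_queue set */
	else if (managed_irq_weight)
		num = managed_irq_weight;	/* isolcpus=managed_irq set */

	return min_not_zero(num, max_queues);
}
```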
>
> num = cpumask_weight(mask);
> return min_not_zero(num, max_queues);
> @@ -61,11 +61,73 @@ unsigned int blk_mq_num_online_queues(unsigned int max_queues)
> }
> EXPORT_SYMBOL_GPL(blk_mq_num_online_queues);
>
> +/*
> + * blk_mq_map_hk_queues - Create housekeeping CPU to hardware queue mapping
> + * @qmap: CPU to hardware queue map
> + *
> + * Create a housekeeping CPU to hardware queue mapping in @qmap. If the
> + * isolcpus feature is enabled and blk_mq_map_hk_queues returns true,
> + * @qmap contains a valid configuration honoring the io_queue
> + * configuration. If the isolcpus feature is disabled this function
> + * returns false.
> + */
> +static bool blk_mq_map_hk_queues(struct blk_mq_queue_map *qmap)
> +{
> + struct cpumask *hk_masks;
> + cpumask_var_t isol_mask;
> + unsigned int queue, cpu, nr_masks;
> +
> + if (!housekeeping_enabled(HK_TYPE_IO_QUEUE))
> + return false;
It could be more readable to move the above check to the caller.
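
The caller-side shape of that suggestion can be sketched with stubs (the names `map_queues`, `housekeeping_enabled_io_queue` and `blk_mq_map_hk_queues_body` are hypothetical stand-ins, not the kernel API): the housekeeping check moves out of the helper, which then only deals with building the mapping itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the kernel helpers. */
static bool io_queue_isolation_enabled;

static bool housekeeping_enabled_io_queue(void)
{
	return io_queue_isolation_enabled;
}

static bool blk_mq_map_hk_queues_body(int *qmap)
{
	qmap[0] = 0;	/* pretend a valid hk mapping was built */
	return true;
}

/*
 * Caller-side shape of the suggestion: the helper is only entered
 * when isolcpus=io_queue is active.
 */
static bool map_queues(int *qmap)
{
	if (housekeeping_enabled_io_queue())
		return blk_mq_map_hk_queues_body(qmap);
	return false;	/* fall back to the regular mapping */
}
```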
Thanks,
Ming
Thread overview: 28+ messages
2025-04-24 18:19 [PATCH v6 0/9] blk: honor isolcpus configuration Daniel Wagner
2025-04-24 18:19 ` [PATCH v6 1/9] lib/group_cpus: let group_cpu_evenly return number initialized masks Daniel Wagner
2025-04-28 12:37 ` Thomas Gleixner
2025-05-09 1:29 ` Ming Lei
2025-04-24 18:19 ` [PATCH v6 2/9] blk-mq: add number of queue calc helper Daniel Wagner
2025-05-09 1:43 ` Ming Lei
2025-04-24 18:19 ` [PATCH v6 3/9] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
2025-05-09 1:47 ` Ming Lei
2025-05-14 16:12 ` Daniel Wagner
2025-04-24 18:19 ` [PATCH v6 4/9] scsi: " Daniel Wagner
2025-05-09 1:49 ` Ming Lei
2025-04-24 18:19 ` [PATCH v6 5/9] virtio: blk/scsi: " Daniel Wagner
2025-05-09 1:52 ` Ming Lei
2025-04-24 18:19 ` [PATCH v6 6/9] isolation: introduce io_queue isolcpus type Daniel Wagner
2025-04-25 6:26 ` Hannes Reinecke
2025-04-25 7:32 ` Daniel Wagner
2025-05-09 2:04 ` Ming Lei
2025-05-14 16:08 ` Daniel Wagner
2025-04-24 18:19 ` [PATCH v6 7/9] lib/group_cpus: honor housekeeping config when grouping CPUs Daniel Wagner
2025-05-09 2:22 ` Ming Lei
[not found] ` <cd1576ee-82a3-4899-b218-2e5c5334af6e@redhat.com>
2025-05-14 17:49 ` Daniel Wagner
2025-04-24 18:19 ` [PATCH v6 8/9] blk-mq: use hk cpus only when isolcpus=io_queue is enabled Daniel Wagner
2025-05-09 2:38 ` Ming Lei [this message]
2025-05-15 8:36 ` Daniel Wagner
2025-04-24 18:19 ` [PATCH v6 9/9] blk-mq: prevent offlining hk CPU with associated online isolated CPUs Daniel Wagner
2025-04-25 6:28 ` Hannes Reinecke
2025-05-09 2:54 ` Ming Lei
2025-05-06 3:17 ` [PATCH v6 0/9] blk: honor isolcpus configuration Ming Lei