public inbox for linux-block@vger.kernel.org
From: Ming Lei <ming.lei@redhat.com>
To: Aaron Tomlin <atomlin@atomlin.com>
Cc: axboe@kernel.dk, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
	mst@redhat.com, aacraid@microsemi.com,
	James.Bottomley@hansenpartnership.com,
	martin.petersen@oracle.com, liyihang9@h-partners.com,
	kashyap.desai@broadcom.com, sumit.saxena@broadcom.com,
	shivasharan.srikanteshwara@broadcom.com,
	chandrakanth.patil@broadcom.com, sathya.prakash@broadcom.com,
	sreekanth.reddy@broadcom.com,
	suganath-prabu.subramani@broadcom.com, ranjan.kumar@broadcom.com,
	jinpu.wang@cloud.ionos.com, tglx@kernel.org, mingo@redhat.com,
	peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, akpm@linux-foundation.org,
	maz@kernel.org, ruanjinjie@huawei.com, bigeasy@linutronix.de,
	yphbchou0911@gmail.com, wagi@kernel.org, frederic@kernel.org,
	longman@redhat.com, chenridong@huawei.com, hare@suse.de,
	kch@nvidia.com, steve@abita.co, sean@ashe.io, chjohnst@gmail.com,
	neelx@suse.com, mproche@gmail.com, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
	megaraidlinux.pdl@broadcom.com, mpi3mr-linuxdrv.pdl@broadcom.com,
	MPT-FusionLinux.pdl@broadcom.com
Subject: Re: [PATCH v10 13/13] docs: add io_queue flag to isolcpus
Date: Mon, 6 Apr 2026 11:29:38 +0800
Message-ID: <adMoon3Zf6gO-UbA@fedora>
In-Reply-To: <nxe24ixebb4lm2d5w4aubhtwr23df6mumqd663axj35oswdiyv@amtqhtsidyr4>

On Sun, Apr 05, 2026 at 09:15:36PM -0400, Aaron Tomlin wrote:
> On Fri, Apr 03, 2026 at 10:30:26AM +0800, Ming Lei wrote:
> > On Wed, Apr 01, 2026 at 06:23:12PM -0400, Aaron Tomlin wrote:
> > 
> > All these can be supported by `managed_irq` already. Please document what
> > `io_queue` solves that `managed_irq` can't cover, so users know how to
> > choose between the two command line options.
> > 
> > `Restrict the placement of queues to housekeeping CPUs only` looks totally
> > stale; please see patch 10, in which isolated CPUs are spread too.
> 
> Dear Ming,
> 
> Thank you for your careful review of the documentation and for raising
> these excellent points. I completely agree that the administrator guide
> must be as unambiguous as possible.
> 
> Regarding your first point on the distinction between managed_irq and
> io_queue, you are entirely correct that the documentation must explicitly
> guide the user in their choice. I shall revise the text to clarify that
> where managed_irq solely restricts the affinity of hardware interrupts at
> the interrupt controller level, io_queue governs the block layer
> multi-queue mapping algorithm itself. I will add a clear explanation that
> io_queue is required for users who utilise polling queues, which do not
> rely on interrupts, or specific drivers that do not use the managed
> interrupt infrastructure. Without io_queue, the block layer would still
> assign these polling duties to isolated CPUs, thereby breaking the
> isolation.

I don't think isolation is actually broken here. For iopoll, if
applications do not submit polled IO on isolated CPUs, everything is just
fine. If they do, IO may be reaped from isolated CPUs, but that is their
choice; what is wrong with that?
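
For illustration, the managed_irq/io_queue distinction drawn in the quoted
paragraph can be sketched as a toy model (plain Python, not kernel code; the
CPU sets, function names, and round-robin placement below are assumptions for
illustration only, not the actual algorithm in these patches):

```python
# Toy model of what each isolcpus flag constrains. Not kernel code;
# CPU sets and placement policy are illustrative assumptions.

HOUSEKEEPING = {0, 1}   # assumed housekeeping CPUs; {2, 3} are isolated

def naive_map(cpus, nr_queues):
    # Baseline: spread every CPU over all hardware queues.
    return {cpu: cpu % nr_queues for cpu in sorted(cpus)}

def irq_affinity(queue_cpus):
    # managed_irq: leave the CPU->queue map alone and only steer each
    # queue's hardware interrupt toward its housekeeping CPUs. A poll
    # queue raises no interrupt, so this knob cannot reach it.
    hk = queue_cpus & HOUSEKEEPING
    return hk or queue_cpus   # fall back if the queue has no housekeeping CPU

def io_queue_map(cpus, nr_queues):
    # io_queue: constrain the blk-mq mapping itself. Queues are served by
    # housekeeping CPUs, and isolated CPUs are spread over those
    # housekeeping-served queues for submission (cf. patch 10).
    hk = sorted(cpus & HOUSEKEEPING)
    nr = min(nr_queues, len(hk))
    qmap = {cpu: i % nr for i, cpu in enumerate(hk)}
    for i, cpu in enumerate(sorted(cpus - HOUSEKEEPING)):
        qmap[cpu] = i % nr
    return qmap
```

In this sketch, io_queue_map({0, 1, 2, 3}, 4) shrinks 4 queues to 2 and maps
the isolated CPUs 2 and 3 onto the housekeeping-served queues, while
irq_affinity only filters interrupt masks and never changes the mapping.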

> 
> Every logical CPU, including the isolated ones, must map to a hardware
> context in order to submit I/O requests; saying they are completely
> restricted is indeed stale and technically inaccurate. The
> isolation mechanism actually ensures that the hardware contexts themselves
> are serviced by the housekeeping CPUs, while the isolated CPUs are simply
> mapped onto these housekeeping queues for submission purposes. I will
> rewrite this paragraph to accurately reflect this topology, ensuring it
> aligns perfectly with the behaviour introduced in patch 10.

I am not sure the above wording is helpful from an administrator's viewpoint
on the two kernel parameters.

IMO, there are only two differences from this viewpoint:

1) `io_queue` may reduce nr_hw_queues

2) when an application submits IO from isolated CPUs, `io_queue` completes the
IO from housekeeping CPUs.
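
These two differences can be put in the same toy style (illustrative Python
only; function names and numbers are assumptions, not code from the series):

```python
# Toy sketch of the two administrator-visible differences above.
# Not kernel code; names and values are illustrative assumptions.

def nr_hw_queues(driver_queues, housekeeping_cpus):
    # 1) `io_queue` may reduce nr_hw_queues: a queue that no housekeeping
    #    CPU is allowed to service is not worth allocating.
    return min(driver_queues, len(housekeeping_cpus))

def completion_cpu(submitting_cpu, queue_service_cpu, isolated_cpus):
    # 2) IO submitted from an isolated CPU is completed from the
    #    housekeeping CPU that services its queue, not the submitter.
    if submitting_cpu in isolated_cpus:
        return queue_service_cpu
    return submitting_cpu

print(nr_hw_queues(8, {0, 1}))       # a driver offering 8 queues gets 2
print(completion_cpu(3, 0, {2, 3}))  # isolated CPU 3 submits, CPU 0 completes
```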

> 
> > > +
> > > +			  The io_queue configuration takes precedence
> > > +			  over managed_irq. When io_queue is used,
> > > +			  managed_irq placement constraints have no
> > > +			  effect.
> > > +
> > > +			  Note: Offlining housekeeping CPUs which serve
> > > +			  isolated CPUs will be rejected. Isolated CPUs
> > > +			  need to be offlined before offlining the
> > > +			  housekeeping CPUs.
> > > +
> > > +			  Note: When an isolated CPU issues an I/O request,
> > > +			  it is forwarded to a housekeeping CPU. This will
> > > +			  trigger a software interrupt on the completion
> > > +			  path.
> > 
> > `io_queue` doesn't touch the IO completion code path, and this is more of
> > an implementation detail, so I am not sure the above Note is needed.
> 
> Possibly the original author intended to suggest that the software
> interrupt is sent to the isolated CPU?

I meant this point can't be found in the patches.


Thanks, 
Ming



Thread overview: 25+ messages
2026-04-01 22:22 [PATCH v10 00/13] blk: honor isolcpus configuration Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 01/13] scsi: aacraid: use block layer helpers to calculate num of queues Aaron Tomlin
2026-04-03  1:43   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 02/13] lib/group_cpus: remove dead !SMP code Aaron Tomlin
2026-04-03  1:45   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 03/13] lib/group_cpus: Add group_mask_cpus_evenly() Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 04/13] genirq/affinity: Add cpumask to struct irq_affinity Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 05/13] blk-mq: add blk_mq_{online|possible}_queue_affinity Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 06/13] nvme-pci: use block layer helpers to constrain queue affinity Aaron Tomlin
2026-04-03  1:46   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 07/13] scsi: Use " Aaron Tomlin
2026-04-03  1:46   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 08/13] virtio: blk/scsi: use " Aaron Tomlin
2026-04-03  1:47   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 09/13] isolation: Introduce io_queue isolcpus type Aaron Tomlin
2026-04-03  1:47   ` Martin K. Petersen
2026-04-01 22:23 ` [PATCH v10 10/13] blk-mq: use hk cpus only when isolcpus=io_queue is enabled Aaron Tomlin
2026-04-03  2:06   ` Waiman Long
2026-04-05 23:09     ` Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 11/13] blk-mq: prevent offlining hk CPUs with associated online isolated CPUs Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 12/13] genirq/affinity: Restrict managed IRQ affinity to housekeeping CPUs Aaron Tomlin
2026-04-01 22:23 ` [PATCH v10 13/13] docs: add io_queue flag to isolcpus Aaron Tomlin
2026-04-03  2:30   ` Ming Lei
2026-04-06  1:15     ` Aaron Tomlin
2026-04-06  3:29       ` Ming Lei [this message]
