linux-block.vger.kernel.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: "Daniel Wagner" <wagi@kernel.org>, "Jens Axboe" <axboe@kernel.dk>,
	"Keith Busch" <kbusch@kernel.org>,
	"Christoph Hellwig" <hch@lst.de>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	"Kashyap Desai" <kashyap.desai@broadcom.com>,
	"Sumit Saxena" <sumit.saxena@broadcom.com>,
	"Shivasharan S" <shivasharan.srikanteshwara@broadcom.com>,
	"Chandrakanth patil" <chandrakanth.patil@broadcom.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Nilesh Javali" <njavali@marvell.com>,
	GR-QLogic-Storage-Upstream@marvell.com,
	"Don Brace" <don.brace@microchip.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Costa Shulyupin" <costa.shul@redhat.com>,
	"Juri Lelli" <juri.lelli@redhat.com>,
	"Valentin Schneider" <vschneid@redhat.com>,
	"Waiman Long" <llong@redhat.com>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Frederic Weisbecker" <frederic@kernel.org>,
	"Mel Gorman" <mgorman@suse.de>, "Hannes Reinecke" <hare@suse.de>,
	"Sridhar Balaraman" <sbalaraman@parallelwireless.com>,
	"brookxu.cn" <brookxu.cn@gmail.com>,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, storagedev@microchip.com,
	virtualization@lists.linux.dev
Subject: Re: [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled
Date: Sat, 11 Jan 2025 11:31:10 +0800	[thread overview]
Message-ID: <Z4Hl_oQhJ2u6g2B3@fedora> (raw)
In-Reply-To: <ae9abe4c-010a-41ff-be44-1d52a331eb11@flourine.local>

On Fri, Jan 10, 2025 at 10:21:49AM +0100, Daniel Wagner wrote:
> Hi Ming,
> 
> On Fri, Dec 20, 2024 at 04:54:21PM +0800, Ming Lei wrote:
> > On Thu, Dec 19, 2024 at 04:38:43PM +0100, Daniel Wagner wrote:
> > 
> > > When isolcpus=managed_irq is enabled all hardware queues should run on
> > > the housekeeping CPUs only. Thus ignore the affinity mask provided by
> > > the driver.
> > 
> > Compared with the in-tree code, the above wording is misleading.
> > 
> > - irq core code respects isolated CPUs by trying to exclude isolated
> > CPUs from effective masks
> > 
> > - blk-mq won't schedule kblockd work on isolated CPUs
> 
> I see your point; the commit message should highlight the fact that
> when an application issues an I/O, this can lead to stalls.
> 
> What about a commit message like:
> 
>   When isolcpus=managed_irq is enabled and the last housekeeping CPU for
>   a given hardware context goes offline, there is no CPU left to handle
>   the IOs anymore. If isolated CPUs mapped to this hardware context are
>   online and an application running on these isolated CPUs issues an IO,
>   this will lead to stalls.

That isn't correct; the in-tree code doesn't have such a stall, no matter
whether the IO is issued from housekeeping or isolated CPUs, since the
managed irq is guaranteed to stay alive as long as any mapped CPU is
online.

> 
>   The kernel will not schedule IO to isolated CPUs, thus avoiding IO
>   stalls.
> 
>   Thus issue a warning when housekeeping CPUs are offlined for a hardware
>   context while there are still isolated CPUs online.
> 
> > If applications aren't running on isolated CPUs, IO interrupts usually
> > won't be triggered on isolated CPUs, so isolated CPUs are _not_ ignored.
> 
> FWIW, the 'usually' part is what made our customers nervous. They saw
> some IRQ noise on the isolated CPUs with their workload and reported
> that with these changes all was good. Unfortunately, we never got our
> hands on the workload, so it's hard to say what was causing it.

Please see irq_do_set_affinity():

        if (irqd_affinity_is_managed(data) &&
            housekeeping_enabled(HK_TYPE_MANAGED_IRQ)) {
                const struct cpumask *hk_mask;

                hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

                cpumask_and(&tmp_mask, mask, hk_mask);
                if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
                        prog_mask = mask;
                else
                        prog_mask = &tmp_mask;
        } else {
                prog_mask = mask;
        }

The whole mask, which may include isolated CPUs, is only programmed into
hardware if there isn't any online CPU left in `irq_mask & hk_mask`.

> 
> > > On Thu, Dec 19, 2024 at 05:20:44PM +0800, Ming Lei wrote:
> > > > > +	cpumask_andnot(isol_mask,
> > > > > +		       cpu_possible_mask,
> > > > > +		       housekeeping_cpumask(HK_TYPE_MANAGED_IRQ));
> > > > > +
> > > > > +	for_each_cpu(cpu, isol_mask) {
> > > > > +		qmap->mq_map[cpu] = qmap->queue_offset + queue;
> > > > > +		queue = (queue + 1) % qmap->nr_queues;
> > > > > +	}
> > > > 
> > > > Looks like the IO hang issue in V3 isn't addressed yet, is it?
> > > > 
> > > > https://lore.kernel.org/linux-block/ZrtX4pzqwVUEgIPS@fedora/
> > > 
> > > I've added an explanation in the cover letter why this is not
> > > addressed. From the cover letter:
> > > 
> > > I've experimented for a while and all solutions I came up with were
> > > horrible hacks (the hotpath needs to be touched) and I don't want to
> > > slow down all other users (which are almost everyone). IMO, it's just not worth trying
> > 
> > IMO, this patchset is an improvement on the existing best-effort approach,
> > which works fine most of the time, so why do you think it slows down everyone?
> 
> I was talking about implementing the feature which would remap the
> isolated CPUs to an online hardware context when the current hardware
> context goes offline. I didn't find a solution which I think would be
> worth presenting. All involved some sort of locking/refcounting in the
> hotpath, which I think we should just avoid.

I understand the trouble, but it is still an improvement from the user's
viewpoint rather than a new feature, since the interface of
'isolcpus=managed_irq' isn't changed.

> 
> > > to fix this corner case. If the user is using isolcpus and does CPU
> > > hotplug, we can expect that the user can also first offline the isolated
> > > CPUs. I've discussed this topic during ALPSS and the room came to the
> > > same conclusion. Thus I just added a patch which issues a warning that
> > > IOs are likely to hang.
> > 
> > If the change needs userspace cooperation for using 'managed_irq', the
> > exact behavior needs to be documented in both this commit and
> > Documentation/admin-guide/kernel-parameters.txt, instead of in the
> > cover letter only.
> > 
> > But this patch does cause a regression for old applications which can't
> > follow the newly introduced rule:
> > 
> > 	```
> > 	If the user is using isolcpus and does CPU hotplug, we can expect that the
> > 	user can also first offline the isolated CPUs.
> > 	```
> 
> Indeed, I forgot to update the documentation. I'll update it accordingly.

It isn't a documentation thing; it breaks the no-regression policy, which
crosses our red line.

If you really want to move on, please add a new kernel command-line
option and document the new usage, which requires applications to
offline the CPUs in the proper order.


thanks,
Ming

