From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: "Daniel Wagner" <wagi@kernel.org>, "Jens Axboe" <axboe@kernel.dk>,
	"Keith Busch" <kbusch@kernel.org>,
	"Christoph Hellwig" <hch@lst.de>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	"Kashyap Desai" <kashyap.desai@broadcom.com>,
	"Sumit Saxena" <sumit.saxena@broadcom.com>,
	"Shivasharan S" <shivasharan.srikanteshwara@broadcom.com>,
	"Chandrakanth patil" <chandrakanth.patil@broadcom.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Nilesh Javali" <njavali@marvell.com>,
	GR-QLogic-Storage-Upstream@marvell.com,
	"Don Brace" <don.brace@microchip.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Costa Shulyupin" <costa.shul@redhat.com>,
	"Juri Lelli" <juri.lelli@redhat.com>,
	"Valentin Schneider" <vschneid@redhat.com>,
	"Waiman Long" <llong@redhat.com>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Frederic Weisbecker" <frederic@kernel.org>,
	"Mel Gorman" <mgorman@suse.de>, "Hannes Reinecke" <hare@suse.de>,
	"Sridhar Balaraman" <sbalaraman@parallelwireless.com>,
	"brookxu.cn" <brookxu.cn@gmail.com>,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, storagedev@microchip.com,
	virtualization@lists.linux.dev
Subject: Re: [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled
Date: Fri, 20 Dec 2024 16:54:21 +0800	[thread overview]
Message-ID: <Z2UwvQoDM3f4zAxG@fedora>
In-Reply-To: <cc5e44dd-e1dc-4f24-88d9-ce45a8b0794f@flourine.local>

On Thu, Dec 19, 2024 at 04:38:43PM +0100, Daniel Wagner wrote:

> When isolcpus=managed_irq is enabled all hardware queues should run on
> the housekeeping CPUs only. Thus ignore the affinity mask provided by
> the driver.

Compared with the in-tree code, the above description is misleading:

- the irq core code respects isolated CPUs by trying to exclude them
from the effective affinity masks

- blk-mq won't schedule the block kworker (kblockd) on isolated CPUs

If applications aren't running on isolated CPUs, IO interrupts usually
won't be triggered on isolated CPUs, so isolated CPUs are _not_ ignored.
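
To be concrete, the irq core already filters the programmed affinity of
managed interrupts against the housekeeping mask. The logic in
irq_do_set_affinity() looks roughly like below (a simplified sketch from
memory, not a verbatim copy of kernel/irq/manage.c):

	```
	if (irqd_affinity_is_managed(data) &&
	    housekeeping_enabled(HK_TYPE_MANAGED_IRQ)) {
		const struct cpumask *hk_mask;

		hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

		cpumask_and(&tmp_mask, mask, hk_mask);
		if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
			/* best effort: fall back to the full mask */
			prog_mask = mask;
		else
			prog_mask = &tmp_mask;
	} else {
		prog_mask = mask;
	}
	```

Note the fallback to the full mask when no housekeeping CPU of the group
is online; that fallback is what makes the current behavior best-effort
rather than a hard guarantee.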

> On Thu, Dec 19, 2024 at 05:20:44PM +0800, Ming Lei wrote:
> > > +	cpumask_andnot(isol_mask,
> > > +		       cpu_possible_mask,
> > > +		       housekeeping_cpumask(HK_TYPE_MANAGED_IRQ));
> > > +
> > > +	for_each_cpu(cpu, isol_mask) {
> > > +		qmap->mq_map[cpu] = qmap->queue_offset + queue;
> > > +		queue = (queue + 1) % qmap->nr_queues;
> > > +	}
> > 
> > Looks the IO hang issue in V3 isn't addressed yet, is it?
> > 
> > https://lore.kernel.org/linux-block/ZrtX4pzqwVUEgIPS@fedora/
> 
> I've added an explanation in the cover letter why this is not
> addressed. From the cover letter:
> 
> I've experimented for a while and all solutions I came up were horrible
> hacks (the hotpath needs to be touched) and I don't want to slow down all
> other users (which are almost everyone). IMO, it's just not worth trying

IMO, this patchset is an improvement over the existing best-effort
approach, which works fine most of the time, so why do you think it would
slow down everyone?

> to fix this corner case. If the user is using isolcpus and does CPU
> hotplug, we can expect that the user can also first offline the isolated
> CPUs. I've discussed this topic during ALPSS and the room came to the
> same conclusion. Thus I just added a patch which issues a warning that
> IOs are likely to hang.

If the change needs userspace cooperation when 'managed_irq' is used, the
exact behavior needs to be documented in both this commit message and
Documentation/admin-guide/kernel-parameters.txt, not only in the cover
letter.
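
For example, the managed_irq entry in kernel-parameters.txt could gain a
sentence along these lines (only a rough suggestion, the exact wording is
up to you):

	```
	When CPUs are taken offline on a system booted with
	isolcpus=managed_irq, the isolated CPUs have to be offlined
	before the housekeeping CPUs which serve their hardware
	queues, otherwise IO issued from the isolated CPUs may hang.
	```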

But this patch does cause a regression for old applications which can't
follow the newly introduced rule:

	```
	If the user is using isolcpus and does CPU hotplug, we can expect that the
	user can also first offline the isolated CPUs.
	```

Thanks,
Ming


Thread overview: 29+ messages
2024-12-17 18:29 [PATCH v4 0/9] blk: honor isolcpus configuration Daniel Wagner
2024-12-17 18:29 ` [PATCH v4 1/9] lib/group_cpus: let group_cpu_evenly return number of groups Daniel Wagner
2025-01-07  7:51   ` Hannes Reinecke
2025-01-07  8:20     ` Daniel Wagner
2025-01-07 10:35       ` Hannes Reinecke
2025-01-07 10:46         ` Daniel Wagner
2025-01-07 11:35           ` Hannes Reinecke
2024-12-17 18:29 ` [PATCH v4 2/9] sched/isolation: document HK_TYPE housekeeping option Daniel Wagner
2025-01-07 15:39   ` Waiman Long
2024-12-17 18:29 ` [PATCH v4 3/9] blk-mq: add number of queue calc helper Daniel Wagner
2025-01-08  7:04   ` Hannes Reinecke
2024-12-17 18:29 ` [PATCH v4 4/9] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
2025-01-08  7:19   ` Hannes Reinecke
2024-12-17 18:29 ` [PATCH v4 5/9] scsi: " Daniel Wagner
2024-12-17 18:29 ` [PATCH v4 6/9] virtio: blk/scsi: " Daniel Wagner
2024-12-19  6:25   ` Christoph Hellwig
2024-12-19  8:31   ` Michael S. Tsirkin
2024-12-17 18:29 ` [PATCH v4 7/9] lib/group_cpus: honor housekeeping config when grouping CPUs Daniel Wagner
2024-12-17 18:29 ` [PATCH v4 8/9] blk-mq: use hk cpus only when isolcpus=managed_irq is enabled Daniel Wagner
2024-12-19  6:26   ` Christoph Hellwig
2024-12-19  9:20   ` Ming Lei
2024-12-19 15:38     ` Daniel Wagner
2024-12-20  8:54       ` Ming Lei [this message]
2025-01-10  9:21         ` Daniel Wagner
2025-01-11  3:31           ` Ming Lei
2025-01-13 13:19             ` Daniel Wagner
2024-12-17 18:29 ` [PATCH v4 9/9] blk-mq: issue warning when offlining hctx with online isolcpus Daniel Wagner
2024-12-19  6:28   ` Christoph Hellwig
2024-12-20  9:04   ` Ming Lei
