From: John Garry <john.garry@huawei.com>
To: Thomas Gleixner <tglx@linutronix.de>, Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>,
Marc Zyngier <marc.zyngier@arm.com>,
"axboe@kernel.dk" <axboe@kernel.dk>,
Keith Busch <keith.busch@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Linuxarm <linuxarm@huawei.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
SCSI Mailing List <linux-scsi@vger.kernel.org>
Subject: Re: Question on handling managed IRQs when hotplugging CPUs
Date: Tue, 29 Jan 2019 15:27:20 +0000
Message-ID: <bdd457db-6898-a085-bfaf-99d29d81467d@huawei.com>
In-Reply-To: <alpine.DEB.2.21.1901291257590.1513@nanos.tec.linutronix.de>

Hi Hannes, Thomas,

On 29/01/2019 12:01, Thomas Gleixner wrote:
> On Tue, 29 Jan 2019, Hannes Reinecke wrote:
>> That actually is a very good question, and I have been wondering about this
>> for quite some time.
>>
>> I find it a bit hard to envision a scenario where the IRQ affinity is
>> automatically (and, more importantly, atomically!) re-routed to one of the
>> other CPUs.

Isn't this what happens today for non-managed IRQs?

>> And even if it were, chances are that there are checks in the driver
>> _preventing_ them from handling those requests, seeing that they should have
>> been handled by another CPU ...

Really? I would not think that it matters which CPU we service the
interrupt on.

>>
>> I guess the safest bet is to implement a 'cleanup' worker queue which is
>> responsible for looking through all the outstanding commands (on all hardware
>> queues), and then completing those for which no corresponding CPU / irqhandler
>> can be found.
>>
>> But I defer to the higher authorities here; maybe I'm totally wrong and it's
>> already been taken care of.
>
> TBH, I don't know. I was merely involved in the genirq side of this. But
> yes, in order to make this work correctly the basic contract for the CPU
> hotplug case must be:
>
> If the last CPU which is associated with a queue (and the corresponding
> interrupt) goes offline, then the subsystem/driver code has to make sure
> that:
>
> 1) No more requests can be queued on that queue
>
> 2) All outstanding requests of that queue have been completed or redirected
> (don't know if that's possible at all) to some other queue.

This may not be possible. For the HW I deal with, we have symmetrical
delivery and completion queues, and a command delivered on DQx will
always complete on CQx. Each completion queue has a dedicated IRQ.

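To make the constraint concrete, here is a toy userspace model of the
symmetric queue pairing (names and structures are invented for
illustration, this is not our driver code): a command tagged on DQx is
tracked only in queue x's context, so only CQx's handler can look its
tag up and complete it.

```python
# Hypothetical model of symmetric delivery/completion queue pairs.
# Names and structures are invented; this is not driver code.

class QueuePair:
    def __init__(self, index):
        self.index = index
        self.inflight = {}   # tag -> command, owned by this pair only
        self.cq = []         # completions the HW will post here

    def deliver(self, tag, cmd):
        # A command submitted on DQx is tracked in DQx's context ...
        self.inflight[tag] = cmd

    def hw_complete(self, tag):
        # ... and the HW always posts its completion on the paired CQx.
        self.cq.append(tag)

    def irq_handler(self):
        # Only CQx's handler can resolve the tag; another queue's
        # handler has no entry for it in its own inflight table.
        done = []
        while self.cq:
            tag = self.cq.pop(0)
            done.append(self.inflight.pop(tag))
        return done

q0, q1 = QueuePair(0), QueuePair(1)
q0.deliver(7, "read")
q0.hw_complete(7)
# q1's handler cannot complete q0's command: tag 7 lives in q0 only.
assert q1.irq_handler() == []
assert q0.irq_handler() == ["read"]
```

So "redirect to some other queue" has nowhere to go here: the tag state
is per-pair, and the HW fixes which CQ the completion lands on.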
>
> That has to be done in that order obviously. Whether any of the
> subsystems/drivers actually implements this, I can't tell.
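
For what it's worth, the ordering in that contract would look roughly
like this (a sketch with invented names, not any real subsystem's API):
first fence off new submissions, then wait out what is already in
flight.

```python
# Sketch of the hotplug contract ordering: step 1 stops new requests,
# step 2 drains outstanding ones. All names here are invented.
import threading

class HwQueue:
    def __init__(self):
        self.frozen = False
        self.inflight = 0
        self.lock = threading.Lock()
        self.drained = threading.Condition(self.lock)

    def submit(self, cmd):
        with self.lock:
            if self.frozen:
                raise RuntimeError("queue going away; pick another queue")
            self.inflight += 1

    def complete(self):
        with self.lock:
            self.inflight -= 1
            if self.inflight == 0:
                self.drained.notify_all()

    def quiesce(self):
        with self.lock:
            # Step 1: no more requests can be queued on this queue.
            self.frozen = True
            # Step 2: wait until all outstanding requests have completed.
            while self.inflight:
                self.drained.wait()
```

The order matters: draining before freezing would let a late submitter
slip a request onto a queue whose IRQ is about to disappear.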

Going back to c5cb83bb337c25, it seems to me that the change was made
with the idea that we can maintain the affinity for the IRQ as we're
shutting it down, as no interrupts should occur.

However, I don't see why we can't instead keep the IRQ up and set its
affinity to all online CPUs in the offline path, and restore the
original affinity in the online path. The reason we set the queue
affinity to specific CPUs is performance, but I would not say that this
matters for handling residual IRQs.
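
In pseudocode terms, the scheme I'm suggesting is something like this (a
userspace sketch of the policy only; the dict/set representation stands
in for irq_desc and cpumasks, and none of these functions are the
genirq API):

```python
# Simulated policy for a managed IRQ's affinity across CPU hotplug.
# Cpumasks are Python sets; nothing here is real kernel code.

def cpu_offline(irq, cpu, online_cpus):
    """Called while 'cpu' is going down; it is still in online_cpus."""
    remaining = online_cpus - {cpu}
    if irq["managed_mask"] & remaining:
        return  # another CPU in the managed mask still covers the IRQ
    # Last CPU of the managed mask going away: instead of shutting the
    # IRQ down (what c5cb83bb337c25 does), keep it up and spread it
    # across all remaining online CPUs so residual IRQs are serviced.
    irq["effective_mask"] = set(remaining)

def cpu_online(irq, cpu):
    """Called when 'cpu' comes back: restore the managed affinity."""
    if cpu in irq["managed_mask"]:
        irq["effective_mask"] = set(irq["managed_mask"])

irq = {"managed_mask": {2}, "effective_mask": {2}}

cpu_offline(irq, 2, {0, 1, 2, 3})    # last CPU of the mask goes down
assert irq["effective_mask"] == {0, 1, 3}

cpu_online(irq, 2)                   # ... and later comes back
assert irq["effective_mask"] == {2}
```

The broadened mask is deliberately sloppy about locality; that's the
point above: for the handful of residual IRQs this handles, performance
does not matter.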
Thanks,
John
>
> Thanks,
>
> tglx
>
> .
>