From: Thomas Gleixner <tglx@linutronix.de>
To: John Garry <john.garry@huawei.com>, Marc Zyngier <maz@kernel.org>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Marcin Wojtas <mw@semihalf.com>,
Russell King <linux@armlinux.org.uk>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
kernel-team@android.com
Subject: Re: [PATCH 1/2] genirq: Extract irq_set_affinity_masks() from devm_platform_get_irqs_affinity()
Date: Tue, 15 Mar 2022 15:25:48 +0100
Message-ID: <871qz370nn.ffs@tglx>
In-Reply-To: <eee8d4b8-6b47-d675-aa6c-b0376b693e87@huawei.com>
On Fri, Feb 18 2022 at 08:41, John Garry wrote:
> On 17/02/2022 17:17, Marc Zyngier wrote:
>>> I know you mentioned it in 2/2, but it would be interesting to see how
>>> network controller drivers can handle the problem of missing in-flight
>>> IO completions for managed irq shutdown. For storage controllers this
>>> is all now safely handled in the block layer.
>>
>> Do you have a pointer to this? It'd be interesting to see if there is
>> a common pattern.
>
> Check blk_mq_hctx_notify_offline() and other hotplug handler friends in
> block/blk-mq.c and also blk_mq_get_ctx()/blk_mq_map_queue()
>
> So the key steps in CPU offlining are:
> - when the last CPU in the HW queue context cpumask is going offline,
> we mark the HW queue as inactive and no longer queue requests there
> - drain all in-flight requests before we allow that last CPU to go
> offline, meaning that we always have a CPU online to service any
> completion interrupts
>
> This scheme relies on symmetrical HW submission and completion queues,
> and also on the blk-mq HW queue context cpumask being the same as the
> HW queue's IRQ affinity mask (see blk_mq_pci_map_queues()).
>
> I am not sure how well this would fit the networking stack or that
> Marvell driver.
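For the record, the pattern John describes boils down to roughly the
shape below. This is a condensed, hypothetical sketch, not the actual
blk-mq code; all my_* names are invented and the real implementation is
blk_mq_hctx_notify_offline() and friends in block/blk-mq.c:

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/wait.h>

#define MY_HCTX_INACTIVE	0

struct my_hctx {
	struct hlist_node	cpuhp_node;	/* hotplug instance linkage */
	struct cpumask		cpumask;	/* CPUs mapped to this HW queue */
	unsigned long		state;		/* MY_HCTX_INACTIVE lives here */
	atomic_t		nr_active;	/* in-flight request count */
	wait_queue_head_t	drain_wq;	/* woken on request completion */
};

/* Teardown callback of an AP hotplug state, runs while @cpu is still online */
static int my_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct my_hctx *hctx = container_of(node, struct my_hctx, cpuhp_node);
	unsigned int first;

	/* Bail out unless @cpu is the last online CPU in the queue's mask */
	first = cpumask_first_and(&hctx->cpumask, cpu_online_mask);
	if (first != cpu ||
	    cpumask_next_and(first, &hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
		return 0;

	/* 1) Stop new submissions on this HW queue */
	set_bit(MY_HCTX_INACTIVE, &hctx->state);
	smp_mb__after_atomic();	/* pairs with the check in the submission path */

	/* 2) Drain: wait until every in-flight request has completed */
	wait_event(hctx->drain_wq, !atomic_read(&hctx->nr_active));
	return 0;
}

/* The submission path tests the flag before queueing to this HW queue */
static bool my_hctx_may_queue(struct my_hctx *hctx)
{
	return !test_bit(MY_HCTX_INACTIVE, &hctx->state);
}

In blk-mq this is wired up as the teardown callback of
CPUHP_AP_BLK_MQ_ONLINE, so by the time the managed interrupt of that
queue is shut down nothing is in flight anymore and no completion can
get lost.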
The problem with networking is RX flow steering.
The driver in question initializes the RX flows in
mvpp22_port_rss_init() by default, so that packets are evenly
distributed across the RX queues.

So without actually steering the RX flows away from the RX queue which
is associated with the CPU that is about to be unplugged, this does not
really work well.
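To illustrate, steering away would have to look something like the
sketch below. Everything in it is hypothetical (struct layout, helper
names, table size); the only real anchor is that mvpp22_port_rss_init()
programs the initial indirection table:

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/types.h>

#define MY_RSS_TABLE_ENTRIES	32	/* indirection table size, made up */

struct my_port {
	struct hlist_node	cpuhp_node;
	u32			rss_table[MY_RSS_TABLE_ENTRIES]; /* hash bucket -> RX queue */
	int			rxq_of_cpu[NR_CPUS];	/* RX queue serviced by each CPU */
};

/* Hypothetical helper which flushes the indirection table to the HW */
static void my_port_write_rss_table(struct my_port *port);

static int my_port_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
	struct my_port *port = container_of(node, struct my_port, cpuhp_node);
	unsigned int survivor = cpumask_any_but(cpu_online_mask, cpu);
	int dead_rxq = port->rxq_of_cpu[cpu];
	int i;

	if (survivor >= nr_cpu_ids)
		return 0;	/* last CPU going down, nowhere to steer to */

	/*
	 * Repoint every hash bucket which targets the dying CPU's RX
	 * queue at a queue whose CPU stays online, so that new packets
	 * stop arriving there.
	 */
	for (i = 0; i < MY_RSS_TABLE_ENTRIES; i++) {
		if (port->rss_table[i] == dead_rxq)
			port->rss_table[i] = port->rxq_of_cpu[survivor];
	}
	my_port_write_rss_table(port);

	/*
	 * Packets already sitting in dead_rxq still have to be polled
	 * off before the queue interrupt goes away. Unlike blk-mq there
	 * is no generic drain infrastructure for that today.
	 */
	return 0;
}

Plus presumably the inverse operation on the CPU online path to spread
the load out again.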
Thanks,
tglx
Thread overview: 12+ messages
2022-02-16 9:08 [PATCH 0/2] net: mvpp2: Survive CPU hotplug events Marc Zyngier
2022-02-16 9:08 ` [PATCH 1/2] genirq: Extract irq_set_affinity_masks() from devm_platform_get_irqs_affinity() Marc Zyngier
2022-02-16 10:56 ` Greg Kroah-Hartman
2022-02-17 17:07 ` John Garry
2022-02-17 17:17 ` Marc Zyngier
2022-02-18 8:41 ` John Garry
2022-03-15 14:25 ` Thomas Gleixner [this message]
2022-02-16 9:08 ` [PATCH 2/2] net: mvpp2: Convert to managed interrupts to fix CPU HP issues Marc Zyngier
2022-02-16 11:38 ` Marc Zyngier
2022-02-16 13:19 ` [PATCH 0/2] net: mvpp2: Survive CPU hotplug events Marcin Wojtas
2022-02-16 13:29 ` Marc Zyngier
2022-02-16 13:32 ` Marcin Wojtas