From: Shay Drori <shayd@nvidia.com>
To: Joe Damato <jdamato@fastly.com>, Jakub Kicinski <kuba@kernel.org>,
<netdev@vger.kernel.org>, Daniel Borkmann <daniel@iogearbox.net>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
"Harshitha Ramamurthy" <hramamurthy@google.com>,
"moderated list:INTEL ETHERNET DRIVERS"
<intel-wired-lan@lists.osuosl.org>,
Jeroen de Borst <jeroendb@google.com>,
Jiri Pirko <jiri@resnulli.us>, Leon Romanovsky <leon@kernel.org>,
open list <linux-kernel@vger.kernel.org>,
"open list:MELLANOX MLX4 core VPI driver"
<linux-rdma@vger.kernel.org>,
Lorenzo Bianconi <lorenzo@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
Praveen Kaligineedi <pkaligineedi@google.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Shailend Chand <shailend@google.com>,
Tariq Toukan <tariqt@nvidia.com>,
"Tony Nguyen" <anthony.l.nguyen@intel.com>,
Willem de Bruijn <willemb@google.com>,
Yishai Hadas <yishaih@nvidia.com>,
Ziwei Xiao <ziweixiao@google.com>
Subject: Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers
Date: Wed, 14 Aug 2024 19:03:35 +0300
Message-ID: <701eb84c-8d26-4945-8af3-55a70e05b09c@nvidia.com>
In-Reply-To: <ZrzLEZs01KVkvBjw@LQ3V64L9R2>
On 14/08/2024 18:19, Joe Damato wrote:
> On Wed, Aug 14, 2024 at 08:09:15AM -0700, Jakub Kicinski wrote:
>> On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
>>> Actually... how about a slightly different approach, which caches
>>> the affinity mask in the core?
>>
>> I was gonna say :)
>>
>>> 0. Extend napi struct to have a struct cpumask * field
>>>
>>> 1. extend netif_napi_set_irq to:
>>> a. store the IRQ number in the napi struct (as you suggested)
>>> b. call irq_get_effective_affinity_mask to store the mask in the
>>> napi struct
>>> c. set up generic affinity_notify.notify and
>>> affinity_notify.release callbacks to update the in-core mask
>>> when it changes
>>
>> This part I'm not an expert on.
Several net drivers (mlx5, mlx4, ice, ena, and more) use a feature
called ARFS (rmap)[1], and that feature relies on the affinity notifier
mechanism.
Also, the affinity notifier infrastructure supports only a single
notifier per IRQ.
Hence, your suggestion (1.c) would break the ARFS feature.
[1] see irq_cpu_rmap_add()
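
For context, this is roughly how the ARFS path claims that single
notifier slot today (a minimal sketch of the common driver pattern
under CONFIG_RFS_ACCEL; example_arfs_setup and the single-entry rmap
size are illustrative, not taken from any one driver):

#include <linux/cpu_rmap.h>
#include <linux/netdevice.h>

static int example_arfs_setup(struct net_device *netdev, int irq)
{
	/* One rmap entry per completion IRQ; real drivers size this
	 * to the number of RX queues.
	 */
	netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(1);
	if (!netdev->rx_cpu_rmap)
		return -ENOMEM;

	/* irq_cpu_rmap_add() registers an affinity notifier for this
	 * IRQ internally, occupying the one notifier slot the genirq
	 * infra provides.
	 */
	return irq_cpu_rmap_add(netdev->rx_cpu_rmap, irq);
}

A later irq_set_affinity_notifier() call on the same IRQ from the core
would displace the rmap's notifier, which is what breaks ARFS.
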
>>
>>> 2. add napi_affinity_no_change which now takes a napi_struct
>>>
>>> 3. cleanup all 5 drivers:
>>> a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
>>> is needed, so this should be straightforward?)
>>> b. remove all affinity_mask caching code in 4 of 5 drivers
>>> c. update all 5 drivers to call napi_affinity_no_change in poll
>>>
>>> Then ... anyone who adds support for netif_napi_set_irq to their
>>> driver in the future gets automatic support in-core for
>>> caching/updating of the mask? And in the future netdev-genl could
>>> dump the mask since it's in-core?
>>>
>>> I'll mess around with that locally to see how it looks, but let me
>>> know if that sounds like a better overall approach.
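
To make 1.a/1.b concrete, here is a rough sketch of the core side.
The affinity_mask field and the exact signatures are my assumptions
about what the patch could look like, not its actual shape, and I use
an embedded cpumask instead of the pointer from step 0 just to keep
the sketch allocation-free:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Cache the IRQ number and its current effective affinity in the
 * napi struct, so drivers no longer keep private copies.
 */
void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
	const struct cpumask *mask;

	napi->irq = irq;
	if (irq <= 0)
		return;

	mask = irq_get_effective_affinity_mask(irq);
	if (mask)
		cpumask_copy(&napi->affinity_mask, mask);
}

/* Poll-time check: affinity is "unchanged" as long as the CPU we
 * are polling on is still in the cached mask.
 */
bool napi_affinity_no_change(const struct napi_struct *napi)
{
	return cpumask_test_cpu(smp_processor_id(),
				&napi->affinity_mask);
}
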
>
> I ended up going with the approach laid out above; moving the IRQ
> affinity mask updating code into the core (which adds that ability
> to gve/mlx4/mlx5... it seems mlx4/5 cached the mask but didn't have
> notifiers set up to update the cached copy?)
That is probably due to what I wrote above.
> and adding calls to
> netif_napi_set_irq in i40e/iavf and deleting their custom notifier
> code.
>
> It's almost ready for rfcv2; I think this approach is probably
> better?
>
>> Could we even handle this directly as part of __napi_poll(),
>> once the driver gives core all of the relevant pieces of information?
>
> I had been thinking the same thing, too, but it seems like at least
> one driver (mlx5) counts the number of affinity changes to export as
> a stat, so moving all of this to core would break that.
>
> So, I may avoid attempting that for this series.
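
On the driver side (3.c), the poll change could then follow mlx5's
existing pattern, roughly like this (the example_* names are made up,
the aff_change counter stands in for the stat mentioned above, and the
zero-budget netpoll case is ignored for brevity):

#include <linux/netdevice.h>

struct example_channel {
	struct napi_struct napi;
	u64 aff_change;	/* the kind of stat mlx5 exports */
};

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct example_channel *ch =
		container_of(napi, struct example_channel, napi);
	int work_done = example_clean_rings(ch, budget);

	if (work_done == budget) {
		/* Still busy: keep polling on this CPU only while the
		 * IRQ affinity still includes it.
		 */
		if (likely(napi_affinity_no_change(napi)))
			return budget;

		/* Affinity moved: count it (driver-visible, which is
		 * why the check can't silently move into
		 * __napi_poll()), then complete below so the next
		 * interrupt reschedules NAPI on a CPU in the new mask.
		 */
		ch->aff_change++;
		work_done--;
	}

	if (napi_complete_done(napi, work_done))
		example_arm_irq(ch);

	return work_done;
}
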
>
> I'm still messing around with this but will send an rfcv2 in a bit.
>