From: Ahmed Zaki <ahmed.zaki@intel.com>
To: Joe Damato <jdamato@fastly.com>, <netdev@vger.kernel.org>,
<intel-wired-lan@lists.osuosl.org>, <andrew+netdev@lunn.ch>,
<edumazet@google.com>, <kuba@kernel.org>, <horms@kernel.org>,
<pabeni@redhat.com>, <davem@davemloft.net>,
<michael.chan@broadcom.com>, <tariqt@nvidia.com>,
<anthony.l.nguyen@intel.com>, <przemyslaw.kitszel@intel.com>,
<shayd@nvidia.com>, <akpm@linux-foundation.org>,
<shayagr@amazon.com>, <kalesh-anakkur.purayil@broadcom.com>
Subject: Re: [Intel-wired-lan] [PATCH net-next v7 2/5] net: napi: add CPU affinity to napi_config
Date: Wed, 5 Feb 2025 08:20:20 -0700
Message-ID: <8270a43c-61f8-446d-8701-4fbd13a72e32@intel.com>
In-Reply-To: <Z6KYDs0os_DizhMa@LQ3V64L9R2>
On 2025-02-04 3:43 p.m., Joe Damato wrote:
> On Tue, Feb 04, 2025 at 03:06:19PM -0700, Ahmed Zaki wrote:
>> A common task for most drivers is to remember the user-set CPU affinity
>> of their IRQs. On each netdev reset, the driver should re-apply the
>> user's settings to the IRQs.
>>
>> Add CPU affinity mask to napi_config. To delegate the CPU affinity
>> management to the core, drivers must:
>> 1 - set the new netdev flag "irq_affinity_auto":
>> netif_enable_irq_affinity(netdev)
>> 2 - create the napi with persistent config:
>> netif_napi_add_config()
>> 3 - bind an IRQ to the napi instance: netif_napi_set_irq()
>>
>> The core will then make sure to re-assign the affinity to the napi's
>> IRQ on each reset.
>>
>> The default IRQ affinity mask is one CPU each, starting from the closest NUMA node.
>
> Not sure, but maybe the above should be documented somewhere like
> Documentation/networking/napi.rst or similar?
>
> Maybe that's too nit-picky, though, since the per-NAPI config stuff
> never made it into the docs (I'll propose a patch to fix that).
Yeah, and not all of the API is there (e.g. netif_napi_set_irq()).
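
For anyone following along, the driver-side usage of the three steps from
the commit message boils down to something like the sketch below. The
struct/function names (example_q_vector, example_poll) and the surrounding
probe/open plumbing are hypothetical and driver-specific; only the three
netif_* calls are from this series:

```c
/* Sketch of the three steps a driver takes to delegate IRQ affinity
 * management to the core. example_q_vector/example_poll are placeholder
 * names; real drivers wire this into their own queue-vector setup.
 */
static int example_setup_napi(struct net_device *netdev,
			      struct example_q_vector *qv, int irq)
{
	/* 1 - opt in to core-managed IRQ affinity */
	netif_enable_irq_affinity(netdev);

	/* 2 - create the napi with persistent per-index config */
	netif_napi_add_config(netdev, &qv->napi, example_poll, qv->idx);

	/* 3 - bind the IRQ to the napi; the core now tracks the user-set
	 * affinity and re-applies it across netdev resets
	 */
	netif_napi_set_irq(&qv->napi, irq);

	return 0;
}
```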
>
>> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>
>> ---
>> include/linux/netdevice.h | 14 +++++++--
>> net/core/dev.c | 62 +++++++++++++++++++++++++++++++--------
>> 2 files changed, 61 insertions(+), 15 deletions(-)
>
> [...]
>
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index 33e84477c9c2..4cde7ac31e74 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>
> [...]
>
>> @@ -6968,17 +6983,28 @@ void netif_napi_set_irq_locked(struct napi_struct *napi, int irq)
>> {
>> int rc;
>>
>> - /* Remove existing rmap entries */
>> - if (napi->dev->rx_cpu_rmap_auto &&
>> + /* Remove existing resources */
>> + if ((napi->dev->rx_cpu_rmap_auto || napi->dev->irq_affinity_auto) &&
>> napi->irq != irq && napi->irq > 0)
>> irq_set_affinity_notifier(napi->irq, NULL);
>>
>> napi->irq = irq;
>> - if (irq > 0) {
>> + if (irq < 0)
>> + return;
>> +
>> + if (napi->dev->rx_cpu_rmap_auto) {
>> rc = napi_irq_cpu_rmap_add(napi, irq);
>> if (rc)
>> netdev_warn(napi->dev, "Unable to update ARFS map (%d)\n",
>> rc);
>> + } else if (napi->config && napi->dev->irq_affinity_auto) {
>> + napi->notify.notify = netif_napi_irq_notify;
>> + napi->notify.release = netif_napi_affinity_release;
>> +
>> + rc = irq_set_affinity_notifier(irq, &napi->notify);
>> + if (rc)
>> + netdev_warn(napi->dev, "Unable to set IRQ notifier (%d)\n",
>> + rc);
>> }
>
> Should there be a WARN_ON or WARN_ON_ONCE in here somewhere if the
> driver calls netif_napi_set_irq_locked but did not link NAPI config
> with a call to netif_napi_add_config?
>
> It seems like in that case the driver is buggy and a warning might
> be helpful.
>
I think that is a good idea. If there is a new version, I can add this in
the second branch of the if:

	if (WARN_ON_ONCE(!napi->config))
		return;
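
That is, the second branch of the hunk above might read roughly as follows
(a sketch only; the exact placement of the guard is up to the next
revision, and with the WARN_ON_ONCE in place the silent napi->config test
in the else-if condition could be dropped):

```c
	} else if (napi->dev->irq_affinity_auto) {
		/* A driver that enabled irq_affinity_auto but did not
		 * link persistent config via netif_napi_add_config() is
		 * buggy; warn once instead of failing silently.
		 */
		if (WARN_ON_ONCE(!napi->config))
			return;

		napi->notify.notify = netif_napi_irq_notify;
		napi->notify.release = netif_napi_affinity_release;

		rc = irq_set_affinity_notifier(irq, &napi->notify);
		if (rc)
			netdev_warn(napi->dev, "Unable to set IRQ notifier (%d)\n",
				    rc);
	}
```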