netdev.vger.kernel.org archive mirror
From: Jakub Kicinski <kuba@kernel.org>
To: Joe Damato <jdamato@fastly.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Samiullah Khawaja <skhawaja@google.com>,
	"David S . Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	almasrymina@google.com, willemb@google.com,
	mkarsten@uwaterloo.ca, netdev@vger.kernel.org
Subject: Re: [PATCH net-next v5] Add support to set napi threaded for individual napi
Date: Mon, 5 May 2025 11:56:50 -0700	[thread overview]
Message-ID: <20250505115650.300a3422@kernel.org> (raw)
In-Reply-To: <aBWHv6TAwLnbPhMd@LQ3V64L9R2>

On Fri, 2 May 2025 20:04:31 -0700 Joe Damato wrote:
> > The thing I did add (in the rx-buf-len series) was a hook to the queue
> > count changing code which wipes the configuration for queues which are
> > explicitly disabled.
> > So if you do some random reconfig (e.g. attach XDP) and driver recreates
> > all NAPIs - the config should stay around. Same if you do ifdown ifup.
> > But if you set NAPI count from 8 to 4 - the NAPIs 4..7 should get wiped.  
> 
> Yeah, I see. I will go back and re-read that series because I think
> I missed that part.
> 
> IIRC, if you:
>   1. set defer-hard-irqs on NAPIs 2 and 3,
>   2. resize down to 2 queues, and
>   3. then resize back up to 4,
> the settings for NAPIs 2 and 3 should be restored.
> 
> I now wonder if I should change that to be more like what you
> describe for rx-buf-len so we converge?

IMHO yes, but others may disagree.


Thread overview: 24+ messages
2025-04-23 20:14 [PATCH net-next v5] Add support to set napi threaded for individual napi Samiullah Khawaja
2025-04-24 23:13 ` Joe Damato
2025-04-25 18:28   ` Samiullah Khawaja
2025-04-25 22:24     ` Joe Damato
2025-04-25 22:52       ` Samiullah Khawaja
2025-04-26  0:37         ` Jakub Kicinski
2025-04-26  2:34           ` Joe Damato
2025-04-26  2:47             ` Jakub Kicinski
2025-04-26  3:12               ` Jakub Kicinski
2025-04-26  3:53                 ` Samiullah Khawaja
2025-04-28 18:23                   ` Jakub Kicinski
2025-04-28 19:25                     ` Samiullah Khawaja
2025-04-25 23:06     ` Samiullah Khawaja
2025-04-26  0:42 ` Jakub Kicinski
2025-04-26  2:31   ` Joe Damato
2025-04-26 14:41     ` Willem de Bruijn
2025-04-28 18:12       ` Joe Damato
2025-04-28 18:38         ` Jakub Kicinski
2025-04-28 21:29           ` Joe Damato
2025-04-28 22:32             ` Jakub Kicinski
2025-04-30  0:16               ` Joe Damato
2025-05-03  2:10                 ` Jakub Kicinski
2025-05-03  3:04                   ` Joe Damato
2025-05-05 18:56                     ` Jakub Kicinski [this message]
