From: Paolo Abeni <pabeni@redhat.com>
To: Felix Fietkau <nbd@nbd.name>, Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next] net/core: add optional threading for backlog processing
Date: Tue, 28 Mar 2023 11:29:24 +0200
Message-ID: <751fd5bb13a49583b1593fa209bfabc4917290ae.camel@redhat.com>
In-Reply-To: <f59ee83f-7267-04df-7286-f7ea147b5b49@nbd.name>

On Fri, 2023-03-24 at 18:57 +0100, Felix Fietkau wrote:
> On 24.03.23 18:47, Jakub Kicinski wrote:
> > On Fri, 24 Mar 2023 18:35:00 +0100 Felix Fietkau wrote:
> > > I'm primarily testing this on routers with 2 or 4 CPUs and limited 
> > > processing power, handling routing/NAT. RPS is typically needed to 
> > > properly distribute the load across all available CPUs. When there is 
> > > only a small number of flows that are pushing a lot of traffic, a static 
> > > RPS assignment often leaves some CPUs idle, whereas others become a 
> > > bottleneck by being fully loaded. Threaded NAPI reduces this a bit, but 
> > > CPUs can become bottlenecked and fully loaded by a NAPI thread alone.
> > 
> > The NAPI thread becomes a bottleneck with RPS enabled?
> 
> The devices that I work with often only have a single rx queue. That can
> easily become a bottleneck.
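
[Note: the "static RPS assignment" mentioned above is the usual per-queue
rps_cpus bitmap in sysfs. A minimal userspace sketch of setting it, with
eth0 and rx-0 as placeholder names (needs root):]

  /* rps_mask.c - write a static RPS CPU bitmap for one rx queue. */
  #include <stdio.h>

  int main(void)
  {
      const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
      FILE *f = fopen(path, "w");

      if (!f) {
          perror(path);
          return 1;
      }
      /* hex CPU bitmap: 0xe = CPUs 1-3, leaving CPU 0 to the rx IRQ/NAPI */
      fputs("e\n", f);
      return fclose(f) ? 1 : 0;
  }

[The mask stays fixed until rewritten, which is what makes the assignment
static.]
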
> 
> > > Making backlog processing threaded helps split up the processing work 
> > > even more and distribute it onto remaining idle CPUs.
> > 
> > You'd want to have both threaded NAPI and threaded backlog enabled?
> 
> Yes
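
[Note: threaded NAPI by itself is already a runtime switch via the
per-device "threaded" attribute in sysfs; the backlog threading proposed
in this patch would sit on top of it. A minimal sketch of enabling
threaded NAPI, again with eth0 as a placeholder (needs root):]

  /* napi_threaded.c - enable threaded NAPI for one device. */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/sys/class/net/eth0/threaded", "w");

      if (!f) {
          perror("threaded");
          return 1;
      }
      fputs("1\n", f);    /* 0 disables, 1 enables threaded NAPI */
      return fclose(f) ? 1 : 0;
  }
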
> 
> > > It can basically be used to make RPS a bit more dynamic and 
> > > configurable, because you can assign multiple backlog threads to a set 
> > > of CPUs and selectively steer packets from specific devices / rx queues 
> > 
> > Can you give an example?
> > 
> > With the 4 CPU example, in case 2 queues are very busy - you're trying
> > to make sure that RPS work from one busy queue does not end up landing
> > on the same CPU as the other busy queue?
> 
> In this part I'm thinking about bigger systems where you want to have a
> group of CPUs dedicated to dealing with network traffic without
> assigning a fixed function (e.g. NAPI processing or RPS target) to each
> one, allowing for more dynamic processing.
> 
> > > to them and allow the scheduler to take care of the rest.
> > 
> > You trust the scheduler much more than I do, I think :)
> 
> In my tests it brings down latency (both avg and p99) considerably in
> some cases. I posted some numbers here:
> https://lore.kernel.org/netdev/e317d5bc-cc26-8b1b-ca4b-66b5328683c4@nbd.name/
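
[Note on the "assign multiple backlog threads to a set of CPUs" point
above: once the processing runs in ordinary kthreads, restricting them to
a CPU set is plain scheduler affinity. A minimal sketch that takes the
thread's PID as an argument (looking the PID up, e.g. via ps, is left
out), with CPUs 2-3 as placeholders for the dedicated set:]

  /* pin_to_cpus.c - restrict a thread (by PID/TID) to CPUs 2-3. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      cpu_set_t set;

      if (argc < 2) {
          fprintf(stderr, "usage: %s <pid>\n", argv[0]);
          return 1;
      }

      CPU_ZERO(&set);
      CPU_SET(2, &set);
      CPU_SET(3, &set);

      if (sched_setaffinity(atoi(argv[1]), sizeof(set), &set)) {
          perror("sched_setaffinity");
          return 1;
      }
      return 0;
  }

[Within that set the scheduler is free to move the thread around, which
is the "let the scheduler take care of the rest" part.]
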

It's still not 110% clear to me why/how this additional thread could
reduce latency. What/which threads are competing for the busy CPU[s]? I
suspect it could be easier/cleaner to move the other (non-RPS) threads
away instead.

Cheers,

Paolo



Thread overview: 14+ messages
2023-03-24 17:13 [PATCH net-next] net/core: add optional threading for backlog processing Felix Fietkau
2023-03-24 17:20 ` Jakub Kicinski
2023-03-24 17:35   ` Felix Fietkau
2023-03-24 17:47     ` Jakub Kicinski
2023-03-24 17:57       ` Felix Fietkau
2023-03-25  3:19         ` Jakub Kicinski
2023-03-25  5:42           ` Felix Fietkau
2023-03-28  2:06             ` Jakub Kicinski
2023-03-28  9:46               ` Felix Fietkau
2023-03-28  9:29         ` Paolo Abeni [this message]
2023-03-28  9:45           ` Felix Fietkau
2023-03-28 15:13             ` Paolo Abeni
2023-03-28 15:21               ` Felix Fietkau
2023-03-29 16:14     ` Jesper Dangaard Brouer
