From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Jesper Dangaard Brouer <hawk@kernel.org>, kuba@kernel.org
Cc: Paolo Abeni <pabeni@redhat.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	willemdebruijn.kernel@gmail.com, toke@redhat.com,
	davem@davemloft.net, edumazet@google.com, bpf@vger.kernel.org,
	lorenzo.bianconi@redhat.com, sdf@google.com, jasowang@redhat.com
Subject: Re: [PATCH v5 net-next 1/3] net: introduce page_pool pointer in softnet_data percpu struct
Date: Wed, 17 Jan 2024 18:36:25 +0100
Message-ID: <ZagQGZ5CM3vEH2RP@lore-desk>
In-Reply-To: <33bbb170-afdd-477f-9296-a9cede9bc2f2@kernel.org>

> 
> 
> On 19/12/2023 16.23, Paolo Abeni wrote:
> > On Thu, 2023-12-14 at 15:29 +0100, Lorenzo Bianconi wrote:
> > > Allocate percpu page_pools in softnet_data.
> > > Moreover, add a cpuid field in the page_pool struct in order to recycle the
> > > page in the page_pool "hot" cache if napi_pp_put_page() is running on
> > > the same cpu.
> > > This is a preliminary patch to add xdp multi-buff support for xdp running
> > > in generic mode.
> > > 
> > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > > ---
> > >   include/linux/netdevice.h       |  1 +
> > >   include/net/page_pool/helpers.h |  5 +++++
> > >   include/net/page_pool/types.h   |  1 +
> > >   net/core/dev.c                  | 39 ++++++++++++++++++++++++++++++++-
> > >   net/core/page_pool.c            |  5 +++++
> > >   net/core/skbuff.c               |  5 +++--
> > >   6 files changed, 53 insertions(+), 3 deletions(-)
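
For reference, a minimal sketch of what the description above amounts to.
The member name and the exact check are assumed for illustration, not
copied from the actual diff:

    /* sketch: one page_pool per CPU, hanging off softnet_data */
    struct softnet_data {
            /* ... existing members ... */
            struct page_pool        *page_pool;
    };

    /* sketch of the recycle path: direct ("hot" cache) recycling is
     * only safe when we run on the CPU the pool was created for
     */
    bool napi_pp_put_page(struct page *page, bool napi_safe)
    {
            struct page_pool *pp = page->pp;
            bool allow_direct = napi_safe ||
                                READ_ONCE(pp->cpuid) == smp_processor_id();

            page_pool_put_full_page(pp, page, allow_direct);
            return true;
    }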
> > 
> > @Jesper, @Ilias: could you please have a look at the pp bits?
> > 
> 
> I have some concerns... I'm still entertaining the idea, but we need to
> be aware of the tradeoffs we are making.
> 
> (1)
> Adding PP to softnet_data means caching up to 256 pages per CPU in the
> ptr_ring (plus likely 64 in the alloc cache).  Fortunately, PP starts
> out empty, so as long as this PP isn't used, nothing gets cached. But
> once used, PP has no MM shrinker that removes these cached pages when
> the system is under memory pressure.  I guess you can argue that
> keeping this per netdev rx-queue would make memory usage even higher.
> This is a tradeoff: we are trading memory (waste) for speed.
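
Back-of-envelope, assuming 4 KiB pages and the default sizes quoted
above:

    (256 + 64) pages * 4 KiB = 1.25 MiB per CPU

i.e. roughly 80 MiB pinned on a 64-CPU machine once every per-CPU pool
has been exercised, with no shrinker to reclaim it under memory
pressure.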
> 
> 
> (2) (Question to Jakub I guess)
> How does this connect with Jakub's PP netlink stats interface?
> E.g. I find it very practical that it allows us to get PP stats per
> netdev, but in this case there isn't a netdev.
> 
> 
> (3) (Implicit locking)
> PP has a lockless "alloc" path because it relies on the driver's NAPI
> context.  The places where the netstack accesses softnet_data provide
> similar protection that we can rely on for PP, so this is likely
> correct implementation-wise.  But it will give people like Sebastian
> (Cc) more gray hair when figuring out how PREEMPT_RT handles these
> cases.
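
Concretely, the implicit protection above is the usual softnet_data
access pattern; a minimal sketch (not the patch's actual code, and
assuming the page_pool member name from the sketch earlier):

    struct softnet_data *sd;
    struct page *page;

    /* softnet_data may only be touched from softirq/NAPI context or
     * with bottom halves disabled; the same rule is what makes the
     * page_pool's lockless alloc cache safe to use here
     */
    local_bh_disable();
    sd = this_cpu_ptr(&softnet_data);
    page = page_pool_alloc_pages(sd->page_pool, GFP_ATOMIC);
    local_bh_enable();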
> 
> (4)
> The optimization is needed for the case where we need to re-allocate and
> copy SKB fragments.  I think we should focus on avoiding this code path,
> instead of optimizing it.  For UDP it should be fairly easy, but for TCP
> this is harder.

Hi all,

I would like to resume this activity, and it seems to me there is no clear
direction yet about where we should add the page_pool (in a per_cpu pointer
or in the netdev_rx_queue struct), or whether we can rely on a
page_frag_cache instead.

@Jakub: what do you think? Should we add a page_pool in a per_cpu pointer?

Regards,
Lorenzo

> 
> --Jesper

