BPF List
From: Jakub Kicinski <kuba@kernel.org>
To: Lorenzo Bianconi <lorenzo@kernel.org>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	willemdebruijn.kernel@gmail.com, toke@redhat.com,
	davem@davemloft.net, edumazet@google.com, bpf@vger.kernel.org,
	lorenzo.bianconi@redhat.com, sdf@google.com, jasowang@redhat.com
Subject: Re: [PATCH v5 net-next 1/3] net: introduce page_pool pointer in softnet_data percpu struct
Date: Wed, 17 Jan 2024 17:47:22 -0800	[thread overview]
Message-ID: <20240117174722.521c9fdf@kernel.org> (raw)
In-Reply-To: <ZagQGZ5CM3vEH2RP@lore-desk>

On Wed, 17 Jan 2024 18:36:25 +0100 Lorenzo Bianconi wrote:
> I would like to resume this activity, and it seems to me there is no
> clear direction about where we should add the page_pool (in a per-CPU
> pointer or in the netdev_rx_queue struct), or whether we can rely on
> page_frag_cache instead.
> 
> @Jakub: what do you think? Should we add a page_pool in a per-CPU pointer?

Let's try to summarize. We want skb reallocation without linearization
for XDP generic. We need some fast-ish way to get pages for the payload.

First, options for placing the allocator:
 - struct netdev_rx_queue
 - per-CPU

IMO per-CPU has better scaling properties - you're far less likely to
grow the CPU count towards infinity than to spawn extra netdev queues.
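
For concreteness, a minimal sketch of the per-CPU option (the field
matches the patch subject, the accessor name is mine; init and teardown
elided). It leans on the paths we care about running with BH disabled,
so we can't migrate off the CPU mid-allocation:

/* Sketch only: one pool per CPU, hanging off softnet_data. */
struct softnet_data {
	/* ... existing fields ... */
	struct page_pool *page_pool;	/* CPU-local source of payload pages */
};

/* hypothetical accessor; caller must run with BH disabled */
static struct page_pool *xdp_generic_page_pool(void)
{
	return this_cpu_ptr(&softnet_data)->page_pool;
}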

The second question is:
 - page_frag_cache
 - page_pool

I like page pool because we have a growing amount of infra for it, and
page pool is already used in veth, whose private pools we could
hopefully de-duplicate one day if we had a per-CPU one. But I do agree
that it's not a perfect fit.
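
To make the trade-off concrete, the two allocation paths look roughly
like this (wrapper names are hypothetical; GFP_ATOMIC because we expect
softirq context, see ad3 below):

/* (a) page_frag_cache: hands back a direct kernel pointer into a
 * (typically order-3) backing page; simple, but no recycling infra.
 */
static void *payload_from_frag_cache(struct page_frag_cache *nc,
				     unsigned int fragsz)
{
	return page_frag_alloc(nc, fragsz, GFP_ATOMIC);
}

/* (b) page_pool frag API: hands back a page + offset; pages are
 * recycled through the pool once their pp refcount drops.
 */
static struct page *payload_from_page_pool(struct page_pool *pool,
					   unsigned int *offset,
					   unsigned int fragsz)
{
	return page_pool_alloc_frag(pool, offset, fragsz, GFP_ATOMIC);
}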

To answer Jesper's questions:
 ad1. cache size - we can lower the page pool cache to match
      page_frag_cache, so I think 8 entries? page_frag_cache can give
      us bigger frags (it allocates from higher-order pages) and
      therefore a lower frag count, so that's a minus for using
      page pool (see the sketch after this list)
 ad2. nl API - we can extend netlink to dump unbound page pools fairly
      easily; I didn't want to do it without a clear use case, but I
      don't think there are any blockers
 ad3. locking - a bit independent of the allocator, but fair point; we
      assume XDP generic or the Rx path for now, so softirq context /
      BH locked out
 ad4. right, well, IDK what real workloads need, or whether XDP
      generic should be optimized at all... I personally lean
      towards "no"
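
Putting ad1 and ad2 together, the per-CPU pool would be created roughly
like this (a sketch only; the helper name and exact parameters are
assumptions, not settled choices):

static struct page_pool *create_cpu_page_pool(int cpu)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= 8,	/* ad1: match page_frag_cache */
		.nid		= cpu_to_node(cpu),
		.dev		= NULL,	/* ad2: unbound, no DMA mapping */
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}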
 
Sorry if I haven't helped much to clarify the direction :)
I have no strong preference on question #2; I would prefer not to add
per-queue state for something that's in no way tied to the device
(question #1 -> per-CPU).

You did a good perf analysis of the options, could you share it here
again?


Thread overview: 13+ messages
2023-12-14 14:29 [PATCH v5 net-next 0/3] add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-14 14:29 ` [PATCH v5 net-next 1/3] net: introduce page_pool pointer in softnet_data percpu struct Lorenzo Bianconi
2023-12-19 15:23   ` Paolo Abeni
2023-12-20 12:00     ` Jesper Dangaard Brouer
2024-01-17 17:36       ` Lorenzo Bianconi
2024-01-18  1:47         ` Jakub Kicinski [this message]
2024-01-22 11:19           ` Lorenzo Bianconi
2023-12-19 16:29   ` Eric Dumazet
2023-12-19 17:32     ` Lorenzo Bianconi
2023-12-14 14:29 ` [PATCH v5 net-next 2/3] xdp: rely on skb pointer reference in do_xdp_generic and netif_receive_generic_xdp Lorenzo Bianconi
2023-12-14 14:29 ` [PATCH v5 net-next 3/3] xdp: add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-20 16:01   ` Jesper Dangaard Brouer
2023-12-21  8:23     ` Lorenzo Bianconi
