From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Jakub Kicinski <kuba@kernel.org>, Lorenzo Bianconi <lorenzo@kernel.org>
Cc: aleksander.lobakin@intel.com, netdev@vger.kernel.org,
	davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
	lorenzo.bianconi@redhat.com, bpf@vger.kernel.org,
	toke@redhat.com, willemdebruijn.kernel@gmail.com,
	jasowang@redhat.com, sdf@google.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode
Date: Wed, 6 Dec 2023 13:41:49 +0100	[thread overview]
Message-ID: <4b9804e2-42f0-4aed-b191-2abe24390e37@kernel.org> (raw)
In-Reply-To: <20231205155849.49af176c@kernel.org>



On 12/6/23 00:58, Jakub Kicinski wrote:
> On Wed, 6 Dec 2023 00:08:15 +0100 Lorenzo Bianconi wrote:
>> v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01 ==(XDP_REDIRECT)==> v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11
>>
>> - v00: iperf3 client (pinned on core 0)
>> - v11: iperf3 server (pinned on core 7)
>>
>> net-next veth codebase (page_pool APIs):
>> =======================================
>> - MTU  1500: ~ 5.42 Gbps
>> - MTU  8000: ~ 14.1 Gbps
>> - MTU 64000: ~ 18.4 Gbps
>>
>> net-next veth codebase + page_frag_cache APIs [0]:
>> ==================================================
>> - MTU  1500: ~ 6.62 Gbps
>> - MTU  8000: ~ 14.7 Gbps
>> - MTU 64000: ~ 19.7 Gbps
>>
>> xdp_generic codebase + page_frag_cache APIs (current proposed patch):
>> =====================================================================
>> - MTU  1500: ~ 6.41 Gbps
>> - MTU  8000: ~ 14.2 Gbps
>> - MTU 64000: ~ 19.8 Gbps
>>
>> xdp_generic codebase + page_frag_cache APIs [1]:
>> ================================================
> 
> This one should say page pool?
> 
>> - MTU  1500: ~ 5.75 Gbps
>> - MTU  8000: ~ 15.3 Gbps
>> - MTU 64000: ~ 21.2 Gbps
>>
>> It seems the page_pool APIs work better for the xdp_generic codebase
>> (except in the MTU 1500 case), while the page_frag_cache APIs work
>> better for the veth driver. What do you think? Am I missing something?
> 
> IDK the details of veth XDP very well, but IIUC they are pretty much
> the same. Are there any clues in perf -C 0 / 7?
> 
>> [0] Here I have just used napi_alloc_frag() instead of
>> page_pool_dev_alloc_va()/page_pool_dev_alloc() in
>> veth_convert_skb_to_xdp_buff()
>>
>> [1] I developed this PoC to use page_pool APIs for xdp_generic code:
> 
> Why not put the page pool in softnet_data?

At first I thought: cool that Jakub is suggesting softnet_data, which
would make page_pool (PP) even more central as the netstack's memory
layer.

BUT then I realized that PP has a weakness: the return/free path needs
to take a normal spin_lock, because it can be called from any CPU
(unlike the RX/alloc case).  Thus, I fear that making multiple devices
share a page_pool via softnet_data increases the chance of lock
contention when packets are "freed", i.e. returned/recycled.
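
To make the asymmetry concrete, here is a toy userspace model (NOT the
actual kernel code; the fake_pool/fake_page names are made up purely
for illustration): the alloc side is lockless because only the pool's
owner context touches it, while the return side must take a lock
because any CPU can hand pages back:

/* Toy model of the page_pool alloc/return asymmetry -- illustration
 * only, not kernel code. */
#include <pthread.h>
#include <stddef.h>

#define PP_CACHE_SIZE	128
#define PP_RING_SIZE	1024

struct fake_page { void *data; };

struct fake_pool {
	/* alloc side: only touched from the pool owner's (NAPI/RX)
	 * context, so the fast-path cache needs no lock */
	struct fake_page *cache[PP_CACHE_SIZE];
	int cache_count;

	/* return side: pages can be freed from ANY CPU, so the shared
	 * ring needs a lock (ptr_ring's producer lock in real PP) */
	pthread_spinlock_t ring_lock;
	struct fake_page *ring[PP_RING_SIZE];
	int ring_head, ring_tail;
};

/* RX/alloc fast path: lockless, owner context only */
static struct fake_page *fake_pool_alloc(struct fake_pool *pp)
{
	if (pp->cache_count)
		return pp->cache[--pp->cache_count];
	return NULL; /* real code refills from the ring / page allocator */
}

/* free/recycle path: callable from any CPU, hence the lock -- this is
 * the part that contends if many devices share one pool */
static int fake_pool_put(struct fake_pool *pp, struct fake_page *page)
{
	int ok = 0;
	int next;

	pthread_spin_lock(&pp->ring_lock);
	next = (pp->ring_head + 1) % PP_RING_SIZE;
	if (next != pp->ring_tail) {
		pp->ring[pp->ring_head] = page;
		pp->ring_head = next;
		ok = 1;
	}
	pthread_spin_unlock(&pp->ring_lock);
	return ok;
}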

--Jesper

p.s. PP has the page_pool_put_page_bulk() API, but currently only XDP
(NIC drivers) leverages it.
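
Continuing the toy model above (again just a sketch with made-up
names, not the real API): a bulk put takes the ring lock once per
batch instead of once per page, which is roughly what bulking buys
the XDP return path.

/* Toy bulk return: one lock round-trip per batch of pages. */
static int fake_pool_put_bulk(struct fake_pool *pp,
			      struct fake_page **pages, int count)
{
	int i, done = 0;

	pthread_spin_lock(&pp->ring_lock);
	for (i = 0; i < count; i++) {
		int next = (pp->ring_head + 1) % PP_RING_SIZE;

		if (next == pp->ring_tail)
			break;	/* ring full; caller must free the rest */
		pp->ring[pp->ring_head] = pages[i];
		pp->ring_head = next;
		done++;
	}
	pthread_spin_unlock(&pp->ring_lock);
	return done;
}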


Thread overview: 14+ messages
2023-12-01 13:48 [PATCH v3 net-next 0/2] add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-01 13:48 ` [PATCH v3 net-next 1/2] xdp: rely on skb pointer reference in do_xdp_generic and netif_receive_generic_xdp Lorenzo Bianconi
2023-12-01 13:48 ` [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-02  3:48   ` Jakub Kicinski
2023-12-04 15:43     ` Lorenzo Bianconi
2023-12-04 20:01       ` Jakub Kicinski
2023-12-05 23:08         ` Lorenzo Bianconi
2023-12-05 23:58           ` Jakub Kicinski
2023-12-06 12:41             ` Jesper Dangaard Brouer [this message]
2023-12-06 13:51               ` Lorenzo Bianconi
2023-12-06 16:03               ` Jakub Kicinski
2023-12-09 19:23                 ` Lorenzo Bianconi
2023-12-11 17:00                   ` Jakub Kicinski
2023-12-12  8:36                     ` Paolo Abeni
