public inbox for netdev@vger.kernel.org
From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: aleksander.lobakin@intel.com, netdev@vger.kernel.org,
	davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
	lorenzo.bianconi@redhat.com, bpf@vger.kernel.org,
	hawk@kernel.org, toke@redhat.com,
	willemdebruijn.kernel@gmail.com, jasowang@redhat.com,
	sdf@google.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode
Date: Mon, 4 Dec 2023 16:43:56 +0100	[thread overview]
Message-ID: <ZW3zvEbI6o4ydM_N@lore-desk> (raw)
In-Reply-To: <20231201194829.428a96da@kernel.org>


> On Fri,  1 Dec 2023 14:48:26 +0100 Lorenzo Bianconi wrote:
> > Similar to native xdp, do not always linearize the skb in the
> > netif_receive_generic_xdp routine but create a non-linear xdp_buff to be
> > processed by the eBPF program. This allows adding multi-buffer support
> > for xdp running in generic mode.
> 
> Hm. How close is the xdp generic code to veth?

They are actually quite close; the only difference is the use of the
page_pool vs the page_frag_cache APIs.

> I wonder if it'd make sense to create a page pool instance for each
> core, we could then pass it into a common "reallocate skb into a
> page-pool backed, fragged form" helper. Common between this code
> and veth? Perhaps we could even get rid of the veth page pools
> and use the per cpu pools there?

yes, I was actually thinking about that.
I ran some preliminary tests to check whether we would be introducing any
performance penalties.
My setup relies on a couple of veth pairs and an eBPF program that performs
XDP_REDIRECT from one pair to the other. I am running the program in XDP
driver mode (not generic mode).

v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01    v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11

v00: iperf3 client
v11: iperf3 server
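For reference, a setup along these lines can be scripted as below. The exact
commands are my reconstruction from the diagram, not taken from the mail, and
the xdp_redirect.o object/section names are placeholders for the actual eBPF
program; DRY_RUN=1 (the default here) only prints the commands, so the script
can be inspected without root:

```shell
#!/bin/sh
# Sketch of the veth/netns topology used for the XDP_REDIRECT test.
# Namespace and interface names follow the diagram above.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

for ns in ns0 ns1 ns2; do
    run ip netns add "$ns"
done

# v00 (ns0) <-> v01 (ns1), v10 (ns1) <-> v11 (ns2)
run ip link add v00 type veth peer name v01
run ip link add v10 type veth peer name v11
run ip link set v00 netns ns0
run ip link set v01 netns ns1
run ip link set v10 netns ns1
run ip link set v11 netns ns2

run ip -n ns0 addr add 192.168.0.1/24 dev v00
run ip -n ns1 addr add 192.168.0.2/24 dev v01
run ip -n ns1 addr add 192.168.1.1/24 dev v10
run ip -n ns2 addr add 192.168.1.2/24 dev v11

run ip -n ns0 link set v00 up
run ip -n ns1 link set v01 up
run ip -n ns1 link set v10 up
run ip -n ns2 link set v11 up

# Route the two edge namespaces through the middle one.
run ip -n ns0 route add default via 192.168.0.2
run ip -n ns2 route add default via 192.168.1.1

# Attach the redirect program in native/driver mode on the middle hops
# (object and section names are placeholders, not from the mail).
run ip -n ns1 link set v01 xdp obj xdp_redirect.o sec xdp
run ip -n ns1 link set v10 xdp obj xdp_redirect.o sec xdp

# iperf3: server in ns2 behind v11, client in ns0 behind v00.
run ip netns exec ns2 iperf3 -s -D
run ip netns exec ns0 iperf3 -c 192.168.1.2
```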

I ran the test with different MTU values (1500B, 8KB, 64KB):

net-next veth codebase:
=======================
- MTU  1500: iperf3 ~  4.37Gbps
- MTU  8000: iperf3 ~  9.75Gbps
- MTU 64000: iperf3 ~ 11.24Gbps

net-next veth codebase + page_frag_cache instead of page_pool:
==============================================================
- MTU  1500: iperf3 ~  4.99Gbps (+14%)
- MTU  8000: iperf3 ~  8.50Gbps (-12%)
- MTU 64000: iperf3 ~ 11.9Gbps  ( +6%)

It seems there is no clear winner between page_pool and page_frag_cache
here. What do you think?
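As a sanity check, the relative deltas above follow from the raw Gbps figures
(this awk one-liner is purely illustrative; the throughput numbers are the
ones from the runs quoted above):

```shell
# Recompute the relative deltas between the two codebases
# (baseline = net-next veth, candidate = page_frag_cache variant).
awk 'BEGIN {
    # MTU, baseline Gbps, candidate Gbps
    print_delta(1500,   4.37,  4.99);
    print_delta(8000,   9.75,  8.50);
    print_delta(64000, 11.24, 11.90);
}
function print_delta(mtu, base, cand) {
    printf "MTU %5d: %+.1f%%\n", mtu, (cand - base) / base * 100;
}'
```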

Regards,
Lorenzo


Thread overview: 14+ messages
2023-12-01 13:48 [PATCH v3 net-next 0/2] add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-01 13:48 ` [PATCH v3 net-next 1/2] xdp: rely on skb pointer reference in do_xdp_generic and netif_receive_generic_xdp Lorenzo Bianconi
2023-12-01 13:48 ` [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode Lorenzo Bianconi
2023-12-02  3:48   ` Jakub Kicinski
2023-12-04 15:43     ` Lorenzo Bianconi [this message]
2023-12-04 20:01       ` Jakub Kicinski
2023-12-05 23:08         ` Lorenzo Bianconi
2023-12-05 23:58           ` Jakub Kicinski
2023-12-06 12:41             ` Jesper Dangaard Brouer
2023-12-06 13:51               ` Lorenzo Bianconi
2023-12-06 16:03               ` Jakub Kicinski
2023-12-09 19:23                 ` Lorenzo Bianconi
2023-12-11 17:00                   ` Jakub Kicinski
2023-12-12  8:36                     ` Paolo Abeni
