From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>,
	Yunsheng Lin <linyunsheng@huawei.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org, davem@davemloft.net,
	edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
	ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org,
	john.fastabend@gmail.com
Subject: Re: [RFC net-next] net: veth: reduce page_pool memory footprint using half page per-buffer
Date: Wed, 17 May 2023 00:52:25 +0200	[thread overview]
Message-ID: <ZGQJKRfuf4+av/MD@lore-desk> (raw)
In-Reply-To: <ZGIvbfPd46EIVZf/@boxer>

> On Mon, May 15, 2023 at 01:24:20PM +0200, Lorenzo Bianconi wrote:
> > > On 2023/5/12 21:08, Lorenzo Bianconi wrote:
> > > > In order to reduce the page_pool memory footprint, rely on the
> > > > page_pool_dev_alloc_frag routine and reduce the buffer size
> > > > (VETH_PAGE_POOL_FRAG_SIZE) to PAGE_SIZE / 2 so that one page is consumed
> > > 
> > > Is there any performance improvement besides the memory saving? As it
> > > should reduce TLB misses, I wonder whether the TLB miss reduction can
> > > even out the cost of the extra frag reference count handling needed for
> > > frag support?
> > 
> > Reducing the requested headroom to 192 (from 256), we get a nice improvement
> > in the 1500B frame case, while throughput is mostly unchanged for paged skbs
> > (e.g. MTU 8000B).
> 
> Can you define 'nice improvement' ? ;)
> Show us numbers or improvement in %.

I am testing this RFC patch in the scenario reported below:

iperf tcp tx --> veth0 --> veth1 (xdp_pass) --> iperf tcp rx
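
(For reference, the xdp_pass step is just a pass-through XDP program attached
to veth1, i.e. something equivalent to the sketch below; the actual object
file/loader I used is not included here.)

/* minimal XDP program that simply passes every frame up the stack */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_prog(struct xdp_md *ctx)
{
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";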

- 6.4.0-rc1 net-next:
  MTU 1500B: ~ 7.07 Gbps
  MTU 8000B: ~ 14.7 Gbps

- 6.4.0-rc1 net-next + page_pool frag support in veth:
  MTU 1500B: ~ 8.57 Gbps
  MTU 8000B: ~ 14.5 Gbps

Side note: there seems to be a throughput regression between 6.2.15 and
6.4.0-rc1 net-next (even without the latest veth page_pool patches) in the
scenario above, but I have not looked into it yet.

- 6.2.15:
  MTU 1500B: ~ 7.91 Gbps
  MTU 8000B: ~ 14.1 Gbps

- 6.4.0-rc1 net-next w/o commits [0], [1], [2]:
  MTU 1500B: ~ 6.38 Gbps
  MTU 8000B: ~ 13.2 Gbps
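
Going back to the RFC itself: just for clarity, the frag-based allocation
boils down to something like the sketch below (illustrative only, not the
actual diff; the helper name is made up):

#include <linux/mm.h>
#include <net/page_pool.h>

#define VETH_PAGE_POOL_FRAG_SIZE	(PAGE_SIZE / 2)
#define VETH_XDP_PACKET_HEADROOM	192	/* reduced from XDP_PACKET_HEADROOM (256) */

/* hypothetical helper, not the code from the patch */
static void *veth_pp_alloc_half_page(struct page_pool *pool,
				     unsigned int *offset)
{
	struct page *page;

	/* two of these buffers fit in a single page */
	page = page_pool_dev_alloc_frag(pool, offset,
					VETH_PAGE_POOL_FRAG_SIZE);
	if (!page)
		return NULL;

	/* headroom (192B) + linear data must fit in PAGE_SIZE / 2 */
	return page_address(page) + *offset;
}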

Regards,
Lorenzo

[0] 0ebab78cbcbf  net: veth: add page_pool for page recycling
[1] 4fc418053ec7  net: veth: add page_pool stats
[2] 9d142ed484a3  net: veth: rely on napi_build_skb in veth_convert_skb_to_xdp_buff

> 
> > 
> > > 
> > > > for two 1500B frames. Reduce VETH_XDP_PACKET_HEADROOM from 256
> > > > (XDP_PACKET_HEADROOM) to 192 so that max_head_size fits in
> > > > VETH_PAGE_POOL_FRAG_SIZE. Please note that, with the default
> > > > CONFIG_MAX_SKB_FRAGS=17, the maximum supported MTU is now reduced
> > > > to 36350B.
> > > 
> > > Maybe we don't need to limit the frag size to VETH_PAGE_POOL_FRAG_SIZE,
> > > and could instead use a different frag size depending on the MTU or
> > > packet size?
> > > 
> > > Perhaps page_pool_dev_alloc_frag() could also be improved to return a
> > > non-frag page if the requested frag size is larger than a specified
> > > threshold. I will try to implement it if the above idea makes sense.
> > > 
> > 
> > Since there is no significant difference between the full-page and
> > fragmented-page implementations when the MTU exceeds the page boundary,
> > is it worth doing so (at least for the veth use-case)?
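
As an aside, my reading of the page_pool_dev_alloc_frag() fallback idea above
is roughly the following (purely illustrative; the wrapper name and the
threshold are hypothetical):

#include <net/page_pool.h>

/* hypothetical wrapper, not an existing API */
static struct page *
page_pool_dev_alloc_frag_or_page(struct page_pool *pool,
				 unsigned int *offset, unsigned int size)
{
	/* fall back to a full (non-frag) page when the requested size
	 * would not leave room for a second frag in the same page
	 */
	if (size > PAGE_SIZE / 2) {
		*offset = 0;
		return page_pool_dev_alloc_pages(pool);
	}

	return page_pool_dev_alloc_frag(pool, offset, size);
}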
> > 
> > Regards,
> > Lorenzo
> > 
> 
> 

Thread overview: 14+ messages
2023-05-12 13:08 [RFC net-next] net: veth: reduce page_pool memory footprint using half page per-buffer Lorenzo Bianconi
2023-05-12 13:43 ` Alexander Lobakin
2023-05-12 14:14   ` Lorenzo Bianconi
2023-05-15 16:36     ` Alexander Lobakin
2023-05-15 11:10 ` Yunsheng Lin
2023-05-15 11:24   ` Lorenzo Bianconi
2023-05-15 13:11     ` Maciej Fijalkowski
2023-05-16 22:52       ` Lorenzo Bianconi [this message]
2023-05-17  9:41         ` Yunsheng Lin
2023-05-17 14:17           ` Lorenzo Bianconi
2023-05-18  1:16             ` Yunsheng Lin
2023-05-17 14:58         ` Jakub Kicinski
2023-05-16 12:55     ` Yunsheng Lin
2023-05-16 16:11 ` Jesper Dangaard Brouer
