netdev.vger.kernel.org archive mirror
From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Elad Yifee <eladwf@gmail.com>
Cc: Felix Fietkau <nbd@nbd.name>, Sean Wang <sean.wang@mediatek.com>,
	Mark Lee <Mark-MC.Lee@mediatek.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	AngeloGioacchino Del Regno
	<angelogioacchino.delregno@collabora.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org,
	Daniel Golle <daniel@makrotopia.org>,
	Joe Damato <jdamato@fastly.com>
Subject: Re: [PATCH net-next v2 0/2] net: ethernet: mtk_eth_soc: improve RX performance
Date: Mon, 29 Jul 2024 21:10:17 +0200	[thread overview]
Message-ID: <ZqfpGVhBe3zt0x-K@lore-desk> (raw)
In-Reply-To: <20240729183038.1959-1-eladwf@gmail.com>


> This small series includes two short and simple patches to improve RX performance
> on this driver.

Hi Elad,

What is the chip revision you are running?
If you are using a device that does not support HW-LRO (e.g. MT7986 or
MT7988), I guess we can try to use the page_pool_dev_alloc_frag() APIs and
request a 2048B buffer. Doing so, we can use a single page for two
rx buffers, improving recycling with page_pool. What do you think?
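
To illustrate the idea, here is a rough sketch (kernel context, not runnable
standalone). The function name mtk_rx_refill_frag() is invented for this
example; only page_pool_dev_alloc_frag(), page_pool_get_dma_addr() and
page_address() are real page_pool/mm helpers. With a 2048B frag size, two
RX buffers share one 4 KiB page, and the page returns to the pool only
after both halves are released:

```c
/* Hypothetical refill helper: carve a 2048B fragment out of a shared
 * page instead of dedicating a full page per RX buffer. */
static void *mtk_rx_refill_frag(struct page_pool *pp, dma_addr_t *dma)
{
	unsigned int offset;
	struct page *page;

	/* Hands out sub-page fragments; the pool tracks per-page refcounts
	 * so the page is recycled once every fragment is freed. */
	page = page_pool_dev_alloc_frag(pp, &offset, 2048);
	if (!page)
		return NULL;

	*dma = page_pool_get_dma_addr(page) + offset;
	return page_address(page) + offset;
}
```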

Regards,
Lorenzo

> 
> iperf3 result without these patches:
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-1.00   sec   563 MBytes  4.72 Gbits/sec
> 	[  4]   1.00-2.00   sec   563 MBytes  4.73 Gbits/sec
> 	[  4]   2.00-3.00   sec   552 MBytes  4.63 Gbits/sec
> 	[  4]   3.00-4.00   sec   561 MBytes  4.70 Gbits/sec
> 	[  4]   4.00-5.00   sec   562 MBytes  4.71 Gbits/sec
> 	[  4]   5.00-6.00   sec   565 MBytes  4.74 Gbits/sec
> 	[  4]   6.00-7.00   sec   563 MBytes  4.72 Gbits/sec
> 	[  4]   7.00-8.00   sec   565 MBytes  4.74 Gbits/sec
> 	[  4]   8.00-9.00   sec   562 MBytes  4.71 Gbits/sec
> 	[  4]   9.00-10.00  sec   558 MBytes  4.68 Gbits/sec
> 	- - - - - - - - - - - - - - - - - - - - - - - - -
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  sender
> 	[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  receiver
> 
> iperf3 result with "use prefetch methods" patch:
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-1.00   sec   598 MBytes  5.02 Gbits/sec
> 	[  4]   1.00-2.00   sec   588 MBytes  4.94 Gbits/sec
> 	[  4]   2.00-3.00   sec   592 MBytes  4.97 Gbits/sec
> 	[  4]   3.00-4.00   sec   594 MBytes  4.98 Gbits/sec
> 	[  4]   4.00-5.00   sec   590 MBytes  4.95 Gbits/sec
> 	[  4]   5.00-6.00   sec   594 MBytes  4.98 Gbits/sec
> 	[  4]   6.00-7.00   sec   594 MBytes  4.98 Gbits/sec
> 	[  4]   7.00-8.00   sec   593 MBytes  4.98 Gbits/sec
> 	[  4]   8.00-9.00   sec   593 MBytes  4.98 Gbits/sec
> 	[  4]   9.00-10.00  sec   594 MBytes  4.98 Gbits/sec
> 	- - - - - - - - - - - - - - - - - - - - - - - - -
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-10.00  sec  5.79 GBytes  4.98 Gbits/sec                  sender
> 	[  4]   0.00-10.00  sec  5.79 GBytes  4.98 Gbits/sec                  receiver
> 
> iperf3 result with "use PP exclusively for XDP programs" patch:
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-1.00   sec   635 MBytes  5.33 Gbits/sec
> 	[  4]   1.00-2.00   sec   636 MBytes  5.33 Gbits/sec
> 	[  4]   2.00-3.00   sec   637 MBytes  5.34 Gbits/sec
> 	[  4]   3.00-4.00   sec   636 MBytes  5.34 Gbits/sec
> 	[  4]   4.00-5.00   sec   637 MBytes  5.34 Gbits/sec
> 	[  4]   5.00-6.00   sec   637 MBytes  5.35 Gbits/sec
> 	[  4]   6.00-7.00   sec   637 MBytes  5.34 Gbits/sec
> 	[  4]   7.00-8.00   sec   636 MBytes  5.33 Gbits/sec
> 	[  4]   8.00-9.00   sec   634 MBytes  5.32 Gbits/sec
> 	[  4]   9.00-10.00  sec   637 MBytes  5.34 Gbits/sec
> 	- - - - - - - - - - - - - - - - - - - - - - - - -
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-10.00  sec  6.21 GBytes  5.34 Gbits/sec                  sender
> 	[  4]   0.00-10.00  sec  6.21 GBytes  5.34 Gbits/sec                  receiver
> 
> iperf3 result with both patches:
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-1.00   sec   652 MBytes  5.47 Gbits/sec
> 	[  4]   1.00-2.00   sec   653 MBytes  5.47 Gbits/sec
> 	[  4]   2.00-3.00   sec   654 MBytes  5.48 Gbits/sec
> 	[  4]   3.00-4.00   sec   654 MBytes  5.49 Gbits/sec
> 	[  4]   4.00-5.00   sec   653 MBytes  5.48 Gbits/sec
> 	[  4]   5.00-6.00   sec   653 MBytes  5.48 Gbits/sec
> 	[  4]   6.00-7.00   sec   653 MBytes  5.48 Gbits/sec
> 	[  4]   7.00-8.00   sec   653 MBytes  5.48 Gbits/sec
> 	[  4]   8.00-9.00   sec   653 MBytes  5.48 Gbits/sec
> 	[  4]   9.00-10.00  sec   654 MBytes  5.48 Gbits/sec
> 	- - - - - - - - - - - - - - - - - - - - - - - - -
> 	[ ID] Interval           Transfer     Bandwidth
> 	[  4]   0.00-10.00  sec  6.38 GBytes  5.48 Gbits/sec                  sender
> 	[  4]   0.00-10.00  sec  6.38 GBytes  5.48 Gbits/sec                  receiver
> 
> About 16% more packets/sec without an XDP program loaded,
> and about 5% more packets/sec when using page_pool.
> Tested on a Banana Pi BPI-R4 (MT7988A).
> 
> ---
> Technically, this is version 2 of the “use prefetch methods” patch.
> Initially, I submitted it as a single patch for review (RFC),
> but later I decided to add a second patch, resulting in this series.
> Changes in v2:
> 	- Add "use PP exclusively for XDP programs" patch and create this series
> ---
> Elad Yifee (2):
>   net: ethernet: mtk_eth_soc: use prefetch methods
>   net: ethernet: mtk_eth_soc: use PP exclusively for XDP programs
> 
>  drivers/net/ethernet/mediatek/mtk_eth_soc.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> -- 
> 2.45.2
> 



Thread overview: 16+ messages
2024-07-29 18:29 [PATCH net-next v2 0/2] net: ethernet: mtk_eth_soc: improve RX performance Elad Yifee
2024-07-29 18:29 ` [PATCH net-next v2 1/2] net: ethernet: mtk_eth_soc: use prefetch methods Elad Yifee
2024-07-30  8:59   ` Joe Damato
2024-07-30 18:35     ` Elad Yifee
2024-08-01  7:09       ` Stefan Roese
2024-08-01 13:14         ` Joe Damato
2025-01-06 14:28   ` Shengyu Qu
2025-01-21 23:50     ` Andrew Lunn
2024-07-29 18:29 ` [PATCH net-next v2 2/2] net: ethernet: mtk_eth_soc: use PP exclusively for XDP programs Elad Yifee
2024-07-29 19:10 ` Lorenzo Bianconi [this message]
2024-07-30  5:29   ` [PATCH net-next v2 0/2] net: ethernet: mtk_eth_soc: improve RX performance Elad Yifee
2024-08-01  1:37     ` Jakub Kicinski
2024-08-01  3:53       ` Elad Yifee
2024-08-01  7:30         ` Lorenzo Bianconi
2024-08-01  8:01           ` Elad Yifee
2024-08-01  8:15             ` Lorenzo Bianconi
