From: Andrew Lunn <andrew@lunn.ch>
To: Shenwei Wang <shenwei.wang@nxp.com>
Cc: "David S . Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Wei Fang <wei.fang@nxp.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	imx@lists.linux.dev
Subject: Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
Date: Fri, 30 Sep 2022 21:52:02 +0200
Message-ID: <YzdI4mDXCKuI/58N@lunn.ch>
In-Reply-To: <20220930193751.1249054-1-shenwei.wang@nxp.com>

On Fri, Sep 30, 2022 at 02:37:51PM -0500, Shenwei Wang wrote:
> This patch optimizes RX buffer management by using the page pool.
> The purpose of this change is to prepare for the upcoming XDP
> support. The current driver uses one frame per page for easy
> management.
> 
> The following results compare the page pool implementation against
> the original (non page pool) implementation.
> 
>  --- Page Pool implementation ----
> 
> shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [  1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  1] 0.0000-1.0000 sec   111 MBytes   933 Mbits/sec
> [  1] 1.0000-2.0000 sec   111 MBytes   934 Mbits/sec
> [  1] 2.0000-3.0000 sec   112 MBytes   935 Mbits/sec
> [  1] 3.0000-4.0000 sec   111 MBytes   933 Mbits/sec
> [  1] 4.0000-5.0000 sec   111 MBytes   934 Mbits/sec
> [  1] 5.0000-6.0000 sec   111 MBytes   933 Mbits/sec
> [  1] 6.0000-7.0000 sec   111 MBytes   931 Mbits/sec
> [  1] 7.0000-8.0000 sec   112 MBytes   935 Mbits/sec
> [  1] 8.0000-9.0000 sec   111 MBytes   933 Mbits/sec
> [  1] 9.0000-10.0000 sec   112 MBytes   935 Mbits/sec
> [  1] 0.0000-10.0077 sec  1.09 GBytes   933 Mbits/sec
> 
>  --- Non Page Pool implementation ----
> 
> shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [  1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  1] 0.0000-1.0000 sec   104 MBytes   868 Mbits/sec
> [  1] 1.0000-2.0000 sec   105 MBytes   878 Mbits/sec
> [  1] 2.0000-3.0000 sec   105 MBytes   881 Mbits/sec
> [  1] 3.0000-4.0000 sec   105 MBytes   879 Mbits/sec
> [  1] 4.0000-5.0000 sec   105 MBytes   878 Mbits/sec
> [  1] 5.0000-6.0000 sec   105 MBytes   878 Mbits/sec
> [  1] 6.0000-7.0000 sec   104 MBytes   875 Mbits/sec
> [  1] 7.0000-8.0000 sec   104 MBytes   875 Mbits/sec
> [  1] 8.0000-9.0000 sec   104 MBytes   873 Mbits/sec
> [  1] 9.0000-10.0000 sec   104 MBytes   875 Mbits/sec
> [  1] 0.0000-10.0073 sec  1.02 GBytes   875 Mbits/sec

What SoC? As I keep saying, the FEC is used in a lot of different
SoCs, and you need to show this does not cause any regressions on the
older SoCs. There are probably a lot more imx5 and imx6 devices out in
the wild than imx8, which is what I guess you are testing on. Mainline
needs to work well on them all, even if NXP no longer cares about the
older SoCs.

    Andrew
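
For context, the page pool referenced in the patch description is the
kernel's generic recycling allocator for DMA-mapped RX pages; the
iperf runs above show it buying roughly 7% of throughput (933 vs
875 Mbit/s). A minimal sketch of the "one frame per page" pattern the
description refers to is below; the function names and the ring_size
parameter are illustrative assumptions, not the actual FEC patch code.

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <net/page_pool.h>

/* Sketch only: create a page pool that hands out DMA-mapped, order-0
 * pages, i.e. one full page per received frame. Names here are
 * illustrative, not taken from the FEC patch.
 */
static struct page_pool *rx_create_page_pool(struct device *dev,
					     unsigned int ring_size)
{
	struct page_pool_params pp = {
		.order		= 0,			/* one page per frame */
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages for DMA */
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp);		/* ERR_PTR() on failure */
}

/* Refilling one RX descriptor: the page arrives already DMA-mapped,
 * so the hot path skips dma_map_page()/dma_unmap_page() entirely.
 */
static struct page *rx_refill_one(struct page_pool *pool, dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (page)
		*dma = page_pool_get_dma_addr(page);
	return page;
}

On the completion path the driver would hand each page back with
page_pool_put_full_page() (or page_pool_recycle_direct() from NAPI
context) instead of freeing it; avoiding the map/unmap and alloc/free
cycle on every frame is where the gain over the stock page allocator
comes from.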


Thread overview: 9+ messages
2022-09-30 19:37 [PATCH 1/1] net: fec: using page pool to manage RX buffers Shenwei Wang
2022-09-30 19:52 ` Andrew Lunn [this message]
2022-09-30 19:58   ` [EXT] " Shenwei Wang
2022-09-30 20:01     ` Andrew Lunn
2022-09-30 20:08       ` Shenwei Wang
2022-09-30 20:05 ` Andrew Lunn
2022-09-30 20:07   ` [EXT] " Shenwei Wang
2022-09-30 20:20     ` Andrew Lunn
2022-09-30 20:15 ` Andrew Lunn
