Date: Fri, 30 Sep 2022 21:52:02 +0200
From: Andrew Lunn
To: Shenwei Wang
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Wei Fang, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, imx@lists.linux.dev
Subject: Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
In-Reply-To: <20220930193751.1249054-1-shenwei.wang@nxp.com>
References: <20220930193751.1249054-1-shenwei.wang@nxp.com>

On Fri, Sep 30, 2022 at 02:37:51PM -0500, Shenwei Wang wrote:
> This patch optimizes the RX buffer management by using the page
> pool. The purpose of this change is to prepare for the upcoming
> XDP support. The current driver uses one frame per page for easy
> management.
>
> The following results compare the page pool implementation
> with the original (non page pool) implementation.
>
> --- Page Pool implementation ----
>
> shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [  1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
> [ ID] Interval            Transfer     Bandwidth
> [  1] 0.0000-1.0000  sec   111 MBytes  933 Mbits/sec
> [  1] 1.0000-2.0000  sec   111 MBytes  934 Mbits/sec
> [  1] 2.0000-3.0000  sec   112 MBytes  935 Mbits/sec
> [  1] 3.0000-4.0000  sec   111 MBytes  933 Mbits/sec
> [  1] 4.0000-5.0000  sec   111 MBytes  934 Mbits/sec
> [  1] 5.0000-6.0000  sec   111 MBytes  933 Mbits/sec
> [  1] 6.0000-7.0000  sec   111 MBytes  931 Mbits/sec
> [  1] 7.0000-8.0000  sec   112 MBytes  935 Mbits/sec
> [  1] 8.0000-9.0000  sec   111 MBytes  933 Mbits/sec
> [  1] 9.0000-10.0000 sec   112 MBytes  935 Mbits/sec
> [  1] 0.0000-10.0077 sec  1.09 GBytes  933 Mbits/sec
>
> --- Non Page Pool implementation ----
>
> shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [  1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
> [ ID] Interval            Transfer     Bandwidth
> [  1] 0.0000-1.0000  sec   104 MBytes  868 Mbits/sec
> [  1] 1.0000-2.0000  sec   105 MBytes  878 Mbits/sec
> [  1] 2.0000-3.0000  sec   105 MBytes  881 Mbits/sec
> [  1] 3.0000-4.0000  sec   105 MBytes  879 Mbits/sec
> [  1] 4.0000-5.0000  sec   105 MBytes  878 Mbits/sec
> [  1] 5.0000-6.0000  sec   105 MBytes  878 Mbits/sec
> [  1] 6.0000-7.0000  sec   104 MBytes  875 Mbits/sec
> [  1] 7.0000-8.0000  sec   104 MBytes  875 Mbits/sec
> [  1] 8.0000-9.0000  sec   104 MBytes  873 Mbits/sec
> [  1] 9.0000-10.0000 sec   104 MBytes  875 Mbits/sec
> [  1] 0.0000-10.0073 sec  1.02 GBytes  875 Mbits/sec

What SoC? As I keep saying, the FEC is used in a lot of different
SoCs, and you need to show this does not cause any regressions in
the older SoCs. There are probably a lot more imx5 and imx6 devices
out in the wild than imx8, which is what I guess you are testing
on. Mainline needs to work well on them all, even if NXP no longer
cares about the older SoCs.

	Andrew
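[Editor's note: for readers unfamiliar with the change under review, the
kernel's generic page_pool API recycles RX pages per ring instead of going
back to the page allocator for every frame, which is where the roughly 60
Mbit/s gain in the iperf runs above most likely comes from. The following is
a rough, non-compiling sketch of the usual wiring in a driver's RX path; the
page_pool_* names are the real generic API, but the fec_* structure and
function names are hypothetical illustrations, not code from the actual
patch.]

```
#include <net/page_pool.h>

/* One pool per RX ring; one full page per frame, as the patch describes.
 * PP_FLAG_DMA_MAP asks the pool to DMA-map pages once, at pool-insert
 * time, instead of the driver mapping every buffer on every refill.
 */
static int fec_rxq_create_page_pool(struct fec_enet_priv_rx_q *rxq, /* hypothetical */
				    struct device *dev, int ring_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,		/* one page per frame */
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	rxq->page_pool = page_pool_create(&pp_params);
	return PTR_ERR_OR_ZERO(rxq->page_pool);
}

/* RX refill: pages come from the pool, and on completion the driver
 * returns them with page_pool_put_full_page() (or hands them to the
 * stack via skbs marked for pool recycling) rather than freeing them.
 */
static struct page *fec_rxq_alloc_page(struct fec_enet_priv_rx_q *rxq) /* hypothetical */
{
	return page_pool_dev_alloc_pages(rxq->page_pool);
}
```

Whether the pool's per-ring caching behaves as well on the older, slower
SoCs Andrew mentions is exactly the open question of this thread.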