netdev.vger.kernel.org archive mirror
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Furong Xu <0x1207@gmail.com>
Cc: <netdev@vger.kernel.org>,
	<linux-stm32@st-md-mailman.stormreply.com>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>, <xfr@outlook.com>
Subject: Re: [PATCH net-next v1 3/3] net: stmmac: Optimize cache prefetch in RX path
Date: Mon, 13 Jan 2025 13:10:46 +0100	[thread overview]
Message-ID: <f20c339f-5286-477c-9255-e2e1fbeba57c@intel.com> (raw)
In-Reply-To: <b992690bf7197e4b967ed9f7a0422edae50129f2.1736500685.git.0x1207@gmail.com>

From: Furong Xu <0x1207@gmail.com>
Date: Fri, 10 Jan 2025 17:53:59 +0800

> The current code prefetches cache lines for the received frame first and
> only then calls dma_sync_single_for_cpu() on that frame, which is wrong:
> the cache prefetch should be triggered after dma_sync_single_for_cpu().
> 
> This patch brings a ~2.8% driver performance improvement in a TCP RX
> throughput test with the iPerf tool on a single isolated Cortex-A65 CPU
> core: 2.84 Gbits/sec increased to 2.92 Gbits/sec.
> 
> Signed-off-by: Furong Xu <0x1207@gmail.com>
> ---
>  drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index c1aeaec53b4c..1b4e8b035b1a 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -5497,10 +5497,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>  
>  		/* Buffer is good. Go on. */
>  
> -		prefetch(page_address(buf->page) + buf->page_offset);
> -		if (buf->sec_page)
> -			prefetch(page_address(buf->sec_page));
> -
>  		buf1_len = stmmac_rx_buf1_len(priv, p, status, len);
>  		len += buf1_len;
>  		buf2_len = stmmac_rx_buf2_len(priv, p, status, len);
> @@ -5522,6 +5518,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>  
>  			dma_sync_single_for_cpu(priv->device, buf->addr,
>  						buf1_len, dma_dir);
> +			prefetch(page_address(buf->page) + buf->page_offset);
>  
>  			xdp_init_buff(&ctx.xdp, buf_sz, &rx_q->xdp_rxq);
>  			xdp_prepare_buff(&ctx.xdp, page_address(buf->page),
> @@ -5596,6 +5593,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>  		} else if (buf1_len) {
>  			dma_sync_single_for_cpu(priv->device, buf->addr,
>  						buf1_len, dma_dir);
> +			prefetch(page_address(buf->page) + buf->page_offset);
>  			skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  					buf->page, buf->page_offset, buf1_len,
>  					priv->dma_conf.dma_buf_sz);

Are you sure you need to prefetch the frags as well? I'd say this is a
waste of cycles, as the core kernel stack barely looks at the payload...
Prefetching only the header buffers would probably be enough.

> @@ -5608,6 +5606,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>  		if (buf2_len) {
>  			dma_sync_single_for_cpu(priv->device, buf->sec_addr,
>  						buf2_len, dma_dir);
> +			prefetch(page_address(buf->sec_page));
>  			skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  					buf->sec_page, 0, buf2_len,
>  					priv->dma_conf.dma_buf_sz);

Thanks,
Olek


Thread overview: 11+ messages
2025-01-10  9:53 [PATCH net-next v1 0/3] net: stmmac: RX performance improvement Furong Xu
2025-01-10  9:53 ` [PATCH net-next v1 1/3] net: stmmac: Switch to zero-copy in non-XDP RX path Furong Xu
2025-01-13  9:41   ` Yanteng Si
2025-01-13 12:03     ` Alexander Lobakin
2025-01-13 15:16       ` Yanteng Si
2025-01-13 16:48       ` Andrew Lunn
2025-01-14 17:23         ` Alexander Lobakin
2025-01-10  9:53 ` [PATCH net-next v1 2/3] net: stmmac: Set page_pool_params.max_len to a precise size Furong Xu
2025-01-10  9:53 ` [PATCH net-next v1 3/3] net: stmmac: Optimize cache prefetch in RX path Furong Xu
2025-01-13 12:10   ` Alexander Lobakin [this message]
2025-01-13 12:37     ` Furong Xu
