From: Furong Xu <0x1207@gmail.com>
To: Ido Schimmel <idosch@idosch.org>
Cc: Andrew Lunn <andrew@lunn.ch>, Brad Griffis <bgriffis@nvidia.com>,
	Jon Hunter <jonathanh@nvidia.com>,
	netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Joe Damato <jdamato@fastly.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	xfr@outlook.com,
	"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>
Subject: Re: [PATCH net-next v3 1/4] net: stmmac: Switch to zero-copy in non-XDP RX path
Date: Sat, 25 Jan 2025 22:43:42 +0800	[thread overview]
Message-ID: <20250125224342.00006ced@gmail.com> (raw)
In-Reply-To: <Z5S69kb7Qz_QZqOh@shredder>

Hi Ido

On Sat, 25 Jan 2025 12:20:38 +0200, Ido Schimmel wrote:

> On Fri, Jan 24, 2025 at 10:42:56AM +0800, Furong Xu wrote:
> > On Thu, 23 Jan 2025 22:48:42 +0100, Andrew Lunn <andrew@lunn.ch>
> > wrote: 
> > > > Just to clarify, the patch that you had us try was not intended
> > > > as an actual fix, correct? It was only for diagnostic purposes,
> > > > i.e. to see if there is some kind of cache coherence issue,
> > > > which seems to be the case?  So perhaps the only fix needed is
> > > > to add dma-coherent to our device tree?    
> > > 
> > > That sounds quite error prone. How many other DT blobs are
> > > missing the property? If the memory should be coherent, i would
> > > expect the driver to allocate coherent memory. Or the driver
> > > needs to handle non-coherent memory and add the necessary
> > > flush/invalidates etc.  
> > 
> > stmmac driver does the necessary cache flush/invalidates to
> > maintain cache lines explicitly.  
> 
> Given the problem happens when the kernel performs syncing, is it
> possible that there is a problem with how the syncing is performed?
> 
> I am not familiar with this driver, but it seems to allocate multiple
> buffers per packet when split header is enabled and these buffers are
> allocated from the same page pool (see stmmac_init_rx_buffers()).
> Despite that, the driver is creating the page pool with a non-zero
> offset (see __alloc_dma_rx_desc_resources()) to avoid syncing the
> headroom, which is only present in the head buffer.
> 
> I asked Thierry to test the following patch [1] and initial testing
> seems OK. He also confirmed that "SPH feature enabled" shows up in the
> kernel log.
> BTW, the commit that added split header support (67afd6d1cfdf0) says
> that it "reduces CPU usage because without the feature all the entire
> packet is memcpy'ed, while that with the feature only the header is".
> This is no longer correct after your patch, so is there still value in
> the split header feature? With two large buffers being allocated from

Thanks for these great insights!

Yes, when the SPH feature is enabled, the offset is no longer correct
after my patch: pp_params.offset should be updated to match the offset
of the split payload.

But I would like to keep pp_params.max_len at dma_conf->dma_buf_sz,
since the sizes of both the header and the payload are limited to
dma_conf->dma_buf_sz by the DMA engine, so no more than
dma_conf->dma_buf_sz bytes will ever be written into a page buffer.
So my patch would look like [2]:

BTW, the split header feature is very useful in certain cases, so the
stmmac driver should keep supporting it.

[2]
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index edbf8994455d..def0d893efbb 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2091,7 +2091,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
        pp_params.nid = dev_to_node(priv->device);
        pp_params.dev = priv->device;
        pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
-       pp_params.offset = stmmac_rx_offset(priv);
+       pp_params.offset = priv->sph ? 0 : stmmac_rx_offset(priv);
        pp_params.max_len = dma_conf->dma_buf_sz;

        rx_q->page_pool = page_pool_create(&pp_params);
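
For reference, the reason the offset matters is that the page_pool core
only syncs the window [offset, offset + max_len) of a buffer when it is
handed back to the device. A rough paraphrase of
page_pool_dma_sync_for_device() in net/core/page_pool.c (simplified, not
verbatim; the helper name below is only for illustration):

static void pp_sync_for_device_sketch(struct page_pool *pool,
				      struct page *page,
				      unsigned int dma_sync_size)
{
	/* Sync only [pool->p.offset, pool->p.offset + max_len) back to
	 * the device; bytes outside this window (e.g. unused headroom)
	 * are left to the driver.
	 */
	dma_addr_t dma_addr = page_pool_get_dma_addr(page);

	dma_sync_size = min(dma_sync_size, pool->p.max_len);
	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
					 pool->p.offset, dma_sync_size,
					 pool->p.dma_dir);
}

With SPH enabled the DMA engine writes the split payload starting at
byte 0 of its page buffer, so keeping the offset at stmmac_rx_offset()
would leave the first bytes of the payload outside the synced window,
which is what the one-line change above avoids.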


Thread overview: 35+ messages
2025-01-15  3:27 [PATCH net-next v3 0/4] net: stmmac: RX performance improvement Furong Xu
2025-01-15  3:27 ` [PATCH net-next v3 1/4] net: stmmac: Switch to zero-copy in non-XDP RX path Furong Xu
2025-01-15 16:58   ` Larysa Zaremba
2025-01-16  2:05   ` Yanteng Si
2025-01-23 14:06   ` Jon Hunter
2025-01-23 16:35     ` Furong Xu
2025-01-23 19:53       ` Brad Griffis
2025-01-23 21:48         ` Andrew Lunn
2025-01-24  2:42           ` Furong Xu
2025-01-24 13:15             ` Thierry Reding
2025-01-28 20:04               ` Lucas Stach
2025-01-25 10:20             ` Ido Schimmel
2025-01-25 14:43               ` Furong Xu [this message]
2025-01-26  8:41                 ` Ido Schimmel
2025-01-26 10:37                   ` Furong Xu
2025-01-26 11:35                     ` Ido Schimmel
2025-01-26 12:56                       ` Furong Xu
2025-01-25 15:03               ` Furong Xu
2025-01-25 19:08                 ` Andrew Lunn
2025-01-26  2:39                   ` Furong Xu
2025-01-27 13:28                 ` Thierry Reding
2025-01-29 14:51                   ` Jon Hunter
2025-02-07  9:07                     ` Furong Xu
2025-02-07 13:42                       ` Jon Hunter
2025-01-24  1:53         ` Furong Xu
2025-01-24 15:14           ` Andrew Lunn
2025-01-15  3:27 ` [PATCH net-next v3 2/4] net: stmmac: Set page_pool_params.max_len to a precise size Furong Xu
2025-01-15 10:07   ` Yanteng Si
2025-01-15  3:27 ` [PATCH net-next v3 3/4] net: stmmac: Optimize cache prefetch in RX path Furong Xu
2025-01-15 16:24   ` Yanteng Si
2025-01-15  3:27 ` [PATCH net-next v3 4/4] net: stmmac: Convert prefetch() to net_prefetch() for received frames Furong Xu
2025-01-15 16:33   ` Yanteng Si
2025-01-15 16:35   ` Larysa Zaremba
2025-01-15 17:35   ` Joe Damato
2025-01-16 11:40 ` [PATCH net-next v3 0/4] net: stmmac: RX performance improvement patchwork-bot+netdevbpf
