From: Furong Xu <0x1207@gmail.com>
To: Ido Schimmel <idosch@idosch.org>
Cc: Andrew Lunn <andrew@lunn.ch>, Brad Griffis <bgriffis@nvidia.com>,
Jon Hunter <jonathanh@nvidia.com>,
netdev@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Joe Damato <jdamato@fastly.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Maxime Coquelin <mcoquelin.stm32@gmail.com>,
xfr@outlook.com,
"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>
Subject: Re: [PATCH net-next v3 1/4] net: stmmac: Switch to zero-copy in non-XDP RX path
Date: Sun, 26 Jan 2025 18:37:14 +0800 [thread overview]
Message-ID: <20250126183714.00005068@gmail.com> (raw)
In-Reply-To: <Z5X1M0Fs-K6FkSAl@shredder>
On Sun, 26 Jan 2025 10:41:23 +0200, Ido Schimmel wrote:
> SPH is the only scenario in which the driver uses multiple buffers per
> packet?
Yes.
Jumbo mode may also use multiple buffers per packet, but those buffers
are high-order pages, each behaving just like a single page-pool page
does at a standard MTU.
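
For context, a page pool can hand out such high-order buffers directly.
A minimal sketch of the pool setup (illustrative values, not the exact
stmmac code; get_order() picks the smallest order covering the buffer):

    struct page_pool_params pp_params = { 0 };

    pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
    /* order > 0 makes every pool buffer a high-order (multi-page)
     * allocation, large enough to hold one jumbo frame */
    pp_params.order = get_order(dma_conf->dma_buf_sz);
    pp_params.pool_size = dma_conf->dma_rx_size;
    pp_params.nid = dev_to_node(priv->device);
    pp_params.dev = priv->device;
    pp_params.dma_dir = DMA_FROM_DEVICE;
    pp_params.max_len = dma_conf->dma_buf_sz;

    rx_q->page_pool = page_pool_create(&pp_params);
    if (IS_ERR(rx_q->page_pool))
        return PTR_ERR(rx_q->page_pool);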
> > pp_params.max_len = dma_conf->dma_buf_sz;
>
> Are you sure this is correct? Page pool documentation says that "For
> pages recycled on the XDP xmit and skb paths the page pool will use
> the max_len member of struct page_pool_params to decide how much of
> the page needs to be synced (starting at offset)" [1].
The page pool must sync the area of the buffer that both the DMA
engine and the CPU may touch; the remaining areas are CPU-exclusive,
so skipping the sync for them seems better.
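
That shared window is exactly what PP_FLAG_DMA_SYNC_DEV describes: on
recycle, the pool syncs only [offset, offset + max_len). Roughly what
the page pool core does internally (a paraphrased sketch of
page_pool_dma_sync_for_device(), not driver code):

    /* Sync only the region DMA may have dirtied, starting at
     * pool->p.offset; everything past offset + max_len is
     * CPU-exclusive and needs no sync. */
    dma_addr_t dma_addr = page_pool_get_dma_addr(page);

    dma_sync_single_range_for_device(pool->p.dev, dma_addr,
                                     pool->p.offset,
                                     min(dma_sync_size, pool->p.max_len),
                                     pool->p.dma_dir);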
> While "no more than dma_conf->dma_buf_sz bytes will be written into a
> page buffer", for the head buffer they will be written starting at a
> non-zero offset unlike buffers used for the data, no?
Correct, they have different offsets.
The "SPH feature" splits header into buf->page (non-zero offset) and
splits payload into buf->sec_page (zero offset).
For buf->page, pp_params.max_len should be the size of L3/L4 header,
and with a offset of NET_SKB_PAD.
For buf->sec_page, pp_params.max_len should be dma_conf->dma_buf_sz,
and with a offset of 0.
This is always true:
sizeof(L3/L4 header) + NET_SKB_PAD < dma_conf->dma_buf_sz + 0
pp_params.max_len = dma_conf->dma_buf_sz;
make things simpler :)
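
Spelled out (the comment only restates the bound above; not a driver
change):

    /* Per-buffer sync window the pool would ideally use:
     *   buf->page (header):      offset NET_SKB_PAD, len = L3/L4 header
     *   buf->sec_page (payload): offset 0,           len = dma_buf_sz
     * One pool serves both buffer types, so take the larger bound;
     * the header buffer just gets slightly over-synced. */
    pp_params.max_len = dma_conf->dma_buf_sz;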