From: Hariprasad Kelam <hkelam@marvell.com>
To: Wei Fang <wei.fang@nxp.com>
Cc: <andrew+netdev@lunn.ch>, <davem@davemloft.net>,
<edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>,
<mcoquelin.stm32@gmail.com>, <alexandre.torgue@foss.st.com>,
<ast@kernel.org>, <daniel@iogearbox.net>, <hawk@kernel.org>,
<john.fastabend@gmail.com>, <sdf@fomichev.me>,
<rmk+kernel@armlinux.org.uk>, <0x1207@gmail.com>,
<hayashi.kunihiko@socionext.com>, <vladimir.oltean@nxp.com>,
<boon.leong.ong@intel.com>, <imx@lists.linux.dev>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<linux-stm32@st-md-mailman.stormreply.com>,
<linux-arm-kernel@lists.infradead.org>, <bpf@vger.kernel.org>
Subject: Re: [PATCH net] net: stmmac: fix the crash issue for zero copy XDP_TX action
Date: Wed, 17 Dec 2025 16:38:13 +0530
Message-ID: <aUKPHdtAPDnMqB7X@test-OptiPlex-Tower-Plus-7010>
In-Reply-To: <20251204071332.1907111-1-wei.fang@nxp.com>
On 2025-12-04 at 12:43:32, Wei Fang (wei.fang@nxp.com) wrote:
> There is a crash when running the zero-copy XDP_TX action; the crash
> log is shown below.
>
> [ 216.122464] Unable to handle kernel paging request at virtual address fffeffff80000000
> [ 216.187524] Internal error: Oops: 0000000096000144 [#1] SMP
> [ 216.301694] Call trace:
> [ 216.304130] dcache_clean_poc+0x20/0x38 (P)
> [ 216.308308] __dma_sync_single_for_device+0x1bc/0x1e0
> [ 216.313351] stmmac_xdp_xmit_xdpf+0x354/0x400
> [ 216.317701] __stmmac_xdp_run_prog+0x164/0x368
> [ 216.322139] stmmac_napi_poll_rxtx+0xba8/0xf00
> [ 216.326576] __napi_poll+0x40/0x218
> [ 216.408054] Kernel panic - not syncing: Oops: Fatal exception in interrupt
>
> For the XDP_TX action, the xdp_buff is converted to an xdp_frame by
> xdp_convert_buff_to_frame(). The memory type of the resulting xdp_frame
> depends on the memory type of the xdp_buff: a page pool based xdp_buff
> produces an xdp_frame with memory type MEM_TYPE_PAGE_POOL, while a zero
> copy XSK pool based xdp_buff produces an xdp_frame with memory type
> MEM_TYPE_PAGE_ORDER0. However, stmmac_xdp_xmit_back() does not check
> the memory type and always assumes the page pool type, which leads to
> invalid DMA mappings and causes the crash. Therefore, check the
> xdp_buff memory type in stmmac_xdp_xmit_back() to fix this issue.
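[ For reference, the conversion described above happens at the top of
xdp_convert_buff_to_frame(); the following is paraphrased and simplified
from include/net/xdp.h, so treat it as a sketch rather than the exact
upstream code:

	static inline
	struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
	{
		struct xdp_frame *xdpf;

		/* A zero-copy XSK buffer cannot be handed on as-is, so
		 * it is copied into a freshly allocated page; the
		 * resulting frame carries MEM_TYPE_PAGE_ORDER0.
		 */
		if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)
			return xdp_convert_zc_to_xdp_frame(xdp);

		/* Otherwise the conversion is done in place and the
		 * frame inherits the rxq's memory type
		 * (MEM_TYPE_PAGE_POOL for page pool backed RX).
		 */
		xdpf = xdp->data_hard_start;
		if (!xdp_update_frame_from_buff(xdp, xdpf))
			return NULL;

		xdpf->mem_type = xdp->rxq->mem.type;

		return xdpf;
	}
]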
>
> Fixes: bba2556efad6 ("net: stmmac: Enable RX via AF_XDP zero-copy")
> Signed-off-by: Wei Fang <wei.fang@nxp.com>
> ---
> .../net/ethernet/stmicro/stmmac/stmmac_main.c | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index 7b90ecd3a55e..a6664f300e4a 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -88,6 +88,7 @@ MODULE_PARM_DESC(phyaddr, "Physical device address");
> #define STMMAC_XDP_CONSUMED BIT(0)
> #define STMMAC_XDP_TX BIT(1)
> #define STMMAC_XDP_REDIRECT BIT(2)
> +#define STMMAC_XSK_CONSUMED BIT(3)
>
> static int flow_ctrl = FLOW_AUTO;
> module_param(flow_ctrl, int, 0644);
> @@ -4988,6 +4989,7 @@ static int stmmac_xdp_get_tx_queue(struct stmmac_priv *priv,
> static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
> struct xdp_buff *xdp)
> {
> + bool zc = !!(xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL);
> struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
> int cpu = smp_processor_id();
> struct netdev_queue *nq;
> @@ -5004,9 +5006,18 @@ static int stmmac_xdp_xmit_back(struct stmmac_priv *priv,
> /* Avoids TX time-out as we are sharing with slow path */
> txq_trans_cond_update(nq);
>
> - res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, false);
> - if (res == STMMAC_XDP_TX)
> + /* For zero copy XDP_TX action, dma_map is true */
> + res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf, zc);
It seems stmmac_xdp_xmit_xdpf() uses dma_map_single() when zc is passed
as true. Ideally, in the zero-copy case the driver could use
page_pool_get_dma_addr(), so you may need to pass the zc parameter as
false; see the rough sketch below. Please check.
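For reference, the mapping decision in stmmac_xdp_xmit_xdpf() looks
roughly like this (paraphrased and simplified from stmmac_main.c; a
sketch, not the exact upstream code):

	if (dma_map) {
		/* Take a fresh mapping: meant for frames whose pages
		 * are not owned by this driver's RX page pool.
		 */
		dma_addr = dma_map_single(priv->device, xdpf->data,
					  xdpf->len, DMA_TO_DEVICE);
		if (dma_mapping_error(priv->device, dma_addr))
			return STMMAC_XDP_CONSUMED;
	} else {
		/* Reuse the page pool's existing mapping: only valid
		 * when the frame's backing page really came from the
		 * RX page pool.
		 */
		struct page *page = virt_to_page(xdpf->data);

		dma_addr = page_pool_get_dma_addr(page) + sizeof(*xdpf) +
			   xdpf->headroom;
		dma_sync_single_for_device(priv->device, dma_addr,
					   xdpf->len, DMA_BIDIRECTIONAL);
	}

The open question is which branch a frame converted from a zero-copy
buffer should take.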
Thanks,
Hariprasad k
Thread overview: 8+ messages
2025-12-04 7:13 [PATCH net] net: stmmac: fix the crash issue for zero copy XDP_TX action Wei Fang
2025-12-17 11:08 ` Hariprasad Kelam [this message]
2025-12-17 12:49 ` Wei Fang
2025-12-18 6:21 ` Hariprasad Kelam
2025-12-18 6:36 ` Wei Fang
2025-12-19 10:04 ` Hariprasad Kelam
2025-12-29 3:41 ` Wei Fang
2025-12-29 16:40 ` patchwork-bot+netdevbpf