public inbox for linux-kernel@vger.kernel.org
From: Alexander H Duyck <alexander.duyck@gmail.com>
To: Wei Fang <wei.fang@nxp.com>,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	shenwei.wang@nxp.com, xiaoning.wang@nxp.com, pabeni@redhat.com,
	netdev@vger.kernel.org
Cc: linux-imx@nxp.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net] net: fec: tx processing does not call XDP APIs if budget is 0
Date: Tue, 25 Jul 2023 09:51:44 -0700	[thread overview]
Message-ID: <70b71e7bb8a7dff2dacab99b0746e7bf2bee9344.camel@gmail.com> (raw)
In-Reply-To: <20230725074148.2936402-1-wei.fang@nxp.com>

On Tue, 2023-07-25 at 15:41 +0800, Wei Fang wrote:
> According to the clarification [1] in the latest napi.rst, the tx
> processing cannot call any XDP (or page pool) APIs if the "budget"
> is 0. NAPI being called with a budget of 0 (such as from netpoll)
> indicates we may be in IRQ context, and we cannot use the page pool
> from IRQ context.
> 
> [1] https://lore.kernel.org/all/20230720161323.2025379-1-kuba@kernel.org/
> 
> Fixes: 20f797399035 ("net: fec: recycle pages for transmitted XDP frames")
> Signed-off-by: Wei Fang <wei.fang@nxp.com>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> ---
>  drivers/net/ethernet/freescale/fec_main.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index 073d61619336..66b5cbdb43b9 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
> @@ -1372,7 +1372,7 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts,
>  }
>  
>  static void
> -fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> +fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget)
>  {
>  	struct	fec_enet_private *fep;
>  	struct xdp_frame *xdpf;
> @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
>  			if (!skb)
>  				goto tx_buf_done;
>  		} else {
> +			/* Tx processing cannot call any XDP (or page pool) APIs if
> +			 * the "budget" is 0. NAPI being called with a budget of 0
> +			 * (such as from netpoll) indicates we may be in IRQ context,
> +			 * and we can't use the page pool from IRQ context.
> +			 */
> +			if (unlikely(!budget))
> +				break;
> +
>  			xdpf = txq->tx_buf[index].xdp;
>  			if (bdp->cbd_bufaddr)
>  				dma_unmap_single(&fep->pdev->dev,

This statement isn't correct. There are NAPI-enabled and non-NAPI
versions of these calls. That is the reason for things like the
"allow_direct" parameter in page_pool_put_full_page() and the
"napi_direct" parameter in __xdp_return().

By blocking on these cases you can end up hanging the Tx queue, which
is going to break netpoll: you will stall the ring on XDP packets if
they are already in the queue.

From what I can tell, your driver is using xdp_return_frame() in the
case of an XDP frame, which doesn't make use of the NAPI optimizations
in freeing. The NAPI-optimized version is xdp_return_frame_rx_napi().

> @@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
>  		writel(0, txq->bd.reg_desc_active);
>  }
>  
> -static void fec_enet_tx(struct net_device *ndev)
> +static void fec_enet_tx(struct net_device *ndev, int budget)
>  {
>  	struct fec_enet_private *fep = netdev_priv(ndev);
>  	int i;
>  
>  	/* Make sure that AVB queues are processed first. */
>  	for (i = fep->num_tx_queues - 1; i >= 0; i--)
> -		fec_enet_tx_queue(ndev, i);
> +		fec_enet_tx_queue(ndev, i, budget);
>  }
>  
>  static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
> @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
>  
>  	do {
>  		done += fec_enet_rx(ndev, budget - done);
> -		fec_enet_tx(ndev);
> +		fec_enet_tx(ndev, budget);
>  	} while ((done < budget) && fec_enet_collect_events(fep));
>  
>  	if (done < budget) {

Since you are passing budget, one optimization you could make would be
to use napi_consume_skb() in your Tx path instead of
dev_kfree_skb_any().


Thread overview: 8+ messages
2023-07-25  7:41 [PATCH net] net: fec: tx processing does not call XDP APIs if budget is 0 Wei Fang
2023-07-25 16:51 ` Alexander H Duyck [this message]
2023-07-26  3:40   ` Wei Fang
2023-07-26 15:53     ` Alexander Duyck
2023-07-27  2:08       ` Wei Fang
2023-07-27  4:14         ` Jakub Kicinski
2023-07-27  5:32           ` Wei Fang
2023-07-27  4:20 ` patchwork-bot+netdevbpf
