From: Maxim Mikityanskiy <maximmi@nvidia.com>
To: "Björn Töpel" <bjorn.topel@gmail.com>,
	ast@kernel.org, daniel@iogearbox.net
Cc: "Björn Töpel" <bjorn.topel@intel.com>,
	jonathan.lemon@gmail.com, magnus.karlsson@intel.com,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH bpf] xdp: Handle MEM_TYPE_XSK_BUFF_POOL correctly in xdp_return_buff()
Date: Mon, 30 Nov 2020 10:54:50 +0200	[thread overview]
Message-ID: <0441d9e3-3880-5eb4-ca3d-0b714d41b48e@nvidia.com> (raw)
In-Reply-To: <20201127171726.123627-1-bjorn.topel@gmail.com>

On 2020-11-27 19:17, Björn Töpel wrote:
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> It turns out that there does exist a path where xdp_return_buff() is
> passed an XDP buffer of type MEM_TYPE_XSK_BUFF_POOL. This path is
> taken when AF_XDP zero-copy mode is enabled, and a buffer is
> redirected to a DEVMAP with an attached XDP program that drops the
> buffer.
> 
> This change simply puts the handling of MEM_TYPE_XSK_BUFF_POOL back
> into xdp_return_buff().
> 
> Reported-by: Maxim Mikityanskiy <maximmi@nvidia.com>
> Fixes: 82c41671ca4f ("xdp: Simplify xdp_return_{frame, frame_rx_napi, buff}")
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Thanks for addressing this!

Acked-by: Maxim Mikityanskiy <maximmi@nvidia.com>

> ---
>   net/core/xdp.c | 17 ++++++++++-------
>   1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 48aba933a5a8..491ad569a79c 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -335,11 +335,10 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
>    * scenarios (e.g. queue full), it is possible to return the xdp_frame
>    * while still leveraging this protection.  The @napi_direct boolean
>    * is used for those calls sites.  Thus, allowing for faster recycling
> - * of xdp_frames/pages in those cases. This path is never used by the
> - * MEM_TYPE_XSK_BUFF_POOL memory type, so it's explicitly not part of
> - * the switch-statement.
> + * of xdp_frames/pages in those cases.
>    */
> -static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
> +static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
> +			 struct xdp_buff *xdp)
>   {
>   	struct xdp_mem_allocator *xa;
>   	struct page *page;
> @@ -361,6 +360,10 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
>   		page = virt_to_page(data); /* Assumes order0 page*/
>   		put_page(page);
>   		break;
> +	case MEM_TYPE_XSK_BUFF_POOL:
> +		/* NB! Only valid from an xdp_buff! */
> +		xsk_buff_free(xdp);
> +		break;
>   	default:
>   		/* Not possible, checked in xdp_rxq_info_reg_mem_model() */
>   		WARN(1, "Incorrect XDP memory type (%d) usage", mem->type);
> @@ -370,19 +373,19 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
>   
>   void xdp_return_frame(struct xdp_frame *xdpf)
>   {
> -	__xdp_return(xdpf->data, &xdpf->mem, false);
> +	__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
>   }
>   EXPORT_SYMBOL_GPL(xdp_return_frame);
>   
>   void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
>   {
> -	__xdp_return(xdpf->data, &xdpf->mem, true);
> +	__xdp_return(xdpf->data, &xdpf->mem, true, NULL);
>   }
>   EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
>   
>   void xdp_return_buff(struct xdp_buff *xdp)
>   {
> -	__xdp_return(xdp->data, &xdp->rxq->mem, true);
> +	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
>   }
>   
>   /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
> 
> base-commit: 9a44bc9449cfe7e39dbadf537ff669fb007a9e63
> 


Thread overview: 3+ messages
2020-11-27 17:17 [PATCH bpf] xdp: Handle MEM_TYPE_XSK_BUFF_POOL correctly in xdp_return_buff() Björn Töpel
2020-11-30  8:54 ` Maxim Mikityanskiy [this message]
2020-11-30 22:10 ` patchwork-bot+netdevbpf
