Subject: Re: [PATCH v2 net-next 2/2] xdp: add multi-buff support for xdp running in generic mode
Date: Thu, 30 Nov 2023 10:49:12 -0800
From: Stanislav Fomichev
To: Lorenzo Bianconi
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, davem@davemloft.net,
 edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
 lorenzo.bianconi@redhat.com, bpf@vger.kernel.org, toke@redhat.com,
 willemdebruijn.kernel@gmail.com, jasowang@redhat.com, kernel-team,
 Yan Zhai
X-Mailing-List: netdev@vger.kernel.org

On 11/30, Lorenzo Bianconi wrote:
> >
> > On 11/30/23 10:11, Lorenzo Bianconi wrote:
> > > Similar to native XDP, do not always linearize the skb in the
> > > netif_receive_generic_xdp routine but create a non-linear xdp_buff to
> > > be processed by the eBPF program. This allows adding multi-buffer
> > > support for XDP running in generic mode.
> > >
> > > Signed-off-by: Lorenzo Bianconi
> > > ---
> > >  net/core/dev.c | 144 ++++++++++++++++++++++++++++++++++++++++---------
> > >  1 file changed, 119 insertions(+), 25 deletions(-)
> > >
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 4df68d7f04a2..0d08e755bb7f 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -4853,6 +4853,12 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
> > >  	xdp_init_buff(xdp, frame_sz, &rxqueue->xdp_rxq);
> > >  	xdp_prepare_buff(xdp, hard_start, skb_headroom(skb) - mac_len,
> > >  			 skb_headlen(skb) + mac_len, true);
> > > +	if (skb_is_nonlinear(skb)) {
> > > +		skb_shinfo(skb)->xdp_frags_size = skb->data_len;
> > > +		xdp_buff_set_frags_flag(xdp);
> > > +	} else {
> > > +		xdp_buff_clear_frags_flag(xdp);
> > > +	}
> > >
> > >  	orig_data_end = xdp->data_end;
> > >  	orig_data = xdp->data;
> > > @@ -4882,6 +4888,14 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
> > >  		skb->len += off; /* positive on grow, negative on shrink */
> > >  	}
> > >
> > > +	/* XDP frag metadata (e.g. nr_frags) are updated in eBPF helpers
> > > +	 * (e.g. bpf_xdp_adjust_tail), we need to update data_len here.
> > > +	 */
> > > +	if (xdp_buff_has_frags(xdp))
> > > +		skb->data_len = skb_shinfo(skb)->xdp_frags_size;
> > > +	else
> > > +		skb->data_len = 0;
> > > +
> > >  	/* check if XDP changed eth hdr such SKB needs update */
> > >  	eth = (struct ethhdr *)xdp->data;
> > >  	if ((orig_eth_type != eth->h_proto) ||
> > > @@ -4915,54 +4929,134 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
> > >  	return act;
> > >  }
> > >
> > > -static u32 netif_receive_generic_xdp(struct sk_buff **pskb,
> > > -				     struct xdp_buff *xdp,
> > > -				     struct bpf_prog *xdp_prog)
> > > +static int netif_skb_check_for_generic_xdp(struct sk_buff **pskb,
> > > +					   struct bpf_prog *prog)
> >
> > I like that this is split out into a check function.
> > > > > { > > > struct sk_buff *skb = *pskb; > > > - u32 act = XDP_DROP; > > > - > > > - /* Reinjected packets coming from act_mirred or similar should > > > - * not get XDP generic processing. > > > - */ > > > - if (skb_is_redirected(skb)) > > > - return XDP_PASS; > > > > (For other reviewers) > > This reinjected check is moved further down. > > > > > + int err; > > > - /* XDP packets must be linear and must have sufficient headroom > > > - * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also > > > - * native XDP provides, thus we need to do it here as well. > > > + /* XDP does not support fraglist so we need to linearize > > > + * the skb. > > > */ > > > - if (skb_cloned(skb) || skb_is_nonlinear(skb) || > > > - skb_headroom(skb) < XDP_PACKET_HEADROOM) { > > > + if (skb_has_frag_list(skb) || !prog->aux->xdp_has_frags) { > > > int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb); > > > int troom = skb->tail + skb->data_len - skb->end; > > > /* In case we have to go down the path and also linearize, > > > * then lets do the pskb_expand_head() work just once here. > > > */ > > > - if (pskb_expand_head(skb, > > > - hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0, > > > - troom > 0 ? troom + 128 : 0, GFP_ATOMIC)) > > > - goto do_drop; > > > - if (skb_linearize(skb)) > > > - goto do_drop; > > > + err = pskb_expand_head(skb, > > > + hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0, > > > + troom > 0 ? troom + 128 : 0, GFP_ATOMIC); > > > + if (err) > > > + return err; > > > + > > > + err = skb_linearize(skb); > > > + if (err) > > > + return err; > > > + > > > + return 0; > > > + } > > > + > > > + /* XDP packets must have sufficient headroom of XDP_PACKET_HEADROOM > > > + * bytes. This is the guarantee that also native XDP provides, > > > + * thus we need to do it here as well. 
> > > +	 */
> > > +	if (skb_cloned(skb) || skb_shinfo(skb)->nr_frags ||
> >
> > I thought we could allow an skb with skb_shinfo(skb)->nr_frags (that isn't
> > cloned or shared) to be processed by generic XDP without any reallocation?
>
> I do not think so, we discussed it with Jakub here [0]
>
> [0] https://lore.kernel.org/netdev/20231128105145.7b39db7d@kernel.org/

Can this be done as an optimization later on? For example, if, from the BPF
side, the verifier can attest that the program does not call
bpf_xdp_{load,store}_bytes on the frags.