From: Yan Zhai <yan@cloudflare.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: netdev@vger.kernel.org,
"Jesse Brandeburg" <jesse.brandeburg@intel.com>,
"Tony Nguyen" <anthony.l.nguyen@intel.com>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Björn Töpel" <bjorn@kernel.org>,
"Magnus Karlsson" <magnus.karlsson@intel.com>,
"Maciej Fijalkowski" <maciej.fijalkowski@intel.com>,
"Jonathan Lemon" <jonathan.lemon@gmail.com>,
"Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org
Subject: Re: [RFC net-next 5/9] ice: apply XDP offloading fixup when building skb
Date: Fri, 21 Jun 2024 11:05:16 -0500 [thread overview]
Message-ID: <CAO3-PbrVbOo9ydrtc7kfWitXrnftgT3QGpub3y2K209L0jis1Q@mail.gmail.com> (raw)
In-Reply-To: <6414deb0-165c-4a98-8467-ba6949166f96@intel.com>
On Fri, Jun 21, 2024 at 4:22 AM Alexander Lobakin
<aleksander.lobakin@intel.com> wrote:
>
> From: Yan Zhai <yan@cloudflare.com>
> Date: Thu, 20 Jun 2024 15:19:22 -0700
>
> > Add a common point to transfer offloading info from XDP context to skb.
> >
> > Signed-off-by: Yan Zhai <yan@cloudflare.com>
> > Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
> > ---
> > drivers/net/ethernet/intel/ice/ice_txrx.c | 2 ++
> > drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++++-
> > include/net/xdp_sock_drv.h | 2 +-
> > 3 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
> > index 8bb743f78fcb..a247306837ed 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_txrx.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
> > @@ -1222,6 +1222,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> >
> > hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
> > offset;
> > + xdp_init_buff_minimal(xdp);
>
> Two lines below, you have this:
>
> xdp_buff_clear_frags_flag(xdp);
>
> This clears the frags bit in xdp->flags. Since you always clear the flags
> here, that call becomes redundant.
> But I'd say that `xdp->flags = 0` really wants to be moved from
> xdp_init_buff() to xdp_prepare_buff().
>
You are right; there is some redundancy here. I will fix it if people
feel good about the use case in general :)
> > xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
> > #if (PAGE_SIZE > 4096)
> > /* At larger PAGE_SIZE, frame_sz depend on len size */
> > @@ -1287,6 +1288,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
> >
> > /* populate checksum, VLAN, and protocol */
> > ice_process_skb_fields(rx_ring, rx_desc, skb);
> > + xdp_buff_fixup_skb_offloading(xdp, skb);
> >
> > ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb);
> > /* send completed skb up the stack */
> > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > index a65955eb23c0..367658acaab8 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > @@ -845,8 +845,10 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
> > xdp_prog = READ_ONCE(rx_ring->xdp_prog);
> > xdp_ring = rx_ring->xdp_ring;
> >
> > - if (ntc != rx_ring->first_desc)
> > + if (ntc != rx_ring->first_desc) {
> > first = *ice_xdp_buf(rx_ring, rx_ring->first_desc);
> > + xdp_init_buff_minimal(first);
>
> xsk_buff_set_size() always clears the flags, so this is redundant.
>
> > + }
> >
> > while (likely(total_rx_packets < (unsigned int)budget)) {
> > union ice_32b_rx_flex_desc *rx_desc;
> > @@ -920,6 +922,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
> > break;
> > }
> >
> > + xdp = first;
> > first = NULL;
> > rx_ring->first_desc = ntc;
> >
> > @@ -934,6 +937,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
> > vlan_tci = ice_get_vlan_tci(rx_desc);
> >
> > ice_process_skb_fields(rx_ring, rx_desc, skb);
> > + xdp_buff_fixup_skb_offloading(xdp, skb);
> > ice_receive_skb(rx_ring, skb, vlan_tci);
> > }
> >
> > diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
> > index 0a5dca2b2b3f..02243dc064c2 100644
> > --- a/include/net/xdp_sock_drv.h
> > +++ b/include/net/xdp_sock_drv.h
> > @@ -181,7 +181,7 @@ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
> > xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
> > xdp->data_meta = xdp->data;
> > xdp->data_end = xdp->data + size;
> > - xdp->flags = 0;
> > + xdp_init_buff_minimal(xdp);
>
> Why is this done in the patch prefixed with "ice:"?
>
Good catch, this should be moved to the previous patch.
Thanks,
Yan
> > }
> >
> > static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,
>
> Thanks,
> Olek