From: Maciej Fijalkowski <maciejromanfijalkowski@gmail.com>
To: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
davem@davemloft.net,
Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>,
netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Andrew Bowers <andrewx.bowers@intel.com>
Subject: Re: [net-next 5/9] ice: Add support for AF_XDP
Date: Wed, 30 Oct 2019 23:04:06 +0100 [thread overview]
Message-ID: <20191030230406.00004e3c@gmail.com> (raw)
In-Reply-To: <20191030115009.6168b50f@cakuba.netronome.com>
On Wed, 30 Oct 2019 11:50:09 -0700
Jakub Kicinski <jakub.kicinski@netronome.com> wrote:
> On Tue, 29 Oct 2019 20:29:06 -0700, Jeff Kirsher wrote:
> > +/**
> > + * ice_run_xdp_zc - Executes an XDP program in zero-copy path
> > + * @rx_ring: Rx ring
> > + * @xdp: xdp_buff used as input to the XDP program
> > + *
> > + * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
> > + */
> > +static int
> > +ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
> > +{
> > + int err, result = ICE_XDP_PASS;
> > + struct bpf_prog *xdp_prog;
> > + struct ice_ring *xdp_ring;
> > + u32 act;
> > +
> > + rcu_read_lock();
> > + xdp_prog = READ_ONCE(rx_ring->xdp_prog);
> > + if (!xdp_prog) {
> > + rcu_read_unlock();
> > + return ICE_XDP_PASS;
> > + }
> > +
> > + act = bpf_prog_run_xdp(xdp_prog, xdp);
> > + xdp->handle += xdp->data - xdp->data_hard_start;
> > + switch (act) {
> > + case XDP_PASS:
> > + break;
> > + case XDP_TX:
> > + xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->q_index];
> > + result = ice_xmit_xdp_buff(xdp, xdp_ring);
> > + break;
>
> From the quick look at the code it wasn't clear to me how you deal with
> XDP_TX on ZC frames. Could you describe the flow of such frames a
> little bit?
Sure, here we go.

ice_xmit_xdp_buff() calls convert_to_xdp_frame(xdp), which in this case falls
through to xdp_convert_zc_to_xdp_frame(xdp), because xdp->rxq->mem.type is set
to MEM_TYPE_ZERO_COPY.
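For reference, the dispatch boils down to roughly this (a simplified userspace
sketch with stub types and string tags in place of real work, not the exact
upstream code; the real definitions live in include/net/xdp.h):

```c
#include <assert.h>
#include <string.h>

/* Stand-ins for the kernel types -- only what the dispatch needs. */
enum xdp_mem_type { MEM_TYPE_PAGE_SHARED, MEM_TYPE_ZERO_COPY };

struct xdp_rxq_info { enum xdp_mem_type mem_type; };
struct xdp_buff { struct xdp_rxq_info *rxq; };

/* The real code allocates a page, copies the payload out of the umem,
 * and returns the Rx buffer via xdp_return_buff(); modeled as a tag here. */
static const char *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
{
	(void)xdp;
	return "copied-out";
}

static const char *convert_to_xdp_frame(struct xdp_buff *xdp)
{
	/* A ZC buffer belongs to the Rx umem, so it cannot be handed to
	 * the Tx ring in place -- it must be copied out first. */
	if (xdp->rxq->mem_type == MEM_TYPE_ZERO_COPY)
		return xdp_convert_zc_to_xdp_frame(xdp);
	return "in-place";
}
```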
Now we are in xdp_convert_zc_to_xdp_frame(xdp), where we allocate an extra
page dedicated to the struct xdp_frame, copy the data from the current struct
xdp_buff into that page, and finish with an xdp_return_buff(xdp) call. Again,
since mem.type is MEM_TYPE_ZERO_COPY, the callback for freeing ZC frames is
invoked, via:

xa->zc_alloc->free(xa->zc_alloc, handle);

This callback was set during the Rx context initialization:
ring->zca.free = ice_zca_free;
err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
MEM_TYPE_ZERO_COPY,
&ring->zca);
Finally, ice_zca_free() puts the ZC frame back onto the HW Rx ring. At this
point the ZC frame has been recycled and its contents sit in the standalone
page represented by struct xdp_frame, so we are free to proceed with the
transmission; that extra page will be freed during the Tx irq cleanup, after
the frame has been transmitted successfully.
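The recycle step can be modeled in a few lines (again a userspace sketch,
with the HW Rx ring reduced to a toy array; the real ice_zca_free() writes
the buffer's DMA address back into an Rx descriptor):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins: zca is embedded in the ring, so the free callback can
 * recover the ring from the zca pointer (container_of in the kernel,
 * spelled out with offsetof here). */
struct zero_copy_allocator {
	void (*free)(struct zero_copy_allocator *zca, unsigned long handle);
};

struct ice_ring {
	struct zero_copy_allocator zca;
	unsigned long recycled[16];   /* stands in for the HW Rx ring */
	int n_recycled;
};

static void ice_zca_free(struct zero_copy_allocator *zca, unsigned long handle)
{
	struct ice_ring *ring = (struct ice_ring *)
		((char *)zca - offsetof(struct ice_ring, zca));

	/* Hand the umem buffer back so the HW can receive into it again. */
	ring->recycled[ring->n_recycled++] = handle;
}
```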
I might have over-complicated this description with too many code examples, so
let me know if it makes sense to you.
Maciej
>
> > + case XDP_REDIRECT:
> > + err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
> > + result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
> > + break;
> > + default:
> > + bpf_warn_invalid_xdp_action(act);
> > + /* fallthrough -- not supported action */
> > + case XDP_ABORTED:
> > + trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
> > + /* fallthrough -- handle aborts by dropping frame */
> > + case XDP_DROP:
> > + result = ICE_XDP_CONSUMED;
> > + break;
> > + }
> > +
> > + rcu_read_unlock();
> > + return result;
> > +}
>
Thread overview:
2019-10-30 3:29 [net-next 0/9][pull request] 100GbE Intel Wired LAN Driver Updates 2019-10-29 Jeff Kirsher
2019-10-30 3:29 ` [net-next 1/9] ice: Introduce ice_base.c Jeff Kirsher
2019-10-30 3:29 ` [net-next 2/9] ice: get rid of per-tc flow in Tx queue configuration routines Jeff Kirsher
2019-10-30 3:29 ` [net-next 3/9] ice: Add support for XDP Jeff Kirsher
2019-10-30 18:27 ` Jakub Kicinski
2019-10-31 0:26 ` Maciej Fijalkowski
2019-10-30 3:29 ` [net-next 4/9] ice: Move common functions to ice_txrx_lib.c Jeff Kirsher
2019-10-30 3:29 ` [net-next 5/9] ice: Add support for AF_XDP Jeff Kirsher
2019-10-30 18:50 ` Jakub Kicinski
2019-10-30 22:04 ` Maciej Fijalkowski [this message]
2019-10-30 22:06 ` Jakub Kicinski
2019-10-30 3:29 ` [net-next 6/9] ice: introduce legacy Rx flag Jeff Kirsher
2019-10-30 3:29 ` [net-next 7/9] ice: introduce frame padding computation logic Jeff Kirsher
2019-10-30 3:29 ` [net-next 8/9] ice: add build_skb() support Jeff Kirsher
2019-10-30 3:29 ` [net-next 9/9] ice: allow 3k MTU for XDP Jeff Kirsher