From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
bpf@vger.kernel.org, netdev@vger.kernel.org, davem@davemloft.net,
kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
brouer@redhat.com, pabeni@redhat.com, echaudro@redhat.com,
toshiaki.makita1@gmail.com, andrii@kernel.org
Subject: Re: [PATCH v4 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
Date: Thu, 10 Mar 2022 20:26:04 +0100
Message-ID: <YipQzAGMyVbJQyhX@lore-desk>
In-Reply-To: <87ilsly6db.fsf@toke.dk>
> Lorenzo Bianconi <lorenzo.bianconi@redhat.com> writes:
>
> >> Lorenzo Bianconi <lorenzo@kernel.org> writes:
> >>
> >> > Introduce the veth_convert_xdp_buff_from_skb routine in order to
> >> > convert a non-linear skb into an xdp buffer. If the received skb
> >> > is cloned or shared, veth_convert_xdp_buff_from_skb will copy it
> >> > into a new skb composed of order-0 pages for the linear and
> >> > fragmented areas. Moreover, veth_convert_xdp_buff_from_skb guarantees
> >> > we have enough headroom for xdp.
> >> > This is a preliminary patch to allow attaching xdp programs with frags
> >> > support to veth devices.
> >> >
> >> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> >>
> >> It's cool that we can do this! A few comments below:
> >
> > Hi Toke,
> >
> > thanks for the review :)
> >
> > [...]
> >
> >> > +static int veth_convert_xdp_buff_from_skb(struct veth_rq *rq,
> >> > + struct xdp_buff *xdp,
> >> > + struct sk_buff **pskb)
> >> > +{
> >>
> >> nit: It's not really "converting" an skb into an xdp_buff, since the
> >> xdp_buff lives on the stack; so maybe 'veth_init_xdp_buff_from_skb()'?
> >
> > I kept the naming convention previously used for xdp_convert_frame_to_buff()
> > (my goal is to eventually move this routine into xdp.c and reuse it for
> > the generic-xdp use case), but I am fine with
> > veth_init_xdp_buff_from_skb().
>
> Consistency is probably good, but right now we have functions of the
> form 'xdp_convert_X_to_Y()' and 'xdp_update_Y_from_X()'. So to follow
> that you'd have either 'veth_update_xdp_buff_from_skb()' or
> 'veth_convert_skb_to_xdp_buff()' :)
ack, I am fine with veth_convert_skb_to_xdp_buff()
>
> >> > + struct sk_buff *skb = *pskb;
> >> > + u32 frame_sz;
> >> >
> >> > if (skb_shared(skb) || skb_head_is_locked(skb) ||
> >> > - skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
> >> > + skb_shinfo(skb)->nr_frags) {
> >>
> >> So this always clones the skb if it has frags? Is that really needed?
> >
> > if we look at skb_cow_data(), the paged area is always considered non-writable
>
> Ah, right, did not know that. Seems a bit odd, but OK.
>
> >> Also, there's a lot of memory allocation and copying going on here; have
> >> you measured the performance?
> >
> > even in the previous implementation we always reallocated the skb if the
> > conditions above were met, so I do not expect any difference in the
> > single-buffer use case, but I will run some performance tests.
>
> No, I wouldn't expect any difference for the single-buffer case, but I
> would also be interested in how big the overhead is of having to copy
> the whole jumbo-frame?
oh ok, I got what you mean. I guess we can compare the TCP throughput of
legacy skb mode (no program attached to the veth pair) against xdp mode
(a simple xdp program that just returns XDP_PASS) with jumbo frames
enabled. I would expect a performance penalty, but let's see.
>
> BTW, just noticed one other change - before we had:
>
> > - headroom = skb_headroom(skb) - mac_len;
> > if (skb_shared(skb) || skb_head_is_locked(skb) ||
> > - skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
>
>
> And in your patch that becomes:
>
> > + } else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
> > + pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
> > + goto drop;
>
>
> So the mac_len subtraction disappeared; that seems wrong?
We call __skb_push(skb, mac_len) in veth_xdp_rcv_skb() before running
veth_convert_xdp_buff_from_skb(), so the skb_headroom() value the new
check sees already has mac_len subtracted.
>
> >> > +
> >> > + if (xdp_buff_has_frags(&xdp))
> >> > + skb->data_len = skb_shinfo(skb)->xdp_frags_size;
> >> > + else
> >> > + skb->data_len = 0;
> >>
> >> We can remove entire frags using xdp_adjust_tail, right? Will that get
> >> propagated in the right way to the skb frags due to the dual use of
> >> skb_shared_info, or?
> >
> > bpf_xdp_frags_shrink_tail() can remove entire frags, and it will modify
> > the metadata contained in the skb_shared_info (e.g. nr_frags or the size
> > of a given frag). We need to take the data_len field into account in this
> > case. Agree?
>
> Right, that's what I assumed; makes sense. But adding a comment
> mentioning this above the update of data_len might be helpful? :)
ack, will do.
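As a sketch of the bookkeeping in question (toy fields, not the real struct sk_buff):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: total skb length = linear headlen + data_len (bytes held
 * in frags). After an XDP program shrinks or drops frags, the
 * up-to-date paged length lives in the xdp_frags_size counter that
 * the XDP frag helpers maintain. */
struct toy_skb {
    unsigned headlen;        /* bytes in the linear area */
    unsigned data_len;       /* bytes in the paged (frag) area */
    unsigned xdp_frags_size; /* maintained by XDP frag helpers */
};

/* XDP frag helpers (e.g. bpf_xdp_frags_shrink_tail()) may have changed
 * nr_frags and the per-frag sizes, so data_len must be refreshed from
 * xdp_frags_size when rebuilding the skb -- this is the update the
 * comment should explain. */
static void sync_data_len_from_xdp(struct toy_skb *skb, bool has_frags)
{
    skb->data_len = has_frags ? skb->xdp_frags_size : 0;
}

static unsigned toy_skb_len(const struct toy_skb *skb)
{
    return skb->headlen + skb->data_len;
}
```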
Regards,
Lorenzo
>
> -Toke
>
Thread overview: 15+ messages
2022-03-08 16:05 [PATCH v4 bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
2022-03-08 16:05 ` [PATCH v4 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
2022-03-10 11:10 ` Toke Høiland-Jørgensen
2022-03-15 5:32 ` John Fastabend
2022-03-08 16:05 ` [PATCH v4 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
2022-03-10 11:21 ` Toke Høiland-Jørgensen
2022-03-10 11:43 ` Lorenzo Bianconi
2022-03-10 19:06 ` Toke Høiland-Jørgensen
2022-03-10 19:26 ` Lorenzo Bianconi [this message]
2022-03-10 23:46 ` Lorenzo Bianconi
2022-03-12 21:18 ` Jakub Kicinski
2022-03-08 16:06 ` [PATCH v4 bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
2022-03-10 11:30 ` Toke Høiland-Jørgensen
2022-03-10 15:06 ` Lorenzo Bianconi
2022-03-10 18:53 ` Toke Høiland-Jørgensen