From: John Fastabend <john.fastabend@gmail.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>,
bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, ast@kernel.org,
daniel@iogearbox.net, shayagr@amazon.com, sameehj@amazon.com,
john.fastabend@gmail.com, dsahern@kernel.org, brouer@redhat.com,
lorenzo.bianconi@redhat.com, echaudro@redhat.com
Subject: RE: [PATCH v4 bpf-next 06/13] bpf: introduce bpf_xdp_get_frags_{count, total_size} helpers
Date: Fri, 02 Oct 2020 08:36:28 -0700
Message-ID: <5f7748fc80bd9_38b02081@john-XPS-13-9370.notmuch>
In-Reply-To: <deb81e4cf02db9a1da2b4088a49afd7acf8b82b6.1601648734.git.lorenzo@kernel.org>
Lorenzo Bianconi wrote:
> From: Sameeh Jubran <sameehj@amazon.com>
>
> Introduce the following two bpf helpers in order to provide some
> metadata about an xdp multi-buff frame to the bpf layer:
>
> - bpf_xdp_get_frags_count()
> get the number of fragments for a given xdp multi-buffer.
Same comment as in the cover letter: can you provide a use case
for how/where I would use bpf_xdp_get_frags_count()? Is it just for
debug? If it's just for debug, do we really want a uapi helper for it?
>
> * bpf_xdp_get_frags_total_size()
> get the total size of fragments for a given xdp multi-buffer.
This is awkward IMO. If total size is needed, it should return the total
size in all cases, not just in the mb case; otherwise programs will have
two paths, the mb path and the non-mb path, and with a mixed workload
the branch predictor will miss. Plus it's extra instructions to load.
And if it's useful for something beyond debug and is going to be
read every packet, I think we should put it in the metadata
so that it's not hidden behind a helper, which will likely show up as
overhead on a 40+Gbps NIC. The use case I have in mind is counting
bytes, maybe sliced by IP or protocol. There you will always read it,
and I don't want code with an if/else stuck in the middle when, if
we do it right, we have a single read.
>
> Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
> Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
> include/uapi/linux/bpf.h | 14 ++++++++++++
> net/core/filter.c | 42 ++++++++++++++++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 14 ++++++++++++
> 3 files changed, 70 insertions(+)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 4f556cfcbfbe..0715995eb18c 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3668,6 +3668,18 @@ union bpf_attr {
> * Return
> * The helper returns **TC_ACT_REDIRECT** on success or
> * **TC_ACT_SHOT** on error.
> + *
> + * int bpf_xdp_get_frags_count(struct xdp_buff *xdp_md)
> + * Description
> + * Get the number of fragments for a given xdp multi-buffer.
> + * Return
> + * The number of fragments
> + *
> + * int bpf_xdp_get_frags_total_size(struct xdp_buff *xdp_md)
> + * Description
> + * Get the total size of fragments for a given xdp multi-buffer.
Why just fragments? Will I also have to add the initial frag0 to it,
or not? I think the description is a bit ambiguous.
> + * Return
> + * The total size of fragments for a given xdp multi-buffer.
> */
[...]
> +const struct bpf_func_proto bpf_xdp_get_frags_count_proto = {
> + .func = bpf_xdp_get_frags_count,
> + .gpl_only = false,
> + .ret_type = RET_INTEGER,
> + .arg1_type = ARG_PTR_TO_CTX,
> +};
> +
> +BPF_CALL_1(bpf_xdp_get_frags_total_size, struct xdp_buff*, xdp)
> +{
> + struct skb_shared_info *sinfo;
> + int nfrags, i, size = 0;
> +
> + if (likely(!xdp->mb))
> + return 0;
> +
> + sinfo = xdp_get_shared_info_from_buff(xdp);
> + nfrags = min_t(u8, sinfo->nr_frags, MAX_SKB_FRAGS);
> +
> + for (i = 0; i < nfrags; i++)
> + size += skb_frag_size(&sinfo->frags[i]);
Won't the hardware just know this? Walking the frag list
just to get the total seems wrong. The hardware should have a
total_len field somewhere we can just read, no? If mvneta doesn't
know the total length, that seems like a driver limitation and we
shouldn't encode it in the helper.
> +
> + return size;
> +}
Thread overview: 31+ messages
2020-10-02 14:41 [PATCH v4 bpf-next 00/13] mvneta: introduce XDP multi-buffer support Lorenzo Bianconi
2020-10-02 14:41 ` [PATCH v4 bpf-next 01/13] xdp: introduce mb in xdp_buff/xdp_frame Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 02/13] xdp: initialize xdp_buff mb bit to 0 in all XDP drivers Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 03/13] net: mvneta: update mb bit before passing the xdp buffer to eBPF layer Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 04/13] xdp: add multi-buff support to xdp_return_{buff/frame} Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 05/13] net: mvneta: add multi buffer support to XDP_TX Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 06/13] bpf: introduce bpf_xdp_get_frags_{count, total_size} helpers Lorenzo Bianconi
2020-10-02 15:36 ` John Fastabend [this message]
2020-10-02 16:25 ` Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 07/13] samples/bpf: add bpf program that uses xdp mb helpers Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 08/13] bpf: move user_size out of bpf_test_init Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 09/13] bpf: introduce multibuff support to bpf_prog_test_run_xdp() Lorenzo Bianconi
2020-10-08 8:06 ` Shay Agroskin
2020-10-08 10:46 ` Jesper Dangaard Brouer
2020-10-02 14:42 ` [PATCH v4 bpf-next 10/13] bpf: test_run: add skb_shared_info pointer in bpf_test_finish signature Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 11/13] bpf: add xdp multi-buffer selftest Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 12/13] net: mvneta: enable jumbo frames for XDP Lorenzo Bianconi
2020-10-02 14:42 ` [PATCH v4 bpf-next 13/13] bpf: cpumap: introduce xdp multi-buff support Lorenzo Bianconi
2020-10-02 15:25 ` [PATCH v4 bpf-next 00/13] mvneta: introduce XDP multi-buffer support John Fastabend
2020-10-02 16:06 ` Lorenzo Bianconi
2020-10-02 18:06 ` John Fastabend
2020-10-05 9:52 ` Jesper Dangaard Brouer
2020-10-05 21:22 ` John Fastabend
2020-10-05 22:24 ` Lorenzo Bianconi
2020-10-06 4:29 ` John Fastabend
2020-10-06 7:30 ` Jesper Dangaard Brouer
2020-10-06 15:28 ` Lorenzo Bianconi
2020-10-08 14:38 ` John Fastabend
2020-10-02 19:53 ` Daniel Borkmann
2020-10-05 15:50 ` Tirthendu Sarkar
2020-10-06 12:39 ` Jubran, Samih