From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: Saeed Mahameed <saeed@kernel.org>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
bpf@vger.kernel.org, netdev@vger.kernel.org, davem@davemloft.net,
kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
shayagr@amazon.com, sameehj@amazon.com, john.fastabend@gmail.com,
dsahern@kernel.org, brouer@redhat.com, echaudro@redhat.com,
jasowang@redhat.com
Subject: Re: [PATCH v5 bpf-next 03/14] xdp: add xdp_shared_info data structure
Date: Tue, 8 Dec 2020 12:01:25 +0100
Message-ID: <20201208110125.GC36228@lore-desk>
In-Reply-To: <5465830698257f18ae474877648f4a9fe2e1eefe.camel@kernel.org>
> On Mon, 2020-12-07 at 17:32 +0100, Lorenzo Bianconi wrote:
> > Introduce xdp_shared_info data structure to contain info about
> > "non-linear" xdp frame. xdp_shared_info will alias skb_shared_info
> > allowing us to keep most of the frags in the same cache line.
> > Introduce some xdp_shared_info helpers aligned to the skb_frag_* ones.
> >
>
> is there, or will there be, a more general-purpose use for this
> xdp_shared_info, other than hosting frags?
I do not have any use cases other than multi-buffer at the moment, but in
theory it is possible, I guess.
The reason we introduced it is to keep most of the frags in the first
shared_info cache line and so avoid cache misses.
>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> > drivers/net/ethernet/marvell/mvneta.c | 62 +++++++++++++++------------
> > include/net/xdp.h | 52 ++++++++++++++++++++--
> > 2 files changed, 82 insertions(+), 32 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 1e5b5c69685a..d635463609ad 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -2033,14 +2033,17 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
> >
>
> [...]
>
> > static void
> > @@ -2278,7 +2281,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
> > struct mvneta_rx_desc *rx_desc,
> > struct mvneta_rx_queue *rxq,
> > struct xdp_buff *xdp, int *size,
> > - struct skb_shared_info *xdp_sinfo,
> > + struct xdp_shared_info *xdp_sinfo,
> > struct page *page)
> > {
> > struct net_device *dev = pp->dev;
> > @@ -2301,13 +2304,13 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
> > if (data_len > 0 && xdp_sinfo->nr_frags < MAX_SKB_FRAGS) {
> > skb_frag_t *frag = &xdp_sinfo->frags[xdp_sinfo->nr_frags++];
> >
> > - skb_frag_off_set(frag, pp->rx_offset_correction);
> > - skb_frag_size_set(frag, data_len);
> > - __skb_frag_set_page(frag, page);
> > + xdp_set_frag_offset(frag, pp->rx_offset_correction);
> > + xdp_set_frag_size(frag, data_len);
> > + xdp_set_frag_page(frag, page);
> >
>
> why three separate setters? why not just one
> xdp_set_frag(page, offset, size)?
To stay aligned with the skb_frag_* helpers, but I guess we can have a single
helper; I do not have a strong opinion on it.
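Something like this minimal sketch, for illustration only (the xdp_set_frag()
name and signature are my suggestion, not code from this series); it just
mirrors the bv_* fields used by the individual setters below:

	static inline void xdp_set_frag(skb_frag_t *frag, struct page *page,
					u32 offset, u32 size)
	{
		frag->bv_page = page;
		frag->bv_offset = offset;
		frag->bv_len = size;
	}

The three calls in the mvneta hunk above would then collapse into a single
xdp_set_frag(frag, page, pp->rx_offset_correction, data_len).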
>
> > /* last fragment */
> > if (len == *size) {
> > - struct skb_shared_info *sinfo;
> > + struct xdp_shared_info *sinfo;
> >
> > sinfo = xdp_get_shared_info_from_buff(xdp);
> > sinfo->nr_frags = xdp_sinfo->nr_frags;
> > @@ -2324,10 +2327,13 @@ static struct sk_buff *
> > mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > struct xdp_buff *xdp, u32 desc_status)
> > {
[...]
> >
> > -static inline struct skb_shared_info *
> > +struct xdp_shared_info {
>
> xdp_shared_info is a bad name; we need this to have a specific purpose.
> xdp_frags would be the proper name, so people will think twice before
> adding weird bits to this so-called shared_info.
I named the struct xdp_shared_info to recall skb_shared_info, but I guess
xdp_frags is fine too. Agree?
>
> > + u16 nr_frags;
> > + u16 data_length; /* paged area length */
> > + skb_frag_t frags[MAX_SKB_FRAGS];
>
> why MAX_SKB_FRAGS? just use a flexible array member
> skb_frag_t frags[];
>
> and enforce the size via nr_frags and on construction of the
> tailroom-preserved buffer, which is already being done.
>
> this is a waste of space, at least by definition of the
> struct; in your use case you do:
> memcpy(frag_list, xdp_sinfo->frags, sizeof(skb_frag_t) * num_frags);
> And the tailroom space was already preserved for a full skb_shinfo,
> so i don't see why you need this array to be of a fixed MAX_SKB_FRAGS
> size.
In order to avoid cache misses, the xdp_shared_info is built as a variable on
the mvneta_rx_swbm() stack and written to the "shared_info" area only for the
last fragment in mvneta_swbm_add_rx_fragment(). I used MAX_SKB_FRAGS to stay
aligned with the skb_shared_info struct, but we can probably use an even
smaller value.
Another approach would be to define two different structs, e.g.:

	struct xdp_frag_metadata {
		u16 nr_frags;
		u16 data_length; /* paged area length */
	};

	struct xdp_frags {
		skb_frag_t frags[MAX_SKB_FRAGS];
	};

and then define xdp_shared_info as:

	struct xdp_shared_info {
		struct xdp_frag_metadata meta;
		skb_frag_t frags[];
	};

In this way we can probably optimize the space. What do you think?
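For illustration only (stack_sinfo and the surrounding context are
hypothetical, not part of the posted patch), the driver-side flush on the
last fragment could then look roughly like this:

	struct {
		struct xdp_frag_metadata meta;
		struct xdp_frags frags;
	} stack_sinfo = {};

	/* ... stack_sinfo.frags.frags[] filled while processing rx descriptors ... */

	struct xdp_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);

	sinfo->meta = stack_sinfo.meta;	/* nr_frags + paged data_length */
	memcpy(sinfo->frags, stack_sinfo.frags.frags,
	       sizeof(skb_frag_t) * stack_sinfo.meta.nr_frags);

Only the metadata plus the used nr_frags entries would ever be written to the
tailroom, instead of a full MAX_SKB_FRAGS array.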
>
> > +};
> > +
> > +static inline struct xdp_shared_info *
> > xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
> > {
> > - return (struct skb_shared_info *)xdp_data_hard_end(xdp);
> > + BUILD_BUG_ON(sizeof(struct xdp_shared_info) >
> > + sizeof(struct skb_shared_info));
> > + return (struct xdp_shared_info *)xdp_data_hard_end(xdp);
> > +}
> > +
>
> Back to my first comment: do we have plans to use this tailroom buffer
> for anything other than frag_list use cases? What will the buffer format
> be then? Should we push all new fields to the end of the xdp_shared_info
> struct, or treat this tailroom buffer as a stack?
> My main concern is that drivers that don't support frag lists but still
> want to utilize the tailroom buffer for other use cases will have to skip
> the first sizeof(xdp_shared_info) bytes so they won't break the stack.
For the moment I do not know of other uses for this area. Do you think there
are other use cases for it?
>
> > +static inline struct page *xdp_get_frag_page(const skb_frag_t *frag)
> > +{
> > + return frag->bv_page;
> > +}
> > +
> > +static inline unsigned int xdp_get_frag_offset(const skb_frag_t *frag)
> > +{
> > + return frag->bv_offset;
> > +}
> > +
> > +static inline unsigned int xdp_get_frag_size(const skb_frag_t *frag)
> > +{
> > + return frag->bv_len;
> > +}
> > +
> > +static inline void *xdp_get_frag_address(const skb_frag_t *frag)
> > +{
> > + return page_address(xdp_get_frag_page(frag)) +
> > + xdp_get_frag_offset(frag);
> > +}
> > +
> > +static inline void xdp_set_frag_page(skb_frag_t *frag, struct page *page)
> > +{
> > + frag->bv_page = page;
> > +}
> > +
> > +static inline void xdp_set_frag_offset(skb_frag_t *frag, u32 offset)
> > +{
> > + frag->bv_offset = offset;
> > +}
> > +
> > +static inline void xdp_set_frag_size(skb_frag_t *frag, u32 size)
> > +{
> > + frag->bv_len = size;
> > }
> >
> > struct xdp_frame {
> > @@ -120,12 +164,12 @@ static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq)
> > bq->xa = NULL;
> > }
> >
> > -static inline struct skb_shared_info *
> > +static inline struct xdp_shared_info *
> > xdp_get_shared_info_from_frame(struct xdp_frame *frame)
> > {
> > void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
> >
> > - return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
> > + return (struct xdp_shared_info *)(data_hard_start + frame->frame_sz -
> > SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
> > }
> >
>
> need a comment here why we preserve the size of skb_shared_info, yet
> the usable buffer is of type xdp_shared_info.
ack, I will add it in v6.
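Something along these lines, as a suggestion only (wording is mine, not from
the series):

	/*
	 * The tailroom is reserved as
	 * SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) so that an skb built
	 * from this frame can reuse the area as a real skb_shared_info;
	 * struct xdp_shared_info only aliases the beginning of that area and
	 * must never grow past it (see the BUILD_BUG_ON in
	 * xdp_get_shared_info_from_buff()).
	 */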
Regards,
Lorenzo
>