From: sdf@google.com
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: hjm2133@columbia.edu, bpf@vger.kernel.org,
netdev@vger.kernel.org, ppenkov@google.com
Subject: Re: [RFC PATCH bpf-next 0/2] bpf: Implement shared persistent fast(er) sk_storage mode
Date: Tue, 24 Aug 2021 09:03:20 -0700
Message-ID: <YSUYSIYyXmBgKRwr@google.com>
In-Reply-To: <20210824003847.4jlkv2hpx7milwfr@ast-mbp.dhcp.thefacebook.com>
On 08/23, Alexei Starovoitov wrote:
> On Mon, Aug 23, 2021 at 05:52:50PM -0400, Hans Montero wrote:
> > From: Hans Montero <hjm2133@columbia.edu>
> >
> > This patch set adds a BPF local storage optimization. The first patch
> > adds the feature, and the second patch extends the bpf selftests so
> > that the feature is tested.
> >
> > We are running BPF programs for each egress packet and noticed that
> > bpf_sk_storage_get incurs a significant amount of cpu time. By
> > inlining the storage into struct sock and accessing that instead of
> > performing a map lookup, we expect to reduce overhead for our
> > specific use-case.
> Looks like a hack to me. Please share the perf numbers and setup details.
> I think there should be a different way to address performance concerns
> without going into such hacks.
What kind of perf numbers would you like to see? What we see here is
that bpf_sk_storage_get() cycles are somewhere on par with hashtable
lookups (we've moved off of 5-tuple ht lookup to sk_storage). Looking
at the code, it seems the cost mostly comes from the following chain:

  sk_storage = rcu_dereference(sk->sk_bpf_storage);
  sdata = rcu_dereference(sk_storage->cache[smap->cache_idx]);
  return sdata->data;
We do 3 cold-cache references :-( This is where the idea of inlining
something in the socket itself came from. The RFC is just to present
the case and discuss. I was thinking about doing some kind of
inlining at runtime (and fallback to non-inlined case) but wanted
to start with discussing this compile-time option first.
We can also try to save sdata somewhere in the socket to avoid two
lookups for the cached case, this can potentially save us two
rcu_dereference's.
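To make the savings concrete, here's a toy userspace model of the chain (struct and field names are made up for illustration; this is not the actual kernel layout):

```c
#include <assert.h>
#include <stddef.h>

/* Toy userspace model (made-up struct names, not kernel code) of the
 * lookup chain quoted above: sk -> sk_bpf_storage -> cache[idx] -> data.
 * Stashing the sdata pointer in the socket itself would collapse the
 * chain to a single dereference. */

struct sdata { int data; };

struct local_storage {
	struct sdata *cache[16];
};

struct sock_model {
	struct local_storage *sk_bpf_storage;
	struct sdata *cached_sdata; /* hypothetical inlined pointer */
};

/* Current path: two pointer chases before the value is reachable. */
static int get_current(const struct sock_model *sk, int idx)
{
	const struct local_storage *ls = sk->sk_bpf_storage;
	const struct sdata *sd = ls->cache[idx];
	return sd->data;
}

/* Proposed path: one dereference through the cached pointer. */
static int get_cached(const struct sock_model *sk)
{
	return sk->cached_sdata->data;
}
```

In the kernel the cached pointer would of course need the usual RCU and invalidation handling on map update/delete; the sketch only models the dereference count.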
Is that something that looks acceptable? I was wondering whether you've
considered any socket storage optimizations on your side?
I can try to set up some office hours to discuss in person if that's
preferred.
> > This also has a
> > side-effect of persisting the socket storage, which can be beneficial.
> Without explicit opt-in such sharing will cause multiple bpf progs to
> corrupt each other's data.
The new BPF_F_SHARED_LOCAL_STORAGE flag is there to provide exactly this opt-in.
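For illustration, the map side could look something like this (hypothetical sketch: the flag name is from this RFC and is not upstream, its value here is a placeholder, and the declaration assumes libbpf's bpf_helpers.h map macros):

```c
#include <bpf/bpf_helpers.h>

/* Placeholder value for the flag proposed in this series. */
#define BPF_F_SHARED_LOCAL_STORAGE (1U << 10)

struct shared_val {
	__u64 egress_bytes;
};

/* sk_storage map explicitly opting in to the shared/persistent mode,
 * so programs that don't set the flag keep their isolated storage. */
struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC | BPF_F_SHARED_LOCAL_STORAGE);
	__type(key, int);
	__type(value, struct shared_val);
} shared_sk_stor SEC(".maps");
```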
Thread overview: 6+ messages
2021-08-23 21:52 [RFC PATCH bpf-next 0/2] bpf: Implement shared persistent fast(er) sk_storage mode Hans Montero
2021-08-23 21:52 ` [RFC PATCH bpf-next 1/2] bpf: Implement shared sk_storage optimization Hans Montero
2021-08-23 21:52 ` [RFC PATCH bpf-next 2/2] selftests/bpf: Extend tests for shared sk_storage Hans Montero
2021-08-24 0:38 ` [RFC PATCH bpf-next 0/2] bpf: Implement shared persistent fast(er) sk_storage mode Alexei Starovoitov
2021-08-24 16:03 ` sdf [this message]
2021-08-24 22:15 ` Alexei Starovoitov