From: sdf@google.com
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Networking <netdev@vger.kernel.org>, bpf <bpf@vger.kernel.org>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <kafai@fb.com>, Yonghong Song <yhs@fb.com>
Subject: Re: [PATCH bpf-next v3] bpf: increase supported cgroup storage value size
Date: Tue, 27 Jul 2021 13:47:53 -0700
Message-ID: <YQBw+SLUQf0phOik@google.com>
In-Reply-To: <CAEf4BzaLc7rvUPquXnf+qxjrLSkCR21D7hj0HNVACmwNpgZvSw@mail.gmail.com>
On 07/27, Andrii Nakryiko wrote:
> On Mon, Jul 26, 2021 at 4:00 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > Current max cgroup storage value size is 4k (PAGE_SIZE). The other local
> > storages accept up to 64k (BPF_LOCAL_STORAGE_MAX_VALUE_SIZE). Let's align
> > max cgroup value size with the other storages.
> >
> > For percpu, the max is 32k (PCPU_MIN_UNIT_SIZE) because the percpu
> > allocator is not happy about larger values.
> >
> > The netcnt test is extended to exercise those maximum values
> > (the non-percpu max size is close to, but not exactly, the real max).
> >
> > v3:
> > * refine SIZEOF_BPF_LOCAL_STORAGE_ELEM comment (Yonghong Song)
> > * anonymous struct in percpu_net_cnt & net_cnt (Yonghong Song)
> > * reorder free (Yonghong Song)
> >
> > v2:
> > * cap max_value_size instead of BUILD_BUG_ON (Martin KaFai Lau)
> >
> > Cc: Martin KaFai Lau <kafai@fb.com>
> > Cc: Yonghong Song <yhs@fb.com>
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> > kernel/bpf/local_storage.c | 11 +++++-
> > tools/testing/selftests/bpf/netcnt_common.h | 38 +++++++++++++++++----
> > tools/testing/selftests/bpf/test_netcnt.c | 17 ++++++---
> > 3 files changed, 53 insertions(+), 13 deletions(-)
> >
> > diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
> > index 7ed2a14dc0de..035e9e3a7132 100644
> > --- a/kernel/bpf/local_storage.c
> > +++ b/kernel/bpf/local_storage.c
> > @@ -1,6 +1,7 @@
> > //SPDX-License-Identifier: GPL-2.0
> > #include <linux/bpf-cgroup.h>
> > #include <linux/bpf.h>
> > +#include <linux/bpf_local_storage.h>
> > #include <linux/btf.h>
> > #include <linux/bug.h>
> > #include <linux/filter.h>
> > @@ -283,9 +284,17 @@ static int cgroup_storage_get_next_key(struct bpf_map *_map, void *key,
> >
> > static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> > {
> > + __u32 max_value_size = BPF_LOCAL_STORAGE_MAX_VALUE_SIZE;
> > int numa_node = bpf_map_attr_numa_node(attr);
> > struct bpf_cgroup_storage_map *map;
> >
> > + /* percpu is bound by PCPU_MIN_UNIT_SIZE, non-percpu
> > + * is the same as other local storages.
> > + */
> > + if (attr->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE)
> > + max_value_size = min_t(__u32, max_value_size,
> > + PCPU_MIN_UNIT_SIZE);
> > +
> > if (attr->key_size != sizeof(struct bpf_cgroup_storage_key) &&
> > attr->key_size != sizeof(__u64))
> > return ERR_PTR(-EINVAL);
> > @@ -293,7 +302,7 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
> > if (attr->value_size == 0)
> > return ERR_PTR(-EINVAL);
> >
> > - if (attr->value_size > PAGE_SIZE)
> > + if (attr->value_size > max_value_size)
> > return ERR_PTR(-E2BIG);
> >
> > if (attr->map_flags & ~LOCAL_STORAGE_CREATE_FLAG_MASK ||
> > diff --git a/tools/testing/selftests/bpf/netcnt_common.h b/tools/testing/selftests/bpf/netcnt_common.h
> > index 81084c1c2c23..87f5b97e1932 100644
> > --- a/tools/testing/selftests/bpf/netcnt_common.h
> > +++ b/tools/testing/selftests/bpf/netcnt_common.h
> > @@ -6,19 +6,43 @@
> >
> > #define MAX_PERCPU_PACKETS 32
> >
> > +/* sizeof(struct bpf_local_storage_elem):
> > + *
> > + * It really is about 128 bytes on x86_64, but allocate more to account for
> > + * possible layout changes, different architectures, etc.
> > + * The kernel will wrap up to PAGE_SIZE internally anyway.
> > + */
> > +#define SIZEOF_BPF_LOCAL_STORAGE_ELEM 256
> > +
> > +/* Try to estimate kernel's BPF_LOCAL_STORAGE_MAX_VALUE_SIZE: */
> > +#define BPF_LOCAL_STORAGE_MAX_VALUE_SIZE (0xFFFF - \
> > +                                          SIZEOF_BPF_LOCAL_STORAGE_ELEM)
> > +
> > +#define PCPU_MIN_UNIT_SIZE 32768
> > +
> > struct percpu_net_cnt {
> > - __u64 packets;
> > - __u64 bytes;
> > + union {
> so you have a struct with a single anonymous union inside, isn't that
> right? Any problems with just making struct percpu_net_cnt into union
> percpu_net_cnt?
We'd have to s/struct/union/ everywhere in that case; I'm not sure
we want to add more churn. It seemed easier to use an anonymous union+struct.
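
For reference, a minimal sketch of the two layouts (illustrative only, not
part of the patch): member access is identical either way; the difference
is which tag appears at every declaration site.

    /* Option A (this patch): keep the struct tag and wrap the fields in
     * an anonymous union, so existing 'struct net_cnt' users compile
     * unchanged.
     */
    struct net_cnt {
            union {
                    struct {
                            __u64 packets;
                            __u64 bytes;
                    };
                    __u8 data[BPF_LOCAL_STORAGE_MAX_VALUE_SIZE];
            };
    };

    /* Option B (your suggestion): a plain union with the same layout and
     * the same 'cnt.packets' access, but every 'struct net_cnt' user
     * would have to become 'union net_cnt'.
     */
    union net_cnt {
            struct {
                    __u64 packets;
                    __u64 bytes;
            };
            __u8 data[BPF_LOCAL_STORAGE_MAX_VALUE_SIZE];
    };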
> > + struct {
> > + __u64 packets;
> > + __u64 bytes;
> >
> > - __u64 prev_ts;
> > + __u64 prev_ts;
> >
> > - __u64 prev_packets;
> > - __u64 prev_bytes;
> > + __u64 prev_packets;
> > + __u64 prev_bytes;
> > + };
> > + __u8 data[PCPU_MIN_UNIT_SIZE];
> > + };
> > };
> >
> > struct net_cnt {
> > - __u64 packets;
> > - __u64 bytes;
> > + union {
> similarly here
> > + struct {
> > + __u64 packets;
> > + __u64 bytes;
> > + };
> > + __u8 data[BPF_LOCAL_STORAGE_MAX_VALUE_SIZE];
> > + };
> > };
> >
> > #endif
> > diff --git a/tools/testing/selftests/bpf/test_netcnt.c b/tools/testing/selftests/bpf/test_netcnt.c
> > index a7b9a69f4fd5..372afccf2d17 100644
> > --- a/tools/testing/selftests/bpf/test_netcnt.c
> > +++ b/tools/testing/selftests/bpf/test_netcnt.c
> > @@ -33,11 +33,11 @@ static int bpf_find_map(const char *test, struct bpf_object *obj,
> >
> > int main(int argc, char **argv)
> > {
> > - struct percpu_net_cnt *percpu_netcnt;
> > + struct percpu_net_cnt *percpu_netcnt = NULL;
> > struct bpf_cgroup_storage_key key;
> > + struct net_cnt *netcnt = NULL;
> > int map_fd, percpu_map_fd;
> > int error = EXIT_FAILURE;
> > - struct net_cnt netcnt;
> > struct bpf_object *obj;
> > int prog_fd, cgroup_fd;
> > unsigned long packets;
> > @@ -52,6 +52,12 @@ int main(int argc, char **argv)
> > goto err;
> > }
> >
> > + netcnt = malloc(sizeof(*netcnt));
> curious, was it too big to be just allocated on the stack? Isn't the
> thread stack size much bigger than 64KB (at least by default)?
I haven't really tried; I moved it to malloc because it crossed some
unconscious threshold for 'stuff I allocate on the stack'.
I can try it out if you'd prefer to keep it on the stack, let me know.
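
For context, here's a quick standalone check comparing the struct size
against the stack limit (an illustrative sketch, not part of the patch;
it assumes netcnt_common.h pulls in linux/types.h for __u64/__u8). The
soft RLIMIT_STACK default is usually 8 MiB, so the ~64k struct should
indeed fit:

    #include <stdio.h>
    #include <sys/resource.h>

    #include "netcnt_common.h"

    int main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_STACK, &rl))
                    return 1;

            /* sizeof(struct net_cnt) is ~64k after this patch; compare
             * it against the soft stack limit (typically 8 MiB).
             */
            printf("sizeof(struct net_cnt) = %zu\n",
                   sizeof(struct net_cnt));
            printf("RLIMIT_STACK (soft) = %llu\n",
                   (unsigned long long)rl.rlim_cur);
            return 0;
    }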