* [PATCH bpf-next v3 1/6] bpf: Avoid unnecessary extra percpu memory allocation
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 2:30 ` [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
` (4 subsequent siblings)
5 siblings, 0 replies; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau, Hou Tao
Currently, for percpu memory allocation, if the user requests
an allocation size of, say, 32 bytes, the internally calculated
size becomes 40 bytes, which is then rounded up to the 64-byte
bucket, so 64 bytes end up being allocated and 32 bytes are
wasted.
Change bpf_mem_alloc() to calculate the cache index based on
the user-provided allocation size for percpu allocations, so the
unnecessary extra memory can be avoided.
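To make the rounding concrete, here is a minimal user-space sketch of the
bucket selection. It is not the kernel's bpf_mem_cache_idx() implementation,
only an approximation of its effect; LLIST_NODE_SZ is assumed to be 8 bytes
(sizeof(struct llist_node) on 64-bit).

/* sketch only: approximate the effect of bpf_mem_cache_idx() */
#include <stdio.h>

static const unsigned int buckets[] = {
	16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096
};

static int bucket_for(unsigned int size)
{
	unsigned int i;

	for (i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++)
		if (size <= buckets[i])
			return buckets[i];
	return -1;	/* larger than 4096: not served by the caches */
}

int main(void)
{
	unsigned int req = 32, llist_node_sz = 8;

	/* before: percpu requests also added llist_node_sz -> 64-byte bucket */
	printf("old bucket: %d\n", bucket_for(req + llist_node_sz));
	/* after: percpu requests use the user-provided size -> 32-byte bucket */
	printf("new bucket: %d\n", bucket_for(req));
	return 0;
}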
Suggested-by: Hou Tao <houtao1@huawei.com>
Acked-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/memalloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 6a51cfe4c2d6..00e101c2a68b 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -871,7 +871,9 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
if (!size)
return ZERO_SIZE_PTR;
- idx = bpf_mem_cache_idx(size + LLIST_NODE_SZ);
+ if (!ma->percpu)
+ size += LLIST_NODE_SZ;
+ idx = bpf_mem_cache_idx(size);
if (idx < 0)
return NULL;
--
2.34.1
* [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
2023-12-16 2:30 ` [PATCH bpf-next v3 1/6] bpf: Avoid unnecessary extra percpu memory allocation Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 3:12 ` Hou Tao
2023-12-16 15:27 ` kernel test robot
2023-12-16 2:30 ` [PATCH bpf-next v3 3/6] bpf: Refill only one percpu element in memalloc Yonghong Song
` (3 subsequent siblings)
5 siblings, 2 replies; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Commit 41a5db8d8161 ("Add support for non-fix-size percpu mem allocation")
added support for non-fix-size percpu memory allocation.
Such an allocation will allocate percpu memory for all buckets on all
cpus, and the memory consumption is quadratic in the number of cpus.
For example, with 4 cpus and a unit size of 16 bytes, each allocated
object occupies 16 * 4 = 64 bytes across the cpus; since each of the
4 cpus fills such an object into its cache, the total is 64 * 4 = 256 bytes.
With 8 cpus and the same unit size, each object occupies 16 * 8 = 128 bytes,
and with 8 cpus the total becomes 128 * 8 = 1024 bytes.
So if the number of cpus doubles, the memory consumption quadruples,
and for a system with a large number of cpus, the memory consumption
goes up quickly in quadratic order.
For example, for a 4KB percpu allocation on 128 cpus, the total memory
consumption will be 4KB * 128 * 128 = 64MB. Things will become
worse if the number of cpus is bigger (e.g., 512, 1024, etc.)
In Commit 41a5db8d8161, the non-fix-size percpu memory allocation is
done at boot time, so for a system with a large number of cpus, the
initial percpu memory consumption is very visible. For example, for a
128-cpu system, the total percpu memory allocation will be at least
(16 + 32 + 64 + 96 + 128 + 192 + 256 + 512 + 1024 + 2048 + 4096)
* 128 * 128 = ~138MB,
which is pretty big. It will be even bigger for a larger number of cpus.
Note that the current prefill also allocates 4 entries if the unit size
is no greater than 256. So on top of the 138MB memory consumption, this
adds roughly
3 * (16 + 32 + 64 + 96 + 128 + 192 + 256) * 128 * 128 = ~38MB.
The next patch will try to reduce this memory consumption.
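As a sanity check on those figures, a short user-space sketch reproduces the
arithmetic; the bucket sizes mirror the allocator's sizes[] table and 128
cpus is just the example configuration.

/* sketch only: reproduce the ~138MB and ~38MB estimates above */
#include <stdio.h>

int main(void)
{
	const unsigned long long sizes[] = {
		16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096
	};
	const unsigned long long ncpus = 128;
	unsigned long long base = 0, prefill_extra = 0;
	int i;

	for (i = 0; i < 11; i++) {
		/* every bucket is filled on every cpu, and each percpu object
		 * itself consumes unit_size bytes on every cpu
		 */
		base += sizes[i] * ncpus * ncpus;
		/* prefill allocates 4 objects (not 1) for unit sizes <= 256 */
		if (sizes[i] <= 256)
			prefill_extra += 3 * sizes[i] * ncpus * ncpus;
	}
	/* base is ~138MB (132MiB), prefill extra is ~38MB (36MiB) */
	printf("base: %llu bytes, prefill extra: %llu bytes\n", base, prefill_extra);
	return 0;
}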
Later on, Commit 1fda5bb66ad8 ("bpf: Do not allocate percpu memory
at init stage") moved the non-fix-size percpu memory allocation to
the bpf verification stage. Once a particular bpf_percpu_obj_new()
is used by a bpf program, the memory allocator will try to fill in
the cache with all sizes, causing the same amount of percpu memory
consumption as in the boot stage.
To reduce the initial percpu memory consumption for non-fix-size
percpu memory allocation, instead of filling the cache with all
supported allocation sizes, this patch fills the cache only for the
requested size. As users typically do not use large percpu data
structures, this can save memory significantly.
For example, with an allocation size of 64 bytes and 128 cpus, the
total percpu memory amount will be 64 * 128 * 128 = 1MB, much less
than the previous 138MB.
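A rough sketch of the resulting flow, using the two helpers added by this
patch; the wrapper function names below are made up for illustration and
error handling is trimmed.

/* sketch: kernel-side usage of the two new helpers (not verbatim kernel code) */

/* boot time: only allocate the empty percpu cache array, no prefill yet */
static int __init global_percpu_ma_setup(void)
{
	return bpf_mem_alloc_percpu_init(&bpf_global_percpu_ma);
}

/* verification time: when the verifier sees bpf_percpu_obj_new() for a
 * type of size 'type_size', prefill only the matching bucket
 */
static int percpu_obj_new_seen(int type_size)
{
	int err;

	mutex_lock(&bpf_percpu_ma_lock);
	err = bpf_mem_alloc_percpu_unit_init(&bpf_global_percpu_ma, type_size);
	mutex_unlock(&bpf_percpu_ma_lock);
	return err;
}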
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf.h | 2 +-
include/linux/bpf_mem_alloc.h | 7 ++++
kernel/bpf/core.c | 8 +++--
kernel/bpf/memalloc.c | 67 ++++++++++++++++++++++++++++++++++-
kernel/bpf/verifier.c | 28 ++++++---------
5 files changed, 90 insertions(+), 22 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5e694934cf37..bd32274561e3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -61,7 +61,7 @@ extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
extern struct kobject *btf_kobj;
extern struct bpf_mem_alloc bpf_global_ma, bpf_global_percpu_ma;
-extern bool bpf_global_ma_set;
+extern bool bpf_global_ma_set, bpf_global_percpu_ma_set;
typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index bb1223b21308..43e635c67150 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -21,8 +21,15 @@ struct bpf_mem_alloc {
* 'size = 0' is for bpf_mem_alloc which manages many fixed-size objects.
* Alloc and free are done with bpf_mem_{alloc,free}() and the size of
* the returned object is given by the size argument of bpf_mem_alloc().
+ * If percpu equals true, error will be returned in order to avoid
+ * large memory consumption and the below bpf_mem_alloc_percpu_unit_init()
+ * should be used to do on-demand per-cpu allocation for each size.
*/
int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu);
+/* Initialize a non-fix-size percpu memory allocator */
+int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma);
+/* The percpu allocation with a specific unit size. */
+int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size);
void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);
/* kmalloc/kfree equivalent: */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5aa6863ac33b..bc93eb7e00c7 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -64,8 +64,8 @@
#define OFF insn->off
#define IMM insn->imm
-struct bpf_mem_alloc bpf_global_ma;
-bool bpf_global_ma_set;
+struct bpf_mem_alloc bpf_global_ma, bpf_global_percpu_ma;
+bool bpf_global_ma_set, bpf_global_percpu_ma_set;
/* No hurry in this branch
*
@@ -2963,7 +2963,9 @@ static int __init bpf_global_ma_init(void)
ret = bpf_mem_alloc_init(&bpf_global_ma, 0, false);
bpf_global_ma_set = !ret;
- return ret;
+ ret = bpf_mem_alloc_percpu_init(&bpf_global_percpu_ma);
+ bpf_global_percpu_ma_set = !ret;
+ return !bpf_global_ma_set || !bpf_global_percpu_ma_set;
}
late_initcall(bpf_global_ma_init);
#endif
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 00e101c2a68b..30e347fccc6a 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -121,6 +121,8 @@ struct bpf_mem_caches {
struct bpf_mem_cache cache[NUM_CACHES];
};
+static const u16 sizes[NUM_CACHES] = {96, 192, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
+
static struct llist_node notrace *__llist_del_first(struct llist_head *head)
{
struct llist_node *entry, *next;
@@ -520,12 +522,14 @@ static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
*/
int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
{
- static u16 sizes[NUM_CACHES] = {96, 192, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
int cpu, i, err, unit_size, percpu_size = 0;
struct bpf_mem_caches *cc, __percpu *pcc;
struct bpf_mem_cache *c, __percpu *pc;
struct obj_cgroup *objcg = NULL;
+ if (percpu && size == 0)
+ return -EINVAL;
+
/* room for llist_node and per-cpu pointer */
if (percpu)
percpu_size = LLIST_NODE_SZ + sizeof(void *);
@@ -618,6 +622,67 @@ static void drain_mem_cache(struct bpf_mem_cache *c)
free_all(llist_del_all(&c->waiting_for_gp), percpu);
}
+__init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
+{
+ struct bpf_mem_caches __percpu *pcc;
+
+ pcc = __alloc_percpu_gfp(sizeof(struct bpf_mem_caches), 8, GFP_KERNEL);
+ if (!pcc)
+ return -ENOMEM;
+
+ ma->caches = pcc;
+ ma->percpu = true;
+ return 0;
+}
+
+int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
+{
+ int cpu, i, err = 0, unit_size, percpu_size;
+ struct bpf_mem_caches *cc, __percpu *pcc;
+ struct obj_cgroup *objcg;
+ struct bpf_mem_cache *c;
+
+ i = bpf_mem_cache_idx(size);
+ if (i < 0)
+ return -EINVAL;
+
+ /* room for llist_node and per-cpu pointer */
+ percpu_size = LLIST_NODE_SZ + sizeof(void *);
+
+ pcc = ma->caches;
+ unit_size = sizes[i];
+
+#ifdef CONFIG_MEMCG_KMEM
+ objcg = get_obj_cgroup_from_current();
+#endif
+ for_each_possible_cpu(cpu) {
+ cc = per_cpu_ptr(pcc, cpu);
+ c = &cc->cache[i];
+ if (cpu == 0 && c->unit_size)
+ goto out;
+
+ c->unit_size = unit_size;
+ c->objcg = objcg;
+ c->percpu_size = percpu_size;
+ c->tgt = c;
+
+ init_refill_work(c);
+ prefill_mem_cache(c, cpu);
+
+ if (cpu == 0) {
+ err = check_obj_size(c, i);
+ if (err) {
+ drain_mem_cache(c);
+ memset(c, 0, sizeof(*c));
+ goto out;
+ }
+ }
+ }
+
+out:
+ return err;
+}
+
static void check_mem_cache(struct bpf_mem_cache *c)
{
WARN_ON_ONCE(!llist_empty(&c->free_by_rcu_ttrace));
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1863826a4ac3..ce62ee0cc8f6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -42,9 +42,6 @@ static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
#undef BPF_LINK_TYPE
};
-struct bpf_mem_alloc bpf_global_percpu_ma;
-static bool bpf_global_percpu_ma_set;
-
/* bpf_check() is a static code analyzer that walks eBPF program
* instruction by instruction and updates register/stack state.
* All paths of conditional branches are analyzed until 'bpf_exit' insn.
@@ -12062,20 +12059,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl] && !bpf_global_ma_set)
return -ENOMEM;
- if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
- if (!bpf_global_percpu_ma_set) {
- mutex_lock(&bpf_percpu_ma_lock);
- if (!bpf_global_percpu_ma_set) {
- err = bpf_mem_alloc_init(&bpf_global_percpu_ma, 0, true);
- if (!err)
- bpf_global_percpu_ma_set = true;
- }
- mutex_unlock(&bpf_percpu_ma_lock);
- if (err)
- return err;
- }
- }
-
if (((u64)(u32)meta.arg_constant.value) != meta.arg_constant.value) {
verbose(env, "local type ID argument must be in range [0, U32_MAX]\n");
return -EINVAL;
@@ -12096,6 +12079,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return -EINVAL;
}
+ if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
+ if (!bpf_global_percpu_ma_set)
+ return -ENOMEM;
+
+ mutex_lock(&bpf_percpu_ma_lock);
+ err = bpf_mem_alloc_percpu_unit_init(&bpf_global_percpu_ma, ret_t->size);
+ mutex_unlock(&bpf_percpu_ma_lock);
+ if (err)
+ return err;
+ }
+
struct_meta = btf_find_struct_meta(ret_btf, ret_btf_id);
if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
if (!__btf_type_is_scalar_struct(env, ret_btf, ret_t, 0)) {
--
2.34.1
* Re: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-16 2:30 ` [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
@ 2023-12-16 3:12 ` Hou Tao
2023-12-17 7:11 ` Yonghong Song
2023-12-16 15:27 ` kernel test robot
1 sibling, 1 reply; 16+ messages in thread
From: Hou Tao @ 2023-12-16 3:12 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Hi,
On 12/16/2023 10:30 AM, Yonghong Song wrote:
> Commit 41a5db8d8161 ("Add support for non-fix-size percpu mem allocation")
> added support for non-fix-size percpu memory allocation.
> Such allocation will allocate percpu memory for all buckets on all
> cpus and the memory consumption is in the order to quadratic.
> For example, let us say, 4 cpus, unit size 16 bytes, so each
> cpu has 16 * 4 = 64 bytes, with 4 cpus, total will be 64 * 4 = 256 bytes.
> Then let us say, 8 cpus with the same unit size, each cpu
> has 16 * 8 = 128 bytes, with 8 cpus, total will be 128 * 8 = 1024 bytes.
> So if the number of cpus doubles, the number of memory consumption
> will be 4 times. So for a system with large number of cpus, the
> memory consumption goes up quickly with quadratic order.
> For example, for 4KB percpu allocation, 128 cpus. The total memory
> consumption will 4KB * 128 * 128 = 64MB. Things will become
> worse if the number of cpus is bigger (e.g., 512, 1024, etc.)
SNIP
> +__init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
> +{
> + struct bpf_mem_caches __percpu *pcc;
> +
> + pcc = __alloc_percpu_gfp(sizeof(struct bpf_mem_caches), 8, GFP_KERNEL);
> + if (!pcc)
> + return -ENOMEM;
> +
> + ma->caches = pcc;
> + ma->percpu = true;
> + return 0;
> +}
> +
> +int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
> +{
> + int cpu, i, err = 0, unit_size, percpu_size;
> + struct bpf_mem_caches *cc, __percpu *pcc;
> + struct obj_cgroup *objcg;
> + struct bpf_mem_cache *c;
> +
> + i = bpf_mem_cache_idx(size);
> + if (i < 0)
> + return -EINVAL;
> +
> + /* room for llist_node and per-cpu pointer */
> + percpu_size = LLIST_NODE_SZ + sizeof(void *);
> +
> + pcc = ma->caches;
> + unit_size = sizes[i];
> +
> +#ifdef CONFIG_MEMCG_KMEM
> + objcg = get_obj_cgroup_from_current();
> +#endif
For bpf_global_percpu_ma, we also need to account the allocated memory
to the root memory cgroup just like bpf_global_ma does, don't we? So it
seems that we need to initialize c->objcg early, in bpf_mem_alloc_percpu_init().
> + for_each_possible_cpu(cpu) {
> + cc = per_cpu_ptr(pcc, cpu);
> + c = &cc->cache[i];
> + if (cpu == 0 && c->unit_size)
> + goto out;
> +
> + c->unit_size = unit_size;
> + c->objcg = objcg;
> + c->percpu_size = percpu_size;
> + c->tgt = c;
> +
> + init_refill_work(c);
> + prefill_mem_cache(c, cpu);
> +
> + if (cpu == 0) {
> + err = check_obj_size(c, i);
> + if (err) {
> + drain_mem_cache(c);
> + memset(c, 0, sizeof(*c));
I also forgot about c->objcg. objcg may be leaked if we do memset() here.
> + goto out;
> + }
> + }
> + }
> +
> +out:
> + return err;
> +}
> +
>
.
* Re: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-16 3:12 ` Hou Tao
@ 2023-12-17 7:11 ` Yonghong Song
2023-12-17 17:21 ` Yonghong Song
0 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-17 7:11 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
On 12/15/23 7:12 PM, Hou Tao wrote:
> Hi,
>
> On 12/16/2023 10:30 AM, Yonghong Song wrote:
>> Commit 41a5db8d8161 ("Add support for non-fix-size percpu mem allocation")
>> added support for non-fix-size percpu memory allocation.
>> Such allocation will allocate percpu memory for all buckets on all
>> cpus and the memory consumption is in the order to quadratic.
>> For example, let us say, 4 cpus, unit size 16 bytes, so each
>> cpu has 16 * 4 = 64 bytes, with 4 cpus, total will be 64 * 4 = 256 bytes.
>> Then let us say, 8 cpus with the same unit size, each cpu
>> has 16 * 8 = 128 bytes, with 8 cpus, total will be 128 * 8 = 1024 bytes.
>> So if the number of cpus doubles, the number of memory consumption
>> will be 4 times. So for a system with large number of cpus, the
>> memory consumption goes up quickly with quadratic order.
>> For example, for 4KB percpu allocation, 128 cpus. The total memory
>> consumption will 4KB * 128 * 128 = 64MB. Things will become
>> worse if the number of cpus is bigger (e.g., 512, 1024, etc.)
> SNIP
>> +__init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
>> +{
>> + struct bpf_mem_caches __percpu *pcc;
>> +
>> + pcc = __alloc_percpu_gfp(sizeof(struct bpf_mem_caches), 8, GFP_KERNEL);
>> + if (!pcc)
>> + return -ENOMEM;
>> +
>> + ma->caches = pcc;
>> + ma->percpu = true;
>> + return 0;
>> +}
>> +
>> +int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
>> +{
>> + int cpu, i, err = 0, unit_size, percpu_size;
>> + struct bpf_mem_caches *cc, __percpu *pcc;
>> + struct obj_cgroup *objcg;
>> + struct bpf_mem_cache *c;
>> +
>> + i = bpf_mem_cache_idx(size);
>> + if (i < 0)
>> + return -EINVAL;
>> +
>> + /* room for llist_node and per-cpu pointer */
>> + percpu_size = LLIST_NODE_SZ + sizeof(void *);
>> +
>> + pcc = ma->caches;
>> + unit_size = sizes[i];
>> +
>> +#ifdef CONFIG_MEMCG_KMEM
>> + objcg = get_obj_cgroup_from_current();
>> +#endif
> For bpf_global_percpu_ma, we also need to account the allocated memory
> to root memory cgroup just like bpf_global_ma did, do we ? So it seems
> that we need to initialize c->objcg early in bpf_mem_alloc_percpu_init ().
Good point. Agree. The original behavior of percpu non-fix-size mem
allocation is to do get_obj_cgroup_from_current() at the init stage
and charge to the root memory cgroup, and we indeed should move the
above into bpf_mem_alloc_percpu_init().
>> + for_each_possible_cpu(cpu) {
>> + cc = per_cpu_ptr(pcc, cpu);
>> + c = &cc->cache[i];
>> + if (cpu == 0 && c->unit_size)
>> + goto out;
>> +
>> + c->unit_size = unit_size;
>> + c->objcg = objcg;
>> + c->percpu_size = percpu_size;
>> + c->tgt = c;
>> +
>> + init_refill_work(c);
>> + prefill_mem_cache(c, cpu);
>> +
>> + if (cpu == 0) {
>> + err = check_obj_size(c, i);
>> + if (err) {
>> + drain_mem_cache(c);
>> + memset(c, 0, sizeof(*c));
> I also forgot about c->objcg. objcg may be leaked if we do memset() here.
The objcg gets a reference at the bpf_mem_alloc_init() stage
and is released at bpf_mem_alloc_destroy(). For bpf_global_ma,
if there is a failure, indeed bpf_mem_alloc_destroy() will be
called and the c->objcg reference will be released.
So if we move get_obj_cgroup_from_current() to the
bpf_mem_alloc_percpu_init() stage, we should be okay here.
BTW, is check_obj_size() really necessary here? My answer is no
since, as you mentioned, the size->cache_index mapping is pretty
stable, so check_obj_size() should not return an error in such cases.
What do you think?
>> + goto out;
>> + }
>> + }
>> + }
>> +
>> +out:
>> + return err;
>> +}
>> +
>>
> .
>
* Re: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-17 7:11 ` Yonghong Song
@ 2023-12-17 17:21 ` Yonghong Song
2023-12-18 1:15 ` Hou Tao
0 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-17 17:21 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
On 12/16/23 11:11 PM, Yonghong Song wrote:
>
> On 12/15/23 7:12 PM, Hou Tao wrote:
>> Hi,
>>
>> On 12/16/2023 10:30 AM, Yonghong Song wrote:
>>> Commit 41a5db8d8161 ("Add support for non-fix-size percpu mem
>>> allocation")
>>> added support for non-fix-size percpu memory allocation.
>>> Such allocation will allocate percpu memory for all buckets on all
>>> cpus and the memory consumption is in the order to quadratic.
>>> For example, let us say, 4 cpus, unit size 16 bytes, so each
>>> cpu has 16 * 4 = 64 bytes, with 4 cpus, total will be 64 * 4 = 256
>>> bytes.
>>> Then let us say, 8 cpus with the same unit size, each cpu
>>> has 16 * 8 = 128 bytes, with 8 cpus, total will be 128 * 8 = 1024
>>> bytes.
>>> So if the number of cpus doubles, the number of memory consumption
>>> will be 4 times. So for a system with large number of cpus, the
>>> memory consumption goes up quickly with quadratic order.
>>> For example, for 4KB percpu allocation, 128 cpus. The total memory
>>> consumption will 4KB * 128 * 128 = 64MB. Things will become
>>> worse if the number of cpus is bigger (e.g., 512, 1024, etc.)
>> SNIP
>>> +__init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
>>> +{
>>> + struct bpf_mem_caches __percpu *pcc;
>>> +
>>> + pcc = __alloc_percpu_gfp(sizeof(struct bpf_mem_caches), 8,
>>> GFP_KERNEL);
>>> + if (!pcc)
>>> + return -ENOMEM;
>>> +
>>> + ma->caches = pcc;
>>> + ma->percpu = true;
>>> + return 0;
>>> +}
>>> +
>>> +int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
>>> +{
>>> + int cpu, i, err = 0, unit_size, percpu_size;
>>> + struct bpf_mem_caches *cc, __percpu *pcc;
>>> + struct obj_cgroup *objcg;
>>> + struct bpf_mem_cache *c;
>>> +
>>> + i = bpf_mem_cache_idx(size);
>>> + if (i < 0)
>>> + return -EINVAL;
>>> +
>>> + /* room for llist_node and per-cpu pointer */
>>> + percpu_size = LLIST_NODE_SZ + sizeof(void *);
>>> +
>>> + pcc = ma->caches;
>>> + unit_size = sizes[i];
>>> +
>>> +#ifdef CONFIG_MEMCG_KMEM
>>> + objcg = get_obj_cgroup_from_current();
>>> +#endif
>> For bpf_global_percpu_ma, we also need to account the allocated memory
>> to root memory cgroup just like bpf_global_ma did, do we ? So it seems
>> that we need to initialize c->objcg early in
>> bpf_mem_alloc_percpu_init ().
>
> Good point. Agree. the original behavior percpu non-fix-size mem
> allocation is to do get_obj_cgroup_from_current() at init stage
> and charge to root memory cgroup, and we indeed should move
> the above bpf_mem_alloc_percpu_init().
>
>>> + for_each_possible_cpu(cpu) {
>>> + cc = per_cpu_ptr(pcc, cpu);
>>> + c = &cc->cache[i];
>>> + if (cpu == 0 && c->unit_size)
>>> + goto out;
>>> +
>>> + c->unit_size = unit_size;
>>> + c->objcg = objcg;
>>> + c->percpu_size = percpu_size;
>>> + c->tgt = c;
>>> +
>>> + init_refill_work(c);
>>> + prefill_mem_cache(c, cpu);
>>> +
>>> + if (cpu == 0) {
>>> + err = check_obj_size(c, i);
>>> + if (err) {
>>> + drain_mem_cache(c);
>>> + memset(c, 0, sizeof(*c));
>> I also forgot about c->objcg. objcg may be leaked if we do memset()
>> here.
>
> The objcg gets a reference at init bpf_mem_alloc_init() stage
> and released at bpf_mem_alloc_destroy(). For bpf_global_ma,
> if there is a failure, indeed bpf_mem_alloc_destroy() will be
> called and the reference c->objcg will be released.
>
> So if we move get_obj_cgroup_from_current() to
> bpf_mem_alloc_percpu_init() stage, we should be okay here.
>
> BTW, is check_obj_size() really necessary here? My answer is no
> since as you mentioned, the size->cache_index is pretty stable,
> so check_obj_size() should not return error in such cases.
> What do you think?
How about the following change on top of this patch?
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index 43e635c67150..d1403204379e 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -11,6 +11,7 @@ struct bpf_mem_caches;
struct bpf_mem_alloc {
struct bpf_mem_caches __percpu *caches;
struct bpf_mem_cache __percpu *cache;
+ struct obj_cgroup *objcg;
bool percpu;
struct work_struct work;
};
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 5cf2738c20a9..6486da4ba097 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -553,6 +553,8 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
if (memcg_bpf_enabled())
objcg = get_obj_cgroup_from_current();
#endif
+ ma->objcg = objcg;
+
for_each_possible_cpu(cpu) {
c = per_cpu_ptr(pc, cpu);
c->unit_size = unit_size;
@@ -573,6 +575,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
#ifdef CONFIG_MEMCG_KMEM
objcg = get_obj_cgroup_from_current();
#endif
+ ma->objcg = objcg;
for_each_possible_cpu(cpu) {
cc = per_cpu_ptr(pcc, cpu);
for (i = 0; i < NUM_CACHES; i++) {
@@ -637,6 +640,12 @@ __init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
ma->caches = pcc;
ma->percpu = true;
+
+#ifdef CONFIG_MEMCG_KMEM
+ ma->objcg = get_obj_cgroup_from_current();
+#else
+ ma->objcg = NULL;
+#endif
return 0;
}
@@ -656,10 +665,8 @@ int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
pcc = ma->caches;
unit_size = sizes[i];
+ objcg = ma->objcg;
-#ifdef CONFIG_MEMCG_KMEM
- objcg = get_obj_cgroup_from_current();
-#endif
for_each_possible_cpu(cpu) {
cc = per_cpu_ptr(pcc, cpu);
c = &cc->cache[i];
@@ -799,9 +806,8 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
rcu_in_progress += atomic_read(&c->call_rcu_ttrace_in_progress);
rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
}
- /* objcg is the same across cpus */
- if (c->objcg)
- obj_cgroup_put(c->objcg);
+ if (ma->objcg)
+ obj_cgroup_put(ma->objcg);
destroy_mem_alloc(ma, rcu_in_progress);
}
if (ma->caches) {
@@ -817,8 +823,8 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
}
}
- if (c->objcg)
- obj_cgroup_put(c->objcg);
+ if (ma->objcg)
+ obj_cgroup_put(ma->objcg);
destroy_mem_alloc(ma, rcu_in_progress);
}
}
I still think check_obj_size for percpu allocation is not needed.
But I guess we can address that issue later on.
>
>>> + goto out;
>>> + }
>>> + }
>>> + }
>>> +
>>> +out:
>>> + return err;
>>> +}
>>> +
>> .
>>
>
* Re: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-17 17:21 ` Yonghong Song
@ 2023-12-18 1:15 ` Hou Tao
0 siblings, 0 replies; 16+ messages in thread
From: Hou Tao @ 2023-12-18 1:15 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Hi,
On 12/18/2023 1:21 AM, Yonghong Song wrote:
>
> On 12/16/23 11:11 PM, Yonghong Song wrote:
>>
>> On 12/15/23 7:12 PM, Hou Tao wrote:
>>> Hi,
>>>
>>> On 12/16/2023 10:30 AM, Yonghong Song wrote:
>>>> Commit 41a5db8d8161 ("Add support for non-fix-size percpu mem
>>>> allocation")
>>>> added support for non-fix-size percpu memory allocation.
>>>> Such allocation will allocate percpu memory for all buckets on all
>>>> cpus and the memory consumption is in the order to quadratic.
>>>> For example, let us say, 4 cpus, unit size 16 bytes, so each
>>>> cpu has 16 * 4 = 64 bytes, with 4 cpus, total will be 64 * 4 = 256
>>>> bytes.
>>>> Then let us say, 8 cpus with the same unit size, each cpu
>>>> has 16 * 8 = 128 bytes, with 8 cpus, total will be 128 * 8 = 1024
>>>> bytes.
>>>> So if the number of cpus doubles, the number of memory consumption
>>>> will be 4 times. So for a system with large number of cpus, the
>>>> memory consumption goes up quickly with quadratic order.
>>>> For example, for 4KB percpu allocation, 128 cpus. The total memory
>>>> consumption will 4KB * 128 * 128 = 64MB. Things will become
>>>> worse if the number of cpus is bigger (e.g., 512, 1024, etc.)
>>> SNIP
>>>> +__init int bpf_mem_alloc_percpu_init(struct bpf_mem_alloc *ma)
>>>> +{
>>>> + struct bpf_mem_caches __percpu *pcc;
>>>> +
>>>> + pcc = __alloc_percpu_gfp(sizeof(struct bpf_mem_caches), 8,
>>>> GFP_KERNEL);
>>>> + if (!pcc)
>>>> + return -ENOMEM;
>>>> +
>>>> + ma->caches = pcc;
>>>> + ma->percpu = true;
>>>> + return 0;
>>>> +}
>>>> +
>>>> +int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int
>>>> size)
>>>> +{
>>>> + int cpu, i, err = 0, unit_size, percpu_size;
>>>> + struct bpf_mem_caches *cc, __percpu *pcc;
>>>> + struct obj_cgroup *objcg;
>>>> + struct bpf_mem_cache *c;
>>>> +
>>>> + i = bpf_mem_cache_idx(size);
>>>> + if (i < 0)
>>>> + return -EINVAL;
>>>> +
>>>> + /* room for llist_node and per-cpu pointer */
>>>> + percpu_size = LLIST_NODE_SZ + sizeof(void *);
>>>> +
>>>> + pcc = ma->caches;
>>>> + unit_size = sizes[i];
>>>> +
>>>> +#ifdef CONFIG_MEMCG_KMEM
>>>> + objcg = get_obj_cgroup_from_current();
>>>> +#endif
>>> For bpf_global_percpu_ma, we also need to account the allocated memory
>>> to root memory cgroup just like bpf_global_ma did, do we ? So it seems
>>> that we need to initialize c->objcg early in
>>> bpf_mem_alloc_percpu_init ().
>>
>> Good point. Agree. the original behavior percpu non-fix-size mem
>> allocation is to do get_obj_cgroup_from_current() at init stage
>> and charge to root memory cgroup, and we indeed should move
>> the above bpf_mem_alloc_percpu_init().
>>
>>>> + for_each_possible_cpu(cpu) {
>>>> + cc = per_cpu_ptr(pcc, cpu);
>>>> + c = &cc->cache[i];
>>>> + if (cpu == 0 && c->unit_size)
>>>> + goto out;
>>>> +
>>>> + c->unit_size = unit_size;
>>>> + c->objcg = objcg;
>>>> + c->percpu_size = percpu_size;
>>>> + c->tgt = c;
>>>> +
>>>> + init_refill_work(c);
>>>> + prefill_mem_cache(c, cpu);
>>>> +
>>>> + if (cpu == 0) {
>>>> + err = check_obj_size(c, i);
>>>> + if (err) {
>>>> + drain_mem_cache(c);
>>>> + memset(c, 0, sizeof(*c));
>>> I also forgot about c->objcg. objcg may be leaked if we do memset()
>>> here.
>>
>> The objcg gets a reference at init bpf_mem_alloc_init() stage
>> and released at bpf_mem_alloc_destroy(). For bpf_global_ma,
>> if there is a failure, indeed bpf_mem_alloc_destroy() will be
>> called and the reference c->objcg will be released.
>>
>> So if we move get_obj_cgroup_from_current() to
>> bpf_mem_alloc_percpu_init() stage, we should be okay here.
>>
>> BTW, is check_obj_size() really necessary here? My answer is no
>> since as you mentioned, the size->cache_index is pretty stable,
>> so check_obj_size() should not return error in such cases.
>> What do you think?
>
> How about the following change on top of this patch?
I think the patch below is fine. Before the change, objcg is already a
per-bpf_mem_alloc object, but the implementation doesn't make that
explicit. The change below makes objcg explicitly a per-bpf_mem_alloc
object.
>
> diff --git a/include/linux/bpf_mem_alloc.h
> b/include/linux/bpf_mem_alloc.h
> index 43e635c67150..d1403204379e 100644
> --- a/include/linux/bpf_mem_alloc.h
> +++ b/include/linux/bpf_mem_alloc.h
> @@ -11,6 +11,7 @@ struct bpf_mem_caches;
> struct bpf_mem_alloc {
> struct bpf_mem_caches __percpu *caches;
> struct bpf_mem_cache __percpu *cache;
> + struct obj_cgroup *objcg;
> bool percpu;
> struct work_struct work;
> };
> diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
> index 5cf2738c20a9..6486da4ba097 100644
> --- a/kernel/bpf/memalloc.c
> +++ b/kernel/bpf/memalloc.c
> @@ -553,6 +553,8 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma,
> int size, bool percpu)
> if (memcg_bpf_enabled())
> objcg = get_obj_cgroup_from_current();
> #endif
> + ma->objcg = objcg;
> +
> for_each_possible_cpu(cpu) {
> c = per_cpu_ptr(pc, cpu);
> c->unit_size = unit_size;
> @@ -573,6 +575,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma,
> int size, bool percpu)
> #ifdef CONFIG_MEMCG_KMEM
> objcg = get_obj_cgroup_from_current();
> #endif
> + ma->objcg = objcg;
> for_each_possible_cpu(cpu) {
> cc = per_cpu_ptr(pcc, cpu);
> for (i = 0; i < NUM_CACHES; i++) {
> @@ -637,6 +640,12 @@ __init int bpf_mem_alloc_percpu_init(struct
> bpf_mem_alloc *ma)
>
> ma->caches = pcc;
> ma->percpu = true;
> +
> +#ifdef CONFIG_MEMCG_KMEM
> + ma->objcg = get_obj_cgroup_from_current();
> +#else
> + ma->objcg = NULL;
> +#endif
> return 0;
> }
>
> @@ -656,10 +665,8 @@ int bpf_mem_alloc_percpu_unit_init(struct
> bpf_mem_alloc *ma, int size)
>
> pcc = ma->caches;
> unit_size = sizes[i];
> + objcg = ma->objcg;
>
> -#ifdef CONFIG_MEMCG_KMEM
> - objcg = get_obj_cgroup_from_current();
> -#endif
> for_each_possible_cpu(cpu) {
> cc = per_cpu_ptr(pcc, cpu);
> c = &cc->cache[i];
> @@ -799,9 +806,8 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
> rcu_in_progress +=
> atomic_read(&c->call_rcu_ttrace_in_progress);
> rcu_in_progress +=
> atomic_read(&c->call_rcu_in_progress);
> }
> - /* objcg is the same across cpus */
> - if (c->objcg)
> - obj_cgroup_put(c->objcg);
> + if (ma->objcg)
> + obj_cgroup_put(ma->objcg);
> destroy_mem_alloc(ma, rcu_in_progress);
> }
> if (ma->caches) {
> @@ -817,8 +823,8 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
> rcu_in_progress +=
> atomic_read(&c->call_rcu_in_progress);
> }
> }
> - if (c->objcg)
> - obj_cgroup_put(c->objcg);
> + if (ma->objcg)
> + obj_cgroup_put(ma->objcg);
> destroy_mem_alloc(ma, rcu_in_progress);
> }
> }
>
> I still think check_obj_size for percpu allocation is not needed.
> But I guess we can address that issue later on.
You are right. check_obj_size() is not needed for per-cpu allocation, so
it is OK to just remove it. I also remove check_obj_size() for kmalloc
allocation in [1].
[1]:
https://lore.kernel.org/bpf/20231216131052.27621-1-houtao@huaweicloud.com/
>
>>
>>>> + goto out;
>>>> + }
>>>> + }
>>>> + }
>>>> +
>>>> +out:
>>>> + return err;
>>>> +}
>>>> +
>>> .
>>>
>>
* Re: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
2023-12-16 2:30 ` [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
2023-12-16 3:12 ` Hou Tao
@ 2023-12-16 15:27 ` kernel test robot
1 sibling, 0 replies; 16+ messages in thread
From: kernel test robot @ 2023-12-16 15:27 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: llvm, oe-kbuild-all, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, kernel-team, Martin KaFai Lau
Hi Yonghong,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Yonghong-Song/bpf-Avoid-unnecessary-extra-percpu-memory-allocation/20231216-103310
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20231216023015.3741144-1-yonghong.song%40linux.dev
patch subject: [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator
config: x86_64-randconfig-003-20231216 (https://download.01.org/0day-ci/archive/20231216/202312162351.UuoFmjJk-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231216/202312162351.UuoFmjJk-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312162351.UuoFmjJk-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> kernel/bpf/memalloc.c:665:14: warning: variable 'objcg' is uninitialized when used here [-Wuninitialized]
c->objcg = objcg;
^~~~~
kernel/bpf/memalloc.c:642:26: note: initialize the variable 'objcg' to silence this warning
struct obj_cgroup *objcg;
^
= NULL
1 warning generated.
vim +/objcg +665 kernel/bpf/memalloc.c
637
638 int bpf_mem_alloc_percpu_unit_init(struct bpf_mem_alloc *ma, int size)
639 {
640 int cpu, i, err = 0, unit_size, percpu_size;
641 struct bpf_mem_caches *cc, __percpu *pcc;
642 struct obj_cgroup *objcg;
643 struct bpf_mem_cache *c;
644
645 i = bpf_mem_cache_idx(size);
646 if (i < 0)
647 return -EINVAL;
648
649 /* room for llist_node and per-cpu pointer */
650 percpu_size = LLIST_NODE_SZ + sizeof(void *);
651
652 pcc = ma->caches;
653 unit_size = sizes[i];
654
655 #ifdef CONFIG_MEMCG_KMEM
656 objcg = get_obj_cgroup_from_current();
657 #endif
658 for_each_possible_cpu(cpu) {
659 cc = per_cpu_ptr(pcc, cpu);
660 c = &cc->cache[i];
661 if (cpu == 0 && c->unit_size)
662 goto out;
663
664 c->unit_size = unit_size;
> 665 c->objcg = objcg;
666 c->percpu_size = percpu_size;
667 c->tgt = c;
668
669 init_refill_work(c);
670 prefill_mem_cache(c, cpu);
671
672 if (cpu == 0) {
673 err = check_obj_size(c, i);
674 if (err) {
675 drain_mem_cache(c);
676 memset(c, 0, sizeof(*c));
677 goto out;
678 }
679 }
680 }
681
682 out:
683 return err;
684 }
685
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [PATCH bpf-next v3 3/6] bpf: Refill only one percpu element in memalloc
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
2023-12-16 2:30 ` [PATCH bpf-next v3 1/6] bpf: Avoid unnecessary extra percpu memory allocation Yonghong Song
2023-12-16 2:30 ` [PATCH bpf-next v3 2/6] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 3:13 ` Hou Tao
2023-12-16 2:30 ` [PATCH bpf-next v3 4/6] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation Yonghong Song
` (2 subsequent siblings)
5 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Typically for a percpu map element or data structure, once allocated,
most operations are lookups or in-place updates. Deletions are really
rare. Currently, for percpu data structures, 4 elements will be
refilled if the size is <= 256. Let us just do one element for
percpu data. For example, for size 256 and 128 cpus, the potential
saving will be 3 * 256 * 128 * 128 = 12MB.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/memalloc.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 30e347fccc6a..5cf2738c20a9 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -485,11 +485,16 @@ static void init_refill_work(struct bpf_mem_cache *c)
static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
{
- /* To avoid consuming memory assume that 1st run of bpf
- * prog won't be doing more than 4 map_update_elem from
- * irq disabled region
+ int cnt = 1;
+
+ /* To avoid consuming memory, for non-percpu allocation, assume that
+ * 1st run of bpf prog won't be doing more than 4 map_update_elem from
+ * irq disabled region if unit size is less than or equal to 256.
+ * For all other cases, let us just do one allocation.
*/
- alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
+ if (!c->percpu_size && c->unit_size <= 256)
+ cnt = 4;
+ alloc_bulk(c, cnt, cpu_to_node(cpu), false);
}
static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
--
2.34.1
* Re: [PATCH bpf-next v3 3/6] bpf: Refill only one percpu element in memalloc
2023-12-16 2:30 ` [PATCH bpf-next v3 3/6] bpf: Refill only one percpu element in memalloc Yonghong Song
@ 2023-12-16 3:13 ` Hou Tao
0 siblings, 0 replies; 16+ messages in thread
From: Hou Tao @ 2023-12-16 3:13 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
On 12/16/2023 10:30 AM, Yonghong Song wrote:
> Typically for percpu map element or data structure, once allocated,
> most operations are lookup or in-place update. Deletion are really
> rare. Currently, for percpu data strcture, 4 elements will be
> refilled if the size is <= 256. Let us just do with one element
> for percpu data. For example, for size 256 and 128 cpus, the
> potential saving will be 3 * 256 * 128 * 128 = 12MB.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Hou Tao <houtao1@huawei.com>
* [PATCH bpf-next v3 4/6] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
` (2 preceding siblings ...)
2023-12-16 2:30 ` [PATCH bpf-next v3 3/6] bpf: Refill only one percpu element in memalloc Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 4:05 ` Hou Tao
2023-12-16 2:30 ` [PATCH bpf-next v3 5/6] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma Yonghong Song
2023-12-16 2:30 ` [PATCH bpf-next v3 6/6] selftests/bpf: Add a selftest with > 512-byte percpu allocation size Yonghong Song
5 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
For percpu data structure allocation with bpf_global_percpu_ma,
the maximum data size is 4K. But for a system with a large number
of cpus, a bigger data size (e.g., 2K, 4K) might consume a lot of
memory. For example, the percpu memory consumption with unit size
2K and 1024 cpus will be 2K * 1K * 1K = 2GB of memory.
We should discourage such usage. Let us limit the maximum data
size to 512 bytes for bpf_global_percpu_ma allocations.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ce62ee0cc8f6..039d699a425d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -192,6 +192,8 @@ struct bpf_verifier_stack_elem {
POISON_POINTER_DELTA))
#define BPF_MAP_PTR(X) ((struct bpf_map *)((X) & ~BPF_MAP_PTR_UNPRIV))
+#define BPF_GLOBAL_PERCPU_MA_MAX_SIZE 512
+
static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
@@ -12083,6 +12085,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (!bpf_global_percpu_ma_set)
return -ENOMEM;
+ if (ret_t->size > BPF_GLOBAL_PERCPU_MA_MAX_SIZE) {
+ verbose(env, "bpf_percpu_obj_new type size (%d) is greater than %d\n",
+ ret_t->size, BPF_GLOBAL_PERCPU_MA_MAX_SIZE);
+ return -EINVAL;
+ }
+
mutex_lock(&bpf_percpu_ma_lock);
err = bpf_mem_alloc_percpu_unit_init(&bpf_global_percpu_ma, ret_t->size);
mutex_unlock(&bpf_percpu_ma_lock);
--
2.34.1
* Re: [PATCH bpf-next v3 4/6] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation
2023-12-16 2:30 ` [PATCH bpf-next v3 4/6] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation Yonghong Song
@ 2023-12-16 4:05 ` Hou Tao
0 siblings, 0 replies; 16+ messages in thread
From: Hou Tao @ 2023-12-16 4:05 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
On 12/16/2023 10:30 AM, Yonghong Song wrote:
> For percpu data structure allocation with bpf_global_percpu_ma,
> the maximum data size is 4K. But for a system with large
> number of cpus, bigger data size (e.g., 2K, 4K) might consume
> a lot of memory. For example, the percpu memory consumption
> with unit size 2K and 1024 cpus will be 2K * 1K * 1k = 2GB
> memory.
>
> We should discourage such usage. Let us limit the maximum data
> size to be 512 for bpf_global_percpu_ma allocation.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Makes sense to me, so
Acked-by: Hou Tao <houtao1@huawei.com>
* [PATCH bpf-next v3 5/6] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
` (3 preceding siblings ...)
2023-12-16 2:30 ` [PATCH bpf-next v3 4/6] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 4:04 ` Hou Tao
2023-12-16 2:30 ` [PATCH bpf-next v3 6/6] selftests/bpf: Add a selftest with > 512-byte percpu allocation size Yonghong Song
5 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
The previous patch limited the maximum data size for bpf_global_percpu_ma
to 512 bytes. This breaks the selftest test_bpf_ma. The test is adjusted
in two aspects:
- Since the maximum allowed data size for bpf_global_percpu_ma is
  512, remove all tests beyond that, namely sizes 1024, 2048 and 4096.
- Previously the percpu data size was bucket_size - 8 in order to
  avoid the percpu allocation spilling into the next bucket. This patch
  removes such data size adjustment thanks to Patch 1.
Also, a better way to generate the BTF type is used than adding
a member to the value struct.
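For reference, the "better way" mentioned above is to declare an
otherwise-unused global pointer of the struct type so the compiler still
emits BTF for it, instead of embedding the struct as a dummy member of the
map value. A minimal sketch of the pattern, roughly what
DEFINE_ARRAY_WITH_PERCPU_KPTR(16) expands to in this patch (map definition
omitted; __percpu_kptr comes from the selftests' bpf_experimental.h):

/* sketch of the BTF-emission pattern used by the updated macros */
struct percpu_bin_data_16 {
	char data[16];
};

/* referencing the type from a global pointer is enough for BTF generation,
 * without bloating map_value_percpu_16 with an unused embedded member
 */
struct percpu_bin_data_16 *__percpu_bin_data_16;

struct map_value_percpu_16 {
	struct percpu_bin_data_16 __percpu_kptr *data;
};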
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/prog_tests/test_bpf_ma.c | 20 +++++++----
.../testing/selftests/bpf/progs/test_bpf_ma.c | 34 +++++++++----------
2 files changed, 30 insertions(+), 24 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c b/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
index d3491a84b3b9..ccae0b31ac6c 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
@@ -14,7 +14,8 @@ static void do_bpf_ma_test(const char *name)
struct test_bpf_ma *skel;
struct bpf_program *prog;
struct btf *btf;
- int i, err;
+ int i, err, id;
+ char tname[32];
skel = test_bpf_ma__open();
if (!ASSERT_OK_PTR(skel, "open"))
@@ -25,16 +26,21 @@ static void do_bpf_ma_test(const char *name)
goto out;
for (i = 0; i < ARRAY_SIZE(skel->rodata->data_sizes); i++) {
- char name[32];
- int id;
-
- snprintf(name, sizeof(name), "bin_data_%u", skel->rodata->data_sizes[i]);
- id = btf__find_by_name_kind(btf, name, BTF_KIND_STRUCT);
- if (!ASSERT_GT(id, 0, "bin_data"))
+ snprintf(tname, sizeof(tname), "bin_data_%u", skel->rodata->data_sizes[i]);
+ id = btf__find_by_name_kind(btf, tname, BTF_KIND_STRUCT);
+ if (!ASSERT_GT(id, 0, tname))
goto out;
skel->rodata->data_btf_ids[i] = id;
}
+ for (i = 0; i < ARRAY_SIZE(skel->rodata->percpu_data_sizes); i++) {
+ snprintf(tname, sizeof(tname), "percpu_bin_data_%u", skel->rodata->percpu_data_sizes[i]);
+ id = btf__find_by_name_kind(btf, tname, BTF_KIND_STRUCT);
+ if (!ASSERT_GT(id, 0, tname))
+ goto out;
+ skel->rodata->percpu_data_btf_ids[i] = id;
+ }
+
prog = bpf_object__find_program_by_name(skel->obj, name);
if (!ASSERT_OK_PTR(prog, "invalid prog name"))
goto out;
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_ma.c b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
index b685a4aba6bd..e453a9392e5a 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_ma.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
@@ -20,6 +20,9 @@ char _license[] SEC("license") = "GPL";
const unsigned int data_sizes[] = {8, 16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096};
const volatile unsigned int data_btf_ids[ARRAY_SIZE(data_sizes)] = {};
+const unsigned int percpu_data_sizes[] = {8, 16, 32, 64, 96, 128, 192, 256, 512};
+const volatile unsigned int percpu_data_btf_ids[ARRAY_SIZE(data_sizes)] = {};
+
int err = 0;
int pid = 0;
@@ -27,10 +30,10 @@ int pid = 0;
struct bin_data_##_size { \
char data[_size - sizeof(void *)]; \
}; \
+ /* See Commit 5d8d6634ccc, force btf generation for type bin_data_##_size */ \
+ struct bin_data_##_size *__bin_data_##_size; \
struct map_value_##_size { \
struct bin_data_##_size __kptr * data; \
- /* To emit BTF info for bin_data_xx */ \
- struct bin_data_##_size not_used; \
}; \
struct { \
__uint(type, BPF_MAP_TYPE_ARRAY); \
@@ -40,8 +43,12 @@ int pid = 0;
} array_##_size SEC(".maps")
#define DEFINE_ARRAY_WITH_PERCPU_KPTR(_size) \
+ struct percpu_bin_data_##_size { \
+ char data[_size]; \
+ }; \
+ struct percpu_bin_data_##_size *__percpu_bin_data_##_size; \
struct map_value_percpu_##_size { \
- struct bin_data_##_size __percpu_kptr * data; \
+ struct percpu_bin_data_##_size __percpu_kptr * data; \
}; \
struct { \
__uint(type, BPF_MAP_TYPE_ARRAY); \
@@ -114,7 +121,7 @@ static __always_inline void batch_percpu_alloc(struct bpf_map *map, unsigned int
return;
}
/* per-cpu allocator may not be able to refill in time */
- new = bpf_percpu_obj_new_impl(data_btf_ids[idx], NULL);
+ new = bpf_percpu_obj_new_impl(percpu_data_btf_ids[idx], NULL);
if (!new)
continue;
@@ -179,7 +186,7 @@ DEFINE_ARRAY_WITH_KPTR(1024);
DEFINE_ARRAY_WITH_KPTR(2048);
DEFINE_ARRAY_WITH_KPTR(4096);
-/* per-cpu kptr doesn't support bin_data_8 which is a zero-sized array */
+DEFINE_ARRAY_WITH_PERCPU_KPTR(8);
DEFINE_ARRAY_WITH_PERCPU_KPTR(16);
DEFINE_ARRAY_WITH_PERCPU_KPTR(32);
DEFINE_ARRAY_WITH_PERCPU_KPTR(64);
@@ -188,9 +195,6 @@ DEFINE_ARRAY_WITH_PERCPU_KPTR(128);
DEFINE_ARRAY_WITH_PERCPU_KPTR(192);
DEFINE_ARRAY_WITH_PERCPU_KPTR(256);
DEFINE_ARRAY_WITH_PERCPU_KPTR(512);
-DEFINE_ARRAY_WITH_PERCPU_KPTR(1024);
-DEFINE_ARRAY_WITH_PERCPU_KPTR(2048);
-DEFINE_ARRAY_WITH_PERCPU_KPTR(4096);
SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
int test_batch_alloc_free(void *ctx)
@@ -248,9 +252,10 @@ int test_batch_percpu_alloc_free(void *ctx)
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
- /* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
- * then free 128 16-bytes per-cpu objects in batch to trigger freeing.
+ /* Alloc 128 8-bytes per-cpu objects in batch to trigger refilling,
+ * then free 128 8-bytes per-cpu objects in batch to trigger freeing.
*/
+ CALL_BATCH_PERCPU_ALLOC_FREE(8, 128, 0);
CALL_BATCH_PERCPU_ALLOC_FREE(16, 128, 1);
CALL_BATCH_PERCPU_ALLOC_FREE(32, 128, 2);
CALL_BATCH_PERCPU_ALLOC_FREE(64, 128, 3);
@@ -259,9 +264,6 @@ int test_batch_percpu_alloc_free(void *ctx)
CALL_BATCH_PERCPU_ALLOC_FREE(192, 128, 6);
CALL_BATCH_PERCPU_ALLOC_FREE(256, 128, 7);
CALL_BATCH_PERCPU_ALLOC_FREE(512, 64, 8);
- CALL_BATCH_PERCPU_ALLOC_FREE(1024, 32, 9);
- CALL_BATCH_PERCPU_ALLOC_FREE(2048, 16, 10);
- CALL_BATCH_PERCPU_ALLOC_FREE(4096, 8, 11);
return 0;
}
@@ -272,9 +274,10 @@ int test_percpu_free_through_map_free(void *ctx)
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
- /* Alloc 128 16-bytes per-cpu objects in batch to trigger refilling,
+ /* Alloc 128 8-bytes per-cpu objects in batch to trigger refilling,
* then free these object through map free.
*/
+ CALL_BATCH_PERCPU_ALLOC(8, 128, 0);
CALL_BATCH_PERCPU_ALLOC(16, 128, 1);
CALL_BATCH_PERCPU_ALLOC(32, 128, 2);
CALL_BATCH_PERCPU_ALLOC(64, 128, 3);
@@ -283,9 +286,6 @@ int test_percpu_free_through_map_free(void *ctx)
CALL_BATCH_PERCPU_ALLOC(192, 128, 6);
CALL_BATCH_PERCPU_ALLOC(256, 128, 7);
CALL_BATCH_PERCPU_ALLOC(512, 64, 8);
- CALL_BATCH_PERCPU_ALLOC(1024, 32, 9);
- CALL_BATCH_PERCPU_ALLOC(2048, 16, 10);
- CALL_BATCH_PERCPU_ALLOC(4096, 8, 11);
return 0;
}
--
2.34.1
* Re: [PATCH bpf-next v3 5/6] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma
2023-12-16 2:30 ` [PATCH bpf-next v3 5/6] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma Yonghong Song
@ 2023-12-16 4:04 ` Hou Tao
0 siblings, 0 replies; 16+ messages in thread
From: Hou Tao @ 2023-12-16 4:04 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
On 12/16/2023 10:30 AM, Yonghong Song wrote:
> In the previous patch, the maximum data size for bpf_global_percpu_ma
> is 512 bytes. This breaks selftest test_bpf_ma. The test is adjusted
> in two aspects:
> - Since the maximum allowed data size for bpf_global_percpu_ma is
> 512, remove all tests beyond that, names sizes 1024, 2048 and 4096.
> - Previously the percpu data size is bucket_size - 8 in order to
> avoid percpu allocation into the next bucket. This patch removed
> such data size adjustment thanks to Patch 1.
>
> Also, a better way to generate BTF type is used than adding
> a member to the value struct.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Hou Tao <houtao1@huawei.com>
And thanks for the update.
* [PATCH bpf-next v3 6/6] selftests/bpf: Add a selftest with > 512-byte percpu allocation size
2023-12-16 2:30 [PATCH bpf-next v3 0/6] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
` (4 preceding siblings ...)
2023-12-16 2:30 ` [PATCH bpf-next v3 5/6] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma Yonghong Song
@ 2023-12-16 2:30 ` Yonghong Song
2023-12-16 4:07 ` Hou Tao
5 siblings, 1 reply; 16+ messages in thread
From: Yonghong Song @ 2023-12-16 2:30 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, kernel-team,
Martin KaFai Lau
Add a selftest to capture the verification failure when the allocation
size is greater than 512.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/progs/percpu_alloc_fail.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
index 1a891d30f1fe..f2b8eb2ff76f 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
@@ -17,6 +17,10 @@ struct val_with_rb_root_t {
struct bpf_spin_lock lock;
};
+struct val_600b_t {
+ char b[600];
+};
+
struct elem {
long sum;
struct val_t __percpu_kptr *pc;
@@ -161,4 +165,18 @@ int BPF_PROG(test_array_map_7)
return 0;
}
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("bpf_percpu_obj_new type size (600) is greater than 512")
+int BPF_PROG(test_array_map_8)
+{
+ struct val_600b_t __percpu_kptr *p;
+
+ p = bpf_percpu_obj_new(struct val_600b_t);
+ if (!p)
+ return 0;
+
+ bpf_percpu_obj_drop(p);
+ return 0;
+}
+
char _license[] SEC("license") = "GPL";
--
2.34.1