From: Yonghong Song <yonghong.song@linux.dev>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	kernel-team@fb.com, Martin KaFai Lau <martin.lau@kernel.org>
Subject: [PATCH bpf-next 3/5] bpf: Refill only one percpu element in memalloc
Date: Tue, 12 Dec 2023 14:30:55 -0800	[thread overview]
Message-ID: <20231212223055.2138132-1-yonghong.song@linux.dev> (raw)
In-Reply-To: <20231212223040.2135547-1-yonghong.song@linux.dev>

Typically, for a percpu map element or data structure, once it is
allocated, most operations are lookups or in-place updates; deletions
are really rare. Currently, for a percpu data structure, 4 elements
will be refilled if the size is <= 256. Let us refill just one element
for percpu data. For example, for a size of 256 and 128 cpus, the
potential saving is 3 * 256 * 128 * 128 = 12MB.
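To spell out that estimate (the factor labels below are an assumption
about how the numbers combine, not taken from the patch itself):

  3 (extra prefilled elements per cache)
    * 256 (bytes per per-cpu copy)
    * 128 (cpus, i.e. per-cpu copies per element)
    * 128 (per-cpu caches doing the prefill)
  = 12,582,912 bytes ~= 12MB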

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/memalloc.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 84987e97fd0a..a1d718ee264d 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -483,11 +483,15 @@ static void init_refill_work(struct bpf_mem_cache *c)
 
 static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
 {
+	int cnt = 1;
+
 	/* To avoid consuming memory assume that 1st run of bpf
 	 * prog won't be doing more than 4 map_update_elem from
 	 * irq disabled region
 	 */
-	alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
+	if (!c->percpu_size && c->unit_size <= 256)
+		cnt = 4;
+	alloc_bulk(c, cnt, cpu_to_node(cpu), false);
 }
 
 static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
-- 
2.34.1
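
For readers skimming the hunk above, here is the refill policy pulled
out as a standalone sketch. This is not kernel code: the struct and
function names below are simplified stand-ins, and it only assumes (as
the patch does) that percpu_size is nonzero for percpu caches.

/* Standalone sketch of the refill-count policy after this patch.
 * struct mem_cache_sketch is a simplified stand-in for the kernel's
 * struct bpf_mem_cache, keeping only the two fields the policy reads.
 */
struct mem_cache_sketch {
	int unit_size;   /* size of one object served by this cache */
	int percpu_size; /* nonzero only for percpu caches (assumed) */
};

static int prefill_count(const struct mem_cache_sketch *c)
{
	/* Small non-percpu objects: keep the historical batch of 4 so the
	 * first run of a bpf prog can do a few map_update_elem calls from
	 * an irq-disabled region without refilling.
	 */
	if (!c->percpu_size && c->unit_size <= 256)
		return 4;

	/* Percpu (and large) objects: prefill just one, since each percpu
	 * element already costs roughly unit_size * nr_cpus bytes.
	 */
	return 1;
}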



Thread overview: 16+ messages
2023-12-12 22:30 [PATCH bpf-next 0/5] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
2023-12-12 22:30 ` [PATCH bpf-next 1/5] bpf: Refactor to have a memalloc cache destroying function Yonghong Song
2023-12-13 11:05   ` Hou Tao
2023-12-12 22:30 ` [PATCH bpf-next 2/5] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
2023-12-13 11:03   ` Hou Tao
2023-12-13 17:25     ` Yonghong Song
2023-12-12 22:30 ` Yonghong Song [this message]
2023-12-13 11:05   ` [PATCH bpf-next 3/5] bpf: Refill only one percpu element in memalloc Hou Tao
2023-12-13 17:26     ` Yonghong Song
2023-12-12 22:31 ` [PATCH bpf-next 4/5] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation Yonghong Song
2023-12-13 10:15   ` kernel test robot
2023-12-13 17:20     ` Yonghong Song
2023-12-13 11:09   ` Hou Tao
2023-12-13 17:28     ` Yonghong Song
2023-12-13 14:13   ` kernel test robot
2023-12-12 22:31 ` [PATCH bpf-next 5/5] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma Yonghong Song
