From: Yonghong Song <yonghong.song@linux.dev>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
kernel-team@fb.com, Martin KaFai Lau <martin.lau@kernel.org>
Subject: [PATCH bpf-next 1/5] bpf: Refactor to have a memalloc cache destroying function
Date: Tue, 12 Dec 2023 14:30:45 -0800 [thread overview]
Message-ID: <20231212223045.2137288-1-yonghong.song@linux.dev> (raw)
In-Reply-To: <20231212223040.2135547-1-yonghong.song@linux.dev>
The new function, bpf_mem_alloc_destroy_cache(), will be used
in a subsequent patch.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/memalloc.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 6a51cfe4c2d6..75068167e745 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -618,6 +618,13 @@ static void drain_mem_cache(struct bpf_mem_cache *c)
 	free_all(llist_del_all(&c->waiting_for_gp), percpu);
 }
 
+static void bpf_mem_alloc_destroy_cache(struct bpf_mem_cache *c)
+{
+	WRITE_ONCE(c->draining, true);
+	irq_work_sync(&c->refill_work);
+	drain_mem_cache(c);
+}
+
 static void check_mem_cache(struct bpf_mem_cache *c)
 {
 	WARN_ON_ONCE(!llist_empty(&c->free_by_rcu_ttrace));
@@ -723,9 +730,7 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
 		rcu_in_progress = 0;
 		for_each_possible_cpu(cpu) {
 			c = per_cpu_ptr(ma->cache, cpu);
-			WRITE_ONCE(c->draining, true);
-			irq_work_sync(&c->refill_work);
-			drain_mem_cache(c);
+			bpf_mem_alloc_destroy_cache(c);
 			rcu_in_progress += atomic_read(&c->call_rcu_ttrace_in_progress);
 			rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
 		}
@@ -740,9 +745,7 @@ void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma)
 			cc = per_cpu_ptr(ma->caches, cpu);
 			for (i = 0; i < NUM_CACHES; i++) {
 				c = &cc->cache[i];
-				WRITE_ONCE(c->draining, true);
-				irq_work_sync(&c->refill_work);
-				drain_mem_cache(c);
+				bpf_mem_alloc_destroy_cache(c);
 				rcu_in_progress += atomic_read(&c->call_rcu_ttrace_in_progress);
 				rcu_in_progress += atomic_read(&c->call_rcu_in_progress);
 			}
--
2.34.1
Thread overview: 16+ messages
2023-12-12 22:30 [PATCH bpf-next 0/5] bpf: Reduce memory usage for bpf_global_percpu_ma Yonghong Song
2023-12-12 22:30 ` Yonghong Song [this message]
2023-12-13 11:05 ` [PATCH bpf-next 1/5] bpf: Refactor to have a memalloc cache destroying function Hou Tao
2023-12-12 22:30 ` [PATCH bpf-next 2/5] bpf: Allow per unit prefill for non-fix-size percpu memory allocator Yonghong Song
2023-12-13 11:03 ` Hou Tao
2023-12-13 17:25 ` Yonghong Song
2023-12-12 22:30 ` [PATCH bpf-next 3/5] bpf: Refill only one percpu element in memalloc Yonghong Song
2023-12-13 11:05 ` Hou Tao
2023-12-13 17:26 ` Yonghong Song
2023-12-12 22:31 ` [PATCH bpf-next 4/5] bpf: Limit up to 512 bytes for bpf_global_percpu_ma allocation Yonghong Song
2023-12-13 10:15 ` kernel test robot
2023-12-13 17:20 ` Yonghong Song
2023-12-13 11:09 ` Hou Tao
2023-12-13 17:28 ` Yonghong Song
2023-12-13 14:13 ` kernel test robot
2023-12-12 22:31 ` [PATCH bpf-next 5/5] selftests/bpf: Cope with 512 bytes limit with bpf_global_percpu_ma Yonghong Song