* [PATCH bpf-next v3 0/5] Free htab element out of bucket lock
@ 2025-01-17 10:18 Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node() Hou Tao
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
Hi,
The patch set continues the previous work [1] of moving all freeing of
htab elements out of the bucket lock. One motivation for the patch set is
the locking problem reported by Sebastian [2]: the freeing of bpf_timer
under PREEMPT_RT may acquire a spin-lock (namely softirq_expiry_lock).
However, the freeing procedure for a htab element already holds a
raw spin lock (namely the bucket lock), and it will trigger the warning
"BUG: scheduling while atomic", as demonstrated by the selftests patch.

Another motivation is to reduce the locked scope of the bucket lock.
However, the patch set doesn't move all freeing of htab elements out of
the bucket lock; it still keeps the freeing of special fields in the
pre-allocated hash map under the protection of the bucket lock in
htab_map_update_elem(). The patch set is structured as follows:
* Patch #1 moves the element freeing out of the bucket lock for
htab_lru_map_delete_node(). However, the freeing still happens within
the locked scope of the LRU raw spin lock.
* Patches #2~#3 move the element freeing out of the bucket lock for
__htab_map_lookup_and_delete_elem().
* Patch #4 cancels the bpf_timer in two steps to fix the locking
problem in htab_map_update_elem() for PREEMPT_RT.
* Patch #5 adds a selftest for the locking problem.
Please see individual patches for more details. Comments are always
welcome.
---
v3:
* patch #1: update the commit message to state that the freeing of
special fields is still in the locked scope of the LRU raw spin lock
* patch #4: cancel the bpf_timer in two steps only for PREEMPT_RT
(suggested by Alexei)
v2: https://lore.kernel.org/bpf/20250109061901.2620825-1-houtao@huaweicloud.com
* cancel the bpf_timer in two steps instead of breaking the reuse and
refill of per-cpu ->extra_elems into two steps
v1: https://lore.kernel.org/bpf/20250107085559.3081563-1-houtao@huaweicloud.com
[1]: https://lore.kernel.org/bpf/20241106063542.357743-1-houtao@huaweicloud.com
[2]: https://lore.kernel.org/bpf/20241106084527.4gPrMnHt@linutronix.de
Hou Tao (5):
bpf: Free special fields after unlock in htab_lru_map_delete_node()
bpf: Bail out early in __htab_map_lookup_and_delete_elem()
bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT
selftests/bpf: Add test case for the freeing of bpf_timer
kernel/bpf/hashtab.c | 60 ++++---
kernel/bpf/helpers.c | 18 +-
.../selftests/bpf/prog_tests/free_timer.c | 165 ++++++++++++++++++
.../testing/selftests/bpf/progs/free_timer.c | 71 ++++++++
4 files changed, 284 insertions(+), 30 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/free_timer.c
create mode 100644 tools/testing/selftests/bpf/progs/free_timer.c
--
2.29.2
* [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node()
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
@ 2025-01-17 10:18 ` Hou Tao
2025-01-17 12:31 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem() Hou Tao
` (3 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
When bpf_timer is used in an LRU hash map, calling check_and_free_fields()
in htab_lru_map_delete_node() will invoke bpf_timer_cancel_and_free() to
free the bpf_timer. If the timer is running on another CPU,
hrtimer_cancel() will invoke hrtimer_cancel_wait_running() to spin on the
current CPU to wait for the completion of the hrtimer callback.

The deletion procedure has already acquired a raw spin lock (the bucket
lock). To reduce the time the bucket lock is held, move the invocation
of check_and_free_fields() out of the bucket lock. However, because
htab_lru_map_delete_node() is invoked with the LRU raw spin lock held,
the freeing of special fields still happens in a locked scope.
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/hashtab.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 40095dda891d3..963cccb01daae 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -824,13 +824,14 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
if (l == tgt_l) {
hlist_nulls_del_rcu(&l->hash_node);
- check_and_free_fields(htab, l);
bpf_map_dec_elem_count(&htab->map);
break;
}
htab_unlock_bucket(htab, b, tgt_l->hash, flags);
+ if (l == tgt_l)
+ check_and_free_fields(htab, l);
return l == tgt_l;
}
--
2.29.2
* [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem()
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node() Hou Tao
@ 2025-01-17 10:18 ` Hou Tao
2025-01-17 12:31 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 3/5] bpf: Free element after unlock " Hou Tao
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
Use a goto statement to bail out early when the target element is not
found, instead of using a large else branch to handle the more likely
case. This change doesn't affect functionality and simply makes the code
cleaner.
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/hashtab.c | 51 ++++++++++++++++++++++----------------------
1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 963cccb01daae..6545ef40e128a 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1635,37 +1635,38 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,
l = lookup_elem_raw(head, hash, key, key_size);
if (!l) {
ret = -ENOENT;
- } else {
- if (is_percpu) {
- u32 roundup_value_size = round_up(map->value_size, 8);
- void __percpu *pptr;
- int off = 0, cpu;
+ goto out_unlock;
+ }
- pptr = htab_elem_get_ptr(l, key_size);
- for_each_possible_cpu(cpu) {
- copy_map_value_long(&htab->map, value + off, per_cpu_ptr(pptr, cpu));
- check_and_init_map_value(&htab->map, value + off);
- off += roundup_value_size;
- }
- } else {
- u32 roundup_key_size = round_up(map->key_size, 8);
+ if (is_percpu) {
+ u32 roundup_value_size = round_up(map->value_size, 8);
+ void __percpu *pptr;
+ int off = 0, cpu;
- if (flags & BPF_F_LOCK)
- copy_map_value_locked(map, value, l->key +
- roundup_key_size,
- true);
- else
- copy_map_value(map, value, l->key +
- roundup_key_size);
- /* Zeroing special fields in the temp buffer */
- check_and_init_map_value(map, value);
+ pptr = htab_elem_get_ptr(l, key_size);
+ for_each_possible_cpu(cpu) {
+ copy_map_value_long(&htab->map, value + off, per_cpu_ptr(pptr, cpu));
+ check_and_init_map_value(&htab->map, value + off);
+ off += roundup_value_size;
}
+ } else {
+ u32 roundup_key_size = round_up(map->key_size, 8);
- hlist_nulls_del_rcu(&l->hash_node);
- if (!is_lru_map)
- free_htab_elem(htab, l);
+ if (flags & BPF_F_LOCK)
+ copy_map_value_locked(map, value, l->key +
+ roundup_key_size,
+ true);
+ else
+ copy_map_value(map, value, l->key +
+ roundup_key_size);
+ /* Zeroing special fields in the temp buffer */
+ check_and_init_map_value(map, value);
}
+ hlist_nulls_del_rcu(&l->hash_node);
+ if (!is_lru_map)
+ free_htab_elem(htab, l);
+out_unlock:
htab_unlock_bucket(htab, b, hash, bflags);
if (is_lru_map && l)
--
2.29.2
* [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node() Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem() Hou Tao
@ 2025-01-17 10:18 ` Hou Tao
2025-01-17 12:35 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 5/5] selftests/bpf: Add test case for the freeing of bpf_timer Hou Tao
4 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
The freeing of special fields in a map value may acquire a spin-lock
(e.g., the freeing of bpf_timer). However, the lookup_and_delete_elem
procedure already holds a raw spin lock, which violates the lockdep
rule.

The running context of __htab_map_lookup_and_delete_elem() has already
disabled migration. Therefore, it is OK to invoke free_htab_elem()
after unlocking the bucket lock.

Fix the potential problem by freeing the element after unlocking the
bucket lock in __htab_map_lookup_and_delete_elem().
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/hashtab.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 6545ef40e128a..4a9eeb7aef855 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1663,14 +1663,16 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,
check_and_init_map_value(map, value);
}
hlist_nulls_del_rcu(&l->hash_node);
- if (!is_lru_map)
- free_htab_elem(htab, l);
out_unlock:
htab_unlock_bucket(htab, b, hash, bflags);
- if (is_lru_map && l)
- htab_lru_push_free(htab, l);
+ if (l) {
+ if (is_lru_map)
+ htab_lru_push_free(htab, l);
+ else
+ free_htab_elem(htab, l);
+ }
return ret;
}
--
2.29.2
* [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
` (2 preceding siblings ...)
2025-01-17 10:18 ` [PATCH bpf-next v3 3/5] bpf: Free element after unlock " Hou Tao
@ 2025-01-17 10:18 ` Hou Tao
2025-01-17 12:40 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 5/5] selftests/bpf: Add test case for the freeing of bpf_timer Hou Tao
4 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
During the update procedure, when overwriting an element in a pre-allocated
htab, the freeing of the old_element is protected by the bucket lock. The
bucket lock is necessary because the old_element has already been stashed
in htab->extra_elems after alloc_htab_elem() returns. If the old_element
were freed after the bucket lock is released, the stashed element could be
reused by a concurrent update procedure, and the freeing of the old_element
would run concurrently with that reuse. However, the invocation of
check_and_free_fields() may acquire a spin-lock, which violates the lockdep
rule because its caller already holds a raw spin lock (the bucket lock).
The following warning is reported when such a race happens:
BUG: scheduling while atomic: test_progs/676/0x00000003
3 locks held by test_progs/676:
#0: ffffffff864b0240 (rcu_read_lock_trace){....}-{0:0}, at: bpf_prog_test_run_syscall+0x2c0/0x830
#1: ffff88810e961188 (&htab->lockdep_key){....}-{2:2}, at: htab_map_update_elem+0x306/0x1500
#2: ffff8881f4eac1b8 (&base->softirq_expiry_lock){....}-{2:2}, at: hrtimer_cancel_wait_running+0xe9/0x1b0
Modules linked in: bpf_testmod(O)
Preemption disabled at:
[<ffffffff817837a3>] htab_map_update_elem+0x293/0x1500
CPU: 0 UID: 0 PID: 676 Comm: test_progs Tainted: G ... 6.12.0+ #11
Tainted: [W]=WARN, [O]=OOT_MODULE
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)...
Call Trace:
<TASK>
dump_stack_lvl+0x57/0x70
dump_stack+0x10/0x20
__schedule_bug+0x120/0x170
__schedule+0x300c/0x4800
schedule_rtlock+0x37/0x60
rtlock_slowlock_locked+0x6d9/0x54c0
rt_spin_lock+0x168/0x230
hrtimer_cancel_wait_running+0xe9/0x1b0
hrtimer_cancel+0x24/0x30
bpf_timer_delete_work+0x1d/0x40
bpf_timer_cancel_and_free+0x5e/0x80
bpf_obj_free_fields+0x262/0x4a0
check_and_free_fields+0x1d0/0x280
htab_map_update_elem+0x7fc/0x1500
bpf_prog_9f90bc20768e0cb9_overwrite_cb+0x3f/0x43
bpf_prog_ea601c4649694dbd_overwrite_timer+0x5d/0x7e
bpf_prog_test_run_syscall+0x322/0x830
__sys_bpf+0x135d/0x3ca0
__x64_sys_bpf+0x75/0xb0
x64_sys_call+0x1b5/0xa10
do_syscall_64+0x3b/0xc0
entry_SYSCALL_64_after_hwframe+0x4b/0x53
...
</TASK>
It seems feasible to break the reuse and refill of the per-cpu extra_elems
into two independent parts: reuse the per-cpu extra_elems with the bucket
lock held, and refill the old_element as the per-cpu extra_elems after
the bucket lock is released. However, that would make concurrent
overwrite procedures on the same CPU return an unexpected -E2BIG error
when the map is full.
Therefore, the patch fixes the locking problem by breaking the cancelling
of the bpf_timer into two steps for PREEMPT_RT:
1) use hrtimer_try_to_cancel() and check its return value
2) if the timer is still running, use hrtimer_cancel() through a kworker
to cancel it again

Considering that the current implementation of hrtimer_cancel() will try
to acquire the already-held softirq_expiry_lock when the target timer is
running, the steps above are reasonable. However, the approach also has a
downside: when the timer is running, its cancellation is delayed when the
last map uref is released. The delay is also fixable (e.g., by breaking
the cancelling of the bpf_timer into two parts: one in the locked scope,
another in the unlocked scope), and it can be revised later if necessary.
It is a bit hard to decide on the right Fixes tag. One reason is that the
problem depends on PREEMPT_RT, which was enabled in v6.12. Considering that
softirq_expiry_lock has existed since v5.4 and bpf_timer was introduced
in v5.15, the bpf_timer commit is used in the Fixes tag and an extra
Depends-on tag is added to state the dependency on PREEMPT_RT.
Fixes: b00628b1c7d5 ("bpf: Introduce bpf timers.")
Depends-on: v6.12+ with PREEMPT_RT enabled
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Closes: https://lore.kernel.org/bpf/20241106084527.4gPrMnHt@linutronix.de
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
kernel/bpf/helpers.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index bcda671feafd9..f27ce162427ab 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1593,10 +1593,24 @@ void bpf_timer_cancel_and_free(void *val)
* To avoid these issues, punt to workqueue context when we are in a
* timer callback.
*/
- if (this_cpu_read(hrtimer_running))
+ if (this_cpu_read(hrtimer_running)) {
queue_work(system_unbound_wq, &t->cb.delete_work);
- else
+ return;
+ }
+
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ /* If the timer is running on other CPU, also use a kworker to
+ * wait for the completion of the timer instead of trying to
+ * acquire a sleepable lock in hrtimer_cancel() to wait for its
+ * completion.
+ */
+ if (hrtimer_try_to_cancel(&t->timer) >= 0)
+ kfree_rcu(t, cb.rcu);
+ else
+ queue_work(system_unbound_wq, &t->cb.delete_work);
+ } else {
bpf_timer_delete_work(&t->cb.delete_work);
+ }
}
/* This function is called by map_delete/update_elem for individual element and
--
2.29.2
* [PATCH bpf-next v3 5/5] selftests/bpf: Add test case for the freeing of bpf_timer
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
` (3 preceding siblings ...)
2025-01-17 10:18 ` [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT Hou Tao
@ 2025-01-17 10:18 ` Hou Tao
4 siblings, 0 replies; 14+ messages in thread
From: Hou Tao @ 2025-01-17 10:18 UTC (permalink / raw)
To: bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
From: Hou Tao <houtao1@huawei.com>
The main purpose of the test is to demonstrate the locking problem for the
freeing of bpf_timer under PREEMPT_RT. When freeing a bpf_timer which is
running on another CPU in bpf_timer_cancel_and_free(), hrtimer_cancel()
will try to acquire a spin-lock (namely softirq_expiry_lock); however,
the freeing procedure already holds a raw spin lock.

The test first creates two threads: one to start timers and the other to
free timers. The start-timers thread starts the timers and then wakes up
the free-timers thread to free them once all timers have been started.
After the freeing, the free-timers thread wakes up the start-timers
thread to complete the current iteration. A loop of 10 iterations is used.
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
.../selftests/bpf/prog_tests/free_timer.c | 165 ++++++++++++++++++
.../testing/selftests/bpf/progs/free_timer.c | 71 ++++++++
2 files changed, 236 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/free_timer.c
create mode 100644 tools/testing/selftests/bpf/progs/free_timer.c
diff --git a/tools/testing/selftests/bpf/prog_tests/free_timer.c b/tools/testing/selftests/bpf/prog_tests/free_timer.c
new file mode 100644
index 0000000000000..b7b77a6b29799
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/free_timer.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2025. Huawei Technologies Co., Ltd */
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <test_progs.h>
+
+#include "free_timer.skel.h"
+
+struct run_ctx {
+ struct bpf_program *start_prog;
+ struct bpf_program *overwrite_prog;
+ pthread_barrier_t notify;
+ int loop;
+ bool start;
+ bool stop;
+};
+
+static void start_threads(struct run_ctx *ctx)
+{
+ ctx->start = true;
+}
+
+static void stop_threads(struct run_ctx *ctx)
+{
+ ctx->stop = true;
+ /* Guarantee the order between ->stop and ->start */
+ __atomic_store_n(&ctx->start, true, __ATOMIC_RELEASE);
+}
+
+static int wait_for_start(struct run_ctx *ctx)
+{
+ while (!__atomic_load_n(&ctx->start, __ATOMIC_ACQUIRE))
+ usleep(10);
+
+ return ctx->stop;
+}
+
+static void *overwrite_timer_fn(void *arg)
+{
+ struct run_ctx *ctx = arg;
+ int loop, fd, err;
+ cpu_set_t cpuset;
+ long ret = 0;
+
+ /* Pin on CPU 0 */
+ CPU_ZERO(&cpuset);
+ CPU_SET(0, &cpuset);
+ pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+
+ /* Is the thread being stopped ? */
+ err = wait_for_start(ctx);
+ if (err)
+ return NULL;
+
+ fd = bpf_program__fd(ctx->overwrite_prog);
+ loop = ctx->loop;
+ while (loop-- > 0) {
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ /* Wait for start thread to complete */
+ pthread_barrier_wait(&ctx->notify);
+
+ /* Overwrite timers */
+ err = bpf_prog_test_run_opts(fd, &opts);
+ if (err)
+ ret |= 1;
+ else if (opts.retval)
+ ret |= 2;
+
+ /* Notify start thread to start timers */
+ pthread_barrier_wait(&ctx->notify);
+ }
+
+ return (void *)ret;
+}
+
+static void *start_timer_fn(void *arg)
+{
+ struct run_ctx *ctx = arg;
+ int loop, fd, err;
+ cpu_set_t cpuset;
+ long ret = 0;
+
+ /* Pin on CPU 1 */
+ CPU_ZERO(&cpuset);
+ CPU_SET(1, &cpuset);
+ pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
+
+ /* Is the thread being stopped ? */
+ err = wait_for_start(ctx);
+ if (err)
+ return NULL;
+
+ fd = bpf_program__fd(ctx->start_prog);
+ loop = ctx->loop;
+ while (loop-- > 0) {
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ /* Run the prog to start timer */
+ err = bpf_prog_test_run_opts(fd, &opts);
+ if (err)
+ ret |= 4;
+ else if (opts.retval)
+ ret |= 8;
+
+ /* Notify overwrite thread to do overwrite */
+ pthread_barrier_wait(&ctx->notify);
+
+ /* Wait for overwrite thread to complete */
+ pthread_barrier_wait(&ctx->notify);
+ }
+
+ return (void *)ret;
+}
+
+void test_free_timer(void)
+{
+ struct free_timer *skel;
+ struct bpf_program *prog;
+ struct run_ctx ctx;
+ pthread_t tid[2];
+ void *ret;
+ int err;
+
+ skel = free_timer__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_load"))
+ return;
+
+ memset(&ctx, 0, sizeof(ctx));
+
+ prog = bpf_object__find_program_by_name(skel->obj, "start_timer");
+ if (!ASSERT_OK_PTR(prog, "find start prog"))
+ goto out;
+ ctx.start_prog = prog;
+
+ prog = bpf_object__find_program_by_name(skel->obj, "overwrite_timer");
+ if (!ASSERT_OK_PTR(prog, "find overwrite prog"))
+ goto out;
+ ctx.overwrite_prog = prog;
+
+ pthread_barrier_init(&ctx.notify, NULL, 2);
+ ctx.loop = 10;
+
+ err = pthread_create(&tid[0], NULL, start_timer_fn, &ctx);
+ if (!ASSERT_OK(err, "create start_timer"))
+ goto out;
+
+ err = pthread_create(&tid[1], NULL, overwrite_timer_fn, &ctx);
+ if (!ASSERT_OK(err, "create overwrite_timer")) {
+ stop_threads(&ctx);
+ goto out;
+ }
+
+ start_threads(&ctx);
+
+ ret = NULL;
+ err = pthread_join(tid[0], &ret);
+ ASSERT_EQ(err | (long)ret, 0, "start_timer");
+ ret = NULL;
+ err = pthread_join(tid[1], &ret);
+ ASSERT_EQ(err | (long)ret, 0, "overwrite_timer");
+out:
+ free_timer__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/free_timer.c b/tools/testing/selftests/bpf/progs/free_timer.c
new file mode 100644
index 0000000000000..4501ae8fc4143
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/free_timer.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2025. Huawei Technologies Co., Ltd */
+#include <linux/bpf.h>
+#include <time.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#define MAX_ENTRIES 8
+
+struct map_value {
+ struct bpf_timer timer;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, MAX_ENTRIES);
+} map SEC(".maps");
+
+static int timer_cb(void *map, void *key, struct map_value *value)
+{
+ volatile int sum = 0;
+ int i;
+
+ bpf_for(i, 0, 1024 * 1024) sum += i;
+
+ return 0;
+}
+
+static int start_cb(int key)
+{
+ struct map_value *value;
+
+ value = bpf_map_lookup_elem(&map, (void *)&key);
+ if (!value)
+ return 0;
+
+ bpf_timer_init(&value->timer, &map, CLOCK_MONOTONIC);
+ bpf_timer_set_callback(&value->timer, timer_cb);
+ /* Hope 100us will be enough to wake-up and run the overwrite thread */
+ bpf_timer_start(&value->timer, 100000, BPF_F_TIMER_CPU_PIN);
+
+ return 0;
+}
+
+static int overwrite_cb(int key)
+{
+ struct map_value zero = {};
+
+ /* Free the timer which may run on other CPU */
+ bpf_map_update_elem(&map, (void *)&key, &zero, BPF_ANY);
+
+ return 0;
+}
+
+SEC("syscall")
+int BPF_PROG(start_timer)
+{
+ bpf_loop(MAX_ENTRIES, start_cb, NULL, 0);
+ return 0;
+}
+
+SEC("syscall")
+int BPF_PROG(overwrite_timer)
+{
+ bpf_loop(MAX_ENTRIES, overwrite_cb, NULL, 0);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.29.2
* Re: [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem()
2025-01-17 10:18 ` [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem() Hou Tao
@ 2025-01-17 12:31 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-17 12:31 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> From: Hou Tao <houtao1@huawei.com>
>
> Use a goto statement to bail out early when the target element is not
> found, instead of using a large else branch to handle the more likely
> case. This change doesn't affect functionality and simply makes the code
> cleaner.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Yeah, nice cleanup :)
Reviewed-by: Toke Høiland-Jørgensen <toke@kernel.org>
* Re: [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node()
2025-01-17 10:18 ` [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node() Hou Tao
@ 2025-01-17 12:31 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-17 12:31 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> From: Hou Tao <houtao1@huawei.com>
>
> When bpf_timer is used in an LRU hash map, calling check_and_free_fields()
> in htab_lru_map_delete_node() will invoke bpf_timer_cancel_and_free() to
> free the bpf_timer. If the timer is running on another CPU,
> hrtimer_cancel() will invoke hrtimer_cancel_wait_running() to spin on the
> current CPU to wait for the completion of the hrtimer callback.
>
> The deletion procedure has already acquired a raw spin lock (the bucket
> lock). To reduce the time the bucket lock is held, move the invocation
> of check_and_free_fields() out of the bucket lock. However, because
> htab_lru_map_delete_node() is invoked with the LRU raw spin lock held,
> the freeing of special fields still happens in a locked scope.
>
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@kernel.org>
* Re: [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-17 10:18 ` [PATCH bpf-next v3 3/5] bpf: Free element after unlock " Hou Tao
@ 2025-01-17 12:35 ` Toke Høiland-Jørgensen
2025-01-20 8:49 ` Hou Tao
0 siblings, 1 reply; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-17 12:35 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> From: Hou Tao <houtao1@huawei.com>
>
> The freeing of special fields in a map value may acquire a spin-lock
> (e.g., the freeing of bpf_timer). However, the lookup_and_delete_elem
> procedure already holds a raw spin lock, which violates the lockdep
> rule.
This implies that we're fixing a locking violation here? Does this need
a Fixes tag?
-Toke
* Re: [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT
2025-01-17 10:18 ` [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT Hou Tao
@ 2025-01-17 12:40 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-17 12:40 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> From: Hou Tao <houtao1@huawei.com>
>
> During the update procedure, when overwriting an element in a pre-allocated
> htab, the freeing of the old_element is protected by the bucket lock. The
> bucket lock is necessary because the old_element has already been stashed
> in htab->extra_elems after alloc_htab_elem() returns. If the old_element
> were freed after the bucket lock is released, the stashed element could be
> reused by a concurrent update procedure, and the freeing of the old_element
> would run concurrently with that reuse. However, the invocation of
> check_and_free_fields() may acquire a spin-lock, which violates the lockdep
> rule because its caller already holds a raw spin lock (the bucket lock).
> The following warning is reported when such a race happens:
>
> BUG: scheduling while atomic: test_progs/676/0x00000003
> 3 locks held by test_progs/676:
> #0: ffffffff864b0240 (rcu_read_lock_trace){....}-{0:0}, at: bpf_prog_test_run_syscall+0x2c0/0x830
> #1: ffff88810e961188 (&htab->lockdep_key){....}-{2:2}, at: htab_map_update_elem+0x306/0x1500
> #2: ffff8881f4eac1b8 (&base->softirq_expiry_lock){....}-{2:2}, at: hrtimer_cancel_wait_running+0xe9/0x1b0
> Modules linked in: bpf_testmod(O)
> Preemption disabled at:
> [<ffffffff817837a3>] htab_map_update_elem+0x293/0x1500
> CPU: 0 UID: 0 PID: 676 Comm: test_progs Tainted: G ... 6.12.0+ #11
> Tainted: [W]=WARN, [O]=OOT_MODULE
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)...
> Call Trace:
> <TASK>
> dump_stack_lvl+0x57/0x70
> dump_stack+0x10/0x20
> __schedule_bug+0x120/0x170
> __schedule+0x300c/0x4800
> schedule_rtlock+0x37/0x60
> rtlock_slowlock_locked+0x6d9/0x54c0
> rt_spin_lock+0x168/0x230
> hrtimer_cancel_wait_running+0xe9/0x1b0
> hrtimer_cancel+0x24/0x30
> bpf_timer_delete_work+0x1d/0x40
> bpf_timer_cancel_and_free+0x5e/0x80
> bpf_obj_free_fields+0x262/0x4a0
> check_and_free_fields+0x1d0/0x280
> htab_map_update_elem+0x7fc/0x1500
> bpf_prog_9f90bc20768e0cb9_overwrite_cb+0x3f/0x43
> bpf_prog_ea601c4649694dbd_overwrite_timer+0x5d/0x7e
> bpf_prog_test_run_syscall+0x322/0x830
> __sys_bpf+0x135d/0x3ca0
> __x64_sys_bpf+0x75/0xb0
> x64_sys_call+0x1b5/0xa10
> do_syscall_64+0x3b/0xc0
> entry_SYSCALL_64_after_hwframe+0x4b/0x53
> ...
> </TASK>
>
> It seems feasible to break the reuse and refill of the per-cpu extra_elems
> into two independent parts: reuse the per-cpu extra_elems with the bucket
> lock held, and refill the old_element as the per-cpu extra_elems after
> the bucket lock is released. However, that would make concurrent
> overwrite procedures on the same CPU return an unexpected -E2BIG error
> when the map is full.
>
> Therefore, the patch fixes the locking problem by breaking the cancelling
> of the bpf_timer into two steps for PREEMPT_RT:
> 1) use hrtimer_try_to_cancel() and check its return value
> 2) if the timer is still running, use hrtimer_cancel() through a kworker
> to cancel it again
> Considering that the current implementation of hrtimer_cancel() will try
> to acquire the already-held softirq_expiry_lock when the target timer is
> running, the steps above are reasonable. However, the approach also has a
> downside: when the timer is running, its cancellation is delayed when the
> last map uref is released. The delay is also fixable (e.g., by breaking
> the cancelling of the bpf_timer into two parts: one in the locked scope,
> another in the unlocked scope), and it can be revised later if necessary.
>
> It is a bit hard to decide on the right Fixes tag. One reason is that
> the problem depends on PREEMPT_RT, which was enabled in v6.12.
> Considering that softirq_expiry_lock has existed since v5.4 and
> bpf_timer was introduced in v5.15, the bpf_timer commit is used in the
> Fixes tag and an extra Depends-on tag is added to state the dependency
> on PREEMPT_RT.
>
> Fixes: b00628b1c7d5 ("bpf: Introduce bpf timers.")
> Depends-on: v6.12+ with PREEMPT_RT enabled
> Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Closes: https://lore.kernel.org/bpf/20241106084527.4gPrMnHt@linutronix.de
> Signed-off-by: Hou Tao <houtao1@huawei.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@kernel.org>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-17 12:35 ` Toke Høiland-Jørgensen
@ 2025-01-20 8:49 ` Hou Tao
2025-01-20 8:52 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-20 8:49 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hi,
On 1/17/2025 8:35 PM, Toke Høiland-Jørgensen wrote:
> Hou Tao <houtao@huaweicloud.com> writes:
>
>> From: Hou Tao <houtao1@huawei.com>
>>
>> The freeing of special fields in map value may acquire a spin-lock
>> (e.g., the freeing of bpf_timer), however, the lookup_and_delete_elem
>> procedure has already held a raw-spin-lock, which violates the lockdep
>> rule.
> This implies that we're fixing a locking violation here? Does this need
> a Fixes tag?
>
> -Toke
Ah, the Fixes tag is a bit hard. The lockdep violation in this patch is
also related to PREEMPT_RT; however, lookup_and_delete_elem was
introduced in v5.14. Also, considering that patch #4 will fix the
lockdep violation in this case as well, I prefer not to add a Fixes tag
to the patch. Instead, I will update the commit message to state that it
reduces the locked scope of the bucket lock. What do you think?
> .
* Re: [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-20 8:49 ` Hou Tao
@ 2025-01-20 8:52 ` Toke Høiland-Jørgensen
2025-01-21 1:15 ` Hou Tao
0 siblings, 1 reply; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-20 8:52 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> Hi,
>
> On 1/17/2025 8:35 PM, Toke Høiland-Jørgensen wrote:
>> Hou Tao <houtao@huaweicloud.com> writes:
>>
>>> From: Hou Tao <houtao1@huawei.com>
>>>
>>> The freeing of special fields in map value may acquire a spin-lock
>>> (e.g., the freeing of bpf_timer), however, the lookup_and_delete_elem
>>> procedure has already held a raw-spin-lock, which violates the lockdep
>>> rule.
>> This implies that we're fixing a locking violation here? Does this need
>> a Fixes tag?
>>
>> -Toke
>
> Ah, the Fixes tag is a bit hard. The lockdep violation in this patch is
> also related to PREEMPT_RT; however, lookup_and_delete_elem was
> introduced in v5.14. Also, considering that patch #4 will fix the
> lockdep violation in this case as well, I prefer not to add a Fixes tag
> to the patch. Instead, I will update the commit message to state that it
> reduces the locked scope of the bucket lock. What do you think?
Sure; and maybe put the same explanation for why there's no Fixes tag
into the commit message as well? :)
-Toke
* Re: [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-20 8:52 ` Toke Høiland-Jørgensen
@ 2025-01-21 1:15 ` Hou Tao
2025-01-21 11:04 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 14+ messages in thread
From: Hou Tao @ 2025-01-21 1:15 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hi,
On 1/20/2025 4:52 PM, Toke Høiland-Jørgensen wrote:
> Hou Tao <houtao@huaweicloud.com> writes:
>
>> Hi,
>>
>> On 1/17/2025 8:35 PM, Toke Høiland-Jørgensen wrote:
>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>
>>>> From: Hou Tao <houtao1@huawei.com>
>>>>
>>>> The freeing of special fields in map value may acquire a spin-lock
>>>> (e.g., the freeing of bpf_timer), however, the lookup_and_delete_elem
>>>> procedure has already held a raw-spin-lock, which violates the lockdep
>>>> rule.
>>> This implies that we're fixing a locking violation here? Does this need
>>> a Fixes tag?
>>>
>>> -Toke
>> Ah, the Fixes tag is a bit hard. The lockdep violation in this patch is
>> also related to PREEMPT_RT; however, lookup_and_delete_elem was
>> introduced in v5.14. Also, considering that patch #4 will fix the
>> lockdep violation in this case as well, I prefer not to add a Fixes tag
>> to the patch. Instead, I will update the commit message to state that it
>> reduces the locked scope of the bucket lock. What do you think?
> Sure; and maybe put the same explanation for why there's no Fixes tag
> into the commit message as well? :)
I have rewritten the commit message for the patch and it is ready for
resend. However, it seems Alexei has already merged this patch set [1];
therefore, I will keep it as is.
[1]:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/commit/?id=d10cafc5d54a0f70681ab2f739ea6c46282c86f9
>
> -Toke
>
> .
* Re: [PATCH bpf-next v3 3/5] bpf: Free element after unlock in __htab_map_lookup_and_delete_elem()
2025-01-21 1:15 ` Hou Tao
@ 2025-01-21 11:04 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 14+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-01-21 11:04 UTC (permalink / raw)
To: Hou Tao, bpf
Cc: Martin KaFai Lau, Alexei Starovoitov, Andrii Nakryiko,
Eduard Zingerman, Song Liu, Hao Luo, Yonghong Song,
Daniel Borkmann, KP Singh, Stanislav Fomichev, Jiri Olsa,
John Fastabend, Sebastian Andrzej Siewior, houtao1, xukuohai
Hou Tao <houtao@huaweicloud.com> writes:
> Hi,
>
> On 1/20/2025 4:52 PM, Toke Høiland-Jørgensen wrote:
>> Hou Tao <houtao@huaweicloud.com> writes:
>>
>>> Hi,
>>>
>>> On 1/17/2025 8:35 PM, Toke Høiland-Jørgensen wrote:
>>>> Hou Tao <houtao@huaweicloud.com> writes:
>>>>
>>>>> From: Hou Tao <houtao1@huawei.com>
>>>>>
>>>>> The freeing of special fields in map value may acquire a spin-lock
>>>>> (e.g., the freeing of bpf_timer), however, the lookup_and_delete_elem
>>>>> procedure has already held a raw-spin-lock, which violates the lockdep
>>>>> rule.
>>>> This implies that we're fixing a locking violation here? Does this need
>>>> a Fixes tag?
>>>>
>>>> -Toke
>>> Ah, the Fixes tag is a bit hard. The lockdep violation in this patch is
>>> also related to PREEMPT_RT; however, lookup_and_delete_elem was
>>> introduced in v5.14. Also, considering that patch #4 will fix the
>>> lockdep violation in this case as well, I prefer not to add a Fixes tag
>>> to the patch. Instead, I will update the commit message to state that it
>>> reduces the locked scope of the bucket lock. What do you think?
>> Sure; and maybe put the same explanation for why there's no Fixes tag
>> into the commit message as well? :)
>
> I have rewritten the commit message for the patch and it is ready for
> resend. However it seems Alexei has already merged this patch set [1],
> therefore, I will keep it as is.
Ah well; thanks anyway! :)
-Toke
end of thread, other threads:[~2025-01-21 11:04 UTC | newest]
Thread overview: 14+ messages
2025-01-17 10:18 [PATCH bpf-next v3 0/5] Free htab element out of bucket lock Hou Tao
2025-01-17 10:18 ` [PATCH bpf-next v3 1/5] bpf: Free special fields after unlock in htab_lru_map_delete_node() Hou Tao
2025-01-17 12:31 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 2/5] bpf: Bail out early in __htab_map_lookup_and_delete_elem() Hou Tao
2025-01-17 12:31 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 3/5] bpf: Free element after unlock " Hou Tao
2025-01-17 12:35 ` Toke Høiland-Jørgensen
2025-01-20 8:49 ` Hou Tao
2025-01-20 8:52 ` Toke Høiland-Jørgensen
2025-01-21 1:15 ` Hou Tao
2025-01-21 11:04 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 4/5] bpf: Cancel the running bpf_timer through kworker for PREEMPT_RT Hou Tao
2025-01-17 12:40 ` Toke Høiland-Jørgensen
2025-01-17 10:18 ` [PATCH bpf-next v3 5/5] selftests/bpf: Add test case for the freeing of bpf_timer Hou Tao