* [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free
@ 2024-12-09 2:40 Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 1/5] jump_label: Define guard() for jump_label_lock Masami Hiramatsu (Google)
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:40 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
Hi,
Here is another series of patches to clean up kprobes and the probe events in
ftrace using guard() and __free(). This series defines a new guard() for
jump_label, and removes all of the gotos in kprobes.
Thanks,
---
Masami Hiramatsu (Google) (5):
jump_label: Define guard() for jump_label_lock
kprobes: Use guard() for external locks
kprobes: Use guard for rcu_read_lock
kprobes: Remove unneeded goto
kprobes: Remove remaining gotos
include/linux/jump_label.h | 3
kernel/kprobes.c | 383 ++++++++++++++++++++------------------------
2 files changed, 181 insertions(+), 205 deletions(-)
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 1/5] jump_label: Define guard() for jump_label_lock
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
@ 2024-12-09 2:41 ` Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 2/5] kprobes: Use guard() for external locks Masami Hiramatsu (Google)
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:41 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Define guard() for jump_label_lock()/jump_label_unlock() so that callers
can automatically release the lock when leaving the scope.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
include/linux/jump_label.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index f5a2727ca4a9..fdb79dd1ebd8 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -75,6 +75,7 @@
#include <linux/types.h>
#include <linux/compiler.h>
+#include <linux/cleanup.h>
extern bool static_key_initialized;
@@ -347,6 +348,8 @@ static inline void static_key_disable(struct static_key *key)
#endif /* CONFIG_JUMP_LABEL */
+DEFINE_LOCK_GUARD_0(jump_label_lock, jump_label_lock(), jump_label_unlock())
+
#define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE
#define jump_label_enabled static_key_enabled
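For illustration, the mechanism behind the guard defined above can be mimicked in plain C with the compiler's `cleanup` variable attribute: the lock is taken when the guard variable is initialized and released whenever it goes out of scope, on any exit path. This is a hedged userspace sketch, not the kernel implementation; all names (`demo_lock_count`, `jl_guard_t`, `JL_GUARD`) are invented for the example:

```c
#include <assert.h>

/* Invented stand-ins for jump_label_lock()/jump_label_unlock(). */
static int demo_lock_count;

static void demo_lock(void)   { demo_lock_count++; }
static void demo_unlock(void) { demo_lock_count--; }

/* Zero-argument lock guard: taken at declaration, dropped at scope exit,
 * roughly what DEFINE_LOCK_GUARD_0() generates via cleanup.h. */
typedef struct { int unused; } jl_guard_t;

static void jl_guard_exit(jl_guard_t *g)
{
	(void)g;
	demo_unlock();
}

#define JL_GUARD() \
	jl_guard_t __attribute__((cleanup(jl_guard_exit))) _g = \
		(demo_lock(), (jl_guard_t){0})

static int locked_section(void)
{
	JL_GUARD();		/* lock taken here */
	/* ... critical section; any return below still unlocks ... */
	return demo_lock_count;	/* evaluated while held, so 1 */
}
```

The point of the pattern is that error paths no longer need an unlock label: every `return` inside `locked_section()` runs `jl_guard_exit()` automatically.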
* [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 1/5] jump_label: Define guard() for jump_label_lock Masami Hiramatsu (Google)
@ 2024-12-09 2:41 ` Masami Hiramatsu (Google)
2024-12-09 11:04 ` Peter Zijlstra
2024-12-09 2:41 ` [PATCH 3/5] kprobes: Use guard for rcu_read_lock Masami Hiramatsu (Google)
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:41 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Use guard() for text_mutex, cpus_read_lock, and jump_label_lock in
kprobes.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
kernel/kprobes.c | 209 +++++++++++++++++++++++-------------------------------
1 file changed, 90 insertions(+), 119 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 62b5b08d809d..004eb8326520 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -596,41 +596,38 @@ static void kick_kprobe_optimizer(void)
/* Kprobe jump optimizer */
static void kprobe_optimizer(struct work_struct *work)
{
- mutex_lock(&kprobe_mutex);
- cpus_read_lock();
- mutex_lock(&text_mutex);
+ guard(mutex)(&kprobe_mutex);
- /*
- * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed)
- * kprobes before waiting for quiesence period.
- */
- do_unoptimize_kprobes();
+ scoped_guard(cpus_read_lock) {
+ guard(mutex)(&text_mutex);
- /*
- * Step 2: Wait for quiesence period to ensure all potentially
- * preempted tasks to have normally scheduled. Because optprobe
- * may modify multiple instructions, there is a chance that Nth
- * instruction is preempted. In that case, such tasks can return
- * to 2nd-Nth byte of jump instruction. This wait is for avoiding it.
- * Note that on non-preemptive kernel, this is transparently converted
- * to synchronoze_sched() to wait for all interrupts to have completed.
- */
- synchronize_rcu_tasks();
+ /*
+ * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed)
+ * kprobes before waiting for quiescence period.
+ */
+ do_unoptimize_kprobes();
- /* Step 3: Optimize kprobes after quiesence period */
- do_optimize_kprobes();
+ /*
+ * Step 2: Wait for quiescence period to ensure all potentially
+ * preempted tasks to have normally scheduled. Because optprobe
+ * may modify multiple instructions, there is a chance that Nth
+ * instruction is preempted. In that case, such tasks can return
+ * to 2nd-Nth byte of jump instruction. This wait is for avoiding it.
+ * Note that on non-preemptive kernel, this is transparently converted
+ * to synchronize_sched() to wait for all interrupts to have completed.
+ */
+ synchronize_rcu_tasks();
- /* Step 4: Free cleaned kprobes after quiesence period */
- do_free_cleaned_kprobes();
+ /* Step 3: Optimize kprobes after quiescence period */
+ do_optimize_kprobes();
- mutex_unlock(&text_mutex);
- cpus_read_unlock();
+ /* Step 4: Free cleaned kprobes after quiescence period */
+ do_free_cleaned_kprobes();
+ }
/* Step 5: Kick optimizer again if needed */
if (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list))
kick_kprobe_optimizer();
-
- mutex_unlock(&kprobe_mutex);
}
static void wait_for_kprobe_optimizer_locked(void)
@@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p)
return;
/* For preparing optimization, jump_label_text_reserved() is called. */
- cpus_read_lock();
- jump_label_lock();
- mutex_lock(&text_mutex);
+ guard(cpus_read_lock)();
+ guard(jump_label_lock)();
+ guard(mutex)(&text_mutex);
ap = alloc_aggr_kprobe(p);
if (!ap)
- goto out;
+ return;
op = container_of(ap, struct optimized_kprobe, kp);
if (!arch_prepared_optinsn(&op->optinsn)) {
/* If failed to setup optimizing, fallback to kprobe. */
arch_remove_optimized_kprobe(op);
kfree(op);
- goto out;
+ return;
}
init_aggr_kprobe(ap, p);
optimize_kprobe(ap); /* This just kicks optimizer thread. */
-
-out:
- mutex_unlock(&text_mutex);
- jump_label_unlock();
- cpus_read_unlock();
}
static void optimize_all_kprobes(void)
@@ -1158,12 +1150,9 @@ static int arm_kprobe(struct kprobe *kp)
if (unlikely(kprobe_ftrace(kp)))
return arm_kprobe_ftrace(kp);
- cpus_read_lock();
- mutex_lock(&text_mutex);
+ guard(cpus_read_lock)();
+ guard(mutex)(&text_mutex);
__arm_kprobe(kp);
- mutex_unlock(&text_mutex);
- cpus_read_unlock();
-
return 0;
}
@@ -1172,12 +1161,9 @@ static int disarm_kprobe(struct kprobe *kp, bool reopt)
if (unlikely(kprobe_ftrace(kp)))
return disarm_kprobe_ftrace(kp);
- cpus_read_lock();
- mutex_lock(&text_mutex);
+ guard(cpus_read_lock)();
+ guard(mutex)(&text_mutex);
__disarm_kprobe(kp, reopt);
- mutex_unlock(&text_mutex);
- cpus_read_unlock();
-
return 0;
}
@@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
int ret = 0;
struct kprobe *ap = orig_p;
- cpus_read_lock();
-
- /* For preparing optimization, jump_label_text_reserved() is called */
- jump_label_lock();
- mutex_lock(&text_mutex);
-
- if (!kprobe_aggrprobe(orig_p)) {
- /* If 'orig_p' is not an 'aggr_kprobe', create new one. */
- ap = alloc_aggr_kprobe(orig_p);
- if (!ap) {
- ret = -ENOMEM;
- goto out;
+ scoped_guard(cpus_read_lock) {
+ /* For preparing optimization, jump_label_text_reserved() is called */
+ guard(jump_label_lock)();
+ guard(mutex)(&text_mutex);
+
+ if (!kprobe_aggrprobe(orig_p)) {
+ /* If 'orig_p' is not an 'aggr_kprobe', create new one. */
+ ap = alloc_aggr_kprobe(orig_p);
+ if (!ap)
+ return -ENOMEM;
+ init_aggr_kprobe(ap, orig_p);
+ } else if (kprobe_unused(ap)) {
+ /* This probe is going to die. Rescue it */
+ ret = reuse_unused_kprobe(ap);
+ if (ret)
+ return ret;
}
- init_aggr_kprobe(ap, orig_p);
- } else if (kprobe_unused(ap)) {
- /* This probe is going to die. Rescue it */
- ret = reuse_unused_kprobe(ap);
- if (ret)
- goto out;
- }
- if (kprobe_gone(ap)) {
- /*
- * Attempting to insert new probe at the same location that
- * had a probe in the module vaddr area which already
- * freed. So, the instruction slot has already been
- * released. We need a new slot for the new probe.
- */
- ret = arch_prepare_kprobe(ap);
- if (ret)
+ if (kprobe_gone(ap)) {
/*
- * Even if fail to allocate new slot, don't need to
- * free the 'ap'. It will be used next time, or
- * freed by unregister_kprobe().
+ * Attempting to insert new probe at the same location that
+ * had a probe in the module vaddr area which already
+ * freed. So, the instruction slot has already been
+ * released. We need a new slot for the new probe.
*/
- goto out;
+ ret = arch_prepare_kprobe(ap);
+ if (ret)
+ /*
+ * Even if fail to allocate new slot, don't need to
+ * free the 'ap'. It will be used next time, or
+ * freed by unregister_kprobe().
+ */
+ return ret;
- /* Prepare optimized instructions if possible. */
- prepare_optimized_kprobe(ap);
+ /* Prepare optimized instructions if possible. */
+ prepare_optimized_kprobe(ap);
- /*
- * Clear gone flag to prevent allocating new slot again, and
- * set disabled flag because it is not armed yet.
- */
- ap->flags = (ap->flags & ~KPROBE_FLAG_GONE)
- | KPROBE_FLAG_DISABLED;
- }
-
- /* Copy the insn slot of 'p' to 'ap'. */
- copy_kprobe(ap, p);
- ret = add_new_kprobe(ap, p);
+ /*
+ * Clear gone flag to prevent allocating new slot again, and
+ * set disabled flag because it is not armed yet.
+ */
+ ap->flags = (ap->flags & ~KPROBE_FLAG_GONE)
+ | KPROBE_FLAG_DISABLED;
+ }
-out:
- mutex_unlock(&text_mutex);
- jump_label_unlock();
- cpus_read_unlock();
+ /* Copy the insn slot of 'p' to 'ap'. */
+ copy_kprobe(ap, p);
+ ret = add_new_kprobe(ap, p);
+ }
if (ret == 0 && kprobe_disabled(ap) && !kprobe_disabled(p)) {
ap->flags &= ~KPROBE_FLAG_DISABLED;
@@ -1559,26 +1538,23 @@ static int check_kprobe_address_safe(struct kprobe *p,
ret = check_ftrace_location(p);
if (ret)
return ret;
- jump_label_lock();
+
+ guard(jump_label_lock)();
/* Ensure the address is in a text area, and find a module if exists. */
*probed_mod = NULL;
if (!core_kernel_text((unsigned long) p->addr)) {
guard(preempt)();
*probed_mod = __module_text_address((unsigned long) p->addr);
- if (!(*probed_mod)) {
- ret = -EINVAL;
- goto out;
- }
+ if (!(*probed_mod))
+ return -EINVAL;
/*
* We must hold a refcount of the probed module while updating
* its code to prohibit unexpected unloading.
*/
- if (unlikely(!try_module_get(*probed_mod))) {
- ret = -ENOENT;
- goto out;
- }
+ if (unlikely(!try_module_get(*probed_mod)))
+ return -ENOENT;
}
/* Ensure it is not in reserved area. */
if (in_gate_area_no_mm((unsigned long) p->addr) ||
@@ -1588,8 +1564,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
find_bug((unsigned long)p->addr) ||
is_cfi_preamble_symbol((unsigned long)p->addr)) {
module_put(*probed_mod);
- ret = -EINVAL;
- goto out;
+ return -EINVAL;
}
/* Get module refcount and reject __init functions for loaded modules. */
@@ -1601,14 +1576,11 @@ static int check_kprobe_address_safe(struct kprobe *p,
if (within_module_init((unsigned long)p->addr, *probed_mod) &&
!module_is_coming(*probed_mod)) {
module_put(*probed_mod);
- ret = -ENOENT;
+ return -ENOENT;
}
}
-out:
- jump_label_unlock();
-
- return ret;
+ return 0;
}
static int __register_kprobe(struct kprobe *p)
@@ -1623,14 +1595,13 @@ static int __register_kprobe(struct kprobe *p)
/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
return register_aggr_kprobe(old_p, p);
- cpus_read_lock();
- /* Prevent text modification */
- mutex_lock(&text_mutex);
- ret = prepare_kprobe(p);
- mutex_unlock(&text_mutex);
- cpus_read_unlock();
- if (ret)
- return ret;
+ scoped_guard(cpus_read_lock) {
+ /* Prevent text modification */
+ guard(mutex)(&text_mutex);
+ ret = prepare_kprobe(p);
+ if (ret)
+ return ret;
+ }
INIT_HLIST_NODE(&p->hlist);
hlist_add_head_rcu(&p->hlist,
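The scoped_guard() shape used throughout this patch — hold the lock only for the braced block, release it even on an early `return` from inside — can be sketched in userspace C with a one-shot `for` loop plus the `cleanup` attribute. A hedged sketch, with `SCOPED_GUARD`, `take`, `drop_cb`, and `held` invented for the example:

```c
#include <assert.h>

static int held;	/* invented lock-depth counter */

static void take(void)          { held++; }
static void drop_cb(int *dummy) { (void)dummy; held--; }

/* One-shot for loop: the body runs exactly once with the lock held,
 * and the cleanup attribute drops it on every exit path. */
#define SCOPED_GUARD() \
	for (int __attribute__((cleanup(drop_cb))) _once = (take(), 0); \
	     !_once; _once = 1)

static int prepare(int fail)
{
	SCOPED_GUARD() {
		if (fail)
			return -1;	/* the lock is dropped here too */
	}
	/* The lock is already released at this point. */
	return held;
}
```

This is why __register_kprobe() above can `return ret` from inside the scoped_guard(cpus_read_lock) block without leaking text_mutex or the CPU read lock.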
* [PATCH 3/5] kprobes: Use guard for rcu_read_lock
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 1/5] jump_label: Define guard() for jump_label_lock Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 2/5] kprobes: Use guard() for external locks Masami Hiramatsu (Google)
@ 2024-12-09 2:41 ` Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 4/5] kprobes: Remove unneeded goto Masami Hiramatsu (Google)
2024-12-09 2:42 ` [PATCH 5/5] kprobes: Remove remaining gotos Masami Hiramatsu (Google)
4 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:41 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Use guard(rcu) for rcu_read_lock() so that we can remove unneeded
gotos and make the code more structured.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
kernel/kprobes.c | 66 +++++++++++++++++++++++++++++-------------------------
1 file changed, 36 insertions(+), 30 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 004eb8326520..a24587e8f91a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -144,30 +144,26 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
/* Since the slot array is not protected by rcu, we need a mutex */
guard(mutex)(&c->mutex);
- retry:
- rcu_read_lock();
- list_for_each_entry_rcu(kip, &c->pages, list) {
- if (kip->nused < slots_per_page(c)) {
- int i;
-
- for (i = 0; i < slots_per_page(c); i++) {
- if (kip->slot_used[i] == SLOT_CLEAN) {
- kip->slot_used[i] = SLOT_USED;
- kip->nused++;
- rcu_read_unlock();
- return kip->insns + (i * c->insn_size);
+ do {
+ guard(rcu)();
+ list_for_each_entry_rcu(kip, &c->pages, list) {
+ if (kip->nused < slots_per_page(c)) {
+ int i;
+
+ for (i = 0; i < slots_per_page(c); i++) {
+ if (kip->slot_used[i] == SLOT_CLEAN) {
+ kip->slot_used[i] = SLOT_USED;
+ kip->nused++;
+ return kip->insns + (i * c->insn_size);
+ }
}
+ /* kip->nused is broken. Fix it. */
+ kip->nused = slots_per_page(c);
+ WARN_ON(1);
}
- /* kip->nused is broken. Fix it. */
- kip->nused = slots_per_page(c);
- WARN_ON(1);
}
- }
- rcu_read_unlock();
-
/* If there are any garbage slots, collect it and try again. */
- if (c->nr_garbage && collect_garbage_slots(c) == 0)
- goto retry;
+ } while (c->nr_garbage && collect_garbage_slots(c) == 0);
/* All out of space. Need to allocate a new page. */
kip = kmalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
@@ -246,25 +242,35 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
return 0;
}
-void __free_insn_slot(struct kprobe_insn_cache *c,
- kprobe_opcode_t *slot, int dirty)
+static long __find_insn_page(struct kprobe_insn_cache *c,
+ kprobe_opcode_t *slot, struct kprobe_insn_page **pkip)
{
- struct kprobe_insn_page *kip;
+ struct kprobe_insn_page *kip = NULL;
long idx;
- guard(mutex)(&c->mutex);
- rcu_read_lock();
+ guard(rcu)();
list_for_each_entry_rcu(kip, &c->pages, list) {
idx = ((long)slot - (long)kip->insns) /
(c->insn_size * sizeof(kprobe_opcode_t));
- if (idx >= 0 && idx < slots_per_page(c))
- goto out;
+ if (idx >= 0 && idx < slots_per_page(c)) {
+ *pkip = kip;
+ return idx;
+ }
}
/* Could not find this slot. */
WARN_ON(1);
- kip = NULL;
-out:
- rcu_read_unlock();
+ *pkip = NULL;
+ return -1;
+}
+
+void __free_insn_slot(struct kprobe_insn_cache *c,
+ kprobe_opcode_t *slot, int dirty)
+{
+ struct kprobe_insn_page *kip = NULL;
+ long idx;
+
+ guard(mutex)(&c->mutex);
+ idx = __find_insn_page(c, slot, &kip);
/* Mark and sweep: this may sleep */
if (kip) {
/* Check double free */
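The `goto retry` conversion in __get_insn_slot() above follows a general shape: wrap the scoped search in a `do { } while (can-reclaim-and-retry)` loop, so the retry condition is stated once at the bottom instead of jumping back to a label. A hedged userspace sketch of that shape (`find_slot`, `collect`, and the slot arrays are invented stand-ins, not the kernel data structures):

```c
#include <assert.h>

#define NSLOTS 4
static int slot_used[NSLOTS] = { 1, 1, 1, 1 };	/* all busy initially */
static int nr_garbage = 2;			/* slots 1 and 3 are reclaimable */

/* Free the garbage slots; 0 means success, like collect_garbage_slots(). */
static int collect(void)
{
	slot_used[1] = slot_used[3] = 0;
	nr_garbage = 0;
	return 0;
}

/* Search for a clean slot; on failure, try garbage collection and search
 * again -- the do/while replaces the old "goto retry" form. */
static int find_slot(void)
{
	do {
		for (int i = 0; i < NSLOTS; i++) {
			if (!slot_used[i]) {
				slot_used[i] = 1;
				return i;	/* found a clean slot */
			}
		}
	} while (nr_garbage && collect() == 0);

	return -1;	/* all out of space: caller allocates a new page */
}
```

In the kernel version the search body additionally sits under guard(rcu), which is what makes the early `return` from inside the loop safe without an explicit rcu_read_unlock().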
* [PATCH 4/5] kprobes: Remove unneeded goto
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
` (2 preceding siblings ...)
2024-12-09 2:41 ` [PATCH 3/5] kprobes: Use guard for rcu_read_lock Masami Hiramatsu (Google)
@ 2024-12-09 2:41 ` Masami Hiramatsu (Google)
2024-12-09 2:42 ` [PATCH 5/5] kprobes: Remove remaining gotos Masami Hiramatsu (Google)
4 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:41 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Remove unneeded gotos. Since each label referred to by these gotos has
only one reference, we can replace the gotos with the code at the
referred label.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
kernel/kprobes.c | 45 +++++++++++++++++++++------------------------
1 file changed, 21 insertions(+), 24 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index a24587e8f91a..34cbbb2206f4 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1071,20 +1071,18 @@ static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
if (*cnt == 0) {
ret = register_ftrace_function(ops);
- if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret))
- goto err_ftrace;
+ if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) {
+ /*
+ * At this point, since ops is not registered, we should be safe from
+ * registering empty filter.
+ */
+ ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
+ return ret;
+ }
}
(*cnt)++;
return ret;
-
-err_ftrace:
- /*
- * At this point, sinec ops is not registered, we should be sefe from
- * registering empty filter.
- */
- ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
- return ret;
}
static int arm_kprobe_ftrace(struct kprobe *p)
@@ -1428,7 +1426,7 @@ _kprobe_addr(kprobe_opcode_t *addr, const char *symbol_name,
unsigned long offset, bool *on_func_entry)
{
if ((symbol_name && addr) || (!symbol_name && !addr))
- goto invalid;
+ return ERR_PTR(-EINVAL);
if (symbol_name) {
/*
@@ -1458,11 +1456,10 @@ _kprobe_addr(kprobe_opcode_t *addr, const char *symbol_name,
* at the start of the function.
*/
addr = arch_adjust_kprobe_addr((unsigned long)addr, offset, on_func_entry);
- if (addr)
- return addr;
+ if (!addr)
+ return ERR_PTR(-EINVAL);
-invalid:
- return ERR_PTR(-EINVAL);
+ return addr;
}
static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
@@ -1486,15 +1483,15 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p)
if (unlikely(!ap))
return NULL;
- if (p != ap) {
- list_for_each_entry(list_p, &ap->list, list)
- if (list_p == p)
- /* kprobe p is a valid probe */
- goto valid;
- return NULL;
- }
-valid:
- return ap;
+ if (p == ap)
+ return ap;
+
+ list_for_each_entry(list_p, &ap->list, list)
+ if (list_p == p)
+ /* kprobe p is a valid probe */
+ return ap;
+
+ return NULL;
}
/*
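The rewritten __get_valid_kprobe() above shows the standard goto-free shape for "return the aggregate if p is the aggregate itself or one of its members": handle the trivial case first, then return directly from inside the search loop. A hedged userspace sketch with an invented singly linked node type (not the kernel's struct kprobe):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	struct node *next;	/* children hanging off an aggregate */
};

/* Return agg when p is the aggregate itself or one of its children,
 * NULL otherwise -- the same control flow as the patched function. */
static struct node *get_valid(struct node *agg, struct node *p)
{
	if (p == agg)
		return agg;

	for (struct node *it = agg->next; it; it = it->next)
		if (it == p)
			return agg;	/* p is a valid member */

	return NULL;
}

/* Small self-check used by the assertions below; returns 0 on success. */
static int demo(void)
{
	struct node child = { NULL };
	struct node agg   = { &child };
	struct node other = { NULL };

	if (get_valid(&agg, &agg)   != &agg) return 1;
	if (get_valid(&agg, &child) != &agg) return 2;
	if (get_valid(&agg, &other) != NULL) return 3;
	return 0;
}
```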
* [PATCH 5/5] kprobes: Remove remaining gotos
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
` (3 preceding siblings ...)
2024-12-09 2:41 ` [PATCH 4/5] kprobes: Remove unneeded goto Masami Hiramatsu (Google)
@ 2024-12-09 2:42 ` Masami Hiramatsu (Google)
4 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-12-09 2:42 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra
Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Remove the remaining gotos from kprobes.c to clean up the code.
This does not use cleanup macros, but instead changes the code flow
to avoid gotos.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
kernel/kprobes.c | 63 +++++++++++++++++++++++++++---------------------------
1 file changed, 31 insertions(+), 32 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 34cbbb2206f4..030569210670 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1730,29 +1730,31 @@ static int __unregister_kprobe_top(struct kprobe *p)
if (IS_ERR(ap))
return PTR_ERR(ap);
- if (ap == p)
- /*
- * This probe is an independent(and non-optimized) kprobe
- * (not an aggrprobe). Remove from the hash list.
- */
- goto disarmed;
-
- /* Following process expects this probe is an aggrprobe */
- WARN_ON(!kprobe_aggrprobe(ap));
+ WARN_ON(ap != p && !kprobe_aggrprobe(ap));
- if (list_is_singular(&ap->list) && kprobe_disarmed(ap))
+ /*
+ * If the probe is an independent(and non-optimized) kprobe
+ * (not an aggrprobe), the last kprobe on the aggrprobe, or
+ * kprobe is already disarmed, just remove from the hash list.
+ */
+ if (ap == p ||
+ (list_is_singular(&ap->list) && kprobe_disarmed(ap))) {
/*
* !disarmed could be happen if the probe is under delayed
* unoptimizing.
*/
- goto disarmed;
- else {
- /* If disabling probe has special handlers, update aggrprobe */
- if (p->post_handler && !kprobe_gone(p)) {
- list_for_each_entry(list_p, &ap->list, list) {
- if ((list_p != p) && (list_p->post_handler))
- goto noclean;
- }
+ hlist_del_rcu(&ap->hlist);
+ return 0;
+ }
+
+ /* If disabling probe has special handlers, update aggrprobe */
+ if (p->post_handler && !kprobe_gone(p)) {
+ list_for_each_entry(list_p, &ap->list, list) {
+ if ((list_p != p) && (list_p->post_handler))
+ break;
+ }
+ /* No other probe has post_handler */
+ if (list_entry_is_head(list_p, &ap->list, list)) {
/*
* For the kprobe-on-ftrace case, we keep the
* post_handler setting to identify this aggrprobe
@@ -1761,24 +1763,21 @@ static int __unregister_kprobe_top(struct kprobe *p)
if (!kprobe_ftrace(ap))
ap->post_handler = NULL;
}
-noclean:
+ }
+
+ /*
+ * Remove from the aggrprobe: this path will do nothing in
+ * __unregister_kprobe_bottom().
+ */
+ list_del_rcu(&p->list);
+ if (!kprobe_disabled(ap) && !kprobes_all_disarmed)
/*
- * Remove from the aggrprobe: this path will do nothing in
- * __unregister_kprobe_bottom().
+ * Try to optimize this probe again, because post
+ * handler may have been changed.
*/
- list_del_rcu(&p->list);
- if (!kprobe_disabled(ap) && !kprobes_all_disarmed)
- /*
- * Try to optimize this probe again, because post
- * handler may have been changed.
- */
- optimize_kprobe(ap);
- }
+ optimize_kprobe(ap);
return 0;
-disarmed:
- hlist_del_rcu(&ap->hlist);
- return 0;
}
static void __unregister_kprobe_bottom(struct kprobe *p)
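The patch replaces `goto noclean` with a `break` followed by a list_entry_is_head() test: after the loop, checking whether the iterator ran off the end tells us no other entry had a post_handler. A hedged, array-based userspace sketch of that idiom (`others_have_flag` and the flag arrays are invented):

```c
#include <assert.h>

/* Scan for another element with the property, skipping index 'skip'.
 * If none is found, the cursor reaches the end sentinel -- the
 * list_entry_is_head() idiom in array form. */
static int others_have_flag(const int *flags, int n, int skip)
{
	int i;

	for (i = 0; i < n; i++) {
		if (i != skip && flags[i])
			break;		/* found another one */
	}
	return i < n;			/* 0 means "no other has the flag" */
}

/* Small self-check used by the assertions below; returns 0 on success. */
static int demo_flags(void)
{
	int only_me[3] = { 1, 0, 0 };	/* only the removed probe has one */
	int another[3] = { 1, 0, 1 };	/* another probe also has one */

	if (others_have_flag(only_me, 3, 0) != 0) return 1;
	if (others_have_flag(another, 3, 0) != 1) return 2;
	return 0;
}
```

In the patched kernel code, "no other has a post_handler" is the branch that may clear ap->post_handler (except in the kprobe-on-ftrace case).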
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-09 2:41 ` [PATCH 2/5] kprobes: Use guard() for external locks Masami Hiramatsu (Google)
@ 2024-12-09 11:04 ` Peter Zijlstra
2024-12-10 2:04 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2024-12-09 11:04 UTC (permalink / raw)
To: Masami Hiramatsu (Google)
Cc: Steven Rostedt, Anil S Keshavamurthy, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
On Mon, Dec 09, 2024 at 11:41:26AM +0900, Masami Hiramatsu (Google) wrote:
> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
> Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in
> the kprobes.
> @@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p)
> return;
>
> /* For preparing optimization, jump_label_text_reserved() is called. */
> - cpus_read_lock();
> - jump_label_lock();
> - mutex_lock(&text_mutex);
> + guard(cpus_read_lock)();
> + guard(jump_label_lock)();
> + guard(mutex)(&text_mutex);
>
> @@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> int ret = 0;
> struct kprobe *ap = orig_p;
>
> - cpus_read_lock();
> -
> - /* For preparing optimization, jump_label_text_reserved() is called */
> - jump_label_lock();
> - mutex_lock(&text_mutex);
Why does kprobe need jump_label_lock and how does it then not also need
static_call_lock ?
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-09 11:04 ` Peter Zijlstra
@ 2024-12-10 2:04 ` Masami Hiramatsu
2024-12-10 2:15 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2024-12-10 2:04 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Steven Rostedt, Anil S Keshavamurthy, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
On Mon, 9 Dec 2024 12:04:11 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Dec 09, 2024 at 11:41:26AM +0900, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> >
> > Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in
> > the kprobes.
>
> > @@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p)
> > return;
> >
> > /* For preparing optimization, jump_label_text_reserved() is called. */
> > - cpus_read_lock();
> > - jump_label_lock();
> > - mutex_lock(&text_mutex);
> > + guard(cpus_read_lock)();
> > + guard(jump_label_lock)();
> > + guard(mutex)(&text_mutex);
> >
>
> > @@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > int ret = 0;
> > struct kprobe *ap = orig_p;
> >
> > - cpus_read_lock();
> > -
> > - /* For preparing optimization, jump_label_text_reserved() is called */
> > - jump_label_lock();
> > - mutex_lock(&text_mutex);
>
> Why does kprobe need jump_label_lock and how does it then not also need
> static_call_lock ?
Good catch! It has not been updated for static_call_text_reserved().
We need static_call_lock() here too.
Thanks!
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-10 2:04 ` Masami Hiramatsu
@ 2024-12-10 2:15 ` Masami Hiramatsu
2024-12-10 12:10 ` Peter Zijlstra
0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2024-12-10 2:15 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Peter Zijlstra, Steven Rostedt, Anil S Keshavamurthy,
David S . Miller, Mathieu Desnoyers, Oleg Nesterov,
Tzvetomir Stoyanov, Naveen N Rao, Josh Poimboeuf, Jason Baron,
Ard Biesheuvel, linux-kernel, linux-trace-kernel
On Tue, 10 Dec 2024 11:04:28 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:
> On Mon, 9 Dec 2024 12:04:11 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > On Mon, Dec 09, 2024 at 11:41:26AM +0900, Masami Hiramatsu (Google) wrote:
> > > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > >
> > > Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in
> > > the kprobes.
> >
> > > @@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p)
> > > return;
> > >
> > > /* For preparing optimization, jump_label_text_reserved() is called. */
> > > - cpus_read_lock();
> > > - jump_label_lock();
> > > - mutex_lock(&text_mutex);
> > > + guard(cpus_read_lock)();
> > > + guard(jump_label_lock)();
> > > + guard(mutex)(&text_mutex);
> > >
> >
> > > @@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > > int ret = 0;
> > > struct kprobe *ap = orig_p;
> > >
> > > - cpus_read_lock();
> > > -
> > > - /* For preparing optimization, jump_label_text_reserved() is called */
> > > - jump_label_lock();
> > > - mutex_lock(&text_mutex);
> >
> > Why does kprobe need jump_label_lock and how does it then not also need
> > static_call_lock ?
>
> Good catch! It has not been updated for static_call_text_reserved().
> We need static_call_lock() here too.
Wait, this is for checking the jump_label_text_reserved(), but as far as
I know, the text reserved area of jump_label will be updated when the
module is loaded or removed. And the static call too, right?
In that case, what we need is to lock the modules (for the short term,
can we use rcu_read_lock?) for using both jump_label_text_reserved()
and static_call_text_reserved()?
Thank you,
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-10 2:15 ` Masami Hiramatsu
@ 2024-12-10 12:10 ` Peter Zijlstra
2024-12-10 14:12 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2024-12-10 12:10 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Steven Rostedt, Anil S Keshavamurthy, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
On Tue, Dec 10, 2024 at 11:15:28AM +0900, Masami Hiramatsu wrote:
> On Tue, 10 Dec 2024 11:04:28 +0900
> Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:
>
> > On Mon, 9 Dec 2024 12:04:11 +0100
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Mon, Dec 09, 2024 at 11:41:26AM +0900, Masami Hiramatsu (Google) wrote:
> > > > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > >
> > > > Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in
> > > > the kprobes.
> > >
> > > > @@ -853,29 +850,24 @@ static void try_to_optimize_kprobe(struct kprobe *p)
> > > > return;
> > > >
> > > > /* For preparing optimization, jump_label_text_reserved() is called. */
> > > > - cpus_read_lock();
> > > > - jump_label_lock();
> > > > - mutex_lock(&text_mutex);
> > > > + guard(cpus_read_lock)();
> > > > + guard(jump_label_lock)();
> > > > + guard(mutex)(&text_mutex);
> > > >
> > >
> > > > @@ -1294,62 +1280,55 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > > > int ret = 0;
> > > > struct kprobe *ap = orig_p;
> > > >
> > > > - cpus_read_lock();
> > > > -
> > > > - /* For preparing optimization, jump_label_text_reserved() is called */
> > > > - jump_label_lock();
> > > > - mutex_lock(&text_mutex);
> > >
> > > Why does kprobe need jump_label_lock and how does it then not also need
> > > static_call_lock ?
> >
> > Good catch! It has not been updated for static_call_text_reserved().
> > We need static_call_lock() here too.
>
> Wait, this is for checking the jump_label_text_reserved(), but as far as
> I know, the text reserved area of jump_label will be updated when the
> module is loaded or removed. And the static call too, right?
Correct.
> In that case, what we need is to lock the modules (for the short term,
> can we use rcu_read_lock?) for using both jump_label_text_reserved()
> and static_call_text_reserved()?
Yes, rcu_read_lock() is sufficient to observe fully loaded modules. I
don't think you care about placing kprobes on modules that are still
loading (that doesn't really make sense).
Also see:
https://lkml.kernel.org/r/20241205215102.hRywUW2A@linutronix.de
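To make the suggestion concrete, a minimal sketch of what such an RCU-guarded
check could look like (the helper name and the guard scope here are
illustrative, not from any posted patch):

```c
/*
 * Sketch only: cover both reserved-text queries with one RCU
 * read-side guard, assuming rcu_read_lock() is enough to keep
 * fully loaded module text stable for the duration.
 */
static bool text_is_reserved(void *addr)
{
	guard(rcu)();

	return jump_label_text_reserved(addr, addr) ||
	       static_call_text_reserved(addr, addr);
}
```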
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-10 12:10 ` Peter Zijlstra
@ 2024-12-10 14:12 ` Masami Hiramatsu
2024-12-10 23:17 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2024-12-10 14:12 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Steven Rostedt, Anil S Keshavamurthy, David S . Miller,
Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
Naveen N Rao, Josh Poimboeuf, Jason Baron, Ard Biesheuvel,
linux-kernel, linux-trace-kernel
On Tue, 10 Dec 2024 13:10:27 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
>
> > Wait, this is for checking the jump_label_text_reserved(), but as far as
> > I know, the text reserved area of jump_label will be updated when the
> > module is loaded or removed. And the static call too, right?
>
> Correct.
>
> > In that case, what we need is to lock the modules (for the short term,
> > can we use rcu_read_lock?) for using both jump_label_text_reserved()
> > and static_call_text_reserved()?
>
> Yes, rcu_read_lock() is sufficient to observe fully loaded modules. I
> don't think you care about placing kprobes on modules that are still
> loading (that doesn't really make sense).
Actually, to probe a module's __init function, trace_kprobe.c may place a
probe while the module is still loading; that is done from its module
notifier callback:
trace_kprobe_module_callback()
-> register_module_trace_kprobe()
-> __register_trace_kprobe()
-> register_kprobe()
-> check_kprobe_address_safe()
Anyway, unless we run the module notifier callbacks in parallel,
it should be safe.
Hmm, however, it seems that trace_probe's module notifier priority is not
correct: it must be lower than jump_label's, but currently it is the same.
OK, let me remove jump_label_lock() from kprobes (when it holds a module
reference), and give the trace_probe module notifier a lower priority to
ensure it runs after jump_label has been updated.
>
> Also see:
>
> https://lkml.kernel.org/r/20241205215102.hRywUW2A@linutronix.de
Thank you,
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH 2/5] kprobes: Use guard() for external locks
2024-12-10 14:12 ` Masami Hiramatsu
@ 2024-12-10 23:17 ` Masami Hiramatsu
0 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2024-12-10 23:17 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Peter Zijlstra, Steven Rostedt, Anil S Keshavamurthy,
David S . Miller, Mathieu Desnoyers, Oleg Nesterov,
Tzvetomir Stoyanov, Naveen N Rao, Josh Poimboeuf, Jason Baron,
Ard Biesheuvel, linux-kernel, linux-trace-kernel
On Tue, 10 Dec 2024 23:12:41 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:
> On Tue, 10 Dec 2024 13:10:27 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> >
> > > Wait, this is for checking the jump_label_text_reserved(), but as far as
> > > I know, the text reserved area of jump_label will be updated when the
> > > module is loaded or removed. And the static call too, right?
> >
> > Correct.
> >
> > > In that case, what we need is to lock the modules (for the short term,
> > > can we use rcu_read_lock?) for using both jump_label_text_reserved()
> > > and static_call_text_reserved()?
> >
> > Yes, rcu_read_lock() is sufficient to observe fully loaded modules. I
> > don't think you care about placing kprobes on modules that are still
> > loading (that doesn't really make sense).
>
> Actually, to probe module's __init function, it may put a probe during
> loading modules (by trace_kprobe.c) which has been done by module
> notification callback.
>
> trace_kprobe_module_callback()
> -> register_module_trace_kprobe()
> -> __register_trace_kprobe()
> -> register_kprobe()
> -> check_kprobe_address_safe()
>
> Anyway, unless we run the module notifier callbacks in parallel,
> it should be safe.
Hmm, this is still a problem. We need to acquire jump_label_lock()
even for jump_label_text_reserved().
If a user runs module loading and register_kprobe() in parallel,
jump_label_module_notify() and check_kprobe_address_safe() may race:
jump_label_module_notify()
-> jump_label_add_module()
-> jump_label_sort_entries() <- updates mod->jump_entries
check_kprobe_address_safe()
-> jump_label_text_reserved()
-> __jump_label_mod_text_reserved() <- reads mod->jump_entries
So there is a race on mod->jump_entries. jump_label_lock() avoids
this race.
(IIRC, a module reference can already be taken in the MODULE_STATE_COMING state.)
On the other hand, static_call_text_reserved() does not need a lock
because it neither sorts the list nor updates static_call_site::addr.
In conclusion, we need jump_label_lock() as it is, and don't need
static_call_lock().
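So the resulting check only needs the guard around the jump_label query,
roughly like this (a sketch of the relevant fragment using the guard from
patch 1/5; the helper name is illustrative):

```c
/*
 * Sketch: serialize against jump_label_add_module(), which
 * re-sorts mod->jump_entries, while querying reserved text.
 */
static int check_address_reserved(void *addr)
{
	scoped_guard(jump_label_lock) {
		if (jump_label_text_reserved(addr, addr))
			return -EBUSY;
	}
	/* No lock needed: static_call does not re-sort its sites. */
	if (static_call_text_reserved(addr, addr))
		return -EBUSY;

	return 0;
}
```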
Thank you,
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
end of thread, other threads:[~2024-12-10 23:17 UTC | newest]
Thread overview: 12+ messages
2024-12-09 2:40 [PATCH 0/5] kprobes: jump label: Cleanup with guard and __free Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 1/5] jump_label: Define guard() for jump_label_lock Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 2/5] kprobes: Use guard() for external locks Masami Hiramatsu (Google)
2024-12-09 11:04 ` Peter Zijlstra
2024-12-10 2:04 ` Masami Hiramatsu
2024-12-10 2:15 ` Masami Hiramatsu
2024-12-10 12:10 ` Peter Zijlstra
2024-12-10 14:12 ` Masami Hiramatsu
2024-12-10 23:17 ` Masami Hiramatsu
2024-12-09 2:41 ` [PATCH 3/5] kprobes: Use guard for rcu_read_lock Masami Hiramatsu (Google)
2024-12-09 2:41 ` [PATCH 4/5] kprobes: Remove unneeded goto Masami Hiramatsu (Google)
2024-12-09 2:42 ` [PATCH 5/5] kprobes: Remove remaining gotos Masami Hiramatsu (Google)