* [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard
@ 2024-11-26 13:56 Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 1/6] tracing/eprobe: Fix to release eprobe when failed to add dyn_event Masami Hiramatsu (Google)
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:56 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

Hi,

This series fixes eprobes and cleans up kprobes and the probe events in
ftrace to use guard() and scoped_guard() instead of discrete pairs of
mutex lock/unlock calls.

Some locks still do not use guard(); we need some more work to complete
the conversion.
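
For reference, the conversion pattern is roughly the following (an
illustrative sketch with a made-up lock and helper, not code from this
series):

	/* Before: a discrete lock/unlock pair around the critical section. */
	static int update_example(void)
	{
		int ret;

		mutex_lock(&example_mutex);
		ret = do_example_update();
		mutex_unlock(&example_mutex);

		return ret;
	}

	/* After: guard() takes the mutex and releases it automatically
	 * when the function returns, on every path.
	 */
	static int update_example(void)
	{
		guard(mutex)(&example_mutex);

		return do_example_update();
	}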

Thanks,

---

Masami Hiramatsu (Google) (6):
      tracing/eprobe: Fix to release eprobe when failed to add dyn_event
      kprobes: Adopt guard() and scoped_guard()
      tracing/kprobe: Adopt guard() and scoped_guard()
      tracing/uprobe: Adopt guard() and scoped_guard()
      tracing/eprobe: Adopt guard() and scoped_guard()
      tracing/dynevent: Adopt guard() and scoped_guard()


 kernel/kprobes.c              |  205 ++++++++++++++++++-----------------------
 kernel/trace/trace_dynevent.c |   12 +-
 kernel/trace/trace_eprobe.c   |   31 +++---
 kernel/trace/trace_kprobe.c   |   18 +---
 kernel/trace/trace_uprobe.c   |   15 +--
 5 files changed, 121 insertions(+), 160 deletions(-)

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* [PATCH 1/6] tracing/eprobe: Fix to release eprobe when failed to add dyn_event
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
@ 2024-11-26 13:56 ` Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 2/6] kprobes: Adopt guard() and scoped_guard() Masami Hiramatsu (Google)
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:56 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Fix the eprobe event code to correctly unregister the event call and
release the eprobe when it fails to add the dynamic event.

Fixes: 7491e2c44278 ("tracing: Add a probe that attaches to trace events")
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/trace/trace_eprobe.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
index ebda68ee9abf..be8be0c1aaf0 100644
--- a/kernel/trace/trace_eprobe.c
+++ b/kernel/trace/trace_eprobe.c
@@ -963,6 +963,11 @@ static int __trace_eprobe_create(int argc, const char *argv[])
 		goto error;
 	}
 	ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
+	if (ret < 0) {
+		trace_probe_unregister_event_call(&ep->tp);
+		mutex_unlock(&event_mutex);
+		goto error;
+	}
 	mutex_unlock(&event_mutex);
 	return ret;
 parse_error:



* [PATCH 2/6] kprobes: Adopt guard() and scoped_guard()
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 1/6] tracing/eprobe: Fix to release eprobe when failed to add dyn_event Masami Hiramatsu (Google)
@ 2024-11-26 13:56 ` Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 3/6] tracing/kprobe: " Masami Hiramatsu (Google)
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:56 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Use guard() or scoped_guard() for critical sections rather than
discrete lock/unlock pairs.
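
As a rough sketch of how the two helpers differ (made-up names, for
illustration only; the real conversions are in the diff below):

	#include <linux/cleanup.h>
	#include <linux/mutex.h>
	#include <linux/rcupdate.h>

	static DEFINE_MUTEX(example_mutex);	/* hypothetical lock */

	static void whole_function_section(void)
	{
		/* Held from here until the function returns. */
		guard(mutex)(&example_mutex);
		/* ... critical section ... */
	}

	static void bounded_section(void)
	{
		/* Held only for the attached block ... */
		scoped_guard(mutex, &example_mutex) {
			/* ... critical section ... */
		}
		/* ... so work such as synchronize_rcu() can run after
		 * the lock has already been dropped.
		 */
		synchronize_rcu();
	}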

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/kprobes.c |  205 +++++++++++++++++++++++-------------------------------
 1 file changed, 89 insertions(+), 116 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index cb9dbdafbbcf..62b5b08d809d 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -141,10 +141,9 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c);
 kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 {
 	struct kprobe_insn_page *kip;
-	kprobe_opcode_t *slot = NULL;
 
 	/* Since the slot array is not protected by rcu, we need a mutex */
-	mutex_lock(&c->mutex);
+	guard(mutex)(&c->mutex);
  retry:
 	rcu_read_lock();
 	list_for_each_entry_rcu(kip, &c->pages, list) {
@@ -155,9 +154,8 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 				if (kip->slot_used[i] == SLOT_CLEAN) {
 					kip->slot_used[i] = SLOT_USED;
 					kip->nused++;
-					slot = kip->insns + (i * c->insn_size);
 					rcu_read_unlock();
-					goto out;
+					return kip->insns + (i * c->insn_size);
 				}
 			}
 			/* kip->nused is broken. Fix it. */
@@ -174,12 +172,12 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 	/* All out of space.  Need to allocate a new page. */
 	kip = kmalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
 	if (!kip)
-		goto out;
+		return NULL;
 
 	kip->insns = c->alloc();
 	if (!kip->insns) {
 		kfree(kip);
-		goto out;
+		return NULL;
 	}
 	INIT_LIST_HEAD(&kip->list);
 	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
@@ -188,14 +186,12 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 	kip->ngarbage = 0;
 	kip->cache = c;
 	list_add_rcu(&kip->list, &c->pages);
-	slot = kip->insns;
 
 	/* Record the perf ksymbol register event after adding the page */
 	perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_OOL, (unsigned long)kip->insns,
 			   PAGE_SIZE, false, c->sym);
-out:
-	mutex_unlock(&c->mutex);
-	return slot;
+
+	return kip->insns;
 }
 
 /* Return true if all garbages are collected, otherwise false. */
@@ -256,7 +252,7 @@ void __free_insn_slot(struct kprobe_insn_cache *c,
 	struct kprobe_insn_page *kip;
 	long idx;
 
-	mutex_lock(&c->mutex);
+	guard(mutex)(&c->mutex);
 	rcu_read_lock();
 	list_for_each_entry_rcu(kip, &c->pages, list) {
 		idx = ((long)slot - (long)kip->insns) /
@@ -282,7 +278,6 @@ void __free_insn_slot(struct kprobe_insn_cache *c,
 			collect_one_slot(kip, idx);
 		}
 	}
-	mutex_unlock(&c->mutex);
 }
 
 /*
@@ -638,10 +633,9 @@ static void kprobe_optimizer(struct work_struct *work)
 	mutex_unlock(&kprobe_mutex);
 }
 
-/* Wait for completing optimization and unoptimization */
-void wait_for_kprobe_optimizer(void)
+static void wait_for_kprobe_optimizer_locked(void)
 {
-	mutex_lock(&kprobe_mutex);
+	lockdep_assert_held(&kprobe_mutex);
 
 	while (!list_empty(&optimizing_list) || !list_empty(&unoptimizing_list)) {
 		mutex_unlock(&kprobe_mutex);
@@ -653,8 +647,14 @@ void wait_for_kprobe_optimizer(void)
 
 		mutex_lock(&kprobe_mutex);
 	}
+}
 
-	mutex_unlock(&kprobe_mutex);
+/* Wait for completing optimization and unoptimization */
+void wait_for_kprobe_optimizer(void)
+{
+	guard(mutex)(&kprobe_mutex);
+
+	wait_for_kprobe_optimizer_locked();
 }
 
 bool optprobe_queued_unopt(struct optimized_kprobe *op)
@@ -884,10 +884,10 @@ static void optimize_all_kprobes(void)
 	struct kprobe *p;
 	unsigned int i;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 	/* If optimization is already allowed, just return. */
 	if (kprobes_allow_optimization)
-		goto out;
+		return;
 
 	cpus_read_lock();
 	kprobes_allow_optimization = true;
@@ -899,8 +899,6 @@ static void optimize_all_kprobes(void)
 	}
 	cpus_read_unlock();
 	pr_info("kprobe jump-optimization is enabled. All kprobes are optimized if possible.\n");
-out:
-	mutex_unlock(&kprobe_mutex);
 }
 
 #ifdef CONFIG_SYSCTL
@@ -910,12 +908,10 @@ static void unoptimize_all_kprobes(void)
 	struct kprobe *p;
 	unsigned int i;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 	/* If optimization is already prohibited, just return. */
-	if (!kprobes_allow_optimization) {
-		mutex_unlock(&kprobe_mutex);
+	if (!kprobes_allow_optimization)
 		return;
-	}
 
 	cpus_read_lock();
 	kprobes_allow_optimization = false;
@@ -927,10 +923,8 @@ static void unoptimize_all_kprobes(void)
 		}
 	}
 	cpus_read_unlock();
-	mutex_unlock(&kprobe_mutex);
-
 	/* Wait for unoptimizing completion. */
-	wait_for_kprobe_optimizer();
+	wait_for_kprobe_optimizer_locked();
 	pr_info("kprobe jump-optimization is disabled. All kprobes are based on software breakpoint.\n");
 }
 
@@ -942,7 +936,7 @@ static int proc_kprobes_optimization_handler(const struct ctl_table *table,
 {
 	int ret;
 
-	mutex_lock(&kprobe_sysctl_mutex);
+	guard(mutex)(&kprobe_sysctl_mutex);
 	sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
 	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
 
@@ -950,7 +944,6 @@ static int proc_kprobes_optimization_handler(const struct ctl_table *table,
 		optimize_all_kprobes();
 	else
 		unoptimize_all_kprobes();
-	mutex_unlock(&kprobe_sysctl_mutex);
 
 	return ret;
 }
@@ -1025,7 +1018,8 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
 #define __arm_kprobe(p)				arch_arm_kprobe(p)
 #define __disarm_kprobe(p, o)			arch_disarm_kprobe(p)
 #define kprobe_disarmed(p)			kprobe_disabled(p)
-#define wait_for_kprobe_optimizer()		do {} while (0)
+#define wait_for_kprobe_optimizer_locked()			\
+	lockdep_assert_held(&kprobe_mutex)
 
 static int reuse_unused_kprobe(struct kprobe *ap)
 {
@@ -1489,6 +1483,7 @@ _kprobe_addr(kprobe_opcode_t *addr, const char *symbol_name,
 static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
 {
 	bool on_func_entry;
+
 	return _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
 }
 
@@ -1523,14 +1518,12 @@ static struct kprobe *__get_valid_kprobe(struct kprobe *p)
  */
 static inline int warn_kprobe_rereg(struct kprobe *p)
 {
-	int ret = 0;
+	guard(mutex)(&kprobe_mutex);
 
-	mutex_lock(&kprobe_mutex);
 	if (WARN_ON_ONCE(__get_valid_kprobe(p)))
-		ret = -EINVAL;
-	mutex_unlock(&kprobe_mutex);
+		return -EINVAL;
 
-	return ret;
+	return 0;
 }
 
 static int check_ftrace_location(struct kprobe *p)
@@ -1618,44 +1611,17 @@ static int check_kprobe_address_safe(struct kprobe *p,
 	return ret;
 }
 
-int register_kprobe(struct kprobe *p)
+static int __register_kprobe(struct kprobe *p)
 {
 	int ret;
 	struct kprobe *old_p;
-	struct module *probed_mod;
-	kprobe_opcode_t *addr;
-	bool on_func_entry;
-
-	/* Adjust probe address from symbol */
-	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
-	if (IS_ERR(addr))
-		return PTR_ERR(addr);
-	p->addr = addr;
-
-	ret = warn_kprobe_rereg(p);
-	if (ret)
-		return ret;
 
-	/* User can pass only KPROBE_FLAG_DISABLED to register_kprobe */
-	p->flags &= KPROBE_FLAG_DISABLED;
-	p->nmissed = 0;
-	INIT_LIST_HEAD(&p->list);
-
-	ret = check_kprobe_address_safe(p, &probed_mod);
-	if (ret)
-		return ret;
-
-	mutex_lock(&kprobe_mutex);
-
-	if (on_func_entry)
-		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
+	guard(mutex)(&kprobe_mutex);
 
 	old_p = get_kprobe(p->addr);
-	if (old_p) {
+	if (old_p)
 		/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
-		ret = register_aggr_kprobe(old_p, p);
-		goto out;
-	}
+		return register_aggr_kprobe(old_p, p);
 
 	cpus_read_lock();
 	/* Prevent text modification */
@@ -1664,7 +1630,7 @@ int register_kprobe(struct kprobe *p)
 	mutex_unlock(&text_mutex);
 	cpus_read_unlock();
 	if (ret)
-		goto out;
+		return ret;
 
 	INIT_HLIST_NODE(&p->hlist);
 	hlist_add_head_rcu(&p->hlist,
@@ -1675,14 +1641,43 @@ int register_kprobe(struct kprobe *p)
 		if (ret) {
 			hlist_del_rcu(&p->hlist);
 			synchronize_rcu();
-			goto out;
 		}
 	}
 
 	/* Try to optimize kprobe */
 	try_to_optimize_kprobe(p);
-out:
-	mutex_unlock(&kprobe_mutex);
+	return 0;
+}
+
+int register_kprobe(struct kprobe *p)
+{
+	int ret;
+	struct module *probed_mod;
+	kprobe_opcode_t *addr;
+	bool on_func_entry;
+
+	/* Canonicalize probe address from symbol */
+	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
+	if (IS_ERR(addr))
+		return PTR_ERR(addr);
+	p->addr = addr;
+
+	ret = warn_kprobe_rereg(p);
+	if (ret)
+		return ret;
+
+	/* User can pass only KPROBE_FLAG_DISABLED to register_kprobe */
+	p->flags &= KPROBE_FLAG_DISABLED;
+	if (on_func_entry)
+		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
+	p->nmissed = 0;
+	INIT_LIST_HEAD(&p->list);
+
+	ret = check_kprobe_address_safe(p, &probed_mod);
+	if (ret)
+		return ret;
+
+	ret = __register_kprobe(p);
 
 	if (probed_mod)
 		module_put(probed_mod);
@@ -1858,12 +1853,11 @@ void unregister_kprobes(struct kprobe **kps, int num)
 
 	if (num <= 0)
 		return;
-	mutex_lock(&kprobe_mutex);
-	for (i = 0; i < num; i++)
-		if (__unregister_kprobe_top(kps[i]) < 0)
-			kps[i]->addr = NULL;
-	mutex_unlock(&kprobe_mutex);
-
+	scoped_guard(mutex, &kprobe_mutex) {
+		for (i = 0; i < num; i++)
+			if (__unregister_kprobe_top(kps[i]) < 0)
+				kps[i]->addr = NULL;
+	}
 	synchronize_rcu();
 	for (i = 0; i < num; i++)
 		if (kps[i]->addr)
@@ -2302,8 +2296,9 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
 
 	if (num <= 0)
 		return;
-	mutex_lock(&kprobe_mutex);
 	for (i = 0; i < num; i++) {
+		guard(mutex)(&kprobe_mutex);
+
 		if (__unregister_kprobe_top(&rps[i]->kp) < 0)
 			rps[i]->kp.addr = NULL;
 #ifdef CONFIG_KRETPROBE_ON_RETHOOK
@@ -2312,7 +2307,6 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
 		rcu_assign_pointer(rps[i]->rph->rp, NULL);
 #endif
 	}
-	mutex_unlock(&kprobe_mutex);
 
 	synchronize_rcu();
 	for (i = 0; i < num; i++) {
@@ -2393,18 +2387,14 @@ static void kill_kprobe(struct kprobe *p)
 /* Disable one kprobe */
 int disable_kprobe(struct kprobe *kp)
 {
-	int ret = 0;
 	struct kprobe *p;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 
 	/* Disable this kprobe */
 	p = __disable_kprobe(kp);
-	if (IS_ERR(p))
-		ret = PTR_ERR(p);
 
-	mutex_unlock(&kprobe_mutex);
-	return ret;
+	return IS_ERR(p) ? PTR_ERR(p) : 0;
 }
 EXPORT_SYMBOL_GPL(disable_kprobe);
 
@@ -2414,20 +2404,16 @@ int enable_kprobe(struct kprobe *kp)
 	int ret = 0;
 	struct kprobe *p;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 
 	/* Check whether specified probe is valid. */
 	p = __get_valid_kprobe(kp);
-	if (unlikely(p == NULL)) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (unlikely(p == NULL))
+		return -EINVAL;
 
-	if (kprobe_gone(kp)) {
+	if (kprobe_gone(kp))
 		/* This kprobe has gone, we couldn't enable it. */
-		ret = -EINVAL;
-		goto out;
-	}
+		return -EINVAL;
 
 	if (p != kp)
 		kp->flags &= ~KPROBE_FLAG_DISABLED;
@@ -2441,8 +2427,6 @@ int enable_kprobe(struct kprobe *kp)
 				kp->flags |= KPROBE_FLAG_DISABLED;
 		}
 	}
-out:
-	mutex_unlock(&kprobe_mutex);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(enable_kprobe);
@@ -2630,11 +2614,11 @@ static int kprobes_module_callback(struct notifier_block *nb,
 	unsigned int i;
 	int checkcore = (val == MODULE_STATE_GOING);
 
-	if (val == MODULE_STATE_COMING) {
-		mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
+
+	if (val == MODULE_STATE_COMING)
 		add_module_kprobe_blacklist(mod);
-		mutex_unlock(&kprobe_mutex);
-	}
+
 	if (val != MODULE_STATE_GOING && val != MODULE_STATE_LIVE)
 		return NOTIFY_DONE;
 
@@ -2644,7 +2628,6 @@ static int kprobes_module_callback(struct notifier_block *nb,
 	 * notified, only '.init.text' section would be freed. We need to
 	 * disable kprobes which have been inserted in the sections.
 	 */
-	mutex_lock(&kprobe_mutex);
 	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
 		head = &kprobe_table[i];
 		hlist_for_each_entry(p, head, hlist)
@@ -2667,7 +2650,6 @@ static int kprobes_module_callback(struct notifier_block *nb,
 	}
 	if (val == MODULE_STATE_GOING)
 		remove_module_kprobe_blacklist(mod);
-	mutex_unlock(&kprobe_mutex);
 	return NOTIFY_DONE;
 }
 
@@ -2695,7 +2677,7 @@ void kprobe_free_init_mem(void)
 	struct kprobe *p;
 	int i;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 
 	/* Kill all kprobes on initmem because the target code has been freed. */
 	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
@@ -2705,8 +2687,6 @@ void kprobe_free_init_mem(void)
 				kill_kprobe(p);
 		}
 	}
-
-	mutex_unlock(&kprobe_mutex);
 }
 
 static int __init init_kprobes(void)
@@ -2902,11 +2882,11 @@ static int arm_all_kprobes(void)
 	unsigned int i, total = 0, errors = 0;
 	int err, ret = 0;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 
 	/* If kprobes are armed, just return */
 	if (!kprobes_all_disarmed)
-		goto already_enabled;
+		return 0;
 
 	/*
 	 * optimize_kprobe() called by arm_kprobe() checks
@@ -2936,8 +2916,6 @@ static int arm_all_kprobes(void)
 	else
 		pr_info("Kprobes globally enabled\n");
 
-already_enabled:
-	mutex_unlock(&kprobe_mutex);
 	return ret;
 }
 
@@ -2948,13 +2926,11 @@ static int disarm_all_kprobes(void)
 	unsigned int i, total = 0, errors = 0;
 	int err, ret = 0;
 
-	mutex_lock(&kprobe_mutex);
+	guard(mutex)(&kprobe_mutex);
 
 	/* If kprobes are already disarmed, just return */
-	if (kprobes_all_disarmed) {
-		mutex_unlock(&kprobe_mutex);
+	if (kprobes_all_disarmed)
 		return 0;
-	}
 
 	kprobes_all_disarmed = true;
 
@@ -2979,11 +2955,8 @@ static int disarm_all_kprobes(void)
 	else
 		pr_info("Kprobes globally disabled\n");
 
-	mutex_unlock(&kprobe_mutex);
-
 	/* Wait for disarming all kprobes by optimizer */
-	wait_for_kprobe_optimizer();
-
+	wait_for_kprobe_optimizer_locked();
 	return ret;
 }
 



* [PATCH 3/6] tracing/kprobe: Adopt guard() and scoped_guard()
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 1/6] tracing/eprobe: Fix to release eprobe when failed to add dyn_event Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 2/6] kprobes: Adopt guard() and scoped_guard() Masami Hiramatsu (Google)
@ 2024-11-26 13:56 ` Masami Hiramatsu (Google)
  2024-11-26 13:56 ` [PATCH 4/6] tracing/uprobe: " Masami Hiramatsu (Google)
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:56 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Use guard() or scoped_guard() in kprobe events for critical sections
rather than discrete lock/unlock pairs.

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/trace/trace_kprobe.c |   18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 263fac44d3ca..bae26eb14449 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -634,7 +634,7 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
 	struct trace_kprobe *old_tk;
 	int ret;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	old_tk = find_trace_kprobe(trace_probe_name(&tk->tp),
 				   trace_probe_group_name(&tk->tp));
@@ -642,11 +642,9 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
 		if (trace_kprobe_is_return(tk) != trace_kprobe_is_return(old_tk)) {
 			trace_probe_log_set_index(0);
 			trace_probe_log_err(0, DIFF_PROBE_TYPE);
-			ret = -EEXIST;
-		} else {
-			ret = append_trace_kprobe(tk, old_tk);
+			return -EEXIST;
 		}
-		goto end;
+		return append_trace_kprobe(tk, old_tk);
 	}
 
 	/* Register new event */
@@ -657,7 +655,7 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
 			trace_probe_log_err(0, EVENT_EXIST);
 		} else
 			pr_warn("Failed to register probe event(%d)\n", ret);
-		goto end;
+		return ret;
 	}
 
 	/* Register k*probe */
@@ -672,8 +670,6 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
 	else
 		dyn_event_add(&tk->devent, trace_probe_event_call(&tk->tp));
 
-end:
-	mutex_unlock(&event_mutex);
 	return ret;
 }
 
@@ -706,7 +702,7 @@ static int trace_kprobe_module_callback(struct notifier_block *nb,
 		return NOTIFY_DONE;
 
 	/* Update probes on coming module */
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 	for_each_trace_kprobe(tk, pos) {
 		if (trace_kprobe_within_module(tk, mod)) {
 			/* Don't need to check busy - this should have gone. */
@@ -718,7 +714,6 @@ static int trace_kprobe_module_callback(struct notifier_block *nb,
 					module_name(mod), ret);
 		}
 	}
-	mutex_unlock(&event_mutex);
 
 	return NOTIFY_DONE;
 }
@@ -1968,13 +1963,12 @@ static __init void enable_boot_kprobe_events(void)
 	struct trace_kprobe *tk;
 	struct dyn_event *pos;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 	for_each_trace_kprobe(tk, pos) {
 		list_for_each_entry(file, &tr->events, list)
 			if (file->event_call == trace_probe_event_call(&tk->tp))
 				trace_event_enable_disable(file, 1, 0);
 	}
-	mutex_unlock(&event_mutex);
 }
 
 static __init void setup_boot_kprobe_events(void)



* [PATCH 4/6] tracing/uprobe: Adopt guard() and scoped_guard()
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
                   ` (2 preceding siblings ...)
  2024-11-26 13:56 ` [PATCH 3/6] tracing/kprobe: " Masami Hiramatsu (Google)
@ 2024-11-26 13:56 ` Masami Hiramatsu (Google)
  2024-11-26 13:57 ` [PATCH 5/6] tracing/eprobe: " Masami Hiramatsu (Google)
  2024-11-26 13:57 ` [PATCH 6/6] tracing/dynevent: " Masami Hiramatsu (Google)
  5 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:56 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Use guard() or scoped_guard() in uprobe events for critical sections
rather than discrete lock/unlock pairs.

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/trace/trace_uprobe.c |   15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index b30fc8fcd095..4150ab1d835e 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -496,11 +496,11 @@ static int register_trace_uprobe(struct trace_uprobe *tu)
 	struct trace_uprobe *old_tu;
 	int ret;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	ret = validate_ref_ctr_offset(tu);
 	if (ret)
-		goto end;
+		return ret;
 
 	/* register as an event */
 	old_tu = find_probe_event(trace_probe_name(&tu->tp),
@@ -509,11 +509,9 @@ static int register_trace_uprobe(struct trace_uprobe *tu)
 		if (is_ret_probe(tu) != is_ret_probe(old_tu)) {
 			trace_probe_log_set_index(0);
 			trace_probe_log_err(0, DIFF_PROBE_TYPE);
-			ret = -EEXIST;
-		} else {
-			ret = append_trace_uprobe(tu, old_tu);
+			return -EEXIST;
 		}
-		goto end;
+		return append_trace_uprobe(tu, old_tu);
 	}
 
 	ret = register_uprobe_event(tu);
@@ -523,14 +521,11 @@ static int register_trace_uprobe(struct trace_uprobe *tu)
 			trace_probe_log_err(0, EVENT_EXIST);
 		} else
 			pr_warn("Failed to register probe event(%d)\n", ret);
-		goto end;
+		return ret;
 	}
 
 	dyn_event_add(&tu->devent, trace_probe_event_call(&tu->tp));
 
-end:
-	mutex_unlock(&event_mutex);
-
 	return ret;
 }
 



* [PATCH 5/6] tracing/eprobe: Adopt guard() and scoped_guard()
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
                   ` (3 preceding siblings ...)
  2024-11-26 13:56 ` [PATCH 4/6] tracing/uprobe: " Masami Hiramatsu (Google)
@ 2024-11-26 13:57 ` Masami Hiramatsu (Google)
  2024-11-26 13:57 ` [PATCH 6/6] tracing/dynevent: " Masami Hiramatsu (Google)
  5 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:57 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Use guard() or scoped_guard() in eprobe events for critical sections
rather than discrete lock/unlock pairs.

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/trace/trace_eprobe.c |   36 +++++++++++++++++-------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
index be8be0c1aaf0..82fd637cfc19 100644
--- a/kernel/trace/trace_eprobe.c
+++ b/kernel/trace/trace_eprobe.c
@@ -917,10 +917,10 @@ static int __trace_eprobe_create(int argc, const char *argv[])
 		goto error;
 	}
 
-	mutex_lock(&event_mutex);
-	event_call = find_and_get_event(sys_name, sys_event);
-	ep = alloc_event_probe(group, event, event_call, argc - 2);
-	mutex_unlock(&event_mutex);
+	scoped_guard(mutex, &event_mutex) {
+		event_call = find_and_get_event(sys_name, sys_event);
+		ep = alloc_event_probe(group, event, event_call, argc - 2);
+	}
 
 	if (IS_ERR(ep)) {
 		ret = PTR_ERR(ep);
@@ -952,23 +952,21 @@ static int __trace_eprobe_create(int argc, const char *argv[])
 	if (ret < 0)
 		goto error;
 	init_trace_eprobe_call(ep);
-	mutex_lock(&event_mutex);
-	ret = trace_probe_register_event_call(&ep->tp);
-	if (ret) {
-		if (ret == -EEXIST) {
-			trace_probe_log_set_index(0);
-			trace_probe_log_err(0, EVENT_EXIST);
+	scoped_guard(mutex, &event_mutex) {
+		ret = trace_probe_register_event_call(&ep->tp);
+		if (ret) {
+			if (ret == -EEXIST) {
+				trace_probe_log_set_index(0);
+				trace_probe_log_err(0, EVENT_EXIST);
+			}
+			goto error;
+		}
+		ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
+		if (ret < 0) {
+			trace_probe_unregister_event_call(&ep->tp);
+			goto error;
 		}
-		mutex_unlock(&event_mutex);
-		goto error;
-	}
-	ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
-	if (ret < 0) {
-		trace_probe_unregister_event_call(&ep->tp);
-		mutex_unlock(&event_mutex);
-		goto error;
 	}
-	mutex_unlock(&event_mutex);
 	return ret;
 parse_error:
 	ret = -EINVAL;



* [PATCH 6/6] tracing/dynevent: Adopt guard() and scoped_guard()
  2024-11-26 13:56 [PATCH 0/6] kprobes: tracing/probes: Fix and cleanup to use guard Masami Hiramatsu (Google)
                   ` (4 preceding siblings ...)
  2024-11-26 13:57 ` [PATCH 5/6] tracing/eprobe: " Masami Hiramatsu (Google)
@ 2024-11-26 13:57 ` Masami Hiramatsu (Google)
  2024-11-28  8:42   ` kernel test robot
  5 siblings, 1 reply; 10+ messages in thread
From: Masami Hiramatsu (Google) @ 2024-11-26 13:57 UTC (permalink / raw)
  To: Steven Rostedt, Naveen N Rao
  Cc: Anil S Keshavamurthy, Masami Hiramatsu, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Use guard() or scoped_guard() in dynamic events for critical sections
rather than discrete lock/unlock pairs.

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 kernel/trace/trace_dynevent.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
index 4376887e0d8a..dbdcaef08f5d 100644
--- a/kernel/trace/trace_dynevent.c
+++ b/kernel/trace/trace_dynevent.c
@@ -63,9 +63,8 @@ int dyn_event_register(struct dyn_event_operations *ops)
 		return -EINVAL;
 
 	INIT_LIST_HEAD(&ops->list);
-	mutex_lock(&dyn_event_ops_mutex);
+	guard(mutex)(&dyn_event_ops_mutex);
 	list_add_tail(&ops->list, &dyn_event_ops_list);
-	mutex_unlock(&dyn_event_ops_mutex);
 	return 0;
 }
 
@@ -106,7 +105,7 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
 		goto out;
 	}
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 	for_each_dyn_event_safe(pos, n) {
 		if (type && type != pos->ops)
 			continue;
@@ -119,7 +118,6 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
 			break;
 	}
 	tracing_reset_all_online_cpus();
-	mutex_unlock(&event_mutex);
 out:
 	argv_free(argv);
 	return ret;
@@ -133,13 +131,12 @@ static int create_dyn_event(const char *raw_command)
 	if (raw_command[0] == '-' || raw_command[0] == '!')
 		return dyn_event_release(raw_command, NULL);
 
-	mutex_lock(&dyn_event_ops_mutex);
+	guard(mutex)(&dyn_event_ops_mutex);
 	list_for_each_entry(ops, &dyn_event_ops_list, list) {
 		ret = ops->create(raw_command);
 		if (!ret || ret != -ECANCELED)
 			break;
 	}
-	mutex_unlock(&dyn_event_ops_mutex);
 	if (ret == -ECANCELED)
 		ret = -EINVAL;
 
@@ -198,7 +195,7 @@ int dyn_events_release_all(struct dyn_event_operations *type)
 	struct dyn_event *ev, *tmp;
 	int ret = 0;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 	for_each_dyn_event(ev) {
 		if (type && ev->ops != type)
 			continue;
@@ -216,7 +213,6 @@ int dyn_events_release_all(struct dyn_event_operations *type)
 	}
 out:
 	tracing_reset_all_online_cpus();
-	mutex_unlock(&event_mutex);
 
 	return ret;
 }



* Re: [PATCH 6/6] tracing/dynevent: Adopt guard() and scoped_guard()
  2024-11-26 13:57 ` [PATCH 6/6] tracing/dynevent: " Masami Hiramatsu (Google)
@ 2024-11-28  8:42   ` kernel test robot
  2024-11-28 16:57     ` Steven Rostedt
  0 siblings, 1 reply; 10+ messages in thread
From: kernel test robot @ 2024-11-28  8:42 UTC (permalink / raw)
  To: Masami Hiramatsu (Google), Steven Rostedt, Naveen N Rao
  Cc: llvm, oe-kbuild-all, Anil S Keshavamurthy, Masami Hiramatsu,
	David S . Miller, Mathieu Desnoyers, Oleg Nesterov,
	Tzvetomir Stoyanov, linux-kernel, linux-trace-kernel

Hi Masami,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.12 next-20241128]
[cannot apply to rostedt-trace/for-next rostedt-trace/for-next-urgent]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Masami-Hiramatsu-Google/tracing-eprobe-Fix-to-release-eprobe-when-failed-to-add-dyn_event/20241128-100853
base:   linus/master
patch link:    https://lore.kernel.org/r/173262943230.8323.4595317585229847937.stgit%40devnote2
patch subject: [PATCH 6/6] tracing/dynevent: Adopt guard() and scoped_guard()
config: arm-randconfig-001-20241128 (https://download.01.org/0day-ci/archive/20241128/202411281612.F29skJjy-lkp@intel.com/config)
compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 592c0fe55f6d9a811028b5f3507be91458ab2713)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241128/202411281612.F29skJjy-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202411281612.F29skJjy-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from kernel/trace/trace_dynevent.c:11:
   In file included from include/linux/mm.h:2223:
   include/linux/vmstat.h:518:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
     518 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
         |                               ~~~~~~~~~~~ ^ ~~~
>> kernel/trace/trace_dynevent.c:105:3: error: cannot jump from this goto statement to its label
     105 |                 goto out;
         |                 ^
   kernel/trace/trace_dynevent.c:108:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
     108 |         guard(mutex)(&event_mutex);
         |         ^
   include/linux/cleanup.h:315:15: note: expanded from macro 'guard'
     315 |         CLASS(_name, __UNIQUE_ID(guard))
         |                      ^
   include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
     189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
         |                             ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
      84 | #define __PASTE(a,b) ___PASTE(a,b)
         |                      ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
      83 | #define ___PASTE(a,b) a##b
         |                       ^
   <scratch space>:162:1: note: expanded from here
     162 | __UNIQUE_ID_guard315
         | ^
   kernel/trace/trace_dynevent.c:92:4: error: cannot jump from this goto statement to its label
      92 |                         goto out;
         |                         ^
   kernel/trace/trace_dynevent.c:108:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
     108 |         guard(mutex)(&event_mutex);
         |         ^
   include/linux/cleanup.h:315:15: note: expanded from macro 'guard'
     315 |         CLASS(_name, __UNIQUE_ID(guard))
         |                      ^
   include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
     189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
         |                             ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
      84 | #define __PASTE(a,b) ___PASTE(a,b)
         |                      ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
      83 | #define ___PASTE(a,b) a##b
         |                       ^
   <scratch space>:162:1: note: expanded from here
     162 | __UNIQUE_ID_guard315
         | ^
   kernel/trace/trace_dynevent.c:85:4: error: cannot jump from this goto statement to its label
      85 |                         goto out;
         |                         ^
   kernel/trace/trace_dynevent.c:108:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
     108 |         guard(mutex)(&event_mutex);
         |         ^
   include/linux/cleanup.h:315:15: note: expanded from macro 'guard'
     315 |         CLASS(_name, __UNIQUE_ID(guard))
         |                      ^
   include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
     189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
         |                             ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
      84 | #define __PASTE(a,b) ___PASTE(a,b)
         |                      ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
      83 | #define ___PASTE(a,b) a##b
         |                       ^
   <scratch space>:162:1: note: expanded from here
     162 | __UNIQUE_ID_guard315
         | ^
   1 warning and 3 errors generated.


vim +105 kernel/trace/trace_dynevent.c

5448d44c38557f Masami Hiramatsu          2018-11-05  @11  #include <linux/mm.h>
5448d44c38557f Masami Hiramatsu          2018-11-05   12  #include <linux/mutex.h>
5448d44c38557f Masami Hiramatsu          2018-11-05   13  #include <linux/tracefs.h>
5448d44c38557f Masami Hiramatsu          2018-11-05   14  
5448d44c38557f Masami Hiramatsu          2018-11-05   15  #include "trace.h"
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   16) #include "trace_output.h"	/* for trace_event_sem */
5448d44c38557f Masami Hiramatsu          2018-11-05   17  #include "trace_dynevent.h"
5448d44c38557f Masami Hiramatsu          2018-11-05   18  
5448d44c38557f Masami Hiramatsu          2018-11-05   19  static DEFINE_MUTEX(dyn_event_ops_mutex);
5448d44c38557f Masami Hiramatsu          2018-11-05   20  static LIST_HEAD(dyn_event_ops_list);
5448d44c38557f Masami Hiramatsu          2018-11-05   21  
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   22) bool trace_event_dyn_try_get_ref(struct trace_event_call *dyn_call)
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   23) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   24) 	struct trace_event_call *call;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   25) 	bool ret = false;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   26) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   27) 	if (WARN_ON_ONCE(!(dyn_call->flags & TRACE_EVENT_FL_DYNAMIC)))
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   28) 		return false;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   29) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   30) 	down_read(&trace_event_sem);
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   31) 	list_for_each_entry(call, &ftrace_events, list) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   32) 		if (call == dyn_call) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   33) 			atomic_inc(&dyn_call->refcnt);
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   34) 			ret = true;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   35) 		}
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   36) 	}
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   37) 	up_read(&trace_event_sem);
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   38) 	return ret;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   39) }
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   40) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   41) void trace_event_dyn_put_ref(struct trace_event_call *call)
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   42) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   43) 	if (WARN_ON_ONCE(!(call->flags & TRACE_EVENT_FL_DYNAMIC)))
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   44) 		return;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   45) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   46) 	if (WARN_ON_ONCE(atomic_read(&call->refcnt) <= 0)) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   47) 		atomic_set(&call->refcnt, 0);
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   48) 		return;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   49) 	}
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   50) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   51) 	atomic_dec(&call->refcnt);
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   52) }
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   53) 
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   54) bool trace_event_dyn_busy(struct trace_event_call *call)
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   55) {
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   56) 	return atomic_read(&call->refcnt) != 0;
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   57) }
1d18538e6a0926 Steven Rostedt (VMware    2021-08-16   58) 
5448d44c38557f Masami Hiramatsu          2018-11-05   59  int dyn_event_register(struct dyn_event_operations *ops)
5448d44c38557f Masami Hiramatsu          2018-11-05   60  {
5448d44c38557f Masami Hiramatsu          2018-11-05   61  	if (!ops || !ops->create || !ops->show || !ops->is_busy ||
5448d44c38557f Masami Hiramatsu          2018-11-05   62  	    !ops->free || !ops->match)
5448d44c38557f Masami Hiramatsu          2018-11-05   63  		return -EINVAL;
5448d44c38557f Masami Hiramatsu          2018-11-05   64  
5448d44c38557f Masami Hiramatsu          2018-11-05   65  	INIT_LIST_HEAD(&ops->list);
79cc5c1710963f Masami Hiramatsu (Google  2024-11-26   66) 	guard(mutex)(&dyn_event_ops_mutex);
5448d44c38557f Masami Hiramatsu          2018-11-05   67  	list_add_tail(&ops->list, &dyn_event_ops_list);
5448d44c38557f Masami Hiramatsu          2018-11-05   68  	return 0;
5448d44c38557f Masami Hiramatsu          2018-11-05   69  }
5448d44c38557f Masami Hiramatsu          2018-11-05   70  
d262271d04830e Masami Hiramatsu          2021-02-01   71  int dyn_event_release(const char *raw_command, struct dyn_event_operations *type)
5448d44c38557f Masami Hiramatsu          2018-11-05   72  {
5448d44c38557f Masami Hiramatsu          2018-11-05   73  	struct dyn_event *pos, *n;
5448d44c38557f Masami Hiramatsu          2018-11-05   74  	char *system = NULL, *event, *p;
d262271d04830e Masami Hiramatsu          2021-02-01   75  	int argc, ret = -ENOENT;
d262271d04830e Masami Hiramatsu          2021-02-01   76  	char **argv;
d262271d04830e Masami Hiramatsu          2021-02-01   77  
d262271d04830e Masami Hiramatsu          2021-02-01   78  	argv = argv_split(GFP_KERNEL, raw_command, &argc);
d262271d04830e Masami Hiramatsu          2021-02-01   79  	if (!argv)
d262271d04830e Masami Hiramatsu          2021-02-01   80  		return -ENOMEM;
5448d44c38557f Masami Hiramatsu          2018-11-05   81  
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   82  	if (argv[0][0] == '-') {
d262271d04830e Masami Hiramatsu          2021-02-01   83  		if (argv[0][1] != ':') {
d262271d04830e Masami Hiramatsu          2021-02-01   84  			ret = -EINVAL;
d262271d04830e Masami Hiramatsu          2021-02-01   85  			goto out;
d262271d04830e Masami Hiramatsu          2021-02-01   86  		}
5448d44c38557f Masami Hiramatsu          2018-11-05   87  		event = &argv[0][2];
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   88  	} else {
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   89  		event = strchr(argv[0], ':');
d262271d04830e Masami Hiramatsu          2021-02-01   90  		if (!event) {
d262271d04830e Masami Hiramatsu          2021-02-01   91  			ret = -EINVAL;
d262271d04830e Masami Hiramatsu          2021-02-01   92  			goto out;
d262271d04830e Masami Hiramatsu          2021-02-01   93  		}
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   94  		event++;
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   95  	}
1ce25e9f6fff76 Masami Hiramatsu          2018-11-05   96  
5448d44c38557f Masami Hiramatsu          2018-11-05   97  	p = strchr(event, '/');
5448d44c38557f Masami Hiramatsu          2018-11-05   98  	if (p) {
5448d44c38557f Masami Hiramatsu          2018-11-05   99  		system = event;
5448d44c38557f Masami Hiramatsu          2018-11-05  100  		event = p + 1;
5448d44c38557f Masami Hiramatsu          2018-11-05  101  		*p = '\0';
5448d44c38557f Masami Hiramatsu          2018-11-05  102  	}
95c104c378dc7d Linyu Yuan                2022-06-27  103  	if (!system && event[0] == '\0') {
8db403b9631331 Christophe JAILLET        2021-04-11  104  		ret = -EINVAL;
8db403b9631331 Christophe JAILLET        2021-04-11 @105  		goto out;
8db403b9631331 Christophe JAILLET        2021-04-11  106  	}
5448d44c38557f Masami Hiramatsu          2018-11-05  107  
79cc5c1710963f Masami Hiramatsu (Google  2024-11-26  108) 	guard(mutex)(&event_mutex);
5448d44c38557f Masami Hiramatsu          2018-11-05  109  	for_each_dyn_event_safe(pos, n) {
5448d44c38557f Masami Hiramatsu          2018-11-05  110  		if (type && type != pos->ops)
5448d44c38557f Masami Hiramatsu          2018-11-05  111  			continue;
30199137c899d7 Masami Hiramatsu          2019-06-20  112  		if (!pos->ops->match(system, event,
d262271d04830e Masami Hiramatsu          2021-02-01  113  				argc - 1, (const char **)argv + 1, pos))
cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  114  			continue;
cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  115  
5448d44c38557f Masami Hiramatsu          2018-11-05  116  		ret = pos->ops->free(pos);
cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  117  		if (ret)
5448d44c38557f Masami Hiramatsu          2018-11-05  118  			break;
5448d44c38557f Masami Hiramatsu          2018-11-05  119  	}
4313e5a613049d Steven Rostedt (Google    2022-11-23  120) 	tracing_reset_all_online_cpus();
d262271d04830e Masami Hiramatsu          2021-02-01  121  out:
d262271d04830e Masami Hiramatsu          2021-02-01  122  	argv_free(argv);
5448d44c38557f Masami Hiramatsu          2018-11-05  123  	return ret;
5448d44c38557f Masami Hiramatsu          2018-11-05  124  }
5448d44c38557f Masami Hiramatsu          2018-11-05  125  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH 6/6] tracing/dynevent: Adopt guard() and scoped_guard()
  2024-11-28  8:42   ` kernel test robot
@ 2024-11-28 16:57     ` Steven Rostedt
  2024-11-29  8:16       ` Masami Hiramatsu
  0 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2024-11-28 16:57 UTC (permalink / raw)
  To: kernel test robot
  Cc: Masami Hiramatsu (Google), Naveen N Rao, llvm, oe-kbuild-all,
	Anil S Keshavamurthy, David S . Miller, Mathieu Desnoyers,
	Oleg Nesterov, Tzvetomir Stoyanov, linux-kernel,
	linux-trace-kernel

On Thu, 28 Nov 2024 16:42:47 +0800
kernel test robot <lkp@intel.com> wrote:

> 5448d44c38557f Masami Hiramatsu          2018-11-05   97  	p = strchr(event, '/');
> 5448d44c38557f Masami Hiramatsu          2018-11-05   98  	if (p) {
> 5448d44c38557f Masami Hiramatsu          2018-11-05   99  		system = event;
> 5448d44c38557f Masami Hiramatsu          2018-11-05  100  		event = p + 1;
> 5448d44c38557f Masami Hiramatsu          2018-11-05  101  		*p = '\0';
> 5448d44c38557f Masami Hiramatsu          2018-11-05  102  	}
> 95c104c378dc7d Linyu Yuan                2022-06-27  103  	if (!system && event[0] == '\0') {
> 8db403b9631331 Christophe JAILLET        2021-04-11  104  		ret = -EINVAL;
> 8db403b9631331 Christophe JAILLET        2021-04-11 @105  		goto out;

This is a legitimate bug. The "goto out" can't jump over a guard().

-- Steve
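
A minimal sketch of the construct clang rejects here (hypothetical
function, for illustration only):

	static int example(bool bad)
	{
		int ret = 0;

		if (bad) {
			ret = -EINVAL;
			goto out;	/* jumps forward, past the guard below */
		}

		/* Declares a variable with __attribute__((cleanup)). */
		guard(mutex)(&event_mutex);
		/* ... critical section ... */
	out:
		return ret;	/* the goto above bypasses the guard's
				 * initialization, so clang rejects the jump
				 */
	}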


> 8db403b9631331 Christophe JAILLET        2021-04-11  106  	}
> 5448d44c38557f Masami Hiramatsu          2018-11-05  107  
> 79cc5c1710963f Masami Hiramatsu (Google  2024-11-26  108) 	guard(mutex)(&event_mutex);
> 5448d44c38557f Masami Hiramatsu          2018-11-05  109  	for_each_dyn_event_safe(pos, n) {
> 5448d44c38557f Masami Hiramatsu          2018-11-05  110  		if (type && type != pos->ops)
> 5448d44c38557f Masami Hiramatsu          2018-11-05  111  			continue;
> 30199137c899d7 Masami Hiramatsu          2019-06-20  112  		if (!pos->ops->match(system, event,
> d262271d04830e Masami Hiramatsu          2021-02-01  113  				argc - 1, (const char **)argv + 1, pos))
> cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  114  			continue;
> cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  115  
> 5448d44c38557f Masami Hiramatsu          2018-11-05  116  		ret = pos->ops->free(pos);
> cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  117  		if (ret)
> 5448d44c38557f Masami Hiramatsu          2018-11-05  118  			break;
> 5448d44c38557f Masami Hiramatsu          2018-11-05  119  	}
> 4313e5a613049d Steven Rostedt (Google    2022-11-23  120) 	tracing_reset_all_online_cpus();
> d262271d04830e Masami Hiramatsu          2021-02-01  121  out:
> d262271d04830e Masami Hiramatsu          2021-02-01  122  	argv_free(argv);
> 5448d44c38557f Masami Hiramatsu          2018-11-05  123  	return ret;
> 5448d44c38557f Masami Hiramatsu          2018-11-05  124  }
> 5448d44c38557f Masami Hiramatsu          2018-11-05  125  


* Re: [PATCH 6/6] tracing/dynevent: Adopt guard() and scoped_guard()
  2024-11-28 16:57     ` Steven Rostedt
@ 2024-11-29  8:16       ` Masami Hiramatsu
  0 siblings, 0 replies; 10+ messages in thread
From: Masami Hiramatsu @ 2024-11-29  8:16 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: kernel test robot, Masami Hiramatsu (Google), Naveen N Rao, llvm,
	oe-kbuild-all, Anil S Keshavamurthy, David S . Miller,
	Mathieu Desnoyers, Oleg Nesterov, Tzvetomir Stoyanov,
	linux-kernel, linux-trace-kernel

On Thu, 28 Nov 2024 11:57:56 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Thu, 28 Nov 2024 16:42:47 +0800
> kernel test robot <lkp@intel.com> wrote:
> 
> > 5448d44c38557f Masami Hiramatsu          2018-11-05   97  	p = strchr(event, '/');
> > 5448d44c38557f Masami Hiramatsu          2018-11-05   98  	if (p) {
> > 5448d44c38557f Masami Hiramatsu          2018-11-05   99  		system = event;
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  100  		event = p + 1;
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  101  		*p = '\0';
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  102  	}
> > 95c104c378dc7d Linyu Yuan                2022-06-27  103  	if (!system && event[0] == '\0') {
> > 8db403b9631331 Christophe JAILLET        2021-04-11  104  		ret = -EINVAL;
> > 8db403b9631331 Christophe JAILLET        2021-04-11 @105  		goto out;
> 
> This is a legitimate bug. The "goto out" can't jump over a guard().

Hmm, OK. Let me fix that.

Thanks!

> 
> -- Steve
> 
> 
> > 8db403b9631331 Christophe JAILLET        2021-04-11  106  	}
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  107  
> > 79cc5c1710963f Masami Hiramatsu (Google  2024-11-26  108) 	guard(mutex)(&event_mutex);
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  109  	for_each_dyn_event_safe(pos, n) {
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  110  		if (type && type != pos->ops)
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  111  			continue;
> > 30199137c899d7 Masami Hiramatsu          2019-06-20  112  		if (!pos->ops->match(system, event,
> > d262271d04830e Masami Hiramatsu          2021-02-01  113  				argc - 1, (const char **)argv + 1, pos))
> > cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  114  			continue;
> > cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  115  
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  116  		ret = pos->ops->free(pos);
> > cb8e7a8d55e052 Masami Hiramatsu          2019-06-20  117  		if (ret)
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  118  			break;
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  119  	}
> > 4313e5a613049d Steven Rostedt (Google    2022-11-23  120) 	tracing_reset_all_online_cpus();
> > d262271d04830e Masami Hiramatsu          2021-02-01  121  out:
> > d262271d04830e Masami Hiramatsu          2021-02-01  122  	argv_free(argv);
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  123  	return ret;
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  124  }
> > 5448d44c38557f Masami Hiramatsu          2018-11-05  125  


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
