* [PATCH perf/core 00/11] uprobes: Add unique uprobe
@ 2025-09-02 14:34 Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
` (10 more replies)
0 siblings, 11 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
hi,
this patchset adds unique uprobe support and the ability to change
a userspace task's registers from within a bpf uprobe program.
We recently had several requests for tetragon to be able to change
a user application function's return value or divert its execution
through an instruction pointer change.
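For illustration, the bpf side of such a diversion ends up looking
roughly like the sketch below (based on the selftests added later in
the series; 'new_ip', the program name and the x86_64 register layout
are assumptions of the sketch):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  /* set from user space to the address of the diversion target */
  unsigned long new_ip;

  SEC("uprobe.unique")
  int BPF_UPROBE(divert)
  {
          /* take execution elsewhere instead of the probed instruction */
          ctx->ip = new_ip;
          return 0;
  }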
v1 changes (from rfc [1]):
- added unique probe support
- added more tests
thanks,
jirka
[1] https://lore.kernel.org/bpf/20250801210238.2207429-1-jolsa@kernel.org/
---
Jiri Olsa (11):
uprobes: Add unique flag to uprobe consumer
uprobes: Skip emulate/sstep on unique uprobe when ip is changed
perf: Add support to attach standard unique uprobe
bpf: Add support to attach uprobe_multi unique uprobe
bpf: Allow uprobe program to change context registers
libbpf: Add support to attach unique uprobe_multi uprobe
libbpf: Add support to attach generic unique uprobe
selftests/bpf: Add uprobe multi context registers changes test
selftests/bpf: Add uprobe multi context ip register change test
selftests/bpf: Add uprobe multi unique attach test
selftests/bpf: Add uprobe unique attach test
include/linux/bpf.h | 1 +
include/linux/trace_events.h | 2 +-
include/linux/uprobes.h | 1 +
include/uapi/linux/bpf.h | 3 +-
kernel/events/core.c | 12 ++++-
kernel/events/uprobes.c | 43 ++++++++++++++--
kernel/trace/bpf_trace.c | 7 +--
kernel/trace/trace_event_perf.c | 4 +-
kernel/trace/trace_probe.h | 2 +-
kernel/trace/trace_uprobe.c | 9 ++--
tools/include/uapi/linux/bpf.h | 3 +-
tools/lib/bpf/libbpf.c | 36 +++++++++++---
tools/lib/bpf/libbpf.h | 8 ++-
tools/testing/selftests/bpf/prog_tests/bpf_cookie.c | 1 +
tools/testing/selftests/bpf/prog_tests/uprobe.c | 111 ++++++++++++++++++++++++++++++++++++++++-
tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c | 248 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
tools/testing/selftests/bpf/progs/uprobe_multi.c | 38 ++++++++++++++
tools/testing/selftests/bpf/progs/uprobe_multi_unique.c | 34 +++++++++++++
18 files changed, 534 insertions(+), 29 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/uprobe_multi_unique.c
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-02 15:11 ` Masami Hiramatsu
2025-09-03 10:49 ` Oleg Nesterov
2025-09-02 14:34 ` [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed Jiri Olsa
` (9 subsequent siblings)
10 siblings, 2 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding a unique flag to the uprobe consumer to ensure it's the only
consumer attached to the uprobe.
This is helpful for use cases when a consumer wants to change user space
registers, which might confuse other consumers. With this change we can
ensure there's only one consumer on a specific uprobe.
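For in-kernel users the intended usage is roughly the sketch below
(assuming the current uprobe_register(inode, offset, ref_ctr_offset, uc)
signature; 'my_handler', 'inode' and 'offset' are placeholders):

  static struct uprobe_consumer uc = {
          .handler   = my_handler,  /* placeholder handler */
          .is_unique = true,        /* refuse to share this uprobe */
  };
  struct uprobe *u;

  u = uprobe_register(inode, offset, 0 /* ref_ctr_offset */, &uc);
  if (IS_ERR(u))
          /* -EBUSY when another consumer is already attached */
          return PTR_ERR(u);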
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/uprobes.h | 1 +
kernel/events/uprobes.c | 30 ++++++++++++++++++++++++++++--
2 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 08ef78439d0d..0df849dee720 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -60,6 +60,7 @@ struct uprobe_consumer {
struct list_head cons_node;
__u64 id; /* set when uprobe_consumer is registered */
+ bool is_unique; /* the only consumer on uprobe */
};
#ifdef CONFIG_UPROBES
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 996a81080d56..b9b088f7333a 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1024,14 +1024,35 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
return uprobe;
}
-static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
+static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *uc)
+{
+ /* Uprobe has no consumer, we can add any. */
+ if (list_empty(head))
+ return true;
+ /* Uprobe has consumer/s, we can't add unique one. */
+ if (uc->is_unique)
+ return false;
+ /*
+ * Uprobe has consumer/s, we can add another consumer only if the
+ * current consumer is not unique.
+ */
+ return !list_first_entry(head, struct uprobe_consumer, cons_node)->is_unique;
+}
+
+static int consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
{
static atomic64_t id;
+ int ret = -EBUSY;
down_write(&uprobe->consumer_rwsem);
+ if (!consumer_can_add(&uprobe->consumers, uc))
+ goto unlock;
list_add_rcu(&uc->cons_node, &uprobe->consumers);
uc->id = (__u64) atomic64_inc_return(&id);
+ ret = 0;
+unlock:
up_write(&uprobe->consumer_rwsem);
+ return ret;
}
/*
@@ -1420,7 +1441,12 @@ struct uprobe *uprobe_register(struct inode *inode,
return uprobe;
down_write(&uprobe->register_rwsem);
- consumer_add(uprobe, uc);
+ ret = consumer_add(uprobe, uc);
+ if (ret) {
+ put_uprobe(uprobe);
+ up_write(&uprobe->register_rwsem);
+ return ERR_PTR(ret);
+ }
ret = register_for_each_vma(uprobe, uc);
up_write(&uprobe->register_rwsem);
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-03 11:26 ` Oleg Nesterov
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
` (8 subsequent siblings)
10 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
If an uprobe consumer changes the instruction pointer we still execute
(single step or emulate) the original instruction and increment the ip
register by the size of the instruction.
In case the instruction is emulated, the new ip register value is
incremented by the instruction size and the process is likely to
crash with an illegal instruction.
In case the instruction is single-stepped, the ip register change
is lost and the process continues with the original ip register value.
If the user decided to take execution elsewhere, it makes little sense
to execute the original instruction, so let's skip it. This behaviour
is allowed only for an uprobe with a unique consumer attached.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/events/uprobes.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index b9b088f7333a..da8291941c6b 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2568,7 +2568,7 @@ static bool ignore_ret_handler(int rc)
return rc == UPROBE_HANDLER_REMOVE || rc == UPROBE_HANDLER_IGNORE;
}
-static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
+static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs, bool *is_unique)
{
struct uprobe_consumer *uc;
bool has_consumers = false, remove = true;
@@ -2582,6 +2582,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
__u64 cookie = 0;
int rc = 0;
+ if (is_unique)
+ *is_unique |= uc->is_unique;
+
if (uc->handler) {
rc = uc->handler(uc, regs, &cookie);
WARN(rc < 0 || rc > 2,
@@ -2735,6 +2738,7 @@ static void handle_swbp(struct pt_regs *regs)
{
struct uprobe *uprobe;
unsigned long bp_vaddr;
+ bool is_unique = false;
int is_swbp;
bp_vaddr = uprobe_get_swbp_addr(regs);
@@ -2789,7 +2793,10 @@ static void handle_swbp(struct pt_regs *regs)
if (arch_uprobe_ignore(&uprobe->arch, regs))
goto out;
- handler_chain(uprobe, regs);
+ handler_chain(uprobe, regs, &is_unique);
+
+ if (is_unique && instruction_pointer(regs) != bp_vaddr)
+ goto out;
/* Try to optimize after first hit. */
arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
@@ -2819,7 +2826,7 @@ void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr)
return;
if (arch_uprobe_ignore(&uprobe->arch, regs))
return;
- handler_chain(uprobe, regs);
+ handler_chain(uprobe, regs, NULL);
}
/*
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-02 16:13 ` Alexei Starovoitov
` (2 more replies)
2025-09-02 14:34 ` [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi " Jiri Olsa
` (7 subsequent siblings)
10 siblings, 3 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding support to attach a unique uprobe through the perf uprobe pmu.
Adding a new 'unique' format attribute that allows passing the request
to create a unique uprobe down to the uprobe consumer.
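For illustration, a user space tool could request it roughly like this
(sketch only; a real tool should read the bit position from
/sys/bus/event_source/devices/uprobe/format/unique and the pmu type
from .../uprobe/type; 'uprobe_pmu_type', 'pid', the path and the offset
are placeholders, needs linux/perf_event.h, sys/syscall.h, unistd.h):

  struct perf_event_attr attr = {
          .size         = sizeof(attr),
          .type         = uprobe_pmu_type,  /* read from sysfs */
          .uprobe_path  = (__u64)(unsigned long)"/bin/example",
          .probe_offset = 0x1234,
          .config       = 1ULL << 1,        /* the new 'unique' bit */
  };
  int fd = syscall(__NR_perf_event_open, &attr, pid, -1 /* cpu */, -1, 0);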
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/trace_events.h | 2 +-
kernel/events/core.c | 8 ++++++--
kernel/trace/trace_event_perf.c | 4 ++--
kernel/trace/trace_probe.h | 2 +-
kernel/trace/trace_uprobe.c | 9 +++++----
5 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 04307a19cde3..1d35727fda27 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -877,7 +877,7 @@ extern int bpf_get_kprobe_info(const struct perf_event *event,
#endif
#ifdef CONFIG_UPROBE_EVENTS
extern int perf_uprobe_init(struct perf_event *event,
- unsigned long ref_ctr_offset, bool is_retprobe);
+ unsigned long ref_ctr_offset, bool is_retprobe, bool is_unique);
extern void perf_uprobe_destroy(struct perf_event *event);
extern int bpf_get_uprobe_info(const struct perf_event *event,
u32 *fd_type, const char **filename,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 28de3baff792..10a9341c638f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11046,11 +11046,13 @@ EXPORT_SYMBOL_GPL(perf_tp_event);
*/
enum perf_probe_config {
PERF_PROBE_CONFIG_IS_RETPROBE = 1U << 0, /* [k,u]retprobe */
+ PERF_PROBE_CONFIG_IS_UNIQUE = 1U << 1, /* unique uprobe */
PERF_UPROBE_REF_CTR_OFFSET_BITS = 32,
PERF_UPROBE_REF_CTR_OFFSET_SHIFT = 64 - PERF_UPROBE_REF_CTR_OFFSET_BITS,
};
PMU_FORMAT_ATTR(retprobe, "config:0");
+PMU_FORMAT_ATTR(unique, "config:1");
#endif
#ifdef CONFIG_KPROBE_EVENTS
@@ -11114,6 +11116,7 @@ PMU_FORMAT_ATTR(ref_ctr_offset, "config:32-63");
static struct attribute *uprobe_attrs[] = {
&format_attr_retprobe.attr,
+ &format_attr_unique.attr,
&format_attr_ref_ctr_offset.attr,
NULL,
};
@@ -11144,7 +11147,7 @@ static int perf_uprobe_event_init(struct perf_event *event)
{
int err;
unsigned long ref_ctr_offset;
- bool is_retprobe;
+ bool is_retprobe, is_unique;
if (event->attr.type != perf_uprobe.type)
return -ENOENT;
@@ -11159,8 +11162,9 @@ static int perf_uprobe_event_init(struct perf_event *event)
return -EOPNOTSUPP;
is_retprobe = event->attr.config & PERF_PROBE_CONFIG_IS_RETPROBE;
+ is_unique = event->attr.config & PERF_PROBE_CONFIG_IS_UNIQUE;
ref_ctr_offset = event->attr.config >> PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
- err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe);
+ err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe, is_unique);
if (err)
return err;
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index a6bb7577e8c5..b4383ab21d88 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -296,7 +296,7 @@ void perf_kprobe_destroy(struct perf_event *p_event)
#ifdef CONFIG_UPROBE_EVENTS
int perf_uprobe_init(struct perf_event *p_event,
- unsigned long ref_ctr_offset, bool is_retprobe)
+ unsigned long ref_ctr_offset, bool is_retprobe, bool is_unique)
{
int ret;
char *path = NULL;
@@ -317,7 +317,7 @@ int perf_uprobe_init(struct perf_event *p_event,
}
tp_event = create_local_trace_uprobe(path, p_event->attr.probe_offset,
- ref_ctr_offset, is_retprobe);
+ ref_ctr_offset, is_retprobe, is_unique);
if (IS_ERR(tp_event)) {
ret = PTR_ERR(tp_event);
goto out;
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 842383fbc03b..92870b98b296 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -469,7 +469,7 @@ extern void destroy_local_trace_kprobe(struct trace_event_call *event_call);
extern struct trace_event_call *
create_local_trace_uprobe(char *name, unsigned long offs,
- unsigned long ref_ctr_offset, bool is_return);
+ unsigned long ref_ctr_offset, bool is_return, bool is_unique);
extern void destroy_local_trace_uprobe(struct trace_event_call *event_call);
#endif
extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 8b0bcc0d8f41..4ecb6083f949 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -333,7 +333,7 @@ trace_uprobe_primary_from_call(struct trace_event_call *call)
* Allocate new trace_uprobe and initialize it (including uprobes).
*/
static struct trace_uprobe *
-alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret)
+alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret, bool is_unique)
{
struct trace_uprobe *tu;
int ret;
@@ -356,6 +356,7 @@ alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret)
tu->consumer.handler = uprobe_dispatcher;
if (is_ret)
tu->consumer.ret_handler = uretprobe_dispatcher;
+ tu->consumer.is_unique = is_unique;
init_trace_uprobe_filter(tu->tp.event->filter);
return tu;
@@ -688,7 +689,7 @@ static int __trace_uprobe_create(int argc, const char **argv)
argc -= 2;
argv += 2;
- tu = alloc_trace_uprobe(group, event, argc, is_return);
+ tu = alloc_trace_uprobe(group, event, argc, is_return, false /* unique */);
if (IS_ERR(tu)) {
ret = PTR_ERR(tu);
/* This must return -ENOMEM otherwise there is a bug */
@@ -1636,7 +1637,7 @@ static int unregister_uprobe_event(struct trace_uprobe *tu)
#ifdef CONFIG_PERF_EVENTS
struct trace_event_call *
create_local_trace_uprobe(char *name, unsigned long offs,
- unsigned long ref_ctr_offset, bool is_return)
+ unsigned long ref_ctr_offset, bool is_return, bool is_unique)
{
enum probe_print_type ptype;
struct trace_uprobe *tu;
@@ -1658,7 +1659,7 @@ create_local_trace_uprobe(char *name, unsigned long offs,
* duplicated name "DUMMY_EVENT" here.
*/
tu = alloc_trace_uprobe(UPROBE_EVENT_SYSTEM, "DUMMY_EVENT", 0,
- is_return);
+ is_return, is_unique);
if (IS_ERR(tu)) {
pr_info("Failed to allocate trace_uprobe.(%d)\n",
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi unique uprobe
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (2 preceding siblings ...)
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-02 16:11 ` Alexei Starovoitov
2025-09-02 14:34 ` [PATCH perf/core 05/11] bpf: Allow uprobe program to change context registers Jiri Olsa
` (6 subsequent siblings)
10 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding support to attach a unique uprobe through the uprobe multi
link interface.
Adding a new BPF_F_UPROBE_MULTI_UNIQUE flag that denotes unique
uprobe creation.
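For illustration, with the low level libbpf link API the flag would be
used roughly like this (sketch only; 'prog_fd', the path and the offset
are placeholders):

  unsigned long off = 0x1234;
  LIBBPF_OPTS(bpf_link_create_opts, lopts);
  int link_fd;

  lopts.uprobe_multi.path    = "/bin/example";
  lopts.uprobe_multi.offsets = &off;
  lopts.uprobe_multi.cnt     = 1;
  lopts.uprobe_multi.flags   = BPF_F_UPROBE_MULTI_UNIQUE;

  link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &lopts);
  /* fails with -EBUSY if the uprobe already has another consumer */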
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/uapi/linux/bpf.h | 3 ++-
kernel/trace/bpf_trace.c | 4 +++-
tools/include/uapi/linux/bpf.h | 3 ++-
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 233de8677382..3de9eb469fe2 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1300,7 +1300,8 @@ enum {
* BPF_TRACE_UPROBE_MULTI attach type to create return probe.
*/
enum {
- BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
+ BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
+ BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
};
/* link_create.netfilter.flags used in LINK_CREATE command for
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3ae52978cae6..0674d5ac7b55 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3349,7 +3349,7 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
return -EINVAL;
flags = attr->link_create.uprobe_multi.flags;
- if (flags & ~BPF_F_UPROBE_MULTI_RETURN)
+ if (flags & ~(BPF_F_UPROBE_MULTI_RETURN|BPF_F_UPROBE_MULTI_UNIQUE))
return -EINVAL;
/*
@@ -3423,6 +3423,8 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
uprobes[i].link = link;
+ if (flags & BPF_F_UPROBE_MULTI_UNIQUE)
+ uprobes[i].consumer.is_unique = true;
if (!(flags & BPF_F_UPROBE_MULTI_RETURN))
uprobes[i].consumer.handler = uprobe_multi_link_handler;
if (flags & BPF_F_UPROBE_MULTI_RETURN || is_uprobe_session(prog))
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 233de8677382..3de9eb469fe2 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1300,7 +1300,8 @@ enum {
* BPF_TRACE_UPROBE_MULTI attach type to create return probe.
*/
enum {
- BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
+ BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
+ BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
};
/* link_create.netfilter.flags used in LINK_CREATE command for
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 05/11] bpf: Allow uprobe program to change context registers
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (3 preceding siblings ...)
2025-09-02 14:34 ` [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi " Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 06/11] libbpf: Add support to attach unique uprobe_multi uprobe Jiri Olsa
` (5 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Currently an uprobe (BPF_PROG_TYPE_KPROBE) program can't write to the
context registers. While this makes sense for kprobe attachments,
for uprobe attachments it might make sense to be able to change user
space registers to alter application execution.
Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
we can't deny write access to the context during program load. We need
to check it at program attachment time, when we know whether the target
is going to be a kprobe or an uprobe.
Store the fact that the program writes to the context and check it
during attachment.
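For illustration, a program like the sketch below (x86_64 register
layout assumed, names are placeholders, usual vmlinux.h /
bpf_helpers.h / bpf_tracing.h includes omitted) still loads fine, but
attaching it through a kprobe perf event now fails with -EINVAL, while
attaching it as an uprobe is allowed:

  SEC("uprobe")
  int BPF_UPROBE(poke_regs)
  {
          /* this write marks prog->aux->kprobe_write_ctx during load */
          ctx->ax = 0x1234;
          return 0;
  }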
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 1 +
kernel/events/core.c | 4 ++++
kernel/trace/bpf_trace.c | 3 +--
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cc700925b802..404a30cde84e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1619,6 +1619,7 @@ struct bpf_prog_aux {
bool priv_stack_requested;
bool changes_pkt_data;
bool might_sleep;
+ bool kprobe_write_ctx;
u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
struct bpf_arena *arena;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 10a9341c638f..74bb5008a1c4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11242,6 +11242,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
if (prog->kprobe_override && !is_kprobe)
return -EINVAL;
+ /* Writing to context allowed only for uprobes. */
+ if (prog->aux->kprobe_write_ctx && !is_uprobe)
+ return -EINVAL;
+
if (is_tracepoint || is_syscall_tp) {
int off = trace_event_get_offsets(event->tp_event);
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0674d5ac7b55..6fdec68563bd 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
{
if (off < 0 || off >= sizeof(struct pt_regs))
return false;
- if (type != BPF_READ)
- return false;
if (off % size != 0)
return false;
/*
@@ -1532,6 +1530,7 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
if (off + size > sizeof(struct pt_regs))
return false;
+ prog->aux->kprobe_write_ctx |= type == BPF_WRITE;
return true;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 06/11] libbpf: Add support to attach unique uprobe_multi uprobe
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (4 preceding siblings ...)
2025-09-02 14:34 ` [PATCH perf/core 05/11] bpf: Allow uprobe program to change context registers Jiri Olsa
@ 2025-09-02 14:34 ` Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe Jiri Olsa
` (4 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:34 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding support to attach a unique uprobe_multi uprobe by adding the
'unique' bool flag to struct bpf_uprobe_multi_opts.
Also adding "uprobe.unique[.s]" auto attach sections to create a
unique uprobe_multi uprobe.
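For illustration, a minimal usage sketch (the program, binary path and
symbol name are placeholders):

  LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
          .unique = true,
  );
  struct bpf_link *link;

  link = bpf_program__attach_uprobe_multi(prog, -1 /* pid */,
                                          "/proc/self/exe",
                                          "target_function", &opts);
  if (!link)
          /* e.g. -EBUSY when the uprobe already has a consumer */
          return -errno;

The new sections auto-attach the same way "uprobe.multi" does, so a
program can presumably also use something like
SEC("uprobe.unique//proc/self/exe:target_function") and skip the
manual attach above.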
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/libbpf.c | 7 ++++++-
tools/lib/bpf/libbpf.h | 4 +++-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 8f5a81b672e1..1f613a5f95b6 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9525,6 +9525,8 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("uprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uretprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uprobe.session+", KPROBE, BPF_TRACE_UPROBE_SESSION, SEC_NONE, attach_uprobe_multi),
+ SEC_DEF("uprobe.unique+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
+ SEC_DEF("uprobe.unique.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
SEC_DEF("uprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
SEC_DEF("uretprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
SEC_DEF("uprobe.session.s+", KPROBE, BPF_TRACE_UPROBE_SESSION, SEC_SLEEPABLE, attach_uprobe_multi),
@@ -11880,6 +11882,7 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
case 3:
opts.session = str_has_pfx(probe_type, "uprobe.session");
opts.retprobe = str_has_pfx(probe_type, "uretprobe.multi");
+ opts.unique = str_has_pfx(probe_type, "uprobe.unique");
*link = bpf_program__attach_uprobe_multi(prog, -1, binary_path, func_name, &opts);
ret = libbpf_get_error(*link);
@@ -12116,10 +12119,10 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
LIBBPF_OPTS(bpf_link_create_opts, lopts);
unsigned long *resolved_offsets = NULL;
enum bpf_attach_type attach_type;
+ bool retprobe, session, unique;
int err = 0, link_fd, prog_fd;
struct bpf_link *link = NULL;
char full_path[PATH_MAX];
- bool retprobe, session;
const __u64 *cookies;
const char **syms;
size_t cnt;
@@ -12141,6 +12144,7 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
cnt = OPTS_GET(opts, cnt, 0);
retprobe = OPTS_GET(opts, retprobe, false);
session = OPTS_GET(opts, session, false);
+ unique = OPTS_GET(opts, unique, false);
/*
* User can specify 2 mutually exclusive set of inputs:
@@ -12203,6 +12207,7 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
lopts.uprobe_multi.cookies = cookies;
lopts.uprobe_multi.cnt = cnt;
lopts.uprobe_multi.flags = retprobe ? BPF_F_UPROBE_MULTI_RETURN : 0;
+ lopts.uprobe_multi.flags |= unique ? BPF_F_UPROBE_MULTI_UNIQUE : 0;
if (pid == 0)
pid = getpid();
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 455a957cb702..13a10299331b 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -596,10 +596,12 @@ struct bpf_uprobe_multi_opts {
bool retprobe;
/* create session kprobes */
bool session;
+ /* create unique uprobe */
+ bool unique;
size_t :0;
};
-#define bpf_uprobe_multi_opts__last_field session
+#define bpf_uprobe_multi_opts__last_field unique
/**
* @brief **bpf_program__attach_uprobe_multi()** attaches a BPF program
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (5 preceding siblings ...)
2025-09-02 14:34 ` [PATCH perf/core 06/11] libbpf: Add support to attach unique uprobe_multi uprobe Jiri Olsa
@ 2025-09-02 14:35 ` Jiri Olsa
2025-09-03 18:26 ` Andrii Nakryiko
2025-09-02 14:35 ` [PATCH perf/core 08/11] selftests/bpf: Add uprobe multi context registers changes test Jiri Olsa
` (3 subsequent siblings)
10 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:35 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding support to attach a unique generic uprobe by adding the
'unique' bool flag to struct bpf_uprobe_opts.
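For illustration, a minimal usage sketch (the program, binary path and
function name are placeholders):

  LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts,
          .func_name = "target_function",
          .unique    = true,
  );
  struct bpf_link *link;

  link = bpf_program__attach_uprobe_opts(prog, -1 /* pid */,
                                         "/proc/self/exe",
                                         0 /* offset */, &uprobe_opts);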
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/libbpf.c | 29 ++++++++++++++++++++++++-----
tools/lib/bpf/libbpf.h | 4 +++-
2 files changed, 27 insertions(+), 6 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1f613a5f95b6..aac2bd4fb95e 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -11045,11 +11045,19 @@ static int determine_uprobe_retprobe_bit(void)
return parse_uint_from_file(file, "config:%d\n");
}
+static int determine_uprobe_unique_bit(void)
+{
+ const char *file = "/sys/bus/event_source/devices/uprobe/format/unique";
+
+ return parse_uint_from_file(file, "config:%d\n");
+}
+
#define PERF_UPROBE_REF_CTR_OFFSET_BITS 32
#define PERF_UPROBE_REF_CTR_OFFSET_SHIFT 32
static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
- uint64_t offset, int pid, size_t ref_ctr_off)
+ uint64_t offset, int pid, size_t ref_ctr_off,
+ bool unique)
{
const size_t attr_sz = sizeof(struct perf_event_attr);
struct perf_event_attr attr;
@@ -11080,6 +11088,16 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
}
attr.config |= 1 << bit;
}
+ if (uprobe && unique) {
+ int bit = determine_uprobe_unique_bit();
+
+ if (bit < 0) {
+ pr_warn("failed to determine uprobe unique bit: %s\n",
+ errstr(bit));
+ return bit;
+ }
+ attr.config |= 1 << bit;
+ }
attr.size = attr_sz;
attr.type = type;
attr.config |= (__u64)ref_ctr_off << PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
@@ -11286,7 +11304,7 @@ int probe_kern_syscall_wrapper(int token_fd)
if (determine_kprobe_perf_type() >= 0) {
int pfd;
- pfd = perf_event_open_probe(false, false, syscall_name, 0, getpid(), 0);
+ pfd = perf_event_open_probe(false, false, syscall_name, 0, getpid(), 0, false);
if (pfd >= 0)
close(pfd);
@@ -11348,7 +11366,7 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
if (!legacy) {
pfd = perf_event_open_probe(false /* uprobe */, retprobe,
func_name, offset,
- -1 /* pid */, 0 /* ref_ctr_off */);
+ -1 /* pid */, 0 /* ref_ctr_off */, false /* unique */);
} else {
char probe_name[MAX_EVENT_NAME_LEN];
@@ -12251,7 +12269,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
struct bpf_link *link;
size_t ref_ctr_off;
int pfd, err;
- bool retprobe, legacy;
+ bool retprobe, legacy, unique;
const char *func_name;
if (!OPTS_VALID(opts, bpf_uprobe_opts))
@@ -12261,6 +12279,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
retprobe = OPTS_GET(opts, retprobe, false);
ref_ctr_off = OPTS_GET(opts, ref_ctr_offset, 0);
pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);
+ unique = OPTS_GET(opts, unique, false);
if (!binary_path)
return libbpf_err_ptr(-EINVAL);
@@ -12321,7 +12340,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
if (!legacy) {
pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
- func_offset, pid, ref_ctr_off);
+ func_offset, pid, ref_ctr_off, unique);
} else {
char probe_name[MAX_EVENT_NAME_LEN];
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 13a10299331b..0a38dee1d9c1 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -701,9 +701,11 @@ struct bpf_uprobe_opts {
const char *func_name;
/* uprobe attach mode */
enum probe_attach_mode attach_mode;
+ /* create unique uprobe */
+ bool unique;
size_t :0;
};
-#define bpf_uprobe_opts__last_field attach_mode
+#define bpf_uprobe_opts__last_field unique
/**
* @brief **bpf_program__attach_uprobe()** attaches a BPF program
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 08/11] selftests/bpf: Add uprobe multi context registers changes test
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (6 preceding siblings ...)
2025-09-02 14:35 ` [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe Jiri Olsa
@ 2025-09-02 14:35 ` Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 09/11] selftests/bpf: Add uprobe multi context ip register change test Jiri Olsa
` (2 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:35 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding a test to check that we can change common register values
through an uprobe program.
It's an x86_64 specific test.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/bpf_cookie.c | 1 +
.../bpf/prog_tests/uprobe_multi_test.c | 107 ++++++++++++++++++
.../selftests/bpf/progs/uprobe_multi.c | 24 ++++
3 files changed, 132 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
index 4a0670c056ba..2425fd26d1ea 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
@@ -6,6 +6,7 @@
#include <sys/syscall.h>
#include <sys/mman.h>
#include <unistd.h>
+#include <asm/ptrace.h>
#include <test_progs.h>
#include <network_helpers.h>
#include <bpf/btf.h>
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
index 2ee17ef1dae2..012652b07399 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
@@ -2,6 +2,7 @@
#include <unistd.h>
#include <pthread.h>
+#include <asm/ptrace.h>
#include <test_progs.h>
#include "uprobe_multi.skel.h"
#include "uprobe_multi_bench.skel.h"
@@ -1340,6 +1341,111 @@ static void test_bench_attach_usdt(void)
printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
}
+#ifdef __x86_64__
+__naked __maybe_unused unsigned long uprobe_regs_change_trigger(void)
+{
+ asm volatile (
+ "ret\n"
+ );
+}
+
+static __naked void uprobe_regs_change(struct pt_regs *before, struct pt_regs *after)
+{
+ asm volatile (
+ "movq %r11, 48(%rdi)\n"
+ "movq %r10, 56(%rdi)\n"
+ "movq %r9, 64(%rdi)\n"
+ "movq %r8, 72(%rdi)\n"
+ "movq %rax, 80(%rdi)\n"
+ "movq %rcx, 88(%rdi)\n"
+ "movq %rdx, 96(%rdi)\n"
+ "movq %rsi, 104(%rdi)\n"
+ "movq %rdi, 112(%rdi)\n"
+
+ /* save 2nd argument */
+ "pushq %rsi\n"
+ "call uprobe_regs_change_trigger\n"
+
+ /* save return value and load 2nd argument pointer to rax */
+ "pushq %rax\n"
+ "movq 8(%rsp), %rax\n"
+
+ "movq %r11, 48(%rax)\n"
+ "movq %r10, 56(%rax)\n"
+ "movq %r9, 64(%rax)\n"
+ "movq %r8, 72(%rax)\n"
+ "movq %rcx, 88(%rax)\n"
+ "movq %rdx, 96(%rax)\n"
+ "movq %rsi, 104(%rax)\n"
+ "movq %rdi, 112(%rax)\n"
+
+ /* restore return value and 2nd argument */
+ "pop %rax\n"
+ "pop %rsi\n"
+
+ "movq %rax, 80(%rsi)\n"
+ "ret\n"
+ );
+}
+
+static void unique_regs_common(void)
+{
+ struct pt_regs before = {}, after = {}, expected = {
+ .rax = 0xc0ffe,
+ .rcx = 0xbad,
+ .rdx = 0xdead,
+ .r8 = 0x8,
+ .r9 = 0x9,
+ .r10 = 0x10,
+ .r11 = 0x11,
+ .rdi = 0x12,
+ .rsi = 0x13,
+ };
+ LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
+ .unique = true,
+ );
+ struct uprobe_multi *skel;
+
+ skel = uprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->bss->pid = getpid();
+ skel->bss->regs = expected;
+
+ skel->links.uprobe_change_regs = bpf_program__attach_uprobe_multi(
+ skel->progs.uprobe_change_regs,
+ -1, "/proc/self/exe",
+ "uprobe_regs_change_trigger",
+ &opts);
+ if (!ASSERT_OK_PTR(skel->links.uprobe_change_regs, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ uprobe_regs_change(&before, &after);
+
+ ASSERT_EQ(after.rax, expected.rax, "ax");
+ ASSERT_EQ(after.rcx, expected.rcx, "cx");
+ ASSERT_EQ(after.rdx, expected.rdx, "dx");
+ ASSERT_EQ(after.r8, expected.r8, "r8");
+ ASSERT_EQ(after.r9, expected.r9, "r9");
+ ASSERT_EQ(after.r10, expected.r10, "r10");
+ ASSERT_EQ(after.r11, expected.r11, "r11");
+ ASSERT_EQ(after.rdi, expected.rdi, "rdi");
+ ASSERT_EQ(after.rsi, expected.rsi, "rsi");
+
+cleanup:
+ uprobe_multi__destroy(skel);
+}
+
+static void test_unique(void)
+{
+ if (test__start_subtest("unique_regs_common"))
+ unique_regs_common();
+}
+#else
+static void test_unique(void) { }
+#endif
+
void test_uprobe_multi_test(void)
{
if (test__start_subtest("skel_api"))
@@ -1372,5 +1478,6 @@ void test_uprobe_multi_test(void)
test_session_cookie_skel_api();
if (test__start_subtest("session_cookie_recursive"))
test_session_recursive_skel_api();
+ test_unique();
RUN_TESTS(uprobe_multi_verifier);
}
diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi.c b/tools/testing/selftests/bpf/progs/uprobe_multi.c
index 44190efcdba2..f26f8b840985 100644
--- a/tools/testing/selftests/bpf/progs/uprobe_multi.c
+++ b/tools/testing/selftests/bpf/progs/uprobe_multi.c
@@ -141,3 +141,27 @@ int usdt_extra(struct pt_regs *ctx)
/* we need this one just to mix PID-filtered and global USDT probes */
return 0;
}
+
+#if defined(__TARGET_ARCH_x86)
+struct pt_regs regs;
+
+SEC("uprobe.unique")
+int BPF_UPROBE(uprobe_change_regs)
+{
+ pid_t cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (cur_pid != pid)
+ return 0;
+
+ ctx->ax = regs.ax;
+ ctx->cx = regs.cx;
+ ctx->dx = regs.dx;
+ ctx->r8 = regs.r8;
+ ctx->r9 = regs.r9;
+ ctx->r10 = regs.r10;
+ ctx->r11 = regs.r11;
+ ctx->di = regs.di;
+ ctx->si = regs.si;
+ return 0;
+}
+#endif
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 09/11] selftests/bpf: Add uprobe multi context ip register change test
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (7 preceding siblings ...)
2025-09-02 14:35 ` [PATCH perf/core 08/11] selftests/bpf: Add uprobe multi context registers changes test Jiri Olsa
@ 2025-09-02 14:35 ` Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 10/11] selftests/bpf: Add uprobe multi unique attach test Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 11/11] selftests/bpf: Add uprobe " Jiri Olsa
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:35 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding a test to check that we can divert application execution by
changing the instruction pointer from an uprobe program.
It's an x86_64 specific test.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../bpf/prog_tests/uprobe_multi_test.c | 42 +++++++++++++++++++
.../selftests/bpf/progs/uprobe_multi.c | 14 +++++++
2 files changed, 56 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
index 012652b07399..4630a6c65c3c 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
@@ -1437,10 +1437,52 @@ static void unique_regs_common(void)
uprobe_multi__destroy(skel);
}
+noinline static unsigned long uprobe_regs_change_ip_1(void)
+{
+ return 0xc0ffee;
+}
+
+noinline static unsigned long uprobe_regs_change_ip_2(void)
+{
+ return 0xdeadbeef;
+}
+
+static void unique_regs_ip(void)
+{
+ LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
+ .unique = true,
+ );
+ struct uprobe_multi *skel;
+ int ret;
+
+ skel = uprobe_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->bss->pid = getpid();
+ skel->bss->ip = (unsigned long) uprobe_regs_change_ip_2;
+
+ skel->links.uprobe_change_ip = bpf_program__attach_uprobe_multi(
+ skel->progs.uprobe_change_ip,
+ -1, "/proc/self/exe",
+ "uprobe_regs_change_ip_1",
+ &opts);
+ if (!ASSERT_OK_PTR(skel->links.uprobe_change_ip, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ ret = uprobe_regs_change_ip_1();
+ ASSERT_EQ(ret, 0xdeadbeef, "ret");
+
+cleanup:
+ uprobe_multi__destroy(skel);
+}
+
static void test_unique(void)
{
if (test__start_subtest("unique_regs_common"))
unique_regs_common();
+ if (test__start_subtest("unique_regs_ip"))
+ unique_regs_ip();
}
#else
static void test_unique(void) { }
diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi.c b/tools/testing/selftests/bpf/progs/uprobe_multi.c
index f26f8b840985..563fd37ed77d 100644
--- a/tools/testing/selftests/bpf/progs/uprobe_multi.c
+++ b/tools/testing/selftests/bpf/progs/uprobe_multi.c
@@ -164,4 +164,18 @@ int BPF_UPROBE(uprobe_change_regs)
ctx->si = regs.si;
return 0;
}
+
+unsigned long ip;
+
+SEC("uprobe.unique")
+int BPF_UPROBE(uprobe_change_ip)
+{
+ pid_t cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (cur_pid != pid)
+ return 0;
+
+ ctx->ip = ip;
+ return 0;
+}
#endif
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 10/11] selftests/bpf: Add uprobe multi unique attach test
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (8 preceding siblings ...)
2025-09-02 14:35 ` [PATCH perf/core 09/11] selftests/bpf: Add uprobe multi context ip register change test Jiri Olsa
@ 2025-09-02 14:35 ` Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 11/11] selftests/bpf: Add uprobe " Jiri Olsa
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:35 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding a test to check unique uprobe attachment together with a
not-unique uprobe on top of the uprobe_multi link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../bpf/prog_tests/uprobe_multi_test.c | 99 +++++++++++++++++++
.../selftests/bpf/progs/uprobe_multi_unique.c | 34 +++++++
2 files changed, 133 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/uprobe_multi_unique.c
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
index 4630a6c65c3c..1043bc4387e8 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
@@ -14,6 +14,7 @@
#include "uprobe_multi_session_cookie.skel.h"
#include "uprobe_multi_session_recursive.skel.h"
#include "uprobe_multi_verifier.skel.h"
+#include "uprobe_multi_unique.skel.h"
#include "bpf/libbpf_internal.h"
#include "testing_helpers.h"
#include "../sdt.h"
@@ -1477,12 +1478,110 @@ static void unique_regs_ip(void)
uprobe_multi__destroy(skel);
}
+static void unique_attach(void)
+{
+ LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
+ .unique = true,
+ );
+ struct bpf_link *link_1, *link_2 = NULL;
+ struct bpf_program *prog_1, *prog_2;
+ struct uprobe_multi_unique *skel;
+
+ skel = uprobe_multi_unique__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "uprobe_multi_unique__open_and_load"))
+ return;
+
+ skel->bss->my_pid = getpid();
+
+ prog_1 = skel->progs.test1;
+ prog_2 = skel->progs.test2;
+
+ /* not-unique and unique */
+ link_1 = bpf_program__attach_uprobe_multi(prog_1, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ NULL);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ link_2 = bpf_program__attach_uprobe_multi(prog_2, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ &opts);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_multi")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* unique and unique */
+ link_1 = bpf_program__attach_uprobe_multi(prog_1, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ &opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ link_2 = bpf_program__attach_uprobe_multi(prog_2, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ &opts);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_multi")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* unique and not-unique */
+ link_1 = bpf_program__attach_uprobe_multi(prog_1, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ &opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ link_2 = bpf_program__attach_uprobe_multi(prog_2, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ NULL);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_multi")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* not-unique and not-unique */
+ link_1 = bpf_program__attach_uprobe_multi(prog_1, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ NULL);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ link_2 = bpf_program__attach_uprobe_multi(prog_2, -1, "/proc/self/exe",
+ "uprobe_multi_func_1",
+ NULL);
+ if (!ASSERT_OK_PTR(link_2, "bpf_program__attach_uprobe_multi")) {
+ bpf_link__destroy(link_1);
+ goto cleanup;
+ }
+
+ uprobe_multi_func_1();
+
+ ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+ ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+
+ bpf_link__destroy(link_1);
+ bpf_link__destroy(link_2);
+
+cleanup:
+ uprobe_multi_unique__destroy(skel);
+}
+
static void test_unique(void)
{
if (test__start_subtest("unique_regs_common"))
unique_regs_common();
if (test__start_subtest("unique_regs_ip"))
unique_regs_ip();
+ if (test__start_subtest("unique_attach"))
+ unique_attach();
}
#else
static void test_unique(void) { }
diff --git a/tools/testing/selftests/bpf/progs/uprobe_multi_unique.c b/tools/testing/selftests/bpf/progs/uprobe_multi_unique.c
new file mode 100644
index 000000000000..e31e17bd85ea
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/uprobe_multi_unique.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+pid_t my_pid = 0;
+
+int test1_result = 0;
+int test2_result = 0;
+
+SEC("uprobe.multi")
+int BPF_UPROBE(test1)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test1_result = 1;
+ return 0;
+}
+
+SEC("uprobe.multi")
+int BPF_UPROBE(test2)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test2_result = 1;
+ return 0;
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH perf/core 11/11] selftests/bpf: Add uprobe unique attach test
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
` (9 preceding siblings ...)
2025-09-02 14:35 ` [PATCH perf/core 10/11] selftests/bpf: Add uprobe multi unique attach test Jiri Olsa
@ 2025-09-02 14:35 ` Jiri Olsa
10 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-02 14:35 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Adding a test to check unique uprobe attachment together with a
not-unique uprobe on top of the perf uprobe pmu.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../testing/selftests/bpf/prog_tests/uprobe.c | 111 +++++++++++++++++-
1 file changed, 110 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
index cf3e0e7a64fa..4e1be03d863d 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -33,7 +33,7 @@ static int urand_trigger(FILE **urand_pipe)
return exit_code;
}
-void test_uprobe(void)
+static void test_uprobe_urandlib(void)
{
LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
struct test_uprobe *skel;
@@ -93,3 +93,112 @@ void test_uprobe(void)
pclose(urand_pipe);
test_uprobe__destroy(skel);
}
+
+static noinline void uprobe_unique_trigger(void)
+{
+ asm volatile ("");
+}
+
+static void test_uprobe_unique(void)
+{
+ LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts,
+ .func_name = "uprobe_unique_trigger",
+ );
+ struct bpf_link *link_1, *link_2 = NULL;
+ struct bpf_program *prog_1, *prog_2;
+ struct test_uprobe *skel;
+
+ skel = test_uprobe__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_uprobe__open_and_load"))
+ return;
+
+ skel->bss->my_pid = getpid();
+
+ prog_1 = skel->progs.test1;
+ prog_2 = skel->progs.test2;
+
+ /* not-unique and unique */
+ uprobe_opts.unique = false;
+ link_1 = bpf_program__attach_uprobe_opts(prog_1, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_opts_1"))
+ goto cleanup;
+
+ uprobe_opts.unique = true;
+ link_2 = bpf_program__attach_uprobe_opts(prog_2, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_opts_2")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* unique and unique */
+ uprobe_opts.unique = true;
+ link_1 = bpf_program__attach_uprobe_opts(prog_1, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_opts_1"))
+ goto cleanup;
+
+ uprobe_opts.unique = true;
+ link_2 = bpf_program__attach_uprobe_opts(prog_2, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_opts_2")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* unique and not-unique */
+ uprobe_opts.unique = true;
+ link_1 = bpf_program__attach_uprobe_opts(prog_1, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_opts_1"))
+ goto cleanup;
+
+ uprobe_opts.unique = false;
+ link_2 = bpf_program__attach_uprobe_opts(prog_2, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_ERR_PTR(link_2, "bpf_program__attach_uprobe_opts_2")) {
+ bpf_link__destroy(link_2);
+ goto cleanup;
+ }
+
+ bpf_link__destroy(link_1);
+
+ /* not-unique and not-unique */
+ uprobe_opts.unique = false;
+ link_1 = bpf_program__attach_uprobe_opts(prog_1, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_uprobe_opts_1"))
+ goto cleanup;
+
+ uprobe_opts.unique = false;
+ link_2 = bpf_program__attach_uprobe_opts(prog_2, -1, "/proc/self/exe",
+ 0 /* offset */, &uprobe_opts);
+ if (!ASSERT_OK_PTR(link_2, "bpf_program__attach_uprobe_opts_2")) {
+ bpf_link__destroy(link_1);
+ goto cleanup;
+ }
+
+ uprobe_unique_trigger();
+
+ ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+ ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+
+ bpf_link__destroy(link_1);
+ bpf_link__destroy(link_2);
+
+cleanup:
+ test_uprobe__destroy(skel);
+}
+
+void test_uprobe(void)
+{
+ if (test__start_subtest("urandlib"))
+ test_uprobe_urandlib();
+ if (test__start_subtest("unique"))
+ test_uprobe_unique();
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
@ 2025-09-02 15:11 ` Masami Hiramatsu
2025-09-03 6:35 ` Jiri Olsa
2025-09-03 10:49 ` Oleg Nesterov
1 sibling, 1 reply; 30+ messages in thread
From: Masami Hiramatsu @ 2025-09-02 15:11 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko, bpf, linux-kernel,
linux-trace-kernel, x86, Song Liu, Yonghong Song, John Fastabend,
Hao Luo, Steven Rostedt, Ingo Molnar
On Tue, 2 Sep 2025 16:34:54 +0200
Jiri Olsa <jolsa@kernel.org> wrote:
> Adding a unique flag to the uprobe consumer to ensure it's the only
> consumer attached to the uprobe.
>
> This is helpful for use cases when a consumer wants to change user space
> registers, which might confuse other consumers. With this change we can
> ensure there's only one consumer on a specific uprobe.
nit: Does this mean one callback (consumer) is exclusively attached?
If so, "exclusive" would be better wording?
The logic looks good to me.
Thanks,
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/uprobes.h | 1 +
> kernel/events/uprobes.c | 30 ++++++++++++++++++++++++++++--
> 2 files changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 08ef78439d0d..0df849dee720 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -60,6 +60,7 @@ struct uprobe_consumer {
> struct list_head cons_node;
>
> __u64 id; /* set when uprobe_consumer is registered */
> + bool is_unique; /* the only consumer on uprobe */
> };
>
> #ifdef CONFIG_UPROBES
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 996a81080d56..b9b088f7333a 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -1024,14 +1024,35 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
> return uprobe;
> }
>
> -static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
> +static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *uc)
> +{
> + /* Uprobe has no consumer, we can add any. */
> + if (list_empty(head))
> + return true;
> + /* Uprobe has consumer/s, we can't add unique one. */
> + if (uc->is_unique)
> + return false;
> + /*
> + * Uprobe has consumer/s, we can add another consumer only if the
> + * current consumer is not unique.
> + */
> + return !list_first_entry(head, struct uprobe_consumer, cons_node)->is_unique;
> +}
> +
> +static int consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
> {
> static atomic64_t id;
> + int ret = -EBUSY;
>
> down_write(&uprobe->consumer_rwsem);
> + if (!consumer_can_add(&uprobe->consumers, uc))
> + goto unlock;
> list_add_rcu(&uc->cons_node, &uprobe->consumers);
> uc->id = (__u64) atomic64_inc_return(&id);
> + ret = 0;
> +unlock:
> up_write(&uprobe->consumer_rwsem);
> + return ret;
> }
>
> /*
> @@ -1420,7 +1441,12 @@ struct uprobe *uprobe_register(struct inode *inode,
> return uprobe;
>
> down_write(&uprobe->register_rwsem);
> - consumer_add(uprobe, uc);
> + ret = consumer_add(uprobe, uc);
> + if (ret) {
> + put_uprobe(uprobe);
> + up_write(&uprobe->register_rwsem);
> + return ERR_PTR(ret);
> + }
> ret = register_for_each_vma(uprobe, uc);
> up_write(&uprobe->register_rwsem);
>
> --
> 2.51.0
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi unique uprobe
2025-09-02 14:34 ` [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi " Jiri Olsa
@ 2025-09-02 16:11 ` Alexei Starovoitov
2025-09-03 6:35 ` Jiri Olsa
0 siblings, 1 reply; 30+ messages in thread
From: Alexei Starovoitov @ 2025-09-02 16:11 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Tue, Sep 2, 2025 at 7:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding support to attach unique uprobe through uprobe multi link
> interface.
>
> Adding new BPF_F_UPROBE_MULTI_UNIQUE flag that denotes the unique
> uprobe creation.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/uapi/linux/bpf.h | 3 ++-
> kernel/trace/bpf_trace.c | 4 +++-
> tools/include/uapi/linux/bpf.h | 3 ++-
> 3 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 233de8677382..3de9eb469fe2 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1300,7 +1300,8 @@ enum {
> * BPF_TRACE_UPROBE_MULTI attach type to create return probe.
> */
> enum {
> - BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
> + BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
> + BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
I second Masami's point. "exclusive" name fits better.
And once you use that name the "multi_exclusive"
part will not make sense.
How can an exclusive user of the uprobe be "multi" at the same time?
Like attaching to multiple uprobes and modifying registers
in all of them? Is it practical?
It feels to me BPF_F_UPROBE_EXCLUSIVE should be targeting
one specific uprobe.
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
@ 2025-09-02 16:13 ` Alexei Starovoitov
2025-09-03 3:32 ` kernel test robot
2025-09-03 11:59 ` Oleg Nesterov
2 siblings, 0 replies; 30+ messages in thread
From: Alexei Starovoitov @ 2025-09-02 16:13 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Tue, Sep 2, 2025 at 7:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding support to attach unique probe through perf uprobe pmu.
>
> Adding new 'unique' format attribute that allows to pass the
> request to create unique uprobe the uprobe consumer.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/trace_events.h | 2 +-
> kernel/events/core.c | 8 ++++++--
> kernel/trace/trace_event_perf.c | 4 ++--
> kernel/trace/trace_probe.h | 2 +-
> kernel/trace/trace_uprobe.c | 9 +++++----
> 5 files changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index 04307a19cde3..1d35727fda27 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -877,7 +877,7 @@ extern int bpf_get_kprobe_info(const struct perf_event *event,
> #endif
> #ifdef CONFIG_UPROBE_EVENTS
> extern int perf_uprobe_init(struct perf_event *event,
> - unsigned long ref_ctr_offset, bool is_retprobe);
> + unsigned long ref_ctr_offset, bool is_retprobe, bool is_unique);
In bpf land we don't allow multiple bool arguments any more.
It makes callsites hard to read/review/maintain.
Here I recommend using enum flags as well.
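For illustration, a minimal sketch of the enum-flags style suggested here; the enum and flag names below are hypothetical, not taken from the posted series:

    enum perf_uprobe_init_flags {
            PERF_UPROBE_INIT_RETPROBE = 1U << 0,
            PERF_UPROBE_INIT_UNIQUE   = 1U << 1,
    };

    /* one flags word instead of a string of anonymous bools */
    extern int perf_uprobe_init(struct perf_event *event,
                                unsigned long ref_ctr_offset,
                                unsigned int flags);

    /* the callsite then documents itself: */
    err = perf_uprobe_init(event, ref_ctr_offset,
                           PERF_UPROBE_INIT_RETPROBE | PERF_UPROBE_INIT_UNIQUE);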
* Re: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
2025-09-02 16:13 ` Alexei Starovoitov
@ 2025-09-03 3:32 ` kernel test robot
2025-09-03 11:59 ` Oleg Nesterov
2 siblings, 0 replies; 30+ messages in thread
From: kernel test robot @ 2025-09-03 3:32 UTC (permalink / raw)
To: Jiri Olsa, Oleg Nesterov, Masami Hiramatsu, Andrii Nakryiko
Cc: oe-kbuild-all, bpf, linux-kernel, linux-trace-kernel, x86,
Song Liu, Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Hi Jiri,
kernel test robot noticed the following build warnings:
[auto build test WARNING on tip/perf/core]
[also build test WARNING on next-20250902]
[cannot apply to bpf-next/net bpf-next/master bpf/master perf-tools-next/perf-tools-next perf-tools/perf-tools trace/for-next linus/master acme/perf/core v6.17-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/uprobes-Add-unique-flag-to-uprobe-consumer/20250902-224356
base: tip/perf/core
patch link: https://lore.kernel.org/r/20250902143504.1224726-4-jolsa%40kernel.org
patch subject: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
config: x86_64-randconfig-001-20250903 (https://download.01.org/0day-ci/archive/20250903/202509031116.yIcyjvUx-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250903/202509031116.yIcyjvUx-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509031116.yIcyjvUx-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from include/linux/trace_events.h:10,
from include/trace/syscall.h:7,
from include/linux/syscalls.h:95,
from kernel/events/core.c:34:
>> include/linux/perf_event.h:2073:32: warning: 'format_attr_unique' defined but not used [-Wunused-variable]
2073 | static struct device_attribute format_attr_##_name = __ATTR_RO(_name)
| ^~~~~~~~~~~~
kernel/events/core.c:11055:1: note: in expansion of macro 'PMU_FORMAT_ATTR'
11055 | PMU_FORMAT_ATTR(unique, "config:1");
| ^~~~~~~~~~~~~~~
vim +/format_attr_unique +2073 include/linux/perf_event.h
b6c00fb9949fbd0 Kan Liang 2023-01-04 2069
b6c00fb9949fbd0 Kan Liang 2023-01-04 2070 #define PMU_FORMAT_ATTR(_name, _format) \
b6c00fb9949fbd0 Kan Liang 2023-01-04 2071 PMU_FORMAT_ATTR_SHOW(_name, _format) \
641cc938815dfd0 Jiri Olsa 2012-03-15 2072 \
641cc938815dfd0 Jiri Olsa 2012-03-15 @2073 static struct device_attribute format_attr_##_name = __ATTR_RO(_name)
641cc938815dfd0 Jiri Olsa 2012-03-15 2074
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
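For context, the warning means PMU_FORMAT_ATTR(unique, "config:1") declares format_attr_unique but nothing references it yet. A likely fix, sketched under the assumption that the uprobe PMU's format attribute array in kernel/events/core.c is named and shaped as below (to the best of my reading), is to add the new attribute to that array:

    static struct attribute *uprobe_attrs[] = {
            &format_attr_retprobe.attr,
            &format_attr_ref_ctr_offset.attr,
            &format_attr_unique.attr,       /* exposes the new "config:1" bit via sysfs */
            NULL,
    };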
* Re: [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi unique uprobe
2025-09-02 16:11 ` Alexei Starovoitov
@ 2025-09-03 6:35 ` Jiri Olsa
2025-09-03 15:32 ` Alexei Starovoitov
0 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 6:35 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Tue, Sep 02, 2025 at 09:11:22AM -0700, Alexei Starovoitov wrote:
> On Tue, Sep 2, 2025 at 7:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding support to attach unique uprobe through uprobe multi link
> > interface.
> >
> > Adding new BPF_F_UPROBE_MULTI_UNIQUE flag that denotes the unique
> > uprobe creation.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > include/uapi/linux/bpf.h | 3 ++-
> > kernel/trace/bpf_trace.c | 4 +++-
> > tools/include/uapi/linux/bpf.h | 3 ++-
> > 3 files changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 233de8677382..3de9eb469fe2 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1300,7 +1300,8 @@ enum {
> > * BPF_TRACE_UPROBE_MULTI attach type to create return probe.
> > */
> > enum {
> > - BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
> > + BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
> > + BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
>
> I second Masami's point. "exclusive" name fits better.
> And once you use that name the "multi_exclusive"
> part will not make sense.
> How can an exclusive user of the uprobe be "multi" at the same time?
> Like attaching to multiple uprobes and modifying registers
> in all of them? Is it practical?
we can still attach a single uprobe with uprobe_multi,
but for more uprobes it's probably not practical
> It feels to me BPF_F_UPROBE_EXCLUSIVE should be targeting
> one specific uprobe.
do you mean to force a single uprobe with this flag?
I understood the 'BPF_F_UPROBE_MULTI_' flag prefix more as an indication of
which link it belongs to, but I'm ok with BPF_F_UPROBE_EXCLUSIVE
thanks,
jirka
* Re: [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer
2025-09-02 15:11 ` Masami Hiramatsu
@ 2025-09-03 6:35 ` Jiri Olsa
0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 6:35 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Oleg Nesterov, Peter Zijlstra, Andrii Nakryiko, bpf, linux-kernel,
linux-trace-kernel, x86, Song Liu, Yonghong Song, John Fastabend,
Hao Luo, Steven Rostedt, Ingo Molnar
On Wed, Sep 03, 2025 at 12:11:33AM +0900, Masami Hiramatsu wrote:
> On Tue, 2 Sep 2025 16:34:54 +0200
> Jiri Olsa <jolsa@kernel.org> wrote:
>
> > Adding unique flag to uprobe consumer to ensure it's the only consumer
> > attached on the uprobe.
> >
> > This is helpful for use cases when consumer wants to change user space
> > registers, which might confuse other consumers. With this change we can
> > ensure there's only one consumer on specific uprobe.
>
> nit: Does this mean one callback (consumer) is exclusively attached?
> If so, "exclusive" will be better wording?
yes, exclusive is better, will change
thanks,
jirka
>
> The logic looks good to me.
>
> Thanks,
>
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > include/linux/uprobes.h | 1 +
> > kernel/events/uprobes.c | 30 ++++++++++++++++++++++++++++--
> > 2 files changed, 29 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> > index 08ef78439d0d..0df849dee720 100644
> > --- a/include/linux/uprobes.h
> > +++ b/include/linux/uprobes.h
> > @@ -60,6 +60,7 @@ struct uprobe_consumer {
> > struct list_head cons_node;
> >
> > __u64 id; /* set when uprobe_consumer is registered */
> > + bool is_unique; /* the only consumer on uprobe */
> > };
> >
> > #ifdef CONFIG_UPROBES
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 996a81080d56..b9b088f7333a 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -1024,14 +1024,35 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
> > return uprobe;
> > }
> >
> > -static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
> > +static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *uc)
> > +{
> > + /* Uprobe has no consumer, we can add any. */
> > + if (list_empty(head))
> > + return true;
> > + /* Uprobe has consumer/s, we can't add unique one. */
> > + if (uc->is_unique)
> > + return false;
> > + /*
> > + * Uprobe has consumer/s, we can add another consumer only if the
> > + * current consumer is not unique.
> > + **/
> > + return !list_first_entry(head, struct uprobe_consumer, cons_node)->is_unique;
> > +}
> > +
> > +static int consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
> > {
> > static atomic64_t id;
> > + int ret = -EBUSY;
> >
> > down_write(&uprobe->consumer_rwsem);
> > + if (!consumer_can_add(&uprobe->consumers, uc))
> > + goto unlock;
> > list_add_rcu(&uc->cons_node, &uprobe->consumers);
> > uc->id = (__u64) atomic64_inc_return(&id);
> > + ret = 0;
> > +unlock:
> > up_write(&uprobe->consumer_rwsem);
> > + return ret;
> > }
> >
> > /*
> > @@ -1420,7 +1441,12 @@ struct uprobe *uprobe_register(struct inode *inode,
> > return uprobe;
> >
> > down_write(&uprobe->register_rwsem);
> > - consumer_add(uprobe, uc);
> > + ret = consumer_add(uprobe, uc);
> > + if (ret) {
> > + put_uprobe(uprobe);
> > + up_write(&uprobe->register_rwsem);
> > + return ERR_PTR(ret);
> > + }
> > ret = register_for_each_vma(uprobe, uc);
> > up_write(&uprobe->register_rwsem);
> >
> > --
> > 2.51.0
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
2025-09-02 15:11 ` Masami Hiramatsu
@ 2025-09-03 10:49 ` Oleg Nesterov
2025-09-03 12:02 ` Jiri Olsa
1 sibling, 1 reply; 30+ messages in thread
From: Oleg Nesterov @ 2025-09-03 10:49 UTC (permalink / raw)
To: Jiri Olsa
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On 09/02, Jiri Olsa wrote:
>
> +static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *uc)
> +{
> + /* Uprobe has no consumer, we can add any. */
> + if (list_empty(head))
> + return true;
> + /* Uprobe has consumer/s, we can't add unique one. */
> + if (uc->is_unique)
> + return false;
> + /*
> + * Uprobe has consumer/s, we can add another consumer only if the
> + * current consumer is not unique.
> + **/
> + return !list_first_entry(head, struct uprobe_consumer, cons_node)->is_unique;
> +}
Since you are going to send V2 anyway... purely cosmetic and subjective nit,
but somehow I can't resist,
bool consumer_can_add(struct list_head *head, struct uprobe_consumer *new)
{
struct uprobe_consumer *old = list_first_entry_or_null(...);
return !old || (!old->exclusive && !new->exclusive);
}
looks a bit more readable to me. Please ignore if you like your version more.
Oleg.
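A filled-out version of that suggestion, as a sketch only, using the cons_node list from the patch above (the field keeps its is_unique name since the rename to 'exclusive' has not happened yet in this series):

    static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *new)
    {
            struct uprobe_consumer *old;

            old = list_first_entry_or_null(head, struct uprobe_consumer, cons_node);
            /* empty list, or neither the existing nor the new consumer is exclusive */
            return !old || (!old->is_unique && !new->is_unique);
    }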
* Re: [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed
2025-09-02 14:34 ` [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed Jiri Olsa
@ 2025-09-03 11:26 ` Oleg Nesterov
2025-09-03 18:20 ` Andrii Nakryiko
2025-09-03 19:50 ` Jiri Olsa
0 siblings, 2 replies; 30+ messages in thread
From: Oleg Nesterov @ 2025-09-03 11:26 UTC (permalink / raw)
To: Jiri Olsa
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On 09/02, Jiri Olsa wrote:
>
> If user decided to take execution elsewhere, it makes little sense
> to execute the original instruction, so let's skip it.
Exactly.
So why do we need all these "is_unique" complications? Only a single
is_unique/exclusive consumer can change regs->ip, so I guess handle_swbp()
can just do
handler_chain(uprobe, regs);
if (instruction_pointer(regs) != bp_vaddr)
goto out;
> Allowing this
> behaviour only for uprobe with unique consumer attached.
But if a non-exclusive consumer changes regs->ip, we have a problem
anyway, right?
We can probably add something like
rc = uc->handler(uc, regs, &cookie);
+ WARN_ON(!uc->is_unique && instruction_pointer(regs) != bp_vaddr);
into handler_chain(), although I don't think this is needed.
Oleg.
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> kernel/events/uprobes.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index b9b088f7333a..da8291941c6b 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -2568,7 +2568,7 @@ static bool ignore_ret_handler(int rc)
> return rc == UPROBE_HANDLER_REMOVE || rc == UPROBE_HANDLER_IGNORE;
> }
>
> -static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> +static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs, bool *is_unique)
> {
> struct uprobe_consumer *uc;
> bool has_consumers = false, remove = true;
> @@ -2582,6 +2582,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> __u64 cookie = 0;
> int rc = 0;
>
> + if (is_unique)
> + *is_unique |= uc->is_unique;
> +
> if (uc->handler) {
> rc = uc->handler(uc, regs, &cookie);
> WARN(rc < 0 || rc > 2,
> @@ -2735,6 +2738,7 @@ static void handle_swbp(struct pt_regs *regs)
> {
> struct uprobe *uprobe;
> unsigned long bp_vaddr;
> + bool is_unique = false;
> int is_swbp;
>
> bp_vaddr = uprobe_get_swbp_addr(regs);
> @@ -2789,7 +2793,10 @@ static void handle_swbp(struct pt_regs *regs)
> if (arch_uprobe_ignore(&uprobe->arch, regs))
> goto out;
>
> - handler_chain(uprobe, regs);
> + handler_chain(uprobe, regs, &is_unique);
> +
> + if (is_unique && instruction_pointer(regs) != bp_vaddr)
> + goto out;
>
> /* Try to optimize after first hit. */
> arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
> @@ -2819,7 +2826,7 @@ void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr)
> return;
> if (arch_uprobe_ignore(&uprobe->arch, regs))
> return;
> - handler_chain(uprobe, regs);
> + handler_chain(uprobe, regs, NULL);
> }
>
> /*
> --
> 2.51.0
>
* Re: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
2025-09-02 16:13 ` Alexei Starovoitov
2025-09-03 3:32 ` kernel test robot
@ 2025-09-03 11:59 ` Oleg Nesterov
2025-09-03 13:03 ` Jiri Olsa
2 siblings, 1 reply; 30+ messages in thread
From: Oleg Nesterov @ 2025-09-03 11:59 UTC (permalink / raw)
To: Jiri Olsa
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
Slightly off-topic, but
On 09/02, Jiri Olsa wrote:
>
> @@ -11144,7 +11147,7 @@ static int perf_uprobe_event_init(struct perf_event *event)
> {
> int err;
> unsigned long ref_ctr_offset;
> - bool is_retprobe;
> + bool is_retprobe, is_unique;
>
> if (event->attr.type != perf_uprobe.type)
> return -ENOENT;
> @@ -11159,8 +11162,9 @@ static int perf_uprobe_event_init(struct perf_event *event)
> return -EOPNOTSUPP;
>
> is_retprobe = event->attr.config & PERF_PROBE_CONFIG_IS_RETPROBE;
> + is_unique = event->attr.config & PERF_PROBE_CONFIG_IS_UNIQUE;
> ref_ctr_offset = event->attr.config >> PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
> - err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe);
> + err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe, is_unique);
I am wondering why (with or without this change) perf_uprobe_init() needs
the additional arguments besides "event". It can look at event->attr.config
itself?
Same for perf_kprobe_init()...
Oleg.
* Re: [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer
2025-09-03 10:49 ` Oleg Nesterov
@ 2025-09-03 12:02 ` Jiri Olsa
0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 12:02 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Wed, Sep 03, 2025 at 12:49:33PM +0200, Oleg Nesterov wrote:
> On 09/02, Jiri Olsa wrote:
> >
> > +static bool consumer_can_add(struct list_head *head, struct uprobe_consumer *uc)
> > +{
> > + /* Uprobe has no consumer, we can add any. */
> > + if (list_empty(head))
> > + return true;
> > + /* Uprobe has consumer/s, we can't add unique one. */
> > + if (uc->is_unique)
> > + return false;
> > + /*
> > + * Uprobe has consumer/s, we can add another consumer only if the
> > + * current consumer is not unique.
> > + **/
> > + return !list_first_entry(head, struct uprobe_consumer, cons_node)->is_unique;
> > +}
>
> Since you are going to send V2 anyway... purely cosmetic and subjective nit,
> but somehow I can't resist,
>
> bool consumer_can_add(struct list_head *head, struct uprobe_consumer *new)
> {
> struct uprobe_consumer *old = list_first_entry_or_null(...);
>
> return !old || (!old->exclusive && !new->exclusive);
> }
>
> looks a bit more readable to me. Please ignore if you like your version more.
yep, looks better, thanks
jirka
* Re: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-03 11:59 ` Oleg Nesterov
@ 2025-09-03 13:03 ` Jiri Olsa
2025-09-03 13:22 ` Oleg Nesterov
0 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 13:03 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Wed, Sep 03, 2025 at 01:59:13PM +0200, Oleg Nesterov wrote:
> Slightly off-topic, but
>
> On 09/02, Jiri Olsa wrote:
> >
> > @@ -11144,7 +11147,7 @@ static int perf_uprobe_event_init(struct perf_event *event)
> > {
> > int err;
> > unsigned long ref_ctr_offset;
> > - bool is_retprobe;
> > + bool is_retprobe, is_unique;
> >
> > if (event->attr.type != perf_uprobe.type)
> > return -ENOENT;
> > @@ -11159,8 +11162,9 @@ static int perf_uprobe_event_init(struct perf_event *event)
> > return -EOPNOTSUPP;
> >
> > is_retprobe = event->attr.config & PERF_PROBE_CONFIG_IS_RETPROBE;
> > + is_unique = event->attr.config & PERF_PROBE_CONFIG_IS_UNIQUE;
> > ref_ctr_offset = event->attr.config >> PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
> > - err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe);
> > + err = perf_uprobe_init(event, ref_ctr_offset, is_retprobe, is_unique);
>
> I am wondering why (with or without this change) perf_uprobe_init() needs
> the additional arguments besides "event". It can look at event->attr.config
> itself?
>
> Same for perf_kprobe_init()...
I think that's because we define enum perf_probe_config together
with PMU_FORMAT_ATTRs and code for attr->config parsing, which
makes sense to me
otherwise I think we could pass perf_event_attr all the way to
create_local_trace_[ku]probe
jirka
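For reference, a sketch of how the pieces discussed here fit together in kernel/events/core.c; the retprobe and ref_ctr_offset parts reflect existing code to the best of my reading, and the unique bit is what this patch adds:

    enum perf_probe_config {
            PERF_PROBE_CONFIG_IS_RETPROBE = 1U << 0,        /* [k,u]retprobe */
            PERF_PROBE_CONFIG_IS_UNIQUE   = 1U << 1,        /* new in this patch */
            PERF_UPROBE_REF_CTR_OFFSET_SHIFT = 32,
    };

    PMU_FORMAT_ATTR(retprobe, "config:0");
    PMU_FORMAT_ATTR(unique,   "config:1");  /* the attribute the test robot flagged as unused */

The enum is local to core.c, which is why perf_uprobe_init() gets the already-decoded values passed in instead of parsing event->attr.config itself.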
* Re: [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe
2025-09-03 13:03 ` Jiri Olsa
@ 2025-09-03 13:22 ` Oleg Nesterov
0 siblings, 0 replies; 30+ messages in thread
From: Oleg Nesterov @ 2025-09-03 13:22 UTC (permalink / raw)
To: Jiri Olsa
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On 09/03, Jiri Olsa wrote:
>
> On Wed, Sep 03, 2025 at 01:59:13PM +0200, Oleg Nesterov wrote:
> >
> > I am wondering why (with or without this change) perf_uprobe_init() needs
> > the additional arguments besides "event". It can look at event->attr.config
> > itself?
> >
> > Same for perf_kprobe_init()...
>
> I think that's because we define enum perf_probe_config together
> with PMU_FORMAT_ATTRs and code for attr->config parsing, which
> makes sense to me
Ah, and "enum perf_probe_config" is not exported...
Thanks, please forget then.
Oleg.
* Re: [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi unique uprobe
2025-09-03 6:35 ` Jiri Olsa
@ 2025-09-03 15:32 ` Alexei Starovoitov
2025-09-03 19:55 ` Jiri Olsa
0 siblings, 1 reply; 30+ messages in thread
From: Alexei Starovoitov @ 2025-09-03 15:32 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Tue, Sep 2, 2025 at 11:35 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Sep 02, 2025 at 09:11:22AM -0700, Alexei Starovoitov wrote:
> > On Tue, Sep 2, 2025 at 7:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > Adding support to attach unique uprobe through uprobe multi link
> > > interface.
> > >
> > > Adding new BPF_F_UPROBE_MULTI_UNIQUE flag that denotes the unique
> > > uprobe creation.
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > > include/uapi/linux/bpf.h | 3 ++-
> > > kernel/trace/bpf_trace.c | 4 +++-
> > > tools/include/uapi/linux/bpf.h | 3 ++-
> > > 3 files changed, 7 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > index 233de8677382..3de9eb469fe2 100644
> > > --- a/include/uapi/linux/bpf.h
> > > +++ b/include/uapi/linux/bpf.h
> > > @@ -1300,7 +1300,8 @@ enum {
> > > * BPF_TRACE_UPROBE_MULTI attach type to create return probe.
> > > */
> > > enum {
> > > - BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
> > > + BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
> > > + BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
> >
> > I second Masami's point. "exclusive" name fits better.
> > And once you use that name the "multi_exclusive"
> > part will not make sense.
> > How can an exclusive user of the uprobe be "multi" at the same time?
> > Like attaching to multiple uprobes and modifying registers
> > in all of them? Is it practical?
>
> we can still attach a single uprobe with uprobe_multi,
> but for more uprobes it's probably not practical
>
> > It feels to me BPF_F_UPROBE_EXCLUSIVE should be targeting
> > one specific uprobe.
>
> do you mean to force a single uprobe with this flag?
>
> I understood the 'BPF_F_UPROBE_MULTI_' flag prefix more as an indication of
> which link it belongs to, but I'm ok with BPF_F_UPROBE_EXCLUSIVE
What is the use case for attaching the same bpf prog to multiple
uprobes and modifying their registers?
* Re: [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed
2025-09-03 11:26 ` Oleg Nesterov
@ 2025-09-03 18:20 ` Andrii Nakryiko
2025-09-03 19:57 ` Jiri Olsa
2025-09-03 19:50 ` Jiri Olsa
1 sibling, 1 reply; 30+ messages in thread
From: Andrii Nakryiko @ 2025-09-03 18:20 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Jiri Olsa, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Wed, Sep 3, 2025 at 4:28 AM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 09/02, Jiri Olsa wrote:
> >
> > If user decided to take execution elsewhere, it makes little sense
> > to execute the original instruction, so let's skip it.
>
> Exactly.
>
> So why do we need all these "is_unique" complications? Only a single
I second this. This whole is_unique flag just seems like an
unnecessary thing that spills all around (extra kernel and libbpf
flags/APIs), and it's all just not to confuse the second uprobe
attached? Let's just allow uprobes to override user registers and
handle IP change on kernel side (as unlikely() check)?
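A minimal sketch of that in handle_swbp(), with no per-consumer flag at all, based on the hunk quoted below (the exact unlikely() placement is an assumption):

    handler_chain(uprobe, regs);

    /* a handler moved the IP; don't emulate/single-step the original insn */
    if (unlikely(instruction_pointer(regs) != bp_vaddr))
            goto out;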
> is_unique/exclusive consumer can change regs->ip, so I guess handle_swbp()
> can just do
>
> handler_chain(uprobe, regs);
> if (instruction_pointer(regs) != bp_vaddr)
> goto out;
>
>
> > Allowing this
> > behaviour only for uprobe with unique consumer attached.
>
> But if a non-exclusive consumer changes regs->ip, we have a problem
> anyway, right?
>
> We can probably add something like
>
> rc = uc->handler(uc, regs, &cookie);
> + WARN_ON(!uc->is_unique && instruction_pointer(regs) != bp_vaddr);
>
> into handler_chain(), although I don't think this is needed.
>
> Oleg.
>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > kernel/events/uprobes.c | 13 ++++++++++---
> > 1 file changed, 10 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index b9b088f7333a..da8291941c6b 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -2568,7 +2568,7 @@ static bool ignore_ret_handler(int rc)
> > return rc == UPROBE_HANDLER_REMOVE || rc == UPROBE_HANDLER_IGNORE;
> > }
> >
> > -static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > +static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs, bool *is_unique)
> > {
> > struct uprobe_consumer *uc;
> > bool has_consumers = false, remove = true;
> > @@ -2582,6 +2582,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > __u64 cookie = 0;
> > int rc = 0;
> >
> > + if (is_unique)
> > + *is_unique |= uc->is_unique;
> > +
> > if (uc->handler) {
> > rc = uc->handler(uc, regs, &cookie);
> > WARN(rc < 0 || rc > 2,
> > @@ -2735,6 +2738,7 @@ static void handle_swbp(struct pt_regs *regs)
> > {
> > struct uprobe *uprobe;
> > unsigned long bp_vaddr;
> > + bool is_unique = false;
> > int is_swbp;
> >
> > bp_vaddr = uprobe_get_swbp_addr(regs);
> > @@ -2789,7 +2793,10 @@ static void handle_swbp(struct pt_regs *regs)
> > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > goto out;
> >
> > - handler_chain(uprobe, regs);
> > + handler_chain(uprobe, regs, &is_unique);
> > +
> > + if (is_unique && instruction_pointer(regs) != bp_vaddr)
> > + goto out;
> >
> > /* Try to optimize after first hit. */
> > arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
> > @@ -2819,7 +2826,7 @@ void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr)
> > return;
> > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > return;
> > - handler_chain(uprobe, regs);
> > + handler_chain(uprobe, regs, NULL);
> > }
> >
> > /*
> > --
> > 2.51.0
> >
>
* Re: [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe
2025-09-02 14:35 ` [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe Jiri Olsa
@ 2025-09-03 18:26 ` Andrii Nakryiko
0 siblings, 0 replies; 30+ messages in thread
From: Andrii Nakryiko @ 2025-09-03 18:26 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
On Tue, Sep 2, 2025 at 7:36 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding support to attach unique generic uprobe by adding the 'unique'
> bool flag to struct bpf_uprobe_opts.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> tools/lib/bpf/libbpf.c | 29 ++++++++++++++++++++++++-----
> tools/lib/bpf/libbpf.h | 4 +++-
> 2 files changed, 27 insertions(+), 6 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 1f613a5f95b6..aac2bd4fb95e 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -11045,11 +11045,19 @@ static int determine_uprobe_retprobe_bit(void)
> return parse_uint_from_file(file, "config:%d\n");
> }
>
> +static int determine_uprobe_unique_bit(void)
> +{
> + const char *file = "/sys/bus/event_source/devices/uprobe/format/unique";
> +
> + return parse_uint_from_file(file, "config:%d\n");
> +}
perf event-based attachment is legacy and libbpf will automatically
try to use BPF_LINK_CREATE, so I don't think we need to add this at
all.
But I also feel like we don't really need any special thing for
"exclusive uprobe", because ultimately we just want to let uprobe
override user registers, which we can allow for normal uprobes, as I
mentioned in the other email.
[...]
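For comparison, a hedged sketch of the link-based attach path referred to above, using the existing libbpf multi-uprobe API; the skeleton type, program name, binary path and function name are made up, and the unique/exclusive option added by this series is shown commented out since its final name is still under discussion:

    #include <errno.h>
    #include <stdio.h>
    #include <bpf/libbpf.h>

    /* 'struct my_skel' stands in for a generated skeleton type */
    static int attach_target(struct my_skel *skel)
    {
            LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
                    .retprobe = false,
                    /* .unique = true,  option added by this series, name may change */
            );
            struct bpf_link *link;

            link = bpf_program__attach_uprobe_multi(skel->progs.handler, -1 /* any pid */,
                                                    "/usr/bin/app", "target_func", &opts);
            if (!link) {
                    fprintf(stderr, "attach failed: %d\n", errno);
                    return -1;
            }
            return 0;
    }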
* Re: [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed
2025-09-03 11:26 ` Oleg Nesterov
2025-09-03 18:20 ` Andrii Nakryiko
@ 2025-09-03 19:50 ` Jiri Olsa
1 sibling, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 19:50 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Wed, Sep 03, 2025 at 01:26:48PM +0200, Oleg Nesterov wrote:
> On 09/02, Jiri Olsa wrote:
> >
> > If user decided to take execution elsewhere, it makes little sense
> > to execute the original instruction, so let's skip it.
>
> Exactly.
>
> So why do we need all these "is_unique" complications? Only a single
> is_unique/exclusive consumer can change regs->ip, so I guess handle_swbp()
> can just do
>
> handler_chain(uprobe, regs);
> if (instruction_pointer(regs) != bp_vaddr)
> goto out;
hum, that's what I did in rfc [1] but I thought you did not like that [2]
[1] https://lore.kernel.org/bpf/20250801210238.2207429-2-jolsa@kernel.org/
[2] https://lore.kernel.org/bpf/20250802103426.GC31711@redhat.com/
I guess I misunderstood your reply [2], I'd be happy to drop the
unique/exclusive flag
jirka
>
>
> > Allowing this
> > behaviour only for uprobe with unique consumer attached.
>
> But if a non-exclusive consumer changes regs->ip, we have a problem
> anyway, right?
>
> We can probably add something like
>
> rc = uc->handler(uc, regs, &cookie);
> + WARN_ON(!uc->is_unique && instruction_pointer(regs) != bp_vaddr);
>
> into handler_chain(), although I don't think this is needed.
>
> Oleg.
>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > kernel/events/uprobes.c | 13 ++++++++++---
> > 1 file changed, 10 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index b9b088f7333a..da8291941c6b 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -2568,7 +2568,7 @@ static bool ignore_ret_handler(int rc)
> > return rc == UPROBE_HANDLER_REMOVE || rc == UPROBE_HANDLER_IGNORE;
> > }
> >
> > -static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > +static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs, bool *is_unique)
> > {
> > struct uprobe_consumer *uc;
> > bool has_consumers = false, remove = true;
> > @@ -2582,6 +2582,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > __u64 cookie = 0;
> > int rc = 0;
> >
> > + if (is_unique)
> > + *is_unique |= uc->is_unique;
> > +
> > if (uc->handler) {
> > rc = uc->handler(uc, regs, &cookie);
> > WARN(rc < 0 || rc > 2,
> > @@ -2735,6 +2738,7 @@ static void handle_swbp(struct pt_regs *regs)
> > {
> > struct uprobe *uprobe;
> > unsigned long bp_vaddr;
> > + bool is_unique = false;
> > int is_swbp;
> >
> > bp_vaddr = uprobe_get_swbp_addr(regs);
> > @@ -2789,7 +2793,10 @@ static void handle_swbp(struct pt_regs *regs)
> > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > goto out;
> >
> > - handler_chain(uprobe, regs);
> > + handler_chain(uprobe, regs, &is_unique);
> > +
> > + if (is_unique && instruction_pointer(regs) != bp_vaddr)
> > + goto out;
> >
> > /* Try to optimize after first hit. */
> > arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
> > @@ -2819,7 +2826,7 @@ void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr)
> > return;
> > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > return;
> > - handler_chain(uprobe, regs);
> > + handler_chain(uprobe, regs, NULL);
> > }
> >
> > /*
> > --
> > 2.51.0
> >
>
* Re: [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi unique uprobe
2025-09-03 15:32 ` Alexei Starovoitov
@ 2025-09-03 19:55 ` Jiri Olsa
0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 19:55 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Jiri Olsa, Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra,
Andrii Nakryiko, bpf, LKML, linux-trace-kernel, X86 ML, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
On Wed, Sep 03, 2025 at 08:32:14AM -0700, Alexei Starovoitov wrote:
> On Tue, Sep 2, 2025 at 11:35 PM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Tue, Sep 02, 2025 at 09:11:22AM -0700, Alexei Starovoitov wrote:
> > > On Tue, Sep 2, 2025 at 7:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > Adding support to attach unique uprobe through uprobe multi link
> > > > interface.
> > > >
> > > > Adding new BPF_F_UPROBE_MULTI_UNIQUE flag that denotes the unique
> > > > uprobe creation.
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > > include/uapi/linux/bpf.h | 3 ++-
> > > > kernel/trace/bpf_trace.c | 4 +++-
> > > > tools/include/uapi/linux/bpf.h | 3 ++-
> > > > 3 files changed, 7 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > index 233de8677382..3de9eb469fe2 100644
> > > > --- a/include/uapi/linux/bpf.h
> > > > +++ b/include/uapi/linux/bpf.h
> > > > @@ -1300,7 +1300,8 @@ enum {
> > > > * BPF_TRACE_UPROBE_MULTI attach type to create return probe.
> > > > */
> > > > enum {
> > > > - BPF_F_UPROBE_MULTI_RETURN = (1U << 0)
> > > > + BPF_F_UPROBE_MULTI_RETURN = (1U << 0),
> > > > + BPF_F_UPROBE_MULTI_UNIQUE = (1U << 1),
> > >
> > > I second Masami's point. "exclusive" name fits better.
> > > And once you use that name the "multi_exclusive"
> > > part will not make sense.
> > > How can an exclusive user of the uprobe be "multi" at the same time?
> > > Like attaching to multiple uprobes and modifying registers
> > > in all of them? Is it practical?
> >
> > we can still attach a single uprobe with uprobe_multi,
> > but for more uprobes it's probably not practical
> >
> > > It feels to me BPF_F_UPROBE_EXCLUSIVE should be targeting
> > > one specific uprobe.
> >
> > do you mean to force a single uprobe with this flag?
> >
> > I understood the 'BPF_F_UPROBE_MULTI_' flag prefix more as an indication of
> > which link it belongs to, but I'm ok with BPF_F_UPROBE_EXCLUSIVE
>
> What is the use case for attaching the same bpf prog to multiple
> uprobes and modifying their registers?
I don't have one, but I guess you could have just one bpf program
doing whatever override is needed based on the uprobe cookie?
I added the unique flag support here because you can also
create just a single uprobe with the uprobe_multi interface
jirka
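As an illustration of the cookie-keyed idea mentioned above, a sketch of one bpf program attached to several uprobes, choosing the override per attach cookie; it assumes the context-register writes that patch 05 of this series enables, and the cookie value and register choice are made up:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    SEC("uprobe.multi")
    int change_regs(struct pt_regs *ctx)
    {
            /* the attach cookie tells us which of the attached sites fired */
            __u64 cookie = bpf_get_attach_cookie(ctx);

            if (cookie == 1)        /* hypothetical id for one of the sites */
                    ctx->di = 0;    /* x86-64 first argument; the write itself needs patch 05 */

            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";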
* Re: [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed
2025-09-03 18:20 ` Andrii Nakryiko
@ 2025-09-03 19:57 ` Jiri Olsa
0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2025-09-03 19:57 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
On Wed, Sep 03, 2025 at 11:20:01AM -0700, Andrii Nakryiko wrote:
> On Wed, Sep 3, 2025 at 4:28 AM Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > On 09/02, Jiri Olsa wrote:
> > >
> > > If user decided to take execution elsewhere, it makes little sense
> > > to execute the original instruction, so let's skip it.
> >
> > Exactly.
> >
> > So why do we need all these "is_unique" complications? Only a single
>
> I second this. This whole is_unique flag just seems like an
> unnecessary thing that spills all around (extra kernel and libbpf
> flags/APIs), and it's all just not to confuse the second uprobe
> attached? Let's just allow uprobes to override user registers and
> handle IP change on kernel side (as unlikely() check)?
yes! ;-) I'd just refresh rfc version then
thanks,
jirka
>
> > is_unique/exclusive consumer can change regs->ip, so I guess handle_swbp()
> > can just do
> >
> > handler_chain(uprobe, regs);
> > if (instruction_pointer(regs) != bp_vaddr)
> > goto out;
> >
> >
> > > Allowing this
> > > behaviour only for uprobe with unique consumer attached.
> >
> > But if a non-exclusive consumer changes regs->ip, we have a problem
> > anyway, right?
> >
> > We can probably add something like
> >
> > rc = uc->handler(uc, regs, &cookie);
> > + WARN_ON(!uc->is_unique && instruction_pointer(regs) != bp_vaddr);
> >
> > into handler_chain(), although I don't think this is needed.
> >
> > Oleg.
> >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > > kernel/events/uprobes.c | 13 ++++++++++---
> > > 1 file changed, 10 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > > index b9b088f7333a..da8291941c6b 100644
> > > --- a/kernel/events/uprobes.c
> > > +++ b/kernel/events/uprobes.c
> > > @@ -2568,7 +2568,7 @@ static bool ignore_ret_handler(int rc)
> > > return rc == UPROBE_HANDLER_REMOVE || rc == UPROBE_HANDLER_IGNORE;
> > > }
> > >
> > > -static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > > +static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs, bool *is_unique)
> > > {
> > > struct uprobe_consumer *uc;
> > > bool has_consumers = false, remove = true;
> > > @@ -2582,6 +2582,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
> > > __u64 cookie = 0;
> > > int rc = 0;
> > >
> > > + if (is_unique)
> > > + *is_unique |= uc->is_unique;
> > > +
> > > if (uc->handler) {
> > > rc = uc->handler(uc, regs, &cookie);
> > > WARN(rc < 0 || rc > 2,
> > > @@ -2735,6 +2738,7 @@ static void handle_swbp(struct pt_regs *regs)
> > > {
> > > struct uprobe *uprobe;
> > > unsigned long bp_vaddr;
> > > + bool is_unique = false;
> > > int is_swbp;
> > >
> > > bp_vaddr = uprobe_get_swbp_addr(regs);
> > > @@ -2789,7 +2793,10 @@ static void handle_swbp(struct pt_regs *regs)
> > > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > > goto out;
> > >
> > > - handler_chain(uprobe, regs);
> > > + handler_chain(uprobe, regs, &is_unique);
> > > +
> > > + if (is_unique && instruction_pointer(regs) != bp_vaddr)
> > > + goto out;
> > >
> > > /* Try to optimize after first hit. */
> > > arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
> > > @@ -2819,7 +2826,7 @@ void handle_syscall_uprobe(struct pt_regs *regs, unsigned long bp_vaddr)
> > > return;
> > > if (arch_uprobe_ignore(&uprobe->arch, regs))
> > > return;
> > > - handler_chain(uprobe, regs);
> > > + handler_chain(uprobe, regs, NULL);
> > > }
> > >
> > > /*
> > > --
> > > 2.51.0
> > >
> >
end of thread
Thread overview: 30+ messages
2025-09-02 14:34 [PATCH perf/core 00/11] uprobes: Add unique uprobe Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 01/11] uprobes: Add unique flag to uprobe consumer Jiri Olsa
2025-09-02 15:11 ` Masami Hiramatsu
2025-09-03 6:35 ` Jiri Olsa
2025-09-03 10:49 ` Oleg Nesterov
2025-09-03 12:02 ` Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 02/11] uprobes: Skip emulate/sstep on unique uprobe when ip is changed Jiri Olsa
2025-09-03 11:26 ` Oleg Nesterov
2025-09-03 18:20 ` Andrii Nakryiko
2025-09-03 19:57 ` Jiri Olsa
2025-09-03 19:50 ` Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 03/11] perf: Add support to attach standard unique uprobe Jiri Olsa
2025-09-02 16:13 ` Alexei Starovoitov
2025-09-03 3:32 ` kernel test robot
2025-09-03 11:59 ` Oleg Nesterov
2025-09-03 13:03 ` Jiri Olsa
2025-09-03 13:22 ` Oleg Nesterov
2025-09-02 14:34 ` [PATCH perf/core 04/11] bpf: Add support to attach uprobe_multi " Jiri Olsa
2025-09-02 16:11 ` Alexei Starovoitov
2025-09-03 6:35 ` Jiri Olsa
2025-09-03 15:32 ` Alexei Starovoitov
2025-09-03 19:55 ` Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 05/11] bpf: Allow uprobe program to change context registers Jiri Olsa
2025-09-02 14:34 ` [PATCH perf/core 06/11] libbpf: Add support to attach unique uprobe_multi uprobe Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 07/11] libbpf: Add support to attach generic unique uprobe Jiri Olsa
2025-09-03 18:26 ` Andrii Nakryiko
2025-09-02 14:35 ` [PATCH perf/core 08/11] selftests/bpf: Add uprobe multi context registers changes test Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 09/11] selftests/bpf: Add uprobe multi context ip register change test Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 10/11] selftests/bpf: Add uprobe multi unique attach test Jiri Olsa
2025-09-02 14:35 ` [PATCH perf/core 11/11] selftests/bpf: Add uprobe " Jiri Olsa