* [PATCH v2 1/6] uprobes: revamp uprobe refcounting and lifetime management
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
Revamp how struct uprobe is refcounted, and thus how its lifetime is
managed.
Right now, there are a few possible "owners" of uprobe refcount:
- uprobes_tree RB tree assumes one refcount when uprobe is registered
and added to the lookup tree;
- while uprobe is triggered and kernel is handling it in the breakpoint
handler code, temporary refcount bump is done to keep uprobe from
being freed;
- if we have uretprobe requested on a given struct uprobe instance, we
take another refcount to keep uprobe alive until user space code
returns from the function and triggers return handler.
The uprobes_tree's extra refcount of 1 is confusing and problematic. No
matter how many actual consumers are attached, they all share the same
refcount, and we have extra logic to drop the "last" (which might not
really be last) refcount once uprobe's consumer list becomes empty.
This is unconventional and has to be kept in mind as a special case all
the time. Further, because of this design we have situations where
find_uprobe() will find a uprobe, bump its refcount, and return it to the
caller, but that uprobe will still need an uprobe_is_active() check, after
which the caller is required to drop the refcount and try again. This is
just too many details leaking to the higher level logic.
This patch changes the refcounting scheme in such a way as to not have
uprobes_tree keeping an extra refcount for struct uprobe. Instead, each
uprobe_consumer assumes its own refcount, which will be dropped
when the consumer is unregistered. Other than that, all the active users of
uprobe (entry and return uprobe handling code) keep exactly the same
refcounting approach.
With the above setup, once uprobe's refcount drops to zero, we need to
make sure that uprobe's "destructor" removes uprobe from uprobes_tree,
of course. This, though, races with uprobe entry handling code in
handle_swbp(), whose find_active_uprobe()->find_uprobe() lookup can
observe a uprobe that is being destroyed after its refcount dropped to
zero (e.g., due to uprobe_consumer unregistering). So we add
try_get_uprobe(), which will attempt to bump the refcount, unless it is
already zero. The caller needs to guarantee that the uprobe instance won't
be freed in parallel, which is the case while we hold uprobes_treelock
(for read or write, doesn't matter).
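As a condensed sketch (mirroring find_uprobe()/__find_uprobe() in the
diff below), the resulting lookup pattern is:

	struct __uprobe_key key = { .inode = inode, .offset = offset };
	struct uprobe *uprobe;
	struct rb_node *node;

	read_lock(&uprobes_treelock);
	node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
	/* holding uprobes_treelock guarantees the node can't be freed here */
	uprobe = node ? try_get_uprobe(__node_2_uprobe(node)) : NULL;
	read_unlock(&uprobes_treelock);
	/* a non-NULL result means we now own our own refcount on the uprobe */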
Note also, we no longer leak the race between registration and
unregistration into higher-level logic, so we remove the retry logic
completely. If find_uprobe() returns a valid uprobe, it's guaranteed to
remain in uprobes_tree with a properly incremented refcount. The race is
handled by __insert_uprobe() and put_uprobe() working together: if
__insert_uprobe() can't bump the refcount of an existing entry, it removes
that entry from the RB-tree and retries inserting the new uprobe instance;
put_uprobe() won't attempt to remove a uprobe from the RB-tree if it's
already not there. All of that is protected by uprobes_treelock, which
keeps things simple.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/events/uprobes.c | 179 +++++++++++++++++++++++-----------------
1 file changed, 101 insertions(+), 78 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 30348f13d4a7..16bdfbb0900e 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -109,6 +109,11 @@ struct xol_area {
unsigned long vaddr; /* Page(s) of instruction slots */
};
+static void uprobe_warn(struct task_struct *t, const char *msg)
+{
+ pr_warn("uprobe: %s:%d failed to %s\n", current->comm, current->pid, msg);
+}
+
/*
* valid_vma: Verify if the specified vma is an executable vma
* Relax restrictions while unregistering: vm_flags might have
@@ -587,25 +592,53 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
*(uprobe_opcode_t *)&auprobe->insn);
}
+/* uprobe should have guaranteed positive refcount */
static struct uprobe *get_uprobe(struct uprobe *uprobe)
{
refcount_inc(&uprobe->ref);
return uprobe;
}
+/*
+ * uprobe should have guaranteed lifetime, which can be either of:
+ * - caller already has refcount taken (and wants an extra one);
+ * - uprobe is RCU protected and won't be freed until after grace period;
+ * - we are holding uprobes_treelock (for read or write, doesn't matter).
+ */
+static struct uprobe *try_get_uprobe(struct uprobe *uprobe)
+{
+ if (refcount_inc_not_zero(&uprobe->ref))
+ return uprobe;
+ return NULL;
+}
+
+static inline bool uprobe_is_active(struct uprobe *uprobe)
+{
+ return !RB_EMPTY_NODE(&uprobe->rb_node);
+}
+
static void put_uprobe(struct uprobe *uprobe)
{
- if (refcount_dec_and_test(&uprobe->ref)) {
- /*
- * If application munmap(exec_vma) before uprobe_unregister()
- * gets called, we don't get a chance to remove uprobe from
- * delayed_uprobe_list from remove_breakpoint(). Do it here.
- */
- mutex_lock(&delayed_uprobe_lock);
- delayed_uprobe_remove(uprobe, NULL);
- mutex_unlock(&delayed_uprobe_lock);
- kfree(uprobe);
- }
+ if (!refcount_dec_and_test(&uprobe->ref))
+ return;
+
+ write_lock(&uprobes_treelock);
+
+ if (uprobe_is_active(uprobe))
+ rb_erase(&uprobe->rb_node, &uprobes_tree);
+
+ write_unlock(&uprobes_treelock);
+
+ /*
+ * If application munmap(exec_vma) before uprobe_unregister()
+ * gets called, we don't get a chance to remove uprobe from
+ * delayed_uprobe_list from remove_breakpoint(). Do it here.
+ */
+ mutex_lock(&delayed_uprobe_lock);
+ delayed_uprobe_remove(uprobe, NULL);
+ mutex_unlock(&delayed_uprobe_lock);
+
+ kfree(uprobe);
}
static __always_inline
@@ -656,7 +689,7 @@ static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
if (node)
- return get_uprobe(__node_2_uprobe(node));
+ return try_get_uprobe(__node_2_uprobe(node));
return NULL;
}
@@ -676,26 +709,44 @@ static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
return uprobe;
}
+/*
+ * Attempt to insert a new uprobe into uprobes_tree.
+ *
+ * If uprobe already exists (for given inode+offset), we just increment
+ * refcount of previously existing uprobe.
+ *
+ * If not, a provided new instance of uprobe is inserted into the tree (with
+ * assumed initial refcount == 1).
+ *
+ * In any case, we return a uprobe instance that ends up being in uprobes_tree.
+ * Caller has to clean up new uprobe instance, if it ended up not being
+ * inserted into the tree.
+ *
+ * We assume that uprobes_treelock is held for writing.
+ */
static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
{
struct rb_node *node;
-
+again:
node = rb_find_add(&uprobe->rb_node, &uprobes_tree, __uprobe_cmp);
- if (node)
- return get_uprobe(__node_2_uprobe(node));
+ if (node) {
+ struct uprobe *u = __node_2_uprobe(node);
- /* get access + creation ref */
- refcount_set(&uprobe->ref, 2);
- return NULL;
+ if (!try_get_uprobe(u)) {
+ rb_erase(node, &uprobes_tree);
+ RB_CLEAR_NODE(&u->rb_node);
+ goto again;
+ }
+
+ return u;
+ }
+
+ return uprobe;
}
/*
- * Acquire uprobes_treelock.
- * Matching uprobe already exists in rbtree;
- * increment (access refcount) and return the matching uprobe.
- *
- * No matching uprobe; insert the uprobe in rb_tree;
- * get a double refcount (access + creation) and return NULL.
+ * Acquire uprobes_treelock and insert uprobe into uprobes_tree
+ * (or reuse existing one, see __insert_uprobe() comments above).
*/
static struct uprobe *insert_uprobe(struct uprobe *uprobe)
{
@@ -732,11 +783,13 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
uprobe->ref_ctr_offset = ref_ctr_offset;
init_rwsem(&uprobe->register_rwsem);
init_rwsem(&uprobe->consumer_rwsem);
+ RB_CLEAR_NODE(&uprobe->rb_node);
+ refcount_set(&uprobe->ref, 1);
/* add to uprobes_tree, sorted on inode:offset */
cur_uprobe = insert_uprobe(uprobe);
/* a uprobe exists for this inode:offset combination */
- if (cur_uprobe) {
+ if (cur_uprobe != uprobe) {
if (cur_uprobe->ref_ctr_offset != uprobe->ref_ctr_offset) {
ref_ctr_mismatch_warn(cur_uprobe, uprobe);
put_uprobe(cur_uprobe);
@@ -921,26 +974,6 @@ remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vad
return set_orig_insn(&uprobe->arch, mm, vaddr);
}
-static inline bool uprobe_is_active(struct uprobe *uprobe)
-{
- return !RB_EMPTY_NODE(&uprobe->rb_node);
-}
-/*
- * There could be threads that have already hit the breakpoint. They
- * will recheck the current insn and restart if find_uprobe() fails.
- * See find_active_uprobe().
- */
-static void delete_uprobe(struct uprobe *uprobe)
-{
- if (WARN_ON(!uprobe_is_active(uprobe)))
- return;
-
- write_lock(&uprobes_treelock);
- rb_erase(&uprobe->rb_node, &uprobes_tree);
- write_unlock(&uprobes_treelock);
- RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
-}
-
struct map_info {
struct map_info *next;
struct mm_struct *mm;
@@ -1094,17 +1127,13 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
int err;
down_write(&uprobe->register_rwsem);
- if (WARN_ON(!consumer_del(uprobe, uc)))
+ if (WARN_ON(!consumer_del(uprobe, uc))) {
err = -ENOENT;
- else
+ } else {
err = register_for_each_vma(uprobe, NULL);
-
- /* TODO : cant unregister? schedule a worker thread */
- if (!err) {
- if (!uprobe->consumers)
- delete_uprobe(uprobe);
- else
- err = -EBUSY;
+ /* TODO : cant unregister? schedule a worker thread */
+ if (unlikely(err))
+ uprobe_warn(current, "unregister, leaking uprobe");
}
up_write(&uprobe->register_rwsem);
@@ -1159,27 +1188,16 @@ struct uprobe *uprobe_register(struct inode *inode,
if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
return ERR_PTR(-EINVAL);
- retry:
uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
if (IS_ERR(uprobe))
return uprobe;
- /*
- * We can race with uprobe_unregister()->delete_uprobe().
- * Check uprobe_is_active() and retry if it is false.
- */
down_write(&uprobe->register_rwsem);
- ret = -EAGAIN;
- if (likely(uprobe_is_active(uprobe))) {
- consumer_add(uprobe, uc);
- ret = register_for_each_vma(uprobe, uc);
- }
+ consumer_add(uprobe, uc);
+ ret = register_for_each_vma(uprobe, uc);
up_write(&uprobe->register_rwsem);
- put_uprobe(uprobe);
if (ret) {
- if (unlikely(ret == -EAGAIN))
- goto retry;
uprobe_unregister(uprobe, uc);
return ERR_PTR(ret);
}
@@ -1286,15 +1304,17 @@ static void build_probe_list(struct inode *inode,
u = rb_entry(t, struct uprobe, rb_node);
if (u->inode != inode || u->offset < min)
break;
- list_add(&u->pending_list, head);
- get_uprobe(u);
+ /* if uprobe went away, it's safe to ignore it */
+ if (try_get_uprobe(u))
+ list_add(&u->pending_list, head);
}
for (t = n; (t = rb_next(t)); ) {
u = rb_entry(t, struct uprobe, rb_node);
if (u->inode != inode || u->offset > max)
break;
- list_add(&u->pending_list, head);
- get_uprobe(u);
+ /* if uprobe went away, it's safe to ignore it */
+ if (try_get_uprobe(u))
+ list_add(&u->pending_list, head);
}
}
read_unlock(&uprobes_treelock);
@@ -1752,6 +1772,12 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
return -ENOMEM;
*n = *o;
+ /*
+ * uprobe's refcnt has to be positive at this point, kept by
+ * utask->return_instances items; return_instances can't be
+ * removed right now, as task is blocked due to duping; so
+ * get_uprobe() is safe to use here.
+ */
get_uprobe(n->uprobe);
n->next = NULL;
@@ -1763,12 +1789,6 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
return 0;
}
-static void uprobe_warn(struct task_struct *t, const char *msg)
-{
- pr_warn("uprobe: %s:%d failed to %s\n",
- current->comm, current->pid, msg);
-}
-
static void dup_xol_work(struct callback_head *work)
{
if (current->flags & PF_EXITING)
@@ -1894,7 +1914,10 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
}
orig_ret_vaddr = utask->return_instances->orig_ret_vaddr;
}
-
+ /*
+ * uprobe's refcnt is positive, held by caller, so it's safe to
+ * unconditionally bump it one more time here
+ */
ri->uprobe = get_uprobe(uprobe);
ri->func = instruction_pointer(regs);
ri->stack = user_stack_pointer(regs);
--
2.43.5
* [PATCH v2 2/6] uprobes: protected uprobe lifetime with SRCU
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
To avoid unnecessarily taking a (brief) refcount on uprobe during
breakpoint handling in handle_swbp() for entry uprobes, make find_uprobe()
not take a refcount, and instead protect the lifetime of a uprobe instance
with RCU. This improves scalability, as refcounting gets quite expensive
due to cache line bouncing between multiple CPUs.
Specifically, we utilize our own uprobe-specific SRCU instance for this
RCU protection. put_uprobe() will delay the actual kfree() using call_srcu().
For now, uretprobe and single-stepping handling will still acquire a
refcount as necessary. We'll address these issues in follow-up patches
by making them use SRCU with timeout.
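A minimal sketch of the SRCU lifetime pattern this patch applies
(identifiers from the diff below; the handler body is elided):

	/* reader side (handle_swbp()): no refcount taken on the found uprobe */
	srcu_idx = srcu_read_lock(&uprobes_srcu);
	uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp);
	if (uprobe) {
		/* ... handle the breakpoint; uprobe can't be freed here ... */
	}
	srcu_read_unlock(&uprobes_srcu, srcu_idx);

	/* writer side (last put_uprobe()): defer kfree() past all readers */
	call_srcu(&uprobes_srcu, &uprobe->rcu, uprobe_free_rcu);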
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/events/uprobes.c | 98 ++++++++++++++++++++++++-----------------
1 file changed, 57 insertions(+), 41 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 16bdfbb0900e..11c97f178dbf 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -41,6 +41,8 @@ static struct rb_root uprobes_tree = RB_ROOT;
static DEFINE_RWLOCK(uprobes_treelock); /* serialize rbtree access */
+DEFINE_STATIC_SRCU(uprobes_srcu);
+
#define UPROBES_HASH_SZ 13
/* serialize uprobe->pending_list */
static struct mutex uprobes_mmap_mutex[UPROBES_HASH_SZ];
@@ -52,7 +54,10 @@ DEFINE_STATIC_PERCPU_RWSEM(dup_mmap_sem);
#define UPROBE_COPY_INSN 0
struct uprobe {
- struct rb_node rb_node; /* node in the rb tree */
+ union {
+ struct rb_node rb_node; /* node in the rb tree */
+ struct rcu_head rcu; /* mutually exclusive with rb_node */
+ };
refcount_t ref;
struct rw_semaphore register_rwsem;
struct rw_semaphore consumer_rwsem;
@@ -617,6 +622,13 @@ static inline bool uprobe_is_active(struct uprobe *uprobe)
return !RB_EMPTY_NODE(&uprobe->rb_node);
}
+static void uprobe_free_rcu(struct rcu_head *rcu)
+{
+ struct uprobe *uprobe = container_of(rcu, struct uprobe, rcu);
+
+ kfree(uprobe);
+}
+
static void put_uprobe(struct uprobe *uprobe)
{
if (!refcount_dec_and_test(&uprobe->ref))
@@ -638,7 +650,7 @@ static void put_uprobe(struct uprobe *uprobe)
delayed_uprobe_remove(uprobe, NULL);
mutex_unlock(&delayed_uprobe_lock);
- kfree(uprobe);
+ call_srcu(&uprobes_srcu, &uprobe->rcu, uprobe_free_rcu);
}
static __always_inline
@@ -680,33 +692,25 @@ static inline int __uprobe_cmp(struct rb_node *a, const struct rb_node *b)
return uprobe_cmp(u->inode, u->offset, __node_2_uprobe(b));
}
-static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
+/*
+ * Assumes being inside RCU protected region.
+ * No refcount is taken on returned uprobe.
+ */
+static struct uprobe *find_uprobe_rcu(struct inode *inode, loff_t offset)
{
struct __uprobe_key key = {
.inode = inode,
.offset = offset,
};
- struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
-
- if (node)
- return try_get_uprobe(__node_2_uprobe(node));
-
- return NULL;
-}
+ struct rb_node *node;
-/*
- * Find a uprobe corresponding to a given inode:offset
- * Acquires uprobes_treelock
- */
-static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
-{
- struct uprobe *uprobe;
+ lockdep_assert(srcu_read_lock_held(&uprobes_srcu));
read_lock(&uprobes_treelock);
- uprobe = __find_uprobe(inode, offset);
+ node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
read_unlock(&uprobes_treelock);
- return uprobe;
+ return node ? __node_2_uprobe(node) : NULL;
}
/*
@@ -1080,10 +1084,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
goto free;
/*
* We take mmap_lock for writing to avoid the race with
- * find_active_uprobe() which takes mmap_lock for reading.
+ * find_active_uprobe_rcu() which takes mmap_lock for reading.
* Thus this install_breakpoint() can not make
- * is_trap_at_addr() true right after find_uprobe()
- * returns NULL in find_active_uprobe().
+ * is_trap_at_addr() true right after find_uprobe_rcu()
+ * returns NULL in find_active_uprobe_rcu().
*/
mmap_write_lock(mm);
vma = find_vma(mm, info->vaddr);
@@ -1885,9 +1889,13 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
return;
}
+ /* we need to bump refcount to store uprobe in utask */
+ if (!try_get_uprobe(uprobe))
+ return;
+
ri = kmalloc(sizeof(struct return_instance), GFP_KERNEL);
if (!ri)
- return;
+ goto fail;
trampoline_vaddr = uprobe_get_trampoline_vaddr();
orig_ret_vaddr = arch_uretprobe_hijack_return_addr(trampoline_vaddr, regs);
@@ -1914,11 +1922,7 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
}
orig_ret_vaddr = utask->return_instances->orig_ret_vaddr;
}
- /*
- * uprobe's refcnt is positive, held by caller, so it's safe to
- * unconditionally bump it one more time here
- */
- ri->uprobe = get_uprobe(uprobe);
+ ri->uprobe = uprobe;
ri->func = instruction_pointer(regs);
ri->stack = user_stack_pointer(regs);
ri->orig_ret_vaddr = orig_ret_vaddr;
@@ -1929,8 +1933,9 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
utask->return_instances = ri;
return;
- fail:
+fail:
kfree(ri);
+ put_uprobe(uprobe);
}
/* Prepare to single-step probed instruction out of line. */
@@ -1945,9 +1950,14 @@ pre_ssout(struct uprobe *uprobe, struct pt_regs *regs, unsigned long bp_vaddr)
if (!utask)
return -ENOMEM;
+ if (!try_get_uprobe(uprobe))
+ return -EINVAL;
+
xol_vaddr = xol_get_insn_slot(uprobe);
- if (!xol_vaddr)
- return -ENOMEM;
+ if (!xol_vaddr) {
+ err = -ENOMEM;
+ goto err_out;
+ }
utask->xol_vaddr = xol_vaddr;
utask->vaddr = bp_vaddr;
@@ -1955,12 +1965,15 @@ pre_ssout(struct uprobe *uprobe, struct pt_regs *regs, unsigned long bp_vaddr)
err = arch_uprobe_pre_xol(&uprobe->arch, regs);
if (unlikely(err)) {
xol_free_insn_slot(current);
- return err;
+ goto err_out;
}
utask->active_uprobe = uprobe;
utask->state = UTASK_SSTEP;
return 0;
+err_out:
+ put_uprobe(uprobe);
+ return err;
}
/*
@@ -2044,7 +2057,8 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
return is_trap_insn(&opcode);
}
-static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
+/* assumes being inside RCU protected region */
+static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
{
struct mm_struct *mm = current->mm;
struct uprobe *uprobe = NULL;
@@ -2057,7 +2071,7 @@ static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
struct inode *inode = file_inode(vma->vm_file);
loff_t offset = vaddr_to_offset(vma, bp_vaddr);
- uprobe = find_uprobe(inode, offset);
+ uprobe = find_uprobe_rcu(inode, offset);
}
if (!uprobe)
@@ -2203,13 +2217,15 @@ static void handle_swbp(struct pt_regs *regs)
{
struct uprobe *uprobe;
unsigned long bp_vaddr;
- int is_swbp;
+ int is_swbp, srcu_idx;
bp_vaddr = uprobe_get_swbp_addr(regs);
if (bp_vaddr == uprobe_get_trampoline_vaddr())
return uprobe_handle_trampoline(regs);
- uprobe = find_active_uprobe(bp_vaddr, &is_swbp);
+ srcu_idx = srcu_read_lock(&uprobes_srcu);
+
+ uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp);
if (!uprobe) {
if (is_swbp > 0) {
/* No matching uprobe; signal SIGTRAP. */
@@ -2225,7 +2241,7 @@ static void handle_swbp(struct pt_regs *regs)
*/
instruction_pointer_set(regs, bp_vaddr);
}
- return;
+ goto out;
}
/* change it in advance for ->handler() and restart */
@@ -2260,12 +2276,12 @@ static void handle_swbp(struct pt_regs *regs)
if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
goto out;
- if (!pre_ssout(uprobe, regs, bp_vaddr))
- return;
+ if (pre_ssout(uprobe, regs, bp_vaddr))
+ goto out;
- /* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
out:
- put_uprobe(uprobe);
+ /* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
+ srcu_read_unlock(&uprobes_srcu, srcu_idx);
}
/*
--
2.43.5
* Re: [PATCH v2 2/6] uprobes: protected uprobe lifetime with SRCU
From: Oleg Nesterov @ 2024-08-08 10:20 UTC
To: Andrii Nakryiko
Cc: linux-trace-kernel, peterz, rostedt, mhiramat, bpf, linux-kernel,
jolsa, paulmck
On 08/07, Andrii Nakryiko wrote:
>
> struct uprobe {
> - struct rb_node rb_node; /* node in the rb tree */
> + union {
> + struct rb_node rb_node; /* node in the rb tree */
> + struct rcu_head rcu; /* mutually exclusive with rb_node */
Andrii, I am sorry.
I suggested this in reply to 3/8 before I read
[PATCH 7/8] uprobes: perform lockless SRCU-protected uprobes_tree lookup
I have no idea if rb_erase() is rcu-safe or not, but this union certainly
doesn't look right if we use rb_find_rcu/etc.
Yes, this version doesn't include the SRCU-protected uprobes_tree changes,
but still...
Oleg.
* Re: [PATCH v2 2/6] uprobes: protected uprobe lifetime with SRCU
From: Andrii Nakryiko @ 2024-08-08 16:58 UTC
To: Oleg Nesterov
Cc: Andrii Nakryiko, linux-trace-kernel, peterz, rostedt, mhiramat,
bpf, linux-kernel, jolsa, paulmck
On Thu, Aug 8, 2024 at 3:20 AM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 08/07, Andrii Nakryiko wrote:
> >
> > struct uprobe {
> > - struct rb_node rb_node; /* node in the rb tree */
> > + union {
> > + struct rb_node rb_node; /* node in the rb tree */
> > + struct rcu_head rcu; /* mutually exclusive with rb_node */
>
> Andrii, I am sorry.
>
> I suggested this in reply to 3/8 before I read
> [PATCH 7/8] uprobes: perform lockless SRCU-protected uprobes_tree lookup
>
> I have no idea if rb_erase() is rcu-safe or not, but this union certainly
> doesn't look right if we use rb_find_rcu/etc.
>
Ah, because put_uprobe() might be fast enough to remove uprobe from
the tree, process delayed_uprobe_remove() and then enqueue the
uprobe_free_rcu() callback (which would use the rcu field here,
overwriting rb_node), while we are still doing a lockless lookup,
finding this overwritten rb_node. Good catch; if that's the case (and
I'm testing all this right now), then it's an easy fix.
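To spell out the race (an illustrative timeline only; the rb_node/rcu
union is from this patch, rb_find_rcu() from the v1 lockless-lookup patch
referenced above):

	lockless lookup (SRCU reader)        last put_uprobe()
	-----------------------------        -----------------------------
	rb_find_rcu() walks the tree,
	dereferencing u->rb_node links
	                                     rb_erase(&u->rb_node, ...);
	                                     call_srcu(..., &u->rcu, ...);
	                                     /* u->rcu aliases u->rb_node,
	                                        clobbering the link pointers */
	follows a now-garbage pointer
	read from the clobbered node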
It would also explain why I initially didn't get any crashes for
lockless RB-tree lookup with uprobe-stress (I was really surprised
that I "missed" the crash initially).
Thanks!
> Yes, this version doesn't include the SRCU-protected uprobes_tree changes,
> but still...
>
> Oleg.
>
* Re: [PATCH v2 2/6] uprobes: protected uprobe lifetime with SRCU
From: Andrii Nakryiko @ 2024-08-08 17:51 UTC
To: Oleg Nesterov
Cc: Andrii Nakryiko, linux-trace-kernel, peterz, rostedt, mhiramat,
bpf, linux-kernel, jolsa, paulmck
On Thu, Aug 8, 2024 at 9:58 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Aug 8, 2024 at 3:20 AM Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > On 08/07, Andrii Nakryiko wrote:
> > >
> > > struct uprobe {
> > > - struct rb_node rb_node; /* node in the rb tree */
> > > + union {
> > > + struct rb_node rb_node; /* node in the rb tree */
> > > + struct rcu_head rcu; /* mutually exclusive with rb_node */
> >
> > Andrii, I am sorry.
> >
> > I suggested this in reply to 3/8 before I read
> > [PATCH 7/8] uprobes: perform lockless SRCU-protected uprobes_tree lookup
> >
> > I have no idea if rb_erase() is rcu-safe or not, but this union certainly
> > doesn't look right if we use rb_find_rcu/etc.
> >
>
> Ah, because put_uprobe() might be fast enough to remove uprobe from
> the tree, process delayed_uprobe_remove() and then enqueue
> uprobe_free_rcu() callback (which would use rcu field here,
> overwriting rb_node), while we are still doing a lockless lookup,
> finding this overwritten rb_node . Good catch, if that's the case (and
> I'm testing all this right now), then it's an easy fix.
>
> It would also explain why I initially didn't get any crashes for
> lockless RB-tree lookup with uprobe-stress (I was really surprised
> that I "missed" the crash initially).
>
> Thanks!
I can confirm that the crash went away. Previously it was crashing
after a few minutes, but now it's running for almost an hour with no
problem. Phew, I was worried there for a bit, but it seems like we are
back to the "everything is fine" state.
Okay, I'll incorporate this fix and synchronize_srcu() locally, will
give it a few more days, maybe Peter will want to take another look.
Will send a new revision early next week.
>
>
> > Yes, this version doesn't include the SRCU-protected uprobes_tree changes,
> > but still...
> >
> > Oleg.
> >
* [PATCH v2 3/6] uprobes: get rid of enum uprobe_filter_ctx in uprobe filter callbacks
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
It serves no purpose beyond adding an unnecessary argument passed to the
filter callback. Just get rid of it; no one is actually using it.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
include/linux/uprobes.h | 10 +---------
kernel/events/uprobes.c | 18 +++++++-----------
kernel/trace/bpf_trace.c | 3 +--
kernel/trace/trace_uprobe.c | 9 +++------
4 files changed, 12 insertions(+), 28 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index f50df1fa93e7..63ae2ade3487 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -28,20 +28,12 @@ struct page;
#define MAX_URETPROBE_DEPTH 64
-enum uprobe_filter_ctx {
- UPROBE_FILTER_REGISTER,
- UPROBE_FILTER_UNREGISTER,
- UPROBE_FILTER_MMAP,
-};
-
struct uprobe_consumer {
int (*handler)(struct uprobe_consumer *self, struct pt_regs *regs);
int (*ret_handler)(struct uprobe_consumer *self,
unsigned long func,
struct pt_regs *regs);
- bool (*filter)(struct uprobe_consumer *self,
- enum uprobe_filter_ctx ctx,
- struct mm_struct *mm);
+ bool (*filter)(struct uprobe_consumer *self, struct mm_struct *mm);
struct uprobe_consumer *next;
};
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 11c97f178dbf..9b31235bc177 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -920,21 +920,19 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
return ret;
}
-static inline bool consumer_filter(struct uprobe_consumer *uc,
- enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static inline bool consumer_filter(struct uprobe_consumer *uc, struct mm_struct *mm)
{
- return !uc->filter || uc->filter(uc, ctx, mm);
+ return !uc->filter || uc->filter(uc, mm);
}
-static bool filter_chain(struct uprobe *uprobe,
- enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
{
struct uprobe_consumer *uc;
bool ret = false;
down_read(&uprobe->consumer_rwsem);
for (uc = uprobe->consumers; uc; uc = uc->next) {
- ret = consumer_filter(uc, ctx, mm);
+ ret = consumer_filter(uc, mm);
if (ret)
break;
}
@@ -1101,12 +1099,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
if (is_register) {
/* consult only the "caller", new consumer. */
- if (consumer_filter(new,
- UPROBE_FILTER_REGISTER, mm))
+ if (consumer_filter(new, mm))
err = install_breakpoint(uprobe, mm, vma, info->vaddr);
} else if (test_bit(MMF_HAS_UPROBES, &mm->flags)) {
- if (!filter_chain(uprobe,
- UPROBE_FILTER_UNREGISTER, mm))
+ if (!filter_chain(uprobe, mm))
err |= remove_breakpoint(uprobe, mm, info->vaddr);
}
@@ -1389,7 +1385,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
*/
list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
if (!fatal_signal_pending(current) &&
- filter_chain(uprobe, UPROBE_FILTER_MMAP, vma->vm_mm)) {
+ filter_chain(uprobe, vma->vm_mm)) {
unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4e391daafa64..73c570b5988b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3320,8 +3320,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
}
static bool
-uprobe_multi_link_filter(struct uprobe_consumer *con, enum uprobe_filter_ctx ctx,
- struct mm_struct *mm)
+uprobe_multi_link_filter(struct uprobe_consumer *con, struct mm_struct *mm)
{
struct bpf_uprobe *uprobe;
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 52e76a73fa7c..7eb79e0a5352 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1078,9 +1078,7 @@ print_uprobe_event(struct trace_iterator *iter, int flags, struct trace_event *e
return trace_handle_return(s);
}
-typedef bool (*filter_func_t)(struct uprobe_consumer *self,
- enum uprobe_filter_ctx ctx,
- struct mm_struct *mm);
+typedef bool (*filter_func_t)(struct uprobe_consumer *self, struct mm_struct *mm);
static int trace_uprobe_enable(struct trace_uprobe *tu, filter_func_t filter)
{
@@ -1339,8 +1337,7 @@ static int uprobe_perf_open(struct trace_event_call *call,
return err;
}
-static bool uprobe_perf_filter(struct uprobe_consumer *uc,
- enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static bool uprobe_perf_filter(struct uprobe_consumer *uc, struct mm_struct *mm)
{
struct trace_uprobe_filter *filter;
struct trace_uprobe *tu;
@@ -1426,7 +1423,7 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
static int uprobe_perf_func(struct trace_uprobe *tu, struct pt_regs *regs,
struct uprobe_cpu_buffer **ucbp)
{
- if (!uprobe_perf_filter(&tu->consumer, 0, current->mm))
+ if (!uprobe_perf_filter(&tu->consumer, current->mm))
return UPROBE_HANDLER_REMOVE;
if (!is_ret_probe(tu))
--
2.43.5
* [PATCH v2 4/6] uprobes: traverse uprobe's consumer list locklessly under SRCU protection
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
uprobe->register_rwsem is one of a few big bottlenecks to scalability of
uprobes, so we need to get rid of it to improve uprobe performance and
multi-CPU scalability.
First, we turn uprobe's consumer list into a typical doubly-linked list
and utilize existing RCU-aware helpers for traversing such lists, as
well as for adding and removing elements from them.
For entry uprobes we already have SRCU protection active since before
uprobe lookup. For uretprobes we keep the refcount, guaranteeing that the
uprobe won't go away from under us, but we add SRCU protection around
consumer list traversal.
Lastly, to keep handler_chain()'s UPROBE_HANDLER_REMOVE handling simple,
we remember whether any removal was requested during handler calls, but
then we double-check the decision under a proper register_rwsem using
consumers' filter callbacks (see the condensed sketch below). Handler
removal is very rare, so this extra lock won't hurt overall performance,
and we also avoid the need for any extra protection (e.g., seqcount locks).
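Condensed from handler_chain() in the diff below, the lockless handler
run plus the locked removal re-check look like this (sketch only):

	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
				 srcu_read_lock_held(&uprobes_srcu)) {
		int rc = 0;

		if (uc->handler)
			rc = uc->handler(uc, regs);	/* no register_rwsem held */
		remove &= rc;		/* all consumers must vote to remove */
		has_consumers = true;
	}

	if (remove && has_consumers) {
		down_read(&uprobe->register_rwsem);
		/* removal is rare: re-check via filters, this time under lock */
		if (!filter_chain(uprobe, current->mm))
			unapply_uprobe(uprobe, current->mm);
		up_read(&uprobe->register_rwsem);
	}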
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
include/linux/uprobes.h | 2 +-
kernel/events/uprobes.c | 110 +++++++++++++++++++++-------------------
2 files changed, 60 insertions(+), 52 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 63ae2ade3487..f67f8d98c3c6 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -35,7 +35,7 @@ struct uprobe_consumer {
struct pt_regs *regs);
bool (*filter)(struct uprobe_consumer *self, struct mm_struct *mm);
- struct uprobe_consumer *next;
+ struct list_head cons_node;
};
#ifdef CONFIG_UPROBES
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 9b31235bc177..5bddd20b7053 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -62,7 +62,7 @@ struct uprobe {
struct rw_semaphore register_rwsem;
struct rw_semaphore consumer_rwsem;
struct list_head pending_list;
- struct uprobe_consumer *consumers;
+ struct list_head consumers;
struct inode *inode; /* Also hold a ref to inode */
loff_t offset;
loff_t ref_ctr_offset;
@@ -785,6 +785,7 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
uprobe->inode = inode;
uprobe->offset = offset;
uprobe->ref_ctr_offset = ref_ctr_offset;
+ INIT_LIST_HEAD(&uprobe->consumers);
init_rwsem(&uprobe->register_rwsem);
init_rwsem(&uprobe->consumer_rwsem);
RB_CLEAR_NODE(&uprobe->rb_node);
@@ -810,34 +811,10 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
{
down_write(&uprobe->consumer_rwsem);
- uc->next = uprobe->consumers;
- uprobe->consumers = uc;
+ list_add_rcu(&uc->cons_node, &uprobe->consumers);
up_write(&uprobe->consumer_rwsem);
}
-/*
- * For uprobe @uprobe, delete the consumer @uc.
- * Return true if the @uc is deleted successfully
- * or return false.
- */
-static bool consumer_del(struct uprobe *uprobe, struct uprobe_consumer *uc)
-{
- struct uprobe_consumer **con;
- bool ret = false;
-
- down_write(&uprobe->consumer_rwsem);
- for (con = &uprobe->consumers; *con; con = &(*con)->next) {
- if (*con == uc) {
- *con = uc->next;
- ret = true;
- break;
- }
- }
- up_write(&uprobe->consumer_rwsem);
-
- return ret;
-}
-
static int __copy_insn(struct address_space *mapping, struct file *filp,
void *insn, int nbytes, loff_t offset)
{
@@ -931,7 +908,8 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
bool ret = false;
down_read(&uprobe->consumer_rwsem);
- for (uc = uprobe->consumers; uc; uc = uc->next) {
+ list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+ srcu_read_lock_held(&uprobes_srcu)) {
ret = consumer_filter(uc, mm);
if (ret)
break;
@@ -1127,18 +1105,30 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
int err;
down_write(&uprobe->register_rwsem);
- if (WARN_ON(!consumer_del(uprobe, uc))) {
- err = -ENOENT;
- } else {
- err = register_for_each_vma(uprobe, NULL);
- /* TODO : cant unregister? schedule a worker thread */
- if (unlikely(err))
- uprobe_warn(current, "unregister, leaking uprobe");
- }
+
+ list_del_rcu(&uc->cons_node);
+ err = register_for_each_vma(uprobe, NULL);
+
up_write(&uprobe->register_rwsem);
- if (!err)
- put_uprobe(uprobe);
+ /* TODO : cant unregister? schedule a worker thread */
+ if (unlikely(err)) {
+ uprobe_warn(current, "unregister, leaking uprobe");
+ return;
+ }
+
+ put_uprobe(uprobe);
+
+ /*
+ * Now that handler_chain() and handle_uretprobe_chain() iterate over
+ * uprobe->consumers list under RCU protection without holding
+ * uprobe->register_rwsem, we need to wait for RCU grace period to
+ * make sure that we can't call into just unregistered
+ * uprobe_consumer's callbacks anymore. If we don't do that, fast and
+ * unlucky enough caller can free consumer's memory and cause
+ * handler_chain() or handle_uretprobe_chain() to do a use-after-free.
+ */
+ synchronize_srcu(&uprobes_srcu);
}
EXPORT_SYMBOL_GPL(uprobe_unregister);
@@ -1216,13 +1206,20 @@ EXPORT_SYMBOL_GPL(uprobe_register);
int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
{
struct uprobe_consumer *con;
- int ret = -ENOENT;
+ int ret = -ENOENT, srcu_idx;
down_write(&uprobe->register_rwsem);
- for (con = uprobe->consumers; con && con != uc ; con = con->next)
- ;
- if (con)
- ret = register_for_each_vma(uprobe, add ? uc : NULL);
+
+ srcu_idx = srcu_read_lock(&uprobes_srcu);
+ list_for_each_entry_srcu(con, &uprobe->consumers, cons_node,
+ srcu_read_lock_held(&uprobes_srcu)) {
+ if (con == uc) {
+ ret = register_for_each_vma(uprobe, add ? uc : NULL);
+ break;
+ }
+ }
+ srcu_read_unlock(&uprobes_srcu, srcu_idx);
+
up_write(&uprobe->register_rwsem);
return ret;
@@ -2088,10 +2085,12 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
struct uprobe_consumer *uc;
int remove = UPROBE_HANDLER_REMOVE;
bool need_prep = false; /* prepare return uprobe, when needed */
+ bool has_consumers = false;
- down_read(&uprobe->register_rwsem);
current->utask->auprobe = &uprobe->arch;
- for (uc = uprobe->consumers; uc; uc = uc->next) {
+
+ list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+ srcu_read_lock_held(&uprobes_srcu)) {
int rc = 0;
if (uc->handler) {
@@ -2104,17 +2103,24 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
need_prep = true;
remove &= rc;
+ has_consumers = true;
}
current->utask->auprobe = NULL;
if (need_prep && !remove)
prepare_uretprobe(uprobe, regs); /* put bp at return */
- if (remove && uprobe->consumers) {
- WARN_ON(!uprobe_is_active(uprobe));
- unapply_uprobe(uprobe, current->mm);
+ if (remove && has_consumers) {
+ down_read(&uprobe->register_rwsem);
+
+ /* re-check that removal is still required, this time under lock */
+ if (!filter_chain(uprobe, current->mm)) {
+ WARN_ON(!uprobe_is_active(uprobe));
+ unapply_uprobe(uprobe, current->mm);
+ }
+
+ up_read(&uprobe->register_rwsem);
}
- up_read(&uprobe->register_rwsem);
}
static void
@@ -2122,13 +2128,15 @@ handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
{
struct uprobe *uprobe = ri->uprobe;
struct uprobe_consumer *uc;
+ int srcu_idx;
- down_read(&uprobe->register_rwsem);
- for (uc = uprobe->consumers; uc; uc = uc->next) {
+ srcu_idx = srcu_read_lock(&uprobes_srcu);
+ list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+ srcu_read_lock_held(&uprobes_srcu)) {
if (uc->ret_handler)
uc->ret_handler(uc, ri->func, regs);
}
- up_read(&uprobe->register_rwsem);
+ srcu_read_unlock(&uprobes_srcu, srcu_idx);
}
static struct return_instance *find_next_ret_chain(struct return_instance *ri)
--
2.43.5
* Re: [PATCH v2 4/6] uprobes: traverse uprobe's consumer list locklessly under SRCU protection
From: Oleg Nesterov @ 2024-08-08 14:40 UTC
To: Andrii Nakryiko
Cc: linux-trace-kernel, peterz, rostedt, mhiramat, bpf, linux-kernel,
jolsa, paulmck
On 08/07, Andrii Nakryiko wrote:
>
> @@ -1127,18 +1105,30 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
> int err;
>
> down_write(&uprobe->register_rwsem);
> - if (WARN_ON(!consumer_del(uprobe, uc))) {
> - err = -ENOENT;
> - } else {
> - err = register_for_each_vma(uprobe, NULL);
> - /* TODO : cant unregister? schedule a worker thread */
> - if (unlikely(err))
> - uprobe_warn(current, "unregister, leaking uprobe");
> - }
> +
> + list_del_rcu(&uc->cons_node);
> + err = register_for_each_vma(uprobe, NULL);
> +
> up_write(&uprobe->register_rwsem);
>
> - if (!err)
> - put_uprobe(uprobe);
> + /* TODO : cant unregister? schedule a worker thread */
> + if (unlikely(err)) {
> + uprobe_warn(current, "unregister, leaking uprobe");
> + return;
Looks wrong... We can (should) skip put_uprobe(), but we can't avoid
synchronize_srcu().
The caller can free the consumer right after return. You even added
a fat comment below.
Yes, the problem will go away after you split it into nosync/sync, but
still.
Oleg.
* Re: [PATCH v2 4/6] uprobes: traverse uprobe's consumer list locklessly under SRCU protection
From: Andrii Nakryiko @ 2024-08-08 17:50 UTC
To: Oleg Nesterov
Cc: Andrii Nakryiko, linux-trace-kernel, peterz, rostedt, mhiramat,
bpf, linux-kernel, jolsa, paulmck
On Thu, Aug 8, 2024 at 7:40 AM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 08/07, Andrii Nakryiko wrote:
> >
> > @@ -1127,18 +1105,30 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
> > int err;
> >
> > down_write(&uprobe->register_rwsem);
> > - if (WARN_ON(!consumer_del(uprobe, uc))) {
> > - err = -ENOENT;
> > - } else {
> > - err = register_for_each_vma(uprobe, NULL);
> > - /* TODO : cant unregister? schedule a worker thread */
> > - if (unlikely(err))
> > - uprobe_warn(current, "unregister, leaking uprobe");
> > - }
> > +
> > + list_del_rcu(&uc->cons_node);
> > + err = register_for_each_vma(uprobe, NULL);
> > +
> > up_write(&uprobe->register_rwsem);
> >
> > - if (!err)
> > - put_uprobe(uprobe);
> > + /* TODO : cant unregister? schedule a worker thread */
> > + if (unlikely(err)) {
> > + uprobe_warn(current, "unregister, leaking uprobe");
> > + return;
>
> Looks wrong... We can (should) skip put_uprobe(), but we can't avoid
> synchronize_srcu().
>
> The caller can free the consumer right after return. You even added
> a fat comment below.
>
Yep, totally my bad, you are right. I'll add a goto synchronize (and
yep, we'll later remove it, but we should be thorough here).
> Yes, the problem will go away after you split it into nosync/sync, but
> still.
>
> Oleg.
>
* [PATCH v2 5/6] perf/uprobe: split uprobe_unregister()
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
From: Peter Zijlstra <peterz@infradead.org>
With uprobe_unregister() having grown a synchronize_srcu(), it becomes
fairly slow to call. Especially since both users of this API call it in a
loop.
Peel off the synchronize_srcu() and do it once, after the loop (see the
sketch below).
We also need to add uprobe_unregister_sync() into uprobe_register()'s
error handling path, as we need to be careful not to return to the
caller before we have a guarantee that a partially attached consumer won't
be called anymore. This is an unlikely slow path, and it should be
totally fine for it to be slow in the case of a failed attach.
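A minimal caller sketch (it mirrors the converted bpf_uprobe_unregister()
below): unregister N consumers first, then pay for a single grace period:

	for (i = 0; i < cnt; i++)
		uprobe_unregister_nosync(uprobes[i].uprobe, &uprobes[i].consumer);
	if (cnt)
		uprobe_unregister_sync();  /* one synchronize_srcu() for the batch */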
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
include/linux/uprobes.h | 8 ++++++--
kernel/events/uprobes.c | 18 ++++++++++++++----
kernel/trace/bpf_trace.c | 5 ++++-
kernel/trace/trace_uprobe.c | 6 +++++-
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 3 ++-
5 files changed, 31 insertions(+), 9 deletions(-)
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index f67f8d98c3c6..3a3154b74fe0 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -107,7 +107,8 @@ extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
-extern void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc);
+extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
+extern void uprobe_unregister_sync(void);
extern int uprobe_mmap(struct vm_area_struct *vma);
extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
extern void uprobe_start_dup_mmap(void);
@@ -156,7 +157,10 @@ uprobe_apply(struct uprobe* uprobe, struct uprobe_consumer *uc, bool add)
return -ENOSYS;
}
static inline void
-uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
+uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
+{
+}
+static inline void uprobe_unregister_sync(void)
{
}
static inline int uprobe_mmap(struct vm_area_struct *vma)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 5bddd20b7053..419085741850 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1096,11 +1096,11 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
}
/**
- * uprobe_unregister - unregister an already registered probe.
+ * uprobe_unregister_nosync - unregister an already registered probe.
* @uprobe: uprobe to remove
* @uc: identify which probe if multiple probes are colocated.
*/
-void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
+void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
{
int err;
@@ -1118,7 +1118,11 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
}
put_uprobe(uprobe);
+}
+EXPORT_SYMBOL_GPL(uprobe_unregister_nosync);
+void uprobe_unregister_sync(void)
+{
/*
* Now that handler_chain() and handle_uretprobe_chain() iterate over
* uprobe->consumers list under RCU protection without holding
@@ -1130,7 +1134,7 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
*/
synchronize_srcu(&uprobes_srcu);
}
-EXPORT_SYMBOL_GPL(uprobe_unregister);
+EXPORT_SYMBOL_GPL(uprobe_unregister_sync);
/**
* uprobe_register - register a probe
@@ -1188,7 +1192,13 @@ struct uprobe *uprobe_register(struct inode *inode,
up_write(&uprobe->register_rwsem);
if (ret) {
- uprobe_unregister(uprobe, uc);
+ uprobe_unregister_nosync(uprobe, uc);
+ /*
+ * Registration might have partially succeeded, so we can have
+ * this consumer being called right at this time. We need to
+ * sync here. It's ok, it's unlikely slow path.
+ */
+ uprobe_unregister_sync();
return ERR_PTR(ret);
}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 73c570b5988b..6b632710c98e 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3184,7 +3184,10 @@ static void bpf_uprobe_unregister(struct bpf_uprobe *uprobes, u32 cnt)
u32 i;
for (i = 0; i < cnt; i++)
- uprobe_unregister(uprobes[i].uprobe, &uprobes[i].consumer);
+ uprobe_unregister_nosync(uprobes[i].uprobe, &uprobes[i].consumer);
+
+ if (cnt)
+ uprobe_unregister_sync();
}
static void bpf_uprobe_multi_link_release(struct bpf_link *link)
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 7eb79e0a5352..f7443e996b1b 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1097,6 +1097,7 @@ static int trace_uprobe_enable(struct trace_uprobe *tu, filter_func_t filter)
static void __probe_event_disable(struct trace_probe *tp)
{
struct trace_uprobe *tu;
+ bool sync = false;
tu = container_of(tp, struct trace_uprobe, tp);
WARN_ON(!uprobe_filter_is_empty(tu->tp.event->filter));
@@ -1105,9 +1106,12 @@ static void __probe_event_disable(struct trace_probe *tp)
if (!tu->uprobe)
continue;
- uprobe_unregister(tu->uprobe, &tu->consumer);
+ uprobe_unregister_nosync(tu->uprobe, &tu->consumer);
+ sync = true;
tu->uprobe = NULL;
}
+ if (sync)
+ uprobe_unregister_sync();
}
static int probe_event_enable(struct trace_event_call *call,
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 3c0515a27842..1fc16657cf42 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -475,7 +475,8 @@ static void testmod_unregister_uprobe(void)
mutex_lock(&testmod_uprobe_mutex);
if (uprobe.uprobe) {
- uprobe_unregister(uprobe.uprobe, &uprobe.consumer);
+ uprobe_unregister_nosync(uprobe.uprobe, &uprobe.consumer);
+ uprobe_unregister_sync();
path_put(&uprobe.path);
uprobe.uprobe = NULL;
}
--
2.43.5
* [PATCH v2 6/6] uprobes: switch to RCU Tasks Trace flavor for better performance
From: Andrii Nakryiko @ 2024-08-08 0:21 UTC
To: linux-trace-kernel, peterz, oleg, rostedt, mhiramat
Cc: bpf, linux-kernel, jolsa, paulmck, Andrii Nakryiko
This patch switches uprobes' SRCU usage to the RCU Tasks Trace flavor, which
is optimized for more lightweight and quick readers (at the expense of
slower writers, which for uprobes is a fine tradeoff) and has better
performance and scalability with the number of CPUs.
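After the switch, the reader/writer pairing becomes (a sketch condensed
from the diff below):

	/* reader: lightweight, no per-CPU counter pair like SRCU */
	rcu_read_lock_trace();
	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node,
				rcu_read_lock_trace_held()) {
		if (uc->ret_handler)
			uc->ret_handler(uc, ri->func, regs);
	}
	rcu_read_unlock_trace();

	/* writer: free is deferred past all RCU Tasks Trace readers */
	call_rcu_tasks_trace(&uprobe->rcu, uprobe_free_rcu);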
Similarly to baseline vs SRCU, we've benchmarked the SRCU-based
implementation vs the RCU Tasks Trace implementation.
SRCU
====
uprobe-nop ( 1 cpus): 3.276 ± 0.005M/s ( 3.276M/s/cpu)
uprobe-nop ( 2 cpus): 4.125 ± 0.002M/s ( 2.063M/s/cpu)
uprobe-nop ( 4 cpus): 7.713 ± 0.002M/s ( 1.928M/s/cpu)
uprobe-nop ( 8 cpus): 8.097 ± 0.006M/s ( 1.012M/s/cpu)
uprobe-nop (16 cpus): 6.501 ± 0.056M/s ( 0.406M/s/cpu)
uprobe-nop (32 cpus): 4.398 ± 0.084M/s ( 0.137M/s/cpu)
uprobe-nop (64 cpus): 6.452 ± 0.000M/s ( 0.101M/s/cpu)
uretprobe-nop ( 1 cpus): 2.055 ± 0.001M/s ( 2.055M/s/cpu)
uretprobe-nop ( 2 cpus): 2.677 ± 0.000M/s ( 1.339M/s/cpu)
uretprobe-nop ( 4 cpus): 4.561 ± 0.003M/s ( 1.140M/s/cpu)
uretprobe-nop ( 8 cpus): 5.291 ± 0.002M/s ( 0.661M/s/cpu)
uretprobe-nop (16 cpus): 5.065 ± 0.019M/s ( 0.317M/s/cpu)
uretprobe-nop (32 cpus): 3.622 ± 0.003M/s ( 0.113M/s/cpu)
uretprobe-nop (64 cpus): 3.723 ± 0.002M/s ( 0.058M/s/cpu)
RCU Tasks Trace
===============
uprobe-nop ( 1 cpus): 3.396 ± 0.002M/s ( 3.396M/s/cpu)
uprobe-nop ( 2 cpus): 4.271 ± 0.006M/s ( 2.135M/s/cpu)
uprobe-nop ( 4 cpus): 8.499 ± 0.015M/s ( 2.125M/s/cpu)
uprobe-nop ( 8 cpus): 10.355 ± 0.028M/s ( 1.294M/s/cpu)
uprobe-nop (16 cpus): 7.615 ± 0.099M/s ( 0.476M/s/cpu)
uprobe-nop (32 cpus): 4.430 ± 0.007M/s ( 0.138M/s/cpu)
uprobe-nop (64 cpus): 6.887 ± 0.020M/s ( 0.108M/s/cpu)
uretprobe-nop ( 1 cpus): 2.174 ± 0.001M/s ( 2.174M/s/cpu)
uretprobe-nop ( 2 cpus): 2.853 ± 0.001M/s ( 1.426M/s/cpu)
uretprobe-nop ( 4 cpus): 4.913 ± 0.002M/s ( 1.228M/s/cpu)
uretprobe-nop ( 8 cpus): 5.883 ± 0.002M/s ( 0.735M/s/cpu)
uretprobe-nop (16 cpus): 5.147 ± 0.001M/s ( 0.322M/s/cpu)
uretprobe-nop (32 cpus): 3.738 ± 0.008M/s ( 0.117M/s/cpu)
uretprobe-nop (64 cpus): 4.397 ± 0.002M/s ( 0.069M/s/cpu)
Peak throughput for uprobes increases from 8 mln/s to 10.3 mln/s
(+28%!), and for uretprobes from 5.3 mln/s to 5.8 mln/s (+11%), as we
have more work to do on the uretprobe side.
Even single-threaded (no contention) performance is slightly better: 3.276
mln/s to 3.396 mln/s (+3.5%) for uprobes, and 2.055 mln/s to 2.174 mln/s
(+5.8%) for uretprobes.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/events/uprobes.c | 37 +++++++++++++++----------------------
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 419085741850..b3b98c635422 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -41,8 +41,6 @@ static struct rb_root uprobes_tree = RB_ROOT;
static DEFINE_RWLOCK(uprobes_treelock); /* serialize rbtree access */
-DEFINE_STATIC_SRCU(uprobes_srcu);
-
#define UPROBES_HASH_SZ 13
/* serialize uprobe->pending_list */
static struct mutex uprobes_mmap_mutex[UPROBES_HASH_SZ];
@@ -650,7 +648,7 @@ static void put_uprobe(struct uprobe *uprobe)
delayed_uprobe_remove(uprobe, NULL);
mutex_unlock(&delayed_uprobe_lock);
- call_srcu(&uprobes_srcu, &uprobe->rcu, uprobe_free_rcu);
+ call_rcu_tasks_trace(&uprobe->rcu, uprobe_free_rcu);
}
static __always_inline
@@ -704,7 +702,7 @@ static struct uprobe *find_uprobe_rcu(struct inode *inode, loff_t offset)
};
struct rb_node *node;
- lockdep_assert(srcu_read_lock_held(&uprobes_srcu));
+ lockdep_assert(rcu_read_lock_trace_held());
read_lock(&uprobes_treelock);
node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
@@ -908,8 +906,7 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
bool ret = false;
down_read(&uprobe->consumer_rwsem);
- list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
- srcu_read_lock_held(&uprobes_srcu)) {
+ list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
ret = consumer_filter(uc, mm);
if (ret)
break;
@@ -1132,7 +1129,7 @@ void uprobe_unregister_sync(void)
* unlucky enough caller can free consumer's memory and cause
* handler_chain() or handle_uretprobe_chain() to do a use-after-free.
*/
- synchronize_srcu(&uprobes_srcu);
+ synchronize_rcu_tasks_trace();
}
EXPORT_SYMBOL_GPL(uprobe_unregister_sync);
@@ -1216,19 +1213,18 @@ EXPORT_SYMBOL_GPL(uprobe_register);
int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
{
struct uprobe_consumer *con;
- int ret = -ENOENT, srcu_idx;
+ int ret = -ENOENT;
down_write(&uprobe->register_rwsem);
- srcu_idx = srcu_read_lock(&uprobes_srcu);
- list_for_each_entry_srcu(con, &uprobe->consumers, cons_node,
- srcu_read_lock_held(&uprobes_srcu)) {
+ rcu_read_lock_trace();
+ list_for_each_entry_rcu(con, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
if (con == uc) {
ret = register_for_each_vma(uprobe, add ? uc : NULL);
break;
}
}
- srcu_read_unlock(&uprobes_srcu, srcu_idx);
+ rcu_read_unlock_trace();
up_write(&uprobe->register_rwsem);
@@ -2099,8 +2095,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
current->utask->auprobe = &uprobe->arch;
- list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
- srcu_read_lock_held(&uprobes_srcu)) {
+ list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
int rc = 0;
if (uc->handler) {
@@ -2138,15 +2133,13 @@ handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
{
struct uprobe *uprobe = ri->uprobe;
struct uprobe_consumer *uc;
- int srcu_idx;
- srcu_idx = srcu_read_lock(&uprobes_srcu);
- list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
- srcu_read_lock_held(&uprobes_srcu)) {
+ rcu_read_lock_trace();
+ list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
if (uc->ret_handler)
uc->ret_handler(uc, ri->func, regs);
}
- srcu_read_unlock(&uprobes_srcu, srcu_idx);
+ rcu_read_unlock_trace();
}
static struct return_instance *find_next_ret_chain(struct return_instance *ri)
@@ -2231,13 +2224,13 @@ static void handle_swbp(struct pt_regs *regs)
{
struct uprobe *uprobe;
unsigned long bp_vaddr;
- int is_swbp, srcu_idx;
+ int is_swbp;
bp_vaddr = uprobe_get_swbp_addr(regs);
if (bp_vaddr == uprobe_get_trampoline_vaddr())
return uprobe_handle_trampoline(regs);
- srcu_idx = srcu_read_lock(&uprobes_srcu);
+ rcu_read_lock_trace();
uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp);
if (!uprobe) {
@@ -2295,7 +2288,7 @@ static void handle_swbp(struct pt_regs *regs)
out:
/* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
- srcu_read_unlock(&uprobes_srcu, srcu_idx);
+ rcu_read_unlock_trace();
}
/*
--
2.43.5