* [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker
2025-10-29 4:02 [PATCH] " Masami Hiramatsu
@ 2025-10-30 9:36 ` hongao
0 siblings, 0 replies; 3+ messages in thread
From: hongao @ 2025-10-30 9:36 UTC (permalink / raw)
To: mhiramat
Cc: naveen, anil.s.keshavamurthy, davem, linux-kernel,
linux-trace-kernel, hongao
Thanks for the review.
The freeing_list cleanup now retries optimizing any sibling probe that was
deferred while its aggregator was being torn down. Track a pending
re-optimization flag in struct optimized_kprobe so __disarm_kprobe() can
defer the retry until kprobe_optimizer() finishes disarming.
Signed-off-by: hongao <hongao@uniontech.com>
---
Changes since v1:
- Replace `kprobe_opcode_t *pending_reopt_addr` with `bool reopt_unblocked_probes`
in `struct optimized_kprobe` to avoid storing an address and simplify logic.
- Use `op->kp.addr` when looking up the sibling optimized probe instead of
keeping a separate stored address.
- Defer re-optimization by setting/clearing `op->reopt_unblocked_probes` in
`__disarm_kprobe()` / consuming it in `do_free_cleaned_kprobes()` so the
retry runs after the worker finishes disarming.
---
include/linux/kprobes.h | 1 +
kernel/kprobes.c | 28 ++++++++++++++++++++++------
2 files changed, 23 insertions(+), 6 deletions(-)
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 8c4f3bb24..4f49925a4 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -338,6 +338,7 @@ DEFINE_INSN_CACHE_OPS(insn);
struct optimized_kprobe {
struct kprobe kp;
struct list_head list; /* list for optimizing queue */
+ bool reopt_unblocked_probes;
struct arch_optimized_insn optinsn;
};
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index da59c68df..799542dff 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -514,6 +514,7 @@ static LIST_HEAD(freeing_list);
static void kprobe_optimizer(struct work_struct *work);
static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
+static void optimize_kprobe(struct kprobe *p);
#define OPTIMIZE_DELAY 5
/*
@@ -591,6 +592,21 @@ static void do_free_cleaned_kprobes(void)
*/
continue;
}
+ if (op->reopt_unblocked_probes) {
+ struct kprobe *unblocked;
+
+ /*
+ * The aggregator was holding back another probe while it sat on the
+ * unoptimizing/freeing lists. Now that the aggregator has been fully
+ * reverted we can safely retry the optimization of that sibling.
+ */
+
+ unblocked = get_optimized_kprobe(op->kp.addr);
+ if (unlikely(unblocked))
+ optimize_kprobe(unblocked);
+ op->reopt_unblocked_probes = false;
+ }
+
free_aggr_kprobe(&op->kp);
}
}
@@ -1009,13 +1025,13 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
_p = get_optimized_kprobe(p->addr);
if (unlikely(_p) && reopt)
optimize_kprobe(_p);
+ } else if (reopt && kprobe_aggrprobe(p)) {
+ struct optimized_kprobe *op =
+ container_of(p, struct optimized_kprobe, kp);
+
+ /* Defer the re-optimization until the worker finishes disarming. */
+ op->reopt_unblocked_probes = true;
}
- /*
- * TODO: Since unoptimization and real disarming will be done by
- * the worker thread, we can not check whether another probe are
- * unoptimized because of this probe here. It should be re-optimized
- * by the worker thread.
- */
}
#else /* !CONFIG_OPTPROBES */
--
2.47.2
* [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker
@ 2025-12-10 3:33 hongao
2026-01-05 4:58 ` Masami Hiramatsu
0 siblings, 1 reply; 3+ messages in thread
From: hongao @ 2025-12-10 3:33 UTC (permalink / raw)
To: naveen, anil.s.keshavamurthy, davem, mhiramat
Cc: linux-kernel, linux-trace-kernel, hongao
The freeing_list cleanup now retries optimizing any sibling probe that was
deferred while its aggregator was being torn down. Track a pending
re-optimization flag in struct optimized_kprobe so __disarm_kprobe() can
defer the retry until kprobe_optimizer() finishes disarming.
Signed-off-by: hongao <hongao@uniontech.com>
---
Changes since v1:
- Replace `kprobe_opcode_t *pending_reopt_addr` with `bool reopt_unblocked_probes`
in `struct optimized_kprobe` to avoid storing an address and simplify logic.
- Use `op->kp.addr` when looking up the sibling optimized probe instead of
keeping a separate stored address.
- Defer re-optimization by setting/clearing `op->reopt_unblocked_probes` in
`__disarm_kprobe()` / consuming it in `do_free_cleaned_kprobes()` so the
retry runs after the worker finishes disarming.
- Link to v1: https://lore.kernel.org/all/2B0BC73E9D190B7B+20251027130535.2296913-1-hongao@uniontech.com/
---
include/linux/kprobes.h | 1 +
kernel/kprobes.c | 28 ++++++++++++++++++++++------
2 files changed, 23 insertions(+), 6 deletions(-)
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 8c4f3bb24..4f49925a4 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -338,6 +338,7 @@ DEFINE_INSN_CACHE_OPS(insn);
struct optimized_kprobe {
struct kprobe kp;
struct list_head list; /* list for optimizing queue */
+ bool reopt_unblocked_probes;
struct arch_optimized_insn optinsn;
};
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index da59c68df..799542dff 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -514,6 +514,7 @@ static LIST_HEAD(freeing_list);
static void kprobe_optimizer(struct work_struct *work);
static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
+static void optimize_kprobe(struct kprobe *p);
#define OPTIMIZE_DELAY 5
/*
@@ -591,6 +592,21 @@ static void do_free_cleaned_kprobes(void)
*/
continue;
}
+ if (op->reopt_unblocked_probes) {
+ struct kprobe *unblocked;
+
+ /*
+ * The aggregator was holding back another probe while it sat on the
+ * unoptimizing/freeing lists. Now that the aggregator has been fully
+ * reverted we can safely retry the optimization of that sibling.
+ */
+
+ unblocked = get_optimized_kprobe(op->kp.addr);
+ if (unlikely(unblocked))
+ optimize_kprobe(unblocked);
+ op->reopt_unblocked_probes = false;
+ }
+
free_aggr_kprobe(&op->kp);
}
}
@@ -1009,13 +1025,13 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
_p = get_optimized_kprobe(p->addr);
if (unlikely(_p) && reopt)
optimize_kprobe(_p);
+ } else if (reopt && kprobe_aggrprobe(p)) {
+ struct optimized_kprobe *op =
+ container_of(p, struct optimized_kprobe, kp);
+
+ /* Defer the re-optimization until the worker finishes disarming. */
+ op->reopt_unblocked_probes = true;
}
- /*
- * TODO: Since unoptimization and real disarming will be done by
- * the worker thread, we can not check whether another probe are
- * unoptimized because of this probe here. It should be re-optimized
- * by the worker thread.
- */
}
#else /* !CONFIG_OPTPROBES */
--
2.47.2
* Re: [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker
2025-12-10 3:33 [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker hongao
@ 2026-01-05 4:58 ` Masami Hiramatsu
0 siblings, 0 replies; 3+ messages in thread
From: Masami Hiramatsu @ 2026-01-05 4:58 UTC (permalink / raw)
To: hongao
Cc: naveen, anil.s.keshavamurthy, davem, linux-kernel,
linux-trace-kernel
Hi Hongao,
Thanks for updating. After a more detailed review, I think we don't need a
new boolean flag for this. Since the "queued unused probe" (which is
eventually handled by do_free_cleaned_kprobes()) is always disarmed, we
only need to check all probes and re-optimize any sibling probes in
do_free_cleaned_kprobes().
On Wed, 10 Dec 2025 11:33:21 +0800
hongao <hongao@uniontech.com> wrote:
> The freeing_list cleanup now retries optimizing any sibling probe that was
> deferred while its aggregator was being torn down. Track a pending
> re-optimization flag in struct optimized_kprobe so __disarm_kprobe() can
> defer the retry until kprobe_optimizer() finishes disarming.
>
> Signed-off-by: hongao <hongao@uniontech.com>
> ---
> Changes since v1:
> - Replace `kprobe_opcode_t *pending_reopt_addr` with `bool reopt_unblocked_probes`
> in `struct optimized_kprobe` to avoid storing an address and simplify logic.
> - Use `op->kp.addr` when looking up the sibling optimized probe instead of
> keeping a separate stored address.
> - Defer re-optimization by setting/clearing `op->reopt_unblocked_probes` in
> `__disarm_kprobe()` / consuming it in `do_free_cleaned_kprobes()` so the
> retry runs after the worker finishes disarming.
> - Link to v1: https://lore.kernel.org/all/2B0BC73E9D190B7B+20251027130535.2296913-1-hongao@uniontech.com/
> ---
> include/linux/kprobes.h | 1 +
> kernel/kprobes.c | 28 ++++++++++++++++++++++------
> 2 files changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 8c4f3bb24..4f49925a4 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -338,6 +338,7 @@ DEFINE_INSN_CACHE_OPS(insn);
> struct optimized_kprobe {
> struct kprobe kp;
> struct list_head list; /* list for optimizing queue */
> + bool reopt_unblocked_probes;
> struct arch_optimized_insn optinsn;
> };
This is not needed.
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index da59c68df..799542dff 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -514,6 +514,7 @@ static LIST_HEAD(freeing_list);
>
> static void kprobe_optimizer(struct work_struct *work);
> static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
> +static void optimize_kprobe(struct kprobe *p);
> #define OPTIMIZE_DELAY 5
>
> /*
> @@ -591,6 +592,21 @@ static void do_free_cleaned_kprobes(void)
> */
> continue;
> }
> + if (op->reopt_unblocked_probes) {
> + struct kprobe *unblocked;
> +
> + /*
> + * The aggregator was holding back another probe while it sat on the
> + * unoptimizing/freeing lists. Now that the aggregator has been fully
> + * reverted we can safely retry the optimization of that sibling.
> + */
> +
> + unblocked = get_optimized_kprobe(op->kp.addr);
> + if (unlikely(unblocked))
> + optimize_kprobe(unblocked);
> + op->reopt_unblocked_probes = false;
> + }
This is what we need (but we do not need to check/update reopt_unblocked_probes).
> +
> free_aggr_kprobe(&op->kp);
> }
> }
> @@ -1009,13 +1025,13 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
> _p = get_optimized_kprobe(p->addr);
> if (unlikely(_p) && reopt)
> optimize_kprobe(_p);
> + } else if (reopt && kprobe_aggrprobe(p)) {
> + struct optimized_kprobe *op =
> + container_of(p, struct optimized_kprobe, kp);
> +
> + /* Defer the re-optimization until the worker finishes disarming. */
> + op->reopt_unblocked_probes = true;
Do not need this.
> }
> - /*
> - * TODO: Since unoptimization and real disarming will be done by
> - * the worker thread, we can not check whether another probe are
> - * unoptimized because of this probe here. It should be re-optimized
> - * by the worker thread.
> - */
Only remove this comment.
Thank you,
> }
>
> #else /* !CONFIG_OPTPROBES */
> --
> 2.47.2
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-12-10 3:33 [PATCH v2 1/1] kprobes: retry pending optprobe after freeing blocker hongao
2026-01-05 4:58 ` Masami Hiramatsu
-- strict thread matches above, loose matches on Subject: below --
2025-10-29 4:02 [PATCH] " Masami Hiramatsu
2025-10-30 9:36 ` [PATCH v2 1/1] " hongao