The Linux Kernel Mailing List
* [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch
@ 2026-05-12  8:20 Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 1/4] LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration Tiezhu Yang
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-12  8:20 UTC (permalink / raw)
  To: Huacai Chen; +Cc: loongarch, linux-kernel

This series addresses ftrace and kprobes issues observed under heavy
tracing loads.

Tiezhu Yang (4):
  LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration
  LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions
  LoongArch: kprobes: Fix single-stepping instruction slot preparation
  LoongArch: kprobes: Fix handling of fatal unrecoverable recursions

 arch/loongarch/kernel/ftrace_dyn.c | 23 +++++++++++++++++++++++
 arch/loongarch/kernel/kprobes.c    | 19 +++++++++++--------
 2 files changed, 34 insertions(+), 8 deletions(-)

-- 
2.42.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v1 1/4] LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration
  2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
@ 2026-05-12  8:20 ` Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 2/4] LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions Tiezhu Yang
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-12  8:20 UTC (permalink / raw)
  To: Huacai Chen; +Cc: loongarch, linux-kernel

When probing kernel functions via ftrace, a scheduling event occurring
inside the kprobe handler can lead to task migration. If the executing
task is migrated to another CPU after setting per-CPU "current_kprobe"
but before clearing it, the original CPU is left with a stale state.

This stale state causes the original CPU to remain permanently "busy"
from the kprobe perspective, as its per-CPU "current_kprobe" is never
cleared thereafter. As a result, subsequent kprobes triggered on that
CPU are incorrectly treated as nested probes and skipped (increasing
nmissed count), leading to functional failures on the affected CPU.

Fix this by adding a self-reset mechanism in kprobe_ftrace_handler().
When a kprobe is triggered while the "current_kprobe" is already set,
verify whether the state is stale by checking:

(1) In task context (not a legitimate interrupt nest).
(2) A different kprobe (not a recursive trigger on the same probe).
(3) In an active or step-done (SSDONE) state.

If these conditions are met, it indicates that the previous kprobe
execution was scheduled and migrated. Reset the per-CPU state to
recover the CPU's kprobe functionality, allowing the current probe
to be processed normally.

This acts as a defensive mechanism to recover the CPU's kprobe state
machine from inconsistent states caused by unexpected task migrations
during kprobe execution.
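The three-way check above can be modeled outside the kernel as a small
decision function. The following is a hypothetical user-space sketch, not
kernel code: `cpu_state`, `probe_status`, and `should_reset_stale()` are
illustrative names standing in for the per-CPU "current_kprobe" pointer and
kcb->kprobe_status.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's kprobe status values. */
enum probe_status { STATUS_NONE, STATUS_HIT_ACTIVE, STATUS_HIT_SSDONE, STATUS_REENTER };

struct cpu_state {
	const void *current_kprobe;  /* per-CPU "current_kprobe", NULL if idle */
	enum probe_status status;    /* per-CPU kcb->kprobe_status */
};

/*
 * Mirror of the three conditions in the patch: reset only when we are in
 * task context, the running probe differs from the incoming one, and the
 * recorded status is active or step-done.  Any other combination may be a
 * legitimate nested hit and must be left alone.
 */
static bool should_reset_stale(const struct cpu_state *cs,
			       const void *new_probe, bool in_task_ctx)
{
	if (!cs->current_kprobe)
		return false;  /* nothing running, nothing stale */

	return in_task_ctx &&
	       cs->current_kprobe != new_probe &&
	       (cs->status == STATUS_HIT_ACTIVE || cs->status == STATUS_HIT_SSDONE);
}
```

Note that a recursive hit on the same probe, or a hit from interrupt
context, deliberately fails the check and is still handled as a nest.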

Fixes: 09e679c28a4d ("LoongArch: Add kprobes on ftrace support")
Reported-by: Hui Li <lihui@loongson.cn>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 arch/loongarch/kernel/ftrace_dyn.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/loongarch/kernel/ftrace_dyn.c b/arch/loongarch/kernel/ftrace_dyn.c
index d5d81d74034c..03d303d67f18 100644
--- a/arch/loongarch/kernel/ftrace_dyn.c
+++ b/arch/loongarch/kernel/ftrace_dyn.c
@@ -310,6 +310,29 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		goto out;
 
 	kcb = get_kprobe_ctlblk();
+
+	if (kprobe_running()) {
+		/*
+		 * If a previous kprobe handler was interrupted by a scheduling event
+		 * and the task migrated to another CPU, the "current_kprobe" state
+		 * might be left stale on this CPU.
+		 *
+		 * Reset the stale state here to allow the current probe to proceed
+		 * normally instead of being falsely treated as a nested probe if:
+		 *
+		 * (1) In task context (not a legitimate interrupt nest).
+		 * (2) A different kprobe (not a recursive trigger on the same probe).
+		 * (3) In an active or step-done (SSDONE) state.
+		 *
+		 * This acts as a defensive mechanism to recover the CPU's kprobe state
+		 * machine from inconsistent states caused by unexpected task migrations.
+		 */
+		if (in_task() && kprobe_running() != p &&
+		    (kcb->kprobe_status == KPROBE_HIT_ACTIVE ||
+		     kcb->kprobe_status == KPROBE_HIT_SSDONE))
+			reset_current_kprobe();
+	}
+
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(p);
 	} else {
-- 
2.42.0



* [PATCH v1 2/4] LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions
  2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 1/4] LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration Tiezhu Yang
@ 2026-05-12  8:20 ` Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation Tiezhu Yang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-12  8:20 UTC (permalink / raw)
  To: Huacai Chen; +Cc: loongarch, linux-kernel

On SMP systems, kprobe handlers would occasionally fail to execute on
certain CPU cores. The issue is hard to reproduce and typically occurs
randomly under high system load.

The root cause is a software-side instruction hazard. According to the
LoongArch Reference Manual, while cache coherency is maintained by
hardware, software must explicitly execute the "IBAR" instruction to
ensure the instruction fetch unit (IFU) observes the effects of recent
stores.

The current arch_arm_kprobe() and arch_disarm_kprobe() only execute the
"IBAR" barrier (via flush_insn_slot -> local_flush_icache_range) on the
local CPU. This leaves a vulnerable window where remote CPU cores may
continue executing stale instructions from their pipelines or prefetch
buffers, as they have not executed an "IBAR" since the code modification.

Switch to larch_insn_text_copy() to fix this:
1. Synchronization: It uses stop_machine_cpuslocked() to synchronize all
   online CPUs, ensuring no CPU is executing the target code area during
   modification.
2. Visibility: By passing cpu_online_mask to stop_machine_cpuslocked(),
   the callback text_copy_cb() is executed on all online cores. Each CPU
   core invokes local_flush_icache_range() to execute "IBAR", clearing
   instruction hazards system-wide and ensuring the "break" instruction
   is visible to the fetch units of all cores.
3. Robustness: It properly manages memory write permissions (ROX/RW) for
   the kernel text segment during patching, ensuring compatibility with
   CONFIG_STRICT_KERNEL_RWX.

Fixes: 6d4cc40fb5f5 ("LoongArch: Add kprobes support")
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 arch/loongarch/kernel/kprobes.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
index 8ba391cfabb0..04b5b05715cd 100644
--- a/arch/loongarch/kernel/kprobes.c
+++ b/arch/loongarch/kernel/kprobes.c
@@ -60,16 +60,18 @@ NOKPROBE_SYMBOL(arch_prepare_kprobe);
 /* Install breakpoint in text */
 void arch_arm_kprobe(struct kprobe *p)
 {
-	*p->addr = KPROBE_BP_INSN;
-	flush_insn_slot(p);
+	u32 insn = KPROBE_BP_INSN;
+
+	larch_insn_text_copy(p->addr, &insn, LOONGARCH_INSN_SIZE);
 }
 NOKPROBE_SYMBOL(arch_arm_kprobe);
 
 /* Remove breakpoint from text */
 void arch_disarm_kprobe(struct kprobe *p)
 {
-	*p->addr = p->opcode;
-	flush_insn_slot(p);
+	u32 insn = p->opcode;
+
+	larch_insn_text_copy(p->addr, &insn, LOONGARCH_INSN_SIZE);
 }
 NOKPROBE_SYMBOL(arch_disarm_kprobe);
 
-- 
2.42.0



* [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation
  2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 1/4] LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration Tiezhu Yang
  2026-05-12  8:20 ` [PATCH v1 2/4] LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions Tiezhu Yang
@ 2026-05-12  8:20 ` Tiezhu Yang
  2026-05-13  6:15   ` Lisa Robinson
  2026-05-14 13:20   ` WANG Rui
  2026-05-12  8:20 ` [PATCH v1 4/4] LoongArch: kprobes: Fix handling of fatal unrecoverable recursions Tiezhu Yang
  2026-05-15  2:52 ` [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
  4 siblings, 2 replies; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-12  8:20 UTC (permalink / raw)
  To: Huacai Chen; +Cc: loongarch, linux-kernel

In arch_prepare_ss_slot(), the original code directly assigns instructions
to the buffer using raw memory stores. This approach has two significant
drawbacks on LoongArch:

1. Consistency: It skips the necessary instruction barrier synchronization
   required by the architecture. Without a local barrier, the instruction
   fetch unit might not observe the newly prepared instructions in the
   single-step slot, even on the local CPU.
2. Atomicity: Raw memory assignments do not guarantee that the instruction
   unit sees a complete instruction at all times, which is critical for
   the integrity of single-step execution.

Like RISC-V and ARM64, use larch_insn_patch_text() for slot preparation to
ensure atomic instruction writes and proper local instruction barrier
execution.

Note that global stop_machine synchronization is not required here because
the single-step slot is executed only after a breakpoint exception, which
inherently provides a context synchronization event for the CPU to observe
the new instructions.

Fixes: 6d4cc40fb5f5 ("LoongArch: Add kprobes support")
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 arch/loongarch/kernel/kprobes.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
index 04b5b05715cd..8e1b7a87c897 100644
--- a/arch/loongarch/kernel/kprobes.c
+++ b/arch/loongarch/kernel/kprobes.c
@@ -12,8 +12,8 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
 static void arch_prepare_ss_slot(struct kprobe *p)
 {
-	p->ainsn.insn[0] = *p->addr;
-	p->ainsn.insn[1] = KPROBE_SSTEPBP_INSN;
+	larch_insn_patch_text(p->ainsn.insn, *p->addr);
+	larch_insn_patch_text(p->ainsn.insn + 1, KPROBE_SSTEPBP_INSN);
 	p->ainsn.restore = (unsigned long)p->addr + LOONGARCH_INSN_SIZE;
 }
 NOKPROBE_SYMBOL(arch_prepare_ss_slot);
-- 
2.42.0



* [PATCH v1 4/4] LoongArch: kprobes: Fix handling of fatal unrecoverable recursions
  2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
                   ` (2 preceding siblings ...)
  2026-05-12  8:20 ` [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation Tiezhu Yang
@ 2026-05-12  8:20 ` Tiezhu Yang
  2026-05-15  2:52 ` [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
  4 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-12  8:20 UTC (permalink / raw)
  To: Huacai Chen; +Cc: loongarch, linux-kernel

KPROBE_HIT_SS and KPROBE_REENTER are two types of fatal recursions
that cannot be safely recovered in kprobes.

KPROBE_HIT_SS means that a kprobe is hit during single-stepping.
At this point, the architecture-specific single-step context is
already active. Nested single-stepping would corrupt the state,
as the kprobe control block (kcb) and hardware registers cannot
safely store multiple levels of stepping state.

KPROBE_REENTER means that a third-level recursion occurs when a
probe is hit while the system is already handling a nested probe
(second-level). The kcb provides only a single slot (prev_kprobe)
to back up the state. When a third probe is hit, there is no more
space to save the state without corrupting the first-level backup.
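The single-backup-slot limit can be modeled as a tiny state machine. This
is a hypothetical sketch, not the kernel structure: `ctlblk`, `current`,
`prev`, and `enter_probe()` are illustrative names for the control block,
the running probe, and the lone prev_kprobe slot.

```c
#include <stdbool.h>
#include <stddef.h>

/* One running probe plus exactly one saved outer probe, as in the kcb. */
struct ctlblk {
	const void *current;  /* probe being handled now */
	const void *prev;     /* the single saved outer probe */
};

/* Returns false when entering would need a second backup slot. */
static bool enter_probe(struct ctlblk *kcb, const void *p)
{
	if (!kcb->current) {       /* level 1: nothing to save */
		kcb->current = p;
		return true;
	}
	if (!kcb->prev) {          /* level 2: save the outer probe */
		kcb->prev = kcb->current;
		kcb->current = p;
		return true;
	}
	return false;              /* level 3: no space left, fatal */
}
```

A third-level entry fails precisely because saving it would clobber the
first-level backup, which is why the patch treats it as unrecoverable.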

Kprobes work by replacing instructions with breakpoints. In order
to execute the original instruction and continue, it must be moved
to a temporary "single-step" slot. Since there is no backup space
left to set up this slot safely, the CPU would be forced to return
to the same original breakpoint address, triggering an endless loop.

Currently, the code only prints a warning and returns. This leads
to an infinite re-entry loop as the CPU repeatedly hits the same
trap and a "stuck" CPU core because preemption was disabled at the
start of the handler and never re-enabled in this early return path.

Fix the logic by:
1. Merging KPROBE_HIT_SS and KPROBE_REENTER cases, as both represent
   fatal recursions that cannot be safely recovered.
2. Balancing the preemption count with preempt_enable_no_resched() to
   maintain preemption balance before the system halts.
3. Replacing WARN_ON_ONCE() with BUG() to terminate the system. This
   aligns LoongArch with other architectures (x86, arm64, riscv) and
   prevents stack overflow while providing diagnostic information.

Fixes: 6d4cc40fb5f5 ("LoongArch: Add kprobes support")
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 arch/loongarch/kernel/kprobes.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
index 8e1b7a87c897..e6395a9dbaec 100644
--- a/arch/loongarch/kernel/kprobes.c
+++ b/arch/loongarch/kernel/kprobes.c
@@ -186,16 +186,17 @@ static bool reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
 			   struct kprobe_ctlblk *kcb)
 {
 	switch (kcb->kprobe_status) {
-	case KPROBE_HIT_SS:
 	case KPROBE_HIT_SSDONE:
 	case KPROBE_HIT_ACTIVE:
 		kprobes_inc_nmissed_count(p);
 		setup_singlestep(p, regs, kcb, 1);
 		break;
+	case KPROBE_HIT_SS:
 	case KPROBE_REENTER:
+		preempt_enable_no_resched();
 		pr_warn("Failed to recover from reentered kprobes.\n");
 		dump_kprobe(p);
-		WARN_ON_ONCE(1);
+		BUG();
 		break;
 	default:
 		WARN_ON(1);
-- 
2.42.0



* Re: [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation
  2026-05-12  8:20 ` [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation Tiezhu Yang
@ 2026-05-13  6:15   ` Lisa Robinson
  2026-05-14 13:20   ` WANG Rui
  1 sibling, 0 replies; 9+ messages in thread
From: Lisa Robinson @ 2026-05-13  6:15 UTC (permalink / raw)
  To: yangtiezhu; +Cc: chenhuacai, linux-kernel, loongarch, Lisa Robinson

> In arch_prepare_ss_slot(), the original code directly assigns instructions
> to the buffer using raw memory stores. This approach has two significant
> drawbacks on LoongArch:
> 
> 1. Consistency: It skips the necessary instruction barrier synchronization
>    required by the architecture. Without a local barrier, the instruction
>    fetch unit might not observe the newly prepared instructions in the
>    single-step slot, even on the local CPU.
> 2. Atomicity: Raw memory assignments do not guarantee that the instruction
>    unit sees a complete instruction at all times, which is critical for
>    the integrity of single-step execution.
> 
> Like RISC-V and ARM64, use larch_insn_patch_text() for slot preparation to
> ensure the atomic instruction writes and proper local instruction barrier
> execution.
> 
> Note that global stop_machine synchronization is not required here because
> the single-step slot is executed only after a breakpoint exception, which
> inherently provides a context synchronization event for the CPU to observe
> the new instructions.
> 
> Fixes: 6d4cc40fb5f5 ("LoongArch: Add kprobes support")
> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
> ---
>  arch/loongarch/kernel/kprobes.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
> index 04b5b05715cd..8e1b7a87c897 100644
> --- a/arch/loongarch/kernel/kprobes.c
> +++ b/arch/loongarch/kernel/kprobes.c
> @@ -12,8 +12,8 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>  
>  static void arch_prepare_ss_slot(struct kprobe *p)
>  {
> -	p->ainsn.insn[0] = *p->addr;
> -	p->ainsn.insn[1] = KPROBE_SSTEPBP_INSN;
> +	larch_insn_patch_text(p->ainsn.insn, *p->addr);
> +	larch_insn_patch_text(p->ainsn.insn + 1, KPROBE_SSTEPBP_INSN);

This instruction sequence is executed only after arch_arm_kprobe(), so any
instruction hazards are prevented.

--
Lisa

>  	p->ainsn.restore = (unsigned long)p->addr + LOONGARCH_INSN_SIZE;
>  }
>  NOKPROBE_SYMBOL(arch_prepare_ss_slot);
> -- 
> 2.42.0


* Re: [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation
  2026-05-12  8:20 ` [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation Tiezhu Yang
  2026-05-13  6:15   ` Lisa Robinson
@ 2026-05-14 13:20   ` WANG Rui
  1 sibling, 0 replies; 9+ messages in thread
From: WANG Rui @ 2026-05-14 13:20 UTC (permalink / raw)
  To: Tiezhu Yang; +Cc: Huacai Chen, loongarch, linux-kernel

Hi Tiezhu,

On Tue, May 12, 2026 at 4:26 PM Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
> In arch_prepare_ss_slot(), the original code directly assigns instructions
> to the buffer using raw memory stores. This approach has two significant
> drawbacks on LoongArch:
>
> 1. Consistency: It skips the necessary instruction barrier synchronization
>    required by the architecture. Without a local barrier, the instruction
>    fetch unit might not observe the newly prepared instructions in the
>    single-step slot, even on the local CPU.
> 2. Atomicity: Raw memory assignments do not guarantee that the instruction
>    unit sees a complete instruction at all times, which is critical for
>    the integrity of single-step execution.

Does this need to be atomic?

Thanks,
Rui

>
> Like RISC-V and ARM64, use larch_insn_patch_text() for slot preparation to
> ensure the atomic instruction writes and proper local instruction barrier
> execution.
>
> Note that global stop_machine synchronization is not required here because
> the single-step slot is executed only after a breakpoint exception, which
> inherently provides a context synchronization event for the CPU to observe
> the new instructions.
>
> Fixes: 6d4cc40fb5f5 ("LoongArch: Add kprobes support")
> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
> ---
>  arch/loongarch/kernel/kprobes.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
> index 04b5b05715cd..8e1b7a87c897 100644
> --- a/arch/loongarch/kernel/kprobes.c
> +++ b/arch/loongarch/kernel/kprobes.c
> @@ -12,8 +12,8 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>
>  static void arch_prepare_ss_slot(struct kprobe *p)
>  {
> -       p->ainsn.insn[0] = *p->addr;
> -       p->ainsn.insn[1] = KPROBE_SSTEPBP_INSN;
> +       larch_insn_patch_text(p->ainsn.insn, *p->addr);
> +       larch_insn_patch_text(p->ainsn.insn + 1, KPROBE_SSTEPBP_INSN);
>         p->ainsn.restore = (unsigned long)p->addr + LOONGARCH_INSN_SIZE;
>  }
>  NOKPROBE_SYMBOL(arch_prepare_ss_slot);
> --
> 2.42.0
>
>


* Re: [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch
  2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
                   ` (3 preceding siblings ...)
  2026-05-12  8:20 ` [PATCH v1 4/4] LoongArch: kprobes: Fix handling of fatal unrecoverable recursions Tiezhu Yang
@ 2026-05-15  2:52 ` Tiezhu Yang
  2026-05-15  4:29   ` Huacai Chen
  4 siblings, 1 reply; 9+ messages in thread
From: Tiezhu Yang @ 2026-05-15  2:52 UTC (permalink / raw)
  To: Huacai Chen, Lisa Robinson, WANG Rui; +Cc: loongarch, linux-kernel

On 2026/5/12 4:20 PM, Tiezhu Yang wrote:
> This series addresses ftrace and kprobes issues observed under heavy
> tracing loads.
> 
> Tiezhu Yang (4):
>    LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration
>    LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions
>    LoongArch: kprobes: Fix single-stepping instruction slot preparation
>    LoongArch: kprobes: Fix handling of fatal unrecoverable recursions

Hi Lisa and Rui,

You are right, please ignore patch #3.

Hi Huacai,

Please disregard patch #1 and focus on patches #2 and #4.

After thinking it through, it is better to avoid calling functions that
might sleep in the probe handler. We have modified the test module in a
proper way, and there has been no problem since then.

The right way is to prevent the problem at the root instead of trying to
fix it after the fact.

Thanks,
Tiezhu



* Re: [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch
  2026-05-15  2:52 ` [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
@ 2026-05-15  4:29   ` Huacai Chen
  0 siblings, 0 replies; 9+ messages in thread
From: Huacai Chen @ 2026-05-15  4:29 UTC (permalink / raw)
  To: Tiezhu Yang; +Cc: Lisa Robinson, WANG Rui, loongarch, linux-kernel

On Fri, May 15, 2026 at 10:52 AM Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
> On 2026/5/12 4:20 PM, Tiezhu Yang wrote:
> > This series addresses ftrace and kprobes issues observed under heavy
> > tracing loads.
> >
> > Tiezhu Yang (4):
> >    LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration
> >    LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions
> >    LoongArch: kprobes: Fix single-stepping instruction slot preparation
> >    LoongArch: kprobes: Fix handling of fatal unrecoverable recursions
>
> Hi Lisa and Rui,
>
> You are right, please ignore the patch #3.
I think patch #3 is OK, but you need to remove the description about atomicity.

Huacai

>
> Hi Huacai,
>
> Please disregard the patch #1 and focus on the patch #2 and #4.
>
> After thinking it through, it is better to avoid calling functions that
> might sleep in the probe handler. We have modified the test module with
> a proper way, there is no problem thereafter.
>
> The right way is to prevent the problem at the root instead of trying to
> fix it after the fact.
>
> Thanks,
> Tiezhu
>
>


end of thread, other threads:[~2026-05-15  4:29 UTC | newest]

Thread overview: 9+ messages
2026-05-12  8:20 [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
2026-05-12  8:20 ` [PATCH v1 1/4] LoongArch: ftrace: Fix stale per-CPU kprobe state after task migration Tiezhu Yang
2026-05-12  8:20 ` [PATCH v1 2/4] LoongArch: kprobes: Use larch_insn_text_copy() to patch instructions Tiezhu Yang
2026-05-12  8:20 ` [PATCH v1 3/4] LoongArch: kprobes: Fix single-stepping instruction slot preparation Tiezhu Yang
2026-05-13  6:15   ` Lisa Robinson
2026-05-14 13:20   ` WANG Rui
2026-05-12  8:20 ` [PATCH v1 4/4] LoongArch: kprobes: Fix handling of fatal unrecoverable recursions Tiezhu Yang
2026-05-15  2:52 ` [PATCH v1 0/4] Fix ftrace and kprobes issues for LoongArch Tiezhu Yang
2026-05-15  4:29   ` Huacai Chen
