* [PATCH v2 1/4] LoongArch: ftrace: Refactor register restoration in ftrace_common_return
2025-12-12 9:10 [PATCH v2 0/4] Fix the failure issue of the module_attach test case Chenghao Duan
@ 2025-12-12 9:11 ` Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 2/4] ftrace: samples: Adjust register stack restore order in direct call trampolines Chenghao Duan
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Chenghao Duan @ 2025-12-12 9:11 UTC (permalink / raw)
To: yangtiezhu, hengqi.chen, chenhuacai
Cc: kernel, zhangtianyang, masahiroy, linux-kernel, loongarch, bpf,
duanchenghao, youling.tang, jianghaoran, vincent.mc.li
Refactor the register restoration sequence in the ftrace_common_return
function to clearly distinguish between the logic of normal returns and
direct call returns in function tracing scenarios. The logic is as
follows:
1. In the case of a normal return, the execution flow returns to the
traced function, and ftrace must ensure that the register data is
consistent with the state when the function was entered.
ra = parent return address; t0 = traced function return address.
2. In the case of a direct call return, the execution flow jumps to the
custom trampoline function, and ftrace must ensure that the register
data is consistent with the state when ftrace was entered.
ra = traced function return address; t0 = parent return address.
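The two register contracts above can be sketched as a small C model (purely illustrative: pt_r1/pt_era stand in for the PT_R1 and PT_ERA stack slots, not the kernel's actual pt_regs layout, and the addresses are arbitrary test values):

```c
#include <assert.h>
#include <stdint.h>

/* Saved ftrace frame (simplified): PT_R1 = parent return address,
 * PT_ERA = traced function return address. */
struct saved_regs { uintptr_t pt_r1, pt_era; };
struct live_regs { uintptr_t ra, t0; };

/* Normal return: state as when the traced function was entered;
 * "jr t0" then resumes the traced function. */
struct live_regs restore_normal(struct saved_regs s)
{
	return (struct live_regs){ .ra = s.pt_r1, .t0 = s.pt_era };
}

/* Direct-call return: state as when ftrace was entered;
 * "jr t1" then enters the custom trampoline. */
struct live_regs restore_direct(struct saved_regs s)
{
	return (struct live_regs){ .ra = s.pt_era, .t0 = s.pt_r1 };
}
```

The model only captures which saved slot ends up in which register on each path, which is exactly what this patch changes.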
Fixes: 9cdc3b6a299c ("LoongArch: ftrace: Add direct call support")
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
arch/loongarch/kernel/mcount_dyn.S | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/loongarch/kernel/mcount_dyn.S b/arch/loongarch/kernel/mcount_dyn.S
index d6b474ad1d5e..5729c20e5b8b 100644
--- a/arch/loongarch/kernel/mcount_dyn.S
+++ b/arch/loongarch/kernel/mcount_dyn.S
@@ -94,7 +94,6 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
* at the callsite, so there is no need to restore the T series regs.
*/
ftrace_common_return:
- PTR_L ra, sp, PT_R1
PTR_L a0, sp, PT_R4
PTR_L a1, sp, PT_R5
PTR_L a2, sp, PT_R6
@@ -104,12 +103,17 @@ ftrace_common_return:
PTR_L a6, sp, PT_R10
PTR_L a7, sp, PT_R11
PTR_L fp, sp, PT_R22
- PTR_L t0, sp, PT_ERA
PTR_L t1, sp, PT_R13
- PTR_ADDI sp, sp, PT_SIZE
bnez t1, .Ldirect
+
+ PTR_L ra, sp, PT_R1
+ PTR_L t0, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t0
.Ldirect:
+ PTR_L t0, sp, PT_R1
+ PTR_L ra, sp, PT_ERA
+ PTR_ADDI sp, sp, PT_SIZE
jr t1
SYM_CODE_END(ftrace_common)
@@ -161,6 +165,8 @@ SYM_CODE_END(return_to_handler)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
SYM_CODE_START(ftrace_stub_direct_tramp)
UNWIND_HINT_UNDEFINED
- jr t0
+ move t1, ra
+ move ra, t0
+ jr t1
SYM_CODE_END(ftrace_stub_direct_tramp)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
--
2.25.1
* [PATCH v2 2/4] ftrace: samples: Adjust register stack restore order in direct call trampolines

2025-12-12 9:10 [PATCH v2 0/4] Fix the failure issue of the module_attach test case Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 1/4] LoongArch: ftrace: Refactor register restoration in ftrace_common_return Chenghao Duan
@ 2025-12-12 9:11 ` Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode Chenghao Duan
3 siblings, 0 replies; 9+ messages in thread
From: Chenghao Duan @ 2025-12-12 9:11 UTC (permalink / raw)
To: yangtiezhu, hengqi.chen, chenhuacai
Cc: kernel, zhangtianyang, masahiroy, linux-kernel, loongarch, bpf,
duanchenghao, youling.tang, jianghaoran, vincent.mc.li,
Youling Tang
In the ftrace direct call logic, ensure that the CPU register state
(with ra = parent return address) is correctly restored after the
custom trampoline function executes and before control returns to the
traced function. Additionally, guarantee that the jr t0 jump logic
(t0 = traced function address) is correct.
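The intended effect can be sketched in C (a hypothetical model of one sample trampoline's stack usage; the register values are arbitrary placeholders):

```c
#include <assert.h>
#include <stdint.h>

/* On entry to the sample trampoline (per the new ftrace_common_return
 * contract): t0 = parent return address, ra = traced function address. */
struct cpu { uintptr_t ra, t0; };

void my_tramp(struct cpu *c)
{
	uintptr_t stack[2];
	stack[0] = c->t0;   /* st.d $t0, $sp, 0 */
	stack[1] = c->ra;   /* st.d $ra, $sp, 8 */
	c->ra = 0xdead;     /* bl my_direct_func clobbers ra */
	/* the corrected restore order swaps the two registers back: */
	c->ra = stack[0];   /* ld.d $ra, $sp, 0 -> parent return address */
	c->t0 = stack[1];   /* ld.d $t0, $sp, 8 -> traced function */
	/* "jr $t0" then resumes the traced function with ra = parent */
}
```

Loading the slots back into the opposite registers is what lets `jr $t0` enter the traced function while ra already points at the parent.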
Reported-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
samples/ftrace/ftrace-direct-modify.c | 8 ++++----
samples/ftrace/ftrace-direct-multi-modify.c | 8 ++++----
samples/ftrace/ftrace-direct-multi.c | 4 ++--
samples/ftrace/ftrace-direct-too.c | 4 ++--
samples/ftrace/ftrace-direct.c | 4 ++--
5 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
index da3a9f2091f5..1ba1927b548e 100644
--- a/samples/ftrace/ftrace-direct-modify.c
+++ b/samples/ftrace/ftrace-direct-modify.c
@@ -176,8 +176,8 @@ asm (
" st.d $t0, $sp, 0\n"
" st.d $ra, $sp, 8\n"
" bl my_direct_func1\n"
-" ld.d $t0, $sp, 0\n"
-" ld.d $ra, $sp, 8\n"
+" ld.d $ra, $sp, 0\n"
+" ld.d $t0, $sp, 8\n"
" addi.d $sp, $sp, 16\n"
" jr $t0\n"
" .size my_tramp1, .-my_tramp1\n"
@@ -189,8 +189,8 @@ asm (
" st.d $t0, $sp, 0\n"
" st.d $ra, $sp, 8\n"
" bl my_direct_func2\n"
-" ld.d $t0, $sp, 0\n"
-" ld.d $ra, $sp, 8\n"
+" ld.d $ra, $sp, 0\n"
+" ld.d $t0, $sp, 8\n"
" addi.d $sp, $sp, 16\n"
" jr $t0\n"
" .size my_tramp2, .-my_tramp2\n"
diff --git a/samples/ftrace/ftrace-direct-multi-modify.c b/samples/ftrace/ftrace-direct-multi-modify.c
index 8f7986d698d8..7a7822dfeb50 100644
--- a/samples/ftrace/ftrace-direct-multi-modify.c
+++ b/samples/ftrace/ftrace-direct-multi-modify.c
@@ -199,8 +199,8 @@ asm (
" move $a0, $t0\n"
" bl my_direct_func1\n"
" ld.d $a0, $sp, 0\n"
-" ld.d $t0, $sp, 8\n"
-" ld.d $ra, $sp, 16\n"
+" ld.d $ra, $sp, 8\n"
+" ld.d $t0, $sp, 16\n"
" addi.d $sp, $sp, 32\n"
" jr $t0\n"
" .size my_tramp1, .-my_tramp1\n"
@@ -215,8 +215,8 @@ asm (
" move $a0, $t0\n"
" bl my_direct_func2\n"
" ld.d $a0, $sp, 0\n"
-" ld.d $t0, $sp, 8\n"
-" ld.d $ra, $sp, 16\n"
+" ld.d $ra, $sp, 8\n"
+" ld.d $t0, $sp, 16\n"
" addi.d $sp, $sp, 32\n"
" jr $t0\n"
" .size my_tramp2, .-my_tramp2\n"
diff --git a/samples/ftrace/ftrace-direct-multi.c b/samples/ftrace/ftrace-direct-multi.c
index db326c81a27d..3fe6ddaf0b69 100644
--- a/samples/ftrace/ftrace-direct-multi.c
+++ b/samples/ftrace/ftrace-direct-multi.c
@@ -131,8 +131,8 @@ asm (
" move $a0, $t0\n"
" bl my_direct_func\n"
" ld.d $a0, $sp, 0\n"
-" ld.d $t0, $sp, 8\n"
-" ld.d $ra, $sp, 16\n"
+" ld.d $ra, $sp, 8\n"
+" ld.d $t0, $sp, 16\n"
" addi.d $sp, $sp, 32\n"
" jr $t0\n"
" .size my_tramp, .-my_tramp\n"
diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
index 3d0fa260332d..bf2411aa6fd7 100644
--- a/samples/ftrace/ftrace-direct-too.c
+++ b/samples/ftrace/ftrace-direct-too.c
@@ -143,8 +143,8 @@ asm (
" ld.d $a0, $sp, 0\n"
" ld.d $a1, $sp, 8\n"
" ld.d $a2, $sp, 16\n"
-" ld.d $t0, $sp, 24\n"
-" ld.d $ra, $sp, 32\n"
+" ld.d $ra, $sp, 24\n"
+" ld.d $t0, $sp, 32\n"
" addi.d $sp, $sp, 48\n"
" jr $t0\n"
" .size my_tramp, .-my_tramp\n"
diff --git a/samples/ftrace/ftrace-direct.c b/samples/ftrace/ftrace-direct.c
index 956834b0d19a..5368c8c39cbb 100644
--- a/samples/ftrace/ftrace-direct.c
+++ b/samples/ftrace/ftrace-direct.c
@@ -124,8 +124,8 @@ asm (
" st.d $ra, $sp, 16\n"
" bl my_direct_func\n"
" ld.d $a0, $sp, 0\n"
-" ld.d $t0, $sp, 8\n"
-" ld.d $ra, $sp, 16\n"
+" ld.d $ra, $sp, 8\n"
+" ld.d $t0, $sp, 16\n"
" addi.d $sp, $sp, 32\n"
" jr $t0\n"
" .size my_tramp, .-my_tramp\n"
--
2.25.1
* [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing
2025-12-12 9:10 [PATCH v2 0/4] Fix the failure issue of the module_attach test case Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 1/4] LoongArch: ftrace: Refactor register restoration in ftrace_common_return Chenghao Duan
2025-12-12 9:11 ` [PATCH v2 2/4] ftrace: samples: Adjust register stack restore order in direct call trampolines Chenghao Duan
@ 2025-12-12 9:11 ` Chenghao Duan
2025-12-14 12:36 ` Tiezhu Yang
2025-12-12 9:11 ` [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode Chenghao Duan
3 siblings, 1 reply; 9+ messages in thread
From: Chenghao Duan @ 2025-12-12 9:11 UTC (permalink / raw)
To: yangtiezhu, hengqi.chen, chenhuacai
Cc: kernel, zhangtianyang, masahiroy, linux-kernel, loongarch, bpf,
duanchenghao, youling.tang, jianghaoran, vincent.mc.li
This patch addresses the following issues in the LoongArch BPF
trampoline implementation:
1. BPF-to-BPF call handling:
- Modify the build_prologue function to ensure that the value of the
return address register ra is saved to t0 before entering the
trampoline operation.
- This ensures that the return address handling logic is accurate and
error-free when a BPF program calls another BPF program.
2. Enable Module Function Tracing Support:
- Remove the previous restrictions that blocked the tracing of kernel
module functions.
- Fix the issue that previously caused kernel lockups when attempting
to trace module functions.
3. Related Function Optimizations:
- Adjust the jump offset of tail calls to ensure correct instruction
alignment.
- Enhance the bpf_arch_text_poke() function to enable accurate location
of BPF program entry points.
- Refine the trampoline return logic to ensure that the register data
is correct when returning to both the traced function and the parent
function.
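The two epilogue paths can be modeled in C (illustrative only; it assumes that at this point ra holds the traced function's body address and t0 the parent return address, following the prologue's "move t0, ra"):

```c
#include <assert.h>
#include <stdint.h>

struct cpu { uintptr_t ra, t0, t1, pc; };

/* BPF_TRAMP_F_SKIP_FRAME: skip the traced function entirely */
void ret_to_parent(struct cpu *c)
{
	c->ra = c->t0;  /* move ra, t0 */
	c->pc = c->t0;  /* jirl zero, t0, 0 */
}

/* otherwise: run the traced function with its ra restored */
void ret_to_traced(struct cpu *c)
{
	c->t1 = c->ra;  /* move t1, ra  (traced function body) */
	c->ra = c->t0;  /* move ra, t0  (parent return address) */
	c->pc = c->t1;  /* jirl zero, t1, 0 */
}
```

Either way, ra ends up holding the parent return address before the jump, which is the register contract the traced function expects.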
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
arch/loongarch/net/bpf_jit.c | 38 +++++++++++++++++++++++++-----------
1 file changed, 27 insertions(+), 11 deletions(-)
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 8dc58781b8eb..0c16a1b18e8f 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -139,6 +139,7 @@ static void build_prologue(struct jit_ctx *ctx)
stack_adjust = round_up(stack_adjust, 16);
stack_adjust += bpf_stack_adjust;
+ move_reg(ctx, LOONGARCH_GPR_T0, LOONGARCH_GPR_RA);
/* Reserve space for the move_imm + jirl instruction */
for (i = 0; i < LOONGARCH_LONG_JUMP_NINSNS; i++)
emit_insn(ctx, nop);
@@ -238,7 +239,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
* Call the next bpf prog and skip the first instruction
* of TCC initialization.
*/
- emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 6);
+ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 7);
}
}
@@ -1265,7 +1266,7 @@ static int emit_jump_or_nops(void *target, void *ip, u32 *insns, bool is_call)
return 0;
}
- return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_T0 : LOONGARCH_GPR_ZERO, (u64)target);
+ return emit_jump_and_link(&ctx, is_call ? LOONGARCH_GPR_RA : LOONGARCH_GPR_ZERO, (u64)target);
}
static int emit_call(struct jit_ctx *ctx, u64 addr)
@@ -1289,6 +1290,10 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
void *new_addr)
{
int ret;
+ unsigned long size = 0;
+ unsigned long offset = 0;
+ char namebuf[KSYM_NAME_LEN];
+ void *image = NULL;
bool is_call;
u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
@@ -1296,9 +1301,18 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
/* Only poking bpf text is supported. Since kernel function entry
* is set up by ftrace, we rely on ftrace to poke kernel functions.
*/
- if (!is_bpf_text_address((unsigned long)ip))
+ if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf))
return -ENOTSUPP;
+ image = ip - offset;
+ /* zero offset means we're poking bpf prog entry */
+ if (offset == 0)
+ /* skip to the nop instruction in bpf prog entry:
+ * move t0, ra
+ * nop
+ */
+ ip = image + LOONGARCH_INSN_SIZE;
+
is_call = old_t == BPF_MOD_CALL;
ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);
if (ret)
@@ -1622,14 +1636,12 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
/* To traced function */
/* Ftrace jump skips 2 NOP instructions */
- if (is_kernel_text((unsigned long)orig_call))
+ if (is_kernel_text((unsigned long)orig_call) ||
+ is_module_text_address((unsigned long)orig_call))
orig_call += LOONGARCH_FENTRY_NBYTES;
/* Direct jump skips 5 NOP instructions */
else if (is_bpf_text_address((unsigned long)orig_call))
orig_call += LOONGARCH_BPF_FENTRY_NBYTES;
- /* Module tracing not supported - cause kernel lockups */
- else if (is_module_text_address((unsigned long)orig_call))
- return -ENOTSUPP;
if (flags & BPF_TRAMP_F_CALL_ORIG) {
move_addr(ctx, LOONGARCH_GPR_A0, (const u64)im);
@@ -1722,12 +1734,16 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
emit_insn(ctx, ldd, LOONGARCH_GPR_FP, LOONGARCH_GPR_SP, 0);
emit_insn(ctx, addid, LOONGARCH_GPR_SP, LOONGARCH_GPR_SP, 16);
- if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* return to parent function */
- emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_RA, 0);
- else
- /* return to traced function */
+ move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T0, 0);
+ } else {
+ /* return to traced function */
+ move_reg(ctx, LOONGARCH_GPR_T1, LOONGARCH_GPR_RA);
+ move_reg(ctx, LOONGARCH_GPR_RA, LOONGARCH_GPR_T0);
+ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T1, 0);
+ }
}
ret = ctx->idx;
--
2.25.1
* Re: [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing
2025-12-12 9:11 ` [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing Chenghao Duan
@ 2025-12-14 12:36 ` Tiezhu Yang
2025-12-15 2:32 ` Chenghao Duan
0 siblings, 1 reply; 9+ messages in thread
From: Tiezhu Yang @ 2025-12-14 12:36 UTC (permalink / raw)
To: Chenghao Duan, hengqi.chen, chenhuacai
Cc: kernel, zhangtianyang, masahiroy, linux-kernel, loongarch, bpf,
youling.tang, jianghaoran, vincent.mc.li
On 12/12/25 17:11, Chenghao Duan wrote:
> This patch addresses two main issues in the LoongArch BPF trampoline
> implementation:
>
> 1. BPF-to-BPF call handling:
> - Modify the build_prologue function to ensure that the value of the
> return address register ra is saved to t0 before entering the
> trampoline operation.
> - This ensures that the return address handling logic is accurate and
> error-free when a BPF program calls another BPF program.
>
> 2. Enable Module Function Tracing Support:
> - Remove the previous restrictions that blocked the tracing of kernel
> module functions.
> - Fix the issue that previously caused kernel lockups when attempting
> to trace module functions
>
> 3. Related Function Optimizations:
> - Adjust the jump offset of tail calls to ensure correct instruction
> alignment.
> - Enhance the bpf_arch_text_poke() function to enable accurate location
> of BPF program entry points.
> - Refine the trampoline return logic to ensure that the register data
> is correct when returning to both the traced function and the parent
> function.
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
As described in the commit message, your changes include many kinds
of content; thanks for the fixes and optimizations.
In order to avoid introducing bugs in the middle, please separate each
logical change into a separate patch, each patch should make an easily
understood change that can be verified by reviewers, each patch should
be justifiable on its own merits.
The current patch #4 can be put after the current patch #2 as a
preparation for the bpf patches.
Furthermore, it would be better to put the related test cases in the
commit message of each patch rather than in the cover letter, so that
it can be verified easily to know what this patch affected and can be
recorded in the git log.
And also please add Fixes tag for each patch if possible.
Thanks,
Tiezhu
* Re: [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing
2025-12-14 12:36 ` Tiezhu Yang
@ 2025-12-15 2:32 ` Chenghao Duan
0 siblings, 0 replies; 9+ messages in thread
From: Chenghao Duan @ 2025-12-15 2:32 UTC (permalink / raw)
To: Tiezhu Yang
Cc: hengqi.chen, chenhuacai, kernel, zhangtianyang, masahiroy,
linux-kernel, loongarch, bpf, youling.tang, jianghaoran,
vincent.mc.li
On Sun, Dec 14, 2025 at 08:36:16PM +0800, Tiezhu Yang wrote:
> On 12/12/25 17:11, Chenghao Duan wrote:
> > This patch addresses two main issues in the LoongArch BPF trampoline
> > implementation:
> >
> > 1. BPF-to-BPF call handling:
> > - Modify the build_prologue function to ensure that the value of the
> > return address register ra is saved to t0 before entering the
> > trampoline operation.
> > - This ensures that the return address handling logic is accurate and
> > error-free when a BPF program calls another BPF program.
> >
> > 2. Enable Module Function Tracing Support:
> > - Remove the previous restrictions that blocked the tracing of kernel
> > module functions.
> > - Fix the issue that previously caused kernel lockups when attempting
> > to trace module functions
> >
> > 3. Related Function Optimizations:
> > - Adjust the jump offset of tail calls to ensure correct instruction
> > alignment.
> > - Enhance the bpf_arch_text_poke() function to enable accurate location
> > of BPF program entry points.
> > - Refine the trampoline return logic to ensure that the register data
> > is correct when returning to both the traced function and the parent
> > function.
> >
> > Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
>
> As described in the commit message, your changes include many kinds
> of contents, thanks for the fixes and optimizations.
>
> In order to avoid introducing bugs in the middle, please separate each
> logical change into a separate patch, each patch should make an easily
> understood change that can be verified by reviewers, each patch should
> be justifiable on its own merits.
>
> The current patch #4 can be put after the current patch #2 as a
> preparation for the bpf patches.
>
Got it. I will incorporate your suggestions in the next version.
> Furthermore, it would be better to put the related test cases in the
> commit message of each patch rather than in the cover letter, so that
> it can be verified easily to know what this patch affected and can be
> recorded in the git log.
I fully agree with your suggestions. In fact, the current three patches
(excluding 0002-ftrace-samples-xxx.patch) are all fixes for the failed
test cases of module_attach. The test items included in the cover letter
of 0000-xxx.patch are intended to verify that the trampoline-related
test cases can pass after the current changes. I will follow your advice
and place the relevant test cases in the commit message of the
corresponding patches in the next version.
Chenghao
>
> And also please add Fixes tag for each patch if possible.
>
> Thanks,
> Tiezhu
* [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode
2025-12-12 9:10 [PATCH v2 0/4] Fix the failure issue of the module_attach test case Chenghao Duan
` (2 preceding siblings ...)
2025-12-12 9:11 ` [PATCH v2 3/4] LoongArch: BPF: Enhance trampoline support for kernel and module tracing Chenghao Duan
@ 2025-12-12 9:11 ` Chenghao Duan
2025-12-13 13:16 ` Huacai Chen
3 siblings, 1 reply; 9+ messages in thread
From: Chenghao Duan @ 2025-12-12 9:11 UTC (permalink / raw)
To: yangtiezhu, hengqi.chen, chenhuacai
Cc: kernel, zhangtianyang, masahiroy, linux-kernel, loongarch, bpf,
duanchenghao, youling.tang, jianghaoran, vincent.mc.li
This patch allows the LoongArch BPF JIT to handle recoverable memory
access errors generated by BPF_PROBE_MEM* instructions.
When a BPF program performs memory accesses, the instructions it
executes may trigger ADEM exceptions. The kernel’s built-in BPF
exception table mechanism (EX_TYPE_BPF) generates the corresponding
exception fixup entries during JIT compilation; however, the
architecture-specific trap handler must explicitly call the common
fixup routine for the exception to be recovered.
do_ade(): fix up EX_TYPE_BPF memory access exceptions for BPF programs
to ensure safe execution.
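The recovery path can be sketched as a simplified C model (the names and the linear scan are illustrative only, not the kernel's struct exception_table_entry or the real fixup_exception() lookup):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model: map a faulting pc to a fixup target. */
struct ex_entry { uintptr_t insn, fixup; };
struct fake_regs { uintptr_t pc; };

int bpf_fixup(struct fake_regs *regs, const struct ex_entry *tbl, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (tbl[i].insn == regs->pc) {
			regs->pc = tbl[i].fixup;  /* resume past the fault */
			return 1;                 /* handled, skip SIGBUS path */
		}
	}
	return 0;  /* unhandled: fall through to die_if_kernel()/force_sig_fault() */
}
```

A hit redirects execution to the fixup target so the JITed BPF program continues safely; a miss falls back to the existing die/SIGBUS handling.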
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
arch/loongarch/kernel/traps.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
index da5926fead4a..4cf72e0af6a3 100644
--- a/arch/loongarch/kernel/traps.c
+++ b/arch/loongarch/kernel/traps.c
@@ -534,7 +534,13 @@ asmlinkage void noinstr do_fpe(struct pt_regs *regs, unsigned long fcsr)
asmlinkage void noinstr do_ade(struct pt_regs *regs)
{
- irqentry_state_t state = irqentry_enter(regs);
+ irqentry_state_t state;
+ unsigned int esubcode = FIELD_GET(CSR_ESTAT_ESUBCODE, regs->csr_estat);
+
+ if ((esubcode == EXSUBCODE_ADEM) && fixup_exception(regs))
+ return;
+
+ state = irqentry_enter(regs);
die_if_kernel("Kernel ade access", regs);
force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)regs->csr_badvaddr);
--
2.25.1
* Re: [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode
2025-12-12 9:11 ` [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode Chenghao Duan
@ 2025-12-13 13:16 ` Huacai Chen
2025-12-15 2:18 ` Chenghao Duan
0 siblings, 1 reply; 9+ messages in thread
From: Huacai Chen @ 2025-12-13 13:16 UTC (permalink / raw)
To: Chenghao Duan
Cc: yangtiezhu, hengqi.chen, kernel, zhangtianyang, masahiroy,
linux-kernel, loongarch, bpf, youling.tang, jianghaoran,
vincent.mc.li
Hi, Chenghao,
On Fri, Dec 12, 2025 at 5:11 PM Chenghao Duan <duanchenghao@kylinos.cn> wrote:
>
> This patch allows the LoongArch BPF JIT to handle recoverable memory
> access errors generated by BPF_PROBE_MEM* instructions.
>
> When a BPF program performs memory access operations, the instructions
> it executes may trigger ADEM exceptions. The kernel’s built-in BPF
> exception table mechanism (EX_TYPE_BPF) will generate corresponding
> exception fixup entries in the JIT compilation phase; however, the
> architecture-specific trap handling function needs to proactively call
> the common fixup routine to achieve exception recovery.
>
> do_ade(): fix EX_TYPE_BPF memory access exceptions for BPF programs,
> ensure safe execution.
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> ---
> arch/loongarch/kernel/traps.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
> index da5926fead4a..4cf72e0af6a3 100644
> --- a/arch/loongarch/kernel/traps.c
> +++ b/arch/loongarch/kernel/traps.c
> @@ -534,7 +534,13 @@ asmlinkage void noinstr do_fpe(struct pt_regs *regs, unsigned long fcsr)
>
> asmlinkage void noinstr do_ade(struct pt_regs *regs)
> {
> - irqentry_state_t state = irqentry_enter(regs);
> + irqentry_state_t state;
> + unsigned int esubcode = FIELD_GET(CSR_ESTAT_ESUBCODE, regs->csr_estat);
> +
> + if ((esubcode == EXSUBCODE_ADEM) && fixup_exception(regs))
> + return;
No chance for ADEF? And I don't think fixup_exception() can be done
outside of irqentry_enter().
This patch is needed by BPF but not part of BPF, so I think the
subject should be:
LoongArch: Enable exception fixup for specific ADE subcode
Huacai
> +
> + state = irqentry_enter(regs);
>
> die_if_kernel("Kernel ade access", regs);
> force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)regs->csr_badvaddr);
> --
> 2.25.1
>
* Re: [PATCH v2 4/4] LoongArch: BPF: Enable BPF exception fixup for specific ADE subcode
2025-12-13 13:16 ` Huacai Chen
@ 2025-12-15 2:18 ` Chenghao Duan
0 siblings, 0 replies; 9+ messages in thread
From: Chenghao Duan @ 2025-12-15 2:18 UTC (permalink / raw)
To: Huacai Chen
Cc: yangtiezhu, hengqi.chen, kernel, zhangtianyang, masahiroy,
linux-kernel, loongarch, bpf, youling.tang, jianghaoran,
vincent.mc.li
On Sat, Dec 13, 2025 at 09:16:00PM +0800, Huacai Chen wrote:
> Hi, Chenghao,
>
> On Fri, Dec 12, 2025 at 5:11 PM Chenghao Duan <duanchenghao@kylinos.cn> wrote:
> >
> > This patch allows the LoongArch BPF JIT to handle recoverable memory
> > access errors generated by BPF_PROBE_MEM* instructions.
> >
> > When a BPF program performs memory access operations, the instructions
> > it executes may trigger ADEM exceptions. The kernel’s built-in BPF
> > exception table mechanism (EX_TYPE_BPF) will generate corresponding
> > exception fixup entries in the JIT compilation phase; however, the
> > architecture-specific trap handling function needs to proactively call
> > the common fixup routine to achieve exception recovery.
> >
> > do_ade(): fix EX_TYPE_BPF memory access exceptions for BPF programs,
> > ensure safe execution.
> >
> > Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> > ---
> > arch/loongarch/kernel/traps.c | 8 +++++++-
> > 1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
> > index da5926fead4a..4cf72e0af6a3 100644
> > --- a/arch/loongarch/kernel/traps.c
> > +++ b/arch/loongarch/kernel/traps.c
> > @@ -534,7 +534,13 @@ asmlinkage void noinstr do_fpe(struct pt_regs *regs, unsigned long fcsr)
> >
> > asmlinkage void noinstr do_ade(struct pt_regs *regs)
> > {
> > - irqentry_state_t state = irqentry_enter(regs);
> > + irqentry_state_t state;
> > + unsigned int esubcode = FIELD_GET(CSR_ESTAT_ESUBCODE, regs->csr_estat);
> > +
> > + if ((esubcode == EXSUBCODE_ADEM) && fixup_exception(regs))
> > + return;
> No chance for ADEF? And I don't think fixup_exception() can be done
> outside of irqentry_enter().
At present, exception fixups are only applied to BPF memory access
(ADEM) exceptions; no handling is implemented for ADEF.
In the next version, fixup_exception() will be handled inside the
irqentry_enter()/irqentry_exit() section, and the subject will be
revised per your suggestion.
Chenghao
>
> This patch is needed by BPF but not part of BPF, so I think the
> subject should be:
> LoongArch: Enable exception fixup for specific ADE subcode
>
> Huacai
>
> > +
> > + state = irqentry_enter(regs);
> >
> > die_if_kernel("Kernel ade access", regs);
> > force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *)regs->csr_badvaddr);
> > --
> > 2.25.1
> >