netdev.vger.kernel.org archive mirror
* [syzbot] [bpf?] KASAN: stack-out-of-bounds Write in __bpf_get_stack
@ 2025-11-10 18:41 syzbot
  2025-11-10 21:16 ` [RFC bpf-next PATCH] bpf: Clamp trace length in __bpf_get_stack to fix OOB write Brahmajit Das
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: syzbot @ 2025-11-10 18:41 UTC (permalink / raw)
  To: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs, yonghong.song

Hello,

syzbot found the following issue on:

HEAD commit:    f8c67d8550ee bpf: Use kmalloc_nolock() in range tree
git tree:       bpf-next
console output: https://syzkaller.appspot.com/x/log.txt?x=121a50b4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=e46b8a1c645465a9
dashboard link: https://syzkaller.appspot.com/bug?extid=d1b7fa1092def3628bd7
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=12270412580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=128bd084580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d9e95bfbe4ee/disk-f8c67d85.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/0766b6dd0e91/vmlinux-f8c67d85.xz
kernel image: https://storage.googleapis.com/syzbot-assets/79089f9e9e93/bzImage-f8c67d85.xz

The issue was bisected to:

commit e17d62fedd10ae56e2426858bd0757da544dbc73
Author: Arnaud Lecomte <contact@arnaud-lcm.com>
Date:   Sat Oct 25 19:28:58 2025 +0000

    bpf: Refactor stack map trace depth calculation into helper function

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=1632d0b4580000
final oops:     https://syzkaller.appspot.com/x/report.txt?x=1532d0b4580000
console output: https://syzkaller.appspot.com/x/log.txt?x=1132d0b4580000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")

==================================================================
BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x5a3/0xaa0 kernel/bpf/stackmap.c:493
Write of size 168 at addr ffffc900030e73a8 by task syz.1.44/6108

CPU: 0 UID: 0 PID: 6108 Comm: syz.1.44 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 check_region_inline mm/kasan/generic.c:-1 [inline]
 kasan_check_range+0x2b0/0x2c0 mm/kasan/generic.c:200
 __asan_memcpy+0x40/0x70 mm/kasan/shadow.c:106
 __bpf_get_stack+0x5a3/0xaa0 kernel/bpf/stackmap.c:493
 ____bpf_get_stack kernel/bpf/stackmap.c:517 [inline]
 bpf_get_stack+0x33/0x50 kernel/bpf/stackmap.c:514
 ____bpf_get_stack_raw_tp kernel/trace/bpf_trace.c:1653 [inline]
 bpf_get_stack_raw_tp+0x1a9/0x220 kernel/trace/bpf_trace.c:1643
 bpf_prog_4b3f8e3d902f6f0d+0x41/0x49
 bpf_dispatcher_nop_func include/linux/bpf.h:1364 [inline]
 __bpf_prog_run include/linux/filter.h:721 [inline]
 bpf_prog_run include/linux/filter.h:728 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2075 [inline]
 bpf_trace_run2+0x284/0x4b0 kernel/trace/bpf_trace.c:2116
 __traceiter_kfree+0x2e/0x50 include/trace/events/kmem.h:97
 __do_trace_kfree include/trace/events/kmem.h:97 [inline]
 trace_kfree include/trace/events/kmem.h:97 [inline]
 kfree+0x62f/0x6d0 mm/slub.c:6824
 compute_scc+0x9a6/0xa20 kernel/bpf/verifier.c:25021
 bpf_check+0x5df2/0x1c210 kernel/bpf/verifier.c:25162
 bpf_prog_load+0x13ba/0x1a10 kernel/bpf/syscall.c:3095
 __sys_bpf+0x507/0x860 kernel/bpf/syscall.c:6171
 __do_sys_bpf kernel/bpf/syscall.c:6281 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:6279 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:6279
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc4d8b8f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffcd2851bb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fc4d8de5fa0 RCX: 00007fc4d8b8f6c9
RDX: 0000000000000094 RSI: 00002000000000c0 RDI: 0000000000000005
RBP: 00007fc4d8c11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fc4d8de5fa0 R14: 00007fc4d8de5fa0 R15: 0000000000000003
 </TASK>

The buggy address belongs to stack of task syz.1.44/6108
 and is located at offset 296 in frame:
 __bpf_get_stack+0x0/0xaa0 include/linux/mmap_lock.h:-1

This frame has 1 object:
 [32, 36) 'rctx.i'

The buggy address belongs to a 8-page vmalloc region starting at 0xffffc900030e0000 allocated at copy_process+0x54b/0x3c00 kernel/fork.c:2012
The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x572fb
memcg:ffff88803037aa02
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000001ffffffff ffff88803037aa02
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x2dc2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_ZERO|__GFP_NOWARN), pid 1340, tgid 1340 (kworker/u8:6), ts 107851542040, free_ts 101175357499
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
 prep_new_page mm/page_alloc.c:1858 [inline]
 get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3884
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_frozen_pages_noprof mm/mempolicy.c:2487 [inline]
 alloc_pages_noprof+0xa9/0x190 mm/mempolicy.c:2507
 vm_area_alloc_pages mm/vmalloc.c:3647 [inline]
 __vmalloc_area_node mm/vmalloc.c:3724 [inline]
 __vmalloc_node_range_noprof+0x96c/0x12d0 mm/vmalloc.c:3897
 __vmalloc_node_noprof+0xc2/0x110 mm/vmalloc.c:3960
 alloc_thread_stack_node kernel/fork.c:311 [inline]
 dup_task_struct+0x3d4/0x830 kernel/fork.c:881
 copy_process+0x54b/0x3c00 kernel/fork.c:2012
 kernel_clone+0x21e/0x840 kernel/fork.c:2609
 user_mode_thread+0xdd/0x140 kernel/fork.c:2685
 call_usermodehelper_exec_sync kernel/umh.c:132 [inline]
 call_usermodehelper_exec_work+0x9c/0x230 kernel/umh.c:163
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
page last free pid 5918 tgid 5918 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2906
 vfree+0x25a/0x400 mm/vmalloc.c:3440
 kcov_put kernel/kcov.c:439 [inline]
 kcov_close+0x28/0x50 kernel/kcov.c:535
 __fput+0x44c/0xa70 fs/file_table.c:468
 task_work_run+0x1d4/0x260 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x6b5/0x2300 kernel/exit.c:966
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1107
 get_signal+0x1285/0x1340 kernel/signal.c:3034
 arch_do_signal_or_restart+0xa0/0x790 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x72/0x130 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x2bd/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffffc900030e7300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffffc900030e7380: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffffc900030e7400: f1 f1 f1 f1 00 00 f2 f2 00 00 f3 f3 00 00 00 00
                   ^
 ffffc900030e7480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffffc900030e7500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup


* [RFC bpf-next PATCH] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-10 18:41 [syzbot] [bpf?] KASAN: stack-out-of-bounds Write in __bpf_get_stack syzbot
@ 2025-11-10 21:16 ` Brahmajit Das
  2025-11-11  0:37 ` [PATCH bpf-next v2] " Brahmajit Das
  2025-11-11  8:12 ` [PATCH bpf-next v3] " Brahmajit Das
  2 siblings, 0 replies; 15+ messages in thread
From: Brahmajit Das @ 2025-11-10 21:16 UTC (permalink / raw)
  To: syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs, yonghong.song

syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.

After the recent refactor that introduced stack_map_calculate_max_depth(),
the code in stack_map_get_build_id_offset() (and related helpers) stopped
clamping the number of trace entries (`trace_nr`) to the number of elements
that fit into the stack map value (`num_elem`).

As a result, if the captured stack contained more frames than the map value
can hold, the subsequent memcpy() would write past the end of the buffer,
triggering a KASAN report like:

    BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
    Write of size N at addr ... by task syz-executor...

Restore the missing clamp by limiting `trace_nr` to `num_elem` before
computing the copy length. This mirrors the pre-refactor logic and ensures
we never copy more bytes than the destination buffer can hold.

No functional change intended beyond reintroducing the missing bound check.

Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@listout.xyz>
---
 kernel/bpf/stackmap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..885130e4ab0d 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	trace_nr = trace->nr - skip;
+	trace_nr = min_t(u32, trace_nr, size / elem_size);
 	copy_len = trace_nr * elem_size;
 
 	ips = trace->ip + skip;
-- 
2.51.2



* [PATCH bpf-next v2] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-10 18:41 [syzbot] [bpf?] KASAN: stack-out-of-bounds Write in __bpf_get_stack syzbot
  2025-11-10 21:16 ` [RFC bpf-next PATCH] bpf: Clamp trace length in __bpf_get_stack to fix OOB write Brahmajit Das
@ 2025-11-11  0:37 ` Brahmajit Das
  2025-11-11  1:04   ` bot+bpf-ci
  2025-11-11  8:12 ` [PATCH bpf-next v3] " Brahmajit Das
  2 siblings, 1 reply; 15+ messages in thread
From: Brahmajit Das @ 2025-11-11  0:37 UTC (permalink / raw)
  To: syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs, yonghong.song

syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.

After the recent refactor that introduced stack_map_calculate_max_depth(),
the code in stack_map_get_build_id_offset() (and related helpers) stopped
clamping the number of trace entries (`trace_nr`) to the number of elements
that fit into the stack map value (`num_elem`).

As a result, if the captured stack contained more frames than the map value
can hold, the subsequent memcpy() would write past the end of the buffer,
triggering a KASAN report like:

    BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
    Write of size N at addr ... by task syz-executor...

Restore the missing clamp by limiting `trace_nr` to `num_elem` before
computing the copy length. This mirrors the pre-refactor logic and ensures
we never copy more bytes than the destination buffer can hold.

No functional change intended beyond reintroducing the missing bound check.

Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@listout.xyz>
---
Changes in v2:
- Use max_depth instead of the num_elem logic; this is similar to what
we are already using in __bpf_get_stackid.

Changes in v1:
- RFC patch that restores the clamp by setting trace_nr to the smaller
of trace_nr and num_elem.
Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
---
 kernel/bpf/stackmap.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..f9081de43689 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	trace_nr = trace->nr - skip;
+	trace_nr = min_t(u32, trace_nr, max_depth - skip);
 	copy_len = trace_nr * elem_size;
 
 	ips = trace->ip + skip;
-- 
2.51.2



* Re: [PATCH bpf-next v2] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-11  0:37 ` [PATCH bpf-next v2] " Brahmajit Das
@ 2025-11-11  1:04   ` bot+bpf-ci
  0 siblings, 0 replies; 15+ messages in thread
From: bot+bpf-ci @ 2025-11-11  1:04 UTC (permalink / raw)
  To: listout, syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs, yonghong.song, ast, andrii, daniel,
	martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai


> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c8..f9081de43 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -480,6 +480,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>  	}
>
>  	trace_nr = trace->nr - skip;
> +	trace_nr = min_t(u32, trace_nr, max_depth - skip);
>  	copy_len = trace_nr * elem_size;
>
>  	ips = trace->ip + skip;

Can max_depth - skip underflow when max_depth < skip?

The stack_map_calculate_max_depth() function can return a value less than
skip when sysctl_perf_event_max_stack is lowered below the skip value:

    max_depth = size / elem_size;
    max_depth += skip;
    if (max_depth > curr_sysctl_max_stack)
        return curr_sysctl_max_stack;

If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.

Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
causing min_t() to not limit trace_nr at all. This means the original OOB
write is not fixed in cases where skip > max_depth.

With the default sysctl_perf_event_max_stack = 127 and skip up to 255, this
scenario is reachable even without admin changing sysctls.
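
To make the wrap concrete, here is a minimal user-space sketch (plain C
stand-ins for the kernel types; illustrative only, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t max_depth = 10;  /* clamped to sysctl_perf_event_max_stack */
        uint32_t skip = 20;       /* BPF_F_SKIP_FIELD_MASK allows up to 255 */
        uint32_t trace_nr = 100;  /* frames actually captured */

        uint32_t limit = max_depth - skip;  /* wraps to 4294967286 */
        uint32_t clamped = trace_nr < limit ? trace_nr : limit;

        /* clamped stays 100: the min_t() does not limit anything */
        printf("limit=%u clamped=%u\n", limit, clamped);
        return 0;
    }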

The pre-refactor code used:

    num_elem = size / elem_size;
    trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;

Perhaps the fix should directly use num_elem instead of max_depth - skip:

    u32 num_elem = size / elem_size;
    trace_nr = min_t(u32, trace_nr, num_elem);

Or check for underflow:

    if (max_depth > skip)
        trace_nr = min_t(u32, trace_nr, max_depth - skip);
    else
        trace_nr = 0;


---
AI reviewed your patch. Please fix the bug or reply explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19251115736


* [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-10 18:41 [syzbot] [bpf?] KASAN: stack-out-of-bounds Write in __bpf_get_stack syzbot
  2025-11-10 21:16 ` [RFC bpf-next PATCH] bpf: Clamp trace length in __bpf_get_stack to fix OOB write Brahmajit Das
  2025-11-11  0:37 ` [PATCH bpf-next v2] " Brahmajit Das
@ 2025-11-11  8:12 ` Brahmajit Das
  2025-11-12  1:44   ` Yonghong Song
                     ` (2 more replies)
  2 siblings, 3 replies; 15+ messages in thread
From: Brahmajit Das @ 2025-11-11  8:12 UTC (permalink / raw)
  To: syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs, yonghong.song

syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
triggered via bpf_get_stack() when capturing a kernel stack trace.

After the recent refactor that introduced stack_map_calculate_max_depth(),
the code in stack_map_get_build_id_offset() (and related helpers) stopped
clamping the number of trace entries (`trace_nr`) to the number of elements
that fit into the stack map value (`num_elem`).

As a result, if the captured stack contained more frames than the map value
can hold, the subsequent memcpy() would write past the end of the buffer,
triggering a KASAN report like:

    BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
    Write of size N at addr ... by task syz-executor...

Restore the missing clamp by limiting `trace_nr` to `num_elem` before
computing the copy length. This mirrors the pre-refactor logic and ensures
we never copy more bytes than the destination buffer can hold.

No functional change intended beyond reintroducing the missing bound check.

Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
Signed-off-by: Brahmajit Das <listout@listout.xyz>
---
Changes in v3:
- Revert to the num_elem-based logic for setting trace_nr, as suggested
by the bpf-ci bot, which pointed out a possible underflow when
max_depth < skip.

Quoting the bot's reply:
The stack_map_calculate_max_depth() function can return a value less than
skip when sysctl_perf_event_max_stack is lowered below the skip value:

    max_depth = size / elem_size;
    max_depth += skip;
    if (max_depth > curr_sysctl_max_stack)
        return curr_sysctl_max_stack;

If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.

Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
causing min_t() to not limit trace_nr at all. This means the original OOB
write is not fixed in cases where skip > max_depth.

With the default sysctl_perf_event_max_stack = 127 and skip up to 255, this
scenario is reachable even without admin changing sysctls.

Changes in v2:
- Use max_depth instead of the num_elem logic; this is similar to what
we are already using in __bpf_get_stackid.
Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@listout.xyz/

Changes in v1:
- RFC patch that restores the clamp by setting trace_nr to the smaller
of trace_nr and num_elem.
Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
---
 kernel/bpf/stackmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81dd..cef79d9517ab 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 			    struct perf_callchain_entry *trace_in,
 			    void *buf, u32 size, u64 flags, bool may_fault)
 {
-	u32 trace_nr, copy_len, elem_size, max_depth;
+	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
 	bool crosstask = task && task != current;
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
@@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	trace_nr = trace->nr - skip;
+	num_elem = size / elem_size;
+	trace_nr = min_t(u32, trace_nr, num_elem);
 	copy_len = trace_nr * elem_size;
 
 	ips = trace->ip + skip;
-- 
2.51.2



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-11  8:12 ` [PATCH bpf-next v3] " Brahmajit Das
@ 2025-11-12  1:44   ` Yonghong Song
  2025-11-12  8:40   ` Lecomte, Arnaud
  2025-11-12 13:35   ` David Laight
  2 siblings, 0 replies; 15+ messages in thread
From: Yonghong Song @ 2025-11-12  1:44 UTC (permalink / raw)
  To: Brahmajit Das, syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, contact, daniel, eddyz87, haoluo,
	john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau, netdev,
	sdf, song, syzkaller-bugs



On 11/11/25 12:12 AM, Brahmajit Das wrote:
> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
>
> After the recent refactor that introduced stack_map_calculate_max_depth(),
> the code in stack_map_get_build_id_offset() (and related helpers) stopped
> clamping the number of trace entries (`trace_nr`) to the number of elements
> that fit into the stack map value (`num_elem`).
>
> As a result, if the captured stack contained more frames than the map value
> can hold, the subsequent memcpy() would write past the end of the buffer,
> triggering a KASAN report like:
>
>      BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
>      Write of size N at addr ... by task syz-executor...
>
> Restore the missing clamp by limiting `trace_nr` to `num_elem` before
> computing the copy length. This mirrors the pre-refactor logic and ensures
> we never copy more bytes than the destination buffer can hold.
>
> No functional change intended beyond reintroducing the missing bound check.
>
> Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
> Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
> Signed-off-by: Brahmajit Das <listout@listout.xyz>

Acked-by: Yonghong Song <yonghong.song@linux.dev>



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-11  8:12 ` [PATCH bpf-next v3] " Brahmajit Das
  2025-11-12  1:44   ` Yonghong Song
@ 2025-11-12  8:40   ` Lecomte, Arnaud
  2025-11-12  8:58     ` Brahmajit Das
  2025-11-13 12:49     ` Brahmajit Das
  2025-11-12 13:35   ` David Laight
  2 siblings, 2 replies; 15+ messages in thread
From: Lecomte, Arnaud @ 2025-11-12  8:40 UTC (permalink / raw)
  To: Brahmajit Das, syzbot+d1b7fa1092def3628bd7
  Cc: andrii, ast, bpf, daniel, eddyz87, haoluo, john.fastabend, jolsa,
	kpsingh, linux-kernel, martin.lau, netdev, sdf, song,
	syzkaller-bugs, yonghong.song

I am not sure this is the right solution, and I am concerned that by
forcing this clamping we are hiding something else.
If we have a look at the code below:

```
	if (trace_in) {
		trace = trace_in;
		trace->nr = min_t(u32, trace->nr, max_depth);
	} else if (kernel && task) {
		trace = get_callchain_entry_for_task(task, max_depth);
	} else {
		trace = get_perf_callchain(regs, kernel, user, max_depth,
					   crosstask, false, 0);
	}
```

trace should be (if I remember correctly) clamped there. If not, it
might hide something else. I would like to look at the return value
for each if case through gdb.
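
For anyone wanting to reproduce that, something along these lines should
work (a sketch only; the vmlinux path, gdbstub port, and exact line
number are assumptions):

```
$ gdb vmlinux
(gdb) target remote :1234              # QEMU started with -s
(gdb) break kernel/bpf/stackmap.c:480  # just before trace_nr is computed
(gdb) continue
(gdb) print trace->nr
(gdb) print max_depth
```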

On 11/11/2025 08:12, Brahmajit Das wrote:
> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
>
> After the recent refactor that introduced stack_map_calculate_max_depth(),
> the code in stack_map_get_build_id_offset() (and related helpers) stopped
> clamping the number of trace entries (`trace_nr`) to the number of elements
> that fit into the stack map value (`num_elem`).
>
> As a result, if the captured stack contained more frames than the map value
> can hold, the subsequent memcpy() would write past the end of the buffer,
> triggering a KASAN report like:
>
>      BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
>      Write of size N at addr ... by task syz-executor...
>
> Restore the missing clamp by limiting `trace_nr` to `num_elem` before
> computing the copy length. This mirrors the pre-refactor logic and ensures
> we never copy more bytes than the destination buffer can hold.
>
> No functional change intended beyond reintroducing the missing bound check.
>
> Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
> Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
> Signed-off-by: Brahmajit Das <listout@listout.xyz>
> ---
> Changes in v3:
> - Revert to the num_elem-based logic for setting trace_nr, as suggested
> by the bpf-ci bot, which pointed out a possible underflow when
> max_depth < skip.
>
> Quoting the bot's reply:
> The stack_map_calculate_max_depth() function can return a value less than
> skip when sysctl_perf_event_max_stack is lowered below the skip value:
>
>      max_depth = size / elem_size;
>      max_depth += skip;
>      if (max_depth > curr_sysctl_max_stack)
>          return curr_sysctl_max_stack;
>
> If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.
>
> Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
> causing min_t() to not limit trace_nr at all. This means the original OOB
> write is not fixed in cases where skip > max_depth.
>
> With the default sysctl_perf_event_max_stack = 127 and skip up to 255, this
> scenario is reachable even without admin changing sysctls.
>
> Changes in v2:
> - Use max_depth instead of the num_elem logic; this is similar to what
> we are already using in __bpf_get_stackid.
> Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@listout.xyz/
>
> Changes in v1:
> - RFC patch that restores the clamp by setting trace_nr to the smaller
> of trace_nr and num_elem.
> Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
> ---
>   kernel/bpf/stackmap.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c81dd..cef79d9517ab 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>   			    struct perf_callchain_entry *trace_in,
>   			    void *buf, u32 size, u64 flags, bool may_fault)
>   {
> -	u32 trace_nr, copy_len, elem_size, max_depth;
> +	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
>   	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
>   	bool crosstask = task && task != current;
>   	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
> @@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>   	}
>   
>   	trace_nr = trace->nr - skip;
> +	num_elem = size / elem_size;
> +	trace_nr = min_t(u32, trace_nr, num_elem);
>   	copy_len = trace_nr * elem_size;
>   
>   	ips = trace->ip + skip;

Thanks,
Arnaud



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-12  8:40   ` Lecomte, Arnaud
@ 2025-11-12  8:58     ` Brahmajit Das
  2025-11-13 12:49     ` Brahmajit Das
  1 sibling, 0 replies; 15+ messages in thread
From: Brahmajit Das @ 2025-11-12  8:58 UTC (permalink / raw)
  To: Lecomte, Arnaud
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, daniel, eddyz87,
	haoluo, john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau,
	netdev, sdf, song, syzkaller-bugs, yonghong.song

On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> I am not sure this is the right solution, and I am concerned that by
> forcing this clamping we are hiding something else.
> If we have a look at the code below:
> 
> ```
> 	if (trace_in) {
> 		trace = trace_in;
> 		trace->nr = min_t(u32, trace->nr, max_depth);
> 	} else if (kernel && task) {
> 		trace = get_callchain_entry_for_task(task, max_depth);
> 	} else {
> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
> 					   crosstask, false, 0);
> 	}
> ```
> 
> trace should be (if I remember correctly) clamped there. If not, it
> might hide something else. I would like to look at the return value
> for each if case through gdb.

Sure, I can do that.

> 
> Thanks,
> Arnaud

-- 
Regards,
listout


* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-11  8:12 ` [PATCH bpf-next v3] " Brahmajit Das
  2025-11-12  1:44   ` Yonghong Song
  2025-11-12  8:40   ` Lecomte, Arnaud
@ 2025-11-12 13:35   ` David Laight
  2025-11-12 14:47     ` Brahmajit Das
  2 siblings, 1 reply; 15+ messages in thread
From: David Laight @ 2025-11-12 13:35 UTC (permalink / raw)
  To: Brahmajit Das
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, contact, daniel,
	eddyz87, haoluo, john.fastabend, jolsa, kpsingh, linux-kernel,
	martin.lau, netdev, sdf, song, syzkaller-bugs, yonghong.song

On Tue, 11 Nov 2025 13:42:54 +0530
Brahmajit Das <listout@listout.xyz> wrote:

> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
> 
> After the recent refactor that introduced stack_map_calculate_max_depth(),
> the code in stack_map_get_build_id_offset() (and related helpers) stopped
> clamping the number of trace entries (`trace_nr`) to the number of elements
> that fit into the stack map value (`num_elem`).
> 
> As a result, if the captured stack contained more frames than the map value
> can hold, the subsequent memcpy() would write past the end of the buffer,
> triggering a KASAN report like:
> 
>     BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
>     Write of size N at addr ... by task syz-executor...
> 
> Restore the missing clamp by limiting `trace_nr` to `num_elem` before
> computing the copy length. This mirrors the pre-refactor logic and ensures
> we never copy more bytes than the destination buffer can hold.
> 
> No functional change intended beyond reintroducing the missing bound check.
> 
> Reported-by: syzbot+d1b7fa1092def3628bd7@syzkaller.appspotmail.com
> Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
> Signed-off-by: Brahmajit Das <listout@listout.xyz>
> ---
> Changes in v3:
> - Revert to the num_elem-based logic for setting trace_nr, as suggested
> by the bpf-ci bot, which pointed out a possible underflow when
> max_depth < skip.
> 
> Quoting the bot's reply:
> The stack_map_calculate_max_depth() function can return a value less than
> skip when sysctl_perf_event_max_stack is lowered below the skip value:
> 
>     max_depth = size / elem_size;
>     max_depth += skip;
>     if (max_depth > curr_sysctl_max_stack)
>         return curr_sysctl_max_stack;
> 
> If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.
> 
> Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
> causing min_t() to not limit trace_nr at all. This means the original OOB
> write is not fixed in cases where skip > max_depth.
> 
> With the default sysctl_perf_event_max_stack = 127 and skip up to 255, this
> scenario is reachable even without admin changing sysctls.
> 
> Changes in v2:
> - Use max_depth instead of the num_elem logic; this is similar to what
> we are already using in __bpf_get_stackid.
> Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@listout.xyz/
> 
> Changes in v1:
> - RFC patch that restores the clamp by setting trace_nr to the smaller
> of trace_nr and num_elem.
> Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
> ---
>  kernel/bpf/stackmap.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c81dd..cef79d9517ab 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>  			    struct perf_callchain_entry *trace_in,
>  			    void *buf, u32 size, u64 flags, bool may_fault)
>  {
> -	u32 trace_nr, copy_len, elem_size, max_depth;
> +	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
>  	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
>  	bool crosstask = task && task != current;
>  	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
> @@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>  	}
>  
>  	trace_nr = trace->nr - skip;
> +	num_elem = size / elem_size;
> +	trace_nr = min_t(u32, trace_nr, num_elem);

Please can we have no unnecessary min_t().
You wouldn't write:
	x = (u32)a < (u32)b ? (u32)a : (u32)b;

    David
 
>  	copy_len = trace_nr * elem_size;
>  
>  	ips = trace->ip + skip;



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-12 13:35   ` David Laight
@ 2025-11-12 14:47     ` Brahmajit Das
  2025-11-12 16:11       ` Lecomte, Arnaud
  0 siblings, 1 reply; 15+ messages in thread
From: Brahmajit Das @ 2025-11-12 14:47 UTC (permalink / raw)
  To: David Laight
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, contact, daniel,
	eddyz87, haoluo, john.fastabend, jolsa, kpsingh, linux-kernel,
	martin.lau, netdev, sdf, song, syzkaller-bugs, yonghong.song

On 12.11.2025 13:35, David Laight wrote:
> On Tue, 11 Nov 2025 13:42:54 +0530
> Brahmajit Das <listout@listout.xyz> wrote:
> 
...snip...
> 
> Please can we have no unnecessary min_t().
> You wouldn't write:
> 	x = (u32)a < (u32)b ? (u32)a : (u32)b;
> 
>     David
>  
> >  	copy_len = trace_nr * elem_size;
> >  
> >  	ips = trace->ip + skip;
> 

Hi David,

Sorry, I didn't quite get that. Would you prefer something like:
	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
The pre-refactor code.

-- 
Regards,
listout


* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-12 14:47     ` Brahmajit Das
@ 2025-11-12 16:11       ` Lecomte, Arnaud
  2025-11-12 21:37         ` David Laight
  0 siblings, 1 reply; 15+ messages in thread
From: Lecomte, Arnaud @ 2025-11-12 16:11 UTC (permalink / raw)
  To: Brahmajit Das, David Laight
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, daniel, eddyz87,
	haoluo, john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau,
	netdev, sdf, song, syzkaller-bugs, yonghong.song


On 12/11/2025 14:47, Brahmajit Das wrote:
> On 12.11.2025 13:35, David Laight wrote:
>> On Tue, 11 Nov 2025 13:42:54 +0530
>> Brahmajit Das <listout@listout.xyz> wrote:
>>
> ...snip...
>> Please can we have no unnecessary min_t().
>> You wouldn't write:
>> 	x = (u32)a < (u32)b ? (u32)a : (u32)b;
>>
>>      David
>>   
>>>   	copy_len = trace_nr * elem_size;
>>>   
>>>   	ips = trace->ip + skip;
> Hi David,
>
> Sorry, I didn't quite get that. Would you prefer something like:
> 	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;

min_t() is min() with a cast, which is unnecessary in this case as
trace_nr and num_elem are already u32.
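
Roughly, with simplified stand-ins for the kernel macros (the real ones
also add strict type checking and guard against double evaluation):

```
#include <stdio.h>
typedef unsigned int u32;

#define min(a, b)      ((a) < (b) ? (a) : (b))
#define min_t(t, a, b) ((t)(a) < (t)(b) ? (t)(a) : (t)(b))

int main(void)
{
	u32 trace_nr = 50, num_elem = 32;

	/* both operands are already u32, so the casts buy nothing */
	printf("%u %u\n", min(trace_nr, num_elem),
	       min_t(u32, trace_nr, num_elem)); /* prints: 32 32 */
	return 0;
}
```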

> The pre-refactor code.
>


* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-12 16:11       ` Lecomte, Arnaud
@ 2025-11-12 21:37         ` David Laight
  0 siblings, 0 replies; 15+ messages in thread
From: David Laight @ 2025-11-12 21:37 UTC (permalink / raw)
  To: Lecomte, Arnaud
  Cc: Brahmajit Das, syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf,
	daniel, eddyz87, haoluo, john.fastabend, jolsa, kpsingh,
	linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
	yonghong.song

On Wed, 12 Nov 2025 16:11:41 +0000
"Lecomte, Arnaud" <contact@arnaud-lcm.com> wrote:

> On 12/11/2025 14:47, Brahmajit Das wrote:
> > On 12.11.2025 13:35, David Laight wrote:  
> >> On Tue, 11 Nov 2025 13:42:54 +0530
> >> Brahmajit Das <listout@listout.xyz> wrote:
> >>  
> > ...snip...  
> >> Please can we have no unnecessary min_t().
> >> You wouldn't write:
> >> 	x = (u32)a < (u32)b ? (u32)a : (u32)b;
> >>
> >>      David
> >>     
> >>>   	copy_len = trace_nr * elem_size;
> >>>   
> >>>   	ips = trace->ip + skip;  
> > Hi David,
> >
> > Sorry, I didn't quite get that. Would you prefer something like:
> > 	trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;  
> 
> min_t() is min() with a cast, which is unnecessary in this case as
> trace_nr and num_elem are already u32.

Correct

	David

> 
> > The pre-refactor code.
> >  
> 



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-12  8:40   ` Lecomte, Arnaud
  2025-11-12  8:58     ` Brahmajit Das
@ 2025-11-13 12:49     ` Brahmajit Das
  2025-11-13 13:26       ` Lecomte, Arnaud
  1 sibling, 1 reply; 15+ messages in thread
From: Brahmajit Das @ 2025-11-13 12:49 UTC (permalink / raw)
  To: Lecomte, Arnaud
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, daniel, eddyz87,
	haoluo, john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau,
	netdev, sdf, song, syzkaller-bugs, yonghong.song

On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> I am not sure this is the right solution, and I am concerned that by
> forcing this clamping we are hiding something else.
> If we have a look at the code below:
> 
> ```
> 	if (trace_in) {
> 		trace = trace_in;
> 		trace->nr = min_t(u32, trace->nr, max_depth);
> 	} else if (kernel && task) {
> 		trace = get_callchain_entry_for_task(task, max_depth);
> 	} else {
> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
> 					   crosstask, false, 0);
> 	}
> ```
> 
> trace should be (if I remember correctly) clamped there. If not, it
> might hide something else. I would like to look at the return value
> for each if case through gdb.

Hi Arnaud,
So I've been debugging this: the reproducer always takes the else branch,
so in this situation trace holds whatever get_perf_callchain() returns.

I mostly found it to be a value around 4.

In some cases the value would jump to something like 27 or 44, just after
the code block

	if (unlikely(!trace) || trace->nr < skip) {
		if (may_fault)
			rcu_read_unlock();
		goto err_fault;
	}

So I'm assuming there's some race condition going on somewhere.
I'm still debugging, but I'm open to ideas, and I could definitely be
wrong here, so please feel free to correct me.
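
One way to test that hypothesis (a debug-only sketch against
__bpf_get_stack(), not a proposed fix; field widths simplified):
snapshot trace->nr once, use the snapshot everywhere, and warn if the
live value moves before the copy:

	u32 nr_seen = trace ? (u32)READ_ONCE(trace->nr) : 0;

	if (unlikely(!trace) || nr_seen < skip) {
		if (may_fault)
			rcu_read_unlock();
		goto err_fault;
	}

	trace_nr = nr_seen - skip;

	/* ... and right before the memcpy(): */
	WARN_ONCE((u32)READ_ONCE(trace->nr) != nr_seen,
		  "trace->nr moved %u -> %u\n",
		  nr_seen, (u32)READ_ONCE(trace->nr));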

-- 
Regards,
listout


* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-13 12:49     ` Brahmajit Das
@ 2025-11-13 13:26       ` Lecomte, Arnaud
  2025-11-13 13:49         ` Brahmajit Das
  0 siblings, 1 reply; 15+ messages in thread
From: Lecomte, Arnaud @ 2025-11-13 13:26 UTC (permalink / raw)
  To: Brahmajit Das
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, daniel, eddyz87,
	haoluo, john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau,
	netdev, sdf, song, syzkaller-bugs, yonghong.song


On 13/11/2025 12:49, Brahmajit Das wrote:
> On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
>> I am not sure this is the right solution, and I am concerned that by
>> forcing this clamping we are hiding something else.
>> If we have a look at the code below:
>>
>> ```
>> 	if (trace_in) {
>> 		trace = trace_in;
>> 		trace->nr = min_t(u32, trace->nr, max_depth);
>> 	} else if (kernel && task) {
>> 		trace = get_callchain_entry_for_task(task, max_depth);
>> 	} else {
>> 		trace = get_perf_callchain(regs, kernel, user, max_depth,
>> 					   crosstask, false, 0);
>> 	}
>> ```
>>
>> trace should be (if I remember correctly) clamped there. If not, it
>> might hide something else. I would like to look at the return value
>> for each if case through gdb.
> Hi Arnaud,
> So I've been debugging this: the reproducer always takes the else branch,
> so in this situation trace holds whatever get_perf_callchain() returns.
>
> I mostly found it to be a value around 4.
>
> In some cases the value would jump to something like 27 or 44, just after
> the code block
>
> 	if (unlikely(!trace) || trace->nr < skip) {
> 		if (may_fault)
> 			rcu_read_unlock();
> 		goto err_fault;
> 	}
>
> So I'm assuming there's some race condition going on somewhere.
Which value? trace->nr?
> I'm still debugging, but I'm open to ideas, and I could definitely be
> wrong here, so please feel free to correct me.

I should be able to have a look tomorrow evening, as I am currently a
bit overloaded with work.

Thanks,
Arnaud



* Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack to fix OOB write
  2025-11-13 13:26       ` Lecomte, Arnaud
@ 2025-11-13 13:49         ` Brahmajit Das
  0 siblings, 0 replies; 15+ messages in thread
From: Brahmajit Das @ 2025-11-13 13:49 UTC (permalink / raw)
  To: Lecomte, Arnaud
  Cc: syzbot+d1b7fa1092def3628bd7, andrii, ast, bpf, daniel, eddyz87,
	haoluo, john.fastabend, jolsa, kpsingh, linux-kernel, martin.lau,
	netdev, sdf, song, syzkaller-bugs, yonghong.song

On 13.11.2025 13:26, Lecomte, Arnaud wrote:
> 
> On 13/11/2025 12:49, Brahmajit Das wrote:
> > On 12.11.2025 08:40, 'Lecomte, Arnaud' via syzkaller-bugs wrote:
> > > I am not sure this is the right solution, and I am concerned that by
> > > forcing this clamping we are hiding something else.
> > > If we have a look at the code below:
...snip...
> > > might hide something else. I would like to look at the return value
> > > for each if case through gdb.
> > Hi Arnaud,
> > So I've been debugging this: the reproducer always takes the else branch,
> > so in this situation trace holds whatever get_perf_callchain() returns.
> > 
> > I mostly found it to be a value around 4.
> > 
> > In some cases the value would jump to something like 27 or 44, just after
> > the code block
> > 
> > 	if (unlikely(!trace) || trace->nr < skip) {
> > 		if (may_fault)
> > 			rcu_read_unlock();
> > 		goto err_fault;
> > 	}
> > 
> > So I'm assuming there's some race condition going on somewhere.
> Which value? trace->nr?

Yep, trace->nr

> > I'm still debugging, but I'm open to ideas, and I could definitely be
> > wrong here, so please feel free to correct me.
> 
> I should be able to have a look tomorrow evening, as I am currently a
> bit overloaded with work.

Awesome, thank you. I'll try to dig around a bit more meanwhile.

-- 
Regards,
listout

