From: Puranjay Mohan <puranjay@kernel.org>
To: Kumar Kartikeya Dwivedi <memxor@gmail.com>,
Xu Kuohai <xukuohai@huaweicloud.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
John Fastabend <john.fastabend@gmail.com>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v4 0/3] bpf: Report arena faults to BPF streams
Date: Thu, 28 Aug 2025 12:13:34 +0000 [thread overview]
Message-ID: <mb61ph5xrmyoh.fsf@kernel.org> (raw)
In-Reply-To: <CAP01T77PGbpEEmGyCqKSy-+Zb18+dfWH=8ujEQFBDKEOca3Mjg@mail.gmail.com>
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
> On Wed, 27 Aug 2025 at 17:37, Puranjay Mohan <puranjay@kernel.org> wrote:
>>
>> Changes in v3->v4:
>> v3: https://lore.kernel.org/all/20250827150113.15763-1-puranjay@kernel.org/
>> - Fixed a build issue when CONFIG_BPF_JIT=y and # CONFIG_BPF_SYSCALL is not set
>>
>> Changes in v2->v3:
>> v2: https://lore.kernel.org/all/20250811111828.13836-1-puranjay@kernel.org/
>> - Improved the selftest to check the exact fault address
>> - Dropped BPF_NO_KFUNC_PROTOTYPES and bpf_arena_alloc/free_pages() usage
>> - Rebased on bpf-next/master
>>
>> Changes in v1->v2:
>> v1: https://lore.kernel.org/all/20250806085847.18633-1-puranjay@kernel.org/
>> - Changed variable and mask names for consistency (Yonghong)
>> - Added Acked-by: Yonghong Song <yonghong.song@linux.dev> on two patches
>>
>> This set adds support for reporting page faults inside an arena to the
>> BPF stderr stream. The reported address is the one that a user would
>> expect to see if they passed it to bpf_printk().
>>
>> Here is an example output from the stream and from bpf_printk():
>>
>> ERROR: Arena WRITE access at unmapped address 0xdeaddead0000
>> CPU: 9 UID: 0 PID: 502 Comm: test_progs
>> Call trace:
>> bpf_stream_stage_dump_stack+0xc0/0x150
>> bpf_prog_report_arena_violation+0x98/0xf0
>> ex_handler_bpf+0x5c/0x78
>> fixup_exception+0xf8/0x160
>> __do_kernel_fault+0x40/0x188
>> do_bad_area+0x70/0x88
>> do_translation_fault+0x54/0x98
>> do_mem_abort+0x4c/0xa8
>> el1_abort+0x44/0x70
>> el1h_64_sync_handler+0x50/0x108
>> el1h_64_sync+0x6c/0x70
>> bpf_prog_a64a9778d31b8e88_stream_arena_write_fault+0x84/0xc8
>> *(page) = 1; @ stream.c:100
>> bpf_prog_test_run_syscall+0x100/0x328
>> __sys_bpf+0x508/0xb98
>> __arm64_sys_bpf+0x2c/0x48
>> invoke_syscall+0x50/0x120
>> el0_svc_common.constprop.0+0x48/0xf8
>> do_el0_svc+0x28/0x40
>> el0_svc+0x48/0xf8
>> el0t_64_sync_handler+0xa0/0xe8
>> el0t_64_sync+0x198/0x1a0
>>
>> The same address is seen when using bpf_printk():
>>
>> 1389.078831: bpf_trace_printk: Read Address: 0xdeaddead0000
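
For reference, here is a minimal sketch of the kind of arena program that
produces the dump above. The map definition, section name, and the
hard-coded 0xdeaddead0000 constant are illustrative assumptions, not the
actual selftest source:

/* Illustrative sketch only: a syscall program that stores through an
 * unmapped arena pointer. When the store faults, the JIT's exception
 * handler fixes it up and the faulting address is reported to the
 * program's stderr stream instead of crashing the kernel.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_arena_common.h"	/* provides the __arena address-space tag */

struct {
	__uint(type, BPF_MAP_TYPE_ARENA);
	__uint(map_flags, BPF_F_MMAPABLE);	/* arena maps must be mmapable */
	__uint(max_entries, 1);			/* one arena page is enough */
} arena SEC(".maps");

SEC("syscall")
int stream_arena_write_fault(void *ctx)
{
	/* Never-mapped arena address (assumed value, for illustration). */
	char __arena *page = (char __arena *)0xdeaddead0000;

	*page = 1;
	return 0;
}

char _license[] SEC("license") = "GPL";
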
>>
>> To make this possible, some extra metadata has to be passed to the BPF
>> exception handler, so the BPF exception handling mechanism for both
>> x86-64 and arm64 has been improved in this set.
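
As a rough illustration of the metadata-packing idea (the field widths,
macro names, and layout below are assumptions for illustration, not the
code in this series), a single fixup word attached to an exception-table
entry can carry everything the handler needs to both resume the program
and report the fault:

/* Compilable toy: pack and unpack a destination register, the register
 * holding the faulting address, and the access type in one 32-bit fixup
 * word, the way a BPF exception-table scheme might. All names and bit
 * positions here are made up for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define FIXUP_OFFSET_MASK	0x0000ffffu	/* bytes to skip past the faulting insn */
#define FIXUP_DST_REG_SHIFT	16		/* register to zero on a faulting load */
#define FIXUP_DST_REG_MASK	0x000f0000u
#define FIXUP_ADDR_REG_SHIFT	20		/* register holding the faulting address */
#define FIXUP_ADDR_REG_MASK	0x00f00000u
#define FIXUP_IS_WRITE		0x01000000u	/* lets the report say READ vs WRITE */

static uint32_t pack_fixup(uint32_t off, uint32_t dst_reg,
			   uint32_t addr_reg, int is_write)
{
	return (off & FIXUP_OFFSET_MASK) |
	       (dst_reg << FIXUP_DST_REG_SHIFT) |
	       (addr_reg << FIXUP_ADDR_REG_SHIFT) |
	       (is_write ? FIXUP_IS_WRITE : 0);
}

int main(void)
{
	uint32_t fixup = pack_fixup(8, 3, 5, 1);

	/* Unpack the fields the same way a fixup handler would. */
	printf("skip=%u dst_reg=%u addr_reg=%u write=%d\n",
	       fixup & FIXUP_OFFSET_MASK,
	       (fixup & FIXUP_DST_REG_MASK) >> FIXUP_DST_REG_SHIFT,
	       (fixup & FIXUP_ADDR_REG_MASK) >> FIXUP_ADDR_REG_SHIFT,
	       !!(fixup & FIXUP_IS_WRITE));
	return 0;
}
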
>>
>> The streams selftest has been updated to also test this new feature.
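
For completeness, reading the report back from userspace looks roughly
like the sketch below. bpf_prog_stream_read() and the BPF_STDERR stream
id come from the earlier streams work; the exact signature and constant
names here should be treated as assumptions:

/* Sketch, not the actual selftest: dump whatever the kernel wrote to the
 * program's stderr stream after a test run. Helper name, stream-id
 * constant, and signature are assumptions.
 */
#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static void dump_stderr_stream(int prog_fd)
{
	char buf[4096];
	int ret;

	ret = bpf_prog_stream_read(prog_fd, BPF_STDERR, buf, sizeof(buf), NULL);
	if (ret > 0)
		/* Expect something like:
		 * ERROR: Arena WRITE access at unmapped address 0xdeaddead0000
		 */
		printf("%.*s", ret, buf);
}
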
>
> We also need arm64 experts to take a look before we land, since you'll
> respin anyway now.
> Xu, could you please provide acks on the patches?
>
> Thanks a lot.
Thanks for your review.
I will wait for Xu's feedback before respinning.
Thanks,
Puranjay
Thread overview: 22+ messages
2025-08-27 15:37 [PATCH bpf-next v4 0/3] bpf: Report arena faults to BPF streams Puranjay Mohan
2025-08-27 15:37 ` [PATCH bpf-next v4 1/3] bpf: arm64: simplify exception table handling Puranjay Mohan
2025-08-28 0:19 ` Kumar Kartikeya Dwivedi
2025-08-29 10:06 ` Xu Kuohai
2025-08-27 15:37 ` [PATCH bpf-next v4 2/3] bpf: Report arena faults to BPF stderr Puranjay Mohan
2025-08-28 0:22 ` Kumar Kartikeya Dwivedi
2025-08-28 0:27 ` Kumar Kartikeya Dwivedi
2025-08-28 12:14 ` Puranjay Mohan
2025-08-29 10:30 ` Xu Kuohai
2025-08-29 20:28 ` Alexei Starovoitov
2025-09-01 13:34 ` Puranjay Mohan
2025-09-01 16:39 ` Alexei Starovoitov
2025-09-01 19:22 ` Puranjay Mohan
2025-09-01 22:44 ` Kumar Kartikeya Dwivedi
2025-09-02 2:18 ` Alexei Starovoitov
2025-08-27 15:37 ` [PATCH bpf-next v4 3/3] selftests/bpf: Add tests for arena fault reporting Puranjay Mohan
2025-08-27 19:54 ` Yonghong Song
2025-08-27 23:49 ` Kumar Kartikeya Dwivedi
2025-08-28 12:25 ` Puranjay Mohan
2025-08-28 15:44 ` Yonghong Song
2025-08-28 0:23 ` [PATCH bpf-next v4 0/3] bpf: Report arena faults to BPF streams Kumar Kartikeya Dwivedi
2025-08-28 12:13 ` Puranjay Mohan [this message]