bpf.vger.kernel.org archive mirror
* (no subject)
@ 2024-06-26  6:11 Totoro W
  2024-06-26  7:01 ` your mail Shung-Hsi Yu
  2024-06-26  7:09 ` Eduard Zingerman
  0 siblings, 2 replies; 21+ messages in thread
From: Totoro W @ 2024-06-26  6:11 UTC (permalink / raw)
  To: bpf

Hi folks,

This is my first time asking a question on this mailing list. I'm the
author of https://github.com/tw4452852/zbpf, a framework for writing
BPF programs with the Zig toolchain.
During development, since the BTF is generated entirely by the Zig
toolchain, some of its naming conventions make the kernel's BTF
verifier refuse to load the object.
Right now I have to patch libbpf to fix up the names before loading
into the kernel
(https://github.com/tw4452852/libbpf_zig/blob/main/0001-temporary-WA-for-invalid-BTF-info-generated-by-Zig.patch).
Even though this only works around the issue, I'm still curious about
the current name sanitization and would like to know some background
about it.
If possible, could we relax it so that more languages (like Zig) can
be used to write BPF programs? Thanks in advance.
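
For reference, my current fixup boils down to rewriting the offending
names in the BTF string section before load. A simplified sketch (not
the actual patch; sanitize_btf_name is a made-up helper, and the
accepted character set is my assumption based on the kernel's
C-identifier-style check):

#include <ctype.h>

/* Replace characters the kernel's BTF verifier rejects with '_'.
 * Zig type names can contain '(', ')', ',', spaces etc., which an
 * identifier-style check refuses. */
static void sanitize_btf_name(char *name)
{
	char *p;

	for (p = name; *p; p++) {
		if (!isalnum((unsigned char)*p) && *p != '_' && *p != '.')
			*p = '_';
	}
}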

Regards.

* (no subject)
@ 2025-04-24  0:40 Cong Wang
  2025-04-24  0:59 ` Jiayuan Chen
  0 siblings, 1 reply; 21+ messages in thread
From: Cong Wang @ 2025-04-24  0:40 UTC (permalink / raw)
  To: jiayuan.chen; +Cc: john.fastabend, jakub, netdev, bpf

Subject: test_sockmap failures on the latest bpf-next

Hi all,

The latest bpf-next fails the test_sockmap tests; I got the following
failures (including one kernel warning). It is 100% reproducible here.

I don't have time to look into them, but a quick glance at the changelog
shows quite a few changes from Jiayuan, so please take a look, Jiayuan.

Meanwhile, please let me know if you need more information from me.

Thanks!

--------------->

[root@localhost bpf]# ./test_sockmap 
# 1/ 6  sockmap::txmsg test passthrough:OK
# 2/ 6  sockmap::txmsg test redirect:OK
# 3/ 2  sockmap::txmsg test redirect wait send mem:OK
# 4/ 6  sockmap::txmsg test drop:OK
[  182.498017] perf: interrupt took too long (3406 > 3238), lowering kernel.perf_event_max_sample_rate to 58500
# 5/ 6  sockmap::txmsg test ingress redirect:OK
# 6/ 7  sockmap::txmsg test skb:OK
# 7/12  sockmap::txmsg test apply:OK
# 8/12  sockmap::txmsg test cork:OK
# 9/ 3  sockmap::txmsg test hanging corks:OK
#10/11  sockmap::txmsg test push_data:OK
#11/17  sockmap::txmsg test pull-data:OK
#12/ 9  sockmap::txmsg test pop-data:OK
#13/ 6  sockmap::txmsg test push/pop data:OK
#14/ 1  sockmap::txmsg test ingress parser:OK
#15/ 1  sockmap::txmsg test ingress parser2:OK
#16/ 6 sockhash::txmsg test passthrough:OK
#17/ 6 sockhash::txmsg test redirect:OK
#18/ 2 sockhash::txmsg test redirect wait send mem:OK
#19/ 6 sockhash::txmsg test drop:OK
#20/ 6 sockhash::txmsg test ingress redirect:OK
#21/ 7 sockhash::txmsg test skb:OK
#22/12 sockhash::txmsg test apply:OK
#23/12 sockhash::txmsg test cork:OK
#24/ 3 sockhash::txmsg test hanging corks:OK
#25/11 sockhash::txmsg test push_data:OK
#26/17 sockhash::txmsg test pull-data:OK
#27/ 9 sockhash::txmsg test pop-data:OK
#28/ 6 sockhash::txmsg test push/pop data:OK
#29/ 1 sockhash::txmsg test ingress parser:OK
#30/ 1 sockhash::txmsg test ingress parser2:OK
#31/ 6 sockhash:ktls:txmsg test passthrough:OK
#32/ 6 sockhash:ktls:txmsg test redirect:OK
#33/ 2 sockhash:ktls:txmsg test redirect wait send mem:OK
[  263.509707] ------------[ cut here ]------------
[  263.510439] WARNING: CPU: 1 PID: 40 at net/ipv4/af_inet.c:156 inet_sock_destruct+0x173/0x1d5
[  263.511450] CPU: 1 UID: 0 PID: 40 Comm: kworker/1:1 Tainted: G        W           6.15.0-rc3+ #238 PREEMPT(voluntary) 
[  263.512683] Tainted: [W]=WARN
[  263.513062] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
[  263.514763] Workqueue: events sk_psock_destroy
[  263.515332] RIP: 0010:inet_sock_destruct+0x173/0x1d5
[  263.515916] Code: e8 dc dc 3f ff 41 83 bc 24 c0 02 00 00 00 74 02 0f 0b 49 8d bc 24 ac 02 00 00 e8 c2 dc 3f ff 41 83 bc 24 ac 02 00 00 00 74 02 <0f> 0b e8 c7 95 3d 00 49 8d bc 24 b0 05 00 00 e8 c0 dd 3f ff 49 8b
[  263.518899] RSP: 0018:ffff8880085cfc18 EFLAGS: 00010202
[  263.519596] RAX: 1ffff11003dbfc00 RBX: ffff88801edfe3e8 RCX: ffffffff822f5af4
[  263.520502] RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffff88801edfe16c
[  263.522128] RBP: ffff88801edfe184 R08: ffffed1003dbfc31 R09: 0000000000000000
[  263.523008] R10: ffffffff822f5ab7 R11: ffff88801edfe187 R12: ffff88801edfdec0
[  263.523822] R13: ffff888020376ac0 R14: ffff888020376ac0 R15: ffff888020376a60
[  263.524682] FS:  0000000000000000(0000) GS:ffff8880b0e88000(0000) knlGS:0000000000000000
[  263.525999] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  263.526765] CR2: 0000556365155830 CR3: 000000001d6aa000 CR4: 0000000000350ef0
[  263.527700] Call Trace:
[  263.528037]  <TASK>
[  263.528339]  __sk_destruct+0x46/0x222
[  263.528856]  sk_psock_destroy+0x22f/0x242
[  263.529471]  process_one_work+0x504/0x8a8
[  263.530029]  ? process_one_work+0x39d/0x8a8
[  263.530587]  ? __pfx_process_one_work+0x10/0x10
[  263.531195]  ? worker_thread+0x44/0x2ae
[  263.531721]  ? __list_add_valid_or_report+0x83/0xea
[  263.532395]  ? srso_return_thunk+0x5/0x5f
[  263.532929]  ? __list_add+0x45/0x52
[  263.533482]  process_scheduled_works+0x73/0x82
[  263.534079]  worker_thread+0x1ce/0x2ae
[  263.534582]  ? _raw_spin_unlock_irqrestore+0x2e/0x44
[  263.535243]  ? __pfx_worker_thread+0x10/0x10
[  263.535822]  kthread+0x32a/0x33c
[  263.536278]  ? kthread+0x13c/0x33c
[  263.536724]  ? __pfx_kthread+0x10/0x10
[  263.537225]  ? srso_return_thunk+0x5/0x5f
[  263.537869]  ? find_held_lock+0x2b/0x75
[  263.538388]  ? __pfx_kthread+0x10/0x10
[  263.538866]  ? srso_return_thunk+0x5/0x5f
[  263.539523]  ? local_clock_noinstr+0x32/0x9c
[  263.540128]  ? srso_return_thunk+0x5/0x5f
[  263.540677]  ? srso_return_thunk+0x5/0x5f
[  263.541228]  ? __lock_release+0xd3/0x1ad
[  263.541890]  ? srso_return_thunk+0x5/0x5f
[  263.542442]  ? tracer_hardirqs_on+0x17/0x149
[  263.543047]  ? _raw_spin_unlock_irq+0x24/0x39
[  263.543589]  ? __pfx_kthread+0x10/0x10
[  263.544069]  ? __pfx_kthread+0x10/0x10
[  263.544543]  ret_from_fork+0x21/0x41
[  263.545000]  ? __pfx_kthread+0x10/0x10
[  263.545557]  ret_from_fork_asm+0x1a/0x30
[  263.546095]  </TASK>
[  263.546374] irq event stamp: 1094079
[  263.546798] hardirqs last  enabled at (1094089): [<ffffffff813be0f6>] __up_console_sem+0x47/0x4e
[  263.547762] hardirqs last disabled at (1094098): [<ffffffff813be0d6>] __up_console_sem+0x27/0x4e
[  263.548817] softirqs last  enabled at (1093692): [<ffffffff812f2906>] handle_softirqs+0x48c/0x4de
[  263.550127] softirqs last disabled at (1094117): [<ffffffff812f29b3>] __irq_exit_rcu+0x4b/0xc3
[  263.551104] ---[ end trace 0000000000000000 ]---
#34/ 6 sockhash:ktls:txmsg test drop:OK
#35/ 6 sockhash:ktls:txmsg test ingress redirect:OK
#36/ 7 sockhash:ktls:txmsg test skb:OK
#37/12 sockhash:ktls:txmsg test apply:OK
[  278.915147] perf: interrupt took too long (4331 > 4257), lowering kernel.perf_event_max_sample_rate to 46000
[  282.974989] test_sockmap (1077) used greatest stack depth: 25072 bytes left
#38/12 sockhash:ktls:txmsg test cork:OK
#39/ 3 sockhash:ktls:txmsg test hanging corks:OK
#40/11 sockhash:ktls:txmsg test push_data:OK
#41/17 sockhash:ktls:txmsg test pull-data:OK
recv failed(): Invalid argument
rx thread exited with err 1.
recv failed(): Invalid argument
rx thread exited with err 1.
recv failed(): Bad message
rx thread exited with err 1.
#42/ 9 sockhash:ktls:txmsg test pop-data:FAIL
recv failed(): Bad message
rx thread exited with err 1.
recv failed(): Message too long
rx thread exited with err 1.
#43/ 6 sockhash:ktls:txmsg test push/pop data:FAIL
#44/ 1 sockhash:ktls:txmsg test ingress parser:OK
#45/ 0 sockhash:ktls:txmsg test ingress parser2:OK
Pass: 43 Fail: 5



* Re: [PATCH bpf-next] bpf: Remove bpf_get_smp_processor_id_proto
@ 2025-04-22  1:53 Alexei Starovoitov
  2025-04-22  8:04 ` Feng Yang
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2025-04-22  1:53 UTC (permalink / raw)
  To: Feng Yang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Eduard, Song Liu, Yonghong Song, bpf, LKML,
	linux-trace-kernel, Network Development, Feng Yang

On Thu, Apr 17, 2025 at 8:41 PM Feng Yang <yangfeng59949@163.com> wrote:
>
> From: Feng Yang <yangfeng@kylinos.cn>
>
> All BPF programs either disable CPU preemption or CPU migration,
> so the bpf_get_smp_processor_id_proto can be safely removed,
> and the bpf_get_raw_smp_processor_id_proto in bpf_base_func_proto works perfectly.
>
> Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Signed-off-by: Feng Yang <yangfeng@kylinos.cn>
> ---
>  include/linux/bpf.h      |  1 -
>  kernel/bpf/core.c        |  1 -
>  kernel/bpf/helpers.c     | 12 ------------
>  kernel/trace/bpf_trace.c |  2 --
>  net/core/filter.c        |  6 ------
>  5 files changed, 22 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3f0cc89c0622..36e525141556 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3316,7 +3316,6 @@ extern const struct bpf_func_proto bpf_map_peek_elem_proto;
>  extern const struct bpf_func_proto bpf_map_lookup_percpu_elem_proto;
>
>  extern const struct bpf_func_proto bpf_get_prandom_u32_proto;
> -extern const struct bpf_func_proto bpf_get_smp_processor_id_proto;
>  extern const struct bpf_func_proto bpf_get_numa_node_id_proto;
>  extern const struct bpf_func_proto bpf_tail_call_proto;
>  extern const struct bpf_func_proto bpf_ktime_get_ns_proto;
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index ba6b6118cf50..1ad41a16b86e 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2943,7 +2943,6 @@ const struct bpf_func_proto bpf_spin_unlock_proto __weak;
>  const struct bpf_func_proto bpf_jiffies64_proto __weak;
>
>  const struct bpf_func_proto bpf_get_prandom_u32_proto __weak;
> -const struct bpf_func_proto bpf_get_smp_processor_id_proto __weak;
>  const struct bpf_func_proto bpf_get_numa_node_id_proto __weak;
>  const struct bpf_func_proto bpf_ktime_get_ns_proto __weak;
>  const struct bpf_func_proto bpf_ktime_get_boot_ns_proto __weak;
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index e3a2662f4e33..2d2bfb2911f8 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -149,18 +149,6 @@ const struct bpf_func_proto bpf_get_prandom_u32_proto = {
>         .ret_type       = RET_INTEGER,
>  };
>
> -BPF_CALL_0(bpf_get_smp_processor_id)
> -{
> -       return smp_processor_id();
> -}
> -
> -const struct bpf_func_proto bpf_get_smp_processor_id_proto = {
> -       .func           = bpf_get_smp_processor_id,
> -       .gpl_only       = false,
> -       .ret_type       = RET_INTEGER,
> -       .allow_fastcall = true,
> -};
> -

bpf_get_raw_smp_processor_id_proto doesn't have allow_fastcall = true,
so this breaks tests.

Instead of removing BPF_CALL_0(bpf_get_smp_processor_id),
we should probably remove BPF_CALL_0(bpf_get_raw_cpu_id)
and adjust the SKF_AD_OFF + SKF_AD_CPU case.
I don't recall why the raw_ version was used back in 2014.
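
Roughly, that would mean keeping BPF_CALL_0(bpf_get_smp_processor_id)
and doing something like this in net/core/filter.c (an untested
sketch from memory -- double-check the surrounding code):

-BPF_CALL_0(bpf_get_raw_cpu_id)
-{
-	return raw_smp_processor_id();
-}
-
 const struct bpf_func_proto bpf_get_raw_smp_processor_id_proto = {
-	.func		= bpf_get_raw_cpu_id,
+	.func		= bpf_get_smp_processor_id,
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 };

and in convert_bpf_extensions():

 	case SKF_AD_OFF + SKF_AD_CPU:
-		*insn = BPF_EMIT_CALL(bpf_get_raw_cpu_id);
+		*insn = BPF_EMIT_CALL(bpf_get_smp_processor_id);
 		break;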

pw-bot: cr

* (no subject)
@ 2025-04-18  7:46 Shung-Hsi Yu
  2025-04-18  7:49 ` Shung-Hsi Yu
  2025-04-23 17:30 ` Re: patchwork-bot+netdevbpf
  0 siblings, 2 replies; 21+ messages in thread
From: Shung-Hsi Yu @ 2025-04-18  7:46 UTC (permalink / raw)
  To: bpf
  Cc: Martin KaFai Lau, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Kumar Kartikeya Dwivedi, Dan Carpenter, Shung-Hsi Yu

From bda8bb8011d865cebf066350c8625e8be1625656 Mon Sep 17 00:00:00 2001
From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Date: Fri, 18 Apr 2025 15:22:00 +0800
Subject: [PATCH bpf-next 1/1] bpf: use proper type to calculate
 bpf_raw_tp_null_args.mask index

The calculation of the index used to access the mask field in 'struct
bpf_raw_tp_null_args' is done with the 'int' type, which can overflow
when the tracepoint being attached has more than 8 arguments.

While none of the tracepoints mentioned in raw_tp_null_args[] currently
has more than 8 arguments, tracepoints with more than 8 arguments have
existed (e.g. iocost_iocg_forgive_debt), so use the correct type for
the calculation and avoid a Smatch static checker warning.
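
To illustrate the overflow (a standalone hypothetical demo, not part
of the patch):

#include <stdio.h>

int main(void)
{
	int arg = 8;	/* a 9th tracepoint argument: shift count = 32 */

	/* 0x1 is a 32-bit int, so shifting it by 32 is undefined
	 * behavior; on x86 the shift count wraps and the intended
	 * bit 32 is lost. The ULL constant makes the shift 64-bit. */
	unsigned long long bad  = 0x1 << (arg * 4);
	unsigned long long good = 0x1ULL << (arg * 4);

	printf("bad=%#llx good=%#llx\n", bad, good);
	/* typically prints: bad=0x1 good=0x100000000 */
	return 0;
}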

Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/843a3b94-d53d-42db-93d4-be10a4090146@stanley.mountain/
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
 kernel/bpf/btf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 16ba36f34dfa..656ee11aff67 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6829,10 +6829,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 			/* Is this a func with potential NULL args? */
 			if (strcmp(tname, raw_tp_null_args[i].func))
 				continue;
-			if (raw_tp_null_args[i].mask & (0x1 << (arg * 4)))
+			if (raw_tp_null_args[i].mask & (0x1ULL << (arg * 4)))
 				info->reg_type |= PTR_MAYBE_NULL;
 			/* Is the current arg IS_ERR? */
-			if (raw_tp_null_args[i].mask & (0x2 << (arg * 4)))
+			if (raw_tp_null_args[i].mask & (0x2ULL << (arg * 4)))
 				ptr_err_raw_tp = true;
 			break;
 		}
-- 
2.49.0


* [PATCH bpf-next 1/2] cpuidle/rcu: Making arch_cpu_idle and rcu_idle_exit noinstr
@ 2022-05-15 20:36 Jiri Olsa
  2023-05-20  9:47 ` Ze Gao
  0 siblings, 1 reply; 21+ messages in thread
From: Jiri Olsa @ 2022-05-15 20:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Masami Hiramatsu, Paul E. McKenney
  Cc: netdev, bpf, lkml, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Steven Rostedt

Make arch_cpu_idle and rcu_idle_exit noinstr. Both functions run in
the RCU 'not watching' context, and if a tracer that uses RCU (e.g.
the kprobe multi interface) is attached to them, it will hit an RCU
warning like:

  [    3.017540] WARNING: suspicious RCU usage
  ...
  [    3.018363]  kprobe_multi_link_handler+0x68/0x1c0
  [    3.018364]  ? kprobe_multi_link_handler+0x3e/0x1c0
  [    3.018366]  ? arch_cpu_idle_dead+0x10/0x10
  [    3.018367]  ? arch_cpu_idle_dead+0x10/0x10
  [    3.018371]  fprobe_handler.part.0+0xab/0x150
  [    3.018374]  0xffffffffa00080c8
  [    3.018393]  ? arch_cpu_idle+0x5/0x10
  [    3.018398]  arch_cpu_idle+0x5/0x10
  [    3.018399]  default_idle_call+0x59/0x90
  [    3.018401]  do_idle+0x1c3/0x1d0

The call path is the following:

default_idle_call
  rcu_idle_enter
  arch_cpu_idle
  rcu_idle_exit

arch_cpu_idle and rcu_idle_exit are the only functions in the above
path that are traceable and cause this problem on my setup.
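
For context, noinstr roughly expands to the following (simplified;
the exact attribute set varies across kernel versions, see
include/linux/compiler_types.h):

  #define noinstr noinline notrace __section(".noinstr.text") \
	  __no_kcsan __no_sanitize_address

The relevant part here is notrace: the function loses its ftrace
hook, so fprobe-based tracers such as the kprobe multi interface can
no longer attach to it.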

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/kernel/process.c | 2 +-
 kernel/rcu/tree.c         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b370767f5b19..1345cb0124a6 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -720,7 +720,7 @@ void arch_cpu_idle_dead(void)
 /*
  * Called from the generic idle code.
  */
-void arch_cpu_idle(void)
+void noinstr arch_cpu_idle(void)
 {
 	x86_idle();
 }
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index a4b8189455d5..20d529722f51 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -896,7 +896,7 @@ static void noinstr rcu_eqs_exit(bool user)
  * If you add or remove a call to rcu_idle_exit(), be sure to test with
  * CONFIG_RCU_EQS_DEBUG=y.
  */
-void rcu_idle_exit(void)
+void noinstr rcu_idle_exit(void)
 {
 	unsigned long flags;
 
-- 
2.35.3


* Re:
@ 2022-03-04  8:47 Harald Hauge
  0 siblings, 0 replies; 21+ messages in thread
From: Harald Hauge @ 2022-03-04  8:47 UTC (permalink / raw)
  To: bpf

Hello,
I'm Harald Hauge, an Investment Manager from Norway.
I will need your assistance in executing this business from my country
to yours.

This is a short-term investment with good returns. Kindly reply to
confirm the validity of your email so I can give you comprehensive
details about the project.

Best Regards,
Harald Hauge
Business Consultant


end of thread, other threads:[~2025-04-24  9:19 UTC | newest]

Thread overview: 21+ messages
2024-06-26  6:11 Totoro W
2024-06-26  7:01 ` your mail Shung-Hsi Yu
2024-06-26  7:09 ` Eduard Zingerman
  -- strict thread matches above, loose matches on Subject: below --
2025-04-24  0:40 Cong Wang
2025-04-24  0:59 ` Jiayuan Chen
2025-04-24  9:19   ` Re: Jiayuan Chen
2025-04-22  1:53 [PATCH bpf-next] bpf: Remove bpf_get_smp_processor_id_proto Alexei Starovoitov
2025-04-22  8:04 ` Feng Yang
2025-04-22 14:37   ` Alexei Starovoitov
2025-04-18  7:46 Shung-Hsi Yu
2025-04-18  7:49 ` Shung-Hsi Yu
2025-04-23 17:30 ` Re: patchwork-bot+netdevbpf
2022-05-15 20:36 [PATCH bpf-next 1/2] cpuidle/rcu: Making arch_cpu_idle and rcu_idle_exit noinstr Jiri Olsa
2023-05-20  9:47 ` Ze Gao
2023-05-21  3:58   ` Yonghong Song
2023-05-21 15:10     ` Re: Ze Gao
2023-05-21 20:26       ` Re: Jiri Olsa
2023-05-22  1:36         ` Re: Masami Hiramatsu
2023-05-22  2:07         ` Re: Ze Gao
2023-05-23  4:38           ` Re: Yonghong Song
2023-05-23  5:30           ` Re: Masami Hiramatsu
2023-05-23  6:59             ` Re: Paul E. McKenney
2023-05-25  0:13               ` Re: Masami Hiramatsu
2023-05-21  8:08   ` Re: Jiri Olsa
2023-05-21 10:09     ` Re: Masami Hiramatsu
2023-05-21 14:19       ` Re: Ze Gao
2022-03-04  8:47 Re: Harald Hauge
