From: Yonghong Song <yonghong.song@linux.dev>
To: "Björn Töpel" <bjorn@kernel.org>, bpf@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Subject: Re: WARNING: CPU: 3 PID: 261 at kernel/bpf/memalloc.c:342
Date: Fri, 25 Aug 2023 08:28:17 -0700
Message-ID: <2f4f0dfc-ec06-8ac8-a56a-395cc2373def@linux.dev>
In-Reply-To: <87jztjmmy4.fsf@all.your.base.are.belong.to.us>



On 8/25/23 3:32 AM, Björn Töpel wrote:
> I'm chasing a workqueue hang on RISC-V/qemu (TCG), using the bpf
> selftests on bpf-next 9e3b47abeb8f.
> 
> I'm able to reproduce the hang by multiple runs of:
>   | ./test_progs -a link_api -a linked_list
> I'm currently investigating that.
> 
> But! Sometimes (every blue moon) I get a warn_on_once hit:
>   | ------------[ cut here ]------------
>   | WARNING: CPU: 3 PID: 261 at kernel/bpf/memalloc.c:342 bpf_mem_refill+0x1fc/0x206
>   | Modules linked in: bpf_testmod(OE)
>   | CPU: 3 PID: 261 Comm: test_progs-cpuv Tainted: G           OE    N 6.5.0-rc5-01743-gdcb152bb8328 #2
>   | Hardware name: riscv-virtio,qemu (DT)
>   | epc : bpf_mem_refill+0x1fc/0x206
>   |  ra : irq_work_single+0x68/0x70
>   | epc : ffffffff801b1bc4 ra : ffffffff8015fe84 sp : ff2000000001be20
>   |  gp : ffffffff82d26138 tp : ff6000008477a800 t0 : 0000000000046600
>   |  t1 : ffffffff812b6ddc t2 : 0000000000000000 s0 : ff2000000001be70
>   |  s1 : ff5ffffffffe8998 a0 : ff5ffffffffe8998 a1 : ff600003fef4b000
>   |  a2 : 000000000000003f a3 : ffffffff80008250 a4 : 0000000000000060
>   |  a5 : 0000000000000080 a6 : 0000000000000000 a7 : 0000000000735049
>   |  s2 : ff5ffffffffe8998 s3 : 0000000000000022 s4 : 0000000000001000
>   |  s5 : 0000000000000007 s6 : ff5ffffffffe8570 s7 : ffffffff82d6bd30
>   |  s8 : 000000000000003f s9 : ffffffff82d2c5e8 s10: 000000000000ffff
>   |  s11: ffffffff82d2c5d8 t3 : ffffffff81ea8f28 t4 : 0000000000000000
>   |  t5 : ff6000008fd28278 t6 : 0000000000040000
>   | status: 0000000200000100 badaddr: 0000000000000000 cause: 0000000000000003
>   | [<ffffffff801b1bc4>] bpf_mem_refill+0x1fc/0x206
>   | [<ffffffff8015fe84>] irq_work_single+0x68/0x70
>   | [<ffffffff8015feb4>] irq_work_run_list+0x28/0x36
>   | [<ffffffff8015fefa>] irq_work_run+0x38/0x66
>   | [<ffffffff8000828a>] handle_IPI+0x3a/0xb4
>   | [<ffffffff800a5c3a>] handle_percpu_devid_irq+0xa4/0x1f8
>   | [<ffffffff8009fafa>] generic_handle_domain_irq+0x28/0x36
>   | [<ffffffff800ae570>] ipi_mux_process+0xac/0xfa
>   | [<ffffffff8000a8ea>] sbi_ipi_handle+0x2e/0x88
>   | [<ffffffff8009fafa>] generic_handle_domain_irq+0x28/0x36
>   | [<ffffffff807ee70e>] riscv_intc_irq+0x36/0x4e
>   | [<ffffffff812b5d3a>] handle_riscv_irq+0x54/0x86
>   | [<ffffffff812b6904>] do_irq+0x66/0x98
>   | ---[ end trace 0000000000000000 ]---
> 
> Code:
>   | static void free_bulk(struct bpf_mem_cache *c)
>   | {
>   | 	struct bpf_mem_cache *tgt = c->tgt;
>   | 	struct llist_node *llnode, *t;
>   | 	unsigned long flags;
>   | 	int cnt;
>   |
>   | 	WARN_ON_ONCE(tgt->unit_size != c->unit_size);
>   | ...
> 
> I'm not well versed in the memory allocator; before I dive into it --
> has anyone else hit this? Any ideas on why the warn_on_once is hit?

Maybe take a look at commit
   822fb26bdb55 ("bpf: Add a hint to allocated objects")

In that patch, we have

+       /*
+        * Remember bpf_mem_cache that allocated this object.
+        * The hint is not accurate.
+        */
+       c->tgt = *(struct bpf_mem_cache **)llnode;

I suspect the warning is related to the above hint.
I tried the above ./test_progs command line (running multiple
instances at the same time) and couldn't trigger the issue.
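
For context, a rough userspace sketch of what that hint does (this is
only an illustration of the idea, not the kernel code; the names
demo_cache/demo_alloc/demo_free are made up): the allocator stashes a
pointer to the owning cache in a small header in front of every object,
the free path reads that pointer back to find the cache the object came
from, and a free_bulk()-style check warns when that cache's unit_size
disagrees with the local one:

  #include <stdio.h>
  #include <stdlib.h>

  struct demo_cache {
  	size_t unit_size;
  };

  #define HINT_SZ sizeof(struct demo_cache *)

  /* Stash the owning cache in a header the caller never sees,
   * roughly what the reserved llist_node space is used for.
   */
  static void *demo_alloc(struct demo_cache *c)
  {
  	char *mem = malloc(HINT_SZ + c->unit_size);

  	if (!mem)
  		return NULL;
  	*(struct demo_cache **)mem = c;
  	return mem + HINT_SZ;
  }

  /* Read the hint back to find the cache that allocated obj. */
  static void demo_free(struct demo_cache *c, void *obj)
  {
  	char *mem = (char *)obj - HINT_SZ;
  	struct demo_cache *tgt = *(struct demo_cache **)mem;

  	/* Same role as the WARN_ON_ONCE in free_bulk(): the hinted
  	 * cache is expected to be a bucket of the same unit_size,
  	 * just possibly belonging to another CPU.
  	 */
  	if (tgt->unit_size != c->unit_size)
  		fprintf(stderr, "unit_size mismatch: %zu vs %zu\n",
  			tgt->unit_size, c->unit_size);
  	free(mem);
  }

  int main(void)
  {
  	struct demo_cache a = { .unit_size = 64 };
  	struct demo_cache b = { .unit_size = 64 };
  	void *obj = demo_alloc(&a);

  	demo_free(&b, obj);	/* ok: different cache, same unit_size */
  	return 0;
  }

So if that hint word is ever stale or corrupted, tgt can end up
pointing at a cache with a different unit_size, which is what the
WARN_ON_ONCE in free_bulk() checks for.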

> 
> 
> Björn
> 


