From: Yonghong Song <yonghong.song@linux.dev>
To: YiFei Zhu <zhuyifei@google.com>, bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Stanislav Fomichev <sdf@google.com>,
Martin KaFai Lau <martin.lau@linux.dev>,
Andrii Nakryiko <andrii@kernel.org>,
Hou Tao <houtao@huaweicloud.com>
Subject: Re: [PATCH v3 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill
Date: Thu, 27 Jul 2023 21:53:43 -0700 [thread overview]
Message-ID: <63b0b09e-6c9c-e8b8-e652-3025752ede4d@linux.dev> (raw)
In-Reply-To: <20230728043359.3324347-1-zhuyifei@google.com>
On 7/27/23 9:33 PM, YiFei Zhu wrote:
> In internal testing of test_maps, we sometimes observed failures like:
> test_maps: test_maps.c:173: void test_hashmap_percpu(unsigned int, void *):
> Assertion `bpf_map_update_elem(fd, &key, value, BPF_ANY) == 0' failed.
> where the errno is ENOMEM. After some troubleshooting and enabling
> the warnings, we saw:
> [ 91.304708] percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
> [ 91.304716] CPU: 51 PID: 24145 Comm: test_maps Kdump: loaded Tainted: G N 6.1.38-smp-DEV #7
> [ 91.304719] Hardware name: Google Astoria/astoria, BIOS 0.20230627.0-0 06/27/2023
> [ 91.304721] Call Trace:
> [ 91.304724] <TASK>
> [ 91.304730] [<ffffffffa7ef83b9>] dump_stack_lvl+0x59/0x88
> [ 91.304737] [<ffffffffa7ef83f8>] dump_stack+0x10/0x18
> [ 91.304738] [<ffffffffa75caa0c>] pcpu_alloc+0x6fc/0x870
> [ 91.304741] [<ffffffffa75ca302>] __alloc_percpu_gfp+0x12/0x20
> [ 91.304743] [<ffffffffa756785e>] alloc_bulk+0xde/0x1e0
> [ 91.304746] [<ffffffffa7566c02>] bpf_mem_alloc_init+0xd2/0x2f0
> [ 91.304747] [<ffffffffa7547c69>] htab_map_alloc+0x479/0x650
> [ 91.304750] [<ffffffffa751d6e0>] map_create+0x140/0x2e0
> [ 91.304752] [<ffffffffa751d413>] __sys_bpf+0x5a3/0x6c0
> [ 91.304753] [<ffffffffa751c3ec>] __x64_sys_bpf+0x1c/0x30
> [ 91.304754] [<ffffffffa7ef847a>] do_syscall_64+0x5a/0x80
> [ 91.304756] [<ffffffffa800009b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> This makes sense: in atomic context, the percpu allocator will not
> create new chunks; new chunks are only created in non-atomic
> contexts. So if all percpu chunks are full during prefill, the next
> unit_alloc fails immediately with -ENOMEM.
>
> The prefill phase does not actually run in atomic context, so it
> can allocate non-atomically with GFP_KERNEL instead of GFP_NOWAIT.
> This avoids the immediate -ENOMEM.
>
> unit_alloc still has to use GFP_NOWAIT, since bpf programs may run
> in atomic context. Even when a bpf program runs in non-atomic
> context, the RCU read lock is usually held for its duration, so
> GFP_NOWAIT is still needed there. The same is often true for
> BPF_MAP_UPDATE_ELEM syscalls.
>
> Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Thread overview: 4+ messages
2023-07-28 4:33 [PATCH v3 bpf-next] bpf/memalloc: Non-atomically allocate freelist during prefill YiFei Zhu
2023-07-28 4:53 ` Yonghong Song [this message]
2023-07-28 6:16 ` Hou Tao
2023-07-28 16:50 ` patchwork-bot+netdevbpf