From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
To: Viktor Malik <vmalik@redhat.com>
Cc: linux-trace-kernel@vger.kernel.org,
Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Matt Wu <wuqiang.matt@bytedance.com>,
bpf@vger.kernel.org, Andrii Nakryiko <andrii@kernel.org>
Subject: Re: [PATCH v2] objpool: fix choosing allocation for percpu slots
Date: Tue, 22 Oct 2024 14:17:48 +0900
Message-ID: <20241022141748.521cb2d6a4a86428c9bfc99e@kernel.org>
In-Reply-To: <20240826060718.267261-1-vmalik@redhat.com>
On Mon, 26 Aug 2024 08:07:18 +0200
Viktor Malik <vmalik@redhat.com> wrote:
> objpool intends to use vmalloc for default (non-atomic) allocations of
> percpu slots and objects. However, the condition checking if GFP flags
> are equal to GFP_ATOMIC is wrong because GFP_ATOMIC is a combination of bits
You meant "whether GFP flags sets any bit of GFP_ATOMIC is wrong"?
> (__GFP_HIGH|__GFP_KSWAPD_RECLAIM) and so `pool->gfp & GFP_ATOMIC` will
> be true if either bit is set. Since GFP_ATOMIC and GFP_KERNEL share the
> ___GFP_KSWAPD_RECLAIM bit, kmalloc will be used in cases when GFP_KERNEL
> is specified, i.e. in all current usages of objpool.
>
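To illustrate the bit arithmetic, here is a minimal userspace sketch.
The flag values below are illustrative stand-ins (hence the X prefix),
not the exact constants from include/linux/gfp_types.h, but the overlap
between GFP_KERNEL and GFP_ATOMIC is the same:

    #include <stdio.h>

    /* Illustrative bit values only, not the real kernel constants. */
    #define X__GFP_HIGH            0x01u
    #define X__GFP_IO              0x02u
    #define X__GFP_FS              0x04u
    #define X__GFP_DIRECT_RECLAIM  0x08u
    #define X__GFP_KSWAPD_RECLAIM  0x10u

    #define XGFP_ATOMIC (X__GFP_HIGH | X__GFP_KSWAPD_RECLAIM)
    #define XGFP_KERNEL (X__GFP_DIRECT_RECLAIM | X__GFP_KSWAPD_RECLAIM | \
                         X__GFP_IO | X__GFP_FS)

    int main(void)
    {
        /* Old check: nonzero because the kswapd-reclaim bit is shared,
         * so GFP_KERNEL wrongly takes the kmalloc path. */
        printf("KERNEL & ATOMIC = %#x\n", XGFP_KERNEL & XGFP_ATOMIC);
        /* Fixed check: true only when *all* GFP_ATOMIC bits are set. */
        printf("KERNEL: %d\n", (XGFP_KERNEL & XGFP_ATOMIC) == XGFP_ATOMIC);
        printf("ATOMIC: %d\n", (XGFP_ATOMIC & XGFP_ATOMIC) == XGFP_ATOMIC);
        return 0;
    }
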
> This may lead to unexpected OOM errors since kmalloc cannot allocate
> large amounts of memory.
>
> For instance, objpool is used by fprobe rethook which in turn is used by
> BPF kretprobe.multi and kprobe.session probe types. Trying to attach
> these to all kernel functions with libbpf using
>
> SEC("kprobe.session/*")
> int kprobe(struct pt_regs *ctx)
> {
>     [...]
> }
>
> fails on objpool slot allocation with ENOMEM.
>
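For reference, a minimal loader sketch that reaches this path (the
skeleton name "kprobe_session" is hypothetical; any program carrying the
SEC("kprobe.session/*") section above, auto-attached through its libbpf
skeleton, behaves the same):

    #include <stdio.h>
    /* Hypothetical skeleton generated by `bpftool gen skeleton`. */
    #include "kprobe_session.skel.h"

    int main(void)
    {
        struct kprobe_session *skel = kprobe_session__open_and_load();

        if (!skel) {
            fprintf(stderr, "open/load failed\n");
            return 1;
        }
        /*
         * Auto-attach expands the "*" wildcard over all attachable
         * kernel functions; rethook then sizes its objpool for all of
         * them, which is where the kmalloc-vs-vmalloc choice matters.
         */
        if (kprobe_session__attach(skel)) {
            fprintf(stderr, "attach failed (objpool ENOMEM?)\n");
            kprobe_session__destroy(skel);
            return 1;
        }
        kprobe_session__destroy(skel);
        return 0;
    }
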
> Fix the condition to truly use vmalloc by default.
>
Anyway, this looks good to me.
Thank you,
> Fixes: b4edb8d2d464 ("lib: objpool added: ring-array based lockless MPMC")
> Signed-off-by: Viktor Malik <vmalik@redhat.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> Reviewed-by: Matt Wu <wuqiang.matt@bytedance.com>
> ---
> lib/objpool.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/objpool.c b/lib/objpool.c
> index 234f9d0bd081..fd108fe0d095 100644
> --- a/lib/objpool.c
> +++ b/lib/objpool.c
> @@ -76,7 +76,7 @@ objpool_init_percpu_slots(struct objpool_head *pool, int nr_objs,
> * mimimal size of vmalloc is one page since vmalloc would
> * always align the requested size to page size
> */
> - if (pool->gfp & GFP_ATOMIC)
> + if ((pool->gfp & GFP_ATOMIC) == GFP_ATOMIC)
> slot = kmalloc_node(size, pool->gfp, cpu_to_node(i));
> else
> slot = __vmalloc_node(size, sizeof(void *), pool->gfp,
> --
> 2.46.0
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>