BPF List
From: Yonghong Song <yonghong.song@linux.dev>
To: Daniel Xu <dxu@dxuuu.xyz>
Cc: bpf <bpf@vger.kernel.org>, Tejun Heo <tj@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	David Vernet <void@manifault.com>,
	lsf-pc@lists.linux-foundation.org
Subject: Re: [LSF/MM/BPF TOPIC] Segmented Stacks for BPF Programs
Date: Mon, 19 Feb 2024 10:56:14 -0800	[thread overview]
Message-ID: <26a1cd82-8fa0-4e99-9e8b-a6e136ac7e0f@linux.dev> (raw)
In-Reply-To: <g2eynf5qrku2y5g433syeftgp3l2yg2sqawmvcee37ygezkslx@tklh2vnevwhx>


On 2/15/24 9:03 PM, Daniel Xu wrote:
> Hi Yonghong,
>
> On Wed, Feb 14, 2024 at 11:53:13AM -0800, Yonghong Song wrote:
>> For each active kernel thread, the thread stack size is 2*PAGE_SIZE ([1]).
>> Each bpf program has a maximum stack size of 512 bytes to avoid
>> overflowing the thread stack. But nested bpf programs may pose
>> a challenge for avoiding stack overflow.
>>
>> For example, we currently already allow nested bpf
>> programs, especially in tracing, i.e.,
>>    Prog_A
>>      -> Call Helper_B
>>        -> Call Func_C
>>          -> fentry program is called due to Func_C.
>>            -> Call Helper_D and then Func_E
>>              -> fentry due to Func_E
>>                -> ...
>> If we have too many bpf programs in the chain and each bpf program
>> has close to 512 byte stack size, it could overflow the kernel thread
>> stack.
> Just curious - overflowing the thread stack would cause some kind of
> panic right? And also, segmented/split stacks for bpf just reduces

Yes. Immediately after the normal thread stack, there is a guard page.
If there is a load/store to that guard page, the kernel will panic.

> likelihood of stack overflow due to BPF prog stack requirements. In

This is the intention, as the bpf prog will use a separate stack
for all its local variables.

> theory, a deep call stack due to fentry probes could still occur, right?

Yes. Although I have not seen many use cases for this so far,
it is still possible.

>
> [...]
>
> Thanks,
> Daniel


Thread overview: 5+ messages
2024-02-14 19:53 [LSF/MM/BPF TOPIC] Segmented Stacks for BPF Programs Yonghong Song
2024-02-15  2:20 ` Alexei Starovoitov
2024-02-15  3:07   ` Yonghong Song
2024-02-16  5:03 ` Daniel Xu
2024-02-19 18:56   ` Yonghong Song [this message]
