BPF List
* status of BPF in conjunction with PREEMPT_RT for the 6.6 kernel?
@ 2025-01-09 22:21 Chris Friesen
  2025-01-10  2:06 ` Hou Tao
  0 siblings, 1 reply; 3+ messages in thread
From: Chris Friesen @ 2025-01-09 22:21 UTC (permalink / raw)
  To: bpf

Hi,

Back in 2019 there were some concerns raised 
(https://lwn.net/ml/bpf/20191017090500.ienqyium2phkxpdo@linutronix.de/#t) 
around using BPF in conjunction with PREEMPT_RT.

In the context of the 6.6 kernel and the corresponding PREEMPT_RT 
patchset, are those concerns still valid or have they been sorted out?

Please CC me on replies, I'm not subscribed to the list.

Thanks,
Chris Friesen

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: status of BPF in conjunction with PREEMPT_RT for the 6.6 kernel?
  2025-01-09 22:21 status of BPF in conjunction with PREEMPT_RT for the 6.6 kernel? Chris Friesen
@ 2025-01-10  2:06 ` Hou Tao
       [not found]   ` <94a4475f-51a8-4113-b16f-c2239eb01537@windriver.com>
  0 siblings, 1 reply; 3+ messages in thread
From: Hou Tao @ 2025-01-10  2:06 UTC (permalink / raw)
  To: Chris Friesen, bpf, Alexei Starovoitov, Sebastian Andrzej Siewior

Hi,

On 1/10/2025 6:21 AM, Chris Friesen wrote:
> Hi,
>
> Back in 2019 there were some concerns raised
> (https://lwn.net/ml/bpf/20191017090500.ienqyium2phkxpdo@linutronix.de/#t)
> around using BPF in conjunction with PREEMPT_RT.
>
> In the context of the 6.6 kernel and the corresponding PREEMPT_RT
> patchset, are those concerns still valid or have they been sorted out?
>
> Please CC me on replies, I'm not subscribed to the list.

Do you have any use case for BPF + PREEMPT_RT?  I am not an RT expert,
but in my understanding, BPF + PREEMPT_RT in the vanilla kernel
basically works. The memory allocation concern is partially resolved,
and there is still ongoing effort to resolve it fully [1]. The
up_read_non_owner problem has been avoided explicitly, and the
non-preemptible context for bpf progs has also been fixed. Although
running test_maps and test_progs under PREEMPT_RT reports some
problems, I think those problems can be fixed. As for v6.6, I think it
should be OK for the BPF + PREEMPT_RT case.

[1]:
https://lore.kernel.org/bpf/20241210023936.46871-1-alexei.starovoitov@gmail.com/
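
For a quick sanity check before trying BPF tooling on such a kernel, you
can verify that the running kernel was actually built with PREEMPT_RT; a
sketch, assuming the usual distro config locations:

```shell
# Look for full RT preemption in the running kernel's config.
# /proc/config.gz is only present with CONFIG_IKCONFIG_PROC=y;
# otherwise most distros ship the config under /boot.
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep '^CONFIG_PREEMPT_RT='
else
    grep '^CONFIG_PREEMPT_RT=' "/boot/config-$(uname -r)"
fi
```

On RT kernels, `uname -v` also typically contains the string "PREEMPT_RT".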
> Thanks,
> Chris Friesen
>
>
> .



* Re: status of BPF in conjunction with PREEMPT_RT for the 6.6 kernel?
       [not found]   ` <94a4475f-51a8-4113-b16f-c2239eb01537@windriver.com>
@ 2025-01-11  4:07     ` Hou Tao
  0 siblings, 0 replies; 3+ messages in thread
From: Hou Tao @ 2025-01-11  4:07 UTC (permalink / raw)
  To: Chris Friesen, bpf, Alexei Starovoitov, Sebastian Andrzej Siewior

Hi,

On 1/10/2025 11:05 PM, Chris Friesen wrote:
>
> Thanks for the info.
>
> Our system is mostly used for DPDK applications, which are basically
> latency-sensitive busy-looping CPU hogs that rarely make syscalls, so
> they want to run on the RT kernel to minimize interruptions. There
> are various BPF-based tools and optimizations that I have been hearing
> about, and I was just wondering if they could now be used on RT
> kernels and if so whether there are any potential problems.
>

I see.
>
>
> I did have one additional question... some time back (on the 5.10
> kernel with RT patches) we noted that when net.core.bpf_jit_enable was
> enabled, merely restarting some systemd services would be enough to
> cause function call inter-processor interrupts on all CPUs, even
> isolated ones (thus interrupting the RT tasks). Is that still the
> case?
>

I think the IPI may be due to a kernel TLB flush. When the bpf JIT is
enabled, the memory allocated for a bpf program is marked executable,
and when the program is freed, the exec permission needs to be cleared.
Clearing the permission bit involves updating the kernel page table, so
a TLB flush is needed. I think the TLB flush will still be there, but
the introduction of the bpf pack allocator in v5.18 may alleviate it,
since multiple programs share the same executable pages and fewer
permission changes (and therefore flushes) are needed.
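
To check whether the JIT is active (and to experiment with turning it
off while chasing the IPIs), the sysctl can be read directly; a sketch,
assuming a standard procfs layout:

```shell
# 0 = interpreter only, 1 = JIT enabled, 2 = JIT enabled with debug output.
cat /proc/sys/net/core/bpf_jit_enable
# As root, disabling the JIT avoids the executable-memory setup/teardown
# (kernels built with CONFIG_BPF_JIT_ALWAYS_ON reject this):
# echo 0 > /proc/sys/net/core/bpf_jit_enable
```

Note that the interpreter path trades the permission-change/TLB cost for
slower BPF program execution.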
>
> Thanks,
>
> Chris
>
>
> On 1/9/2025 8:06 PM, Hou Tao wrote:
>> Hi,
>>
>> On 1/10/2025 6:21 AM, Chris Friesen wrote:
>>> Hi,
>>>
>>> Back in 2019 there were some concerns raised
>>> (https://lwn.net/ml/bpf/20191017090500.ienqyium2phkxpdo@linutronix.de/#t)
>>> around using BPF in conjunction with PREEMPT_RT.
>>>
>>> In the context of the 6.6 kernel and the corresponding PREEMPT_RT
>>> patchset, are those concerns still valid or have they been sorted out?
>>>
>>> Please CC me on replies, I'm not subscribed to the list.
>> Do you have any use case for BPF + PREEMPT_RT?  I am not an RT expert,
>> but in my understanding, BPF + PREEMPT_RT in the vanilla kernel
>> basically works. The memory allocation concern is partially resolved,
>> and there is still ongoing effort to resolve it fully [1]. The
>> up_read_non_owner problem has been avoided explicitly, and the
>> non-preemptible context for bpf progs has also been fixed. Although
>> running test_maps and test_progs under PREEMPT_RT reports some
>> problems, I think those problems can be fixed. As for v6.6, I think it
>> should be OK for the BPF + PREEMPT_RT case.
>>
>> [1]:
>> https://lore.kernel.org/bpf/20241210023936.46871-1-alexei.starovoitov@gmail.com/
>>> Thanks,
>>> Chris Friesen
>>>
>>>
>>> .
>
>



end of thread [~2025-01-11  4:07 UTC | newest]

Thread overview: 3+ messages
2025-01-09 22:21 status of BPF in conjunction with PREEMPT_RT for the 6.6 kernel? Chris Friesen
2025-01-10  2:06 ` Hou Tao
     [not found]   ` <94a4475f-51a8-4113-b16f-c2239eb01537@windriver.com>
2025-01-11  4:07     ` Hou Tao
