* [RFC] adding KASAN support to JIT compiler(s)
@ 2026-02-06 11:31 Alexis Lothoré
2026-02-07 3:02 ` Alexei Starovoitov
0 siblings, 1 reply; 8+ messages in thread
From: Alexis Lothoré @ 2026-02-06 11:31 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
Hi everyone,
I am starting to work on adding KASAN support for loaded programs in the
kernel (in the context of the sponsorship granted to the eBPF Foundation
by the Alpha-Omega project, see [0]). I have done a bit of
research/experiments, but before actually writing and sending things
upstream, I'm sharing here what I understand of the overall requirements,
and how I am currently considering implementing it. Please feel free to
correct or enrich any part below!
# Brief
Once a program has passed the verifier (and has been JITed into native
code), we have strong guarantees that its _behavior_ will not trigger
bugs like infinite loops, invalid access on input context, NULL pointer
dereference, and so on. However, eBPF programs can still generate
invalid memory accesses in some scenarios:
- a program's validated behavior relies on the hypothesis
that the tools (helpers, kfuncs) and data (eg kernel data) provided to
eBPF programs are sane: eg if a pointer returned by the kernel to a
program is somehow bogus (used after free, triggering out of bounds
accesses, etc), a bpf program that has been validated by the verifier
will happily use this pointer.
- there could be a bug in the verifier and/or the JIT compiler, letting
invalid accesses slip through
To help detect those issues, Address Sanitizing can be implemented for
loaded programs.
Similarly to the KASAN tooling implemented for kernel code ([1]), the
goal will be to instrument any load/store performed by loaded eBPF
programs. This means detecting those loads/stores in the eBPF bytecode
(while it is being processed by the JIT compiler) and inserting calls to
the corresponding __asan_loadX/__asan_storeX before those accesses,
similarly to what compilers do for KASAN. The major difference here is
that this instrumentation will not be inserted at program build time: it
will be inserted dynamically by the JIT compiler, while it is turning
the eBPF bytecode into native machine code (and so, this work will focus
on jit-enabled systems). I am considering this general process:
# Basic instrumentation
- start in do_jit(), in arch/<$ARCH>/net/bpf_jit_comp.c
- note from Alexei: let's focus on x86 instead of all architectures.
- in the main loop going over all instructions, for all load/store
instructions
- note from Alexei: let's focus on LDX/STX for now instead of _any_
load/store instruction to keep JIT complexity reasonable
- identify the size (b, h, w, dw) and type (read/write) of the access.
Depending on those two, define the __asan_XXX function to call (those
functions are defined in mm/kasan/generic.c, and are basically
wrappers around check_region_inline)
- I did some tests, emitting explicit calls to the __asan_load/store
checkers implemented in the kernel directly into the jitted code,
and it seems to be fine to call those from bpf execution context
- similarly to what ASAN is doing ([2]), possibly implement a
fast-path/slow-path mechanism:
- fast path: emit instructions to first check the corresponding
shadow byte; if it's 0, the access is legitimate, jump back to the
actual memory access
- slow path: if the shadow byte is not 0, emit a call to the relevant
__asan_load/storeXXX to validate or report the access.
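The check described above can be sketched as plain C (a userspace model of
the logic the JIT would emit, not the emitted x86; the shadow[] array, the
report() hook and the granule handling are illustrative stand-ins for the
real machinery in mm/kasan/generic.c):

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace model of the generic-KASAN check the JIT could emit before a
 * load/store that stays within one 8-byte granule. */
#define SHADOW_SCALE_SHIFT 3	/* 1 shadow byte covers 8 bytes of memory */

static int8_t shadow[64];	/* fake shadow region, 0 = fully addressable */
static int reported;

static void report(uintptr_t addr)	/* slow-path stand-in */
{
	(void)addr;
	reported = 1;
}

/* Check an access of `size` bytes (1..8) not crossing a granule boundary. */
static void check_access(uintptr_t addr, size_t size)
{
	int8_t s = shadow[addr >> SHADOW_SCALE_SHIFT];

	if (s == 0)
		return;		/* fast path: granule fully addressable */
	/* A small positive value means only the first s bytes are valid. */
	if (s < 0 || (int8_t)((addr & 7) + size) > s)
		report(addr);	/* slow path: poisoned or out of bounds */
}
```

The fast path is the single compare against 0; only partially addressable or
poisoned granules fall through to the (here trivialized) reporting call.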
# Memory tracking
For this instrumentation to behave correctly, the monitored memory must
be tracked accordingly. The method used by KASAN is to allocate some
"shadow memory", which allows monitoring 8 bytes of memory with 1 byte
of shadow memory. Legit memory must have its corresponding shadow area
"unpoisoned", and must be poisoned/quarantined again once accessing it is
no longer legitimate. There are different memory types, with some of them
already covered/monitored:
- data passed to programs: KASAN on kernel side is already taking care
of {un}poisoning this memory, so we can check directly the
corresponding shadow memory.
- bpf global variables: global variables in bpf programs are turned into
maps. Those maps, when created by the kernel, are then already
monitored (ie: poisoned/unpoisoned) when the corresponding memory is
allocated/deallocated. I did not check all maps, but looking at a few
of those, they are vmalloc'ed, so the proper monitoring will depend
directly on CONFIG_KASAN_VMALLOC
- bpf stack: when a program allocates some memory on its own stack, it is
not tracked by KASAN. To be able to track stack memory misuse, the JIT
compiler must insert some red zones around the variables on the stack.
This point looks more complex than the previous ones, as we need to:
- identify variables that live on bpf program stack instead of
registers
- insert asan left/right red zones, and possibly inter-variables red zones
- and so account for this "stack overhead", eg in the verifier
I then propose to treat this "stack monitoring" as a next step, to be
implemented once we have basic KASAN monitoring integrated in the x86
JIT compiler.
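For reference, the address-to-shadow mapping all of the above relies on is
one shadow byte per 8 bytes of memory; a userspace model for x86-64 is below
(the offset is the x86-64 CONFIG_KASAN_SHADOW_OFFSET default; other
architectures use different values):

```c
#include <stdint.h>

/* Generic KASAN address-to-shadow mapping: 1 shadow byte covers 8 bytes.
 * The offset below is the x86-64 CONFIG_KASAN_SHADOW_OFFSET default. */
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL
#define KASAN_SHADOW_SCALE_SHIFT 3

static inline uintptr_t kasan_mem_to_shadow(uintptr_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
```

So checking the shadow state of any mapped address is one shift, one add and
one byte load, which is what makes instrumenting every LDX/STX affordable.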
# Integration
- KASAN monitoring for eBPF programs can be set under a new
CONFIG_KASAN_EBPF kconfig
- will likely depend on a few other Kconfigs, eg CONFIG_BPF_JIT, and
possibly a CONFIG_HAVE_KASAN_EBPF, that would be set for x86 only
for now
- I am also thinking about adding a sysctl (present and enabled by
default if CONFIG_KASAN_EBPF=y), to allow temporarily disabling KASAN
for ebpf programs. When set to 0, the JIT compiler would stop inserting
checking instructions into newly loaded programs.
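A minimal Kconfig sketch of this wiring (KASAN_EBPF, HAVE_KASAN_EBPF and
BPF_JIT are the names proposed above; the dependency on KASAN_GENERIC is my
assumption, since the __asan_* hooks discussed here come from the generic
mode):

```kconfig
config HAVE_KASAN_EBPF
	bool
	# selected by an architecture's Kconfig once its BPF JIT
	# supports KASAN instrumentation (x86 only for now)

config KASAN_EBPF
	bool "KASAN instrumentation of JITed eBPF programs"
	depends on KASAN_GENERIC && BPF_JIT && HAVE_KASAN_EBPF
	help
	  Make the BPF JIT insert KASAN checks around load/store
	  instructions of loaded eBPF programs.
```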
# Testing
This new KASAN feature must obviously come with some tests. I'd like to
find a way to trigger KASAN reports in ebpf programs without having to
alter the verifier specifically for testing. I am thinking about
creating dedicated _faulty_ kfuncs in
tools/testing/selftests/bpf/bpf_testmod.c, and a corresponding bpf
program:
- those kfuncs would return kernel data to be used by the test program,
but they would trigger a variety of issues, eg:
- returned data has already been freed
- returned data is not pointing to the beginning of an allocated
buffer but at an arbitrary offset in this buffer, making the caller
potentially perform OoB accesses
- etc
- ebpf programs would trigger kasan reports when trying to access the
corresponding data, the test would be about making sure that the
report is indeed triggered
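As an illustration, one such faulty kfunc could look like the following
(kernel-module fragment, not runnable standalone; the function name is
hypothetical, while __bpf_kfunc, kmalloc and kfree are the real kernel
APIs):

```c
/* Hypothetical faulty kfunc for bpf_testmod: returns a pointer to memory
 * that has already been freed, so any instrumented load through it should
 * make KASAN report a use-after-free. */
__bpf_kfunc char *bpf_testmod_kasan_get_freed_buf(void)
{
	char *buf = kmalloc(64, GFP_KERNEL);

	if (!buf)
		return NULL;
	kfree(buf); /* deliberate: hand a dangling pointer to the caller */
	return buf;
}
```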
Alexei also made me aware that Emil Tsalapatis is working on adding ASAN
for bpf arenas ([3]). The main difference is that Emil's sanitizing will
be directly included in the bpf bytecode, while this RFC proposes to
inject ASAN checks in the JITed code. Also, Emil's series seems to need
dedicated KASAN support as it aims to sanitize accesses on memory
returned by the custom allocators brought by this same series (on top of
bpf arenas).
This plan is based on some small tests and quite a lot of code
reading; I hope I am not missing any major feature and/or technical
constraint, hence this RFC. Once again, feel free to correct me and/or
challenge this plan.
Alexis
[0] https://alpha-omega.dev/grants/grantrecipients/
[1] https://www.kernel.org/doc/html/latest/dev-tools/kasan.html
[2] https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#mapping
[3] https://lore.kernel.org/bpf/20260127181610.86376-1-emil@etsalapatis.com/
--
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-06 11:31 [RFC] adding KASAN support to JIT compiler(s) Alexis Lothoré
@ 2026-02-07 3:02 ` Alexei Starovoitov
2026-02-09 21:03 ` Alexis Lothoré
0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-07 3:02 UTC (permalink / raw)
To: Alexis Lothoré
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
On Fri, Feb 6, 2026 at 3:31 AM Alexis Lothoré
<alexis.lothore@bootlin.com> wrote:
>
> Hi everyone,
>
> I am starting to work on adding KASAN support for loaded programs in the
> kernel (in the context of the sponsorship granted to the eBPF Foundation
> by the Alpha-Omega project, see [0]). I have done a bit of
> research/experiments, but before actually writing and sending things
> upstream, I'm sharing here what I understood on the requirement overall,
> and how I am currently considering to implement it. Please feel free to
> correct or enrich any part below!
>
> # Brief
>
> Once a program has passed the verifier (and have been JIted into native
> code), we have strong guarantees that its _behavior_ will not trigger
> bugs like infinite loops, invalid access on input context, NULL pointer
> dereference, and so on. However, eBPF programs can still generate
> invalid memory accesses in some scenarios:
> - programs validated behavior relies on the hypothesis
> that the tools (helpers, kfuncs) and data (eg kernel data) provided to
> eBPF programs are sane: eg if a pointer returned by the kernel to a
> program is somehow bogus (used after free, triggering out of bounds
> accesses, etc), a bpf program that has been validated by the verifier
> will happily use this pointer.
> - there could be a bug in the verifier and/or the JIT compiler, letting
> invalid accesses slip through
>
> To help detect those issues, Address Sanitizing can be implemented for
> loaded programs.
>
> Similarly to the KASAN tooling implemented for kernel code ([1]), the
> goal will be to instrument any load/store performed by loaded eBPF
> programs. This means detecting those load/store in eBPF bytecode (when
> being processed by the JIT compiler), and insert calls to the
> corresponding __asan_loadX/__asan_storeX before those accesses,
> similarly to what compilers do for KASAN. The major difference here is
> that this instrumentation will not be inserted at program build time: it
> will be inserted dynamically by the JIT compiler, while it is turning
> the eBPF bytecode into native machine code (and so, this work will focus
> on jit-enabled systems). I am considering this general process:
>
> # Basic instrumentation
>
> - start in do_jit(), in arch/<$ARCH>/net/bpf_jit_comp.c
> - note from Alexei: let's focus on x86 instead of all architectures.
> - in the main loop going over all instructions, for all load/store
> instructions
> - note from Alexei: let's focus on LDX/STX for now instead of _any_
> load/store instruction to keep JIT complexity reasonable
> - identify the size (b, h, w, dw) and type (read/write) of the access.
> Depending on those two, define the __asan_XXX function to call (those
> functions are defined in mm/kasan/generic.c, and are basically
> wrappers around check_region_inline)
> - I did some tests, emitting explicit calls to the __asan_load/store
> checkers implemented in the kernel directly into the jitted code,
> and it seems to be fine to call those from bpf execution context
> - similarly to what ASAN is doing ([2]), possibly implement a fast path/slow
> path, mechanism:
> - fast path: emit instructions to first check the whole corresponding
> shadow byte, it it's 0, access is legitimate, jump back to the
> actual memory access
> - slow path: if the shadow byte is not 0, emit a call to the relevant
> __asan_load/storeXXX to validate or report the access.
Are you sure compilers do this optimization for KASAN?
I don't remember ever seeing it in assembly.
It sounds interesting, but let's do it in phase 2 when the whole
thing is working and we start optimizing speed.
>
> # Memory tracking
>
> For this instrumentation to behave correctly, the monitored memory must
> be tracked accordingly. The method used by KASAN is to allocate some
> "shadow memory", which allows monitoring 8 bytes of memory with 1 byte
> of shadow memory. Legit memory must have its corresponding shadow area
> "unpoisoned", and must be poisoned/quarantined again once accesses are not
> legitimate anymore. There are different memory types, with some of them
> already covered/monitored:
> - data passed to programs: KASAN on kernel side is already taking care
> of {un}poisoning this memory, so we can check directly the
> corresponding shadow memory.
> - bpf global variables: global variables in bpf programs are turned into
> maps. Those maps, when created by the kernel, are then already
> monitored (ie: poisoned/unpoisoned) when the corresponding memory is
> allocated/deallocated. I did not check all maps, but looking at a few
> of those, they are vmalloc'ed, so the proper monitoring will depend
> directly on CONFIG_KASAN_VMALLOC
bpf global vars are in bpf array. So there is a shadow memory for it.
As far as other map types. There is arena and it's special.
load/store to arena are JITed differently already.
They cannot be instrumented for kasan.
> - bpf stack: when a program allocate some memory on its own stack, it is
> not tracked by KASAN. To be able to track stack memory misusage, JIT
> compiler must insert some red zones around the variables on the stack.
> This point looks more complex than the previous ones, as we need to:
> - identify variables that live on bpf program stack instead of
> registers
> - insert asan left/right red zones, and possibly inter-variables red zones
> - and so account for this "stack overhead", eg in the verifier
> I then propose to put this "stack monitoring" as a next step, to be
> implemented once we have a basic kasan monitoring integrated in x86
> JIT compiler.
I'm not sure it can be deferred. Pretty much all bpf programs
access stack with load/stores.
So all of such instructions should not be instrumented.
It shouldn't be hard to do though.
The verifier knows all stack accesses.
It can mark such LDX/STX insns in insn_aux_data with some flag.
Currently JITs don't have access to verifier info,
but that's the direction we're going to:
See proposal:
https://lore.kernel.org/all/CAADnVQLha64x_LQ1Ph+0dEdP2sNms71k41pwEVMwxrbBG78M5Q@mail.gmail.com/
It was couple weeks ago. If Xu Kuohai isn't going to follow up
then this can be a prerequisite.
>
> # Integration
>
> - KASAN monitoring for eBPF programs can be set under a new
> CONFIG_KASAN_EBPF kconfig
> - will likely depend on a few other Kconfigs, eg CONFIG_BPF_JIT, and
> possibly a CONFIG_HAVE_KASAN_EBPF, that would be set for x86 only
> for now
> - I am also thinking about adding a sysctl (present and enabled by
> default if CONFIG_KASAN_EBPF=1), to allow temporarily disabling KASAN
> for ebpf programs. When set to 0, JIT compiler would stop inserting
> checking instructions in new programs being loaded.
I'm not sure that sysctl is necessary. If the kernel is compiled
with kasan and JIT supports it, it should go and instrument all progs.
We can add sysctl later if necessary.
> # Testing
>
> This new KASAN feature must obviously come with some tests. I'd like to
> find a way to trigger KASAN reports in ebpf programs without having to
> alter the verifier specifically for testing. I am thinking about
> creating dedicated _faulty_ kfuncs in
> tools/testing/selftests/bpf/bpf_testmod.c, and a corresponding bpf
> program:
> - those kfuncs would return kernel data to be used by the test program,
> but they would trigger a variety of issues, eg:
> - returned data has already been freed
> - returned data is not pointing to the beginning of an allocated
> buffer but at an arbitrary offset in this buffer, making the caller
> potentially perform OoB accesses
> - etc
> - ebpf programs would trigger kasan reports when trying to access the
> corresponding data, the test would be about making sure that the
> report is indeed triggered
Everything else makes total sense.
Thank you for working on it.
^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-07 3:02 ` Alexei Starovoitov
@ 2026-02-09 21:03 ` Alexis Lothoré
2026-02-09 21:30 ` Emil Tsalapatis
2026-02-10 21:43 ` Alexei Starovoitov
0 siblings, 2 replies; 8+ messages in thread
From: Alexis Lothoré @ 2026-02-09 21:03 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
Hi Alexei, thanks for the feedback,
On Sat Feb 7, 2026 at 4:02 AM CET, Alexei Starovoitov wrote:
> On Fri, Feb 6, 2026 at 3:31 AM Alexis Lothoré
> <alexis.lothore@bootlin.com> wrote:
[...]
>> - similarly to what ASAN is doing ([2]), possibly implement a fast path/slow
>> path, mechanism:
>> - fast path: emit instructions to first check the whole corresponding
>> shadow byte, it it's 0, access is legitimate, jump back to the
>> actual memory access
>> - slow path: if the shadow byte is not 0, emit a call to the relevant
>> __asan_load/storeXXX to validate or report the access.
>
> Are you sure compilers do this optimization for KASAN?
> I don't remember ever seeing it in assembly.
> It sounds interesting, but let's do it in phase 2 when the whole
> thing is working and we start optimizing speed.
I've seen it described in ASAN documentation, but now that you mention
it, I did not see it in generated machine code... Anyway, that's indeed
likely some premature optimization, I'll keep this aside for now.
[...]
>> - bpf global variables: global variables in bpf programs are turned into
>> maps. Those maps, when created by the kernel, are then already
>> monitored (ie: poisoned/unpoisoned) when the corresponding memory is
>> allocated/deallocated. I did not check all maps, but looking at a few
>> of those, they are vmalloc'ed, so the proper monitoring will depend
>> directly on CONFIG_KASAN_VMALLOC
>
> bpf global vars are in bpf array. So there is a shadow memory for it.
>
> As far as other map types. There is arena and it's special.
> load/store to arena are JITed differently already.
> They cannot be instrumented for kasan.
ACK, thanks for the clarification.
>> - bpf stack: when a program allocate some memory on its own stack, it is
>> not tracked by KASAN. To be able to track stack memory misusage, JIT
>> compiler must insert some red zones around the variables on the stack.
>> This point looks more complex than the previous ones, as we need to:
>> - identify variables that live on bpf program stack instead of
>> registers
>> - insert asan left/right red zones, and possibly inter-variables red zones
>> - and so account for this "stack overhead", eg in the verifier
>> I then propose to put this "stack monitoring" as a next step, to be
>> implemented once we have a basic kasan monitoring integrated in x86
>> JIT compiler.
>
> I'm not sure it can be deferred. Pretty much all bpf programs
> access stack with load/stores.
> So all of such instructions should not be instrumented.
I am not sure I get your point here. If the matter cannot be deferred
(because pretty much all bpf programs access stack with load/stores),
then all of such instructions _should_ be instrumented (so that we
detect invalid stack accesses), right? Or am I getting it wrong?
> It shouldn't be hard to do though.
> The verifier knows all stack accesses.
> It can mark such LDX/STX insns in insn_aux_data with some flag.
> Currently JITs don't have access to verifier info,
> but that's the direction we're going to:
> See proposal:
> https://lore.kernel.org/all/CAADnVQLha64x_LQ1Ph+0dEdP2sNms71k41pwEVMwxrbBG78M5Q@mail.gmail.com/
> It was couple weeks ago. If Xu Kuohai isn't going to follow up
> then this can be a prerequisite.
ACK. I'll follow Xu's work (and collaborate with him if
relevant/helpful), so that JIT comp can use verifier info to properly
monitor stack memory accesses.
I am not familiar yet with the verifier code, but I then expect this
work to potentially bring some changes into it as well (aside from the
verifier info passed to the JIT compiler mentioned above). Eg, if adding
red zones around stack variables is indeed required, it will increase
stack usage, and so the verifier may have to account for that (eg when
validating max stack depth?). I'll have to clarify this kind of point.
[...]
>> - I am also thinking about adding a sysctl (present and enabled by
>> default if CONFIG_KASAN_EBPF=1), to allow temporarily disabling KASAN
>> for ebpf programs. When set to 0, JIT compiler would stop inserting
>> checking instructions in new programs being loaded.
>
> I'm not sure that sysctl is necessary. If the kernel is compiled
> with kasan and JIT supports it, it should go and instrument all progs.
> We can add sysctl later if necessary.
ACK.
Thanks,
Alexis
--
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-09 21:03 ` Alexis Lothoré
@ 2026-02-09 21:30 ` Emil Tsalapatis
2026-02-10 21:43 ` Alexei Starovoitov
1 sibling, 0 replies; 8+ messages in thread
From: Emil Tsalapatis @ 2026-02-09 21:30 UTC (permalink / raw)
To: Alexis Lothoré, Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Daniel Borkmann
On Mon Feb 9, 2026 at 4:03 PM EST, Alexis Lothoré wrote:
> Hi Alexei, thanks for the feedback,
>
> On Sat Feb 7, 2026 at 4:02 AM CET, Alexei Starovoitov wrote:
>> On Fri, Feb 6, 2026 at 3:31 AM Alexis Lothoré
>> <alexis.lothore@bootlin.com> wrote:
>
> [...]
>
>>> - similarly to what ASAN is doing ([2]), possibly implement a fast path/slow
>>> path, mechanism:
>>> - fast path: emit instructions to first check the whole corresponding
>>> shadow byte, it it's 0, access is legitimate, jump back to the
>>> actual memory access
>>> - slow path: if the shadow byte is not 0, emit a call to the relevant
>>> __asan_load/storeXXX to validate or report the access.
>>
>> Are you sure compilers do this optimization for KASAN?
>> I don't remember ever seeing it in assembly.
>> It sounds interesting, but let's do it in phase 2 when the whole
>> thing is working and we start optimizing speed.
>
> I've seen it described in ASAN documentation, but now that you mention
> it, I did not see it in generated machine code... Anyway, that's indeed
> likely some premature optimization, I'll keep this aside for now.
>
Short note on that, LLVM ASAN has code for automatically doing what sounds
like this optimization [1]. If the pass is configured to generate ASAN
code directly instead of injecting runtime calls, it does pretty much what
you're describing.
[1] https://github.com/llvm/llvm-project/blob/8418c4196d5c5a60003b0f19257dfef2ffbe008e/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1937
> [...]
>
>>> - bpf global variables: global variables in bpf programs are turned into
>>> maps. Those maps, when created by the kernel, are then already
>>> monitored (ie: poisoned/unpoisoned) when the corresponding memory is
>>> allocated/deallocated. I did not check all maps, but looking at a few
>>> of those, they are vmalloc'ed, so the proper monitoring will depend
>>> directly on CONFIG_KASAN_VMALLOC
>>
>> bpf global vars are in bpf array. So there is a shadow memory for it.
>>
>> As far as other map types. There is arena and it's special.
>> load/store to arena are JITed differently already.
>> They cannot be instrumented for kasan.
>
> ACK, thanks for the clarification.
>
>>> - bpf stack: when a program allocate some memory on its own stack, it is
>>> not tracked by KASAN. To be able to track stack memory misusage, JIT
>>> compiler must insert some red zones around the variables on the stack.
>>> This point looks more complex than the previous ones, as we need to:
>>> - identify variables that live on bpf program stack instead of
>>> registers
>>> - insert asan left/right red zones, and possibly inter-variables red zones
>>> - and so account for this "stack overhead", eg in the verifier
>>> I then propose to put this "stack monitoring" as a next step, to be
>>> implemented once we have a basic kasan monitoring integrated in x86
>>> JIT compiler.
>>
>> I'm not sure it can be deferred. Pretty much all bpf programs
>> access stack with load/stores.
>> So all of such instructions should not be instrumented.
>
> I am not sure to get your point here. If the matter can not be deferred
> (because pretty much all bpf programs access stack with load/stores),
> then all of such instructions _should_ be instrumented (so that we
> detect invalid stack accesses), right ? Or am I getting it wrong ?
>
>> It shouldn't be hard to do though.
>> The verifier knows all stack accesses.
>> It can mark such LDX/STX insns in insn_aux_data with some flag.
>> Currently JITs don't have access to verifier info,
>> but that's the direction we're going to:
>> See proposal:
>> https://lore.kernel.org/all/CAADnVQLha64x_LQ1Ph+0dEdP2sNms71k41pwEVMwxrbBG78M5Q@mail.gmail.com/
>> It was couple weeks ago. If Xu Kuohai isn't going to follow up
>> then this can be a prerequisite.
>
> ACK. I'll follow Xu's work (and collaborate with him if
> relevant/helpful), so that JIT comp can use verifier info to properly
> monitor stack memory accesses.
>
> I am not familiar yet with the verifier code, but I then expect this
> work to potentially bring some changes into it as well (aside from the
> info to pass to JIT comp. mentioned above). Eg, if adding red zones
> around stack variables is indeed required, it will increase stack usage,
> and so the verifier may have to account for those (eg when validating
> max stack depth ?). I'll have to clarify this kind of point.
>
> [...]
>
>>> - I am also thinking about adding a sysctl (present and enabled by
>>> default if CONFIG_KASAN_EBPF=1), to allow temporarily disabling KASAN
>>> for ebpf programs. When set to 0, JIT compiler would stop inserting
>>> checking instructions in new programs being loaded.
>>
>> I'm not sure that sysctl is necessary. If the kernel is compiled
>> with kasan and JIT supports it, it should go and instrument all progs.
>> We can add sysctl later if necessary.
>
> ACK.
>
> Thanks,
>
> Alexis
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-09 21:03 ` Alexis Lothoré
2026-02-09 21:30 ` Emil Tsalapatis
@ 2026-02-10 21:43 ` Alexei Starovoitov
2026-02-11 23:17 ` Alexis Lothoré
1 sibling, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-10 21:43 UTC (permalink / raw)
To: Alexis Lothoré
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
On Mon, Feb 9, 2026 at 1:03 PM Alexis Lothoré
<alexis.lothore@bootlin.com> wrote:
>
> >> - bpf stack: when a program allocate some memory on its own stack, it is
> >> not tracked by KASAN. To be able to track stack memory misusage, JIT
> >> compiler must insert some red zones around the variables on the stack.
> >> This point looks more complex than the previous ones, as we need to:
> >> - identify variables that live on bpf program stack instead of
> >> registers
> >> - insert asan left/right red zones, and possibly inter-variables red zones
> >> - and so account for this "stack overhead", eg in the verifier
> >> I then propose to put this "stack monitoring" as a next step, to be
> >> implemented once we have a basic kasan monitoring integrated in x86
> >> JIT compiler.
> >
> > I'm not sure it can be deferred. Pretty much all bpf programs
> > access stack with load/stores.
> > So all of such instructions should not be instrumented.
>
> I am not sure to get your point here. If the matter can not be deferred
> (because pretty much all bpf programs access stack with load/stores),
> then all of such instructions _should_ be instrumented (so that we
> detect invalid stack accesses), right ? Or am I getting it wrong ?
As far as I understand compilers don't sanitize stack access
because there is no shadow memory behind it.
There is no "allocation of stack" or "deallocation", so no UAF
or things like that.
The kernel already has guard pages for stack overflow and that's
about it.
> I am not familiar yet with the verifier code, but I then expect this
> work to potentially bring some changes into it as well (aside from the
> info to pass to JIT comp. mentioned above). Eg, if adding red zones
> around stack variables is indeed required, it will increase stack usage,
> and so the verifier may have to account for those (eg when validating
> max stack depth ?). I'll have to clarify this kind of point.
Redzones? Around what? There is no way to tell where variables
are and that one access aliases into another.
imo existing stack guard pages are enough here.
So no instrumentation of stack ldx/stx.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-10 21:43 ` Alexei Starovoitov
@ 2026-02-11 23:17 ` Alexis Lothoré
2026-02-12 2:06 ` Alexei Starovoitov
0 siblings, 1 reply; 8+ messages in thread
From: Alexis Lothoré @ 2026-02-11 23:17 UTC (permalink / raw)
To: Alexei Starovoitov, Alexis Lothoré
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
On Tue Feb 10, 2026 at 10:43 PM CET, Alexei Starovoitov wrote:
> On Mon, Feb 9, 2026 at 1:03 PM Alexis Lothoré
> <alexis.lothore@bootlin.com> wrote:
>>
>> >> - bpf stack: when a program allocate some memory on its own stack, it is
>> >> not tracked by KASAN. To be able to track stack memory misusage, JIT
>> >> compiler must insert some red zones around the variables on the stack.
>> >> This point looks more complex than the previous ones, as we need to:
>> >> - identify variables that live on bpf program stack instead of
>> >> registers
>> >> - insert asan left/right red zones, and possibly inter-variables red zones
>> >> - and so account for this "stack overhead", eg in the verifier
>> >> I then propose to put this "stack monitoring" as a next step, to be
>> >> implemented once we have a basic kasan monitoring integrated in x86
>> >> JIT compiler.
>> >
>> > I'm not sure it can be deferred. Pretty much all bpf programs
>> > access stack with load/stores.
>> > So all of such instructions should not be instrumented.
>>
>> I am not sure to get your point here. If the matter can not be deferred
>> (because pretty much all bpf programs access stack with load/stores),
>> then all of such instructions _should_ be instrumented (so that we
>> detect invalid stack accesses), right ? Or am I getting it wrong ?
>
> As far as I understand compilers don't sanitize stack access
> because there is no shadow memory behind it.
Is that so? Because if I take a look at the kasan tests in the kernel, I
find for example this kasan_stack_oob in mm/kasan/kasan_test_c.c:
static void kasan_stack_oob(struct kunit *test)
{
char stack_array[10];
/* See comment in kasan_global_oob_right. */
char *volatile array = stack_array;
char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);
KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}
which gives me, for x86:
0000000000000340 <kasan_stack_oob>:
340: f3 0f 1e fa endbr64
344: e8 00 00 00 00 call 349 <kasan_stack_oob+0x9>
349: 48 ba 00 00 00 00 00 movabs $0xdffffc0000000000,%rdx
350: fc ff df
353: 41 56 push %r14
355: 31 c9 xor %ecx,%ecx
357: 4c 8d b7 e0 01 00 00 lea 0x1e0(%rdi),%r14
35e: 41 55 push %r13
360: 41 54 push %r12
362: 55 push %rbp
363: 48 89 fd mov %rdi,%rbp
366: 53 push %rbx
367: 48 81 ec c0 00 00 00 sub $0xc0,%rsp
36e: 48 89 e0 mov %rsp,%rax
371: 48 c7 04 24 b3 8a b5 movq $0x41b58ab3,(%rsp)
378: 41
379: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
380: 00 00
382: 48 c1 e8 03 shr $0x3,%rax
386: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
38d: 00 00
38f: 48 89 c3 mov %rax,%rbx
392: 48 01 d0 add %rdx,%rax <= %rax = (%rsp >> 3) + SHADOW_OFFSET
395: c7 00 f1 f1 f1 f1 movl $0xf1f1f1f1,(%rax) <= left red zone
39b: c7 40 04 f1 f1 01 f2 movl $0xf201f1f1,0x4(%rax) <= left red zone, ??? (1 byte unpoisoned), mid red zone
3a2: c7 40 08 00 f2 f2 f2 movl $0xf2f2f200,0x8(%rax) <= mid red zone, array unpoisoned ?
3a9: c7 40 0c 00 00 f2 f2 movl $0xf2f20000,0xc(%rax) <= ??? (16 bytes unpoisoned), mid red zone
3b0: c7 40 10 00 02 f3 f3 movl $0xf3f30200,0x10(%rax) <= stack_array unpoisoned ? right red zone
[...]
Am I misinterpreting something here?
For the record, my kernel is compiled with GCC, for x86, with generic,
inlined KASAN. Digging a bit more into the corresponding commits and
Kconfig file, I see that this stack monitoring is not enabled for all
architectures and compilers, though. For example, I see that it is
deemed "unsafe" when using clang.
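To make the annotated constants above easier to read, here is a small
userspace C sketch of the shadow encoding used by generic KASAN. The
scale shift and the 0xdffffc0000000000 offset come straight from the
disassembly; the helper names (kasan_shadow_addr, kasan_byte_accessible)
are mine, not the kernel's:

```c
#include <stdbool.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3                    /* 1 shadow byte per 8-byte granule */
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL /* the movabs constant above */

/* stack redzone markers visible in the disassembly */
#define KASAN_STACK_LEFT  0xF1
#define KASAN_STACK_MID   0xF2
#define KASAN_STACK_RIGHT 0xF3

/* mirrors the "shr $0x3; add %rdx" sequence at offsets 382/392 */
static uint64_t kasan_shadow_addr(uint64_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

/*
 * Would a 1-byte access at addr be allowed, given the shadow byte
 * covering its 8-byte granule?  0 means the whole granule is valid;
 * 1..7 means only the first N bytes are valid (e.g. the 0x02 byte
 * covering the 2-byte tail of the 10-byte stack_array); anything
 * else (0xF1/0xF2/0xF3, ...) is a poisoned redzone.
 */
static bool kasan_byte_accessible(uint64_t addr, uint8_t shadow)
{
	if (shadow == 0)
		return true;
	if (shadow >= 8)
		return false;
	return (addr & 7) < shadow;
}
```

Under this decoding, the 0x00 and 0x02 bytes in the movl at offset 3b0
are exactly the 8 + 2 bytes of stack_array, with 0xF3 closing it on the
right.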
> There is no "allocation of stack" or "deallocation", so no UAF
> or things like that.
> The kernel already has guard pages for stack overflow and that's
> about it.
>
>> I am not familiar yet with the verifier code, but I then expect this
>> work to potentially bring some changes into it as well (aside from the
>> info to pass to JIT comp. mentioned above). Eg, if adding red zones
>> around stack variables is indeed required, it will increase stack usage,
>> and so the verifier may have to account for those (eg when validating
>> max stack depth ?). I'll have to clarify this kind of point.
>
> Redzones? Around what? There is no way to tell where variables
> are and that one access aliases into another.
> imo existing stack guard pages are enough here.
> So no instrumentation of stack ldx/stx.
Aside from the point above, if guard pages are considered enough, then
fine: I can make the JIT compiler ignore stack accesses when
instrumenting load/store insns (with the verifier-to-JIT data passing
still to be implemented, related to Xu's work).
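As a rough illustration of what that filter could look like in the JIT,
here is a userspace sketch. The BPF class encodings are the real ones,
but insn_needs_kasan_check and the bpf_insn_view struct are hypothetical
stand-ins for what the kernel would express with struct bpf_insn from
linux/filter.h:

```c
#include <stdbool.h>
#include <stdint.h>

#define BPF_CLASS(code) ((code) & 0x07)
#define BPF_LDX   0x01
#define BPF_STX   0x03
#define BPF_REG_10 10  /* read-only frame pointer */

/* hypothetical mirror of the fields we need from struct bpf_insn */
struct bpf_insn_view {
	uint8_t code;
	uint8_t dst_reg;
	uint8_t src_reg;
};

/* Should the JIT emit a KASAN check before this insn? */
static bool insn_needs_kasan_check(const struct bpf_insn_view *insn)
{
	uint8_t class = BPF_CLASS(insn->code);

	if (class != BPF_LDX && class != BPF_STX)
		return false;  /* not a memory access */

	/*
	 * Direct stack accesses are R10-relative: the base register is
	 * dst_reg for stores, src_reg for loads.  Skip those, per the
	 * discussion above.
	 */
	if (class == BPF_STX && insn->dst_reg == BPF_REG_10)
		return false;
	if (class == BPF_LDX && insn->src_reg == BPF_REG_10)
		return false;

	return true;
}
```

Note that this only catches accesses whose base register is literally
R10; a register holding a copy of R10 would slip through, which is
precisely why the verifier-provided per-insn info mentioned above is
still needed.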
Thanks,
Alexis
--
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-11 23:17 ` Alexis Lothoré
@ 2026-02-12 2:06 ` Alexei Starovoitov
2026-02-13 13:31 ` Alexis Lothoré
0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-12 2:06 UTC (permalink / raw)
To: Alexis Lothoré
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
On Wed, Feb 11, 2026 at 3:17 PM Alexis Lothoré
<alexis.lothore@bootlin.com> wrote:
>
> On Tue Feb 10, 2026 at 10:43 PM CET, Alexei Starovoitov wrote:
> > On Mon, Feb 9, 2026 at 1:03 PM Alexis Lothoré
> > <alexis.lothore@bootlin.com> wrote:
> >>
> >> >> - bpf stack: when a program allocate some memory on its own stack, it is
> >> >> not tracked by KASAN. To be able to track stack memory misusage, JIT
> >> >> compiler must insert some red zones around the variables on the stack.
> >> >> This point looks more complex than the previous ones, as we need to:
> >> >> - identify variables that live on bpf program stack instead of
> >> >> registers
> >> >> - insert asan left/right red zones, and possibly inter-variables red zones
> >> >> - and so account for this "stack overhead", eg in the verifier
> >> >> I then propose to put this "stack monitoring" as a next step, to be
> >> >> implemented once we have a basic kasan monitoring integrated in x86
> >> >> JIT compiler.
> >> >
> >> > I'm not sure it can be deferred. Pretty much all bpf programs
> >> > access stack with load/stores.
> >> > So all of such instructions should not be instrumented.
> >>
> >> I am not sure to get your point here. If the matter can not be deferred
> >> (because pretty much all bpf programs access stack with load/stores),
> >> then all of such instructions _should_ be instrumented (so that we
> >> detect invalid stack accesses), right ? Or am I getting it wrong ?
> >
> > As far as I understand compilers don't sanitize stack access
> > because there is no shadow memory behind it.
>
> Is that so ? Because if I take a look at kasan tests in the kernel, I
> find for example this kasan_stack_oob in mm/kasan/kasan_test_c.c:
>
> static void kasan_stack_oob(struct kunit *test)
> {
> 	char stack_array[10];
> 	/* See comment in kasan_global_oob_right. */
> 	char *volatile array = stack_array;
> 	char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
>
> 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);
>
> 	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
> }
>
> which gives me, for x86:
>
> 0000000000000340 <kasan_stack_oob>:
> 340: f3 0f 1e fa endbr64
> 344: e8 00 00 00 00 call 349 <kasan_stack_oob+0x9>
> 349: 48 ba 00 00 00 00 00 movabs $0xdffffc0000000000,%rdx
> 350: fc ff df
> 353: 41 56 push %r14
> 355: 31 c9 xor %ecx,%ecx
> 357: 4c 8d b7 e0 01 00 00 lea 0x1e0(%rdi),%r14
> 35e: 41 55 push %r13
> 360: 41 54 push %r12
> 362: 55 push %rbp
> 363: 48 89 fd mov %rdi,%rbp
> 366: 53 push %rbx
> 367: 48 81 ec c0 00 00 00 sub $0xc0,%rsp
> 36e: 48 89 e0 mov %rsp,%rax
> 371: 48 c7 04 24 b3 8a b5 movq $0x41b58ab3,(%rsp)
> 378: 41
> 379: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
> 380: 00 00
> 382: 48 c1 e8 03 shr $0x3,%rax
> 386: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
> 38d: 00 00
> 38f: 48 89 c3 mov %rax,%rbx
> 392: 48 01 d0 add %rdx,%rax <= %rax = (%rsp >> 3) + SHADOW_OFFSET
> 395: c7 00 f1 f1 f1 f1 movl $0xf1f1f1f1,(%rax) <= left red zone
> 39b: c7 40 04 f1 f1 01 f2 movl $0xf201f1f1,0x4(%rax) <= left red zone, ??? (1 byte unpoisoned), mid red zone
> 3a2: c7 40 08 00 f2 f2 f2 movl $0xf2f2f200,0x8(%rax) <= mid red zone, array unpoisoned ?
> 3a9: c7 40 0c 00 00 f2 f2 movl $0xf2f20000,0xc(%rax) <= ??? (16 bytes unpoisoned), mid red zone
> 3b0: c7 40 10 00 02 f3 f3 movl $0xf3f30200,0x10(%rax) <= stack_array unpoisoned ? right red zone
> [...]
hmm, indeed. For arrays on the stack that are passed by pointer deeper
into calls, the compiler creates these run-time redzones.
The other stack accesses are not instrumented.
How do you propose to support this in the JIT?
I see no workable way of doing it.
^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] adding KASAN support to JIT compiler(s)
2026-02-12 2:06 ` Alexei Starovoitov
@ 2026-02-13 13:31 ` Alexis Lothoré
0 siblings, 0 replies; 8+ messages in thread
From: Alexis Lothoré @ 2026-02-13 13:31 UTC (permalink / raw)
To: Alexei Starovoitov, Alexis Lothoré
Cc: bpf, Alexei Starovoitov, Thomas Petazzoni,
Bastien Curutchet (eBPF Foundation), Emil Tsalapatis,
Daniel Borkmann
On Thu Feb 12, 2026 at 3:06 AM CET, Alexei Starovoitov wrote:
> On Wed, Feb 11, 2026 at 3:17 PM Alexis Lothoré
> <alexis.lothore@bootlin.com> wrote:
>>
>> On Tue Feb 10, 2026 at 10:43 PM CET, Alexei Starovoitov wrote:
>> > On Mon, Feb 9, 2026 at 1:03 PM Alexis Lothoré
>> > <alexis.lothore@bootlin.com> wrote:
>> >>
>> >> >> - bpf stack: when a program allocate some memory on its own stack, it is
>> >> >> not tracked by KASAN. To be able to track stack memory misusage, JIT
>> >> >> compiler must insert some red zones around the variables on the stack.
>> >> >> This point looks more complex than the previous ones, as we need to:
>> >> >> - identify variables that live on bpf program stack instead of
>> >> >> registers
>> >> >> - insert asan left/right red zones, and possibly inter-variables red zones
>> >> >> - and so account for this "stack overhead", eg in the verifier
>> >> >> I then propose to put this "stack monitoring" as a next step, to be
>> >> >> implemented once we have a basic kasan monitoring integrated in x86
>> >> >> JIT compiler.
>> >> >
>> >> > I'm not sure it can be deferred. Pretty much all bpf programs
>> >> > access stack with load/stores.
>> >> > So all of such instructions should not be instrumented.
>> >>
>> >> I am not sure to get your point here. If the matter can not be deferred
>> >> (because pretty much all bpf programs access stack with load/stores),
>> >> then all of such instructions _should_ be instrumented (so that we
>> >> detect invalid stack accesses), right ? Or am I getting it wrong ?
>> >
>> > As far as I understand compilers don't sanitize stack access
>> > because there is no shadow memory behind it.
>>
>> Is that so ? Because if I take a look at kasan tests in the kernel, I
>> find for example this kasan_stack_oob in mm/kasan/kasan_test_c.c:
>>
>> static void kasan_stack_oob(struct kunit *test)
>> {
>> 	char stack_array[10];
>> 	/* See comment in kasan_global_oob_right. */
>> 	char *volatile array = stack_array;
>> 	char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF];
>>
>> 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK);
>>
>> 	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
>> }
>>
>> which gives me, for x86:
>>
>> 0000000000000340 <kasan_stack_oob>:
>> 340: f3 0f 1e fa endbr64
>> 344: e8 00 00 00 00 call 349 <kasan_stack_oob+0x9>
>> 349: 48 ba 00 00 00 00 00 movabs $0xdffffc0000000000,%rdx
>> 350: fc ff df
>> 353: 41 56 push %r14
>> 355: 31 c9 xor %ecx,%ecx
>> 357: 4c 8d b7 e0 01 00 00 lea 0x1e0(%rdi),%r14
>> 35e: 41 55 push %r13
>> 360: 41 54 push %r12
>> 362: 55 push %rbp
>> 363: 48 89 fd mov %rdi,%rbp
>> 366: 53 push %rbx
>> 367: 48 81 ec c0 00 00 00 sub $0xc0,%rsp
>> 36e: 48 89 e0 mov %rsp,%rax
>> 371: 48 c7 04 24 b3 8a b5 movq $0x41b58ab3,(%rsp)
>> 378: 41
>> 379: 48 c7 44 24 08 00 00 movq $0x0,0x8(%rsp)
>> 380: 00 00
>> 382: 48 c1 e8 03 shr $0x3,%rax
>> 386: 48 c7 44 24 10 00 00 movq $0x0,0x10(%rsp)
>> 38d: 00 00
>> 38f: 48 89 c3 mov %rax,%rbx
>> 392: 48 01 d0 add %rdx,%rax <= %rax = (%rsp >> 3) + SHADOW_OFFSET
>> 395: c7 00 f1 f1 f1 f1 movl $0xf1f1f1f1,(%rax) <= left red zone
>> 39b: c7 40 04 f1 f1 01 f2 movl $0xf201f1f1,0x4(%rax) <= left red zone, ??? (1 byte unpoisoned), mid red zone
>> 3a2: c7 40 08 00 f2 f2 f2 movl $0xf2f2f200,0x8(%rax) <= mid red zone, array unpoisoned ?
>> 3a9: c7 40 0c 00 00 f2 f2 movl $0xf2f20000,0xc(%rax) <= ??? (16 bytes unpoisoned), mid red zone
>> 3b0: c7 40 10 00 02 f3 f3 movl $0xf3f30200,0x10(%rax) <= stack_array unpoisoned ? right red zone
>> [...]
>
> hmm. indeed. For arrays on stack passed by pointer deeper
> into calls the compiler creates these run-time zones.
> The other stack accesses are not instrumented.
>
> How do you propose to support this in JIT?
> I see no workable way of doing it.
I have not thought deeply enough about it to identify clear blockers,
but I was expecting quite a few difficulties, hence my initial proposal
to "postpone" this specific monitoring :) (before you suggested instead
taking it into account so as to _ignore_ all the corresponding stack
access insns in the instrumentation).
I was initially imagining that I could somehow use a program's BTF info
to identify variables on the stack and their location, but that would
not work on this kind of example: it really is a local variable, not a
function argument, so it is not described in the BTF info. I guess this
is only one issue among all those I did not even think of; remembering
the issues encountered when working on trampolines for ARM64, I expect
a wide variety of corner cases about variable location and size as
well, eg when they have custom attributes...
On top of that, I am not sure how frequently the pattern I mentioned
above would appear in BPF programs. Most likely pretty uncommon?
So as for myself, I'm fine with your proposal to instead make the JIT
compiler aware of stack-related insns and just ignore those :)
Thanks,
Alexis
--
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
^ permalink raw reply [flat|nested] 8+ messages in thread
Thread overview: 8+ messages
2026-02-06 11:31 [RFC] adding KASAN support to JIT compiler(s) Alexis Lothoré
2026-02-07 3:02 ` Alexei Starovoitov
2026-02-09 21:03 ` Alexis Lothoré
2026-02-09 21:30 ` Emil Tsalapatis
2026-02-10 21:43 ` Alexei Starovoitov
2026-02-11 23:17 ` Alexis Lothoré
2026-02-12 2:06 ` Alexei Starovoitov
2026-02-13 13:31 ` Alexis Lothoré