* [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
@ 2023-11-27 9:45 Jiri Olsa
From: Jiri Olsa @ 2023-11-27 9:45 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Lee Jones, Maciej Fijalkowski, syzbot+97a4fe20470e9bc30810, bpf,
Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo
Lee pointed out an issue found by syzkaller [0] hitting a BUG in the
prog array map poke update in the prog_array_map_poke_run function, due
to a bpf_arch_text_poke error return value.

There's a race window where bpf_arch_text_poke can fail due to missing
bpf program kallsyms symbols, which is accounted for with the check for
-EINVAL in the BUG_ON.

The problem is that in such a case we won't update the tail call jump,
causing an imbalance for the next tail call update check, which will
then fail with -EBUSY in __bpf_arch_text_poke.

I'm hitting the following race during program load:
  CPU 0                                   CPU 1

  bpf_prog_load
    bpf_check
      do_misc_fixups
        prog_array_map_poke_track
                                          map_update_elem
                                            bpf_fd_array_map_update_elem
                                              prog_array_map_poke_run

                                                bpf_arch_text_poke returns -EINVAL

    bpf_prog_kallsyms_add
After bpf_arch_text_poke (CPU 1) fails to update the tail call jump,
the next poke update fails on the expected jump instruction check in
__bpf_arch_text_poke with -EBUSY and triggers the BUG_ON in
prog_array_map_poke_run.
Similar race exists on the program unload.
Fix this by calling __bpf_arch_text_poke directly and skipping the bpf
symbol check, like we do in bpf_tail_call_direct_fixup. This way
prog_array_map_poke_run does not depend on the bpf program having its
kallsyms symbol in place.
[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810
Cc: Lee Jones <lee@kernel.org>
Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: syzbot+97a4fe20470e9bc30810@syzkaller.appspotmail.com
Tested-by: Lee Jones <lee@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/x86/net/bpf_jit_comp.c | 4 ++--
include/linux/bpf.h | 2 ++
kernel/bpf/arraymap.c | 31 +++++++++++--------------------
3 files changed, 15 insertions(+), 22 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8c10d9abc239..35c2988caf29 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -391,8 +391,8 @@ static int emit_jump(u8 **pprog, void *func, void *ip)
return emit_patch(pprog, func, ip, 0xE9);
}
-static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
- void *old_addr, void *new_addr)
+int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+ void *old_addr, void *new_addr)
{
const u8 *nop_insn = x86_nops[5];
u8 old_insn[X86_PATCH_SIZE];
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 6762dac3ef76..c28a8563e845 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3174,6 +3174,8 @@ enum bpf_text_poke_type {
int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
void *addr1, void *addr2);
+int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+ void *old_addr, void *new_addr);
void *bpf_arch_text_copy(void *dst, void *src, size_t len);
int bpf_arch_text_invalidate(void *dst, size_t len);
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 2058e89b5ddd..0b5afa2ec17a 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -1044,20 +1044,11 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
* activated, so tail call updates can arrive from here
* while JIT is still finishing its final fixup for
* non-activated poke entries.
- * 3) On program teardown, the program's kallsym entry gets
- * removed out of RCU callback, but we can only untrack
- * from sleepable context, therefore bpf_arch_text_poke()
- * might not see that this is in BPF text section and
- * bails out with -EINVAL. As these are unreachable since
- * RCU grace period already passed, we simply skip them.
- * 4) Also programs reaching refcount of zero while patching
+ * 3) Also programs reaching refcount of zero while patching
* is in progress is okay since we're protected under
* poke_mutex and untrack the programs before the JIT
- * buffer is freed. When we're still in the middle of
- * patching and suddenly kallsyms entry of the program
- * gets evicted, we just skip the rest which is fine due
- * to point 3).
- * 5) Any other error happening below from bpf_arch_text_poke()
+ * buffer is freed.
+ * 4) Any error happening below from __bpf_arch_text_poke()
+ * is an unexpected bug.
*/
if (!READ_ONCE(poke->tailcall_target_stable))
@@ -1073,33 +1064,33 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
if (new) {
- ret = bpf_arch_text_poke(poke->tailcall_target,
+ ret = __bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, new_addr);
- BUG_ON(ret < 0 && ret != -EINVAL);
+ BUG_ON(ret < 0);
if (!old) {
- ret = bpf_arch_text_poke(poke->tailcall_bypass,
+ ret = __bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
poke->bypass_addr,
NULL);
- BUG_ON(ret < 0 && ret != -EINVAL);
+ BUG_ON(ret < 0);
}
} else {
- ret = bpf_arch_text_poke(poke->tailcall_bypass,
+ ret = __bpf_arch_text_poke(poke->tailcall_bypass,
BPF_MOD_JUMP,
old_bypass_addr,
poke->bypass_addr);
- BUG_ON(ret < 0 && ret != -EINVAL);
+ BUG_ON(ret < 0);
/* let other CPUs finish the execution of program
* so that it will not possible to expose them
* to invalid nop, stack unwind, nop state
*/
if (!ret)
synchronize_rcu();
- ret = bpf_arch_text_poke(poke->tailcall_target,
+ ret = __bpf_arch_text_poke(poke->tailcall_target,
BPF_MOD_JUMP,
old_addr, NULL);
- BUG_ON(ret < 0 && ret != -EINVAL);
+ BUG_ON(ret < 0);
}
}
}
--
2.43.0
* Re: [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
From: Jiri Olsa @ 2023-11-27 13:09 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Lee Jones, Maciej Fijalkowski, syzbot+97a4fe20470e9bc30810, bpf,
Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo
On Mon, Nov 27, 2023 at 10:45:25AM +0100, Jiri Olsa wrote:
> Lee pointed out an issue found by syzkaller [0] hitting a BUG in the
> prog array map poke update in the prog_array_map_poke_run function, due
> to a bpf_arch_text_poke error return value.
>
> There's a race window where bpf_arch_text_poke can fail due to missing
> bpf program kallsyms symbols, which is accounted for with the check for
> -EINVAL in the BUG_ON.
>
> The problem is that in such a case we won't update the tail call jump,
> causing an imbalance for the next tail call update check, which will
> then fail with -EBUSY in __bpf_arch_text_poke.
>
> I'm hitting the following race during program load:
>
>   CPU 0                                   CPU 1
>
>   bpf_prog_load
>     bpf_check
>       do_misc_fixups
>         prog_array_map_poke_track
>                                           map_update_elem
>                                             bpf_fd_array_map_update_elem
>                                               prog_array_map_poke_run
>
>                                                 bpf_arch_text_poke returns -EINVAL
>
>     bpf_prog_kallsyms_add
>
> After bpf_arch_text_poke (CPU 1) fails to update the tail call jump,
> the next poke update fails on the expected jump instruction check in
> __bpf_arch_text_poke with -EBUSY and triggers the BUG_ON in
> prog_array_map_poke_run.
>
> Similar race exists on the program unload.
>
> Fix this by calling __bpf_arch_text_poke directly and skipping the bpf
> symbol check, like we do in bpf_tail_call_direct_fixup. This way
> prog_array_map_poke_run does not depend on the bpf program having its
> kallsyms symbol in place.
>
> [0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810
>
> Cc: Lee Jones <lee@kernel.org>
> Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
> Reported-by: syzbot+97a4fe20470e9bc30810@syzkaller.appspotmail.com
> Tested-by: Lee Jones <lee@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
hum, this breaks non-x86 builds.. will send new version
jirka
> ---
> arch/x86/net/bpf_jit_comp.c | 4 ++--
> include/linux/bpf.h | 2 ++
> kernel/bpf/arraymap.c | 31 +++++++++++--------------------
> 3 files changed, 15 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 8c10d9abc239..35c2988caf29 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -391,8 +391,8 @@ static int emit_jump(u8 **pprog, void *func, void *ip)
> return emit_patch(pprog, func, ip, 0xE9);
> }
>
> -static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> - void *old_addr, void *new_addr)
> +int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> + void *old_addr, void *new_addr)
> {
> const u8 *nop_insn = x86_nops[5];
> u8 old_insn[X86_PATCH_SIZE];
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 6762dac3ef76..c28a8563e845 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3174,6 +3174,8 @@ enum bpf_text_poke_type {
>
> int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> void *addr1, void *addr2);
> +int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> + void *old_addr, void *new_addr);
>
> void *bpf_arch_text_copy(void *dst, void *src, size_t len);
> int bpf_arch_text_invalidate(void *dst, size_t len);
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index 2058e89b5ddd..0b5afa2ec17a 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -1044,20 +1044,11 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
> * activated, so tail call updates can arrive from here
> * while JIT is still finishing its final fixup for
> * non-activated poke entries.
> - * 3) On program teardown, the program's kallsym entry gets
> - * removed out of RCU callback, but we can only untrack
> - * from sleepable context, therefore bpf_arch_text_poke()
> - * might not see that this is in BPF text section and
> - * bails out with -EINVAL. As these are unreachable since
> - * RCU grace period already passed, we simply skip them.
> - * 4) Also programs reaching refcount of zero while patching
> + * 3) Also programs reaching refcount of zero while patching
> * is in progress is okay since we're protected under
> * poke_mutex and untrack the programs before the JIT
> - * buffer is freed. When we're still in the middle of
> - * patching and suddenly kallsyms entry of the program
> - * gets evicted, we just skip the rest which is fine due
> - * to point 3).
> - * 5) Any other error happening below from bpf_arch_text_poke()
> + * buffer is freed.
> + * 4) Any error happening below from __bpf_arch_text_poke()
> + * is an unexpected bug.
> */
> if (!READ_ONCE(poke->tailcall_target_stable))
> @@ -1073,33 +1064,33 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
> new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
>
> if (new) {
> - ret = bpf_arch_text_poke(poke->tailcall_target,
> + ret = __bpf_arch_text_poke(poke->tailcall_target,
> BPF_MOD_JUMP,
> old_addr, new_addr);
> - BUG_ON(ret < 0 && ret != -EINVAL);
> + BUG_ON(ret < 0);
> if (!old) {
> - ret = bpf_arch_text_poke(poke->tailcall_bypass,
> + ret = __bpf_arch_text_poke(poke->tailcall_bypass,
> BPF_MOD_JUMP,
> poke->bypass_addr,
> NULL);
> - BUG_ON(ret < 0 && ret != -EINVAL);
> + BUG_ON(ret < 0);
> }
> } else {
> - ret = bpf_arch_text_poke(poke->tailcall_bypass,
> + ret = __bpf_arch_text_poke(poke->tailcall_bypass,
> BPF_MOD_JUMP,
> old_bypass_addr,
> poke->bypass_addr);
> - BUG_ON(ret < 0 && ret != -EINVAL);
> + BUG_ON(ret < 0);
> /* let other CPUs finish the execution of program
> * so that it will not possible to expose them
> * to invalid nop, stack unwind, nop state
> */
> if (!ret)
> synchronize_rcu();
> - ret = bpf_arch_text_poke(poke->tailcall_target,
> + ret = __bpf_arch_text_poke(poke->tailcall_target,
> BPF_MOD_JUMP,
> old_addr, NULL);
> - BUG_ON(ret < 0 && ret != -EINVAL);
> + BUG_ON(ret < 0);
> }
> }
> }
> --
> 2.43.0
>
* Re: [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
From: kernel test robot @ 2023-11-27 16:26 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: oe-kbuild-all, Lee Jones, Maciej Fijalkowski,
syzbot+97a4fe20470e9bc30810, bpf, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/bpf-x64-Fix-prog_array_map_poke_run-map-poke-update/20231127-174900
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git master
patch link: https://lore.kernel.org/r/20231127094525.1366740-1-jolsa%40kernel.org
patch subject: [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
config: parisc-randconfig-r071-20231127 (https://download.01.org/0day-ci/archive/20231127/202311272311.WsiMBsbq-lkp@intel.com/config)
compiler: hppa-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231127/202311272311.WsiMBsbq-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202311272311.WsiMBsbq-lkp@intel.com/
All errors (new ones prefixed by >>):
hppa-linux-ld: kernel/bpf/arraymap.o: in function `prog_array_map_poke_run':
>> (.text+0x103c): undefined reference to `__bpf_arch_text_poke'
>> hppa-linux-ld: (.text+0x106c): undefined reference to `__bpf_arch_text_poke'
hppa-linux-ld: (.text+0x1090): undefined reference to `__bpf_arch_text_poke'
hppa-linux-ld: (.text+0x10c0): undefined reference to `__bpf_arch_text_poke'
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
From: kernel test robot @ 2023-11-27 16:27 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: oe-kbuild-all, Lee Jones, Maciej Fijalkowski,
syzbot+97a4fe20470e9bc30810, bpf, Martin KaFai Lau, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/bpf-x64-Fix-prog_array_map_poke_run-map-poke-update/20231127-174900
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git master
patch link: https://lore.kernel.org/r/20231127094525.1366740-1-jolsa%40kernel.org
patch subject: [PATCH bpf] bpf, x64: Fix prog_array_map_poke_run map poke update
config: i386-randconfig-061-20231127 (https://download.01.org/0day-ci/archive/20231127/202311272245.sevnkuSF-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231127/202311272245.sevnkuSF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202311272245.sevnkuSF-lkp@intel.com/
All errors (new ones prefixed by >>):
ld: kernel/bpf/arraymap.o: in function `prog_array_map_poke_run':
>> kernel/bpf/arraymap.c:1067: undefined reference to `__bpf_arch_text_poke'
>> ld: kernel/bpf/arraymap.c:1079: undefined reference to `__bpf_arch_text_poke'
ld: kernel/bpf/arraymap.c:1090: undefined reference to `__bpf_arch_text_poke'
ld: kernel/bpf/arraymap.c:1072: undefined reference to `__bpf_arch_text_poke'
vim +1067 kernel/bpf/arraymap.c
1014
1015 static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
1016 struct bpf_prog *old,
1017 struct bpf_prog *new)
1018 {
1019 u8 *old_addr, *new_addr, *old_bypass_addr;
1020 struct prog_poke_elem *elem;
1021 struct bpf_array_aux *aux;
1022
1023 aux = container_of(map, struct bpf_array, map)->aux;
1024 WARN_ON_ONCE(!mutex_is_locked(&aux->poke_mutex));
1025
1026 list_for_each_entry(elem, &aux->poke_progs, list) {
1027 struct bpf_jit_poke_descriptor *poke;
1028 int i, ret;
1029
1030 for (i = 0; i < elem->aux->size_poke_tab; i++) {
1031 poke = &elem->aux->poke_tab[i];
1032
1033 /* Few things to be aware of:
1034 *
1035 * 1) We can only ever access aux in this context, but
1036 * not aux->prog since it might not be stable yet and
1037 * there could be danger of use after free otherwise.
1038 * 2) Initially when we start tracking aux, the program
1039 * is not JITed yet and also does not have a kallsyms
1040 * entry. We skip these as poke->tailcall_target_stable
1041 * is not active yet. The JIT will do the final fixup
1042 * before setting it stable. The various
1043 * poke->tailcall_target_stable are successively
1044 * activated, so tail call updates can arrive from here
1045 * while JIT is still finishing its final fixup for
1046 * non-activated poke entries.
1047 * 3) Also programs reaching refcount of zero while patching
1048 * is in progress is okay since we're protected under
1049 * poke_mutex and untrack the programs before the JIT
1050 * buffer is freed.
1051 * 4) Any error happening below from __bpf_arch_text_poke()
1052 * is a unexpected bug.
1053 */
1054 if (!READ_ONCE(poke->tailcall_target_stable))
1055 continue;
1056 if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
1057 continue;
1058 if (poke->tail_call.map != map ||
1059 poke->tail_call.key != key)
1060 continue;
1061
1062 old_bypass_addr = old ? NULL : poke->bypass_addr;
1063 old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
1064 new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
1065
1066 if (new) {
> 1067 ret = __bpf_arch_text_poke(poke->tailcall_target,
1068 BPF_MOD_JUMP,
1069 old_addr, new_addr);
1070 BUG_ON(ret < 0);
1071 if (!old) {
1072 ret = __bpf_arch_text_poke(poke->tailcall_bypass,
1073 BPF_MOD_JUMP,
1074 poke->bypass_addr,
1075 NULL);
1076 BUG_ON(ret < 0);
1077 }
1078 } else {
> 1079 ret = __bpf_arch_text_poke(poke->tailcall_bypass,
1080 BPF_MOD_JUMP,
1081 old_bypass_addr,
1082 poke->bypass_addr);
1083 BUG_ON(ret < 0);
1084 /* let other CPUs finish the execution of program
1085 * so that it will not possible to expose them
1086 * to invalid nop, stack unwind, nop state
1087 */
1088 if (!ret)
1089 synchronize_rcu();
1090 ret = __bpf_arch_text_poke(poke->tailcall_target,
1091 BPF_MOD_JUMP,
1092 old_addr, NULL);
1093 BUG_ON(ret < 0);
1094 }
1095 }
1096 }
1097 }
1098
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki