* [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12
@ 2024-10-07 5:14 Howard Chu
2024-10-07 5:14 ` [PATCH 1/2] perf build: Change the clang version check back to 12.0.1 Howard Chu
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Howard Chu @ 2024-10-07 5:14 UTC (permalink / raw)
To: peterz
Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
irogers, adrian.hunter, kan.liang, linux-perf-users, linux-kernel,
Howard Chu
The new augmentation feature in perf trace, along with the protocol
change (from payload to payload->value), breaks the clang 12 build.
In practice, perf trace currently builds only with clang 16 or newer.
However, as pointed out by Namhyung Kim <namhyung@kernel.org> and Ian
Rogers <irogers@google.com>, clang 16, which was released in 2023, is
still too new for most users. Additionally, as James Clark
<james.clark@linaro.org> noted, some commonly used distributions do not
yet support clang 16. Therefore, breaking BPF features between clang 12
and clang 15 is not a good approach.
This patch series rewrites the BPF program in a way that allows it to
pass the BPF verifier, even when the BPF bytecode is generated by older
versions of clang.
However, I have only tested down to clang 14, as older versions are not
supported by my distribution.
Howard Chu (2):
perf build: Change the clang check back to 12.0.1
perf trace: Rewrite BPF code to pass the verifier
tools/perf/Makefile.config | 4 +-
.../bpf_skel/augmented_raw_syscalls.bpf.c | 117 ++++++++++--------
2 files changed, 65 insertions(+), 56 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH 1/2] perf build: Change the clang version check back to 12.0.1
2024-10-07 5:14 [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 Howard Chu
@ 2024-10-07 5:14 ` Howard Chu
2024-10-07 5:14 ` [PATCH 2/2] perf trace: Rewrite BPF programs to pass the verifier Howard Chu
2024-10-10 9:06 ` [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 James Clark
2 siblings, 0 replies; 6+ messages in thread
From: Howard Chu @ 2024-10-07 5:14 UTC (permalink / raw)
To: peterz
Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
irogers, adrian.hunter, kan.liang, linux-perf-users, linux-kernel,
Howard Chu
This serves as a revert for this patch:
https://lore.kernel.org/linux-perf-users/ZuGL9ROeTV2uXoSp@x1/
Signed-off-by: Howard Chu <howardchu95@gmail.com>
---
tools/perf/Makefile.config | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index 4dcf7a0fd235..adfad92ac8ef 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -701,8 +701,8 @@ ifeq ($(BUILD_BPF_SKEL),1)
BUILD_BPF_SKEL := 0
else
CLANG_VERSION := $(shell $(CLANG) --version | head -1 | sed 's/.*clang version \([[:digit:]]\+.[[:digit:]]\+.[[:digit:]]\+\).*/\1/g')
- ifeq ($(call version-lt3,$(CLANG_VERSION),16.0.6),1)
- $(warning Warning: Disabled BPF skeletons as at least $(CLANG) version 16.0.6 is reported to be a working setup with the current of BPF based perf features)
+ ifeq ($(call version-lt3,$(CLANG_VERSION),12.0.1),1)
+ $(warning Warning: Disabled BPF skeletons as reliable BTF generation needs at least $(CLANG) version 12.0.1)
BUILD_BPF_SKEL := 0
endif
endif
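
For reference, the CLANG_VERSION extraction in the hunk above can be
exercised on its own; this sketch feeds it a canned "clang --version"
first line instead of invoking a real clang (the sample version string
is made up, and GNU sed is assumed for the \+ extension):

```shell
# Run the Makefile's sed pipeline against a canned version line.
line='Ubuntu clang version 14.0.0-1ubuntu1.1'
ver=$(printf '%s\n' "$line" \
      | head -1 \
      | sed 's/.*clang version \([[:digit:]]\+.[[:digit:]]\+.[[:digit:]]\+\).*/\1/g')
echo "$ver"    # 14.0.0
```

version-lt3 then compares the resulting three-component string against
12.0.1.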
--
2.43.0
* [PATCH 2/2] perf trace: Rewrite BPF programs to pass the verifier
2024-10-07 5:14 [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 Howard Chu
2024-10-07 5:14 ` [PATCH 1/2] perf build: Change the clang version check back to 12.0.1 Howard Chu
@ 2024-10-07 5:14 ` Howard Chu
2024-10-10 9:06 ` [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 James Clark
2 siblings, 0 replies; 6+ messages in thread
From: Howard Chu @ 2024-10-07 5:14 UTC (permalink / raw)
To: peterz
Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
irogers, adrian.hunter, kan.liang, linux-perf-users, linux-kernel,
Howard Chu
Rewrite the code to add more memory bounds checks in order to pass the
BPF verifier; no logic is changed.
This rewrite is centered around two main ideas:
- Always use a variable instead of an expression in an if block's condition,
so the BPF verifier keeps track of the correct register.
- Delay each check until just before the function call that needs it, i.e. as
late as possible.
Things that could still be done better:
- Instead of allowing the theoretical maximum payload for a 6-argument
augmentation, reduce the payload to a smaller fixed size.
Signed-off-by: Howard Chu <howardchu95@gmail.com>
---
.../bpf_skel/augmented_raw_syscalls.bpf.c | 117 ++++++++++--------
1 file changed, 63 insertions(+), 54 deletions(-)
diff --git a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
index b2f17cca014b..a2b67365cedf 100644
--- a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
+++ b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
@@ -277,25 +277,31 @@ int sys_enter_rename(struct syscall_enter_args *args)
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *oldpath_arg = (const void *)args->args[0],
*newpath_arg = (const void *)args->args[1];
- unsigned int len = sizeof(augmented_args->args), oldpath_len, newpath_len;
+ unsigned int len = sizeof(augmented_args->args), oldpath_len, newpath_len, aligned_size;
if (augmented_args == NULL)
- return 1; /* Failure: don't filter */
+ goto failure;
len += 2 * sizeof(u64); // The overhead of size and err, just before the payload...
oldpath_len = augmented_arg__read_str(&augmented_args->arg, oldpath_arg, sizeof(augmented_args->arg.value));
- augmented_args->arg.size = PERF_ALIGN(oldpath_len + 1, sizeof(u64));
- len += augmented_args->arg.size;
+ aligned_size = PERF_ALIGN(oldpath_len + 1, sizeof(u64));
+ augmented_args->arg.size = aligned_size;
+ len += aligned_size;
- struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + augmented_args->arg.size;
+ /* Every read from userspace is limited to value size */
+ if (aligned_size > sizeof(augmented_args->arg.value))
+ goto failure;
+
+ struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + aligned_size;
newpath_len = augmented_arg__read_str(arg2, newpath_arg, sizeof(augmented_args->arg.value));
arg2->size = newpath_len;
-
len += newpath_len;
return augmented__output(args, augmented_args, len);
+failure:
+ return 1; /* Failure: don't filter */
}
SEC("tp/syscalls/sys_enter_renameat2")
@@ -304,25 +310,31 @@ int sys_enter_renameat2(struct syscall_enter_args *args)
struct augmented_args_payload *augmented_args = augmented_args_payload();
const void *oldpath_arg = (const void *)args->args[1],
*newpath_arg = (const void *)args->args[3];
- unsigned int len = sizeof(augmented_args->args), oldpath_len, newpath_len;
+ unsigned int len = sizeof(augmented_args->args), oldpath_len, newpath_len, aligned_size;
if (augmented_args == NULL)
- return 1; /* Failure: don't filter */
+ goto failure;
len += 2 * sizeof(u64); // The overhead of size and err, just before the payload...
oldpath_len = augmented_arg__read_str(&augmented_args->arg, oldpath_arg, sizeof(augmented_args->arg.value));
- augmented_args->arg.size = PERF_ALIGN(oldpath_len + 1, sizeof(u64));
- len += augmented_args->arg.size;
+ aligned_size = PERF_ALIGN(oldpath_len + 1, sizeof(u64));
+ augmented_args->arg.size = aligned_size;
+ len += aligned_size;
- struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + augmented_args->arg.size;
+ /* Every read from userspace is limited to value size */
+ if (aligned_size > sizeof(augmented_args->arg.value))
+ goto failure;
+
+ struct augmented_arg *arg2 = (void *)&augmented_args->arg.value + aligned_size;
newpath_len = augmented_arg__read_str(arg2, newpath_arg, sizeof(augmented_args->arg.value));
arg2->size = newpath_len;
-
len += newpath_len;
return augmented__output(args, augmented_args, len);
+failure:
+ return 1; /* Failure: don't filter */
}
#define PERF_ATTR_SIZE_VER0 64 /* sizeof first published struct */
@@ -422,12 +434,11 @@ static bool pid_filter__has(struct pids_filtered *pids, pid_t pid)
static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
{
- bool augmented, do_output = false;
- int zero = 0, size, aug_size, index, output = 0,
- value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value);
- unsigned int nr, *beauty_map;
+ bool do_augment = false;
+ int zero = 0, value_size = sizeof(struct augmented_arg) - sizeof(u64);
+ unsigned int nr, *beauty_map, len = sizeof(struct syscall_enter_args);
struct beauty_payload_enter *payload;
- void *arg, *payload_offset;
+ void *payload_offset, *value_offset;
/* fall back to do predefined tail call */
if (args == NULL)
@@ -436,12 +447,13 @@ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
/* use syscall number to get beauty_map entry */
nr = (__u32)args->syscall_nr;
beauty_map = bpf_map_lookup_elem(&beauty_map_enter, &nr);
+ if (beauty_map == NULL)
+ return 1;
/* set up payload for output */
payload = bpf_map_lookup_elem(&beauty_payload_enter_map, &zero);
payload_offset = (void *)&payload->aug_args;
-
- if (beauty_map == NULL || payload == NULL)
+ if (payload == NULL)
return 1;
/* copy the sys_enter header, which has the syscall_nr */
@@ -457,52 +469,49 @@ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
* buffer: -1 * (index of paired len) -> value of paired len (maximum: TRACE_AUG_MAX_BUF)
*/
for (int i = 0; i < 6; i++) {
- arg = (void *)args->args[i];
- augmented = false;
- size = beauty_map[i];
- aug_size = size; /* size of the augmented data read from user space */
+ int augment_size = beauty_map[i], augment_size_with_header;
+ void *addr = (void *)args->args[i];
+ bool is_augmented = false;
- if (size == 0 || arg == NULL)
+ if (augment_size == 0 || addr == NULL)
continue;
- if (size == 1) { /* string */
- aug_size = bpf_probe_read_user_str(((struct augmented_arg *)payload_offset)->value, value_size, arg);
- /* minimum of 0 to pass the verifier */
- if (aug_size < 0)
- aug_size = 0;
-
- augmented = true;
- } else if (size > 0 && size <= value_size) { /* struct */
- if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, size, arg))
- augmented = true;
- } else if (size < 0 && size >= -6) { /* buffer */
- index = -(size + 1);
- aug_size = args->args[index];
-
- if (aug_size > TRACE_AUG_MAX_BUF)
- aug_size = TRACE_AUG_MAX_BUF;
-
- if (aug_size > 0) {
- if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, aug_size, arg))
- augmented = true;
- }
+ value_offset = ((struct augmented_arg *)payload_offset)->value;
+
+ if (augment_size == 1) { /* string */
+ augment_size = bpf_probe_read_user_str(value_offset, value_size, addr);
+ is_augmented = true;
+ } else if (augment_size > 1 && augment_size <= value_size) { /* struct */
+ if (!bpf_probe_read_user(value_offset, value_size, addr))
+ is_augmented = true;
+ } else if (augment_size < 0 && augment_size >= -6) { /* buffer */
+ int index = -(augment_size + 1);
+
+ augment_size = args->args[index] > TRACE_AUG_MAX_BUF ? TRACE_AUG_MAX_BUF : args->args[index];
+ if (!bpf_probe_read_user(value_offset, augment_size, addr))
+ is_augmented = true;
}
- /* write data to payload */
- if (augmented) {
- int written = offsetof(struct augmented_arg, value) + aug_size;
+ /* Augmented data size is limited to value size */
+ if (augment_size > value_size)
+ augment_size = value_size;
+
+ /* Explicitly define this variable to pass the verifier */
+ augment_size_with_header = sizeof(u64) + augment_size;
- ((struct augmented_arg *)payload_offset)->size = aug_size;
- output += written;
- payload_offset += written;
- do_output = true;
+ /* Write data to payload */
+ if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) {
+ ((struct augmented_arg *)payload_offset)->size = augment_size;
+ do_augment = true;
+ len += augment_size_with_header;
+ payload_offset += augment_size_with_header;
}
}
- if (!do_output)
+ if (!do_augment || len > sizeof(struct beauty_payload_enter))
return 1;
- return augmented__beauty_output(ctx, payload, sizeof(struct syscall_enter_args) + output);
+ return augmented__beauty_output(ctx, payload, len);
}
SEC("tp/raw_syscalls/sys_enter")
--
2.43.0
* Re: [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12
2024-10-07 5:14 [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 Howard Chu
2024-10-07 5:14 ` [PATCH 1/2] perf build: Change the clang version check back to 12.0.1 Howard Chu
2024-10-07 5:14 ` [PATCH 2/2] perf trace: Rewrite BPF programs to pass the verifier Howard Chu
@ 2024-10-10 9:06 ` James Clark
2024-10-11 0:20 ` Namhyung Kim
2 siblings, 1 reply; 6+ messages in thread
From: James Clark @ 2024-10-10 9:06 UTC (permalink / raw)
To: Howard Chu, Namhyung Kim, Arnaldo Carvalho de Melo
Cc: mingo, mark.rutland, alexander.shishkin, jolsa, irogers,
adrian.hunter, kan.liang, linux-perf-users, linux-kernel,
Peter Zijlstra
On 07/10/2024 6:14 am, Howard Chu wrote:
> The new augmentation feature in perf trace, along with the protocol
> change (from payload to payload->value), breaks the clang 12 build.
>
> perf trace actually builds for any clang version newer than clang 16.
> However, as pointed out by Namhyung Kim <namhyung@kernel.org> and Ian
> Rogers <irogers@google.com>, clang 16, which was released in 2023, is
> still too new for most users. Additionally, as James Clark
> <james.clark@linaro.org> noted, some commonly used distributions do not
> yet support clang 16. Therefore, breaking BPF features between clang 12
> and clang 15 is not a good approach.
>
> This patch series rewrites the BPF program in a way that allows it to
> pass the BPF verifier, even when the BPF bytecode is generated by older
> versions of clang.
>
> However, I have only tested it till clang 14, as older versions are not
> supported by my distribution.
>
> Howard Chu (2):
> perf build: Change the clang check back to 12.0.1
> perf trace: Rewrite BPF code to pass the verifier
>
> tools/perf/Makefile.config | 4 +-
> .../bpf_skel/augmented_raw_syscalls.bpf.c | 117 ++++++++++--------
> 2 files changed, 65 insertions(+), 56 deletions(-)
>
Tested with clang 15:
$ sudo perf trace -e write --max-events=100 -- echo hello
0.000 ( 0.014 ms): echo/834165 write(fd: 1, buf: hello\10, count: 6)
Tested-by: James Clark <james.clark@linaro.org>
* Re: [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12
2024-10-10 9:06 ` [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 James Clark
@ 2024-10-11 0:20 ` Namhyung Kim
2024-10-11 2:16 ` Howard Chu
0 siblings, 1 reply; 6+ messages in thread
From: Namhyung Kim @ 2024-10-11 0:20 UTC (permalink / raw)
To: James Clark
Cc: Howard Chu, Arnaldo Carvalho de Melo, mingo, mark.rutland,
alexander.shishkin, jolsa, irogers, adrian.hunter, kan.liang,
linux-perf-users, linux-kernel, Peter Zijlstra
On Thu, Oct 10, 2024 at 10:06:05AM +0100, James Clark wrote:
>
>
> On 07/10/2024 6:14 am, Howard Chu wrote:
> > The new augmentation feature in perf trace, along with the protocol
> > change (from payload to payload->value), breaks the clang 12 build.
> >
> > perf trace actually builds for any clang version newer than clang 16.
> > However, as pointed out by Namhyung Kim <namhyung@kernel.org> and Ian
> > Rogers <irogers@google.com>, clang 16, which was released in 2023, is
> > still too new for most users. Additionally, as James Clark
> > <james.clark@linaro.org> noted, some commonly used distributions do not
> > yet support clang 16. Therefore, breaking BPF features between clang 12
> > and clang 15 is not a good approach.
> >
> > This patch series rewrites the BPF program in a way that allows it to
> > pass the BPF verifier, even when the BPF bytecode is generated by older
> > versions of clang.
> >
> > However, I have only tested it till clang 14, as older versions are not
> > supported by my distribution.
> >
> > Howard Chu (2):
> > perf build: Change the clang check back to 12.0.1
> > perf trace: Rewrite BPF code to pass the verifier
> >
> > tools/perf/Makefile.config | 4 +-
> > .../bpf_skel/augmented_raw_syscalls.bpf.c | 117 ++++++++++--------
> > 2 files changed, 65 insertions(+), 56 deletions(-)
> >
>
> Tested with clang 15:
>
> $ sudo perf trace -e write --max-events=100 -- echo hello
> 0.000 ( 0.014 ms): echo/834165 write(fd: 1, buf: hello\10, count: 6)
>
> Tested-by: James Clark <james.clark@linaro.org>
I got this on my system (clang 16). The kernel refused to load it.
$ sudo ./perf trace -e write --max-events=10 -- echo hello
libbpf: prog 'sys_enter': BPF program load failed: Permission denied
libbpf: prog 'sys_enter': -- BEGIN PROG LOAD LOG --
0: R1=ctx() R10=fp0
; int sys_enter(struct syscall_enter_args *args) @ augmented_raw_syscalls.bpf.c:518
0: (bf) r7 = r1 ; R1=ctx() R7_w=ctx()
; return bpf_get_current_pid_tgid(); @ augmented_raw_syscalls.bpf.c:427
1: (85) call bpf_get_current_pid_tgid#14 ; R0_w=scalar()
2: (63) *(u32 *)(r10 -4) = r0 ; R0_w=scalar() R10=fp0 fp-8=mmmm????
3: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
; @ augmented_raw_syscalls.bpf.c:0
4: (07) r2 += -4 ; R2_w=fp-4
; return bpf_map_lookup_elem(pids, &pid) != NULL; @ augmented_raw_syscalls.bpf.c:432
5: (18) r1 = 0xffff9dcccdfe7000 ; R1_w=map_ptr(map=pids_filtered,ks=4,vs=1)
7: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1)
8: (bf) r1 = r0 ; R0=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1) R1_w=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1)
9: (b7) r0 = 0 ; R0_w=0
; if (pid_filter__has(&pids_filtered, getpid())) @ augmented_raw_syscalls.bpf.c:531
10: (55) if r1 != 0x0 goto pc+161 ; R1_w=0
11: (b7) r6 = 0 ; R6_w=0
; int key = 0; @ augmented_raw_syscalls.bpf.c:150
12: (63) *(u32 *)(r10 -4) = r6 ; R6_w=0 R10=fp0 fp-8=0000????
13: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
; @ augmented_raw_syscalls.bpf.c:0
14: (07) r2 += -4 ; R2_w=fp-4
; return bpf_map_lookup_elem(&augmented_args_tmp, &key); @ augmented_raw_syscalls.bpf.c:151
15: (18) r1 = 0xffff9dcc73f8f200 ; R1_w=map_ptr(map=augmented_args_,ks=4,vs=8272)
17: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272)
18: (bf) r8 = r0 ; R0=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272) R8_w=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272)
19: (b7) r0 = 1 ; R0_w=1
; if (augmented_args == NULL) @ augmented_raw_syscalls.bpf.c:535
20: (15) if r8 == 0x0 goto pc+151 ; R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
; bpf_probe_read_kernel(&augmented_args->args, sizeof(augmented_args->args), args); @ augmented_raw_syscalls.bpf.c:538
21: (bf) r1 = r8 ; R1_w=map_value(map=augmented_args_,ks=4,vs=8272) R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
22: (b7) r2 = 64 ; R2_w=64
23: (bf) r3 = r7 ; R3_w=ctx() R7=ctx()
24: (85) call bpf_probe_read_kernel#113 ; R0_w=scalar()
; int zero = 0, value_size = sizeof(struct augmented_arg) - sizeof(u64); @ augmented_raw_syscalls.bpf.c:438
25: (63) *(u32 *)(r10 -4) = r6 ; R6=0 R10=fp0 fp-8=0000????
; nr = (__u32)args->syscall_nr; @ augmented_raw_syscalls.bpf.c:448
26: (79) r1 = *(u64 *)(r8 +8) ; R1_w=scalar() R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
27: (63) *(u32 *)(r10 -8) = r1 ; R1_w=scalar() R10=fp0 fp-8=0000scalar()
28: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
; bpf_probe_read_kernel(&augmented_args->args, sizeof(augmented_args->args), args); @ augmented_raw_syscalls.bpf.c:538
29: (07) r2 += -8 ; R2_w=fp-8
; beauty_map = bpf_map_lookup_elem(&beauty_map_enter, &nr); @ augmented_raw_syscalls.bpf.c:449
30: (18) r1 = 0xffff9dcccdfe5800 ; R1_w=map_ptr(map=beauty_map_ente,ks=4,vs=24)
32: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=3,map=beauty_map_ente,ks=4,vs=24)
; if (beauty_map == NULL) @ augmented_raw_syscalls.bpf.c:450
33: (15) if r0 == 0x0 goto pc+132 ; R0=map_value(map=beauty_map_ente,ks=4,vs=24)
34: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
; @ augmented_raw_syscalls.bpf.c:0
35: (07) r2 += -4 ; R2_w=fp-4
; payload = bpf_map_lookup_elem(&beauty_payload_enter_map, &zero); @ augmented_raw_syscalls.bpf.c:454
36: (18) r1 = 0xffff9dcc73f8e800 ; R1_w=map_ptr(map=beauty_payload_,ks=4,vs=24688)
38: (7b) *(u64 *)(r10 -16) = r0 ; R0=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16_w=map_value(map=beauty_map_ente,ks=4,vs=24)
39: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=4,map=beauty_payload_,ks=4,vs=24688)
40: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16_w=map_value(map=beauty_map_ente,ks=4,vs=24)
; if (payload == NULL) @ augmented_raw_syscalls.bpf.c:456
41: (15) if r0 == 0x0 goto pc+124 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688)
42: (7b) *(u64 *)(r10 -48) = r7 ; R7=ctx() R10=fp0 fp-48_w=ctx()
; __builtin_memcpy(&payload->args, args, sizeof(struct syscall_enter_args)); @ augmented_raw_syscalls.bpf.c:460
43: (79) r1 = *(u64 *)(r8 +56) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
44: (7b) *(u64 *)(r0 +56) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
45: (79) r1 = *(u64 *)(r8 +48) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
46: (7b) *(u64 *)(r0 +48) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
47: (79) r1 = *(u64 *)(r8 +40) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
48: (7b) *(u64 *)(r0 +40) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
49: (79) r1 = *(u64 *)(r8 +32) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
50: (7b) *(u64 *)(r0 +32) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
51: (79) r1 = *(u64 *)(r8 +24) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
52: (7b) *(u64 *)(r0 +24) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
53: (79) r1 = *(u64 *)(r8 +16) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
54: (7b) *(u64 *)(r0 +16) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
55: (79) r1 = *(u64 *)(r8 +8) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
56: (7b) *(u64 *)(r0 +8) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
57: (79) r1 = *(u64 *)(r8 +0) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
58: (7b) *(u64 *)(r0 +0) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
59: (b7) r1 = 64 ; R1_w=64
60: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=64 R10=fp0 fp-24_w=64
61: (7b) *(u64 *)(r10 -40) = r8 ; R8=map_value(map=augmented_args_,ks=4,vs=8272) R10=fp0 fp-40_w=map_value(map=augmented_args_,ks=4,vs=8272)
62: (bf) r7 = r8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272) R8=map_value(map=augmented_args_,ks=4,vs=8272)
63: (07) r7 += 16 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=16)
64: (7b) *(u64 *)(r10 -56) = r0 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R10=fp0 fp-56_w=map_value(map=beauty_payload_,ks=4,vs=24688)
; payload_offset = (void *)&payload->aug_args; @ augmented_raw_syscalls.bpf.c:455
65: (bf) r9 = r0 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688)
66: (07) r9 += 64 ; R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
67: (b7) r1 = 0 ; R1_w=0
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
68: (7b) *(u64 *)(r10 -32) = r1 ; R1_w=0 R10=fp0 fp-32_w=0
69: (05) goto pc+11
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R6=0
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=16)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=64) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
90: (07) r1 += 8 ; R1=map_value(map=beauty_payload_,ks=4,vs=24688,off=72)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0_w=scalar(id=5,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=5,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1_w=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2=4096
100: (6d) if r2 s> r8 goto pc+1 ; R2=4096 R8=4096
101: (b7) r8 = 4096 ; R8_w=4096
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1_w=1
103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
105: (07) r1 += 8 ; R1_w=4104
106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
107: (67) r2 <<= 32 ; R2_w=0x100800000000
108: (77) r2 >>= 32 ; R2=4104
109: (25) if r2 > 0x1008 goto pc-32 ; R2=4104
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=64 R10=fp0 fp-24=64
112: (0f) r1 += r3 ; R1_w=4168 R3_w=64
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=4168 R10=fp0 fp-24_w=4168
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=24)
79: (07) r6 += 4 ; R6_w=4
80: (15) if r6 == 0x18 goto pc+56 ; R6_w=4
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=4) R6_w=4
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=4) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=24)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4176)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0_w=scalar(id=6,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=6,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2_w=4096
100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
101: (b7) r8 = 4096 ; R8_w=4096
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1_w=1
103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
105: (07) r1 += 8 ; R1_w=4104
106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
107: (67) r2 <<= 32 ; R2_w=0x100800000000
108: (77) r2 >>= 32 ; R2_w=4104
109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=4168 R10=fp0 fp-24=4168
112: (0f) r1 += r3 ; R1_w=8272 R3_w=4168
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=8272 R10=fp0 fp-24_w=8272
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=32)
79: (07) r6 += 4 ; R6=8
80: (15) if r6 == 0x18 goto pc+56 ; R6=8
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=8) R6=8
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=8) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=32)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8280)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0=scalar(id=7,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=7,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1_w=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2_w=4096
100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
101: (b7) r8 = 4096 ; R8_w=4096
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1_w=1
103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
105: (07) r1 += 8 ; R1_w=4104
106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
107: (67) r2 <<= 32 ; R2_w=0x100800000000
108: (77) r2 >>= 32 ; R2_w=4104
109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=8272 R10=fp0 fp-24=8272
112: (0f) r1 += r3 ; R1_w=12376 R3_w=8272
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=12376 R10=fp0 fp-24_w=12376
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=40)
79: (07) r6 += 4 ; R6_w=12
80: (15) if r6 == 0x18 goto pc+56 ; R6_w=12
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4_w=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=12) R6_w=12
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=12) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=40)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12384)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0_w=scalar(id=8,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=8,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1_w=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2_w=4096
100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
101: (b7) r8 = 4096 ; R8=4096
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1_w=1
103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
104: (bf) r1 = r8 ; R1_w=4096 R8=4096
105: (07) r1 += 8 ; R1_w=4104
106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
107: (67) r2 <<= 32 ; R2_w=0x100800000000
108: (77) r2 >>= 32 ; R2_w=4104
109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=12376 R10=fp0 fp-24=12376
112: (0f) r1 += r3 ; R1_w=16480 R3_w=12376
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=16480 R10=fp0 fp-24_w=16480
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=48)
79: (07) r6 += 4 ; R6_w=16
80: (15) if r6 == 0x18 goto pc+56 ; R6_w=16
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=16) R6_w=16
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=16) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=48)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16488)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0_w=scalar(id=9,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=9,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1_w=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2_w=4096
100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
101: (b7) r8 = 4096 ; R8_w=4096
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1=1
103: (15) if r1 == 0x0 goto pc-26 ; R1=1
104: (bf) r1 = r8 ; R1_w=4096 R8=4096
105: (07) r1 += 8 ; R1_w=4104
106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
107: (67) r2 <<= 32 ; R2_w=0x100800000000
108: (77) r2 >>= 32 ; R2_w=4104
109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=16480 R10=fp0 fp-24=16480
112: (0f) r1 += r3 ; R1_w=20584 R3_w=16480
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=20584 R10=fp0 fp-24_w=20584
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56)
79: (07) r6 += 4 ; R6_w=20
80: (15) if r6 == 0x18 goto pc+56 ; R6_w=20
; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=20) R6_w=20
83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=20) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
; @ augmented_raw_syscalls.bpf.c:0
87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56)
; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20592)
; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
92: (b7) r2 = 4096 ; R2_w=4096
93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
95: (bf) r8 = r0 ; R0_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096)
96: (b7) r1 = 1 ; R1_w=1
; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
99: (b7) r2 = 4096 ; R2_w=4096
100: (6d) if r2 s> r8 goto pc+1 102: R0_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R1_w=1 R2_w=4096 R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R6_w=20 R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56) R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4095) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584) R10=fp0 fp-8=mmmmmmmm fp-16=map_value(map=beauty_map_ente,ks=4,vs=24) fp-24_w=20584 fp-32_w=1 fp-40=map_value(map=augmented_args_,ks=4,vs=8272) fp-48=ctx() fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
102: (57) r1 &= 1 ; R1_w=1
103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
104: (bf) r1 = r8 ; R1_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R8_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095)
105: (07) r1 += 8 ; R1_w=scalar(smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008)
106: (bf) r2 = r1 ; R1_w=scalar(id=13,smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008) R2_w=scalar(id=13,smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008)
107: (67) r2 <<= 32 ; R2_w=scalar(smax=0x100700000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
108: (77) r2 >>= 32 ; R2_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff))
; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=20584 R10=fp0 fp-24_w=20584
112: (0f) r1 += r3 ; R1_w=scalar(smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R3_w=20584
; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
113: (0f) r9 += r2 ; R2_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff)) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584,smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff))
114: (b7) r2 = 1 ; R2_w=1
115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R10=fp0 fp-24_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
117: (05) goto pc-40
; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=64)
79: (07) r6 += 4 ; R6_w=24
80: (15) if r6 == 0x18 goto pc+56 ; R6_w=24
; if (!bpf_probe_read_user(value_offset, augment_size, addr)) @ augmented_raw_syscalls.bpf.c:491
137: (79) r5 = *(u64 *)(r10 -24) ; R5_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R10=fp0 fp-24=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
138: (bf) r2 = r5 ; R2_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R5_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
139: (67) r2 <<= 32 ; R2_w=scalar(smax=0x606f00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
140: (77) r2 >>= 32 ; R2_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
141: (b7) r1 = 1 ; R1_w=1
142: (b7) r3 = 24689 ; R3_w=24689
143: (2d) if r3 > r2 goto pc+1 145: R0=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R1=1 R2=scalar(smin=smin32=0,smax=umax=smax32=umax32=24688,var_off=(0x0; 0x7fff)) R3=24689 R4=map_value(map=beauty_map_ente,ks=4,vs=24) R5=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R6=24 R7=map_value(map=augmented_args_,ks=4,vs=8272,off=64) R8=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584,smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff)) R10=fp0 fp-8=mmmmmmmm fp-16=map_value(map=beauty_map_ente,ks=4,vs=24) fp-24=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) fp-32=1 fp-40=map_value(map=augmented_args_,ks=4,vs=8272) fp-48=ctx() fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
; if (!bpf_probe_read_user(value_offset, augment_size, addr)) @ augmented_raw_syscalls.bpf.c:491
145: (79) r2 = *(u64 *)(r10 -32) ; R2_w=1 R10=fp0 fp-32=1
; if (!do_augment || len > sizeof(struct beauty_payload_enter)) @ augmented_raw_syscalls.bpf.c:511
146: (5f) r2 &= r1 ; R1=1 R2_w=1
147: (57) r2 &= 1 ; R2_w=1
148: (79) r7 = *(u64 *)(r10 -48) ; R7_w=ctx() R10=fp0 fp-48=ctx()
149: (79) r8 = *(u64 *)(r10 -40) ; R8_w=map_value(map=augmented_args_,ks=4,vs=8272) R10=fp0 fp-40=map_value(map=augmented_args_,ks=4,vs=8272)
150: (79) r4 = *(u64 *)(r10 -56) ; R4_w=map_value(map=beauty_payload_,ks=4,vs=24688) R10=fp0 fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
151: (55) if r2 != 0x0 goto pc+1 ; R2_w=1
; return bpf_perf_event_output(ctx, &__augmented_syscalls__, BPF_F_CURRENT_CPU, data, len); @ augmented_raw_syscalls.bpf.c:162
153: (67) r5 <<= 32 ; R5_w=scalar(smax=0x606f00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
154: (77) r5 >>= 32 ; R5_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
155: (bf) r1 = r7 ; R1_w=ctx() R7_w=ctx()
156: (18) r2 = 0xffffaed2058d9000 ; R2_w=map_ptr(map=__augmented_sys,ks=4,vs=4)
158: (18) r3 = 0xffffffff ; R3_w=0xffffffff
160: (85) call bpf_perf_event_output#25
R5 unbounded memory access, use 'var &= const' or 'if (var < const)'
processed 387 insns (limit 1000000) max_states_per_insn 1 total_states 20 peak_states 20 mark_read 13
-- END PROG LOAD LOG --
libbpf: prog 'sys_enter': failed to load: -13
libbpf: failed to load object 'augmented_raw_syscalls_bpf'
libbpf: failed to load BPF skeleton 'augmented_raw_syscalls_bpf': -13
libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
hello
0.000 ( 0.008 ms): write(fd: 1, buf: , count: 6) =
Also, as James said, the buf doesn't show anything and the return
value is missing.
Thanks,
Namhyung
* Re: [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12
2024-10-11 0:20 ` Namhyung Kim
@ 2024-10-11 2:16 ` Howard Chu
0 siblings, 0 replies; 6+ messages in thread
From: Howard Chu @ 2024-10-11 2:16 UTC (permalink / raw)
To: Namhyung Kim
Cc: James Clark, Arnaldo Carvalho de Melo, mingo, mark.rutland,
alexander.shishkin, jolsa, irogers, adrian.hunter, kan.liang,
linux-perf-users, linux-kernel, Peter Zijlstra
Hi Namhyung,
Fixed it in v2 (Link:
https://lore.kernel.org/linux-perf-users/20241011021403.4089793-1-howardchu95@gmail.com/),
and tested it on clang-14 ~ clang-18 (did a make clean every time, just
in case).
Thanks,
Howard
On Thu, Oct 10, 2024 at 5:20 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Thu, Oct 10, 2024 at 10:06:05AM +0100, James Clark wrote:
> >
> >
> > On 07/10/2024 6:14 am, Howard Chu wrote:
> > > The new augmentation feature in perf trace, along with the protocol
> > > change (from payload to payload->value), breaks the clang 12 build.
> > >
> > > perf trace actually builds for any clang version newer than clang 16.
> > > However, as pointed out by Namhyung Kim <namhyung@kernel.org> and Ian
> > > Rogers <irogers@google.com>, clang 16, which was released in 2023, is
> > > still too new for most users. Additionally, as James Clark
> > > <james.clark@linaro.org> noted, some commonly used distributions do not
> > > yet support clang 16. Therefore, breaking BPF features between clang 12
> > > and clang 15 is not a good approach.
> > >
> > > This patch series rewrites the BPF program in a way that allows it to
> > > pass the BPF verifier, even when the BPF bytecode is generated by older
> > > versions of clang.
> > >
> > > However, I have only tested it till clang 14, as older versions are not
> > > supported by my distribution.
> > >
> > > Howard Chu (2):
> > > perf build: Change the clang check back to 12.0.1
> > > perf trace: Rewrite BPF code to pass the verifier
> > >
> > > tools/perf/Makefile.config | 4 +-
> > > .../bpf_skel/augmented_raw_syscalls.bpf.c | 117 ++++++++++--------
> > > 2 files changed, 65 insertions(+), 56 deletions(-)
> > >
> >
> > Tested with clang 15:
> >
> > $ sudo perf trace -e write --max-events=100 -- echo hello
> > 0.000 ( 0.014 ms): echo/834165 write(fd: 1, buf: hello\10, count: 6)
> > =
> >
> > Tested-by: James Clark <james.clark@linaro.org>
>
> I got this on my system (clang 16). The kernel refused to load it.
>
> $ sudo ./perf trace -e write --max-events=10 -- echo hello
> libbpf: prog 'sys_enter': BPF program load failed: Permission denied
> libbpf: prog 'sys_enter': -- BEGIN PROG LOAD LOG --
> 0: R1=ctx() R10=fp0
> ; int sys_enter(struct syscall_enter_args *args) @ augmented_raw_syscalls.bpf.c:518
> 0: (bf) r7 = r1 ; R1=ctx() R7_w=ctx()
> ; return bpf_get_current_pid_tgid(); @ augmented_raw_syscalls.bpf.c:427
> 1: (85) call bpf_get_current_pid_tgid#14 ; R0_w=scalar()
> 2: (63) *(u32 *)(r10 -4) = r0 ; R0_w=scalar() R10=fp0 fp-8=mmmm????
> 3: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
> ; @ augmented_raw_syscalls.bpf.c:0
> 4: (07) r2 += -4 ; R2_w=fp-4
> ; return bpf_map_lookup_elem(pids, &pid) != NULL; @ augmented_raw_syscalls.bpf.c:432
> 5: (18) r1 = 0xffff9dcccdfe7000 ; R1_w=map_ptr(map=pids_filtered,ks=4,vs=1)
> 7: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1)
> 8: (bf) r1 = r0 ; R0=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1) R1_w=map_value_or_null(id=1,map=pids_filtered,ks=4,vs=1)
> 9: (b7) r0 = 0 ; R0_w=0
> ; if (pid_filter__has(&pids_filtered, getpid())) @ augmented_raw_syscalls.bpf.c:531
> 10: (55) if r1 != 0x0 goto pc+161 ; R1_w=0
> 11: (b7) r6 = 0 ; R6_w=0
> ; int key = 0; @ augmented_raw_syscalls.bpf.c:150
> 12: (63) *(u32 *)(r10 -4) = r6 ; R6_w=0 R10=fp0 fp-8=0000????
> 13: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
> ; @ augmented_raw_syscalls.bpf.c:0
> 14: (07) r2 += -4 ; R2_w=fp-4
> ; return bpf_map_lookup_elem(&augmented_args_tmp, &key); @ augmented_raw_syscalls.bpf.c:151
> 15: (18) r1 = 0xffff9dcc73f8f200 ; R1_w=map_ptr(map=augmented_args_,ks=4,vs=8272)
> 17: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272)
> 18: (bf) r8 = r0 ; R0=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272) R8_w=map_value_or_null(id=2,map=augmented_args_,ks=4,vs=8272)
> 19: (b7) r0 = 1 ; R0_w=1
> ; if (augmented_args == NULL) @ augmented_raw_syscalls.bpf.c:535
> 20: (15) if r8 == 0x0 goto pc+151 ; R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
> ; bpf_probe_read_kernel(&augmented_args->args, sizeof(augmented_args->args), args); @ augmented_raw_syscalls.bpf.c:538
> 21: (bf) r1 = r8 ; R1_w=map_value(map=augmented_args_,ks=4,vs=8272) R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
> 22: (b7) r2 = 64 ; R2_w=64
> 23: (bf) r3 = r7 ; R3_w=ctx() R7=ctx()
> 24: (85) call bpf_probe_read_kernel#113 ; R0_w=scalar()
> ; int zero = 0, value_size = sizeof(struct augmented_arg) - sizeof(u64); @ augmented_raw_syscalls.bpf.c:438
> 25: (63) *(u32 *)(r10 -4) = r6 ; R6=0 R10=fp0 fp-8=0000????
> ; nr = (__u32)args->syscall_nr; @ augmented_raw_syscalls.bpf.c:448
> 26: (79) r1 = *(u64 *)(r8 +8) ; R1_w=scalar() R8_w=map_value(map=augmented_args_,ks=4,vs=8272)
> 27: (63) *(u32 *)(r10 -8) = r1 ; R1_w=scalar() R10=fp0 fp-8=0000scalar()
> 28: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
> ; bpf_probe_read_kernel(&augmented_args->args, sizeof(augmented_args->args), args); @ augmented_raw_syscalls.bpf.c:538
> 29: (07) r2 += -8 ; R2_w=fp-8
> ; beauty_map = bpf_map_lookup_elem(&beauty_map_enter, &nr); @ augmented_raw_syscalls.bpf.c:449
> 30: (18) r1 = 0xffff9dcccdfe5800 ; R1_w=map_ptr(map=beauty_map_ente,ks=4,vs=24)
> 32: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=3,map=beauty_map_ente,ks=4,vs=24)
> ; if (beauty_map == NULL) @ augmented_raw_syscalls.bpf.c:450
> 33: (15) if r0 == 0x0 goto pc+132 ; R0=map_value(map=beauty_map_ente,ks=4,vs=24)
> 34: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
> ; @ augmented_raw_syscalls.bpf.c:0
> 35: (07) r2 += -4 ; R2_w=fp-4
> ; payload = bpf_map_lookup_elem(&beauty_payload_enter_map, &zero); @ augmented_raw_syscalls.bpf.c:454
> 36: (18) r1 = 0xffff9dcc73f8e800 ; R1_w=map_ptr(map=beauty_payload_,ks=4,vs=24688)
> 38: (7b) *(u64 *)(r10 -16) = r0 ; R0=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16_w=map_value(map=beauty_map_ente,ks=4,vs=24)
> 39: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=4,map=beauty_payload_,ks=4,vs=24688)
> 40: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16_w=map_value(map=beauty_map_ente,ks=4,vs=24)
> ; if (payload == NULL) @ augmented_raw_syscalls.bpf.c:456
> 41: (15) if r0 == 0x0 goto pc+124 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688)
> 42: (7b) *(u64 *)(r10 -48) = r7 ; R7=ctx() R10=fp0 fp-48_w=ctx()
> ; __builtin_memcpy(&payload->args, args, sizeof(struct syscall_enter_args)); @ augmented_raw_syscalls.bpf.c:460
> 43: (79) r1 = *(u64 *)(r8 +56) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 44: (7b) *(u64 *)(r0 +56) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 45: (79) r1 = *(u64 *)(r8 +48) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 46: (7b) *(u64 *)(r0 +48) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 47: (79) r1 = *(u64 *)(r8 +40) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 48: (7b) *(u64 *)(r0 +40) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 49: (79) r1 = *(u64 *)(r8 +32) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 50: (7b) *(u64 *)(r0 +32) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 51: (79) r1 = *(u64 *)(r8 +24) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 52: (7b) *(u64 *)(r0 +24) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 53: (79) r1 = *(u64 *)(r8 +16) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 54: (7b) *(u64 *)(r0 +16) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 55: (79) r1 = *(u64 *)(r8 +8) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 56: (7b) *(u64 *)(r0 +8) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 57: (79) r1 = *(u64 *)(r8 +0) ; R1_w=scalar() R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 58: (7b) *(u64 *)(r0 +0) = r1 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R1_w=scalar()
> 59: (b7) r1 = 64 ; R1_w=64
> 60: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=64 R10=fp0 fp-24_w=64
> 61: (7b) *(u64 *)(r10 -40) = r8 ; R8=map_value(map=augmented_args_,ks=4,vs=8272) R10=fp0 fp-40_w=map_value(map=augmented_args_,ks=4,vs=8272)
> 62: (bf) r7 = r8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272) R8=map_value(map=augmented_args_,ks=4,vs=8272)
> 63: (07) r7 += 16 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=16)
> 64: (7b) *(u64 *)(r10 -56) = r0 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R10=fp0 fp-56_w=map_value(map=beauty_payload_,ks=4,vs=24688)
> ; payload_offset = (void *)&payload->aug_args; @ augmented_raw_syscalls.bpf.c:455
> 65: (bf) r9 = r0 ; R0_w=map_value(map=beauty_payload_,ks=4,vs=24688) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688)
> 66: (07) r9 += 64 ; R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
> 67: (b7) r1 = 0 ; R1_w=0
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 68: (7b) *(u64 *)(r10 -32) = r1 ; R1_w=0 R10=fp0 fp-32_w=0
> 69: (05) goto pc+11
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R6=0
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=16)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=64) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
> 90: (07) r1 += 8 ; R1=map_value(map=beauty_payload_,ks=4,vs=24688,off=72)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0_w=scalar(id=5,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=5,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1_w=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2=4096
> 100: (6d) if r2 s> r8 goto pc+1 ; R2=4096 R8=4096
> 101: (b7) r8 = 4096 ; R8_w=4096
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1_w=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
> 104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
> 105: (07) r1 += 8 ; R1_w=4104
> 106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
> 107: (67) r2 <<= 32 ; R2_w=0x100800000000
> 108: (77) r2 >>= 32 ; R2=4104
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2=4104
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=64)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=64 R10=fp0 fp-24=64
> 112: (0f) r1 += r3 ; R1_w=4168 R3_w=64
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=4168 R10=fp0 fp-24_w=4168
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=24)
> 79: (07) r6 += 4 ; R6_w=4
> 80: (15) if r6 == 0x18 goto pc+56 ; R6_w=4
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=4) R6_w=4
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=4) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=24)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
> 90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=4176)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0_w=scalar(id=6,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=6,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2_w=4096
> 100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
> 101: (b7) r8 = 4096 ; R8_w=4096
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1_w=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
> 104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
> 105: (07) r1 += 8 ; R1_w=4104
> 106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
> 107: (67) r2 <<= 32 ; R2_w=0x100800000000
> 108: (77) r2 >>= 32 ; R2_w=4104
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=4168)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=4168 R10=fp0 fp-24=4168
> 112: (0f) r1 += r3 ; R1_w=8272 R3_w=4168
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=8272 R10=fp0 fp-24_w=8272
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=32)
> 79: (07) r6 += 4 ; R6=8
> 80: (15) if r6 == 0x18 goto pc+56 ; R6=8
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=8) R6=8
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=8) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=32)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
> 90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=8280)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0=scalar(id=7,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=7,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1_w=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2_w=4096
> 100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
> 101: (b7) r8 = 4096 ; R8_w=4096
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1_w=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
> 104: (bf) r1 = r8 ; R1_w=4096 R8_w=4096
> 105: (07) r1 += 8 ; R1_w=4104
> 106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
> 107: (67) r2 <<= 32 ; R2_w=0x100800000000
> 108: (77) r2 >>= 32 ; R2_w=4104
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=8272)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=8272 R10=fp0 fp-24=8272
> 112: (0f) r1 += r3 ; R1_w=12376 R3_w=8272
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=12376 R10=fp0 fp-24_w=12376
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=40)
> 79: (07) r6 += 4 ; R6_w=12
> 80: (15) if r6 == 0x18 goto pc+56 ; R6_w=12
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4_w=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=12) R6_w=12
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=12) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7=map_value(map=augmented_args_,ks=4,vs=8272,off=40)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
> 90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=12384)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0_w=scalar(id=8,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=8,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1_w=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2_w=4096
> 100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
> 101: (b7) r8 = 4096 ; R8=4096
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1_w=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
> 104: (bf) r1 = r8 ; R1_w=4096 R8=4096
> 105: (07) r1 += 8 ; R1_w=4104
> 106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
> 107: (67) r2 <<= 32 ; R2_w=0x100800000000
> 108: (77) r2 >>= 32 ; R2_w=4104
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=12376)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=12376 R10=fp0 fp-24=12376
> 112: (0f) r1 += r3 ; R1_w=16480 R3_w=12376
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=16480 R10=fp0 fp-24_w=16480
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=48)
> 79: (07) r6 += 4 ; R6_w=16
> 80: (15) if r6 == 0x18 goto pc+56 ; R6_w=16
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=16) R6_w=16
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=16) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=48)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
> 90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=16488)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0_w=scalar(id=9,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=9,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1_w=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2_w=4096
> 100: (6d) if r2 s> r8 goto pc+1 ; R2_w=4096 R8_w=4096
> 101: (b7) r8 = 4096 ; R8_w=4096
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1=1
> 104: (bf) r1 = r8 ; R1_w=4096 R8=4096
> 105: (07) r1 += 8 ; R1_w=4104
> 106: (bf) r2 = r1 ; R1_w=4104 R2_w=4104
> 107: (67) r2 <<= 32 ; R2_w=0x100800000000
> 108: (77) r2 >>= 32 ; R2_w=4104
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=4104
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8=4096 R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=16480)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=16480 R10=fp0 fp-24=16480
> 112: (0f) r1 += r3 ; R1_w=20584 R3_w=16480
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2_w=4104 R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=20584 R10=fp0 fp-24_w=20584
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56)
> 79: (07) r6 += 4 ; R6_w=20
> 80: (15) if r6 == 0x18 goto pc+56 ; R6_w=20
> ; int augment_size = beauty_map[i], augment_size_with_header; @ augmented_raw_syscalls.bpf.c:472
> 81: (bf) r1 = r4 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24) R4=map_value(map=beauty_map_ente,ks=4,vs=24)
> 82: (0f) r1 += r6 ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=20) R6_w=20
> 83: (61) r8 = *(u32 *)(r1 +0) ; R1_w=map_value(map=beauty_map_ente,ks=4,vs=24,off=20) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 84: (67) r8 <<= 32 ; R8_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 85: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 86: (15) if r8 == 0x0 goto pc-9 ; R8_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff,umin=1)
> ; @ augmented_raw_syscalls.bpf.c:0
> 87: (79) r3 = *(u64 *)(r7 +0) ; R3_w=scalar() R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56)
> ; if (augment_size == 0 || addr == NULL) @ augmented_raw_syscalls.bpf.c:476
> 88: (15) if r3 == 0x0 goto pc-11 ; R3_w=scalar(umin=1)
> ; value_offset = ((struct augmented_arg *)payload_offset)->value; @ augmented_raw_syscalls.bpf.c:479
> 89: (bf) r1 = r9 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
> 90: (07) r1 += 8 ; R1_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20592)
> ; if (augment_size == 1) { /* string */ @ augmented_raw_syscalls.bpf.c:481
> 91: (55) if r8 != 0x1 goto pc-22 ; R8_w=1
> ; augment_size = bpf_probe_read_user_str(value_offset, value_size, addr); @ augmented_raw_syscalls.bpf.c:482
> 92: (b7) r2 = 4096 ; R2_w=4096
> 93: (85) call bpf_probe_read_user_str#114 ; R0_w=scalar(smin=smin32=-4095,smax=smax32=4096)
> 94: (79) r4 = *(u64 *)(r10 -16) ; R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R10=fp0 fp-16=map_value(map=beauty_map_ente,ks=4,vs=24)
> 95: (bf) r8 = r0 ; R0_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R8_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096)
> 96: (b7) r1 = 1 ; R1_w=1
> ; if (augment_size > value_size) @ augmented_raw_syscalls.bpf.c:496
> 97: (67) r8 <<= 32 ; R8_w=scalar(smax=0x100000000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 98: (c7) r8 s>>= 32 ; R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4096)
> 99: (b7) r2 = 4096 ; R2_w=4096
> 100: (6d) if r2 s> r8 goto pc+1 102: R0_w=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R1_w=1 R2_w=4096 R4_w=map_value(map=beauty_map_ente,ks=4,vs=24) R6_w=20 R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=56) R8_w=scalar(smin=0xffffffff80000000,smax=smax32=4095) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584) R10=fp0 fp-8=mmmmmmmm fp-16=map_value(map=beauty_map_ente,ks=4,vs=24) fp-24_w=20584 fp-32_w=1 fp-40=map_value(map=augmented_args_,ks=4,vs=8272) fp-48=ctx() fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
> ; if (is_augmented && augment_size_with_header <= sizeof(struct augmented_arg)) { @ augmented_raw_syscalls.bpf.c:503
> 102: (57) r1 &= 1 ; R1_w=1
> 103: (15) if r1 == 0x0 goto pc-26 ; R1_w=1
> 104: (bf) r1 = r8 ; R1_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R8_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095)
> 105: (07) r1 += 8 ; R1_w=scalar(smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008)
> 106: (bf) r2 = r1 ; R1_w=scalar(id=13,smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008) R2_w=scalar(id=13,smin=0xffffffff80000008,smax=smax32=4103,smin32=0x80000008)
> 107: (67) r2 <<= 32 ; R2_w=scalar(smax=0x100700000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 108: (77) r2 >>= 32 ; R2_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 109: (25) if r2 > 0x1008 goto pc-32 ; R2_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff))
> ; ((struct augmented_arg *)payload_offset)->size = augment_size; @ augmented_raw_syscalls.bpf.c:504
> 110: (63) *(u32 *)(r9 +0) = r8 ; R8_w=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584)
> ; len += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:506
> 111: (79) r3 = *(u64 *)(r10 -24) ; R3_w=20584 R10=fp0 fp-24_w=20584
> 112: (0f) r1 += r3 ; R1_w=scalar(smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R3_w=20584
> ; payload_offset += augment_size_with_header; @ augmented_raw_syscalls.bpf.c:507
> 113: (0f) r9 += r2 ; R2_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff)) R9_w=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584,smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff))
> 114: (b7) r2 = 1 ; R2_w=1
> 115: (7b) *(u64 *)(r10 -32) = r2 ; R2_w=1 R10=fp0 fp-32_w=1
> 116: (7b) *(u64 *)(r10 -24) = r1 ; R1_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R10=fp0 fp-24_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
> 117: (05) goto pc-40
> ; for (int i = 0; i < 6; i++) { @ augmented_raw_syscalls.bpf.c:471
> 78: (07) r7 += 8 ; R7_w=map_value(map=augmented_args_,ks=4,vs=8272,off=64)
> 79: (07) r6 += 4 ; R6_w=24
> 80: (15) if r6 == 0x18 goto pc+56 ; R6_w=24
> ; if (!bpf_probe_read_user(value_offset, augment_size, addr)) @ augmented_raw_syscalls.bpf.c:491
> 137: (79) r5 = *(u64 *)(r10 -24) ; R5_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R10=fp0 fp-24=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
> 138: (bf) r2 = r5 ; R2_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R5_w=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070)
> 139: (67) r2 <<= 32 ; R2_w=scalar(smax=0x606f00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 140: (77) r2 >>= 32 ; R2_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 141: (b7) r1 = 1 ; R1_w=1
> 142: (b7) r3 = 24689 ; R3_w=24689
> 143: (2d) if r3 > r2 goto pc+1 145: R0=scalar(id=10,smin=smin32=-4095,smax=smax32=4096) R1=1 R2=scalar(smin=smin32=0,smax=umax=smax32=umax32=24688,var_off=(0x0; 0x7fff)) R3=24689 R4=map_value(map=beauty_map_ente,ks=4,vs=24) R5=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) R6=24 R7=map_value(map=augmented_args_,ks=4,vs=8272,off=64) R8=scalar(id=12,smin=0xffffffff80000000,smax=smax32=4095) R9=map_value(map=beauty_payload_,ks=4,vs=24688,off=20584,smin=smin32=0,smax=umax=smax32=umax32=4104,var_off=(0x0; 0x1fff)) R10=fp0 fp-8=mmmmmmmm fp-16=map_value(map=beauty_map_ente,ks=4,vs=24) fp-24=scalar(id=14,smin=0xffffffff80005070,smax=smax32=24687,smin32=0x80005070) fp-32=1 fp-40=map_value(map=augmented_args_,ks=4,vs=8272) fp-48=ctx() fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
> ; if (!bpf_probe_read_user(value_offset, augment_size, addr)) @ augmented_raw_syscalls.bpf.c:491
> 145: (79) r2 = *(u64 *)(r10 -32) ; R2_w=1 R10=fp0 fp-32=1
> ; if (!do_augment || len > sizeof(struct beauty_payload_enter)) @ augmented_raw_syscalls.bpf.c:511
> 146: (5f) r2 &= r1 ; R1=1 R2_w=1
> 147: (57) r2 &= 1 ; R2_w=1
> 148: (79) r7 = *(u64 *)(r10 -48) ; R7_w=ctx() R10=fp0 fp-48=ctx()
> 149: (79) r8 = *(u64 *)(r10 -40) ; R8_w=map_value(map=augmented_args_,ks=4,vs=8272) R10=fp0 fp-40=map_value(map=augmented_args_,ks=4,vs=8272)
> 150: (79) r4 = *(u64 *)(r10 -56) ; R4_w=map_value(map=beauty_payload_,ks=4,vs=24688) R10=fp0 fp-56=map_value(map=beauty_payload_,ks=4,vs=24688)
> 151: (55) if r2 != 0x0 goto pc+1 ; R2_w=1
> ; return bpf_perf_event_output(ctx, &__augmented_syscalls__, BPF_F_CURRENT_CPU, data, len); @ augmented_raw_syscalls.bpf.c:162
> 153: (67) r5 <<= 32 ; R5_w=scalar(smax=0x606f00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
> 154: (77) r5 >>= 32 ; R5_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 155: (bf) r1 = r7 ; R1_w=ctx() R7_w=ctx()
> 156: (18) r2 = 0xffffaed2058d9000 ; R2_w=map_ptr(map=__augmented_sys,ks=4,vs=4)
> 158: (18) r3 = 0xffffffff ; R3_w=0xffffffff
> 160: (85) call bpf_perf_event_output#25
> R5 unbounded memory access, use 'var &= const' or 'if (var < const)'
> processed 387 insns (limit 1000000) max_states_per_insn 1 total_states 20 peak_states 20 mark_read 13
> -- END PROG LOAD LOG --
> libbpf: prog 'sys_enter': failed to load: -13
> libbpf: failed to load object 'augmented_raw_syscalls_bpf'
> libbpf: failed to load BPF skeleton 'augmented_raw_syscalls_bpf': -13
> libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
> libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
> libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
> libbpf: map '__augmented_syscalls__': can't use BPF map without FD (was it created?)
> hello
> 0.000 ( 0.008 ms): write(fd: 1, buf: , count: 6) =
>
> Also, as James said, the buf doesn't show anything and the return
> value is missing.
>
> Thanks,
> Namhyung
>
Thread overview: 6+ messages
2024-10-07 5:14 [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 Howard Chu
2024-10-07 5:14 ` [PATCH 1/2] perf build: Change the clang version check back to 12.0.1 Howard Chu
2024-10-07 5:14 ` [PATCH 2/2] perf trace: Rewrite BPF programs to pass the verifier Howard Chu
2024-10-10 9:06 ` [PATCH 0/2] perf trace: Fix support for the new BPF feature in clang 12 James Clark
2024-10-11 0:20 ` Namhyung Kim
2024-10-11 2:16 ` Howard Chu