* [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64
@ 2024-01-30 4:09 Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization Pu Lehui
` (3 more replies)
0 siblings, 4 replies; 15+ messages in thread
From: Pu Lehui @ 2024-01-30 4:09 UTC (permalink / raw)
To: bpf, linux-riscv, netdev
Cc: Björn Töpel, Song Liu, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Yonghong Song, John Fastabend, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
From: Pu Lehui <pulehui@huawei.com>
In the current RV64 JIT, if we simply leave the TCC uninitialized in a
subprog, the TCC can be propagated from the parent program to the
subprogram, but the parent's TCC cannot be restored when the subprogram
exits. Since the RV64 TCC is initialized before the callee-saved
registers are spilled to the stack, we cannot use a callee-saved
register to pass the TCC, or the register's original value would be
clobbered. So we implement mixing bpf2bpf and tailcalls the same way as
x86_64: a non-callee-saved register transfers the TCC between functions,
and that register is saved to the stack to protect the TCC value. The
scenario of mixing with trampolines is also handled.
In addition, some code cleanups are also included in this patchset.
Tests test_bpf.ko and test_verifier have passed, as well as the relevant
test cases of test_progs*.
v2:
- Fix emitting the restore of RV_REG_TCC twice when `flags & BPF_TRAMP_F_CALL_ORIG`
- Use bpf_is_subprog helper
v1: https://lore.kernel.org/bpf/20230919035711.3297256-1-pulehui@huaweicloud.com
Pu Lehui (4):
riscv, bpf: Remove redundant ctx->offset initialization
riscv, bpf: Using kvcalloc to allocate cache buffer
riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset
riscv, bpf: Mixing bpf2bpf and tailcalls
arch/riscv/net/bpf_jit.h | 1 +
arch/riscv/net/bpf_jit_comp64.c | 102 ++++++++++++++------------------
arch/riscv/net/bpf_jit_core.c | 9 +--
3 files changed, 46 insertions(+), 66 deletions(-)
--
2.34.1
* [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization
2024-01-30 4:09 [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64 Pu Lehui
@ 2024-01-30 4:09 ` Pu Lehui
2024-01-30 16:04 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer Pu Lehui
` (2 subsequent siblings)
3 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-01-30 4:09 UTC (permalink / raw)
To: bpf, linux-riscv, netdev
Cc: Björn Töpel, Song Liu, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Yonghong Song, John Fastabend, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
From: Pu Lehui <pulehui@huawei.com>
There is no gain in initializing ctx->offset and prev_ninsns here, as
ctx->offset is already zero-initialized. Let's remove this code.
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
arch/riscv/net/bpf_jit_core.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
index 7b70ccb7fec3..b271240f48c9 100644
--- a/arch/riscv/net/bpf_jit_core.c
+++ b/arch/riscv/net/bpf_jit_core.c
@@ -91,11 +91,6 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
goto out_offset;
}
- for (i = 0; i < prog->len; i++) {
- prev_ninsns += 32;
- ctx->offset[i] = prev_ninsns;
- }
-
for (i = 0; i < NR_JIT_ITERATIONS; i++) {
pass++;
ctx->ninsns = 0;
--
2.34.1
* [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer
2024-01-30 4:09 [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64 Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization Pu Lehui
@ 2024-01-30 4:09 ` Pu Lehui
2024-01-30 16:05 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls Pu Lehui
3 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-01-30 4:09 UTC (permalink / raw)
To: bpf, linux-riscv, netdev
Cc: Björn Töpel, Song Liu, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Yonghong Song, John Fastabend, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
From: Pu Lehui <pulehui@huawei.com>
It is unnecessary to allocate physically contiguous memory for the cache
buffer, and when the eBPF program is large, such an allocation may
fail.
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
arch/riscv/net/bpf_jit_comp64.c | 4 ++--
arch/riscv/net/bpf_jit_core.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index fda6b4f6a4c1..74f995abf2c2 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -911,7 +911,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
}
if (fmod_ret->nr_links) {
- branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
+ branches_off = kvcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
if (!branches_off)
return -ENOMEM;
@@ -1001,7 +1001,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
ret = ctx->ninsns;
out:
- kfree(branches_off);
+ kvfree(branches_off);
return ret;
}
diff --git a/arch/riscv/net/bpf_jit_core.c b/arch/riscv/net/bpf_jit_core.c
index b271240f48c9..5ba68b1888ab 100644
--- a/arch/riscv/net/bpf_jit_core.c
+++ b/arch/riscv/net/bpf_jit_core.c
@@ -80,7 +80,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
}
ctx->prog = prog;
- ctx->offset = kcalloc(prog->len, sizeof(int), GFP_KERNEL);
+ ctx->offset = kvcalloc(prog->len, sizeof(int), GFP_KERNEL);
if (!ctx->offset) {
prog = orig_prog;
goto out_offset;
@@ -188,7 +188,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
bpf_prog_fill_jited_linfo(prog, ctx->offset);
out_offset:
- kfree(ctx->offset);
+ kvfree(ctx->offset);
kfree(jit_data);
prog->aux->jit_data = NULL;
}
--
2.34.1
* [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset
2024-01-30 4:09 [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64 Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer Pu Lehui
@ 2024-01-30 4:09 ` Pu Lehui
2024-01-30 16:05 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls Pu Lehui
3 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-01-30 4:09 UTC (permalink / raw)
To: bpf, linux-riscv, netdev
Cc: Björn Töpel, Song Liu, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Yonghong Song, John Fastabend, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
From: Pu Lehui <pulehui@huawei.com>
Add the RV_TAILCALL_OFFSET macro to compute the tailcall offset, and
correct the relevant comments.
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
arch/riscv/net/bpf_jit_comp64.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 74f995abf2c2..3516d425c5eb 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -13,7 +13,9 @@
#include <asm/patch.h>
#include "bpf_jit.h"
-#define RV_FENTRY_NINSNS 2
+#define RV_FENTRY_NINSNS 2
+/* fentry and TCC init insns will be skipped on tailcall */
+#define RV_TAILCALL_OFFSET ((RV_FENTRY_NINSNS + 1) * 4)
#define RV_REG_TCC RV_REG_A6
#define RV_REG_TCC_SAVED RV_REG_S6 /* Store A6 in S6 if program do calls */
@@ -260,8 +262,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
if (!is_tail_call)
emit_addiw(RV_REG_A0, RV_REG_A5, 0, ctx);
emit_jalr(RV_REG_ZERO, is_tail_call ? RV_REG_T3 : RV_REG_RA,
- is_tail_call ? (RV_FENTRY_NINSNS + 1) * 4 : 0, /* skip reserved nops and TCC init */
- ctx);
+ is_tail_call ? RV_TAILCALL_OFFSET : 0, ctx);
}
static void emit_bcc(u8 cond, u8 rd, u8 rs, int rvoff,
@@ -382,7 +383,7 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
emit_branch(BPF_JEQ, RV_REG_T2, RV_REG_ZERO, off, ctx);
- /* goto *(prog->bpf_func + 4); */
+ /* goto *(prog->bpf_func + RV_TAILCALL_OFFSET); */
off = offsetof(struct bpf_prog, bpf_func);
if (is_12b_check(off, insn))
return -1;
--
2.34.1
* [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-01-30 4:09 [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64 Pu Lehui
` (2 preceding siblings ...)
2024-01-30 4:09 ` [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset Pu Lehui
@ 2024-01-30 4:09 ` Pu Lehui
2024-01-30 17:30 ` Björn Töpel
3 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-01-30 4:09 UTC (permalink / raw)
To: bpf, linux-riscv, netdev
Cc: Björn Töpel, Song Liu, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Yonghong Song, John Fastabend, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
From: Pu Lehui <pulehui@huawei.com>
In the current RV64 JIT, if we simply leave the TCC uninitialized in a
subprog, the TCC can be propagated from the parent program to the
subprogram, but the parent's TCC cannot be restored when the subprogram
exits. Since the RV64 TCC is initialized before the callee-saved
registers are spilled to the stack, we cannot use a callee-saved
register to pass the TCC, or the register's original value would be
clobbered. So we implement mixing bpf2bpf and tailcalls the same way as
x86_64: a non-callee-saved register transfers the TCC between functions,
and that register is saved to the stack to protect the TCC value. The
scenario of mixing with trampolines is also handled.
Tests test_bpf.ko and test_verifier have passed, as well as the relevant
test cases of test_progs*.
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
arch/riscv/net/bpf_jit.h | 1 +
arch/riscv/net/bpf_jit_comp64.c | 89 +++++++++++++--------------------
2 files changed, 37 insertions(+), 53 deletions(-)
diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
index 8b35f12a4452..d8be89dadf18 100644
--- a/arch/riscv/net/bpf_jit.h
+++ b/arch/riscv/net/bpf_jit.h
@@ -81,6 +81,7 @@ struct rv_jit_context {
int nexentries;
unsigned long flags;
int stack_size;
+ int tcc_offset;
};
/* Convert from ninsns to bytes. */
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 3516d425c5eb..64e0c86d60c4 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -13,13 +13,11 @@
#include <asm/patch.h>
#include "bpf_jit.h"
+#define RV_REG_TCC RV_REG_A6
#define RV_FENTRY_NINSNS 2
/* fentry and TCC init insns will be skipped on tailcall */
#define RV_TAILCALL_OFFSET ((RV_FENTRY_NINSNS + 1) * 4)
-#define RV_REG_TCC RV_REG_A6
-#define RV_REG_TCC_SAVED RV_REG_S6 /* Store A6 in S6 if program do calls */
-
static const int regmap[] = {
[BPF_REG_0] = RV_REG_A5,
[BPF_REG_1] = RV_REG_A0,
@@ -51,14 +49,12 @@ static const int pt_regmap[] = {
};
enum {
- RV_CTX_F_SEEN_TAIL_CALL = 0,
RV_CTX_F_SEEN_CALL = RV_REG_RA,
RV_CTX_F_SEEN_S1 = RV_REG_S1,
RV_CTX_F_SEEN_S2 = RV_REG_S2,
RV_CTX_F_SEEN_S3 = RV_REG_S3,
RV_CTX_F_SEEN_S4 = RV_REG_S4,
RV_CTX_F_SEEN_S5 = RV_REG_S5,
- RV_CTX_F_SEEN_S6 = RV_REG_S6,
};
static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
@@ -71,7 +67,6 @@ static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
case RV_CTX_F_SEEN_S3:
case RV_CTX_F_SEEN_S4:
case RV_CTX_F_SEEN_S5:
- case RV_CTX_F_SEEN_S6:
__set_bit(reg, &ctx->flags);
}
return reg;
@@ -86,7 +81,6 @@ static bool seen_reg(int reg, struct rv_jit_context *ctx)
case RV_CTX_F_SEEN_S3:
case RV_CTX_F_SEEN_S4:
case RV_CTX_F_SEEN_S5:
- case RV_CTX_F_SEEN_S6:
return test_bit(reg, &ctx->flags);
}
return false;
@@ -102,32 +96,6 @@ static void mark_call(struct rv_jit_context *ctx)
__set_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
}
-static bool seen_call(struct rv_jit_context *ctx)
-{
- return test_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
-}
-
-static void mark_tail_call(struct rv_jit_context *ctx)
-{
- __set_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
-}
-
-static bool seen_tail_call(struct rv_jit_context *ctx)
-{
- return test_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
-}
-
-static u8 rv_tail_call_reg(struct rv_jit_context *ctx)
-{
- mark_tail_call(ctx);
-
- if (seen_call(ctx)) {
- __set_bit(RV_CTX_F_SEEN_S6, &ctx->flags);
- return RV_REG_S6;
- }
- return RV_REG_A6;
-}
-
static bool is_32b_int(s64 val)
{
return -(1L << 31) <= val && val < (1L << 31);
@@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
store_offset -= 8;
}
- if (seen_reg(RV_REG_S6, ctx)) {
- emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
- store_offset -= 8;
- }
+ emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
emit_addi(RV_REG_SP, RV_REG_SP, stack_adjust, ctx);
/* Set return value. */
@@ -343,7 +308,6 @@ static void emit_branch(u8 cond, u8 rd, u8 rs, int rvoff,
static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
{
int tc_ninsn, off, start_insn = ctx->ninsns;
- u8 tcc = rv_tail_call_reg(ctx);
/* a0: &ctx
* a1: &array
@@ -366,9 +330,11 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
/* if (--TCC < 0)
* goto out;
*/
- emit_addi(RV_REG_TCC, tcc, -1, ctx);
+ emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
+ emit_addi(RV_REG_TCC, RV_REG_TCC, -1, ctx);
off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
emit_branch(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
+ emit_sd(RV_REG_SP, ctx->tcc_offset, RV_REG_TCC, ctx);
/* prog = array->ptrs[index];
* if (!prog)
@@ -767,7 +733,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
int i, ret, offset;
int *branches_off = NULL;
int stack_size = 0, nregs = m->nr_args;
- int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off;
+ int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off, tcc_off;
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
@@ -812,6 +778,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
*
* FP - sreg_off [ callee saved reg ]
*
+ * FP - tcc_off [ tail call count ] BPF_TRAMP_F_TAIL_CALL_CTX
+ *
* [ pads ] pads for 16 bytes alignment
*/
@@ -853,6 +821,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
stack_size += 8;
sreg_off = stack_size;
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+ stack_size += 8;
+ tcc_off = stack_size;
+ }
+
stack_size = round_up(stack_size, 16);
if (!is_struct_ops) {
@@ -879,6 +852,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
}
+ /* store tail call count */
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+ emit_sd(RV_REG_FP, -tcc_off, RV_REG_TCC, ctx);
+
/* callee saved register S1 to pass start time */
emit_sd(RV_REG_FP, -sreg_off, RV_REG_S1, ctx);
@@ -932,6 +909,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
if (flags & BPF_TRAMP_F_CALL_ORIG) {
restore_args(nregs, args_off, ctx);
+ /* restore TCC to RV_REG_TCC before calling the original function */
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+ emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
ret = emit_call((const u64)orig_call, true, ctx);
if (ret)
goto out;
@@ -963,6 +943,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
if (ret)
goto out;
+ } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
+ /* restore TCC to RV_REG_TCC before calling the original function */
+ emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
}
if (flags & BPF_TRAMP_F_RESTORE_REGS)
@@ -1455,6 +1438,9 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
if (ret < 0)
return ret;
+ /* restore TCC from stack to RV_REG_TCC */
+ emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
+
ret = emit_call(addr, fixed_addr, ctx);
if (ret)
return ret;
@@ -1733,8 +1719,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
stack_adjust += 8;
if (seen_reg(RV_REG_S5, ctx))
stack_adjust += 8;
- if (seen_reg(RV_REG_S6, ctx))
- stack_adjust += 8;
+ stack_adjust += 8; /* RV_REG_TCC */
stack_adjust = round_up(stack_adjust, 16);
stack_adjust += bpf_stack_adjust;
@@ -1749,7 +1734,8 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
* (TCC) register. This instruction is skipped for tail calls.
* Force using a 4-byte (non-compressed) instruction.
*/
- emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
+ if (!bpf_is_subprog(ctx->prog))
+ emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
emit_addi(RV_REG_SP, RV_REG_SP, -stack_adjust, ctx);
@@ -1779,22 +1765,14 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
emit_sd(RV_REG_SP, store_offset, RV_REG_S5, ctx);
store_offset -= 8;
}
- if (seen_reg(RV_REG_S6, ctx)) {
- emit_sd(RV_REG_SP, store_offset, RV_REG_S6, ctx);
- store_offset -= 8;
- }
+ emit_sd(RV_REG_SP, store_offset, RV_REG_TCC, ctx);
+ ctx->tcc_offset = store_offset;
emit_addi(RV_REG_FP, RV_REG_SP, stack_adjust, ctx);
if (bpf_stack_adjust)
emit_addi(RV_REG_S5, RV_REG_SP, bpf_stack_adjust, ctx);
- /* Program contains calls and tail calls, so RV_REG_TCC need
- * to be saved across calls.
- */
- if (seen_tail_call(ctx) && seen_call(ctx))
- emit_mv(RV_REG_TCC_SAVED, RV_REG_TCC, ctx);
-
ctx->stack_size = stack_adjust;
}
@@ -1807,3 +1785,8 @@ bool bpf_jit_supports_kfunc_call(void)
{
return true;
}
+
+bool bpf_jit_supports_subprog_tailcalls(void)
+{
+ return true;
+}
--
2.34.1
* Re: [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization
2024-01-30 4:09 ` [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization Pu Lehui
@ 2024-01-30 16:04 ` Björn Töpel
0 siblings, 0 replies; 15+ messages in thread
From: Björn Töpel @ 2024-01-30 16:04 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> From: Pu Lehui <pulehui@huawei.com>
>
> There is no gain in initializing ctx->offset and prev_ninsns here, as
> ctx->offset is already zero-initialized. Let's remove this code.
>
> Signed-off-by: Pu Lehui <pulehui@huawei.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
* Re: [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer
2024-01-30 4:09 ` [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer Pu Lehui
@ 2024-01-30 16:05 ` Björn Töpel
0 siblings, 0 replies; 15+ messages in thread
From: Björn Töpel @ 2024-01-30 16:05 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> From: Pu Lehui <pulehui@huawei.com>
>
> It is unnecessary to allocate physically contiguous memory for the cache
> buffer, and when the eBPF program is large, such an allocation may
> fail.
>
> Signed-off-by: Pu Lehui <pulehui@huawei.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
* Re: [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset
2024-01-30 4:09 ` [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset Pu Lehui
@ 2024-01-30 16:05 ` Björn Töpel
0 siblings, 0 replies; 15+ messages in thread
From: Björn Töpel @ 2024-01-30 16:05 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> From: Pu Lehui <pulehui@huawei.com>
>
> Add the RV_TAILCALL_OFFSET macro to compute the tailcall offset, and
> correct the relevant comments.
>
> Signed-off-by: Pu Lehui <pulehui@huawei.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-01-30 4:09 ` [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls Pu Lehui
@ 2024-01-30 17:30 ` Björn Töpel
2024-02-01 8:22 ` Pu Lehui
0 siblings, 1 reply; 15+ messages in thread
From: Björn Töpel @ 2024-01-30 17:30 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> From: Pu Lehui <pulehui@huawei.com>
>
> In the current RV64 JIT, if we simply leave the TCC uninitialized in a
> subprog, the TCC can be propagated from the parent program to the
> subprogram, but the parent's TCC cannot be restored when the subprogram
> exits. Since the RV64 TCC is initialized before the callee-saved
> registers are spilled to the stack, we cannot use a callee-saved
> register to pass the TCC, or the register's original value would be
> clobbered. So we implement mixing bpf2bpf and tailcalls the same way as
> x86_64: a non-callee-saved register transfers the TCC between functions,
> and that register is saved to the stack to protect the TCC value. The
> scenario of mixing with trampolines is also handled.
>
> Tests test_bpf.ko and test_verifier have passed, as well as the relevant
> test cases of test_progs*.
Ok, I'll summarize, so that I know that I get it. ;-)
All BPF progs (except the main), get the current TCC passed in a6. TCC
is stored in each BPF stack frame.
During tail calls, the TCC from the stack is loaded, decremented, and
stored to the stack again.
Mixing bpf2bpf/tailcalls means that each *BPF stack frame* can perform up
to its current TCC (at most max_tailcalls) tail calls.
main_prog() calls subprog1(). subprog1() can perform max_tailcalls tail
calls. subprog1() returns, and main_prog() calls subprog2(). subprog2()
can also perform max_tailcalls tail calls.
Correct?
Some comments below as well.
> Signed-off-by: Pu Lehui <pulehui@huawei.com>
> ---
> arch/riscv/net/bpf_jit.h | 1 +
> arch/riscv/net/bpf_jit_comp64.c | 89 +++++++++++++--------------------
> 2 files changed, 37 insertions(+), 53 deletions(-)
>
> diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
> index 8b35f12a4452..d8be89dadf18 100644
> --- a/arch/riscv/net/bpf_jit.h
> +++ b/arch/riscv/net/bpf_jit.h
> @@ -81,6 +81,7 @@ struct rv_jit_context {
> int nexentries;
> unsigned long flags;
> int stack_size;
> + int tcc_offset;
> };
>
> /* Convert from ninsns to bytes. */
> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
> index 3516d425c5eb..64e0c86d60c4 100644
> --- a/arch/riscv/net/bpf_jit_comp64.c
> +++ b/arch/riscv/net/bpf_jit_comp64.c
> @@ -13,13 +13,11 @@
> #include <asm/patch.h>
> #include "bpf_jit.h"
>
> +#define RV_REG_TCC RV_REG_A6
> #define RV_FENTRY_NINSNS 2
> /* fentry and TCC init insns will be skipped on tailcall */
> #define RV_TAILCALL_OFFSET ((RV_FENTRY_NINSNS + 1) * 4)
>
> -#define RV_REG_TCC RV_REG_A6
> -#define RV_REG_TCC_SAVED RV_REG_S6 /* Store A6 in S6 if program do calls */
> -
> static const int regmap[] = {
> [BPF_REG_0] = RV_REG_A5,
> [BPF_REG_1] = RV_REG_A0,
> @@ -51,14 +49,12 @@ static const int pt_regmap[] = {
> };
>
> enum {
> - RV_CTX_F_SEEN_TAIL_CALL = 0,
> RV_CTX_F_SEEN_CALL = RV_REG_RA,
> RV_CTX_F_SEEN_S1 = RV_REG_S1,
> RV_CTX_F_SEEN_S2 = RV_REG_S2,
> RV_CTX_F_SEEN_S3 = RV_REG_S3,
> RV_CTX_F_SEEN_S4 = RV_REG_S4,
> RV_CTX_F_SEEN_S5 = RV_REG_S5,
> - RV_CTX_F_SEEN_S6 = RV_REG_S6,
> };
>
> static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
> @@ -71,7 +67,6 @@ static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
> case RV_CTX_F_SEEN_S3:
> case RV_CTX_F_SEEN_S4:
> case RV_CTX_F_SEEN_S5:
> - case RV_CTX_F_SEEN_S6:
> __set_bit(reg, &ctx->flags);
> }
> return reg;
> @@ -86,7 +81,6 @@ static bool seen_reg(int reg, struct rv_jit_context *ctx)
> case RV_CTX_F_SEEN_S3:
> case RV_CTX_F_SEEN_S4:
> case RV_CTX_F_SEEN_S5:
> - case RV_CTX_F_SEEN_S6:
> return test_bit(reg, &ctx->flags);
> }
> return false;
> @@ -102,32 +96,6 @@ static void mark_call(struct rv_jit_context *ctx)
> __set_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
> }
>
> -static bool seen_call(struct rv_jit_context *ctx)
> -{
> - return test_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
> -}
> -
> -static void mark_tail_call(struct rv_jit_context *ctx)
> -{
> - __set_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
> -}
> -
> -static bool seen_tail_call(struct rv_jit_context *ctx)
> -{
> - return test_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
> -}
> -
> -static u8 rv_tail_call_reg(struct rv_jit_context *ctx)
> -{
> - mark_tail_call(ctx);
> -
> - if (seen_call(ctx)) {
> - __set_bit(RV_CTX_F_SEEN_S6, &ctx->flags);
> - return RV_REG_S6;
> - }
> - return RV_REG_A6;
> -}
> -
> static bool is_32b_int(s64 val)
> {
> return -(1L << 31) <= val && val < (1L << 31);
> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
> store_offset -= 8;
> }
> - if (seen_reg(RV_REG_S6, ctx)) {
> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
> - store_offset -= 8;
> - }
> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
an argument at all call-sites, and for tailcalls we're loading from the
stack.
Is this to fake the a6 argument for the tail-call? If so, it's better to
move it to emit_bpf_tail_call(), instead of letting all programs pay for
it.
>
> emit_addi(RV_REG_SP, RV_REG_SP, stack_adjust, ctx);
> /* Set return value. */
> @@ -343,7 +308,6 @@ static void emit_branch(u8 cond, u8 rd, u8 rs, int rvoff,
> static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
> {
> int tc_ninsn, off, start_insn = ctx->ninsns;
> - u8 tcc = rv_tail_call_reg(ctx);
>
> /* a0: &ctx
> * a1: &array
> @@ -366,9 +330,11 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
> /* if (--TCC < 0)
> * goto out;
> */
> - emit_addi(RV_REG_TCC, tcc, -1, ctx);
> + emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
> + emit_addi(RV_REG_TCC, RV_REG_TCC, -1, ctx);
> off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
> emit_branch(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
> + emit_sd(RV_REG_SP, ctx->tcc_offset, RV_REG_TCC, ctx);
>
> /* prog = array->ptrs[index];
> * if (!prog)
> @@ -767,7 +733,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
> int i, ret, offset;
> int *branches_off = NULL;
> int stack_size = 0, nregs = m->nr_args;
> - int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off;
> + int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off, tcc_off;
> struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
> struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
> struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
> @@ -812,6 +778,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
> *
> * FP - sreg_off [ callee saved reg ]
> *
> + * FP - tcc_off [ tail call count ] BPF_TRAMP_F_TAIL_CALL_CTX
> + *
> * [ pads ] pads for 16 bytes alignment
> */
>
> @@ -853,6 +821,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
> stack_size += 8;
> sreg_off = stack_size;
>
> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
> + stack_size += 8;
> + tcc_off = stack_size;
> + }
> +
> stack_size = round_up(stack_size, 16);
>
> if (!is_struct_ops) {
> @@ -879,6 +852,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
> emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
> }
>
> + /* store tail call count */
> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> + emit_sd(RV_REG_FP, -tcc_off, RV_REG_TCC, ctx);
> +
> /* callee saved register S1 to pass start time */
> emit_sd(RV_REG_FP, -sreg_off, RV_REG_S1, ctx);
>
> @@ -932,6 +909,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>
> if (flags & BPF_TRAMP_F_CALL_ORIG) {
> restore_args(nregs, args_off, ctx);
> + /* restore TCC to RV_REG_TCC before calling the original function */
> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
> + emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
> ret = emit_call((const u64)orig_call, true, ctx);
> if (ret)
> goto out;
> @@ -963,6 +943,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
> ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
> if (ret)
> goto out;
> + } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
> + /* restore TCC to RV_REG_TCC before calling the original function */
> + emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
> }
>
> if (flags & BPF_TRAMP_F_RESTORE_REGS)
> @@ -1455,6 +1438,9 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
> if (ret < 0)
> return ret;
>
> + /* restore TCC from stack to RV_REG_TCC */
> + emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
> +
> ret = emit_call(addr, fixed_addr, ctx);
> if (ret)
> return ret;
> @@ -1733,8 +1719,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
> stack_adjust += 8;
> if (seen_reg(RV_REG_S5, ctx))
> stack_adjust += 8;
> - if (seen_reg(RV_REG_S6, ctx))
> - stack_adjust += 8;
> + stack_adjust += 8; /* RV_REG_TCC */
>
> stack_adjust = round_up(stack_adjust, 16);
> stack_adjust += bpf_stack_adjust;
> @@ -1749,7 +1734,8 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
> * (TCC) register. This instruction is skipped for tail calls.
> * Force using a 4-byte (non-compressed) instruction.
> */
> - emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
> + if (!bpf_is_subprog(ctx->prog))
> + emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
You're conditionally emitting the instruction. Doesn't this break
RV_TAILCALL_OFFSET?
Björn
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-01-30 17:30 ` Björn Töpel
@ 2024-02-01 8:22 ` Pu Lehui
2024-02-01 10:10 ` Björn Töpel
0 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-02-01 8:22 UTC (permalink / raw)
To: Björn Töpel, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
On 2024/1/31 1:30, Björn Töpel wrote:
> Pu Lehui <pulehui@huaweicloud.com> writes:
>
>> From: Pu Lehui <pulehui@huawei.com>
>>
>> In the current RV64 JIT, if we just don't initialize the TCC in subprog,
>> the TCC can be propagated from the parent process to the subprocess, but
>> the TCC of the parent process cannot be restored when the subprocess
>> exits. Since the RV64 TCC is initialized before saving the callee saved
>> registers into the stack, we cannot use the callee saved register to
>> pass the TCC, otherwise the original value of the callee saved register
>> will be destroyed. So we implemented mixing bpf2bpf and tailcalls
>> similar to x86_64, i.e. using a non-callee saved register to transfer
>> the TCC between functions, and saving that register to the stack to
>> protect the TCC value. At the same time, we also consider the scenario
>> of mixing trampoline.
>>
>> Tests test_bpf.ko and test_verifier have passed, as well as the relative
>> testcases of test_progs*.
>
> Ok, I'll summarize, so that I know that I get it. ;-)
>
> All BPF progs (except the main), get the current TCC passed in a6. TCC
> is stored in each BPF stack frame.
>
> During tail calls, the TCC from the stack is loaded, decremented, and
> stored to the stack again.
>
> Mixing bpf2bpf/tailcalls means that each *BPF stackframe* can perform up
> to "current TCC to max_tailscalls" number of calls.
>
> main_prog() calls subprog1(). subprog1() can perform max_tailscalls.
> subprog1() returns, and main_prog() calls subprog2(). subprog2() can
> also perform max_tailscalls.
>
> Correct?
Your summary is the same as what I thought: A6 is the carrier. I wrote a
test case to verify this:
diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index 59993fc9c0d7..65550e24c843 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -975,6 +975,80 @@ static void test_tailcall_bpf2bpf_6(void)
tailcall_bpf2bpf6__destroy(obj);
}
+#include "tailcall_bpf2bpf7.skel.h"
+
+static void test_tailcall_bpf2bpf_7(void)
+{
+ int err, map_fd, prog_fd, main_fd, data_fd, i;
+ struct tailcall_bpf2bpf7__bss val;
+ struct bpf_map *prog_array, *data_map;
+ struct bpf_program *prog;
+ struct bpf_object *obj;
+ char prog_name[32];
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+
+ err = bpf_prog_test_load("tailcall_bpf2bpf7.bpf.o", BPF_PROG_TYPE_SCHED_CLS,
+ &obj, &prog_fd);
+ if (CHECK_FAIL(err))
+ return;
+
+ prog = bpf_object__find_program_by_name(obj, "entry");
+ if (CHECK_FAIL(!prog))
+ goto out;
+
+ main_fd = bpf_program__fd(prog);
+ if (CHECK_FAIL(main_fd < 0))
+ goto out;
+
+ prog_array = bpf_object__find_map_by_name(obj, "jmp_table");
+ if (CHECK_FAIL(!prog_array))
+ goto out;
+
+ map_fd = bpf_map__fd(prog_array);
+ if (CHECK_FAIL(map_fd < 0))
+ goto out;
+
+ for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
+ snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
+
+ prog = bpf_object__find_program_by_name(obj, prog_name);
+ if (CHECK_FAIL(!prog))
+ goto out;
+
+ prog_fd = bpf_program__fd(prog);
+ if (CHECK_FAIL(prog_fd < 0))
+ goto out;
+
+ err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+ if (CHECK_FAIL(err))
+ goto out;
+ }
+
+ data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
+ if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
+ goto out;
+
+ data_fd = bpf_map__fd(data_map);
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
+
+ err = bpf_prog_test_run_opts(main_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+
+ i = 0;
+ err = bpf_map_lookup_elem(data_fd, &i, &val);
+ ASSERT_OK(err, "tailcall count");
+ ASSERT_EQ(val.count0, 33, "tailcall count0");
+ ASSERT_EQ(val.count1, 33, "tailcall count1");
+
+out:
+ bpf_object__close(obj);
+}
+
/* test_tailcall_bpf2bpf_fentry checks that the count value of the tail call
 * limit enforcement matches with expectations when tailcall is preceded with
 * bpf2bpf call, and the bpf2bpf call is traced by fentry.
@@ -1213,6 +1287,8 @@ void test_tailcalls(void)
test_tailcall_bpf2bpf_4(true);
if (test__start_subtest("tailcall_bpf2bpf_6"))
test_tailcall_bpf2bpf_6();
+ if (test__start_subtest("tailcall_bpf2bpf_7"))
+ test_tailcall_bpf2bpf_7();
if (test__start_subtest("tailcall_bpf2bpf_fentry"))
test_tailcall_bpf2bpf_fentry();
if (test__start_subtest("tailcall_bpf2bpf_fexit"))
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf7.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf7.c
new file mode 100644
index 000000000000..9818f4056283
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf7.c
@@ -0,0 +1,52 @@
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+ __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+ __uint(max_entries, 2);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int count0;
+int count1;
+
+static __noinline
+int subprog_1(struct __sk_buff *skb)
+{
+ bpf_tail_call_static(skb, &jmp_table, 1);
+ return 0;
+}
+
+static __noinline
+int subprog_0(struct __sk_buff *skb)
+{
+ bpf_tail_call_static(skb, &jmp_table, 0);
+ return 0;
+}
+
+SEC("tc")
+int classifier_1(struct __sk_buff *skb)
+{
+ count1++;
+ subprog_1(skb);
+ return 0;
+}
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb)
+{
+ count0++;
+ subprog_0(skb);
+ return 0;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+ subprog_0(skb);
+ subprog_1(skb);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
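The expected counts can be sanity-checked with a plain userspace C model of
the per-frame TCC semantics (illustration only; the names and structure are
made up here, this is not the JIT code):

```c
#include <assert.h>

#define MAX_TAIL_CALL_CNT 33

static int count0, count1;

/* Each BPF stack frame holds its own TCC copy (modeled as the `tcc`
 * parameter). A tail call decrements the current frame's copy and the
 * target inherits the decremented value; parent frames are unaffected. */
static void classifier(int *count, int tcc);

static void subprog(int *count, int tcc)
{
	if (--tcc < 0)			/* if (--TCC < 0) goto out; */
		return;
	classifier(count, tcc);		/* tail call: target inherits new TCC */
}

static void classifier(int *count, int tcc)
{
	(*count)++;
	subprog(count, tcc);		/* bpf2bpf call passes this frame's TCC */
}

static void entry(void)
{
	int tcc = MAX_TAIL_CALL_CNT;	/* main prog initializes TCC */

	subprog(&count0, tcc);		/* first chain consumes its own budget */
	subprog(&count1, tcc);		/* entry's copy is untouched, so the
					 * second chain gets the full budget too */
}
```

Each tailcall chain consumes its own budget, so both counters end at
MAX_TAIL_CALL_CNT = 33, matching the ASSERT_EQ values in the selftest.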
>
> Some comments below as well.
>
>> Signed-off-by: Pu Lehui <pulehui@huawei.com>
>> ---
>> arch/riscv/net/bpf_jit.h | 1 +
>> arch/riscv/net/bpf_jit_comp64.c | 89 +++++++++++++--------------------
>> 2 files changed, 37 insertions(+), 53 deletions(-)
>>
>> diff --git a/arch/riscv/net/bpf_jit.h b/arch/riscv/net/bpf_jit.h
>> index 8b35f12a4452..d8be89dadf18 100644
>> --- a/arch/riscv/net/bpf_jit.h
>> +++ b/arch/riscv/net/bpf_jit.h
>> @@ -81,6 +81,7 @@ struct rv_jit_context {
>> int nexentries;
>> unsigned long flags;
>> int stack_size;
>> + int tcc_offset;
>> };
>>
>> /* Convert from ninsns to bytes. */
>> diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
>> index 3516d425c5eb..64e0c86d60c4 100644
>> --- a/arch/riscv/net/bpf_jit_comp64.c
>> +++ b/arch/riscv/net/bpf_jit_comp64.c
>> @@ -13,13 +13,11 @@
>> #include <asm/patch.h>
>> #include "bpf_jit.h"
>>
>> +#define RV_REG_TCC RV_REG_A6
>> #define RV_FENTRY_NINSNS 2
>> /* fentry and TCC init insns will be skipped on tailcall */
>> #define RV_TAILCALL_OFFSET ((RV_FENTRY_NINSNS + 1) * 4)
>>
>> -#define RV_REG_TCC RV_REG_A6
>> -#define RV_REG_TCC_SAVED RV_REG_S6 /* Store A6 in S6 if program do calls */
>> -
>> static const int regmap[] = {
>> [BPF_REG_0] = RV_REG_A5,
>> [BPF_REG_1] = RV_REG_A0,
>> @@ -51,14 +49,12 @@ static const int pt_regmap[] = {
>> };
>>
>> enum {
>> - RV_CTX_F_SEEN_TAIL_CALL = 0,
>> RV_CTX_F_SEEN_CALL = RV_REG_RA,
>> RV_CTX_F_SEEN_S1 = RV_REG_S1,
>> RV_CTX_F_SEEN_S2 = RV_REG_S2,
>> RV_CTX_F_SEEN_S3 = RV_REG_S3,
>> RV_CTX_F_SEEN_S4 = RV_REG_S4,
>> RV_CTX_F_SEEN_S5 = RV_REG_S5,
>> - RV_CTX_F_SEEN_S6 = RV_REG_S6,
>> };
>>
>> static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
>> @@ -71,7 +67,6 @@ static u8 bpf_to_rv_reg(int bpf_reg, struct rv_jit_context *ctx)
>> case RV_CTX_F_SEEN_S3:
>> case RV_CTX_F_SEEN_S4:
>> case RV_CTX_F_SEEN_S5:
>> - case RV_CTX_F_SEEN_S6:
>> __set_bit(reg, &ctx->flags);
>> }
>> return reg;
>> @@ -86,7 +81,6 @@ static bool seen_reg(int reg, struct rv_jit_context *ctx)
>> case RV_CTX_F_SEEN_S3:
>> case RV_CTX_F_SEEN_S4:
>> case RV_CTX_F_SEEN_S5:
>> - case RV_CTX_F_SEEN_S6:
>> return test_bit(reg, &ctx->flags);
>> }
>> return false;
>> @@ -102,32 +96,6 @@ static void mark_call(struct rv_jit_context *ctx)
>> __set_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
>> }
>>
>> -static bool seen_call(struct rv_jit_context *ctx)
>> -{
>> - return test_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
>> -}
>> -
>> -static void mark_tail_call(struct rv_jit_context *ctx)
>> -{
>> - __set_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
>> -}
>> -
>> -static bool seen_tail_call(struct rv_jit_context *ctx)
>> -{
>> - return test_bit(RV_CTX_F_SEEN_TAIL_CALL, &ctx->flags);
>> -}
>> -
>> -static u8 rv_tail_call_reg(struct rv_jit_context *ctx)
>> -{
>> - mark_tail_call(ctx);
>> -
>> - if (seen_call(ctx)) {
>> - __set_bit(RV_CTX_F_SEEN_S6, &ctx->flags);
>> - return RV_REG_S6;
>> - }
>> - return RV_REG_A6;
>> -}
>> -
>> static bool is_32b_int(s64 val)
>> {
>> return -(1L << 31) <= val && val < (1L << 31);
>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>> store_offset -= 8;
>> }
>> - if (seen_reg(RV_REG_S6, ctx)) {
>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>> - store_offset -= 8;
>> - }
>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>
> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
> an argument at all call-sites, and for tailcalls we're loading from the
> stack.
>
> Is this to fake the a6 argument for the tail-call? If so, it's better to
> move it to emit_bpf_tail_call(), instead of letting all programs pay for
> it.
Yes, we can remove this duplicate load. Will do that in the next version.
>
>>
>> emit_addi(RV_REG_SP, RV_REG_SP, stack_adjust, ctx);
>> /* Set return value. */
>> @@ -343,7 +308,6 @@ static void emit_branch(u8 cond, u8 rd, u8 rs, int rvoff,
>> static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
>> {
>> int tc_ninsn, off, start_insn = ctx->ninsns;
>> - u8 tcc = rv_tail_call_reg(ctx);
>>
>> /* a0: &ctx
>> * a1: &array
>> @@ -366,9 +330,11 @@ static int emit_bpf_tail_call(int insn, struct rv_jit_context *ctx)
>> /* if (--TCC < 0)
>> * goto out;
>> */
>> - emit_addi(RV_REG_TCC, tcc, -1, ctx);
>> + emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
>> + emit_addi(RV_REG_TCC, RV_REG_TCC, -1, ctx);
>> off = ninsns_rvoff(tc_ninsn - (ctx->ninsns - start_insn));
>> emit_branch(BPF_JSLT, RV_REG_TCC, RV_REG_ZERO, off, ctx);
>> + emit_sd(RV_REG_SP, ctx->tcc_offset, RV_REG_TCC, ctx);
>>
>> /* prog = array->ptrs[index];
>> * if (!prog)
>> @@ -767,7 +733,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>> int i, ret, offset;
>> int *branches_off = NULL;
>> int stack_size = 0, nregs = m->nr_args;
>> - int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off;
>> + int retval_off, args_off, nregs_off, ip_off, run_ctx_off, sreg_off, tcc_off;
>> struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
>> struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
>> struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
>> @@ -812,6 +778,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>> *
>> * FP - sreg_off [ callee saved reg ]
>> *
>> + * FP - tcc_off [ tail call count ] BPF_TRAMP_F_TAIL_CALL_CTX
>> + *
>> * [ pads ] pads for 16 bytes alignment
>> */
>>
>> @@ -853,6 +821,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>> stack_size += 8;
>> sreg_off = stack_size;
>>
>> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
>> + stack_size += 8;
>> + tcc_off = stack_size;
>> + }
>> +
>> stack_size = round_up(stack_size, 16);
>>
>> if (!is_struct_ops) {
>> @@ -879,6 +852,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>> emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
>> }
>>
>> + /* store tail call count */
>> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>> + emit_sd(RV_REG_FP, -tcc_off, RV_REG_TCC, ctx);
>> +
>> /* callee saved register S1 to pass start time */
>> emit_sd(RV_REG_FP, -sreg_off, RV_REG_S1, ctx);
>>
>> @@ -932,6 +909,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>>
>> if (flags & BPF_TRAMP_F_CALL_ORIG) {
>> restore_args(nregs, args_off, ctx);
>> + /* restore TCC to RV_REG_TCC before calling the original function */
>> + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
>> + emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
>> ret = emit_call((const u64)orig_call, true, ctx);
>> if (ret)
>> goto out;
>> @@ -963,6 +943,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
>> ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
>> if (ret)
>> goto out;
>> + } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
>> + /* restore TCC to RV_REG_TCC before calling the original function */
>> + emit_ld(RV_REG_TCC, -tcc_off, RV_REG_FP, ctx);
>> }
>>
>> if (flags & BPF_TRAMP_F_RESTORE_REGS)
>> @@ -1455,6 +1438,9 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
>> if (ret < 0)
>> return ret;
>>
>> + /* restore TCC from stack to RV_REG_TCC */
>> + emit_ld(RV_REG_TCC, ctx->tcc_offset, RV_REG_SP, ctx);
>> +
>> ret = emit_call(addr, fixed_addr, ctx);
>> if (ret)
>> return ret;
>> @@ -1733,8 +1719,7 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
>> stack_adjust += 8;
>> if (seen_reg(RV_REG_S5, ctx))
>> stack_adjust += 8;
>> - if (seen_reg(RV_REG_S6, ctx))
>> - stack_adjust += 8;
>> + stack_adjust += 8; /* RV_REG_TCC */
>>
>> stack_adjust = round_up(stack_adjust, 16);
>> stack_adjust += bpf_stack_adjust;
>> @@ -1749,7 +1734,8 @@ void bpf_jit_build_prologue(struct rv_jit_context *ctx)
>> * (TCC) register. This instruction is skipped for tail calls.
>> * Force using a 4-byte (non-compressed) instruction.
>> */
>> - emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
>> + if (!bpf_is_subprog(ctx->prog))
>> + emit(rv_addi(RV_REG_TCC, RV_REG_ZERO, MAX_TAIL_CALL_CNT), ctx);
>
> You're conditionally emitting the instruction. Doesn't this break
> RV_TAILCALL_OFFSET?
>
This does not break RV_TAILCALL_OFFSET, because the target of a tailcall
is always the `main` prog, never a subprog.
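For what it's worth, the invariant can be restated as a tiny standalone
check (the two macros are copied from bpf_jit_comp64.c; `tailcall_skip_bytes`
is a made-up helper for illustration): since a tailcall target is always a
main prog, and a main prog always emits the forced 4-byte TCC init
instruction, the skip distance is a constant.

```c
#define RV_FENTRY_NINSNS	2
/* fentry and TCC init insns will be skipped on tailcall */
#define RV_TAILCALL_OFFSET	((RV_FENTRY_NINSNS + 1) * 4)

/* A tailcall lands RV_TAILCALL_OFFSET bytes into a *main* prog's prologue:
 * past 2 fentry insns plus 1 TCC init insn, each forced to 4 bytes. The
 * conditional emit for subprogs does not matter because subprogs are never
 * tailcall targets. */
static int tailcall_skip_bytes(void)
{
	return RV_TAILCALL_OFFSET;
}
```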
>
> Björn
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-02-01 8:22 ` Pu Lehui
@ 2024-02-01 10:10 ` Björn Töpel
2024-02-01 12:10 ` Pu Lehui
0 siblings, 1 reply; 15+ messages in thread
From: Björn Töpel @ 2024-02-01 10:10 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
>>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>>> store_offset -= 8;
>>> }
>>> - if (seen_reg(RV_REG_S6, ctx)) {
>>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>>> - store_offset -= 8;
>>> - }
>>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>>
>> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
>> an argument at all call-sites, and for tailcalls we're loading from the
>> stack.
>>
>> Is this to fake the a6 argument for the tail-call? If so, it's better to
>> move it to emit_bpf_tail_call(), instead of letting all programs pay for
>> it.
>
> Yes, we can remove this duplicate load. will do that at next version.
Hmm, not remove, but *move*, right? Otherwise a6 can contain garbage on
entering the tailcall?
Move it before __emit_epilogue() in the tailcall, no?
Björn
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-02-01 10:10 ` Björn Töpel
@ 2024-02-01 12:10 ` Pu Lehui
2024-02-01 13:35 ` Björn Töpel
0 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-02-01 12:10 UTC (permalink / raw)
To: Björn Töpel, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
On 2024/2/1 18:10, Björn Töpel wrote:
> Pu Lehui <pulehui@huaweicloud.com> writes:
>
>>>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>>>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>>>> store_offset -= 8;
>>>> }
>>>> - if (seen_reg(RV_REG_S6, ctx)) {
>>>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>>>> - store_offset -= 8;
>>>> - }
>>>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>>>
>>> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
>>> an argument at all call-sites, and for tailcalls we're loading from the
>>> stack.
>>>
>>> Is this to fake the a6 argument for the tail-call? If so, it's better to
>>> move it to emit_bpf_tail_call(), instead of letting all programs pay for
>>> it.
>>
>> Yes, we can remove this duplicate load. will do that at next version.
>
> Hmm, no remove, but *move* right? Otherwise a6 can contain garbage on
> entering the tailcall?
>
> Move it before __emit_epilogue() in the tailcall, no?
>
IIUC, we don't need to load it again. In the emit_bpf_tail_call function,
we load the TCC from the stack into A6, decrement A6, and then store A6
back to the stack. Then we unwind the current stack and jump to the target
bpf prog; during this period we do not touch the A6 register, so do we
still need to load it again?
>
> Björn
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-02-01 12:10 ` Pu Lehui
@ 2024-02-01 13:35 ` Björn Töpel
2024-02-02 9:44 ` Pu Lehui
0 siblings, 1 reply; 15+ messages in thread
From: Björn Töpel @ 2024-02-01 13:35 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> On 2024/2/1 18:10, Björn Töpel wrote:
>> Pu Lehui <pulehui@huaweicloud.com> writes:
>>
>>>>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>>>>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>>>>> store_offset -= 8;
>>>>> }
>>>>> - if (seen_reg(RV_REG_S6, ctx)) {
>>>>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>>>>> - store_offset -= 8;
>>>>> - }
>>>>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>>>>
>>>> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
>>>> an argument at all call-sites, and for tailcalls we're loading from the
>>>> stack.
>>>>
>>>> Is this to fake the a6 argument for the tail-call? If so, it's better to
>>>> move it to emit_bpf_tail_call(), instead of letting all programs pay for
>>>> it.
>>>
>>> Yes, we can remove this duplicate load. will do that at next version.
>>
>> Hmm, no remove, but *move* right? Otherwise a6 can contain garbage on
>> entering the tailcall?
>>
>> Move it before __emit_epilogue() in the tailcall, no?
>>
>
> IIUC, we don't need to load it again. In emit_bpf_tail_call function, we
> load TCC from stack to A6, A6--, then store A6 back to stack. Then
> unwind the current stack and jump to target bpf prog, during this
> period, we did not touch the A6 register, do we still need to load it again?
a6 has to be populated prior to each call -- including tailcalls. An
example, how it can break:
main_prog() -> prologue (a6 := 0; push a6) -> bpf_helper() (random
kernel path that clobbers a6) -> tailcall(foo()) (unwinds stack, enters
foo() with a6 garbage, and push a6).
Am I missing something?
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-02-01 13:35 ` Björn Töpel
@ 2024-02-02 9:44 ` Pu Lehui
2024-02-02 13:04 ` Björn Töpel
0 siblings, 1 reply; 15+ messages in thread
From: Pu Lehui @ 2024-02-02 9:44 UTC (permalink / raw)
To: Björn Töpel, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
On 2024/2/1 21:35, Björn Töpel wrote:
> Pu Lehui <pulehui@huaweicloud.com> writes:
>
>> On 2024/2/1 18:10, Björn Töpel wrote:
>>> Pu Lehui <pulehui@huaweicloud.com> writes:
>>>
>>>>>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>>>>>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>>>>>> store_offset -= 8;
>>>>>> }
>>>>>> - if (seen_reg(RV_REG_S6, ctx)) {
>>>>>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>>>>>> - store_offset -= 8;
>>>>>> - }
>>>>>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>>>>>
>>>>> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
>>>>> an argument at all call-sites, and for tailcalls we're loading from the
>>>>> stack.
>>>>>
>>>>> Is this to fake the a6 argument for the tail-call? If so, it's better to
>>>>> move it to emit_bpf_tail_call(), instead of letting all programs pay for
>>>>> it.
>>>>
>>>> Yes, we can remove this duplicate load. will do that at next version.
>>>
>>> Hmm, no remove, but *move* right? Otherwise a6 can contain garbage on
>>> entering the tailcall?
>>>
>>> Move it before __emit_epilogue() in the tailcall, no?
>>>
>>
>> IIUC, we don't need to load it again. In emit_bpf_tail_call function, we
>> load TCC from stack to A6, A6--, then store A6 back to stack. Then
>> unwind the current stack and jump to target bpf prog, during this
>> period, we did not touch the A6 register, do we still need to load it again?
>
> a6 has to be populated prior to each call -- including tailcalls. An
> example, how it can break:
>
> main_prog() -> prologue (a6 := 0; push a6) -> bpf_helper() (random
> kernel path that clobbers a6) -> tailcall(foo()) (unwinds stack, enters
It's OK to clobber the A6 reg in a helper/kfunc call, because we will load
the TCC from the stack into the A6 reg before jumping to the tailcall
target prog. In addition, I found that we can remove the store of A6 back
to the stack from the tailcall process. Let me try to describe the process
involved:
PS: I'm attaching a picture in case the formatting below gets mangled.
Main prog
  A6 reg = 33
  STORE A6 value (TCC=33) to main prog stack
  ...
  /* helper call/kfunc call (not a call to a bpf prog) */
  LOAD TCC=33 from main prog stack to A6 reg
  call bpf_helper_prog1/kfunc1
    bpf_helper_prog1/kfunc1
      clobbers A6 reg
      returns to Main prog
  /* it's ok to clobber the A6 reg here, because we have stored the A6
     value on the main prog stack */
  ...
  /* start tailcall(foo) */
  LOAD TCC=33 from main prog stack to A6 reg
  A6--: TCC=32
  STORE A6 value (TCC=32) to main prog stack
  UNWIND Main prog stack (at this time, A6 value 32 is not on any stack)
  /* at this time, the A6 reg is not affected */
  jump to foo()

foo() --- tailcall target
  STORE A6 value (TCC=32) to foo prog stack
  ...
  /* subprog call (a call to a bpf prog) */
  LOAD TCC=32 from foo prog stack to A6 reg
  call subprog1
    subprog1
      STORE A6 value (TCC=32) to subprog1 stack
      ...
      /* start tailcall(foo2) */
      LOAD TCC=32 from subprog1 stack to A6 reg
      A6--: TCC=31
      STORE A6 value (TCC=31) to subprog1 stack
      UNWIND subprog1 stack (at this time, the `old` A6 value 32 is
      still on the foo prog stack)
      /* at this time, the A6 reg is not affected */
      jump to foo2()

      foo2() --- tailcall target
        STORE A6 value (TCC=31) to foo2 prog stack
        ...
        UNWIND foo2 prog stack (at this time, the `old` A6 value 32 is
        still on the foo prog stack)
        return to foo()
  ...
  /* any further call will load the `old` A6 value 32 into the A6 reg */
  ...
  UNWIND foo prog stack (at this time, the old A6 value 32 is not on
  any stack)
  return to the caller of Main prog
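The frame discipline in this trace can be condensed into a toy C model
(illustrative only; `frame_enter`, `tailcall`, `helper_call`, etc. are
invented names for this sketch, not JIT emitters):

```c
#include <assert.h>

#define MAX_TAIL_CALL_CNT 33

static long a6;			/* the carrier register */
static long tcc_slot[64];	/* one saved-TCC stack slot per live frame */
static int sp = -1;

static void frame_enter(void)	{ tcc_slot[++sp] = a6; }  /* prologue: save a6 */
static void frame_exit(void)	{ sp--; }		  /* epilogue: unwind */
static void helper_call(void)	{ a6 = 0xdead; }	  /* kernel code may clobber a6 */

/* Returns 0 if the tail call is taken. The unwound frame's slot is gone,
 * but every still-live parent frame keeps its own saved copy. */
static int tailcall(void)
{
	a6 = tcc_slot[sp] - 1;	/* LOAD from current frame, decrement */
	if (a6 < 0)
		return -1;	/* limit reached: fall through */
	frame_exit();		/* UNWIND the current frame... */
	frame_enter();		/* ...and the target prologue saves a6 again */
	return 0;
}
```

The key point: a tail call only replaces the top slot, so a parent frame's
saved TCC copy survives any tailcall chain that runs below it.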
> foo() with a6 garbage, and push a6).
>
> Am I missing something?
[-- Attachment #2: tailcalls.JPG --]
[-- Type: image/jpeg, Size: 92740 bytes --]
* Re: [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls
2024-02-02 9:44 ` Pu Lehui
@ 2024-02-02 13:04 ` Björn Töpel
0 siblings, 0 replies; 15+ messages in thread
From: Björn Töpel @ 2024-02-02 13:04 UTC (permalink / raw)
To: Pu Lehui, bpf, linux-riscv, netdev
Cc: Song Liu, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Palmer Dabbelt,
Luke Nelson, Pu Lehui
Pu Lehui <pulehui@huaweicloud.com> writes:
> On 2024/2/1 21:35, Björn Töpel wrote:
>> Pu Lehui <pulehui@huaweicloud.com> writes:
>>
>>> On 2024/2/1 18:10, Björn Töpel wrote:
>>>> Pu Lehui <pulehui@huaweicloud.com> writes:
>>>>
>>>>>>> @@ -252,10 +220,7 @@ static void __build_epilogue(bool is_tail_call, struct rv_jit_context *ctx)
>>>>>>> emit_ld(RV_REG_S5, store_offset, RV_REG_SP, ctx);
>>>>>>> store_offset -= 8;
>>>>>>> }
>>>>>>> - if (seen_reg(RV_REG_S6, ctx)) {
>>>>>>> - emit_ld(RV_REG_S6, store_offset, RV_REG_SP, ctx);
>>>>>>> - store_offset -= 8;
>>>>>>> - }
>>>>>>> + emit_ld(RV_REG_TCC, store_offset, RV_REG_SP, ctx);
>>>>>>
>>>>>> Why do you need to restore RV_REG_TCC? We're passing RV_REG_TCC (a6) as
>>>>>> an argument at all call-sites, and for tailcalls we're loading from the
>>>>>> stack.
>>>>>>
>>>>>> Is this to fake the a6 argument for the tail-call? If so, it's better to
>>>>>> move it to emit_bpf_tail_call(), instead of letting all programs pay for
>>>>>> it.
>>>>>
>>>>> Yes, we can remove this duplicate load. will do that at next version.
>>>>
>>>> Hmm, no remove, but *move* right? Otherwise a6 can contain garbage on
>>>> entering the tailcall?
>>>>
>>>> Move it before __emit_epilogue() in the tailcall, no?
>>>>
>>>
>>> IIUC, we don't need to load it again. In emit_bpf_tail_call function, we
>>> load TCC from stack to A6, A6--, then store A6 back to stack. Then
>>> unwind the current stack and jump to target bpf prog, during this
>>> period, we did not touch the A6 register, do we still need to load it again?
>>
>> a6 has to be populated prior to each call -- including tailcalls. An
>> example, how it can break:
>>
>> main_prog() -> prologue (a6 := 0; push a6) -> bpf_helper() (random
>> kernel path that clobbers a6) -> tailcall(foo()) (unwinds stack, enters
>
> It's OK to clobber the A6 reg for a helper/kfunc call, because we will load
> TCC from stack to A6 reg before jump to tailcall target prog. In
> addition, I found that we can remove the store A6 back to stack command
> from the tailcall process. I try to describe the process involved:
Indeed! tailcall *is* already populating a6, and yes, the store can be
omitted. Nice!
Now, we still have the bug Alexei described, so until there's a
fix/workaround, this series can't be merged.
Cheers,
Björn
end of thread, other threads: [~2024-02-02 13:04 UTC | newest]
Thread overview: 15+ messages
2024-01-30 4:09 [PATCH bpf-next v2 0/4] Mixing bpf2bpf and tailcalls for RV64 Pu Lehui
2024-01-30 4:09 ` [PATCH bpf-next v2 1/4] riscv, bpf: Remove redundant ctx->offset initialization Pu Lehui
2024-01-30 16:04 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 2/4] riscv, bpf: Using kvcalloc to allocate cache buffer Pu Lehui
2024-01-30 16:05 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 3/4] riscv, bpf: Add RV_TAILCALL_OFFSET macro to format tailcall offset Pu Lehui
2024-01-30 16:05 ` Björn Töpel
2024-01-30 4:09 ` [PATCH bpf-next v2 4/4] riscv, bpf: Mixing bpf2bpf and tailcalls Pu Lehui
2024-01-30 17:30 ` Björn Töpel
2024-02-01 8:22 ` Pu Lehui
2024-02-01 10:10 ` Björn Töpel
2024-02-01 12:10 ` Pu Lehui
2024-02-01 13:35 ` Björn Töpel
2024-02-02 9:44 ` Pu Lehui
2024-02-02 13:04 ` Björn Töpel