From: Menglong Dong
X-Google-Original-From: Menglong Dong
To: ast@kernel.org, jolsa@kernel.org
Cc: daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org,
	martin.lau@linux.dev, eddyz87@gmail.com, song@kernel.org,
	yonghong.song@linux.dev, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, mattbobrowski@google.com, rostedt@goodmis.org,
	mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	leon.hwang@linux.dev, jiang.biao@linux.dev, bpf@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH bpf-next v2 04/10] bpf,x86: add ret_off to invoke_bpf()
Date: Wed, 22 Oct 2025 16:01:53 +0800
Message-ID: <20251022080159.553805-5-dongml2@chinatelecom.cn>
X-Mailer: git-send-email 2.51.1.dirty
In-Reply-To: <20251022080159.553805-1-dongml2@chinatelecom.cn>
References: <20251022080159.553805-1-dongml2@chinatelecom.cn>
X-Mailing-List: linux-trace-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For now, the offset of the return value in the trampoline is fixed at
8 bytes. This commit introduces the variable "ret_off" to represent
the offset of the return value. For now, "ret_off" is still 8; a
following patch will change it so that the room after it can be used.
Signed-off-by: Menglong Dong
---
 arch/x86/net/bpf_jit_comp.c | 41 +++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 389c3a96e2b8..7a604ee9713f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2940,7 +2940,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 			   struct bpf_tramp_link *l, int stack_size,
-			   int run_ctx_off, bool save_ret,
+			   int run_ctx_off, bool save_ret, int ret_off,
 			   void *image, void *rw_image)
 {
 	u8 *prog = *pprog;
@@ -3005,7 +3005,7 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	 * value of BPF_PROG_TYPE_STRUCT_OPS prog.
 	 */
 	if (save_ret)
-		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
 
 	/* replace 2 nops with JE insn, since jmp target is known */
 	jmp_insn[0] = X86_JE;
@@ -3055,7 +3055,7 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
 
 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 		      struct bpf_tramp_links *tl, int stack_size,
-		      int run_ctx_off, bool save_ret,
+		      int run_ctx_off, bool save_ret, int ret_off,
 		      void *image, void *rw_image)
 {
 	int i;
@@ -3063,7 +3063,8 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 
 	for (i = 0; i < tl->nr_links; i++) {
 		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
-				    run_ctx_off, save_ret, image, rw_image))
+				    run_ctx_off, save_ret, ret_off, image,
+				    rw_image))
 			return -EINVAL;
 	}
 	*pprog = prog;
@@ -3072,7 +3073,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 
 static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 			      struct bpf_tramp_links *tl, int stack_size,
-			      int run_ctx_off, u8 **branches,
+			      int run_ctx_off, int ret_off, u8 **branches,
 			      void *image, void *rw_image)
 {
 	u8 *prog = *pprog;
@@ -3082,18 +3083,18 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 	 * Set this to 0 to avoid confusing the program.
 	 */
 	emit_mov_imm32(&prog, false, BPF_REG_0, 0);
-	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
 	for (i = 0; i < tl->nr_links; i++) {
 		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
 				    run_ctx_off, true,
-				    image, rw_image))
+				    ret_off, image, rw_image))
 			return -EINVAL;
 
-		/* mod_ret prog stored return value into [rbp - 8]. Emit:
-		 * if (*(u64 *)(rbp - 8) !=  0)
+		/* mod_ret prog stored return value into [rbp - ret_off]. Emit:
+		 * if (*(u64 *)(rbp - ret_off) !=  0)
 		 *	goto do_fexit;
 		 */
-		/* cmp QWORD PTR [rbp - 0x8], 0x0 */
-		EMIT4(0x48, 0x83, 0x7d, 0xf8); EMIT1(0x00);
+		/* cmp QWORD PTR [rbp - ret_off], 0x0 */
+		EMIT4(0x48, 0x83, 0x7d, -ret_off); EMIT1(0x00);
 
 		/* Save the location of the branch and Generate 6 nops
 		 * (4 bytes for an offset and 2 bytes for the jump) These nops
@@ -3179,7 +3180,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 				  void *func_addr)
 {
 	int i, ret, nr_regs = m->nr_args, stack_size = 0;
-	int regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
+	int ret_off, regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off,
+	    rbx_off;
 	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
 	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
@@ -3213,7 +3215,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 	 * RBP + 8         [ return address  ]
 	 * RBP + 0         [ RBP             ]
 	 *
-	 * RBP - 8         [ return value    ]  BPF_TRAMP_F_CALL_ORIG or
+	 * RBP - ret_off   [ return value    ]  BPF_TRAMP_F_CALL_ORIG or
 	 *                                      BPF_TRAMP_F_RET_FENTRY_RET flags
 	 *
	 * [ reg_argN ]  always
@@ -3239,6 +3241,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
 	if (save_ret)
 		stack_size += 8;
+	ret_off = stack_size;
 
 	stack_size += nr_regs * 8;
 	regs_off = stack_size;
@@ -3341,7 +3344,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 
 	if (fentry->nr_links) {
 		if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off,
-			       flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image))
+			       flags & BPF_TRAMP_F_RET_FENTRY_RET, ret_off,
+			       image, rw_image))
 			return -EINVAL;
 	}
 
@@ -3352,7 +3356,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 			return -ENOMEM;
 
 		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, regs_off,
-				       run_ctx_off, branches, image, rw_image)) {
+				       run_ctx_off, ret_off, branches,
+				       image, rw_image)) {
 			ret = -EINVAL;
 			goto cleanup;
 		}
@@ -3380,7 +3385,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 			}
 		}
 		/* remember return value in a stack for bpf prog to access */
-		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ret_off);
 		im->ip_after_call = image + (prog - (u8 *)rw_image);
 		emit_nops(&prog, X86_PATCH_SIZE);
 	}
@@ -3403,7 +3408,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 
 	if (fexit->nr_links) {
 		if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off,
-			       false, image, rw_image)) {
+			       false, ret_off, image, rw_image)) {
 			ret = -EINVAL;
 			goto cleanup;
 		}
@@ -3433,7 +3438,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 
 	/* restore return value of orig_call or fentry prog back into RAX */
 	if (save_ret)
-		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -ret_off);
 
 	emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
 	EMIT1(0xC9); /* leave */
-- 
2.51.1.dirty