* [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs
@ 2024-11-18 18:52 Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc Vadim Fedorenko
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-18 18:52 UTC (permalink / raw)
To: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko
Cc: x86, bpf, Vadim Fedorenko, Martin KaFai Lau
This patchset adds two kfuncs to provide a way to precisely measure the
time spent running some code. The first patch provides a way to read the
CPU cycles counter which is used to feed CLOCK_MONOTONIC_RAW. On x86 it
is effectively the rdtsc_ordered() function, while on other architectures
it falls back to __arch_get_hw_counter(). The second patch adds a kfunc
to convert CPU cycles to nanoseconds using the shift/mult constants
discovered by the kernel. A JIT version is implemented for x86 for now;
on other architectures it falls back to a slightly simplified version of
vdso_calc_ns().
Selftests are also added to check whether the JIT implementation is
correct and to show the simplest usage example.
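For illustration, a minimal sketch of the intended usage from a BPF
program (the extern declarations match the selftests later in the series;
the attach point and surrounding program are hypothetical, and the usual
vmlinux.h/bpf_helpers.h/bpf_tracing.h includes are omitted):

    extern u64 bpf_get_cpu_cycles(void) __weak __ksym;
    extern u64 bpf_cpu_cycles_to_ns(u64 cycles) __weak __ksym;

    SEC("fentry/__x64_sys_getpgid")    /* hypothetical attach point */
    int BPF_PROG(measure_section)
    {
            __u64 start, delta_cycles;

            start = bpf_get_cpu_cycles();
            /* ... code under measurement ... */
            delta_cycles = bpf_get_cpu_cycles() - start;
            bpf_printk("section took %llu ns",
                       bpf_cpu_cycles_to_ns(delta_cycles));
            return 0;
    }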
Change log:
v6 -> v7:
* change boot_cpu_has() to cpu_feature_enabled() (Borislav)
* return constant clock_mode in __arch_get_hw_counter() call
v5 -> v6:
* added cover letter
* add comment about dropping S64_MAX manipulation in jitted
implementation of rdtsc_ordered() (Alexei)
* add comment about using 'lfence;rdtsc' variant (Alexei)
* change the check in fixup_kfunc_call() (Eduard)
* make __arch_get_hw_counter() call more aligned with vDSO
implementation (Yonghong)
v4 -> v5:
* use #if instead of #ifdef with IS_ENABLED
v3 -> v4:
* change name of the helper to bpf_get_cpu_cycles (Andrii)
* Hide the helper behind CONFIG_GENERIC_GETTIMEOFDAY to avoid exposing
it on architectures which do not have vDSO functions and data
* reduce the scope of check of inlined functions in verifier to only 2,
which are actually inlined.
* change helper name to bpf_cpu_cycles_to_ns.
* hide it behind CONFIG_GENERIC_GETTIMEOFDAY to avoid exposing on
unsupported architectures.
v2 -> v3:
* change name of the helper to bpf_get_cpu_cycles_counter to
explicitly mention what counter it provides (Andrii)
* move kfunc definition to bpf.h to use it in JIT.
* introduce another kfunc to convert cycles into nanoseconds as
more meaningful time units for generic tracing use case (Andrii)
v1 -> v2:
* Fix incorrect function return value type to u64
* Introduce bpf_jit_inlines_kfunc_call() and use it in
mark_fastcall_pattern_for_call() to avoid clobbering in case
of running programs with no JIT (Eduard)
* Avoid rewriting instruction and check function pointer directly
in JIT (Alexei)
* Change includes to fix compile issues on non x86 architectures
Vadim Fedorenko (4):
bpf: add bpf_get_cpu_cycles kfunc
bpf: add bpf_cpu_cycles_to_ns helper
selftests/bpf: add selftest to check rdtsc jit
selftests/bpf: add usage example for cpu cycles kfuncs
arch/x86/net/bpf_jit_comp.c | 61 ++++++++++
arch/x86/net/bpf_jit_comp32.c | 33 ++++++
include/linux/bpf.h | 6 +
include/linux/filter.h | 1 +
kernel/bpf/core.c | 11 ++
kernel/bpf/helpers.c | 39 +++++++
kernel/bpf/verifier.c | 41 ++++++-
.../bpf/prog_tests/test_cpu_cycles.c | 35 ++++++
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/test_cpu_cycles.c | 25 +++++
.../selftests/bpf/progs/verifier_cpu_cycles.c | 104 ++++++++++++++++++
11 files changed, 352 insertions(+), 6 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_cpu_cycles.c
create mode 100644 tools/testing/selftests/bpf/progs/test_cpu_cycles.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_cpu_cycles.c
--
2.43.5
* [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-18 18:52 [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs Vadim Fedorenko
@ 2024-11-18 18:52 ` Vadim Fedorenko
2024-11-19 11:18 ` Peter Zijlstra
2024-11-18 18:52 ` [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper Vadim Fedorenko
` (2 subsequent siblings)
3 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-18 18:52 UTC (permalink / raw)
To: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko
Cc: x86, bpf, Vadim Fedorenko, Martin KaFai Lau
New kfunc to return the arch-specific timecounter. For x86 the BPF JIT
converts it into an rdtsc_ordered() call. Other architectures will get
JIT implementations too where supported. The fallback is to
__arch_get_hw_counter().
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
---
v6 -> v7:
* change boot_cpu_has() -> cpu_feature_enabled() (Borislav)
* change __arch_get_hw_counter() back to use constant clock mode
to avoid linking issues with CONFIG_HYPERV_TIMER or
CONFIG_PARAVIRT_CLOCK enabled on x86.
v5 -> v6:
* add comment about dropping S64_MAX manipulation in jitted
implementation of rdtsc_ordered() (Alexei)
* add comment about using 'lfence;rdtsc' variant (Alexei)
* change the check in fixup_kfunc_call() (Eduard)
* make __arch_get_hw_counter() call more aligned with vDSO
implementation (Yonghong)
v4 -> v5:
* use if instead of ifdef with IS_ENABLED
v3 -> v4:
* change name of the helper to bpf_get_cpu_cycles (Andrii)
* Hide the helper behind CONFIG_GENERIC_GETTIMEOFDAY to avoid exposing
it on architectures which do not have vDSO functions and data
* reduce the scope of check of inlined functions in verifier to only 2,
which are actually inlined.
v2 -> v3:
* change name of the helper to bpf_get_cpu_cycles_counter to explicitly
mention what counter it provides (Andrii)
* move kfunc definition to bpf.h to use it in JIT.
* introduce another kfunc to convert cycles into nanoseconds as more
meaningful time units for generic tracing use case (Andrii)
v1 -> v2:
* Fix incorrect function return value type to u64
* Introduce bpf_jit_inlines_kfunc_call() and use it in
mark_fastcall_pattern_for_call() to avoid clobbering in case of
running programs with no JIT (Eduard)
* Avoid rewriting instruction and check function pointer directly
in JIT (Alexei)
* Change includes to fix compile issues on non x86 architectures
---
arch/x86/net/bpf_jit_comp.c | 39 +++++++++++++++++++++++++++++++++
arch/x86/net/bpf_jit_comp32.c | 14 ++++++++++++
include/linux/bpf.h | 5 +++++
include/linux/filter.h | 1 +
kernel/bpf/core.c | 11 ++++++++++
kernel/bpf/helpers.c | 27 +++++++++++++++++++++++
kernel/bpf/verifier.c | 41 ++++++++++++++++++++++++++++++-----
7 files changed, 132 insertions(+), 6 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a43fc5af973d..5e0c16d8bba3 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2185,6 +2185,37 @@ st: if (is_imm8(insn->off))
case BPF_JMP | BPF_CALL: {
u8 *ip = image + addrs[i - 1];
+ if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
+ imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
+ /* The default implementation of this kfunc uses
+ * __arch_get_hw_counter() which is implemented as
+ * `(u64)rdtsc_ordered() & S64_MAX`. We skip masking
+ * part because we assume it's not needed in BPF
+ * use case (two measurements close in time).
+ * Original code for rdtsc_ordered() uses sequence:
+ * 'rdtsc; nop; nop; nop' to patch it into
+ * 'lfence; rdtsc' or 'rdtscp' depending on CPU features.
+ * JIT uses 'lfence; rdtsc' variant because BPF program
+ * doesn't care about cookie provided by rdtscp in RCX.
+ * Save RDX because RDTSC will use EDX:EAX to return u64
+ */
+ emit_mov_reg(&prog, true, AUX_REG, BPF_REG_3);
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ EMIT_LFENCE();
+ EMIT2(0x0F, 0x31);
+
+ /* shl RDX, 32 */
+ maybe_emit_1mod(&prog, BPF_REG_3, true);
+ EMIT3(0xC1, add_1reg(0xE0, BPF_REG_3), 32);
+ /* or RAX, RDX */
+ maybe_emit_mod(&prog, BPF_REG_0, BPF_REG_3, true);
+ EMIT2(0x09, add_2reg(0xC0, BPF_REG_0, BPF_REG_3));
+ /* restore RDX from R11 */
+ emit_mov_reg(&prog, true, BPF_REG_3, AUX_REG);
+
+ break;
+ }
+
func = (u8 *) __bpf_call_base + imm32;
if (src_reg == BPF_PSEUDO_CALL && tail_call_reachable) {
LOAD_TAIL_CALL_CNT_PTR(stack_depth);
@@ -3791,3 +3822,11 @@ u64 bpf_arch_uaddress_limit(void)
{
return 0;
}
+
+/* x86-64 JIT can inline kfunc */
+bool bpf_jit_inlines_kfunc_call(s32 imm)
+{
+ if (imm == BPF_CALL_IMM(bpf_get_cpu_cycles))
+ return true;
+ return false;
+}
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index de0f9e5f9f73..11a5c41302a3 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
int err;
+ if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
+ if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
+ EMIT3(0x0F, 0xAE, 0xE8);
+ EMIT2(0x0F, 0x31);
+ break;
+ }
+
err = emit_kfunc_call(bpf_prog,
image + addrs[i],
insn, &prog);
@@ -2621,3 +2628,10 @@ bool bpf_jit_supports_kfunc_call(void)
{
return true;
}
+
+bool bpf_jit_inlines_kfunc_call(s32 imm)
+{
+ if (imm == BPF_CALL_IMM(bpf_get_cpu_cycles))
+ return true;
+ return false;
+}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3ace0d6227e3..43a5207a1591 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3333,6 +3333,11 @@ void bpf_user_rnd_init_once(void);
u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
+/* Inlined kfuncs */
+#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
+u64 bpf_get_cpu_cycles(void);
+#endif
+
#if defined(CONFIG_NET)
bool bpf_sock_common_is_valid_access(int off, int size,
enum bpf_access_type type,
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 3a21947f2fd4..9cf57233874f 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1111,6 +1111,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
void bpf_jit_compile(struct bpf_prog *prog);
bool bpf_jit_needs_zext(void);
bool bpf_jit_inlines_helper_call(s32 imm);
+bool bpf_jit_inlines_kfunc_call(s32 imm);
bool bpf_jit_supports_subprog_tailcalls(void);
bool bpf_jit_supports_percpu_insn(void);
bool bpf_jit_supports_kfunc_call(void);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 14d9288441f2..daa3ab458c8a 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2965,6 +2965,17 @@ bool __weak bpf_jit_inlines_helper_call(s32 imm)
return false;
}
+/* Return true if the JIT inlines the call to the kfunc corresponding to
+ * the imm.
+ *
+ * The verifier will not patch the insn->imm for the call to the helper if
+ * this returns true.
+ */
+bool __weak bpf_jit_inlines_kfunc_call(s32 imm)
+{
+ return false;
+}
+
/* Return TRUE if the JIT backend supports mixing bpf2bpf and tailcalls. */
bool __weak bpf_jit_supports_subprog_tailcalls(void)
{
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 751c150f9e1c..9f1a51bdb365 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -23,6 +23,10 @@
#include <linux/btf_ids.h>
#include <linux/bpf_mem_alloc.h>
#include <linux/kasan.h>
+#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
+#include <vdso/datapage.h>
+#include <asm/vdso/vsyscall.h>
+#endif
#include "../../lib/kstrtox.h"
@@ -3057,6 +3061,26 @@ __bpf_kfunc int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void __user
return ret + 1;
}
+#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
+__bpf_kfunc u64 bpf_get_cpu_cycles(void)
+{
+ const struct vdso_data *vd = __arch_get_k_vdso_data();
+
+ vd = &vd[CS_RAW];
+
+ /* CS_RAW clock_mode translates to VDSO_CLOCKMODE_TSC on x86 and
+ * to VDSO_CLOCKMODE_ARCHTIMER on aarch64/risc-v. We cannot use
+ * vd->clock_mode directly because it brings possible access to
+ * pages visible by user-space only via vDSO. But the constant value
+ * of 1 is exactly what we need - it works for any architecture and
+ * translates to reading of HW timecounter regardless of architecture.
+ * We still have to provide vdso_data for some architectures to avoid
+ * NULL pointer dereference.
+ */
+ return __arch_get_hw_counter(1, vd);
+}
+#endif
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(generic_btf_ids)
@@ -3149,6 +3173,9 @@ BTF_ID_FLAGS(func, bpf_get_kmem_cache)
BTF_ID_FLAGS(func, bpf_iter_kmem_cache_new, KF_ITER_NEW | KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_iter_kmem_cache_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLEEPABLE)
BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE)
+#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
+BTF_ID_FLAGS(func, bpf_get_cpu_cycles, KF_FASTCALL)
+#endif
BTF_KFUNCS_END(common_btf_ids)
static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1c4ebb326785..dbfad4457bef 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16407,6 +16407,24 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
}
}
+/* True if fixup_kfunc_call() replaces calls to kfunc number 'imm',
+ * replacement patch is presumed to follow bpf_fastcall contract
+ * (see mark_fastcall_pattern_for_call() below).
+ */
+static bool verifier_inlines_kfunc_call(struct bpf_verifier_env *env, s32 imm)
+{
+ const struct bpf_kfunc_desc *desc = find_kfunc_desc(env->prog, imm, 0);
+
+ if (!env->prog->jit_requested)
+ return false;
+
+ if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
+ desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast])
+ return true;
+
+ return false;
+}
+
/* Same as helper_fastcall_clobber_mask() but for kfuncs, see comment above */
static u32 kfunc_fastcall_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
{
@@ -16534,7 +16552,10 @@ static void mark_fastcall_pattern_for_call(struct bpf_verifier_env *env,
return;
clobbered_regs_mask = kfunc_fastcall_clobber_mask(&meta);
- can_be_inlined = is_fastcall_kfunc_call(&meta);
+ can_be_inlined = is_fastcall_kfunc_call(&meta) &&
+ (verifier_inlines_kfunc_call(env, call->imm) ||
+ (meta.btf == btf_vmlinux &&
+ bpf_jit_inlines_kfunc_call(call->imm)));
}
if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
@@ -20541,6 +20562,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
struct bpf_insn *insn_buf, int insn_idx, int *cnt)
{
const struct bpf_kfunc_desc *desc;
+ s32 imm = insn->imm;
if (!insn->imm) {
verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
@@ -20564,7 +20586,18 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn->imm = BPF_CALL_IMM(desc->addr);
if (insn->off)
return 0;
- if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl] ||
+ if (verifier_inlines_kfunc_call(env, imm)) {
+ if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
+ desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
+ insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+ *cnt = 1;
+ } else {
+ verbose(env, "verifier internal error: kfunc id %d has no inline code\n",
+ desc->func_id);
+ return -EFAULT;
+ }
+
+ } else if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl] ||
desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) };
@@ -20625,10 +20658,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
__fixup_collection_insert_kfunc(&env->insn_aux_data[insn_idx], struct_meta_reg,
node_offset_reg, insn, insn_buf, cnt);
- } else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
- desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
- insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
- *cnt = 1;
} else if (is_bpf_wq_set_callback_impl_kfunc(desc->func_id)) {
struct bpf_insn ld_addrs[2] = { BPF_LD_IMM64(BPF_REG_4, (long)env->prog->aux) };
--
2.43.5
* [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper
2024-11-18 18:52 [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc Vadim Fedorenko
@ 2024-11-18 18:52 ` Vadim Fedorenko
2024-11-19 11:28 ` Peter Zijlstra
2024-11-18 18:52 ` [PATCH bpf-next v7 3/4] selftests/bpf: add selftest to check rdtsc jit Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs Vadim Fedorenko
3 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-18 18:52 UTC (permalink / raw)
To: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko
Cc: x86, bpf, Vadim Fedorenko, Martin KaFai Lau
The new helper should be used to convert cycles returned by
bpf_get_cpu_cycles() into nanoseconds.
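As a rough illustration of the conversion (the numbers below are made up
for a 2.5 GHz invariant TSC and are not taken from the patch): with
tsc_khz = 2500000 one cycle is 0.4 ns, so clocks_calc_mult_shift() may
pick something like mult = 13421773 and shift = 25 (mult / 2^shift ~= 0.4),
and the helper then computes

    ns = (cycles * mult) >> shift;
    /* e.g. cycles = 1000: (1000 * 13421773) >> 25 = 400 ns */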
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
---
v6 -> v7:
* change boot_cpu_has() -> cpu_feature_enabled() (Borislav)
v4 -> v6:
* add comment about simplified implementation (Eduard)
v4:
* change helper name to bpf_cpu_cycles_to_ns.
* hide it behind CONFIG_GENERIC_GETTIMEOFDAY to avoid exposing on
unsupported architectures.
---
arch/x86/net/bpf_jit_comp.c | 22 ++++++++++++++++++++++
arch/x86/net/bpf_jit_comp32.c | 19 +++++++++++++++++++
include/linux/bpf.h | 1 +
kernel/bpf/helpers.c | 14 +++++++++++++-
4 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 5e0c16d8bba3..2a3f7d5fdf26 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -11,6 +11,7 @@
#include <linux/bpf.h>
#include <linux/memory.h>
#include <linux/sort.h>
+#include <linux/clocksource.h>
#include <asm/extable.h>
#include <asm/ftrace.h>
#include <asm/set_memory.h>
@@ -2216,6 +2217,24 @@ st: if (is_imm8(insn->off))
break;
}
+ if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
+ imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
+ cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
+ u32 mult, shift;
+
+ clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
+ /* imul RAX, RDI, mult */
+ maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
+ EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
+ mult);
+
+ /* shr RAX, shift (which is less than 64) */
+ maybe_emit_1mod(&prog, BPF_REG_0, true);
+ EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
+
+ break;
+ }
+
func = (u8 *) __bpf_call_base + imm32;
if (src_reg == BPF_PSEUDO_CALL && tail_call_reachable) {
LOAD_TAIL_CALL_CNT_PTR(stack_depth);
@@ -3828,5 +3847,8 @@ bool bpf_jit_inlines_kfunc_call(s32 imm)
{
if (imm == BPF_CALL_IMM(bpf_get_cpu_cycles))
return true;
+ if (imm == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
+ boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ return true;
return false;
}
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 11a5c41302a3..2bc560c47c00 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -12,6 +12,7 @@
#include <linux/netdevice.h>
#include <linux/filter.h>
#include <linux/if_vlan.h>
+#include <linux/clocksource.h>
#include <asm/cacheflush.h>
#include <asm/set_memory.h>
#include <asm/nospec-branch.h>
@@ -2100,6 +2101,24 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
EMIT2(0x0F, 0x31);
break;
}
+ if (imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
+ cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
+ u32 mult, shift;
+
+ clocks_calc_mult_shift(&mult, &shift, tsc_khz,
+ USEC_PER_SEC, 0);
+
+ /* move parameter to BPF_REG_0 */
+ emit_ia32_mov_r64(true, bpf2ia32[BPF_REG_0],
+ bpf2ia32[BPF_REG_1], true, true,
+ &prog, bpf_prog->aux);
+ /* multiply parameter by mult */
+ emit_ia32_mul_i64(bpf2ia32[BPF_REG_0],
+ mult, true, &prog);
+ /* shift parameter by shift which is less than 64 */
+ emit_ia32_rsh_i64(bpf2ia32[BPF_REG_0],
+ shift, true, &prog);
+ }
err = emit_kfunc_call(bpf_prog,
image + addrs[i],
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 43a5207a1591..af47704afeaa 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3336,6 +3336,7 @@ u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
/* Inlined kfuncs */
#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
u64 bpf_get_cpu_cycles(void);
+u64 bpf_cpu_cycles_to_ns(u64 cycles);
#endif
#if defined(CONFIG_NET)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 9f1a51bdb365..ed3876aa30ad 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3079,8 +3079,19 @@ __bpf_kfunc u64 bpf_get_cpu_cycles(void)
*/
return __arch_get_hw_counter(1, vd);
}
-#endif
+__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
+{
+ const struct vdso_data *vd = __arch_get_k_vdso_data();
+
+ vd = &vd[CS_RAW];
+ /* kfunc implementation does fewer manipulations than the vDSO
+ * implementation. BPF use-case assumes two measurements are close
+ * in time and can simplify the logic.
+ */
+ return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
+}
+#endif
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(generic_btf_ids)
@@ -3175,6 +3186,7 @@ BTF_ID_FLAGS(func, bpf_iter_kmem_cache_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLE
BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE)
#if IS_ENABLED(CONFIG_GENERIC_GETTIMEOFDAY)
BTF_ID_FLAGS(func, bpf_get_cpu_cycles, KF_FASTCALL)
+BTF_ID_FLAGS(func, bpf_cpu_cycles_to_ns, KF_FASTCALL)
#endif
BTF_KFUNCS_END(common_btf_ids)
--
2.43.5
* [PATCH bpf-next v7 3/4] selftests/bpf: add selftest to check rdtsc jit
2024-11-18 18:52 [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper Vadim Fedorenko
@ 2024-11-18 18:52 ` Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs Vadim Fedorenko
3 siblings, 0 replies; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-18 18:52 UTC (permalink / raw)
To: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko
Cc: x86, bpf, Vadim Fedorenko, Martin KaFai Lau
bpf_get_cpu_cycles() is replaced with the rdtsc instruction on x86_64.
Add tests to check it.
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
---
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/verifier_cpu_cycles.c | 104 ++++++++++++++++++
2 files changed, 106 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/verifier_cpu_cycles.c
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index d9f65adb456b..6cbb8949164a 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -98,6 +98,7 @@
#include "verifier_xdp_direct_packet_access.skel.h"
#include "verifier_bits_iter.skel.h"
#include "verifier_lsm.skel.h"
+#include "verifier_cpu_cycles.skel.h"
#define MAX_ENTRIES 11
@@ -225,6 +226,7 @@ void test_verifier_xdp(void) { RUN(verifier_xdp); }
void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); }
void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); }
void test_verifier_lsm(void) { RUN(verifier_lsm); }
+void test_verifier_cpu_cycles(void) { RUN(verifier_cpu_cycles); }
void test_verifier_mtu(void)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_cpu_cycles.c b/tools/testing/selftests/bpf/progs/verifier_cpu_cycles.c
new file mode 100644
index 000000000000..88bfa7211858
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_cpu_cycles.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Inc. */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+
+extern u64 bpf_cpu_cycles_to_ns(u64 cycles) __weak __ksym;
+extern u64 bpf_get_cpu_cycles(void) __weak __ksym;
+
+SEC("syscall")
+__arch_x86_64
+__xlated("0: call kernel-function")
+__naked int bpf_rdtsc(void)
+{
+ asm volatile(
+ "call %[bpf_get_cpu_cycles];"
+ "exit"
+ :
+ : __imm(bpf_get_cpu_cycles)
+ : __clobber_all
+ );
+}
+
+SEC("syscall")
+__arch_x86_64
+/* program entry for bpf_rdtsc_jit_x86_64(), regular function prologue */
+__jited(" endbr64")
+__jited(" nopl (%rax,%rax)")
+__jited(" nopl (%rax)")
+__jited(" pushq %rbp")
+__jited(" movq %rsp, %rbp")
+__jited(" endbr64")
+/* save RDX in R11 as it will be overwritten */
+__jited(" movq %rdx, %r11")
+/* lfence may not be executed depending on cpu features */
+__jited(" {{(lfence|)}}")
+__jited(" rdtsc")
+/* combine EDX:EAX into RAX */
+__jited(" shlq ${{(32|0x20)}}, %rdx")
+__jited(" orq %rdx, %rax")
+/* restore RDX from R11 */
+__jited(" movq %r11, %rdx")
+__jited(" leave")
+__naked int bpf_rdtsc_jit_x86_64(void)
+{
+ asm volatile(
+ "call %[bpf_get_cpu_cycles];"
+ "exit"
+ :
+ : __imm(bpf_get_cpu_cycles)
+ : __clobber_all
+ );
+}
+
+SEC("syscall")
+__arch_x86_64
+__xlated("0: r1 = 42")
+__xlated("1: call kernel-function")
+__naked int bpf_cyc2ns(void)
+{
+ asm volatile(
+ "r1=0x2a;"
+ "call %[bpf_cpu_cycles_to_ns];"
+ "exit"
+ :
+ : __imm(bpf_cpu_cycles_to_ns)
+ : __clobber_all
+ );
+}
+
+SEC("syscall")
+__arch_x86_64
+/* program entry for bpf_cyc2ns_jit_x86(), regular function prologue */
+__jited(" endbr64")
+__jited(" nopl (%rax,%rax)")
+__jited(" nopl (%rax)")
+__jited(" pushq %rbp")
+__jited(" movq %rsp, %rbp")
+__jited(" endbr64")
+/* load the 64-bit cycles constant into RDI (BPF r1) */
+__jited(" movabsq $0x2a2a2a2a2a, %rdi")
+__jited(" imulq ${{.*}}, %rdi, %rax")
+__jited(" shrq ${{.*}}, %rax")
+__jited(" leave")
+__naked int bpf_cyc2ns_jit_x86(void)
+{
+ asm volatile(
+ "r1=0x2a2a2a2a2a ll;"
+ "call %[bpf_cpu_cycles_to_ns];"
+ "exit"
+ :
+ : __imm(bpf_cpu_cycles_to_ns)
+ : __clobber_all
+ );
+}
+
+void rdtsc(void)
+{
+ bpf_get_cpu_cycles();
+ bpf_cpu_cycles_to_ns(42);
+}
+
+char _license[] SEC("license") = "GPL";
--
2.43.5
* [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs
2024-11-18 18:52 [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs Vadim Fedorenko
` (2 preceding siblings ...)
2024-11-18 18:52 ` [PATCH bpf-next v7 3/4] selftests/bpf: add selftest to check rdtsc jit Vadim Fedorenko
@ 2024-11-18 18:52 ` Vadim Fedorenko
2024-11-19 11:47 ` Peter Zijlstra
3 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-18 18:52 UTC (permalink / raw)
To: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko
Cc: x86, bpf, Vadim Fedorenko, Martin KaFai Lau
The selftest provides an example of how to measure the latency of a bpf
kfunc/helper call in cycles and nanoseconds.
Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
---
.../bpf/prog_tests/test_cpu_cycles.c | 35 +++++++++++++++++++
.../selftests/bpf/progs/test_cpu_cycles.c | 25 +++++++++++++
2 files changed, 60 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/test_cpu_cycles.c
create mode 100644 tools/testing/selftests/bpf/progs/test_cpu_cycles.c
diff --git a/tools/testing/selftests/bpf/prog_tests/test_cpu_cycles.c b/tools/testing/selftests/bpf/prog_tests/test_cpu_cycles.c
new file mode 100644
index 000000000000..d7f3b66594b3
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_cpu_cycles.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Inc. */
+
+#include <test_progs.h>
+#include "test_cpu_cycles.skel.h"
+
+static void cpu_cycles(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts);
+ struct test_cpu_cycles *skel;
+ int err, pfd;
+
+ skel = test_cpu_cycles__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_cpu_cycles open and load"))
+ return;
+
+ pfd = bpf_program__fd(skel->progs.bpf_cpu_cycles);
+ if (!ASSERT_GT(pfd, 0, "test_cpu_cycles fd"))
+ goto fail;
+
+ err = bpf_prog_test_run_opts(pfd, &opts);
+ if (!ASSERT_OK(err, "test_cpu_cycles test run"))
+ goto fail;
+
+ ASSERT_NEQ(skel->bss->cycles, 0, "test_cpu_cycles 0 cycles");
+ ASSERT_NEQ(skel->bss->ns, 0, "test_cpu_cycles 0 ns");
+fail:
+ test_cpu_cycles__destroy(skel);
+}
+
+void test_cpu_cycles(void)
+{
+ if (test__start_subtest("cpu_cycles"))
+ cpu_cycles();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_cpu_cycles.c b/tools/testing/selftests/bpf/progs/test_cpu_cycles.c
new file mode 100644
index 000000000000..48762f07ad7f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_cpu_cycles.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Inc. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+extern u64 bpf_cpu_cycles_to_ns(u64 cycles) __weak __ksym;
+extern u64 bpf_get_cpu_cycles(void) __weak __ksym;
+
+__u64 cycles, ns;
+
+SEC("syscall")
+int bpf_cpu_cycles(void)
+{
+ struct bpf_pidns_info pidns;
+ __u64 start;
+
+ start = bpf_get_cpu_cycles();
+ bpf_get_ns_current_pid_tgid(0, 0, &pidns, sizeof(struct bpf_pidns_info));
+ cycles = bpf_get_cpu_cycles() - start;
+ ns = bpf_cpu_cycles_to_ns(cycles);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.43.5
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-18 18:52 ` [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc Vadim Fedorenko
@ 2024-11-19 11:18 ` Peter Zijlstra
2024-11-19 14:29 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-19 11:18 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
> @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
> if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
> int err;
>
> + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> + EMIT3(0x0F, 0xAE, 0xE8);
> + EMIT2(0x0F, 0x31);
> + break;
> + }
TSC != cycles. Naming is bad.
* Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper
2024-11-18 18:52 ` [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper Vadim Fedorenko
@ 2024-11-19 11:28 ` Peter Zijlstra
2024-11-19 14:38 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-19 11:28 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:
> + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
> + imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
> + cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
> + u32 mult, shift;
> +
> + clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
> + /* imul RAX, RDI, mult */
> + maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
> + EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
> + mult);
> +
> + /* shr RAX, shift (which is less than 64) */
> + maybe_emit_1mod(&prog, BPF_REG_0, true);
> + EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
> +
> + break;
> + }
This is ludicrously horrible. Why are you using your own mult/shift and
not offset here instead of using the one from either sched_clock or
clocksource_tsc ?
And being totally inconsistent with your own alternative implementation
which uses the VDSO, which in turn uses clocksource_tsc:
> +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
> +{
> + const struct vdso_data *vd = __arch_get_k_vdso_data();
> +
> + vd = &vd[CS_RAW];
> + /* kfunc implementation does less manipulations than vDSO
> + * implementation. BPF use-case assumes two measurements are close
> + * in time and can simplify the logic.
> + */
> + return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
> +}
Also, if I'm not mistaken, the above is broken, you really should add
the offset, without it I don't think we guarantee the result is
monotonic.
* Re: [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs
2024-11-18 18:52 ` [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs Vadim Fedorenko
@ 2024-11-19 11:47 ` Peter Zijlstra
2024-11-19 14:45 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-19 11:47 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Vadim Fedorenko, Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Mon, Nov 18, 2024 at 10:52:45AM -0800, Vadim Fedorenko wrote:
> +int bpf_cpu_cycles(void)
> +{
> + struct bpf_pidns_info pidns;
> + __u64 start;
> +
> + start = bpf_get_cpu_cycles();
> + bpf_get_ns_current_pid_tgid(0, 0, &pidns, sizeof(struct bpf_pidns_info));
> + cycles = bpf_get_cpu_cycles() - start;
> + ns = bpf_cpu_cycles_to_ns(cycles);
> + return 0;
> +}
Oh, the intent is to use that cycles_to_ns() on deltas. That wasn't at
all clear.
Anyway, the above has more problems than just bad naming. TSC is
constant and not affected by DVFS, so depending on the DVFS state of
things your function will return wildly different readings.
Why do you think you need this?
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-19 11:18 ` Peter Zijlstra
@ 2024-11-19 14:29 ` Vadim Fedorenko
2024-11-19 16:17 ` Peter Zijlstra
0 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-19 14:29 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 19/11/2024 03:18, Peter Zijlstra wrote:
> On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
>> @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
>> if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
>> int err;
>>
>> + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
>> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
>> + EMIT3(0x0F, 0xAE, 0xE8);
>> + EMIT2(0x0F, 0x31);
>> + break;
>> + }
>
> TSC != cycles. Naming is bad.
Any suggestions?
JIT for other architectures will come after this one is merged and some
of them will be using cycles, so not too far away from the truth...
* Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper
2024-11-19 11:28 ` Peter Zijlstra
@ 2024-11-19 14:38 ` Vadim Fedorenko
2024-11-20 8:49 ` Peter Zijlstra
0 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-19 14:38 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 19/11/2024 03:28, Peter Zijlstra wrote:
> On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:
>
>> + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
>> + imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
>> + cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
>> + u32 mult, shift;
>> +
>> + clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
>> + /* imul RAX, RDI, mult */
>> + maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
>> + EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
>> + mult);
>> +
>> + /* shr RAX, shift (which is less than 64) */
>> + maybe_emit_1mod(&prog, BPF_REG_0, true);
>> + EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
>> +
>> + break;
>> + }
>
> This is ludicrously horrible. Why are you using your own mult/shift and
> not offset here instead of using the one from either sched_clock or
> clocksource_tsc ?
With X86_FEATURE_CONSTANT_TSC, tsc_khz is actually constant after
switching from tsc_early. And the very same call to
clocks_calc_mult_shift() is used to create clocksource_tsc mult and
shift constants. Unfortunately, clocksources don't have a proper API to
get the underlying info, which is why I have to calculate the shift and
mult values on my own.
> And being totally inconsistent with your own alternative implementation
> which uses the VDSO, which in turn uses clocksource_tsc:
With what I said above it is consistent with clocksource_tsc.
>
>> +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
>> +{
>> + const struct vdso_data *vd = __arch_get_k_vdso_data();
>> +
>> + vd = &vd[CS_RAW];
>> + /* kfunc implementation does less manipulations than vDSO
>> + * implementation. BPF use-case assumes two measurements are close
>> + * in time and can simplify the logic.
>> + */
>> + return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
>> +}
>
> Also, if I'm not mistaken, the above is broken, you really should add
> the offset, without it I don't think we guarantee the result is
> monotonic.
Not quite sure how a constant offset can affect the monotonic guarantee
of cycles, given that the main use case will be to calculate ns out of
small deltas. AFAIU, the offset is needed to get ns of CLOCK_MONOTONIC,
which can be affected by NTP manipulation. But in this helper we don't
follow any clock_id, we just want to calculate a nanoseconds value out of
the stable and monotonically increasing counter provided by the
architecture.
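To illustrate (a sketch, not code from the patch): with fixed mult/shift,
an offset only shifts the absolute value, so it cancels out for deltas:

    ns(t)           = ((t * mult) >> shift) + offset
    ns(t2) - ns(t1) = ((t2 * mult) >> shift) - ((t1 * mult) >> shift)
                   ~= ((t2 - t1) * mult) >> shift   /* up to rounding */

so converting the delta directly with mul_u64_u32_shr(t2 - t1, mult, shift)
gives the same result regardless of any offset.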
* Re: [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs
2024-11-19 11:47 ` Peter Zijlstra
@ 2024-11-19 14:45 ` Vadim Fedorenko
2024-11-20 8:51 ` Peter Zijlstra
0 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-19 14:45 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 19/11/2024 03:47, Peter Zijlstra wrote:
> On Mon, Nov 18, 2024 at 10:52:45AM -0800, Vadim Fedorenko wrote:
>
>> +int bpf_cpu_cycles(void)
>> +{
>> + struct bpf_pidns_info pidns;
>> + __u64 start;
>> +
>> + start = bpf_get_cpu_cycles();
>> + bpf_get_ns_current_pid_tgid(0, 0, &pidns, sizeof(struct bpf_pidns_info));
>> + cycles = bpf_get_cpu_cycles() - start;
>> + ns = bpf_cpu_cycles_to_ns(cycles);
>> + return 0;
>> +}
>
> Oh, the intent is to use that cycles_to_ns() on deltas. That wasn't at
> all clear.
Yep, that's the main use case, it was discussed in the previous
versions of the patchset.
>
> Anyway, the above has more problems than just bad naming. TSC is
> constant and not affected by DVFS, so depending on the DVFS state of
> things your function will return wildly different readings.
Why should I care about DVFS? The use case is to measure the time spent
in some code. If we replace it with bpf_ktime_get_ns(), it will also be
affected by DVFS, and it's fine. We will be able to see the difference
for different DVFS states.
> Why do you think you need this?
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-19 14:29 ` Vadim Fedorenko
@ 2024-11-19 16:17 ` Peter Zijlstra
2024-11-19 18:03 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-19 16:17 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Tue, Nov 19, 2024 at 06:29:09AM -0800, Vadim Fedorenko wrote:
> On 19/11/2024 03:18, Peter Zijlstra wrote:
> > On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
> > > @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
> > > if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
> > > int err;
> > > + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
> > > + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> > > + EMIT3(0x0F, 0xAE, 0xE8);
> > > + EMIT2(0x0F, 0x31);
> > > + break;
> > > + }
> >
> > TSC != cycles. Naming is bad.
>
> Any suggestions?
>
> JIT for other architectures will come after this one is merged and some
> of them will be using cycles, so not too far away form the truth..
bpf_get_time_stamp() ?
bpf_get_counter() ?
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-19 16:17 ` Peter Zijlstra
@ 2024-11-19 18:03 ` Vadim Fedorenko
2024-11-19 19:16 ` Andrii Nakryiko
0 siblings, 1 reply; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-19 18:03 UTC (permalink / raw)
To: Peter Zijlstra, Andrii Nakryiko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Eduard Zingerman, Thomas Gleixner, Yonghong Song, Mykola Lysenko,
x86, bpf, Martin KaFai Lau
On 19/11/2024 08:17, Peter Zijlstra wrote:
> On Tue, Nov 19, 2024 at 06:29:09AM -0800, Vadim Fedorenko wrote:
>> On 19/11/2024 03:18, Peter Zijlstra wrote:
>>> On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
>>>> @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
>>>> if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
>>>> int err;
>>>> + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
>>>> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
>>>> + EMIT3(0x0F, 0xAE, 0xE8);
>>>> + EMIT2(0x0F, 0x31);
>>>> + break;
>>>> + }
>>>
>>> TSC != cycles. Naming is bad.
>>
>> Any suggestions?
>>
>> JIT for other architectures will come after this one is merged and some
>> of them will be using cycles, so not too far away form the truth..
>
> bpf_get_time_stamp() ?
> bpf_get_counter() ?
Well, we have already been somewhere near these names [1].
[1]
https://lore.kernel.org/bpf/CAEf4BzaBNNCYaf9a4oHsB2AzYyc6JCWXpHx6jk22Btv=UAgX4A@mail.gmail.com/
bpf_get_time_stamp() doesn't really explain that the actual timestamp
will be provided by CPU hardware.
bpf_get_counter() is again too general and doesn't provide any
information about what type of counter will be returned. The more
specific name, bpf_get_cycles_counter(), was also discussed in v3
(accidentally, it didn't reach the mailing list). The quoted feedback
from Andrii is:
Bikeshedding time, but let's be consistently slightly verbose, but
readable. Given we have bpf_get_cpu_cycles_counter (which maybe we
should shorten to "bpf_get_cpu_cycles()"), we should call this
something like "bpf_cpu_cycles_to_ns()".
It might make a bit more sense to name it bpf_get_cpu_counter(), but it
still looks too general.
Honestly, I'm not a fan of renaming functions once again, so I would let
Andrii vote on the naming.
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-19 18:03 ` Vadim Fedorenko
@ 2024-11-19 19:16 ` Andrii Nakryiko
2024-11-19 19:27 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Andrii Nakryiko @ 2024-11-19 19:16 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Peter Zijlstra, Andrii Nakryiko, Borislav Petkov,
Alexei Starovoitov, Daniel Borkmann, Eduard Zingerman,
Thomas Gleixner, Yonghong Song, Mykola Lysenko, x86, bpf,
Martin KaFai Lau
On Tue, Nov 19, 2024 at 10:03 AM Vadim Fedorenko
<vadim.fedorenko@linux.dev> wrote:
>
> On 19/11/2024 08:17, Peter Zijlstra wrote:
> > On Tue, Nov 19, 2024 at 06:29:09AM -0800, Vadim Fedorenko wrote:
> >> On 19/11/2024 03:18, Peter Zijlstra wrote:
> >>> On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
> >>>> @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
> >>>> if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
> >>>> int err;
> >>>> + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
> >>>> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
> >>>> + EMIT3(0x0F, 0xAE, 0xE8);
> >>>> + EMIT2(0x0F, 0x31);
> >>>> + break;
> >>>> + }
> >>>
> >>> TSC != cycles. Naming is bad.
> >>
> >> Any suggestions?
> >>
> >> JIT for other architectures will come after this one is merged and some
> >> of them will be using cycles, so not too far away form the truth..
> >
> > bpf_get_time_stamp() ?
> > bpf_get_counter() ?
>
> Well, we have already been somewhere nearby these names [1].
>
> [1]
> https://lore.kernel.org/bpf/CAEf4BzaBNNCYaf9a4oHsB2AzYyc6JCWXpHx6jk22Btv=UAgX4A@mail.gmail.com/
>
> bpf_get_time_stamp() doesn't really explain that the actual timestamp
> will be provided by CPU hardware.
> bpf_get_counter() is again too general, doesn't provide any information
> about what type of counter will be returned. The more specific name,
> bpf_get_cycles_counter(), was also discussed in v3 (accidentally, it
> didn't reach mailing list). The quote of feedback from Andrii is:
>
> Bikeshedding time, but let's be consistently slightly verbose, but
> readable. Give nwe have bpf_get_cpu_cycles_counter (which maybe we
> should shorten to "bpf_get_cpu_cycles()"), we should call this
> something like "bpf_cpu_cycles_to_ns()".
>
> It might make a bit more sense to name it bpf_get_cpu_counter(), but it
> still looks too general.
>
> Honestly, I'm not a fan of renaming functions once again, I would let
> Andrii to vote for naming.
Let's go with bpf_get_cpu_time_counter() and bpf_cpu_time_counter_to_ns().
* Re: [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc
2024-11-19 19:16 ` Andrii Nakryiko
@ 2024-11-19 19:27 ` Vadim Fedorenko
0 siblings, 0 replies; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-19 19:27 UTC (permalink / raw)
To: Andrii Nakryiko, Peter Zijlstra
Cc: Andrii Nakryiko, Borislav Petkov, Alexei Starovoitov,
Daniel Borkmann, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 19/11/2024 11:16, Andrii Nakryiko wrote:
> On Tue, Nov 19, 2024 at 10:03 AM Vadim Fedorenko
> <vadim.fedorenko@linux.dev> wrote:
>>
>> On 19/11/2024 08:17, Peter Zijlstra wrote:
>>> On Tue, Nov 19, 2024 at 06:29:09AM -0800, Vadim Fedorenko wrote:
>>>> On 19/11/2024 03:18, Peter Zijlstra wrote:
>>>>> On Mon, Nov 18, 2024 at 10:52:42AM -0800, Vadim Fedorenko wrote:
>>>>>> @@ -2094,6 +2094,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
>>>>>> if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
>>>>>> int err;
>>>>>> + if (imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
>>>>>> + if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
>>>>>> + EMIT3(0x0F, 0xAE, 0xE8);
>>>>>> + EMIT2(0x0F, 0x31);
>>>>>> + break;
>>>>>> + }
>>>>>
>>>>> TSC != cycles. Naming is bad.
>>>>
>>>> Any suggestions?
>>>>
>>>> JIT for other architectures will come after this one is merged and some
>>>> of them will be using cycles, so not too far away form the truth..
>>>
>>> bpf_get_time_stamp() ?
>>> bpf_get_counter() ?
>>
>> Well, we have already been somewhere nearby these names [1].
>>
>> [1]
>> https://lore.kernel.org/bpf/CAEf4BzaBNNCYaf9a4oHsB2AzYyc6JCWXpHx6jk22Btv=UAgX4A@mail.gmail.com/
>>
>> bpf_get_time_stamp() doesn't really explain that the actual timestamp
>> will be provided by CPU hardware.
>> bpf_get_counter() is again too general, doesn't provide any information
>> about what type of counter will be returned. The more specific name,
>> bpf_get_cycles_counter(), was also discussed in v3 (accidentally, it
>> didn't reach mailing list). The quote of feedback from Andrii is:
>>
>> Bikeshedding time, but let's be consistently slightly verbose, but
>> readable. Give nwe have bpf_get_cpu_cycles_counter (which maybe we
>> should shorten to "bpf_get_cpu_cycles()"), we should call this
>> something like "bpf_cpu_cycles_to_ns()".
>>
>> It might make a bit more sense to name it bpf_get_cpu_counter(), but it
>> still looks too general.
>>
>> Honestly, I'm not a fan of renaming functions once again, I would let
>> Andrii to vote for naming.
>
> Let's go with bpf_get_cpu_time_counter() and bpf_cpu_time_counter_to_ns().
Ok, sure. @Peter are you OK with these names?
* Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper
2024-11-19 14:38 ` Vadim Fedorenko
@ 2024-11-20 8:49 ` Peter Zijlstra
2024-11-20 13:39 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-20 8:49 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Tue, Nov 19, 2024 at 06:38:51AM -0800, Vadim Fedorenko wrote:
> On 19/11/2024 03:28, Peter Zijlstra wrote:
> > On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:
> >
> > > + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
> > > + imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
> > > + cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
> > > + u32 mult, shift;
> > > +
> > > + clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
> > > + /* imul RAX, RDI, mult */
> > > + maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
> > > + EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
> > > + mult);
> > > +
> > > + /* shr RAX, shift (which is less than 64) */
> > > + maybe_emit_1mod(&prog, BPF_REG_0, true);
> > > + EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
> > > +
> > > + break;
> > > + }
> >
> > This is ludicrously horrible. Why are you using your own mult/shift and
> > not offset here instead of using the one from either sched_clock or
> > clocksource_tsc ?
>
> With X86_FEATURE_CONSTANT_TSC, tsc_khz is actually constant after
> switching from tsc_early. And the very same call to
> clocks_calc_mult_shift() is used to create clocksource_tsc mult and
> shift constants. Unfortunately, clocksources don't have proper API to
> get the underlying info, that's why I have to calculate shift and mult
> values on my own.
There is cyc2ns_read_begin() / cyc2ns_read_end(), and you can use the
VDSO thing you do below.
> > And being totally inconsistent with your own alternative implementation
> > which uses the VDSO, which in turn uses clocksource_tsc:
>
> With what I said above it is consistent with clocksource_tsc.
>
> >
> > > +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
> > > +{
> > > + const struct vdso_data *vd = __arch_get_k_vdso_data();
> > > +
> > > + vd = &vd[CS_RAW];
> > > + /* kfunc implementation does less manipulations than vDSO
> > > + * implementation. BPF use-case assumes two measurements are close
> > > + * in time and can simplify the logic.
> > > + */
> > > + return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
> > > +}
> >
> > Also, if I'm not mistaken, the above is broken, you really should add
> > the offset, without it I don't think we guarantee the result is
> > monotonic.
>
> Not quite sure how constant offset can affect monotonic guarantee of
> cycles, given that the main use case will be to calculate ns out of
> small deltas?
Well, when I read this patch I didn't know, because your changelogs
don't mention anything at all.
* Re: [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs
2024-11-19 14:45 ` Vadim Fedorenko
@ 2024-11-20 8:51 ` Peter Zijlstra
2024-11-20 17:19 ` Vadim Fedorenko
0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2024-11-20 8:51 UTC (permalink / raw)
To: Vadim Fedorenko
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On Tue, Nov 19, 2024 at 06:45:57AM -0800, Vadim Fedorenko wrote:
> On 19/11/2024 03:47, Peter Zijlstra wrote:
> > On Mon, Nov 18, 2024 at 10:52:45AM -0800, Vadim Fedorenko wrote:
> >
> > > +int bpf_cpu_cycles(void)
> > > +{
> > > + struct bpf_pidns_info pidns;
> > > + __u64 start;
> > > +
> > > + start = bpf_get_cpu_cycles();
> > > + bpf_get_ns_current_pid_tgid(0, 0, &pidns, sizeof(struct bpf_pidns_info));
> > > + cycles = bpf_get_cpu_cycles() - start;
> > > + ns = bpf_cpu_cycles_to_ns(cycles);
> > > + return 0;
> > > +}
> >
> > Oh, the intent is to use that cycles_to_ns() on deltas. That wasn't at
> > all clear.
>
> Yep, that's the main use case, it was discussed in the previous
> versions of the patchset.
Should bloody well be in the changelogs then. As is I'm tempted to NAK
the entire series because there is not a single word on WHY for any of
this.
> > Anyway, the above has more problems than just bad naming. TSC is
> > constant and not affected by DVFS, so depending on the DVFS state of
> > things your function will return wildly different readings.
>
> Why should I care about DVFS? The use case is to measure the time spent
> in some code. If we replace it with bpf_ktime_get_ns(), it will also be
> affected by DVFS, and it's fine. We will be able to see the difference
> for different DVFS states.
Again, this goes to usage, why do you want this, what are you going to
do with the results?
Run-to-run numbers will be absolutely useless because of DVFS.
* Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper
2024-11-20 8:49 ` Peter Zijlstra
@ 2024-11-20 13:39 ` Vadim Fedorenko
0 siblings, 0 replies; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-20 13:39 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 20/11/2024 00:49, Peter Zijlstra wrote:
> On Tue, Nov 19, 2024 at 06:38:51AM -0800, Vadim Fedorenko wrote:
>> On 19/11/2024 03:28, Peter Zijlstra wrote:
>>> On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:
>>>
>>>> + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
>>>> + imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
>>>> + cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
>>>> + u32 mult, shift;
>>>> +
>>>> + clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
>>>> + /* imul RAX, RDI, mult */
>>>> + maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
>>>> + EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
>>>> + mult);
>>>> +
>>>> + /* shr RAX, shift (which is less than 64) */
>>>> + maybe_emit_1mod(&prog, BPF_REG_0, true);
>>>> + EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
>>>> +
>>>> + break;
>>>> + }
>>>
>>> This is ludicrously horrible. Why are you using your own mult/shift and
>>> not offset here instead of using the one from either sched_clock or
>>> clocksource_tsc ?
>>
>> With X86_FEATURE_CONSTANT_TSC, tsc_khz is actually constant after
>> switching from tsc_early. And the very same call to
>> clocks_calc_mult_shift() is used to create clocksource_tsc mult and
>> shift constants. Unfortunately, clocksources don't have proper API to
>> get the underlying info, that's why I have to calculate shift and mult
>> values on my own.
>
> There is cyc2ns_read_begin() / cyc2ns_read_end(), and you can use the
> VDSO thing you do below.
Looks like I missed the arch-specific implementation. Thanks, I'll use it
in the next version.
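For reference, a sketch of what that might look like (assuming the
cyc2ns_read_begin()/cyc2ns_read_end() API and the struct cyc2ns_data
fields from arch/x86; untested, and the offset would still be left out
for the delta use case):

    struct cyc2ns_data data;
    u64 ns;

    cyc2ns_read_begin(&data);
    ns = mul_u64_u32_shr(cycles, data.cyc2ns_mul, data.cyc2ns_shift);
    cyc2ns_read_end();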
>>> And being totally inconsistent with your own alternative implementation
>>> which uses the VDSO, which in turn uses clocksource_tsc:
>>
>> With what I said above it is consistent with clocksource_tsc.
>>
>>>
>>>> +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
>>>> +{
>>>> + const struct vdso_data *vd = __arch_get_k_vdso_data();
>>>> +
>>>> + vd = &vd[CS_RAW];
>>>> + /* kfunc implementation does less manipulations than vDSO
>>>> + * implementation. BPF use-case assumes two measurements are close
>>>> + * in time and can simplify the logic.
>>>> + */
>>>> + return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
>>>> +}
>>>
>>> Also, if I'm not mistaken, the above is broken, you really should add
>>> the offset, without it I don't think we guarantee the result is
>>> monotonic.
>>
>> Not quite sure how constant offset can affect monotonic guarantee of
>> cycles, given that the main use case will be to calculate ns out of
>> small deltas?
>
> Well, when I read this patch I didn't know, because your changelogs
> don't mention anything at all.
Fair, I'll improve commit message in v8, thanks.
* Re: [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs
2024-11-20 8:51 ` Peter Zijlstra
@ 2024-11-20 17:19 ` Vadim Fedorenko
0 siblings, 0 replies; 19+ messages in thread
From: Vadim Fedorenko @ 2024-11-20 17:19 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Borislav Petkov, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Eduard Zingerman, Thomas Gleixner, Yonghong Song,
Mykola Lysenko, x86, bpf, Martin KaFai Lau
On 20/11/2024 00:51, Peter Zijlstra wrote:
> On Tue, Nov 19, 2024 at 06:45:57AM -0800, Vadim Fedorenko wrote:
>> On 19/11/2024 03:47, Peter Zijlstra wrote:
>>> On Mon, Nov 18, 2024 at 10:52:45AM -0800, Vadim Fedorenko wrote:
>>>
>>>> +int bpf_cpu_cycles(void)
>>>> +{
>>>> + struct bpf_pidns_info pidns;
>>>> + __u64 start;
>>>> +
>>>> + start = bpf_get_cpu_cycles();
>>>> + bpf_get_ns_current_pid_tgid(0, 0, &pidns, sizeof(struct bpf_pidns_info));
>>>> + cycles = bpf_get_cpu_cycles() - start;
>>>> + ns = bpf_cpu_cycles_to_ns(cycles);
>>>> + return 0;
>>>> +}
>>>
>>> Oh, the intent is to use that cycles_to_ns() on deltas. That wasn't at
>>> all clear.
>>
>> Yep, that's the main use case, it was discussed in the previous
>> versions of the patchset.
>
> Should bloody well be in the changelogs then. As is I'm tempted to NAK
> the entire series because there is not a single word on WHY for any of
> this.
Sure, I'll add this info in the next version.
>>> Anyway, the above has more problems than just bad naming. TSC is
>>> constant and not affected by DVFS, so depending on the DVFS state of
>>> things your function will return wildly different readings.
>>
>> Why should I care about DVFS? The use case is to measure the time spent
>> in some code. If we replace it with bpf_ktime_get_ns(), it will also be
>> affected by DVFS, and it's fine. We will be able to see the difference
>> for different DVFS states.
>
> Again, this goes to usage, why do you want this, what are you going to
> do with the results?
>
> Run-ro-run numbers will be absolutely useless because of DVFS.
We do a lot of measurements of bpf programs and kernel stack functions
at scale. We can filter out variations due to DVFS as well as slice the
results by HW generation, etc. In general, we do see benefit from the
values we gather. The goal of this patchset is to reduce the overhead
added by bpf_ktime_get_ns(). Andrii has already shown the benefit in [1].
TL;DR - it's about 35-40% faster than a pair of bpf_ktime_get_ns() calls,
which helps to save a lot of CPU at scale.
[1]
https://lore.kernel.org/bpf/CAEf4BzaRb+fUK17wrj4sWnYM5oKxTvwZC=U-GjvsdUtF94PqrA@mail.gmail.com/
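For context, the measurement pattern being optimized is roughly this
(sketch only, not code from the series):

    /* today: two full helper calls on the hot path */
    __u64 start = bpf_ktime_get_ns();
    /* ... traced code ... */
    __u64 delta_ns = bpf_ktime_get_ns() - start;

With this series the two reads become inlined rdtsc instructions and the
delta is converted to nanoseconds with a cheap mult/shift.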
Thread overview: 19+ messages
2024-11-18 18:52 [PATCH bpf-next v7 0/4] bpf: add cpu cycles kfuncs Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 1/4] bpf: add bpf_get_cpu_cycles kfunc Vadim Fedorenko
2024-11-19 11:18 ` Peter Zijlstra
2024-11-19 14:29 ` Vadim Fedorenko
2024-11-19 16:17 ` Peter Zijlstra
2024-11-19 18:03 ` Vadim Fedorenko
2024-11-19 19:16 ` Andrii Nakryiko
2024-11-19 19:27 ` Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper Vadim Fedorenko
2024-11-19 11:28 ` Peter Zijlstra
2024-11-19 14:38 ` Vadim Fedorenko
2024-11-20 8:49 ` Peter Zijlstra
2024-11-20 13:39 ` Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 3/4] selftests/bpf: add selftest to check rdtsc jit Vadim Fedorenko
2024-11-18 18:52 ` [PATCH bpf-next v7 4/4] selftests/bpf: add usage example for cpu cycles kfuncs Vadim Fedorenko
2024-11-19 11:47 ` Peter Zijlstra
2024-11-19 14:45 ` Vadim Fedorenko
2024-11-20 8:51 ` Peter Zijlstra
2024-11-20 17:19 ` Vadim Fedorenko