* [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part
@ 2024-07-19 11:00 Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list Xu Kuohai
` (9 more replies)
0 siblings, 10 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
An LSM BPF prog may cause a kernel panic when it returns an unexpected
value, such as a positive value from the file_alloc_security hook.
To fix it, series [1] refactored the LSM hook return values and added a
BPF return value check on top of that. Since the LSM hook refactoring
and the BPF return value checking patches are not closely related, this
series splits the BPF-related patches out of [1].
v2:
- Update Shung-Hsi's patch with [3]
v1: https://lore.kernel.org/bpf/20240719081749.769748-1-xukuohai@huaweicloud.com/
Changes to [1]:
1. Extend the LSM disabled hook list to include the hooks refactored in [1],
to avoid depending on the hook return value refactoring patches.
2. Replace the special case patch for bitwise AND on [-1, 0] with Shung-Hsi's
general bitwise AND improvement patch [2].
3. Remove unused patches.
[1] https://lore.kernel.org/bpf/20240711111908.3817636-1-xukuohai@huaweicloud.com
https://lore.kernel.org/bpf/20240711113828.3818398-1-xukuohai@huaweicloud.com
[2] https://lore.kernel.org/bpf/ykuhustu7vt2ilwhl32kj655xfdgdlm2xkl5rff6tw2ycksovp@ss2n4gpjysnw
[3] https://lore.kernel.org/bpf/20240719081702.137173-1-shung-hsi.yu@suse.com/
Shung-Hsi Yu (1):
bpf, verifier: improve signed ranges inference for BPF_AND
Xu Kuohai (8):
bpf, lsm: Add disabled BPF LSM hook list
bpf, lsm: Add check for BPF LSM return value
bpf: Prevent tail call between progs attached to different hooks
bpf: Fix compare error in function retval_range_within
selftests/bpf: Avoid load failure for token_lsm.c
selftests/bpf: Add return value checks for failed tests
selftests/bpf: Add test for lsm tail call
selftests/bpf: Add verifier tests for bpf lsm
include/linux/bpf.h | 2 +
include/linux/bpf_lsm.h | 8 +
kernel/bpf/bpf_lsm.c | 65 ++++++-
kernel/bpf/btf.c | 5 +-
kernel/bpf/core.c | 21 ++-
kernel/bpf/verifier.c | 139 ++++++++++----
.../selftests/bpf/prog_tests/test_lsm.c | 46 ++++-
.../selftests/bpf/prog_tests/verifier.c | 2 +
tools/testing/selftests/bpf/progs/err.h | 10 +
.../selftests/bpf/progs/lsm_tailcall.c | 34 ++++
.../selftests/bpf/progs/test_sig_in_xattr.c | 4 +
.../bpf/progs/test_verify_pkcs7_sig.c | 8 +-
tools/testing/selftests/bpf/progs/token_lsm.c | 4 +-
.../bpf/progs/verifier_global_subprogs.c | 7 +-
.../selftests/bpf/progs/verifier_lsm.c | 178 ++++++++++++++++++
15 files changed, 486 insertions(+), 47 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/lsm_tailcall.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_lsm.c
--
2.30.2
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 2/9] bpf, lsm: Add check for BPF LSM return value Xu Kuohai
` (8 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
Add a disabled hook list for BPF LSM. Progs attached to the listed
hooks will be rejected by the verifier.
Suggested-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
kernel/bpf/bpf_lsm.c | 31 +++++++++++++++++++++++++++++--
1 file changed, 29 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 08a338e1f231..1f596ad6257c 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -36,6 +36,24 @@ BTF_SET_START(bpf_lsm_hooks)
#undef LSM_HOOK
BTF_SET_END(bpf_lsm_hooks)
+BTF_SET_START(bpf_lsm_disabled_hooks)
+BTF_ID(func, bpf_lsm_vm_enough_memory)
+BTF_ID(func, bpf_lsm_inode_need_killpriv)
+BTF_ID(func, bpf_lsm_inode_getsecurity)
+BTF_ID(func, bpf_lsm_inode_listsecurity)
+BTF_ID(func, bpf_lsm_inode_copy_up_xattr)
+BTF_ID(func, bpf_lsm_getselfattr)
+BTF_ID(func, bpf_lsm_getprocattr)
+BTF_ID(func, bpf_lsm_setprocattr)
+#ifdef CONFIG_KEYS
+BTF_ID(func, bpf_lsm_key_getsecurity)
+#endif
+#ifdef CONFIG_AUDIT
+BTF_ID(func, bpf_lsm_audit_rule_match)
+#endif
+BTF_ID(func, bpf_lsm_ismaclabel)
+BTF_SET_END(bpf_lsm_disabled_hooks)
+
/* List of LSM hooks that should operate on 'current' cgroup regardless
* of function signature.
*/
@@ -97,15 +115,24 @@ void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
const struct bpf_prog *prog)
{
+ u32 btf_id = prog->aux->attach_btf_id;
+ const char *func_name = prog->aux->attach_func_name;
+
if (!prog->gpl_compatible) {
bpf_log(vlog,
"LSM programs must have a GPL compatible license\n");
return -EINVAL;
}
- if (!btf_id_set_contains(&bpf_lsm_hooks, prog->aux->attach_btf_id)) {
+ if (btf_id_set_contains(&bpf_lsm_disabled_hooks, btf_id)) {
+ bpf_log(vlog, "attach_btf_id %u points to disabled hook %s\n",
+ btf_id, func_name);
+ return -EINVAL;
+ }
+
+ if (!btf_id_set_contains(&bpf_lsm_hooks, btf_id)) {
bpf_log(vlog, "attach_btf_id %u points to wrong type name %s\n",
- prog->aux->attach_btf_id, prog->aux->attach_func_name);
+ btf_id, func_name);
return -EINVAL;
}
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 2/9] bpf, lsm: Add check for BPF LSM return value
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 3/9] bpf: Prevent tail call between progs attached to different hooks Xu Kuohai
` (7 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
A bpf prog attached to the file_alloc_security hook that returns a
positive number causes a kernel panic.
This happens because the file system cannot filter out the positive
number returned by the LSM prog using IS_ERR, and misinterprets this
positive number as a file pointer.
Given that the file_alloc_security hook never returned a positive number
before the introduction of BPF LSM, and that other BPF LSM hooks may
encounter similar issues, this patch adds an LSM return value check in
the verifier to ensure no unexpected value is returned.
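To illustrate why a positive value slips through, here is a minimal
user-space sketch (the MAX_ERRNO and IS_ERR_VALUE definitions follow the
kernel's; the rest is simplified for illustration, not kernel code):

#include <stdio.h>

#define MAX_ERRNO	4095
#define IS_ERR_VALUE(x)	((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static void *ERR_PTR(long error)
{
	return (void *)error;
}

static int IS_ERR(const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}

int main(void)
{
	/* a negative errno is recognized as an error pointer ... */
	printf("%d\n", IS_ERR(ERR_PTR(-22)));	/* prints 1 */
	/* ... but a positive return value such as 1 is not, so the
	 * caller would go on to dereference the bogus pointer 0x1
	 */
	printf("%d\n", IS_ERR(ERR_PTR(1)));	/* prints 0 */
	return 0;
}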
Fixes: 520b7aa00d8c ("bpf: lsm: Initialize the BPF LSM hooks")
Reported-by: Xin Liu <liuxin350@huawei.com>
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
---
include/linux/bpf.h | 1 +
include/linux/bpf_lsm.h | 8 ++++++
kernel/bpf/bpf_lsm.c | 34 ++++++++++++++++++++++-
kernel/bpf/btf.c | 5 +++-
kernel/bpf/verifier.c | 60 ++++++++++++++++++++++++++++++++++-------
5 files changed, 97 insertions(+), 11 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4f1d4a97b9d1..d255201035c4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -927,6 +927,7 @@ struct bpf_insn_access_aux {
};
};
struct bpf_verifier_log *log; /* for verbose logs */
+ bool is_retval; /* is accessing function return value ? */
};
static inline void
diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 1de7ece5d36d..aefcd6564251 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -9,6 +9,7 @@
#include <linux/sched.h>
#include <linux/bpf.h>
+#include <linux/bpf_verifier.h>
#include <linux/lsm_hooks.h>
#ifdef CONFIG_BPF_LSM
@@ -45,6 +46,8 @@ void bpf_inode_storage_free(struct inode *inode);
void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, bpf_func_t *bpf_func);
+int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
+ struct bpf_retval_range *range);
#else /* !CONFIG_BPF_LSM */
static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id)
@@ -78,6 +81,11 @@ static inline void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
{
}
+static inline int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
+ struct bpf_retval_range *range)
+{
+ return -EOPNOTSUPP;
+}
#endif /* CONFIG_BPF_LSM */
#endif /* _LINUX_BPF_LSM_H */
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 1f596ad6257c..6292ac5f9bd1 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -11,7 +11,6 @@
#include <linux/lsm_hooks.h>
#include <linux/bpf_lsm.h>
#include <linux/kallsyms.h>
-#include <linux/bpf_verifier.h>
#include <net/bpf_sk_storage.h>
#include <linux/bpf_local_storage.h>
#include <linux/btf_ids.h>
@@ -417,3 +416,36 @@ const struct bpf_verifier_ops lsm_verifier_ops = {
.get_func_proto = bpf_lsm_func_proto,
.is_valid_access = btf_ctx_access,
};
+
+/* hooks return 0 or 1 */
+BTF_SET_START(bool_lsm_hooks)
+#ifdef CONFIG_SECURITY_NETWORK_XFRM
+BTF_ID(func, bpf_lsm_xfrm_state_pol_flow_match)
+#endif
+#ifdef CONFIG_AUDIT
+BTF_ID(func, bpf_lsm_audit_rule_known)
+#endif
+BTF_ID(func, bpf_lsm_inode_xattr_skipcap)
+BTF_SET_END(bool_lsm_hooks)
+
+int bpf_lsm_get_retval_range(const struct bpf_prog *prog,
+ struct bpf_retval_range *retval_range)
+{
+ /* no return value range for void hooks */
+ if (!prog->aux->attach_func_proto->type)
+ return -EINVAL;
+
+ if (btf_id_set_contains(&bool_lsm_hooks, prog->aux->attach_btf_id)) {
+ retval_range->minval = 0;
+ retval_range->maxval = 1;
+ } else {
+ /* All other available LSM hooks, except task_prctl, return 0
+ * on success and negative error code on failure.
+ * To keep things simple, we only allow bpf progs to return 0
+ * or negative errno for task_prctl too.
+ */
+ retval_range->minval = -MAX_ERRNO;
+ retval_range->maxval = 0;
+ }
+ return 0;
+}
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 520f49f422fe..95426d5b634e 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6416,8 +6416,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
if (arg == nr_args) {
switch (prog->expected_attach_type) {
- case BPF_LSM_CGROUP:
case BPF_LSM_MAC:
+ /* mark we are accessing the return value */
+ info->is_retval = true;
+ fallthrough;
+ case BPF_LSM_CGROUP:
case BPF_TRACE_FEXIT:
/* When LSM programs are attached to void LSM hooks
* they use FEXIT trampolines and when attached to
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8da132a1ef28..fefa1d5d2faa 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2334,6 +2334,25 @@ static void mark_reg_unknown(struct bpf_verifier_env *env,
__mark_reg_unknown(env, regs + regno);
}
+static int __mark_reg_s32_range(struct bpf_verifier_env *env,
+ struct bpf_reg_state *regs,
+ u32 regno,
+ s32 s32_min,
+ s32 s32_max)
+{
+ struct bpf_reg_state *reg = regs + regno;
+
+ reg->s32_min_value = max_t(s32, reg->s32_min_value, s32_min);
+ reg->s32_max_value = min_t(s32, reg->s32_max_value, s32_max);
+
+ reg->smin_value = max_t(s64, reg->smin_value, s32_min);
+ reg->smax_value = min_t(s64, reg->smax_value, s32_max);
+
+ reg_bounds_sync(reg);
+
+ return reg_bounds_sanity_check(env, reg, "s32_range");
+}
+
static void __mark_reg_not_init(const struct bpf_verifier_env *env,
struct bpf_reg_state *reg)
{
@@ -5587,11 +5606,12 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
/* check access to 'struct bpf_context' fields. Supports fixed offsets only */
static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size,
enum bpf_access_type t, enum bpf_reg_type *reg_type,
- struct btf **btf, u32 *btf_id)
+ struct btf **btf, u32 *btf_id, bool *is_retval)
{
struct bpf_insn_access_aux info = {
.reg_type = *reg_type,
.log = &env->log,
+ .is_retval = false,
};
if (env->ops->is_valid_access &&
@@ -5604,6 +5624,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
* type of narrower access.
*/
*reg_type = info.reg_type;
+ *is_retval = info.is_retval;
if (base_type(*reg_type) == PTR_TO_BTF_ID) {
*btf = info.btf;
@@ -6772,6 +6793,17 @@ static int check_stack_access_within_bounds(
return grow_stack_state(env, state, -min_off /* size */);
}
+static bool get_func_retval_range(struct bpf_prog *prog,
+ struct bpf_retval_range *range)
+{
+ if (prog->type == BPF_PROG_TYPE_LSM &&
+ prog->expected_attach_type == BPF_LSM_MAC &&
+ !bpf_lsm_get_retval_range(prog, range)) {
+ return true;
+ }
+ return false;
+}
+
/* check whether memory at (regno + off) is accessible for t = (read | write)
* if t==write, value_regno is a register which value is stored into memory
* if t==read, value_regno is a register which will receive the value from memory
@@ -6876,6 +6908,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_CTX) {
+ bool is_retval = false;
+ struct bpf_retval_range range;
enum bpf_reg_type reg_type = SCALAR_VALUE;
struct btf *btf = NULL;
u32 btf_id = 0;
@@ -6891,7 +6925,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return err;
err = check_ctx_access(env, insn_idx, off, size, t, &reg_type, &btf,
- &btf_id);
+ &btf_id, &is_retval);
if (err)
verbose_linfo(env, insn_idx, "; ");
if (!err && t == BPF_READ && value_regno >= 0) {
@@ -6900,7 +6934,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
* case, we know the offset is zero.
*/
if (reg_type == SCALAR_VALUE) {
- mark_reg_unknown(env, regs, value_regno);
+ if (is_retval && get_func_retval_range(env->prog, &range)) {
+ err = __mark_reg_s32_range(env, regs, value_regno,
+ range.minval, range.maxval);
+ if (err)
+ return err;
+ } else {
+ mark_reg_unknown(env, regs, value_regno);
+ }
} else {
mark_reg_known_zero(env, regs,
value_regno);
@@ -15674,12 +15715,13 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
case BPF_PROG_TYPE_LSM:
if (env->prog->expected_attach_type != BPF_LSM_CGROUP) {
- /* Regular BPF_PROG_TYPE_LSM programs can return
- * any value.
- */
- return 0;
- }
- if (!env->prog->aux->attach_func_proto->type) {
+ /* no range found, any return value is allowed */
+ if (!get_func_retval_range(env->prog, &range))
+ return 0;
+ /* no restricted range, any return value is allowed */
+ if (range.minval == S32_MIN && range.maxval == S32_MAX)
+ return 0;
+ } else if (!env->prog->aux->attach_func_proto->type) {
/* Make sure programs that attach to void
* hooks don't try to modify return value.
*/
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 3/9] bpf: Prevent tail call between progs attached to different hooks
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 2/9] bpf, lsm: Add check for BPF LSM return value Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 4/9] bpf: Fix compare error in function retval_range_within Xu Kuohai
` (6 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
bpf progs can be attached to kernel functions, and different attached
functions may take different parameters or return different values. If
a prog attached to one kernel function tail calls a prog attached to
another kernel function, the ctx access or return value verification
could be bypassed.
For example, suppose prog1 is attached to func1, which takes only one
parameter, and prog2 is attached to func2, which takes two parameters.
Since the verifier assumes the bpf ctx passed to prog2 is constructed
based on func2's prototype, it allows prog2 to access the second
parameter from the bpf ctx passed to it. The problem is that the
verifier does not prevent prog1 from passing its bpf ctx to prog2 via
tail call. In this case, the bpf ctx passed to prog2 is constructed from
func1 instead of func2, so the assumption underlying ctx access
verification is bypassed.
As another example, suppose BPF LSM prog1 is attached to hook
file_alloc_security, and BPF LSM prog2 is attached to hook
bpf_lsm_audit_rule_known. The verifier knows the return value rules for
these two hooks: e.g. it is legal for bpf_lsm_audit_rule_known to return
the positive number 1, but illegal for file_alloc_security to return a
positive number. So the verifier allows prog2 to return the positive
number 1, but does not allow prog1 to return a positive number. The
problem is that the verifier does not prevent prog1 from calling prog2
via tail call. In this case, prog2's return value 1 will be used as the
return value for prog1's hook file_alloc_security. That is, the return
value rule is bypassed.
This patch adds a restriction on tail calls to prevent such bypasses.
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
include/linux/bpf.h | 1 +
kernel/bpf/core.c | 21 ++++++++++++++++++---
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d255201035c4..bf71edb260cd 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -294,6 +294,7 @@ struct bpf_map {
* same prog type, JITed flag and xdp_has_frags flag.
*/
struct {
+ const struct btf_type *attach_func_proto;
spinlock_t lock;
enum bpf_prog_type type;
bool jited;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 7ee62e38faf0..4e07cc057d6f 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2302,6 +2302,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
{
enum bpf_prog_type prog_type = resolve_prog_type(fp);
bool ret;
+ struct bpf_prog_aux *aux = fp->aux;
if (fp->kprobe_override)
return false;
@@ -2311,7 +2312,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
* in the case of devmap and cpumap). Until device checks
* are implemented, prohibit adding dev-bound programs to program maps.
*/
- if (bpf_prog_is_dev_bound(fp->aux))
+ if (bpf_prog_is_dev_bound(aux))
return false;
spin_lock(&map->owner.lock);
@@ -2321,12 +2322,26 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
*/
map->owner.type = prog_type;
map->owner.jited = fp->jited;
- map->owner.xdp_has_frags = fp->aux->xdp_has_frags;
+ map->owner.xdp_has_frags = aux->xdp_has_frags;
+ map->owner.attach_func_proto = aux->attach_func_proto;
ret = true;
} else {
ret = map->owner.type == prog_type &&
map->owner.jited == fp->jited &&
- map->owner.xdp_has_frags == fp->aux->xdp_has_frags;
+ map->owner.xdp_has_frags == aux->xdp_has_frags;
+ if (ret &&
+ map->owner.attach_func_proto != aux->attach_func_proto) {
+ switch (prog_type) {
+ case BPF_PROG_TYPE_TRACING:
+ case BPF_PROG_TYPE_LSM:
+ case BPF_PROG_TYPE_EXT:
+ case BPF_PROG_TYPE_STRUCT_OPS:
+ ret = false;
+ break;
+ default:
+ break;
+ }
+ }
}
spin_unlock(&map->owner.lock);
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 4/9] bpf: Fix compare error in function retval_range_within
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (2 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 3/9] bpf: Prevent tail call between progs attached to different hooks Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND Xu Kuohai
` (5 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
After the verifier started checking the lsm hook return range, the test
case "test_progs -t test_lsm" failed with the following log:
libbpf: prog 'test_int_hook': BPF program load failed: Invalid argument
libbpf: prog 'test_int_hook': -- BEGIN PROG LOAD LOG --
0: R1=ctx() R10=fp0
; int BPF_PROG(test_int_hook, struct vm_area_struct *vma, @ lsm.c:89
0: (79) r0 = *(u64 *)(r1 +24) ; R0_w=scalar(smin=smin32=-4095,smax=smax32=0) R1=ctx()
[...]
24: (b4) w0 = -1 ; R0_w=0xffffffff
; int BPF_PROG(test_int_hook, struct vm_area_struct *vma, @ lsm.c:89
25: (95) exit
At program exit the register R0 has smin=4294967295 smax=4294967295 should have been in [-4095, 0]
It can be seen that the instruction "w0 = -1" zero extends -1 into the
64-bit register r0, setting both the smin and smax values of r0 to
4294967295. This results in a false rejection when r0 is checked against
the range [-4095, 0].
Given that bpf lsm hooks do not return 64-bit values, this patch fixes
the issue by changing the comparison between r0 and the return range
from a 64-bit operation to a 32-bit operation for bpf lsm.
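A small user-space illustration of the zero extension (not part of the
patch):

#include <stdio.h>

int main(void)
{
	/* "w0 = -1" zero extends the 32-bit result into r0 */
	unsigned long long r0 = (unsigned int)-1;	/* 0x00000000ffffffff */

	printf("%lld\n", (long long)r0);	/* s64 view: 4294967295, fails [-4095, 0] */
	printf("%d\n", (int)r0);		/* s32 view: -1, passes [-4095, 0] */
	return 0;
}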
Fixes: 8fa4ecd49b81 ("bpf: enforce exact retval range on subprog/callback exit")
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
---
kernel/bpf/verifier.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fefa1d5d2faa..78104bd85274 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9964,9 +9964,13 @@ static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env)
return is_rbtree_lock_required_kfunc(kfunc_btf_id);
}
-static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg)
+static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg,
+ bool return_32bit)
{
- return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
+ if (return_32bit)
+ return range.minval <= reg->s32_min_value && reg->s32_max_value <= range.maxval;
+ else
+ return range.minval <= reg->smin_value && reg->smax_value <= range.maxval;
}
static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
@@ -10003,8 +10007,8 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
if (err)
return err;
- /* enforce R0 return value range */
- if (!retval_range_within(callee->callback_ret_range, r0)) {
+ /* enforce R0 return value range, and bpf_callback_t returns 64bit */
+ if (!retval_range_within(callee->callback_ret_range, r0, false)) {
verbose_invalid_scalar(env, r0, callee->callback_ret_range,
"At callback return", "R0");
return -EINVAL;
@@ -15610,6 +15614,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
int err;
struct bpf_func_state *frame = env->cur_state->frame[0];
const bool is_subprog = frame->subprogno;
+ bool return_32bit = false;
/* LSM and struct_ops func-ptr's return type could be "void" */
if (!is_subprog || frame->in_exception_callback_fn) {
@@ -15721,6 +15726,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
/* no restricted range, any return value is allowed */
if (range.minval == S32_MIN && range.maxval == S32_MAX)
return 0;
+ return_32bit = true;
} else if (!env->prog->aux->attach_func_proto->type) {
/* Make sure programs that attach to void
* hooks don't try to modify return value.
@@ -15751,7 +15757,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
if (err)
return err;
- if (!retval_range_within(range, reg)) {
+ if (!retval_range_within(range, reg, return_32bit)) {
verbose_invalid_scalar(env, reg, range, exit_ctx, reg_name);
if (!is_subprog &&
prog->expected_attach_type == BPF_LSM_CGROUP &&
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (3 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 4/9] bpf: Fix compare error in function retval_range_within Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-22 7:13 ` Eduard Zingerman
2024-07-19 11:00 ` [PATCH bpf-next v2 6/9] selftests/bpf: Avoid load failure for token_lsm.c Xu Kuohai
` (4 subsequent siblings)
9 siblings, 1 reply; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
This commit improves the BPF verifier's inference of signed ranges by
learning new signed ranges directly from the signed ranges of the
operands by doing
dst_reg->smin_value = negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value))
dst_reg->smax_value = max(dst_reg->smax_value, src_reg->smax_value)
See below for the complete explanation. The improvement is needed to prevent
verifier rejection of BPF programs like the one presented by Xu Kuohai:
SEC("lsm/bpf_map")
int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
{
if (map != (struct bpf_map *)&data_input)
return 0;
if (fmode & FMODE_WRITE)
return -EACCES;
return 0;
}
The relevant verifier log upon rejection is:
...
5: (79) r0 = *(u64 *)(r1 +8) ; R0_w=scalar() R1=ctx()
; if (fmode & FMODE_WRITE) @ test_libbpf_get_fd_by_id_opts.c:32
6: (67) r0 <<= 62 ; R0_w=scalar(smax=0x4000000000000000,umax=0xc000000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xc000000000000000))
7: (c7) r0 s>>= 63 ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
; @ test_libbpf_get_fd_by_id_opts.c:0
8: (57) r0 &= -13 ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
9: (95) exit
This sequence of instructions comes from Clang's transformation in the
DAGCombiner::SimplifySelectCC() method, which combines the "fmode &
FMODE_WRITE" check with the return statement without needing BPF_JMP at
all. See Eduard's comment for more detail on this transformation [0].
While the verifier can correctly infer that the value of r0 is in a
tight [-1, 0] range after the instruction "r0 s>>= 63", it was not able
to come up with a tight range for "r0 &= -13" (which would be [-13, 0]),
and instead inferred a very loose range:
r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
r0 &= -13 ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
The reason is that scalar*_min_max_and() mainly relies on tnum for
inferring the bounds of the register after BPF_AND; however, [-1, 0]
cannot be tracked precisely with tnum, and is effectively turned into
[0, -1] (i.e. tnum_unknown). So upon BPF_AND the resulting tnum is
equivalent to
dst_reg->var_off = tnum_and(tnum_unknown, tnum_const(-13))
From there the BPF verifier was only able to infer smin=S64_MIN and
smax=0x7ffffffffffffff3, which is outside the expected [-4095, 0] range
for return values, and thus the program was rejected.
To allow verification of such instruction patterns, update
scalar*_min_max_and() to infer signed ranges directly from the signed
ranges of the operands.
For BPF_AND, the resulting value only gains more unset '0' bits, thus as
an unsigned value it only moves towards 0x0000000000000000. The
difficulty lies in how to deal with signs. While a non-negative
(positive or zero) value simply grows smaller, a negative number can
grow smaller, but may also "underflow" and become a larger value.
To better address this situation, we split the signed ranges into the
negative case and the non-negative case, ignoring the mixed-sign cases
for now, and first consider only how to calculate smax_value.
Since negative range & negative range preserves the sign bit, we know
the result is still a negative value, thus it only moves towards S64_MIN
but never underflows, so a safe bet is to use the value in the two
ranges that is closest to 0, i.e. "max(dst_reg->smax_value,
src_reg->smax_value)". For negative range & non-negative range the sign
bit is always cleared, thus we know the resulting value is non-negative
and only moves towards 0, so a safe bet is to use the smax_value of the
non-negative range. Last but not least, non-negative range &
non-negative range still yields a non-negative value that only moves
towards 0; however, same as in the unsigned range case, the maximum is
actually capped by the lesser of the two, thus min(dst_reg->smax_value,
src_reg->smax_value).
Listing out the above reasoning as a table (dst_reg abbreviated as dst, src_reg
abbreviated as src, smax_value abbreviated as smax) we get:
| src_reg
smax = ? +---------------------------+---------------------------
| negative | non-negative
---------+--------------+---------------------------+---------------------------
| negative | max(dst->smax, src->smax) | src->smax
dst_reg +--------------+---------------------------+---------------------------
| non-negative | dst->smax | min(dst->smax, src->smax)
However, this is quite complicated and can be simplified given the
following observations:
max(dst_reg->smax_value, src_reg->smax_value) >= src_reg->smax_value
max(dst_reg->smax_value, src_reg->smax_value) >= dst_reg->smax_value
max(dst_reg->smax_value, src_reg->smax_value) >= min(dst_reg->smax_value, src_reg->smax_value)
So we could substitute the cells in the table above all with max(...), and
arrive at:
| src_reg
smax' = ? +---------------------------+---------------------------
smax'(r) >= smax(r) | negative | non-negative
---------+--------------+---------------------------+---------------------------
| negative | max(dst->smax, src->smax) | max(dst->smax, src->smax)
dst_reg +--------------+---------------------------+---------------------------
| non-negative | max(dst->smax, src->smax) | max(dst->smax, src->smax)
Meaning that simply using
max(dst_reg->smax_value, src_reg->smax_value)
to calculate the resulting smax_value would work across all sign combinations.
For smin_value, we know that non-negative range & non-negative range and
negative range & non-negative range both result in a non-negative value,
so an easy guess is to use the minimum value of a non-negative range, thus 0.
| src_reg
smin = ? +----------------------------+---------------------------
| negative | non-negative
---------+--------------+----------------------------+---------------------------
| negative | ? | 0
dst_reg +--------------+----------------------------+---------------------------
| non-negative | 0 | 0
That leaves the negative range & negative range case to be considered. We know
that negative range & negative range always yields a negative value, so a
preliminary guess would be S64_MIN. However, that guess is too imprecise to
help with the r0 <<= 62, r0 s>>= 63, r0 &= -13 pattern we're trying to deal
with here.
Further improvement comes from the observation that for negative range &
negative range, the smallest possible value must be one that has the
longest _common_ sequence of most-significant set '1' bits; thus we can
use min(dst_reg->smin_value, src_reg->smin_value) as the starting point,
as the smaller value will be the one with the shorter most-significant
set '1' bits sequence. But that alone is not enough, as we do not know
whether the rest of the bits will be set, so the safest guess is one
that clears all bits after the most-significant set '1' bits sequence,
something akin to bit_floor(), but rounding down to a negative
power-of-2 instead.
negative_bit_floor(0xffff000000000003) == 0xffff000000000000
negative_bit_floor(0xfffffb0000000000) == 0xfffff80000000000
negative_bit_floor(0xffffffffffffffff) == 0xffffffffffffffff /* -1 remains unchanged */
negative_bit_floor(0x0000fb0000000000) == 0x0000000000000000 /* non-negative values became 0 */
With negative range & negative range solved, we now have:
| src_reg
smin = ? +----------------------------+---------------------------
| negative | non-negative
---------+--------------+----------------------------+---------------------------
| negative |negative_bit_floor( | 0
| | min(dst->smin, src->smin))|
dst_reg +--------------+----------------------------+---------------------------
| non-negative | 0 | 0
This can also be simplified with some observations (quadrants refer to
the cells in the above table, numbered starting from the top-right cell
-- I, and going counter-clockwise):
A. min(dst_reg->smin_value, src_reg->smin_value) < 0 /* dst negative & src non-negative, quadrant I */
B. min(dst_reg->smin_value, src_reg->smin_value) < 0 /* dst non-negative & src negative, quadrant III */
C. min(dst_reg->smin_value, src_reg->smin_value) >= 0 /* dst non-negative & src non-negative, quadrant IV */
D. negative_bit_floor(x) s<= x /* for any x, negative_bit_floor(x) is always smaller (or equal to the original value) */
E. negative_bit_floor(y) == 0 /* when y is non-negative, i.e. y >= 0, the most-significant bit is unset, so every bit gets cleared */
Thus we can derive
negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) < 0 /* combine A and D, where dst negative & src non-negative */
negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) < 0 /* combine B and D, where dst non-negative & src negative */
negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) == 0 /* combine C and E, where dst non-negative & src non-negative */
Substitute quadrants I, III, and IV cells in the table above all with
negative_bit_floor(min(...)), we arrive at:
| src_reg
smin' = ? +----------------------------+---------------------------
smin'(r) <= smin(r) | negative | non-negative
---------+--------------+----------------------------+---------------------------
| negative |negative_bit_floor( |negative_bit_floor(
| | min(dst->smin, src->smin))| min(dst->smin, src->smin))
dst_reg +--------------+----------------------------+---------------------------
| non-negative |negative_bit_floor( |negative_bit_floor(
| | min(dst->smin, src->smin))| min(dst->smin, src->smin))
Meaning that simply using
negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value))
to calculate the resulting smin_value would work across all sign combinations.
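As a sanity check, the combined rule can be brute-forced exhaustively at
a small bit-width. Below is a standalone user-space sketch (not part of
this patch) that checks, for every pair of signed 8-bit values, that
x & y stays within [negative_bit_floor(min(x, y)), max(x, y)]:

#include <assert.h>
#include <stdio.h>

/* 8-bit analogue of the kernel negative_bit_floor() added below */
static int negative_bit_floor8(int v)
{
	unsigned int uv = (unsigned char)~v;
	int bits = 0;

	while (uv) {	/* position past the most-significant unset bit of v */
		bits++;
		uv >>= 1;
	}
	if (bits > 7)	/* v was non-negative */
		return 0;
	return ~((1 << bits) - 1);
}

int main(void)
{
	for (int x = -128; x <= 127; x++)
		for (int y = -128; y <= 127; y++) {
			int lo = negative_bit_floor8(x < y ? x : y);
			int hi = x > y ? x : y;

			assert(lo <= (x & y) && (x & y) <= hi);
		}
	printf("rule holds for all signed 8-bit pairs\n");
	return 0;
}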
Together these allow the BPF verifier to infer the signed range of the
result of a BPF_AND operation using the signed ranges of its operands,
and use that information:
r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
r0 &= -13 ; R0_w=scalar(smin=smin32=-16,smax=smax32=0,umax=0xfffffffffffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
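As a concrete cross-check against the log above (worked out here for
illustration): with r0 in [-1, 0] and the constant -13 (i.e. src's range
is [-13, -13]),

smax' = max(0, -13)                      = 0
smin' = negative_bit_floor(min(-1, -13))
      = negative_bit_floor(-13)          /* 0xfffffffffffffff3 */
      = -16                              /* 0xfffffffffffffff0 */

which matches the smin=-16, smax=0 reported by the verifier.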
[0] https://lore.kernel.org/bpf/e62e2971301ca7f2e9eb74fc500c520285cad8f5.camel@gmail.com/
Link: https://lore.kernel.org/bpf/phcqmyzeqrsfzy7sb4rwpluc37hxyz7rcajk2bqw6cjk2x7rt5@m2hl6enudv7d/
Cc: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Xu Kuohai <xukuohai@huawei.com>
---
kernel/bpf/verifier.c | 63 +++++++++++++++++++++++++++++--------------
1 file changed, 43 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 78104bd85274..d3f3a464a871 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13511,6 +13511,40 @@ static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
}
}
+/* Clears all trailing bits after the most significant unset bit.
+ *
+ * Used for estimating the minimum possible value after BPF_AND. This
+ * effectively rounds a negative value down to a negative power-of-2 value
+ * (except for -1, which just return -1) and returning 0 for non-negative
+ * values. E.g. negative32_bit_floor(0xff0ff0ff) == 0xff000000.
+ */
+static inline s32 negative32_bit_floor(s32 v)
+{
+ u8 bits = fls(~v); /* find most-significant unset bit */
+ u32 delta;
+
+ /* special case, needed because 1UL << 32 is undefined */
+ if (bits > 31)
+ return 0;
+
+ delta = (1UL << bits) - 1;
+ return ~delta;
+}
+
+/* Same as negative32_bit_floor() above, but for 64-bit signed value */
+static inline s64 negative_bit_floor(s64 v)
+{
+ u8 bits = fls64(~v); /* find most-significant unset bit */
+ u64 delta;
+
+ /* special case, needed because 1ULL << 64 is undefined */
+ if (bits > 63)
+ return 0;
+
+ delta = (1ULL << bits) - 1;
+ return ~delta;
+}
+
static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
struct bpf_reg_state *src_reg)
{
@@ -13530,16 +13564,10 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
dst_reg->u32_min_value = var32_off.value;
dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
- /* Safe to set s32 bounds by casting u32 result into s32 when u32
- * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
- */
- if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
- dst_reg->s32_min_value = dst_reg->u32_min_value;
- dst_reg->s32_max_value = dst_reg->u32_max_value;
- } else {
- dst_reg->s32_min_value = S32_MIN;
- dst_reg->s32_max_value = S32_MAX;
- }
+ /* Handle the [-1, 0] & -CONSTANT case that's difficult for tnum */
+ dst_reg->s32_min_value = negative32_bit_floor(min(dst_reg->s32_min_value,
+ src_reg->s32_min_value));
+ dst_reg->s32_max_value = max(dst_reg->s32_max_value, src_reg->s32_max_value);
}
static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
@@ -13560,16 +13588,11 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
- /* Safe to set s64 bounds by casting u64 result into s64 when u64
- * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
- */
- if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
- dst_reg->smin_value = dst_reg->umin_value;
- dst_reg->smax_value = dst_reg->umax_value;
- } else {
- dst_reg->smin_value = S64_MIN;
- dst_reg->smax_value = S64_MAX;
- }
+ /* Handle the [-1, 0] & -CONSTANT case that's difficult for tnum */
+ dst_reg->smin_value = negative_bit_floor(min(dst_reg->smin_value,
+ src_reg->smin_value));
+ dst_reg->smax_value = max(dst_reg->smax_value, src_reg->smax_value);
+
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
}
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 6/9] selftests/bpf: Avoid load failure for token_lsm.c
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (4 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 7/9] selftests/bpf: Add return value checks for failed tests Xu Kuohai
` (3 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
The compiler optimized the two bpf progs in token_lsm.c to compute the
return value directly from the bool variable, eliminating the branch for
the "return -1" path and causing an unexpected rejection:
0: R1=ctx() R10=fp0
; int BPF_PROG(bpf_token_capable, struct bpf_token *token, int cap) @ bpf_lsm.c:17
0: (b7) r6 = 0 ; R6_w=0
; if (my_pid == 0 || my_pid != (bpf_get_current_pid_tgid() >> 32)) @ bpf_lsm.c:19
1: (18) r1 = 0xffffc9000102a000 ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5)
3: (61) r7 = *(u32 *)(r1 +0) ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5) R7_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
4: (15) if r7 == 0x0 goto pc+11 ; R7_w=scalar(smin=umin=umin32=1,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
5: (67) r7 <<= 32 ; R7_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
6: (c7) r7 s>>= 32 ; R7_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
7: (85) call bpf_get_current_pid_tgid#14 ; R0=scalar()
8: (77) r0 >>= 32 ; R0_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
9: (5d) if r0 != r7 goto pc+6 ; R0_w=scalar(smin=smin32=0,smax=umax=umax32=0x7fffffff,var_off=(0x0; 0x7fffffff)) R7=scalar(smin=smin32=0,smax=umax=umax32=0x7fffffff,var_off=(0x0; 0x7fffffff))
; if (reject_capable) @ bpf_lsm.c:21
10: (18) r1 = 0xffffc9000102a004 ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5,off=4)
12: (71) r6 = *(u8 *)(r1 +0) ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5,off=4) R6_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff))
; @ bpf_lsm.c:0
13: (87) r6 = -r6 ; R6_w=scalar()
14: (67) r6 <<= 56 ; R6_w=scalar(smax=0x7f00000000000000,umax=0xff00000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xff00000000000000))
15: (c7) r6 s>>= 56 ; R6_w=scalar(smin=smin32=-128,smax=smax32=127)
; int BPF_PROG(bpf_token_capable, struct bpf_token *token, int cap) @ bpf_lsm.c:17
16: (bf) r0 = r6 ; R0_w=scalar(id=1,smin=smin32=-128,smax=smax32=127) R6_w=scalar(id=1,smin=smin32=-128,smax=smax32=127)
17: (95) exit
At program exit the register R0 has smin=-128 smax=127 should have been in [-4095, 0]
To avoid this failure, change the variable type from bool to int.
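The transformation can be summarized as follows (a sketch reconstructed
from the log above, not taken from actual compiler output):

/* source */
if (reject_capable)
	return -1;
return 0;

/* roughly what the compiler emits instead (branch-free): */
return (s8)-reject_capable;	/* r6 = -r6; r6 <<= 56; r6 s>>= 56 */
				/* provable range: [-128, 127], not {-1, 0} */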
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
tools/testing/selftests/bpf/progs/token_lsm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/token_lsm.c b/tools/testing/selftests/bpf/progs/token_lsm.c
index e4d59b6ba743..a6002d073b1b 100644
--- a/tools/testing/selftests/bpf/progs/token_lsm.c
+++ b/tools/testing/selftests/bpf/progs/token_lsm.c
@@ -8,8 +8,8 @@
char _license[] SEC("license") = "GPL";
int my_pid;
-bool reject_capable;
-bool reject_cmd;
+int reject_capable;
+int reject_cmd;
SEC("lsm/bpf_token_capable")
int BPF_PROG(token_capable, struct bpf_token *token, int cap)
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 7/9] selftests/bpf: Add return value checks for failed tests
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (5 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 6/9] selftests/bpf: Avoid load failure for token_lsm.c Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 8/9] selftests/bpf: Add test for lsm tail call Xu Kuohai
` (2 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
The return ranges of some bpf lsm test progs cannot be deduced
accurately by the verifier. To avoid erroneous rejections, add explicit
return value checks to these progs.
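The check is implemented as an inline asm macro (added to err.h below),
presumably so that the bounds stay visible to the verifier and cannot be
optimized away by the compiler. Its C equivalent is roughly (a sketch,
for illustration):

/* keep x if it is already in [-4095, 0], otherwise set it to y */
if (x < -4095 || x > 0)
	x = y;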
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
tools/testing/selftests/bpf/progs/err.h | 10 ++++++++++
tools/testing/selftests/bpf/progs/test_sig_in_xattr.c | 4 ++++
.../selftests/bpf/progs/test_verify_pkcs7_sig.c | 8 ++++++--
.../selftests/bpf/progs/verifier_global_subprogs.c | 7 ++++++-
4 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/err.h b/tools/testing/selftests/bpf/progs/err.h
index d66d283d9e59..38529779a236 100644
--- a/tools/testing/selftests/bpf/progs/err.h
+++ b/tools/testing/selftests/bpf/progs/err.h
@@ -5,6 +5,16 @@
#define MAX_ERRNO 4095
#define IS_ERR_VALUE(x) (unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO
+#define __STR(x) #x
+
+#define set_if_not_errno_or_zero(x, y) \
+({ \
+ asm volatile ("if %0 s< -4095 goto +1\n" \
+ "if %0 s<= 0 goto +1\n" \
+ "%0 = " __STR(y) "\n" \
+ : "+r"(x)); \
+})
+
static inline int IS_ERR_OR_NULL(const void *ptr)
{
return !ptr || IS_ERR_VALUE((unsigned long)ptr);
diff --git a/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c b/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
index 2f0eb1334d65..8ef6b39335b6 100644
--- a/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
+++ b/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
@@ -6,6 +6,7 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_kfuncs.h"
+#include "err.h"
char _license[] SEC("license") = "GPL";
@@ -79,5 +80,8 @@ int BPF_PROG(test_file_open, struct file *f)
ret = bpf_verify_pkcs7_signature(&digest_ptr, &sig_ptr, trusted_keyring);
bpf_key_put(trusted_keyring);
+
+ set_if_not_errno_or_zero(ret, -EFAULT);
+
return ret;
}
diff --git a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
index f42e9f3831a1..12034a73ee2d 100644
--- a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
+++ b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
@@ -11,6 +11,7 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_kfuncs.h"
+#include "err.h"
#define MAX_DATA_SIZE (1024 * 1024)
#define MAX_SIG_SIZE 1024
@@ -55,12 +56,12 @@ int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size)
ret = bpf_probe_read_kernel(&value, sizeof(value), &attr->value);
if (ret)
- return ret;
+ goto out;
ret = bpf_copy_from_user(data_val, sizeof(struct data),
(void *)(unsigned long)value);
if (ret)
- return ret;
+ goto out;
if (data_val->data_len > sizeof(data_val->data))
return -EINVAL;
@@ -84,5 +85,8 @@ int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size)
bpf_key_put(trusted_keyring);
+out:
+ set_if_not_errno_or_zero(ret, -EFAULT);
+
return ret;
}
diff --git a/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c b/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
index a9fc30ed4d73..20904cd2baa2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
+++ b/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
@@ -7,6 +7,7 @@
#include "bpf_misc.h"
#include "xdp_metadata.h"
#include "bpf_kfuncs.h"
+#include "err.h"
/* The compiler may be able to detect the access to uninitialized
memory in the routines performing out of bound memory accesses and
@@ -331,7 +332,11 @@ SEC("?lsm/bpf")
__success __log_level(2)
int BPF_PROG(arg_tag_ctx_lsm)
{
- return tracing_subprog_void(ctx) + tracing_subprog_u64(ctx);
+ int ret;
+
+ ret = tracing_subprog_void(ctx) + tracing_subprog_u64(ctx);
+ set_if_not_errno_or_zero(ret, -1);
+ return ret;
}
SEC("?struct_ops/test_1")
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 8/9] selftests/bpf: Add test for lsm tail call
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (6 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 7/9] selftests/bpf: Add return value checks for failed tests Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 9/9] selftests/bpf: Add verifier tests for bpf lsm Xu Kuohai
2024-07-23 0:50 ` [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part patchwork-bot+netdevbpf
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
Add a test for lsm tail calls to ensure that a tail call can only be
made between bpf lsm progs attached to the same hook.
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
.../selftests/bpf/prog_tests/test_lsm.c | 46 ++++++++++++++++++-
.../selftests/bpf/progs/lsm_tailcall.c | 34 ++++++++++++++
2 files changed, 79 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/lsm_tailcall.c
diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
index 16175d579bc7..2a27f3714f5c 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
@@ -12,6 +12,7 @@
#include <stdlib.h>
#include "lsm.skel.h"
+#include "lsm_tailcall.skel.h"
char *CMD_ARGS[] = {"true", NULL};
@@ -95,7 +96,7 @@ static int test_lsm(struct lsm *skel)
return 0;
}
-void test_test_lsm(void)
+static void test_lsm_basic(void)
{
struct lsm *skel = NULL;
int err;
@@ -114,3 +115,46 @@ void test_test_lsm(void)
close_prog:
lsm__destroy(skel);
}
+
+static void test_lsm_tailcall(void)
+{
+ struct lsm_tailcall *skel = NULL;
+ int map_fd, prog_fd;
+ int err, key;
+
+ skel = lsm_tailcall__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "lsm_tailcall__skel_load"))
+ goto close_prog;
+
+ map_fd = bpf_map__fd(skel->maps.jmp_table);
+ if (CHECK_FAIL(map_fd < 0))
+ goto close_prog;
+
+ prog_fd = bpf_program__fd(skel->progs.lsm_file_permission_prog);
+ if (CHECK_FAIL(prog_fd < 0))
+ goto close_prog;
+
+ key = 0;
+ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+ if (CHECK_FAIL(!err))
+ goto close_prog;
+
+ prog_fd = bpf_program__fd(skel->progs.lsm_file_alloc_security_prog);
+ if (CHECK_FAIL(prog_fd < 0))
+ goto close_prog;
+
+ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+ if (CHECK_FAIL(err))
+ goto close_prog;
+
+close_prog:
+ lsm_tailcall__destroy(skel);
+}
+
+void test_test_lsm(void)
+{
+ if (test__start_subtest("lsm_basic"))
+ test_lsm_basic();
+ if (test__start_subtest("lsm_tailcall"))
+ test_lsm_tailcall();
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm_tailcall.c b/tools/testing/selftests/bpf/progs/lsm_tailcall.c
new file mode 100644
index 000000000000..49c075ce2d4c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/lsm_tailcall.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Huawei Technologies Co., Ltd */
+
+#include "vmlinux.h"
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+ __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+ __uint(max_entries, 1);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+SEC("lsm/file_permission")
+int lsm_file_permission_prog(void *ctx)
+{
+ return 0;
+}
+
+SEC("lsm/file_alloc_security")
+int lsm_file_alloc_security_prog(void *ctx)
+{
+ return 0;
+}
+
+SEC("lsm/file_alloc_security")
+int lsm_file_alloc_security_entry(void *ctx)
+{
+ bpf_tail_call_static(ctx, &jmp_table, 0);
+ return 0;
+}
--
2.30.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH bpf-next v2 9/9] selftests/bpf: Add verifier tests for bpf lsm
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (7 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 8/9] selftests/bpf: Add test for lsm tail call Xu Kuohai
@ 2024-07-19 11:00 ` Xu Kuohai
2024-07-23 0:50 ` [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part patchwork-bot+netdevbpf
9 siblings, 0 replies; 18+ messages in thread
From: Xu Kuohai @ 2024-07-19 11:00 UTC (permalink / raw)
To: bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Eduard Zingerman, Shung-Hsi Yu, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
From: Xu Kuohai <xukuohai@huawei.com>
Add verifier tests to check bpf lsm return values and disabled hooks.
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/verifier_lsm.c | 178 ++++++++++++++++++
2 files changed, 180 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/verifier_lsm.c
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 9dc3687bc406..ff1c7da1d06e 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -88,6 +88,7 @@
#include "verifier_xdp.skel.h"
#include "verifier_xdp_direct_packet_access.skel.h"
#include "verifier_bits_iter.skel.h"
+#include "verifier_lsm.skel.h"
#define MAX_ENTRIES 11
@@ -206,6 +207,7 @@ void test_verifier_xadd(void) { RUN(verifier_xadd); }
void test_verifier_xdp(void) { RUN(verifier_xdp); }
void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); }
void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); }
+void test_verifier_lsm(void) { RUN(verifier_lsm); }
static int init_test_val_map(struct bpf_object *obj, char *map_name)
{
diff --git a/tools/testing/selftests/bpf/progs/verifier_lsm.c b/tools/testing/selftests/bpf/progs/verifier_lsm.c
new file mode 100644
index 000000000000..08251c517154
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_lsm.c
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 1")
+__success
+__naked int errno_zero_retval_test1(void *ctx)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 2")
+__success
+__naked int errno_zero_retval_test2(void *ctx)
+{
+ asm volatile (
+ "r0 = -4095;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 3")
+__success
+__naked int errno_zero_retval_test3(void *ctx)
+{
+ asm volatile (
+ "call %[bpf_get_prandom_u32];"
+ "r0 <<= 63;"
+ "r0 s>>= 63;"
+ "r0 &= -13;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32)
+ : __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 4")
+__failure __msg("R0 has smin=-4096 smax=-4096 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test4(void *ctx)
+{
+ asm volatile (
+ "r0 = -4096;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 5")
+__failure __msg("R0 has smin=4096 smax=4096 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test5(void *ctx)
+{
+ asm volatile (
+ "r0 = 4096;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 6")
+__failure __msg("R0 has smin=1 smax=1 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test6(void *ctx)
+{
+ asm volatile (
+ "r0 = 1;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/audit_rule_known")
+__description("lsm bpf prog with bool retval. test 1")
+__success
+__naked int bool_retval_test1(void *ctx)
+{
+ asm volatile (
+ "r0 = 1;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/audit_rule_known")
+__description("lsm bpf prog with bool retval. test 2")
+__success
+__success
+__naked int bool_retval_test2(void *ctx)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/audit_rule_known")
+__description("lsm bpf prog with bool retval. test 3")
+__failure __msg("R0 has smin=-1 smax=-1 should have been in [0, 1]")
+__naked int bool_retval_test3(void *ctx)
+{
+ asm volatile (
+ "r0 = -1;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/audit_rule_known")
+__description("lsm bpf prog with bool retval. test 4")
+__failure __msg("R0 has smin=2 smax=2 should have been in [0, 1]")
+__naked int bool_retval_test4(void *ctx)
+{
+ asm volatile (
+ "r0 = 2;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_free_security")
+__success
+__description("lsm bpf prog with void retval. test 1")
+__naked int void_retval_test1(void *ctx)
+{
+ asm volatile (
+ "r0 = -4096;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/file_free_security")
+__success
+__description("lsm bpf prog with void retval. test 2")
+__naked int void_retval_test2(void *ctx)
+{
+ asm volatile (
+ "r0 = 4096;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/getprocattr")
+__description("lsm disabled hook: getprocattr")
+__failure __msg("points to disabled hook")
+__naked int disabled_hook_test1(void *ctx)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/setprocattr")
+__description("lsm disabled hook: setprocattr")
+__failure __msg("points to disabled hook")
+__naked int disabled_hook_test2(void *ctx)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("lsm/ismaclabel")
+__description("lsm disabled hook: ismaclabel")
+__failure __msg("points to disabled hook")
+__naked int disabled_hook_test3(void *ctx)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
--
2.30.2
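
The shl/sra/and sequence in errno_zero_retval_test3 above mirrors the
branch-free code clang can emit for a guarded error return. Below is a
minimal sketch of such a source, assuming the single-bit-test select
fold discussed near the end of this thread; the hook, the tested bit,
and all names here are illustrative, not taken from the series:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define EACCES 13 /* matches the -13 constant in the asm above */

SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_check, struct vm_area_struct *vma,
	     unsigned long reqprot, unsigned long prot)
{
	/* Deny when bit 0 of prot is set (hypothetical policy). Clang
	 * may lower the branch to
	 *   r0 <<= 63; r0 s>>= 63;  // 0 if bit clear, -1 if set
	 *   r0 &= -13;              // 0 or -EACCES
	 * which is the pattern test 3 exercises.
	 */
	if (prot & 1)
		return -EACCES;
	return 0;
}

char _license[] SEC("license") = "GPL";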
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-19 11:00 ` [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND Xu Kuohai
@ 2024-07-22 7:13 ` Eduard Zingerman
2024-07-22 12:57 ` Shung-Hsi Yu
0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-07-22 7:13 UTC (permalink / raw)
To: Xu Kuohai, bpf, netdev, linux-security-module
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Shung-Hsi Yu, Yonghong Song, KP Singh, Roberto Sassu,
Matt Bobrowski, Yafang Shao, Ilya Leoshkevich, Jose E . Marchesi,
James Morris, Kees Cook, Brendan Jackman, Florent Revest
On Fri, 2024-07-19 at 19:00 +0800, Xu Kuohai wrote:
> From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
[...]
>
>                         | src_reg
>  smin' = ?              +----------------------------+---------------------------
>  smin'(r) <= smin(r)    | negative                   | non-negative
> ---------+--------------+----------------------------+---------------------------
>          | negative     |negative_bit_floor(         |negative_bit_floor(
>          |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))
> dst_reg  +--------------+----------------------------+---------------------------
>          | non-negative |negative_bit_floor(         |negative_bit_floor(
>          |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))
>
> Meaning that simply using
>
> negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value))
>
> to calculate the resulting smin_value would work across all sign combinations.
>
> Together these allow the BPF verifier to infer the signed range of the
> result of a BPF_AND operation using the signed ranges of its operands,
> and use that information
>
> r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
> r0 &= -13 ; R0_w=scalar(smin=smin32=-16,smax=smax32=0,umax=0xfffffffffffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
>
> [0] https://lore.kernel.org/bpf/e62e2971301ca7f2e9eb74fc500c520285cad8f5.camel@gmail.com/
>
> Link: https://lore.kernel.org/bpf/phcqmyzeqrsfzy7sb4rwpluc37hxyz7rcajk2bqw6cjk2x7rt5@m2hl6enudv7d/
> Cc: Eduard Zingerman <eddyz87@gmail.com>
> Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> Acked-by: Xu Kuohai <xukuohai@huawei.com>
> ---
I find the derivation of these new rules logical.
Also tried simple brute-force testing of this algorithm for 6-bit
signed integers, and have not found any constraint violations:
https://github.com/eddyz87/bpf-and-brute-force-check
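
A minimal standalone version of such a check could look as follows; an
illustrative sketch, not the harness from the repository above, with
negative_bit_floor reimplemented in plain C (relying on arithmetic
right shift, as the kernel does) rather than taken from the patch:

#include <assert.h>
#include <stdio.h>

/* Keep only the leading run of sign bits of v; any non-negative v
 * floors to 0. E.g. -13 (...110011b) becomes -16 (...110000b). */
static long negative_bit_floor(long v)
{
	long r = v;
	int i;

	for (i = 1; i < 64; i++)
		r &= v >> i; /* a bit survives only if all higher bits are set */
	return r;
}

int main(void)
{
	long a, c, x, y;

	/* For every pair of signed lower bounds (a, c) in 6-bit range
	 * and every x >= a, y >= c, the result of x & y must not fall
	 * below the deduced smin', whatever the operands' signs. */
	for (a = -32; a <= 31; a++)
		for (c = -32; c <= 31; c++) {
			long smin = negative_bit_floor(a < c ? a : c);

			for (x = a; x <= 31; x++)
				for (y = c; y <= 31; y++)
					assert((x & y) >= smin);
		}
	printf("no constraint violations\n");
	return 0;
}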
As a nitpick, I think that it would be good to have some shortened
version of the derivation in the comments alongside the code.
(Maybe with a link to the mailing list).
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
[...]
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-22 7:13 ` Eduard Zingerman
@ 2024-07-22 12:57 ` Shung-Hsi Yu
2024-07-22 18:47 ` Eduard Zingerman
0 siblings, 1 reply; 18+ messages in thread
From: Shung-Hsi Yu @ 2024-07-22 12:57 UTC (permalink / raw)
To: Eduard Zingerman, Xu Kuohai
Cc: bpf, netdev, linux-security-module, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
On Mon, Jul 22, 2024 at 12:13:20AM GMT, Eduard Zingerman wrote:
> On Fri, 2024-07-19 at 19:00 +0800, Xu Kuohai wrote:
> > From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
>
> [...]
>
> >
> >                         | src_reg
> >  smin' = ?              +----------------------------+---------------------------
> >  smin'(r) <= smin(r)    | negative                   | non-negative
> > ---------+--------------+----------------------------+---------------------------
> >          | negative     |negative_bit_floor(         |negative_bit_floor(
> >          |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))
> > dst_reg  +--------------+----------------------------+---------------------------
> >          | non-negative |negative_bit_floor(         |negative_bit_floor(
> >          |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))
> >
> > Meaning that simply using
> >
> > negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value))
> >
> > to calculate the resulting smin_value would work across all sign combinations.
> >
> > Together these allow the BPF verifier to infer the signed range of the
> > result of a BPF_AND operation using the signed ranges of its operands,
> > and use that information
I accidentally left the above paragraph unfinished; it should end with
... and using that information, it can be sure that the result of
[-1, 0] & -13 will be within the expected range of [-4095, 0].
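
Worked through with the rule above (a sketch reading of
negative_bit_floor: keep only the leading run of sign bits):

  min(smin(dst), smin(src)) = min(-1, -13) = -13   /* ...11110011b */
  negative_bit_floor(-13) = -16                    /* ...11110000b */

and smax stays 0, so r0 lands in [-16, 0], a subset of [-4095, 0],
matching the smin=-16, smax=0 state in the quoted log below.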
> > r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
> > r0 &= -13 ; R0_w=scalar(smin=smin32=-16,smax=smax32=0,umax=0xfffffffffffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
> >
> > [0] https://lore.kernel.org/bpf/e62e2971301ca7f2e9eb74fc500c520285cad8f5.camel@gmail.com/
> >
> > Link: https://lore.kernel.org/bpf/phcqmyzeqrsfzy7sb4rwpluc37hxyz7rcajk2bqw6cjk2x7rt5@m2hl6enudv7d/
> > Cc: Eduard Zingerman <eddyz87@gmail.com>
> > Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
> > Acked-by: Xu Kuohai <xukuohai@huawei.com>
> > ---
>
> I find the derivation of these new rules logical.
> Also tried simple brute-force testing of this algorithm for 6-bit
> signed integers, and have not found any constraint violations:
> https://github.com/eddyz87/bpf-and-brute-force-check
Thanks!
> As a nitpick, I think that it would be good to have some shortened
> version of the derivation in the comments alongside the code.
Agree it would. Will try to add a 2-4 sentence explanation.
> (Maybe with a link to the mailing list).
Adding a link to the mailing list seems unusual for a comment in
verifier.c though, and it would be quite long. That said, it would be
nice to hint that a more verbose version of the explanation exists.
Maybe an explicit "see the commit log for full details" at the end of
the added comment?
> Acked-by: Eduard Zingerman <eddyz87@gmail.com>
>
> [...]
Will send an update for this tomorrow.
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-22 12:57 ` Shung-Hsi Yu
@ 2024-07-22 18:47 ` Eduard Zingerman
2024-07-23 0:48 ` Alexei Starovoitov
0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2024-07-22 18:47 UTC (permalink / raw)
To: Shung-Hsi Yu, Xu Kuohai
Cc: bpf, netdev, linux-security-module, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
On Mon, 2024-07-22 at 20:57 +0800, Shung-Hsi Yu wrote:
[...]
> > As a nitpick, I think that it would be good to have some shortened
> > version of the derivation in the comments alongside the code.
>
> Agree it would. Will try to add a 2-4 sentence explanation.
>
> > (Maybe with a link to the mailing list).
>
> Adding a link to the mailing list seems unusual for a comment in
> verifier.c though, and it would be quite long. That said, it would be
> nice to hint that a more verbose version of the explanation exists.
>
> Maybe an explicit "see the commit log for full details" at the end of
> the added comment?
Tbh, I find bounds deduction code extremely confusing.
Imho, having lengthy comments there is a good thing.
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-22 18:47 ` Eduard Zingerman
@ 2024-07-23 0:48 ` Alexei Starovoitov
2024-07-23 6:36 ` Shung-Hsi Yu
0 siblings, 1 reply; 18+ messages in thread
From: Alexei Starovoitov @ 2024-07-23 0:48 UTC (permalink / raw)
To: Eduard Zingerman
Cc: Shung-Hsi Yu, Xu Kuohai, bpf, Network Development, LSM List,
Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
Yafang Shao, Ilya Leoshkevich, Jose E . Marchesi, James Morris,
Kees Cook, Brendan Jackman, Florent Revest
On Mon, Jul 22, 2024 at 11:48 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Mon, 2024-07-22 at 20:57 +0800, Shung-Hsi Yu wrote:
>
> [...]
>
> > > As a nitpick, I think that it would be good to have some shortened
> > > version of the derivation in the comments alongside the code.
> >
> > Agree it would. Will try to add a 2-4 sentence explanation.
> >
> > > (Maybe with a link to the mailing list).
> >
> > Adding a link to the mailing list seems unusual for a comment in
> > verifier.c though, and it would be quite long. That said, it would be
> > nice to hint that a more verbose version of the explanation exists.
> >
> > Maybe an explicit "see the commit log for full details" at the end of
> > the added comment?
>
> Tbh, I find bounds deduction code extremely confusing.
> Imho, having lengthy comments there is a good thing.
+1
Pls document the logic in the code.
commit log is good, but a good chunk of it probably should be copied
as a comment.
I've applied the rest of the patches and removed 'test 3' selftest.
Pls respin this patch and a test.
More than one test would be nice too.
* Re: [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
` (8 preceding siblings ...)
2024-07-19 11:00 ` [PATCH bpf-next v2 9/9] selftests/bpf: Add verifier tests for bpf lsm Xu Kuohai
@ 2024-07-23 0:50 ` patchwork-bot+netdevbpf
9 siblings, 0 replies; 18+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-07-23 0:50 UTC (permalink / raw)
To: Xu Kuohai
Cc: bpf, netdev, linux-security-module, ast, andrii, daniel, eddyz87,
shung-hsi.yu, yonghong.song, kpsingh, roberto.sassu,
mattbobrowski, laoar.shao, iii, jose.marchesi, jamorris, kees,
jackmanb, revest
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Fri, 19 Jul 2024 19:00:50 +0800 you wrote:
> From: Xu Kuohai <xukuohai@huawei.com>
>
> LSM BPF prog may make kernel panic when returning an unexpected value,
> such as returning positive value on hook file_alloc_security.
>
> To fix it, series [1] refactored LSM hook return values and added
> BPF return value check on top of that. Since the refactoring of LSM
> hooks and checking BPF prog return value patches is not closely related,
> this series separates BPF-related patches from [1].
>
> [...]
Here is the summary with links:
- [bpf-next,v2,1/9] bpf, lsm: Add disabled BPF LSM hook list
https://git.kernel.org/bpf/bpf-next/c/afe4588df73f
- [bpf-next,v2,2/9] bpf, lsm: Add check for BPF LSM return value
https://git.kernel.org/bpf/bpf-next/c/af980eb89f06
- [bpf-next,v2,3/9] bpf: Prevent tail call between progs attached to different hooks
https://git.kernel.org/bpf/bpf-next/c/b39ffa50b415
- [bpf-next,v2,4/9] bpf: Fix compare error in function retval_range_within
https://git.kernel.org/bpf/bpf-next/c/9e14de5b9c12
- [bpf-next,v2,5/9] bpf, verifier: improve signed ranges inference for BPF_AND
(no matching commit)
- [bpf-next,v2,6/9] selftests/bpf: Avoid load failure for token_lsm.c
https://git.kernel.org/bpf/bpf-next/c/f81ad29cdf88
- [bpf-next,v2,7/9] selftests/bpf: Add return value checks for failed tests
https://git.kernel.org/bpf/bpf-next/c/fc2baf1730f9
- [bpf-next,v2,8/9] selftests/bpf: Add test for lsm tail call
https://git.kernel.org/bpf/bpf-next/c/2f56fae88135
- [bpf-next,v2,9/9] selftests/bpf: Add verifier tests for bpf lsm
https://git.kernel.org/bpf/bpf-next/c/cc1bfd52e4ca
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-23 0:48 ` Alexei Starovoitov
@ 2024-07-23 6:36 ` Shung-Hsi Yu
2024-07-23 7:07 ` Shung-Hsi Yu
0 siblings, 1 reply; 18+ messages in thread
From: Shung-Hsi Yu @ 2024-07-23 6:36 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Eduard Zingerman, Xu Kuohai, bpf, Network Development, LSM List,
Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
Yafang Shao, Ilya Leoshkevich, Jose E . Marchesi, James Morris,
Kees Cook, Brendan Jackman, Florent Revest
On Mon, Jul 22, 2024 at 05:48:22PM GMT, Alexei Starovoitov wrote:
> On Mon, Jul 22, 2024 at 11:48 AM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > On Mon, 2024-07-22 at 20:57 +0800, Shung-Hsi Yu wrote:
> >
> > [...]
> >
> > > > As a nitpick, I think that it would be good to have some shortened
> > > > version of the derivation in the comments alongside the code.
> > >
> > > Agree it would. Will try to add a 2-4 sentence explanation.
> > >
> > > > (Maybe with a link to the mailing list).
> > >
> > > Adding a link to the mailing list seems unusual for a comment in
> > > verifier.c though, and it would be quite long. That said, it would be
> > > nice to hint that a more verbose version of the explanation exists.
> > >
> > > Maybe an explicit "see the commit log for full details" at the end of
> > > the added comment?
> >
> > Tbh, I find bounds deduction code extremely confusing.
> > Imho, having lengthy comments there is a good thing.
>
> +1
> Pls document the logic in the code.
> commit log is good, but a good chunk of it probably should be copied
> as a comment.
>
> I've applied the rest of the patches and removed 'test 3' selftest.
> Pls respin this patch and a test.
> More than one test would be nice too.
Ack. Will send another series that:
1. update current patch
- add a code comment explaining how signed ranges are deduced in
scalar*_min_max_and()
- revert 229d6db14942 "selftests/bpf: Workaround strict bpf_lsm return
value check."
2. reintroduce Xu Kuohai's "test 3" into verifier_lsm.c
3. add a few tests for BPF_AND's signed range deduction
- should it be added to verifier_bounds*.c or verifier_and.c?
I think the former, because if we later add signed range deduction for
BPF_OR as well, then tests for signed range deduction of both
BPF_AND and BPF_OR can live in the same file, which would be nice
as signed range deduction for the two is somewhat symmetric
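
One possible shape for such a test, following the naked-asm style of
verifier_lsm.c earlier in this thread; the section, expected log line,
and names below are illustrative, not from an actual respin:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"

SEC("socket")
__description("BPF_AND signed range: [-1, 0] & -13 -> [-16, 0]")
__success __log_level(2)
__msg("R0_w=scalar(smin=smin32=-16,smax=smax32=0")
__naked int and_signed_range_smin(void *ctx)
{
	asm volatile (
	"call %[bpf_get_prandom_u32];"
	"r0 <<= 63;"
	"r0 s>>= 63;"	/* r0 in [-1, 0] */
	"r0 &= -13;"	/* expect smin = negative_bit_floor(-13) = -16 */
	"r0 = 0;"
	"exit;"
	:
	: __imm(bpf_get_prandom_u32)
	: __clobber_all);
}

char _license[] SEC("license") = "GPL";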
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-23 6:36 ` Shung-Hsi Yu
@ 2024-07-23 7:07 ` Shung-Hsi Yu
2024-07-24 1:17 ` Alexei Starovoitov
0 siblings, 1 reply; 18+ messages in thread
From: Shung-Hsi Yu @ 2024-07-23 7:07 UTC (permalink / raw)
To: Eduard Zingerman, Alexei Starovoitov
Cc: Xu Kuohai, bpf, Network Development, LSM List, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Yonghong Song, KP Singh,
Roberto Sassu, Matt Bobrowski, Yafang Shao, Ilya Leoshkevich,
Jose E . Marchesi, James Morris, Kees Cook, Brendan Jackman,
Florent Revest
On Tue, Jul 23, 2024 at 02:36:18PM GMT, Shung-Hsi Yu wrote:
[...]
> > +1
> > Pls document the logic in the code.
> > commit log is good, but a good chunk of it probably should be copied
> > as a comment.
> >
> > I've applied the rest of the patches and removed 'test 3' selftest.
> > Pls respin this patch and a test.
> > More than one test would be nice too.
>
> Ack. Will send another series that:
>
> 1. update current patch
> - add a code comment explaining how signed ranges are deduced in
> scalar*_min_max_and()
> - revert 229d6db14942 "selftests/bpf: Workaround strict bpf_lsm return
> value check."
> 2. reintroduce Xu Kuohai's "test 3" into verifier_lsm.c
> 3. add a few tests for BPF_AND's signed range deduction
> - should it be added to verifier_bounds*.c or verifier_and.c?
>
> I think the former, because if we later add signed range deduction for
> BPF_OR as well...
I was curious whether there would be an imminent need for signed range
deduction for BPF_OR, though it looks like there is _not_.
Looking at DAGCombiner::SimplifySelectCC(), it does not do the
bitwise-OR variant of what we've encountered[1,2], that is
fold (select_cc seteq (and x, y), 0, A, -1) -> (or (sra (shl x)) A)
In other words, transforming the following theoretical C code, which
returns -EACCES when a certain bit is unset and -1 when it is set

	if (fmode & FMODE_WRITE)
		return -1;

	return -EACCES;
into the following instructions
r0 <<= 62
r0 s>>= 63 /* set => r0 = -1, unset => r0 = 0 */
r0 |= -13 /* set => r0 = (-1 | -13) = -1, unset => r0 = (0 | -13) = -13 = -EACCES */
exit /* returns either -1 or -EACCES */
So signed range deduction with BPF_OR is probably just a nice-to-have
for now.
1: https://github.com/llvm/llvm-project/blob/2b78303/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp#L27657-L27684
2: neither was the setne version transformed, i.e.
fold (select_cc setne (and x, y), 0, A, 0) -> (and (sra (shl x)) A)
> then test for signed range deducation of both
> BPF_AND and BPF_OR can live in the same file, which would be nice
> as signed range deduction of the two are somewhat symmetric
* Re: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
2024-07-23 7:07 ` Shung-Hsi Yu
@ 2024-07-24 1:17 ` Alexei Starovoitov
0 siblings, 0 replies; 18+ messages in thread
From: Alexei Starovoitov @ 2024-07-24 1:17 UTC (permalink / raw)
To: Shung-Hsi Yu
Cc: Eduard Zingerman, Xu Kuohai, bpf, Network Development, LSM List,
Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
Yafang Shao, Ilya Leoshkevich, Jose E . Marchesi, James Morris,
Kees Cook, Brendan Jackman, Florent Revest
On Tue, Jul 23, 2024 at 12:07 AM Shung-Hsi Yu <shung-hsi.yu@suse.com> wrote:
>
> On Tue, Jul 23, 2024 at 02:36:18PM GMT, Shung-Hsi Yu wrote:
> [...]
> > > +1
> > > Pls document the logic in the code.
> > > commit log is good, but a good chunk of it probably should be copied
> > > as a comment.
> > >
> > > I've applied the rest of the patches and removed 'test 3' selftest.
> > > Pls respin this patch and a test.
> > > More than one test would be nice too.
> >
> > Ack. Will send another series that:
> >
> > 1. update current patch
> > - add a code comment explaining how signed ranges are deduced in
> > scalar*_min_max_and()
> > - revert 229d6db14942 "selftests/bpf: Workaround strict bpf_lsm return
> > value check."
> > 2. reintroduce Xu Kuohai's "test 3" into verifier_lsm.c
> > 3. add a few tests for BPF_AND's signed range deduction
> > - should it be added to verifier_bounds*.c or verifier_and.c?
> >
> > I think the former, because if we later add signed range deduction for
> > BPF_OR as well...
>
> I was curious whether there would be an imminent need for signed range
> deduction for BPF_OR, though it looks like there is _not_.
>
> Looking at DAGCombiner::SimplifySelectCC(), it does not do the
> bitwise-OR variant of what we've encountered[1,2], that is
>
> fold (select_cc seteq (and x, y), 0, A, -1) -> (or (sra (shl x)) A)
>
> In other words, transforming the following theoretical C code, which
> returns -EACCES when a certain bit is unset and -1 when it is set
>
> 	if (fmode & FMODE_WRITE)
> 		return -1;
>
> 	return -EACCES;
>
> into the following instructions
>
> r0 <<= 62
> r0 s>>= 63 /* set => r0 = -1, unset => r0 = 0 */
> r0 |= -13 /* set => r0 = (-1 | -13) = -1, unset => r0 = (0 | -13) = -13 = -EACCES */
> exit /* returns either -1 or -EACCES */
>
> So signed range deduction with BPF_OR is probably just a nice-to-have
> for now.
Yeah. Let's not complicate the verifier until really necessary.
But I wonder whether we should override shouldFoldSelectWithSingleBitTest()
in the backend to suppress this optimization.
I guess not, since removal of a branch is a good thing.
Thread overview: 18+ messages
2024-07-19 11:00 [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 2/9] bpf, lsm: Add check for BPF LSM return value Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 3/9] bpf: Prevent tail call between progs attached to different hooks Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 4/9] bpf: Fix compare error in function retval_range_within Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND Xu Kuohai
2024-07-22 7:13 ` Eduard Zingerman
2024-07-22 12:57 ` Shung-Hsi Yu
2024-07-22 18:47 ` Eduard Zingerman
2024-07-23 0:48 ` Alexei Starovoitov
2024-07-23 6:36 ` Shung-Hsi Yu
2024-07-23 7:07 ` Shung-Hsi Yu
2024-07-24 1:17 ` Alexei Starovoitov
2024-07-19 11:00 ` [PATCH bpf-next v2 6/9] selftests/bpf: Avoid load failure for token_lsm.c Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 7/9] selftests/bpf: Add return value checks for failed tests Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 8/9] selftests/bpf: Add test for lsm tail call Xu Kuohai
2024-07-19 11:00 ` [PATCH bpf-next v2 9/9] selftests/bpf: Add verifier tests for bpf lsm Xu Kuohai
2024-07-23 0:50 ` [PATCH bpf-next v2 0/9] Add BPF LSM return value range check, BPF part patchwork-bot+netdevbpf