* [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types
@ 2021-11-30 1:29 Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types Hao Luo
` (8 more replies)
0 siblings, 9 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
This patch set consists of two changes:
- a cleanup of arg_type, ret_type and reg_type, which makes those
types composable. (patch 1/9 - patch 6/9)
- a bug fix that prevents bpf programs from writing kernel memory.
(patch 7/9 - patch 9/9)
The purpose of the cleanup is to find a scalable way of expressing type
nullness and read-onliness. This patchset introduces two flags that
can be applied to all three types: PTR_MAYBE_NULL and MEM_RDONLY.
Previous types such as ARG_XXX_OR_NULL can now be written as
ARG_XXX | PTR_MAYBE_NULL
Similarly, PTR_TO_RDONLY_BUF is now "PTR_TO_BUF | MEM_RDONLY".
Flags can be combined; for example, an ARG can be both MEM_RDONLY and
PTR_MAYBE_NULL:
ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY
Based on these new composable types, patch 7/9 applies MEM_RDONLY to
PTR_TO_MEM, in order to tag the memory returned from per_cpu_ptr as
read-only. This fixes a previous bug where one could leverage
per_cpu_ptr to modify kernel memory from within BPF programs.
Patch 8/9 generalizes the use of MEM_RDONLY further by tagging a set of
ARG_PTR_TO_MEM helper arguments with MEM_RDONLY. Some helper functions
may overwrite their arguments, such as bpf_d_path and bpf_snprintf. In
this patch, we narrow ARG_PTR_TO_MEM to be compatible with only a subset
of memory types. This prevents these helpers from writing to read-only
memory. For the helpers that do not write to their arguments, we add the
MEM_RDONLY tag to allow them to take read-only memory as arguments.
Previous versions of this patchset:
[1] https://lore.kernel.org/bpf/20211109003052.3499225-1-haoluo@google.com/T/
[2] https://lore.kernel.org/bpf/20211109021624.1140446-8-haoluo@google.com/T/
Hao Luo (9):
bpf: Introduce composable reg, ret and arg types.
bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL
bpf: Replace RET_XXX_OR_NULL with RET_XXX | PTR_MAYBE_NULL
bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL
bpf: Introduce MEM_RDONLY flag
bpf: Convert PTR_TO_MEM_OR_NULL to composable types.
bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM.
bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
bpf/selftests: Test PTR_TO_RDONLY_MEM
include/linux/bpf.h | 105 +++-
kernel/bpf/btf.c | 13 +-
kernel/bpf/cgroup.c | 2 +-
kernel/bpf/helpers.c | 12 +-
kernel/bpf/map_iter.c | 4 +-
kernel/bpf/ringbuf.c | 2 +-
kernel/bpf/syscall.c | 2 +-
kernel/bpf/verifier.c | 455 +++++++++---------
kernel/trace/bpf_trace.c | 26 +-
net/core/bpf_sk_storage.c | 2 +-
net/core/filter.c | 64 +--
net/core/sock_map.c | 2 +-
.../selftests/bpf/prog_tests/ksyms_btf.c | 14 +
.../bpf/progs/test_ksyms_btf_write_check.c | 29 ++
14 files changed, 414 insertions(+), 318 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
--
2.34.0.384.gca35af8252-goog
^ permalink raw reply [flat|nested] 20+ messages in thread
* [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types.
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-12-01 20:29 ` Alexei Starovoitov
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 2/9] bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL Hao Luo
` (7 subsequent siblings)
8 siblings, 1 reply; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
There are some common properties shared between bpf reg, ret and arg
values. For instance, a value may be a NULL pointer, or a pointer to
read-only memory. Previously, enumeration was used to express these
properties. For example, to test whether a reg value can be NULL,
reg_type_may_be_null() simply enumerates all types that can possibly be
NULL. The problem with this approach is that it does not scale and
causes a lot of duplication. Moreover, these properties can be
combined: for example, a type could be MAYBE_NULL, RDONLY, or both.
This patch series rewrites the layout of reg_type, arg_type and
ret_type so that common properties can be extracted and represented as
composable flags. For example, one can write
ARG_PTR_TO_MEM | PTR_MAYBE_NULL
which is equivalent to the previous
ARG_PTR_TO_MEM_OR_NULL
Types such as ARG_PTR_TO_MEM are called "base types" in this patch.
Base types can be extended with flags. Flags occupy the higher bits
while base types sit in the lower bits.
This patch in particular sets up a set of macros for this purpose. The
following patches rewrite arg_types, ret_types and reg_types
respectively.
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 50 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cc7a0c36e7df..b592b3f7d223 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -297,6 +297,37 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
extern const struct bpf_map_ops bpf_map_offload_ops;
+/* bpf_type_flag contains a set of flags that are applicable to the values of
+ * arg_type, ret_type and reg_type. For example, a pointer value may be null,
+ * or a memory is read-only. We classify types into two categories: base types
+ * and extended types. Extended types are base types combined with a type flag.
+ *
+ * Currently there are no more than 32 base types in arg_type, ret_type and
+ * reg_types.
+ */
+#define BPF_BASE_TYPE_BITS 8
+
+enum bpf_type_flag {
+ /* PTR may be NULL. */
+ PTR_MAYBE_NULL = BIT(0 + BPF_BASE_TYPE_BITS),
+
+ __BPF_TYPE_LAST_FLAG = PTR_MAYBE_NULL,
+};
+
+#define BPF_BASE_TYPE_MASK GENMASK(BPF_BASE_TYPE_BITS - 1, 0)
+
+/* Max number of base types. */
+#define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS)
+
+/* Max number of all types. */
+#define BPF_TYPE_LIMIT (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1))
+
+/* extract base type. */
+#define BPF_BASE_TYPE(x) ((x) & BPF_BASE_TYPE_MASK)
+
+/* extract flags from an extended type. */
+#define BPF_TYPE_FLAG(x) ((enum bpf_type_flag)((x) & ~BPF_BASE_TYPE_MASK))
+
/* function argument constraints */
enum bpf_arg_type {
ARG_DONTCARE = 0, /* unused argument in helper function */
@@ -343,7 +374,13 @@ enum bpf_arg_type {
ARG_PTR_TO_CONST_STR, /* pointer to a null terminated read-only string */
ARG_PTR_TO_TIMER, /* pointer to bpf_timer */
__BPF_ARG_TYPE_MAX,
+
+ /* This must be the last entry. Its purpose is to ensure the enum is
+ * wide enough to hold the higher bits reserved for bpf_type_flag.
+ */
+ __BPF_ARG_TYPE_LIMIT = BPF_TYPE_LIMIT,
};
+static_assert(__BPF_ARG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
/* type of values returned from helper functions */
enum bpf_return_type {
@@ -359,7 +396,14 @@ enum bpf_return_type {
RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, /* returns a pointer to a valid memory or a btf_id or NULL */
RET_PTR_TO_MEM_OR_BTF_ID, /* returns a pointer to a valid memory or a btf_id */
RET_PTR_TO_BTF_ID, /* returns a pointer to a btf_id */
+ __BPF_RET_TYPE_MAX,
+
+ /* This must be the last entry. Its purpose is to ensure the enum is
+ * wide enough to hold the higher bits reserved for bpf_type_flag.
+ */
+ __BPF_RET_TYPE_LIMIT = BPF_TYPE_LIMIT,
};
+static_assert(__BPF_RET_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
/* eBPF function prototype used by verifier to allow BPF_CALLs from eBPF programs
* to in-kernel helper functions and for adjusting imm32 field in BPF_CALL
@@ -461,7 +505,13 @@ enum bpf_reg_type {
PTR_TO_FUNC, /* reg points to a bpf program function */
PTR_TO_MAP_KEY, /* reg points to a map element key */
__BPF_REG_TYPE_MAX,
+
+ /* This must be the last entry. Its purpose is to ensure the enum is
+ * wide enough to hold the higher bits reserved for bpf_type_flag.
+ */
+ __BPF_REG_TYPE_LIMIT = BPF_TYPE_LIMIT,
};
+static_assert(__BPF_REG_TYPE_MAX <= BPF_BASE_TYPE_LIMIT);
/* The information passed from prog-specific *_is_valid_access
* back to the verifier.
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 2/9] bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX " Hao Luo
` (6 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
We have introduced a new type system to make bpf arg types composable,
by introducing flags to tag a particular type.
One of the flags is PTR_MAYBE_NULL, which indicates a pointer
may be NULL. This patch switches the qualified arg_types to
use this flag. The arg_types changed in this patch include:
1. ARG_PTR_TO_MAP_VALUE_OR_NULL
2. ARG_PTR_TO_MEM_OR_NULL
3. ARG_PTR_TO_CTX_OR_NULL
4. ARG_PTR_TO_SOCKET_OR_NULL
5. ARG_PTR_TO_ALLOC_MEM_OR_NULL
6. ARG_PTR_TO_STACK_OR_NULL
This patch does not eliminate the use of these arg_types; instead, it
makes them aliases of 'ARG_XXX | PTR_MAYBE_NULL'.
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 15 +++++++++------
kernel/bpf/verifier.c | 35 ++++++++++++-----------------------
2 files changed, 21 insertions(+), 29 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b592b3f7d223..da31c79724d4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -339,13 +339,11 @@ enum bpf_arg_type {
ARG_PTR_TO_MAP_KEY, /* pointer to stack used as map key */
ARG_PTR_TO_MAP_VALUE, /* pointer to stack used as map value */
ARG_PTR_TO_UNINIT_MAP_VALUE, /* pointer to valid memory used to store a map value */
- ARG_PTR_TO_MAP_VALUE_OR_NULL, /* pointer to stack used as map value or NULL */
/* the following constraints used to prototype bpf_memcmp() and other
* functions that access data on eBPF program stack
*/
ARG_PTR_TO_MEM, /* pointer to valid memory (stack, packet, map value) */
- ARG_PTR_TO_MEM_OR_NULL, /* pointer to valid memory or NULL */
ARG_PTR_TO_UNINIT_MEM, /* pointer to memory does not need to be initialized,
* helper function must fill all bytes or clear
* them in error case.
@@ -355,26 +353,31 @@ enum bpf_arg_type {
ARG_CONST_SIZE_OR_ZERO, /* number of bytes accessed from memory or 0 */
ARG_PTR_TO_CTX, /* pointer to context */
- ARG_PTR_TO_CTX_OR_NULL, /* pointer to context or NULL */
ARG_ANYTHING, /* any (initialized) argument is ok */
ARG_PTR_TO_SPIN_LOCK, /* pointer to bpf_spin_lock */
ARG_PTR_TO_SOCK_COMMON, /* pointer to sock_common */
ARG_PTR_TO_INT, /* pointer to int */
ARG_PTR_TO_LONG, /* pointer to long */
ARG_PTR_TO_SOCKET, /* pointer to bpf_sock (fullsock) */
- ARG_PTR_TO_SOCKET_OR_NULL, /* pointer to bpf_sock (fullsock) or NULL */
ARG_PTR_TO_BTF_ID, /* pointer to in-kernel struct */
ARG_PTR_TO_ALLOC_MEM, /* pointer to dynamically allocated memory */
- ARG_PTR_TO_ALLOC_MEM_OR_NULL, /* pointer to dynamically allocated memory or NULL */
ARG_CONST_ALLOC_SIZE_OR_ZERO, /* number of allocated bytes requested */
ARG_PTR_TO_BTF_ID_SOCK_COMMON, /* pointer to in-kernel sock_common or bpf-mirrored bpf_sock */
ARG_PTR_TO_PERCPU_BTF_ID, /* pointer to in-kernel percpu type */
ARG_PTR_TO_FUNC, /* pointer to a bpf program function */
- ARG_PTR_TO_STACK_OR_NULL, /* pointer to stack or NULL */
+ ARG_PTR_TO_STACK, /* pointer to stack */
ARG_PTR_TO_CONST_STR, /* pointer to a null terminated read-only string */
ARG_PTR_TO_TIMER, /* pointer to bpf_timer */
__BPF_ARG_TYPE_MAX,
+ /* Extended arg_types. */
+ ARG_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MAP_VALUE,
+ ARG_PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MEM,
+ ARG_PTR_TO_CTX_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_CTX,
+ ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET,
+ ARG_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_ALLOC_MEM,
+ ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK,
+
/* This must be the last entry. Its purpose is to ensure the enum is
* wide enough to hold the higher bits reserved for bpf_type_flag.
*/
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0763cca139a7..78227d2cffb1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -474,12 +474,7 @@ static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
static bool arg_type_may_be_null(enum bpf_arg_type type)
{
- return type == ARG_PTR_TO_MAP_VALUE_OR_NULL ||
- type == ARG_PTR_TO_MEM_OR_NULL ||
- type == ARG_PTR_TO_CTX_OR_NULL ||
- type == ARG_PTR_TO_SOCKET_OR_NULL ||
- type == ARG_PTR_TO_ALLOC_MEM_OR_NULL ||
- type == ARG_PTR_TO_STACK_OR_NULL;
+ return BPF_TYPE_FLAG(type) & PTR_MAYBE_NULL;
}
/* Determine whether the function releases some resources allocated by another
@@ -4932,9 +4927,8 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
static bool arg_type_is_mem_ptr(enum bpf_arg_type type)
{
- return type == ARG_PTR_TO_MEM ||
- type == ARG_PTR_TO_MEM_OR_NULL ||
- type == ARG_PTR_TO_UNINIT_MEM;
+ return BPF_BASE_TYPE(type) == ARG_PTR_TO_MEM ||
+ BPF_BASE_TYPE(type) == ARG_PTR_TO_UNINIT_MEM;
}
static bool arg_type_is_mem_size(enum bpf_arg_type type)
@@ -5071,31 +5065,26 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_MAP_KEY] = &map_key_value_types,
[ARG_PTR_TO_MAP_VALUE] = &map_key_value_types,
[ARG_PTR_TO_UNINIT_MAP_VALUE] = &map_key_value_types,
- [ARG_PTR_TO_MAP_VALUE_OR_NULL] = &map_key_value_types,
[ARG_CONST_SIZE] = &scalar_types,
[ARG_CONST_SIZE_OR_ZERO] = &scalar_types,
[ARG_CONST_ALLOC_SIZE_OR_ZERO] = &scalar_types,
[ARG_CONST_MAP_PTR] = &const_map_ptr_types,
[ARG_PTR_TO_CTX] = &context_types,
- [ARG_PTR_TO_CTX_OR_NULL] = &context_types,
[ARG_PTR_TO_SOCK_COMMON] = &sock_types,
#ifdef CONFIG_NET
[ARG_PTR_TO_BTF_ID_SOCK_COMMON] = &btf_id_sock_common_types,
#endif
[ARG_PTR_TO_SOCKET] = &fullsock_types,
- [ARG_PTR_TO_SOCKET_OR_NULL] = &fullsock_types,
[ARG_PTR_TO_BTF_ID] = &btf_ptr_types,
[ARG_PTR_TO_SPIN_LOCK] = &spin_lock_types,
[ARG_PTR_TO_MEM] = &mem_types,
- [ARG_PTR_TO_MEM_OR_NULL] = &mem_types,
[ARG_PTR_TO_UNINIT_MEM] = &mem_types,
[ARG_PTR_TO_ALLOC_MEM] = &alloc_mem_types,
- [ARG_PTR_TO_ALLOC_MEM_OR_NULL] = &alloc_mem_types,
[ARG_PTR_TO_INT] = &int_ptr_types,
[ARG_PTR_TO_LONG] = &int_ptr_types,
[ARG_PTR_TO_PERCPU_BTF_ID] = &percpu_btf_ptr_types,
[ARG_PTR_TO_FUNC] = &func_ptr_types,
- [ARG_PTR_TO_STACK_OR_NULL] = &stack_ptr_types,
+ [ARG_PTR_TO_STACK] = &stack_ptr_types,
[ARG_PTR_TO_CONST_STR] = &const_str_ptr_types,
[ARG_PTR_TO_TIMER] = &timer_types,
};
@@ -5109,7 +5098,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
const struct bpf_reg_types *compatible;
int i, j;
- compatible = compatible_reg_types[arg_type];
+ compatible = compatible_reg_types[BPF_BASE_TYPE(arg_type)];
if (!compatible) {
verbose(env, "verifier internal error: unsupported arg type %d\n", arg_type);
return -EFAULT;
@@ -5190,9 +5179,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EACCES;
}
- if (arg_type == ARG_PTR_TO_MAP_VALUE ||
- arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
- arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {
+ if (BPF_BASE_TYPE(arg_type) == ARG_PTR_TO_MAP_VALUE ||
+ BPF_BASE_TYPE(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
err = resolve_map_arg_type(env, meta, &arg_type);
if (err)
return err;
@@ -5267,10 +5255,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
err = check_helper_mem_access(env, regno,
meta->map_ptr->key_size, false,
NULL);
- } else if (arg_type == ARG_PTR_TO_MAP_VALUE ||
- (arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL &&
- !register_is_null(reg)) ||
- arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE) {
+ } else if (BPF_BASE_TYPE(arg_type) == ARG_PTR_TO_MAP_VALUE ||
+ BPF_BASE_TYPE(arg_type) == ARG_PTR_TO_UNINIT_MAP_VALUE) {
+ if (arg_type_may_be_null(arg_type) && register_is_null(reg))
+ return 0;
+
/* bpf_map_xxx(..., map_ptr, ..., value) call:
* check [value, value + map->value_size) validity
*/
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX | PTR_MAYBE_NULL
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 2/9] bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-12-01 20:30 ` Alexei Starovoitov
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 4/9] bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX " Hao Luo
` (5 subsequent siblings)
8 siblings, 1 reply; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
We have introduced a new type system to make bpf return types
composable, by introducing flags to tag a particular type.
One of the flags is PTR_MAYBE_NULL, which indicates a pointer
may be NULL. This patch switches the qualified ret_types to
use this flag. The ret_types changed in this patch include:
1. RET_PTR_TO_MAP_VALUE_OR_NULL
2. RET_PTR_TO_SOCKET_OR_NULL
3. RET_PTR_TO_TCP_SOCK_OR_NULL
4. RET_PTR_TO_SOCK_COMMON_OR_NULL
5. RET_PTR_TO_ALLOC_MEM_OR_NULL
6. RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL
7. RET_PTR_TO_BTF_ID_OR_NULL
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 19 ++++++++++------
kernel/bpf/helpers.c | 2 +-
kernel/bpf/verifier.c | 53 +++++++++++++++++++++++--------------------
3 files changed, 42 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index da31c79724d4..d6189eb14a1f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -390,17 +390,22 @@ enum bpf_return_type {
RET_INTEGER, /* function returns integer */
RET_VOID, /* function doesn't return anything */
RET_PTR_TO_MAP_VALUE, /* returns a pointer to map elem value */
- RET_PTR_TO_MAP_VALUE_OR_NULL, /* returns a pointer to map elem value or NULL */
- RET_PTR_TO_SOCKET_OR_NULL, /* returns a pointer to a socket or NULL */
- RET_PTR_TO_TCP_SOCK_OR_NULL, /* returns a pointer to a tcp_sock or NULL */
- RET_PTR_TO_SOCK_COMMON_OR_NULL, /* returns a pointer to a sock_common or NULL */
- RET_PTR_TO_ALLOC_MEM_OR_NULL, /* returns a pointer to dynamically allocated memory or NULL */
- RET_PTR_TO_BTF_ID_OR_NULL, /* returns a pointer to a btf_id or NULL */
- RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, /* returns a pointer to a valid memory or a btf_id or NULL */
+ RET_PTR_TO_SOCKET, /* returns a pointer to a socket */
+ RET_PTR_TO_TCP_SOCK, /* returns a pointer to a tcp_sock */
+ RET_PTR_TO_SOCK_COMMON, /* returns a pointer to a sock_common */
+ RET_PTR_TO_ALLOC_MEM, /* returns a pointer to dynamically allocated memory */
RET_PTR_TO_MEM_OR_BTF_ID, /* returns a pointer to a valid memory or a btf_id */
RET_PTR_TO_BTF_ID, /* returns a pointer to a btf_id */
__BPF_RET_TYPE_MAX,
+ /* Extended ret_types. */
+ RET_PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MAP_VALUE,
+ RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET,
+ RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK,
+ RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON,
+ RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_ALLOC_MEM,
+ RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID,
+
/* This must be the last entry. Its purpose is to ensure the enum is
* wide enough to hold the higher bits reserved for bpf_type_flag.
*/
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 1ffd469c217f..293d9314ec7f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -667,7 +667,7 @@ BPF_CALL_2(bpf_per_cpu_ptr, const void *, ptr, u32, cpu)
const struct bpf_func_proto bpf_per_cpu_ptr_proto = {
.func = bpf_per_cpu_ptr,
.gpl_only = false,
- .ret_type = RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL,
+ .ret_type = RET_PTR_TO_MEM_OR_BTF_ID | PTR_MAYBE_NULL,
.arg1_type = ARG_PTR_TO_PERCPU_BTF_ID,
.arg2_type = ARG_ANYTHING,
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 78227d2cffb1..fb44db2456f2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -477,6 +477,11 @@ static bool arg_type_may_be_null(enum bpf_arg_type type)
return BPF_TYPE_FLAG(type) & PTR_MAYBE_NULL;
}
+static bool ret_type_may_be_null(enum bpf_return_type type)
+{
+ return BPF_TYPE_FLAG(type) & PTR_MAYBE_NULL;
+}
+
/* Determine whether the function releases some resources allocated by another
* function call. The first reference type argument will be assumed to be
* released by release_reference().
@@ -6370,6 +6375,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
int *insn_idx_p)
{
const struct bpf_func_proto *fn = NULL;
+ enum bpf_return_type ret_type;
struct bpf_reg_state *regs;
struct bpf_call_arg_meta meta;
int insn_idx = *insn_idx_p;
@@ -6510,13 +6516,13 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
/* update return register (already marked as written above) */
- if (fn->ret_type == RET_INTEGER) {
+ ret_type = fn->ret_type;
+ if (ret_type == RET_INTEGER) {
/* sets type to SCALAR_VALUE */
mark_reg_unknown(env, regs, BPF_REG_0);
- } else if (fn->ret_type == RET_VOID) {
+ } else if (ret_type == RET_VOID) {
regs[BPF_REG_0].type = NOT_INIT;
- } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL ||
- fn->ret_type == RET_PTR_TO_MAP_VALUE) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_MAP_VALUE) {
/* There is no offset yet applied, variable or fixed */
mark_reg_known_zero(env, regs, BPF_REG_0);
/* remember map_ptr, so that check_map_access()
@@ -6530,28 +6536,27 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
}
regs[BPF_REG_0].map_ptr = meta.map_ptr;
regs[BPF_REG_0].map_uid = meta.map_uid;
- if (fn->ret_type == RET_PTR_TO_MAP_VALUE) {
+ if (ret_type_may_be_null(fn->ret_type)) {
+ regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
+ } else {
regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
if (map_value_has_spin_lock(meta.map_ptr))
regs[BPF_REG_0].id = ++env->id_gen;
- } else {
- regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
}
- } else if (fn->ret_type == RET_PTR_TO_SOCKET_OR_NULL) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_SOCKET) {
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_SOCKET_OR_NULL;
- } else if (fn->ret_type == RET_PTR_TO_SOCK_COMMON_OR_NULL) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_SOCK_COMMON) {
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON_OR_NULL;
- } else if (fn->ret_type == RET_PTR_TO_TCP_SOCK_OR_NULL) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_TCP_SOCK) {
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL;
- } else if (fn->ret_type == RET_PTR_TO_ALLOC_MEM_OR_NULL) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_ALLOC_MEM) {
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL;
regs[BPF_REG_0].mem_size = meta.mem_size;
- } else if (fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL ||
- fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_MEM_OR_BTF_ID) {
const struct btf_type *t;
mark_reg_known_zero(env, regs, BPF_REG_0);
@@ -6570,28 +6575,28 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
return -EINVAL;
}
regs[BPF_REG_0].type =
- fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID ?
- PTR_TO_MEM : PTR_TO_MEM_OR_NULL;
+ (ret_type & PTR_MAYBE_NULL) ?
+ PTR_TO_MEM_OR_NULL : PTR_TO_MEM;
regs[BPF_REG_0].mem_size = tsize;
} else {
regs[BPF_REG_0].type =
- fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID ?
- PTR_TO_BTF_ID : PTR_TO_BTF_ID_OR_NULL;
+ (ret_type & PTR_MAYBE_NULL) ?
+ PTR_TO_BTF_ID_OR_NULL : PTR_TO_BTF_ID;
regs[BPF_REG_0].btf = meta.ret_btf;
regs[BPF_REG_0].btf_id = meta.ret_btf_id;
}
- } else if (fn->ret_type == RET_PTR_TO_BTF_ID_OR_NULL ||
- fn->ret_type == RET_PTR_TO_BTF_ID) {
+ } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_BTF_ID) {
int ret_btf_id;
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = fn->ret_type == RET_PTR_TO_BTF_ID ?
- PTR_TO_BTF_ID :
- PTR_TO_BTF_ID_OR_NULL;
+ regs[BPF_REG_0].type = (ret_type & PTR_MAYBE_NULL) ?
+ PTR_TO_BTF_ID_OR_NULL :
+ PTR_TO_BTF_ID;
ret_btf_id = *fn->ret_btf_id;
if (ret_btf_id == 0) {
verbose(env, "invalid return type %d of func %s#%d\n",
- fn->ret_type, func_id_name(func_id), func_id);
+ BPF_BASE_TYPE(ret_type), func_id_name(func_id),
+ func_id);
return -EINVAL;
}
/* current BPF helper definitions are only coming from
@@ -6601,7 +6606,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
regs[BPF_REG_0].btf_id = ret_btf_id;
} else {
verbose(env, "unknown return type %d of func %s#%d\n",
- fn->ret_type, func_id_name(func_id), func_id);
+ BPF_BASE_TYPE(ret_type), func_id_name(func_id), func_id);
return -EINVAL;
}
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 4/9] bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX | PTR_MAYBE_NULL
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (2 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX " Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 5/9] bpf: Introduce MEM_RDONLY flag Hao Luo
` (4 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
We have introduced a new type system to make bpf reg types composable,
by introducing flags to tag a particular type.
One of the flags is PTR_MAYBE_NULL, which indicates a pointer
may be NULL. This patch switches the qualified reg_types to
use this flag. The reg_types changed in this patch include:
1. PTR_TO_MAP_VALUE_OR_NULL
2. PTR_TO_SOCKET_OR_NULL
3. PTR_TO_SOCK_COMMON_OR_NULL
4. PTR_TO_TCP_SOCK_OR_NULL
5. PTR_TO_BTF_ID_OR_NULL
6. PTR_TO_MEM_OR_NULL
7. PTR_TO_RDONLY_BUF_OR_NULL
8. PTR_TO_RDWR_BUF_OR_NULL
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 16 +--
kernel/bpf/btf.c | 8 +-
kernel/bpf/map_iter.c | 4 +-
kernel/bpf/verifier.c | 265 +++++++++++++++-----------------------
net/core/bpf_sk_storage.c | 2 +-
net/core/sock_map.c | 2 +-
6 files changed, 121 insertions(+), 176 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d6189eb14a1f..d484f6637e60 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -473,18 +473,14 @@ enum bpf_reg_type {
PTR_TO_CTX, /* reg points to bpf_context */
CONST_PTR_TO_MAP, /* reg points to struct bpf_map */
PTR_TO_MAP_VALUE, /* reg points to map element value */
- PTR_TO_MAP_VALUE_OR_NULL,/* points to map elem value or NULL */
PTR_TO_STACK, /* reg == frame_pointer + offset */
PTR_TO_PACKET_META, /* skb->data - meta_len */
PTR_TO_PACKET, /* reg points to skb->data */
PTR_TO_PACKET_END, /* skb->data + headlen */
PTR_TO_FLOW_KEYS, /* reg points to bpf_flow_keys */
PTR_TO_SOCKET, /* reg points to struct bpf_sock */
- PTR_TO_SOCKET_OR_NULL, /* reg points to struct bpf_sock or NULL */
PTR_TO_SOCK_COMMON, /* reg points to sock_common */
- PTR_TO_SOCK_COMMON_OR_NULL, /* reg points to sock_common or NULL */
PTR_TO_TCP_SOCK, /* reg points to struct tcp_sock */
- PTR_TO_TCP_SOCK_OR_NULL, /* reg points to struct tcp_sock or NULL */
PTR_TO_TP_BUFFER, /* reg points to a writable raw tp's buffer */
PTR_TO_XDP_SOCK, /* reg points to struct xdp_sock */
/* PTR_TO_BTF_ID points to a kernel struct that does not need
@@ -502,18 +498,22 @@ enum bpf_reg_type {
* been checked for null. Used primarily to inform the verifier
* an explicit null check is required for this struct.
*/
- PTR_TO_BTF_ID_OR_NULL,
PTR_TO_MEM, /* reg points to valid memory region */
- PTR_TO_MEM_OR_NULL, /* reg points to valid memory region or NULL */
PTR_TO_RDONLY_BUF, /* reg points to a readonly buffer */
- PTR_TO_RDONLY_BUF_OR_NULL, /* reg points to a readonly buffer or NULL */
PTR_TO_RDWR_BUF, /* reg points to a read/write buffer */
- PTR_TO_RDWR_BUF_OR_NULL, /* reg points to a read/write buffer or NULL */
PTR_TO_PERCPU_BTF_ID, /* reg points to a percpu kernel variable */
PTR_TO_FUNC, /* reg points to a bpf program function */
PTR_TO_MAP_KEY, /* reg points to a map element key */
__BPF_REG_TYPE_MAX,
+ /* Extended reg_types. */
+ PTR_TO_MAP_VALUE_OR_NULL = PTR_MAYBE_NULL | PTR_TO_MAP_VALUE,
+ PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCKET,
+ PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCK_COMMON,
+ PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | PTR_TO_TCP_SOCK,
+ PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | PTR_TO_BTF_ID,
+ PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | PTR_TO_MEM,
+
/* This must be the last entry. Its purpose is to ensure the enum is
* wide enough to hold the higher bits reserved for bpf_type_flag.
*/
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 6b9d23be1e99..62a86db7d8ec 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -4941,10 +4941,14 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
/* check for PTR_TO_RDONLY_BUF_OR_NULL or PTR_TO_RDWR_BUF_OR_NULL */
for (i = 0; i < prog->aux->ctx_arg_info_size; i++) {
const struct bpf_ctx_arg_aux *ctx_arg_info = &prog->aux->ctx_arg_info[i];
+ enum bpf_reg_type type;
+ enum bpf_type_flag flag;
+ type = BPF_BASE_TYPE(ctx_arg_info->reg_type);
+ flag = BPF_TYPE_FLAG(ctx_arg_info->reg_type);
if (ctx_arg_info->offset == off &&
- (ctx_arg_info->reg_type == PTR_TO_RDONLY_BUF_OR_NULL ||
- ctx_arg_info->reg_type == PTR_TO_RDWR_BUF_OR_NULL)) {
+ (type == PTR_TO_RDWR_BUF || type == PTR_TO_RDONLY_BUF) &&
+ (flag & PTR_MAYBE_NULL)) {
info->reg_type = ctx_arg_info->reg_type;
return true;
}
diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
index 6a9542af4212..631f0e44b7a9 100644
--- a/kernel/bpf/map_iter.c
+++ b/kernel/bpf/map_iter.c
@@ -174,9 +174,9 @@ static const struct bpf_iter_reg bpf_map_elem_reg_info = {
.ctx_arg_info_size = 2,
.ctx_arg_info = {
{ offsetof(struct bpf_iter__bpf_map_elem, key),
- PTR_TO_RDONLY_BUF_OR_NULL },
+ PTR_TO_RDONLY_BUF | PTR_MAYBE_NULL },
{ offsetof(struct bpf_iter__bpf_map_elem, value),
- PTR_TO_RDWR_BUF_OR_NULL },
+ PTR_TO_RDWR_BUF | PTR_MAYBE_NULL },
},
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fb44db2456f2..01a564a23562 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -441,14 +441,7 @@ static bool reg_type_not_null(enum bpf_reg_type type)
static bool reg_type_may_be_null(enum bpf_reg_type type)
{
- return type == PTR_TO_MAP_VALUE_OR_NULL ||
- type == PTR_TO_SOCKET_OR_NULL ||
- type == PTR_TO_SOCK_COMMON_OR_NULL ||
- type == PTR_TO_TCP_SOCK_OR_NULL ||
- type == PTR_TO_BTF_ID_OR_NULL ||
- type == PTR_TO_MEM_OR_NULL ||
- type == PTR_TO_RDONLY_BUF_OR_NULL ||
- type == PTR_TO_RDWR_BUF_OR_NULL;
+ return BPF_TYPE_FLAG(type) & PTR_MAYBE_NULL;
}
static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
@@ -459,12 +452,9 @@ static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
static bool reg_type_may_be_refcounted_or_null(enum bpf_reg_type type)
{
- return type == PTR_TO_SOCKET ||
- type == PTR_TO_SOCKET_OR_NULL ||
- type == PTR_TO_TCP_SOCK ||
- type == PTR_TO_TCP_SOCK_OR_NULL ||
- type == PTR_TO_MEM ||
- type == PTR_TO_MEM_OR_NULL;
+ return BPF_BASE_TYPE(type) == PTR_TO_SOCKET ||
+ BPF_BASE_TYPE(type) == PTR_TO_TCP_SOCK ||
+ BPF_BASE_TYPE(type) == PTR_TO_MEM;
}
static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
@@ -540,38 +530,35 @@ static bool is_cmpxchg_insn(const struct bpf_insn *insn)
}
/* string representation of 'enum bpf_reg_type' */
-static const char * const reg_type_str[] = {
- [NOT_INIT] = "?",
- [SCALAR_VALUE] = "inv",
- [PTR_TO_CTX] = "ctx",
- [CONST_PTR_TO_MAP] = "map_ptr",
- [PTR_TO_MAP_VALUE] = "map_value",
- [PTR_TO_MAP_VALUE_OR_NULL] = "map_value_or_null",
- [PTR_TO_STACK] = "fp",
- [PTR_TO_PACKET] = "pkt",
- [PTR_TO_PACKET_META] = "pkt_meta",
- [PTR_TO_PACKET_END] = "pkt_end",
- [PTR_TO_FLOW_KEYS] = "flow_keys",
- [PTR_TO_SOCKET] = "sock",
- [PTR_TO_SOCKET_OR_NULL] = "sock_or_null",
- [PTR_TO_SOCK_COMMON] = "sock_common",
- [PTR_TO_SOCK_COMMON_OR_NULL] = "sock_common_or_null",
- [PTR_TO_TCP_SOCK] = "tcp_sock",
- [PTR_TO_TCP_SOCK_OR_NULL] = "tcp_sock_or_null",
- [PTR_TO_TP_BUFFER] = "tp_buffer",
- [PTR_TO_XDP_SOCK] = "xdp_sock",
- [PTR_TO_BTF_ID] = "ptr_",
- [PTR_TO_BTF_ID_OR_NULL] = "ptr_or_null_",
- [PTR_TO_PERCPU_BTF_ID] = "percpu_ptr_",
- [PTR_TO_MEM] = "mem",
- [PTR_TO_MEM_OR_NULL] = "mem_or_null",
- [PTR_TO_RDONLY_BUF] = "rdonly_buf",
- [PTR_TO_RDONLY_BUF_OR_NULL] = "rdonly_buf_or_null",
- [PTR_TO_RDWR_BUF] = "rdwr_buf",
- [PTR_TO_RDWR_BUF_OR_NULL] = "rdwr_buf_or_null",
- [PTR_TO_FUNC] = "func",
- [PTR_TO_MAP_KEY] = "map_key",
-};
+static const char * const reg_type_str(enum bpf_reg_type type)
+{
+ static const char * const str[] = {
+ [NOT_INIT] = "?",
+ [SCALAR_VALUE] = "inv",
+ [PTR_TO_CTX] = "ctx",
+ [CONST_PTR_TO_MAP] = "map_ptr",
+ [PTR_TO_MAP_VALUE] = "map_value",
+ [PTR_TO_STACK] = "fp",
+ [PTR_TO_PACKET] = "pkt",
+ [PTR_TO_PACKET_META] = "pkt_meta",
+ [PTR_TO_PACKET_END] = "pkt_end",
+ [PTR_TO_FLOW_KEYS] = "flow_keys",
+ [PTR_TO_SOCKET] = "sock",
+ [PTR_TO_SOCK_COMMON] = "sock_common",
+ [PTR_TO_TCP_SOCK] = "tcp_sock",
+ [PTR_TO_TP_BUFFER] = "tp_buffer",
+ [PTR_TO_XDP_SOCK] = "xdp_sock",
+ [PTR_TO_BTF_ID] = "ptr_",
+ [PTR_TO_PERCPU_BTF_ID] = "percpu_ptr_",
+ [PTR_TO_MEM] = "mem",
+ [PTR_TO_RDONLY_BUF] = "rdonly_buf",
+ [PTR_TO_RDWR_BUF] = "rdwr_buf",
+ [PTR_TO_FUNC] = "func",
+ [PTR_TO_MAP_KEY] = "map_key",
+ };
+
+ return str[BPF_BASE_TYPE(type)];
+}
static char slot_type_char[] = {
[STACK_INVALID] = '?',
@@ -636,7 +623,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
continue;
verbose(env, " R%d", i);
print_liveness(env, reg->live);
- verbose(env, "=%s", reg_type_str[t]);
+ verbose(env, "=%s", reg_type_str(t));
if (t == SCALAR_VALUE && reg->precise)
verbose(env, "P");
if ((t == SCALAR_VALUE || t == PTR_TO_STACK) &&
@@ -644,9 +631,8 @@ static void print_verifier_state(struct bpf_verifier_env *env,
/* reg->off should be 0 for SCALAR_VALUE */
verbose(env, "%lld", reg->var_off.value + reg->off);
} else {
- if (t == PTR_TO_BTF_ID ||
- t == PTR_TO_BTF_ID_OR_NULL ||
- t == PTR_TO_PERCPU_BTF_ID)
+ if (BPF_BASE_TYPE(t) == PTR_TO_BTF_ID ||
+ BPF_BASE_TYPE(t) == PTR_TO_PERCPU_BTF_ID)
verbose(env, "%s", kernel_type_name(reg->btf, reg->btf_id));
verbose(env, "(id=%d", reg->id);
if (reg_type_may_be_refcounted_or_null(t))
@@ -655,10 +641,9 @@ static void print_verifier_state(struct bpf_verifier_env *env,
verbose(env, ",off=%d", reg->off);
if (type_is_pkt_pointer(t))
verbose(env, ",r=%d", reg->range);
- else if (t == CONST_PTR_TO_MAP ||
- t == PTR_TO_MAP_KEY ||
- t == PTR_TO_MAP_VALUE ||
- t == PTR_TO_MAP_VALUE_OR_NULL)
+ else if (BPF_BASE_TYPE(t) == CONST_PTR_TO_MAP ||
+ BPF_BASE_TYPE(t) == PTR_TO_MAP_KEY ||
+ BPF_BASE_TYPE(t) == PTR_TO_MAP_VALUE)
verbose(env, ",ks=%d,vs=%d",
reg->map_ptr->key_size,
reg->map_ptr->value_size);
@@ -728,7 +713,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
if (is_spilled_reg(&state->stack[i])) {
reg = &state->stack[i].spilled_ptr;
t = reg->type;
- verbose(env, "=%s", reg_type_str[t]);
+ verbose(env, "=%s", reg_type_str(t));
if (t == SCALAR_VALUE && reg->precise)
verbose(env, "P");
if (t == SCALAR_VALUE && tnum_is_const(reg->var_off))
@@ -1141,8 +1126,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
{
- switch (reg->type) {
- case PTR_TO_MAP_VALUE_OR_NULL: {
+ if (BPF_BASE_TYPE(reg->type) == PTR_TO_MAP_VALUE) {
const struct bpf_map *map = reg->map_ptr;
if (map->inner_map_meta) {
@@ -1160,32 +1144,10 @@ static void mark_ptr_not_null_reg(struct bpf_reg_state *reg)
} else {
reg->type = PTR_TO_MAP_VALUE;
}
- break;
- }
- case PTR_TO_SOCKET_OR_NULL:
- reg->type = PTR_TO_SOCKET;
- break;
- case PTR_TO_SOCK_COMMON_OR_NULL:
- reg->type = PTR_TO_SOCK_COMMON;
- break;
- case PTR_TO_TCP_SOCK_OR_NULL:
- reg->type = PTR_TO_TCP_SOCK;
- break;
- case PTR_TO_BTF_ID_OR_NULL:
- reg->type = PTR_TO_BTF_ID;
- break;
- case PTR_TO_MEM_OR_NULL:
- reg->type = PTR_TO_MEM;
- break;
- case PTR_TO_RDONLY_BUF_OR_NULL:
- reg->type = PTR_TO_RDONLY_BUF;
- break;
- case PTR_TO_RDWR_BUF_OR_NULL:
- reg->type = PTR_TO_RDWR_BUF;
- break;
- default:
- WARN_ONCE(1, "unknown nullable register type");
+ return;
}
+
+ reg->type &= ~PTR_MAYBE_NULL;
}
static bool reg_is_pkt_pointer(const struct bpf_reg_state *reg)
@@ -2040,7 +2002,7 @@ static int mark_reg_read(struct bpf_verifier_env *env,
break;
if (parent->live & REG_LIVE_DONE) {
verbose(env, "verifier BUG type %s var_off %lld off %d\n",
- reg_type_str[parent->type],
+ reg_type_str(parent->type),
parent->var_off.value, parent->off);
return -EFAULT;
}
@@ -2703,9 +2665,8 @@ static int mark_chain_precision_stack(struct bpf_verifier_env *env, int spi)
static bool is_spillable_regtype(enum bpf_reg_type type)
{
- switch (type) {
+ switch (BPF_BASE_TYPE(type)) {
case PTR_TO_MAP_VALUE:
- case PTR_TO_MAP_VALUE_OR_NULL:
case PTR_TO_STACK:
case PTR_TO_CTX:
case PTR_TO_PACKET:
@@ -2714,21 +2675,14 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
case PTR_TO_FLOW_KEYS:
case CONST_PTR_TO_MAP:
case PTR_TO_SOCKET:
- case PTR_TO_SOCKET_OR_NULL:
case PTR_TO_SOCK_COMMON:
- case PTR_TO_SOCK_COMMON_OR_NULL:
case PTR_TO_TCP_SOCK:
- case PTR_TO_TCP_SOCK_OR_NULL:
case PTR_TO_XDP_SOCK:
case PTR_TO_BTF_ID:
- case PTR_TO_BTF_ID_OR_NULL:
case PTR_TO_RDONLY_BUF:
- case PTR_TO_RDONLY_BUF_OR_NULL:
case PTR_TO_RDWR_BUF:
- case PTR_TO_RDWR_BUF_OR_NULL:
case PTR_TO_PERCPU_BTF_ID:
case PTR_TO_MEM:
- case PTR_TO_MEM_OR_NULL:
case PTR_TO_FUNC:
case PTR_TO_MAP_KEY:
return true;
@@ -3569,7 +3523,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
*/
*reg_type = info.reg_type;
- if (*reg_type == PTR_TO_BTF_ID || *reg_type == PTR_TO_BTF_ID_OR_NULL) {
+ if (BPF_BASE_TYPE(*reg_type) == PTR_TO_BTF_ID) {
*btf = info.btf;
*btf_id = info.btf_id;
} else {
@@ -3637,7 +3591,7 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
}
verbose(env, "R%d invalid %s access off=%d size=%d\n",
- regno, reg_type_str[reg->type], off, size);
+ regno, reg_type_str(reg->type), off, size);
return -EACCES;
}
@@ -4395,8 +4349,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
* a sub-register.
*/
regs[value_regno].subreg_def = DEF_NOT_SUBREG;
- if (reg_type == PTR_TO_BTF_ID ||
- reg_type == PTR_TO_BTF_ID_OR_NULL) {
+ if (BPF_BASE_TYPE(reg_type) == PTR_TO_BTF_ID) {
regs[value_regno].btf = btf;
regs[value_regno].btf_id = btf_id;
}
@@ -4449,7 +4402,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else if (type_is_sk_pointer(reg->type)) {
if (t == BPF_WRITE) {
verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str[reg->type]);
+ regno, reg_type_str(reg->type));
return -EACCES;
}
err = check_sock_access(env, insn_idx, regno, off, size, t);
@@ -4468,7 +4421,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else if (reg->type == PTR_TO_RDONLY_BUF) {
if (t == BPF_WRITE) {
verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str[reg->type]);
+ regno, reg_type_str(reg->type));
return -EACCES;
}
err = check_buffer_access(env, reg, regno, off, size, false,
@@ -4484,7 +4437,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
mark_reg_unknown(env, regs, value_regno);
} else {
verbose(env, "R%d invalid mem access '%s'\n", regno,
- reg_type_str[reg->type]);
+ reg_type_str(reg->type));
return -EACCES;
}
@@ -4551,7 +4504,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
is_sk_reg(env, insn->dst_reg)) {
verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
insn->dst_reg,
- reg_type_str[reg_state(env, insn->dst_reg)->type]);
+ reg_type_str(reg_state(env, insn->dst_reg)->type));
return -EACCES;
}
@@ -4772,8 +4725,8 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return 0;
verbose(env, "R%d type=%s expected=%s\n", regno,
- reg_type_str[reg->type],
- reg_type_str[PTR_TO_STACK]);
+ reg_type_str(reg->type),
+ reg_type_str(PTR_TO_STACK));
return -EACCES;
}
}
@@ -5118,10 +5071,10 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
goto found;
}
- verbose(env, "R%d type=%s expected=", regno, reg_type_str[type]);
+ verbose(env, "R%d type=%s expected=", regno, reg_type_str(type));
for (j = 0; j + 1 < i; j++)
- verbose(env, "%s, ", reg_type_str[compatible->types[j]]);
- verbose(env, "%s\n", reg_type_str[compatible->types[j]]);
+ verbose(env, "%s, ", reg_type_str(compatible->types[j]));
+ verbose(env, "%s\n", reg_type_str(compatible->types[j]));
return -EACCES;
found:
@@ -6376,6 +6329,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
{
const struct bpf_func_proto *fn = NULL;
enum bpf_return_type ret_type;
+ enum bpf_type_flag ret_flag;
struct bpf_reg_state *regs;
struct bpf_call_arg_meta meta;
int insn_idx = *insn_idx_p;
@@ -6517,6 +6471,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
/* update return register (already marked as written above) */
ret_type = fn->ret_type;
+ ret_flag = BPF_TYPE_FLAG(fn->ret_type);
if (ret_type == RET_INTEGER) {
/* sets type to SCALAR_VALUE */
mark_reg_unknown(env, regs, BPF_REG_0);
@@ -6536,25 +6491,23 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
}
regs[BPF_REG_0].map_ptr = meta.map_ptr;
regs[BPF_REG_0].map_uid = meta.map_uid;
- if (ret_type_may_be_null(fn->ret_type)) {
- regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
- } else {
- regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
- if (map_value_has_spin_lock(meta.map_ptr))
- regs[BPF_REG_0].id = ++env->id_gen;
+ regs[BPF_REG_0].type = PTR_TO_MAP_VALUE | ret_flag;
+ if (!ret_type_may_be_null(fn->ret_type) &&
+ map_value_has_spin_lock(meta.map_ptr)) {
+ regs[BPF_REG_0].id = ++env->id_gen;
}
} else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_SOCKET) {
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = PTR_TO_SOCKET_OR_NULL;
+ regs[BPF_REG_0].type = PTR_TO_SOCKET | ret_flag;
} else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_SOCK_COMMON) {
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON_OR_NULL;
+ regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON | ret_flag;
} else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_TCP_SOCK) {
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL;
+ regs[BPF_REG_0].type = PTR_TO_TCP_SOCK | ret_flag;
} else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_ALLOC_MEM) {
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL;
+ regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
regs[BPF_REG_0].mem_size = meta.mem_size;
} else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_MEM_OR_BTF_ID) {
const struct btf_type *t;
@@ -6574,14 +6527,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
tname, PTR_ERR(ret));
return -EINVAL;
}
- regs[BPF_REG_0].type =
- (ret_type & PTR_MAYBE_NULL) ?
- PTR_TO_MEM_OR_NULL : PTR_TO_MEM;
+ regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
regs[BPF_REG_0].mem_size = tsize;
} else {
- regs[BPF_REG_0].type =
- (ret_type & PTR_MAYBE_NULL) ?
- PTR_TO_BTF_ID_OR_NULL : PTR_TO_BTF_ID;
+ regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
regs[BPF_REG_0].btf = meta.ret_btf;
regs[BPF_REG_0].btf_id = meta.ret_btf_id;
}
@@ -6589,9 +6538,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
int ret_btf_id;
mark_reg_known_zero(env, regs, BPF_REG_0);
- regs[BPF_REG_0].type = (ret_type & PTR_MAYBE_NULL) ?
- PTR_TO_BTF_ID_OR_NULL :
- PTR_TO_BTF_ID;
+ regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
ret_btf_id = *fn->ret_btf_id;
if (ret_btf_id == 0) {
verbose(env, "invalid return type %d of func %s#%d\n",
@@ -6819,25 +6766,25 @@ static bool check_reg_sane_offset(struct bpf_verifier_env *env,
if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
verbose(env, "math between %s pointer and %lld is not allowed\n",
- reg_type_str[type], val);
+ reg_type_str(type), val);
return false;
}
if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
verbose(env, "%s pointer offset %d is not allowed\n",
- reg_type_str[type], reg->off);
+ reg_type_str(type), reg->off);
return false;
}
if (smin == S64_MIN) {
verbose(env, "math between %s pointer and register with unbounded min value is not allowed\n",
- reg_type_str[type]);
+ reg_type_str(type));
return false;
}
if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
verbose(env, "value %lld makes %s pointer be out of bounds\n",
- smin, reg_type_str[type]);
+ smin, reg_type_str(type));
return false;
}
@@ -7214,11 +7161,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
return -EACCES;
}
- switch (ptr_reg->type) {
- case PTR_TO_MAP_VALUE_OR_NULL:
+ if (ptr_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
- dst, reg_type_str[ptr_reg->type]);
+ dst, reg_type_str(ptr_reg->type));
return -EACCES;
+ }
+
+ switch (BPF_BASE_TYPE(ptr_reg->type)) {
case CONST_PTR_TO_MAP:
/* smin_val represents the known value */
if (known && smin_val == 0 && opcode == BPF_ADD)
@@ -7226,14 +7175,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
fallthrough;
case PTR_TO_PACKET_END:
case PTR_TO_SOCKET:
- case PTR_TO_SOCKET_OR_NULL:
case PTR_TO_SOCK_COMMON:
- case PTR_TO_SOCK_COMMON_OR_NULL:
case PTR_TO_TCP_SOCK:
- case PTR_TO_TCP_SOCK_OR_NULL:
case PTR_TO_XDP_SOCK:
verbose(env, "R%d pointer arithmetic on %s prohibited\n",
- dst, reg_type_str[ptr_reg->type]);
+ dst, reg_type_str(ptr_reg->type));
return -EACCES;
default:
break;
@@ -9584,7 +9530,7 @@ static int check_return_code(struct bpf_verifier_env *env)
/* enforce return zero from async callbacks like timer */
if (reg->type != SCALAR_VALUE) {
verbose(env, "In async callback the register R0 is not a known value (%s)\n",
- reg_type_str[reg->type]);
+ reg_type_str(reg->type));
return -EINVAL;
}
@@ -9598,7 +9544,7 @@ static int check_return_code(struct bpf_verifier_env *env)
if (is_subprog) {
if (reg->type != SCALAR_VALUE) {
verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n",
- reg_type_str[reg->type]);
+ reg_type_str(reg->type));
return -EINVAL;
}
return 0;
@@ -9662,7 +9608,7 @@ static int check_return_code(struct bpf_verifier_env *env)
if (reg->type != SCALAR_VALUE) {
verbose(env, "At program exit the register R0 is not a known value (%s)\n",
- reg_type_str[reg->type]);
+ reg_type_str(reg->type));
return -EINVAL;
}
@@ -10443,7 +10389,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
return true;
if (rcur->type == NOT_INIT)
return false;
- switch (rold->type) {
+ switch (BPF_BASE_TYPE(rold->type)) {
case SCALAR_VALUE:
if (env->explore_alu_limits)
return false;
@@ -10465,6 +10411,22 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
}
case PTR_TO_MAP_KEY:
case PTR_TO_MAP_VALUE:
+ /* a PTR_TO_MAP_VALUE could be safe to use as a
+ * PTR_TO_MAP_VALUE_OR_NULL into the same map.
+ * However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
+ * checked, doing so could have affected others with the same
+ * id, and we can't check for that because we lost the id when
+ * we converted to a PTR_TO_MAP_VALUE.
+ */
+ if (reg_type_may_be_null(rold->type)) {
+ if (!reg_type_may_be_null(rcur->type))
+ return false;
+ if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
+ return false;
+ /* Check our ids match any regs they're supposed to */
+ return check_ids(rold->id, rcur->id, idmap);
+ }
+
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* 'id' is not compared, since it's only used for maps with
@@ -10476,20 +10438,6 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
- case PTR_TO_MAP_VALUE_OR_NULL:
- /* a PTR_TO_MAP_VALUE could be safe to use as a
- * PTR_TO_MAP_VALUE_OR_NULL into the same map.
- * However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
- * checked, doing so could have affected others with the same
- * id, and we can't check for that because we lost the id when
- * we converted to a PTR_TO_MAP_VALUE.
- */
- if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
- return false;
- if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
- return false;
- /* Check our ids match any regs they're supposed to */
- return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
if (rcur->type != rold->type)
@@ -10518,11 +10466,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
case PTR_TO_PACKET_END:
case PTR_TO_FLOW_KEYS:
case PTR_TO_SOCKET:
- case PTR_TO_SOCKET_OR_NULL:
case PTR_TO_SOCK_COMMON:
- case PTR_TO_SOCK_COMMON_OR_NULL:
case PTR_TO_TCP_SOCK:
- case PTR_TO_TCP_SOCK_OR_NULL:
case PTR_TO_XDP_SOCK:
/* Only valid matches are exact, which memcmp() above
* would have accepted
@@ -11048,17 +10993,13 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
/* Return true if it's OK to have the same insn return a different type. */
static bool reg_type_mismatch_ok(enum bpf_reg_type type)
{
- switch (type) {
+ switch (BPF_BASE_TYPE(type)) {
case PTR_TO_CTX:
case PTR_TO_SOCKET:
- case PTR_TO_SOCKET_OR_NULL:
case PTR_TO_SOCK_COMMON:
- case PTR_TO_SOCK_COMMON_OR_NULL:
case PTR_TO_TCP_SOCK:
- case PTR_TO_TCP_SOCK_OR_NULL:
case PTR_TO_XDP_SOCK:
case PTR_TO_BTF_ID:
- case PTR_TO_BTF_ID_OR_NULL:
return false;
default:
return true;
@@ -11282,7 +11223,7 @@ static int do_check(struct bpf_verifier_env *env)
if (is_ctx_reg(env, insn->dst_reg)) {
verbose(env, "BPF_ST stores into R%d %s is not allowed\n",
insn->dst_reg,
- reg_type_str[reg_state(env, insn->dst_reg)->type]);
+ reg_type_str(reg_state(env, insn->dst_reg)->type));
return -EACCES;
}
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 68d2cbf8331a..4cb5ef8eddbc 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -929,7 +929,7 @@ static struct bpf_iter_reg bpf_sk_storage_map_reg_info = {
{ offsetof(struct bpf_iter__bpf_sk_storage_map, sk),
PTR_TO_BTF_ID_OR_NULL },
{ offsetof(struct bpf_iter__bpf_sk_storage_map, value),
- PTR_TO_RDWR_BUF_OR_NULL },
+ PTR_TO_RDWR_BUF | PTR_MAYBE_NULL },
},
.seq_info = &iter_seq_info,
};
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index f39ef79ced67..8b2632be3771 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1559,7 +1559,7 @@ static struct bpf_iter_reg sock_map_iter_reg = {
.ctx_arg_info_size = 2,
.ctx_arg_info = {
{ offsetof(struct bpf_iter__sockmap, key),
- PTR_TO_RDONLY_BUF_OR_NULL },
+ PTR_TO_RDONLY_BUF | PTR_MAYBE_NULL },
{ offsetof(struct bpf_iter__sockmap, sk),
PTR_TO_BTF_ID_OR_NULL },
},
--
2.34.0.384.gca35af8252-goog
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [RFC PATCH bpf-next v2 5/9] bpf: Introduce MEM_RDONLY flag
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (3 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 4/9] bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX " Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 6/9] bpf: Convert PTR_TO_MEM_OR_NULL to composable types Hao Luo
` (3 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
This patch introduces another flag, MEM_RDONLY, to tag a reg value that
points to read-only memory. It makes the following changes:
1. PTR_TO_RDWR_BUF -> PTR_TO_BUF
2. PTR_TO_RDONLY_BUF -> PTR_TO_BUF | MEM_RDONLY
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 8 +++--
kernel/bpf/btf.c | 3 +-
kernel/bpf/map_iter.c | 4 +--
kernel/bpf/verifier.c | 76 +++++++++++++++++++++++----------------
net/core/bpf_sk_storage.c | 2 +-
net/core/sock_map.c | 2 +-
6 files changed, 55 insertions(+), 40 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d484f6637e60..61b72dbaeae8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -311,7 +311,10 @@ enum bpf_type_flag {
/* PTR may be NULL. */
PTR_MAYBE_NULL = BIT(0 + BPF_BASE_TYPE_BITS),
- __BPF_TYPE_LAST_FLAG = PTR_MAYBE_NULL,
+ /* MEM is read-only. */
+ MEM_RDONLY = BIT(1 + BPF_BASE_TYPE_BITS),
+
+ __BPF_TYPE_LAST_FLAG = MEM_RDONLY,
};
#define BPF_BASE_TYPE_MASK GENMASK(BPF_BASE_TYPE_BITS, 0)
@@ -499,8 +502,7 @@ enum bpf_reg_type {
* an explicit null check is required for this struct.
*/
PTR_TO_MEM, /* reg points to valid memory region */
- PTR_TO_RDONLY_BUF, /* reg points to a readonly buffer */
- PTR_TO_RDWR_BUF, /* reg points to a read/write buffer */
+ PTR_TO_BUF, /* reg points to a read/write buffer */
PTR_TO_PERCPU_BTF_ID, /* reg points to a percpu kernel variable */
PTR_TO_FUNC, /* reg points to a bpf program function */
PTR_TO_MAP_KEY, /* reg points to a map element key */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 62a86db7d8ec..19ddd6fe5663 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -4946,8 +4946,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
type = BPF_BASE_TYPE(ctx_arg_info->reg_type);
flag = BPF_TYPE_FLAG(ctx_arg_info->reg_type);
- if (ctx_arg_info->offset == off &&
- (type == PTR_TO_RDWR_BUF || type == PTR_TO_RDONLY_BUF) &&
+ if (ctx_arg_info->offset == off && type == PTR_TO_BUF &&
(flag & PTR_MAYBE_NULL)) {
info->reg_type = ctx_arg_info->reg_type;
return true;
diff --git a/kernel/bpf/map_iter.c b/kernel/bpf/map_iter.c
index 631f0e44b7a9..b0fa190b0979 100644
--- a/kernel/bpf/map_iter.c
+++ b/kernel/bpf/map_iter.c
@@ -174,9 +174,9 @@ static const struct bpf_iter_reg bpf_map_elem_reg_info = {
.ctx_arg_info_size = 2,
.ctx_arg_info = {
{ offsetof(struct bpf_iter__bpf_map_elem, key),
- PTR_TO_RDONLY_BUF | PTR_MAYBE_NULL },
+ PTR_TO_BUF | PTR_MAYBE_NULL | MEM_RDONLY },
{ offsetof(struct bpf_iter__bpf_map_elem, value),
- PTR_TO_RDWR_BUF | PTR_MAYBE_NULL },
+ PTR_TO_BUF | PTR_MAYBE_NULL },
},
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 01a564a23562..27f3440f4b18 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -457,6 +457,11 @@ static bool reg_type_may_be_refcounted_or_null(enum bpf_reg_type type)
BPF_BASE_TYPE(type) == PTR_TO_MEM;
}
+static bool reg_type_is_rdonly_mem(enum bpf_reg_type type)
+{
+ return BPF_TYPE_FLAG(type) & MEM_RDONLY;
+}
+
static bool arg_type_may_be_refcounted(enum bpf_arg_type type)
{
return type == ARG_PTR_TO_SOCK_COMMON;
@@ -551,8 +556,7 @@ static const char * const reg_type_str(enum bpf_reg_type type)
[PTR_TO_BTF_ID] = "ptr_",
[PTR_TO_PERCPU_BTF_ID] = "percpu_ptr_",
[PTR_TO_MEM] = "mem",
- [PTR_TO_RDONLY_BUF] = "rdonly_buf",
- [PTR_TO_RDWR_BUF] = "rdwr_buf",
+ [PTR_TO_BUF] = "rdwr_buf",
[PTR_TO_FUNC] = "func",
[PTR_TO_MAP_KEY] = "map_key",
};
@@ -2679,8 +2683,7 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
case PTR_TO_TCP_SOCK:
case PTR_TO_XDP_SOCK:
case PTR_TO_BTF_ID:
- case PTR_TO_RDONLY_BUF:
- case PTR_TO_RDWR_BUF:
+ case PTR_TO_BUF:
case PTR_TO_PERCPU_BTF_ID:
case PTR_TO_MEM:
case PTR_TO_FUNC:
@@ -4418,23 +4421,30 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else if (reg->type == CONST_PTR_TO_MAP) {
err = check_ptr_to_map_access(env, regs, regno, off, size, t,
value_regno);
- } else if (reg->type == PTR_TO_RDONLY_BUF) {
- if (t == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(reg->type));
- return -EACCES;
+ } else if (BPF_BASE_TYPE(reg->type) == PTR_TO_BUF) {
+ bool rdonly_mem = reg_type_is_rdonly_mem(reg->type);
+ const char *buf_info;
+ u32 *max_access;
+
+ if (rdonly_mem) {
+ if (t == BPF_WRITE) {
+ verbose(env, "R%d cannot write into rdonly %s\n",
+ regno, reg_type_str(reg->type));
+ return -EACCES;
+ }
+ buf_info = "rdonly";
+ max_access = &env->prog->aux->max_rdonly_access;
+ } else {
+ buf_info = "rdwr";
+ max_access = &env->prog->aux->max_rdwr_access;
}
+
err = check_buffer_access(env, reg, regno, off, size, false,
- "rdonly",
- &env->prog->aux->max_rdonly_access);
+ buf_info, max_access);
+
if (!err && value_regno >= 0)
- mark_reg_unknown(env, regs, value_regno);
- } else if (reg->type == PTR_TO_RDWR_BUF) {
- err = check_buffer_access(env, reg, regno, off, size, false,
- "rdwr",
- &env->prog->aux->max_rdwr_access);
- if (!err && t == BPF_READ && value_regno >= 0)
- mark_reg_unknown(env, regs, value_regno);
+ if (rdonly_mem || t == BPF_READ)
+ mark_reg_unknown(env, regs, value_regno);
} else {
verbose(env, "R%d invalid mem access '%s'\n", regno,
reg_type_str(reg->type));
@@ -4681,8 +4691,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
struct bpf_call_arg_meta *meta)
{
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+ const char *buf_info;
+ u32 *max_access;
- switch (reg->type) {
+ switch (BPF_BASE_TYPE(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
return check_packet_access(env, regno, reg->off, access_size,
@@ -4701,18 +4713,20 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return check_mem_region_access(env, regno, reg->off,
access_size, reg->mem_size,
zero_size_allowed);
- case PTR_TO_RDONLY_BUF:
- if (meta && meta->raw_mode)
- return -EACCES;
- return check_buffer_access(env, reg, regno, reg->off,
- access_size, zero_size_allowed,
- "rdonly",
- &env->prog->aux->max_rdonly_access);
- case PTR_TO_RDWR_BUF:
+ case PTR_TO_BUF:
+ if (reg_type_is_rdonly_mem(reg->type)) {
+ if (meta && meta->raw_mode)
+ return -EACCES;
+
+ buf_info = "rdonly";
+ max_access = &env->prog->aux->max_rdonly_access;
+ } else {
+ buf_info = "rdwr";
+ max_access = &env->prog->aux->max_rdwr_access;
+ }
return check_buffer_access(env, reg, regno, reg->off,
access_size, zero_size_allowed,
- "rdwr",
- &env->prog->aux->max_rdwr_access);
+ buf_info, max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
env,
@@ -4991,8 +5005,8 @@ static const struct bpf_reg_types mem_types = {
PTR_TO_MAP_KEY,
PTR_TO_MAP_VALUE,
PTR_TO_MEM,
- PTR_TO_RDONLY_BUF,
- PTR_TO_RDWR_BUF,
+ PTR_TO_BUF,
+ PTR_TO_BUF | MEM_RDONLY,
},
};
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index 4cb5ef8eddbc..ea61dfe19c86 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -929,7 +929,7 @@ static struct bpf_iter_reg bpf_sk_storage_map_reg_info = {
{ offsetof(struct bpf_iter__bpf_sk_storage_map, sk),
PTR_TO_BTF_ID_OR_NULL },
{ offsetof(struct bpf_iter__bpf_sk_storage_map, value),
- PTR_TO_RDWR_BUF | PTR_MAYBE_NULL },
+ PTR_TO_BUF | PTR_MAYBE_NULL },
},
.seq_info = &iter_seq_info,
};
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 8b2632be3771..005bf58b1148 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1559,7 +1559,7 @@ static struct bpf_iter_reg sock_map_iter_reg = {
.ctx_arg_info_size = 2,
.ctx_arg_info = {
{ offsetof(struct bpf_iter__sockmap, key),
- PTR_TO_RDONLY_BUF | PTR_MAYBE_NULL },
+ PTR_TO_BUF | PTR_MAYBE_NULL | MEM_RDONLY },
{ offsetof(struct bpf_iter__sockmap, sk),
PTR_TO_BTF_ID_OR_NULL },
},
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 6/9] bpf: Convert PTR_TO_MEM_OR_NULL to composable types.
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (4 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 5/9] bpf: Introduce MEM_RDONLY flag Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 7/9] bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM Hao Luo
` (2 subsequent siblings)
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
Remove PTR_TO_MEM_OR_NULL and replace it with PTR_TO_MEM combined with
the PTR_MAYBE_NULL flag.
Signed-off-by: Hao Luo <haoluo@google.com>
---
include/linux/bpf.h | 1 -
kernel/bpf/btf.c | 2 +-
kernel/bpf/verifier.c | 2 +-
3 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 61b72dbaeae8..19d27d5a1d94 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -514,7 +514,6 @@ enum bpf_reg_type {
PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | PTR_TO_SOCK_COMMON,
PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | PTR_TO_TCP_SOCK,
PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | PTR_TO_BTF_ID,
- PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | PTR_TO_MEM,
/* This must be the last entry. Its purpose is to ensure the enum is
* wide enough to hold the higher bits reserved for bpf_type_flag.
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 19ddd6fe5663..776b90de0971 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -5861,7 +5861,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
return -EINVAL;
}
- reg->type = PTR_TO_MEM_OR_NULL;
+ reg->type = PTR_TO_MEM | PTR_MAYBE_NULL;
reg->id = ++env->id_gen;
continue;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 27f3440f4b18..72c7135bee3e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13313,7 +13313,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
mark_reg_known_zero(env, regs, i);
else if (regs[i].type == SCALAR_VALUE)
mark_reg_unknown(env, regs, i);
- else if (regs[i].type == PTR_TO_MEM_OR_NULL) {
+ else if (BPF_BASE_TYPE(regs[i].type) == PTR_TO_MEM) {
const u32 mem_size = regs[i].mem_size;
mark_reg_known_zero(env, regs, i);
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 7/9] bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM.
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (5 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 6/9] bpf: Convert PTR_TO_MEM_OR_NULL to composable types Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 9/9] bpf/selftests: Test PTR_TO_RDONLY_MEM Hao Luo
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
Tag the return type of {per, this}_cpu_ptr with MEM_RDONLY. The value
returned by this pair of helpers is a kernel object, which cannot be
updated by bpf programs. In addition to tagging these two helpers, the
verifier is now able to differentiate read-only memory (MEM_RDONLY)
from read-write memory.
Fixes: 63d9b80dcf2c ("bpf: Introducte bpf_this_cpu_ptr()")
Fixes: eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
Fixes: 4976b718c355 ("bpf: Introduce pseudo_btf_id")
Signed-off-by: Hao Luo <haoluo@google.com>
---
kernel/bpf/helpers.c | 4 ++--
kernel/bpf/verifier.c | 24 ++++++++++++++++++++----
2 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 293d9314ec7f..a5e349c9d3e3 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -667,7 +667,7 @@ BPF_CALL_2(bpf_per_cpu_ptr, const void *, ptr, u32, cpu)
const struct bpf_func_proto bpf_per_cpu_ptr_proto = {
.func = bpf_per_cpu_ptr,
.gpl_only = false,
- .ret_type = RET_PTR_TO_MEM_OR_BTF_ID | PTR_MAYBE_NULL,
+ .ret_type = RET_PTR_TO_MEM_OR_BTF_ID | PTR_MAYBE_NULL | MEM_RDONLY,
.arg1_type = ARG_PTR_TO_PERCPU_BTF_ID,
.arg2_type = ARG_ANYTHING,
};
@@ -680,7 +680,7 @@ BPF_CALL_1(bpf_this_cpu_ptr, const void *, percpu_ptr)
const struct bpf_func_proto bpf_this_cpu_ptr_proto = {
.func = bpf_this_cpu_ptr,
.gpl_only = false,
- .ret_type = RET_PTR_TO_MEM_OR_BTF_ID,
+ .ret_type = RET_PTR_TO_MEM_OR_BTF_ID | MEM_RDONLY,
.arg1_type = ARG_PTR_TO_PERCPU_BTF_ID,
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 72c7135bee3e..e2663544362a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4306,16 +4306,32 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
mark_reg_unknown(env, regs, value_regno);
}
}
- } else if (reg->type == PTR_TO_MEM) {
+ } else if (BPF_BASE_TYPE(reg->type) == PTR_TO_MEM) {
+ bool rdonly_mem = reg_type_is_rdonly_mem(reg->type);
+
+ if (reg_type_may_be_null(reg->type)) {
+ verbose(env, "R%d invalid mem access '%s'\n", regno,
+ reg_type_str(reg->type));
+ return -EACCES;
+ }
+
+ if (t == BPF_WRITE && rdonly_mem) {
+ verbose(env, "R%d cannot write into rdonly %s\n",
+ regno, reg_type_str(reg->type));
+ return -EACCES;
+ }
+
if (t == BPF_WRITE && value_regno >= 0 &&
is_pointer_value(env, value_regno)) {
verbose(env, "R%d leaks addr into mem\n", value_regno);
return -EACCES;
}
+
err = check_mem_region_access(env, regno, off, size,
reg->mem_size, false);
if (!err && t == BPF_READ && value_regno >= 0)
- mark_reg_unknown(env, regs, value_regno);
+ if (t == BPF_READ || rdonly_mem)
+ mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_CTX) {
enum bpf_reg_type reg_type = SCALAR_VALUE;
struct btf *btf = NULL;
@@ -9345,7 +9361,7 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
mark_reg_known_zero(env, regs, insn->dst_reg);
dst_reg->type = aux->btf_var.reg_type;
- switch (dst_reg->type) {
+ switch (BPF_BASE_TYPE(dst_reg->type)) {
case PTR_TO_MEM:
dst_reg->mem_size = aux->btf_var.mem_size;
break;
@@ -11489,7 +11505,7 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env,
err = -EINVAL;
goto err_put;
}
- aux->btf_var.reg_type = PTR_TO_MEM;
+ aux->btf_var.reg_type = PTR_TO_MEM | MEM_RDONLY;
aux->btf_var.mem_size = tsize;
} else {
aux->btf_var.reg_type = PTR_TO_BTF_ID;
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (6 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 7/9] bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
2021-12-01 20:34 ` Alexei Starovoitov
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 9/9] bpf/selftests: Test PTR_TO_RDONLY_MEM Hao Luo
8 siblings, 1 reply; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
Some helper functions may modify their arguments, for example,
bpf_d_path, bpf_get_stack etc. Previously, their argument types
were marked as ARG_PTR_TO_MEM, which is compatible with read-only
mem types, such as PTR_TO_RDONLY_BUF. It was therefore legitimate
to modify read-only memory by passing it into one of these helper
functions.
This patch tags with the MEM_RDONLY flag the arguments that point
to memory the helper only reads. Arguments that don't carry this
flag are compatible only with mutable memory types; arguments that
carry MEM_RDONLY are compatible with both mutable and immutable
memory.
Signed-off-by: Hao Luo <haoluo@google.com>
---
kernel/bpf/btf.c | 2 +-
kernel/bpf/cgroup.c | 2 +-
kernel/bpf/helpers.c | 8 ++---
kernel/bpf/ringbuf.c | 2 +-
kernel/bpf/syscall.c | 2 +-
kernel/bpf/verifier.c | 28 +++++++++++++++++-
kernel/trace/bpf_trace.c | 26 ++++++++--------
net/core/filter.c | 64 ++++++++++++++++++++--------------------
8 files changed, 80 insertions(+), 54 deletions(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 776b90de0971..f0da2f997937 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6351,7 +6351,7 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
.func = bpf_btf_find_by_name_kind,
.gpl_only = false,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM,
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE,
.arg3_type = ARG_ANYTHING,
.arg4_type = ARG_ANYTHING,
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 2ca643af9a54..b8fe671af13c 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -1789,7 +1789,7 @@ static const struct bpf_func_proto bpf_sysctl_set_new_value_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
};
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a5e349c9d3e3..66b466903a4e 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -530,7 +530,7 @@ const struct bpf_func_proto bpf_strtol_proto = {
.func = bpf_strtol,
.gpl_only = false,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM,
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE,
.arg3_type = ARG_ANYTHING,
.arg4_type = ARG_PTR_TO_LONG,
@@ -558,7 +558,7 @@ const struct bpf_func_proto bpf_strtoul_proto = {
.func = bpf_strtoul,
.gpl_only = false,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM,
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE,
.arg3_type = ARG_ANYTHING,
.arg4_type = ARG_PTR_TO_LONG,
@@ -630,7 +630,7 @@ const struct bpf_func_proto bpf_event_output_data_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -1011,7 +1011,7 @@ const struct bpf_func_proto bpf_snprintf_proto = {
.arg1_type = ARG_PTR_TO_MEM_OR_NULL,
.arg2_type = ARG_CONST_SIZE_OR_ZERO,
.arg3_type = ARG_PTR_TO_CONST_STR,
- .arg4_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg4_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 9e0c10c6892a..638d7fd7b375 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -444,7 +444,7 @@ const struct bpf_func_proto bpf_ringbuf_output_proto = {
.func = bpf_ringbuf_output,
.ret_type = RET_INTEGER,
.arg1_type = ARG_CONST_MAP_PTR,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
.arg4_type = ARG_ANYTHING,
};
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 50f96ea4452a..301afb44e47f 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4757,7 +4757,7 @@ static const struct bpf_func_proto bpf_sys_bpf_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_ANYTHING,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e2663544362a..11ec26c1caa4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4976,9 +4976,15 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
return 0;
}
+
struct bpf_reg_types {
const enum bpf_reg_type types[10];
u32 *btf_id;
+
+ /* Certain types require customized type matching function. */
+ bool (*type_match_fn)(enum bpf_arg_type arg_type,
+ enum bpf_reg_type type,
+ enum bpf_reg_type expected);
};
static const struct bpf_reg_types map_key_value_types = {
@@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
};
#endif
+static bool mem_type_match(enum bpf_arg_type arg_type,
+ enum bpf_reg_type type, enum bpf_reg_type expected)
+{
+ /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
+ * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
+ * comparison.
+ */
+ if ((arg_type & MEM_RDONLY) != 0)
+ type &= ~MEM_RDONLY;
+
+ return type == expected;
+}
+
static const struct bpf_reg_types mem_types = {
.types = {
PTR_TO_STACK,
@@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
PTR_TO_MAP_VALUE,
PTR_TO_MEM,
PTR_TO_BUF,
- PTR_TO_BUF | MEM_RDONLY,
},
+ .type_match_fn = mem_type_match,
};
static const struct bpf_reg_types int_ptr_types = {
@@ -5097,6 +5116,13 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
if (expected == NOT_INIT)
break;
+ if (compatible->type_match_fn) {
+ if (compatible->type_match_fn(arg_type, type, expected))
+ goto found;
+
+ continue;
+ }
+
if (type == expected)
goto found;
}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 25ea521fb8f1..79404049b70f 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -345,7 +345,7 @@ static const struct bpf_func_proto bpf_probe_write_user_proto = {
.gpl_only = true,
.ret_type = RET_INTEGER,
.arg1_type = ARG_ANYTHING,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
};
@@ -394,7 +394,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
.func = bpf_trace_printk,
.gpl_only = true,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM,
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE,
};
@@ -450,9 +450,9 @@ static const struct bpf_func_proto bpf_trace_vprintk_proto = {
.func = bpf_trace_vprintk,
.gpl_only = true,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM,
+ .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE,
- .arg3_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg3_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -492,9 +492,9 @@ static const struct bpf_func_proto bpf_seq_printf_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_BTF_ID,
.arg1_btf_id = &btf_seq_file_ids[0],
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
- .arg4_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg4_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -509,7 +509,7 @@ static const struct bpf_func_proto bpf_seq_write_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_BTF_ID,
.arg1_btf_id = &btf_seq_file_ids[0],
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -533,7 +533,7 @@ static const struct bpf_func_proto bpf_seq_printf_btf_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_BTF_ID,
.arg1_btf_id = &btf_seq_file_ids[0],
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
.arg4_type = ARG_ANYTHING,
};
@@ -694,7 +694,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -1004,7 +1004,7 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_MEM,
.arg2_type = ARG_CONST_SIZE,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE,
.arg5_type = ARG_ANYTHING,
};
@@ -1289,7 +1289,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_tp = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -1515,7 +1515,7 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -1569,7 +1569,7 @@ static const struct bpf_func_proto bpf_get_stack_proto_raw_tp = {
.gpl_only = true,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
.arg4_type = ARG_ANYTHING,
};
diff --git a/net/core/filter.c b/net/core/filter.c
index 26e0276aa00d..eadca10b436d 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1713,7 +1713,7 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE,
.arg5_type = ARG_ANYTHING,
};
@@ -2018,9 +2018,9 @@ static const struct bpf_func_proto bpf_csum_diff_proto = {
.gpl_only = false,
.pkt_access = true,
.ret_type = RET_INTEGER,
- .arg1_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg1_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg2_type = ARG_CONST_SIZE_OR_ZERO,
- .arg3_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg3_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE_OR_ZERO,
.arg5_type = ARG_ANYTHING,
};
@@ -2541,7 +2541,7 @@ static const struct bpf_func_proto bpf_redirect_neigh_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_ANYTHING,
- .arg2_type = ARG_PTR_TO_MEM_OR_NULL,
+ .arg2_type = ARG_PTR_TO_MEM | PTR_MAYBE_NULL | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
.arg4_type = ARG_ANYTHING,
};
@@ -4174,7 +4174,7 @@ static const struct bpf_func_proto bpf_skb_event_output_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -4188,7 +4188,7 @@ const struct bpf_func_proto bpf_skb_output_proto = {
.arg1_btf_id = &bpf_skb_output_btf_ids[0],
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -4371,7 +4371,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
};
@@ -4397,7 +4397,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_opt_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
};
@@ -4567,7 +4567,7 @@ static const struct bpf_func_proto bpf_xdp_event_output_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -4581,7 +4581,7 @@ const struct bpf_func_proto bpf_xdp_output_proto = {
.arg1_btf_id = &bpf_xdp_output_btf_ids[0],
.arg2_type = ARG_CONST_MAP_PTR,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE_OR_ZERO,
};
@@ -5067,7 +5067,7 @@ const struct bpf_func_proto bpf_sk_setsockopt_proto = {
.arg1_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE,
};
@@ -5101,7 +5101,7 @@ static const struct bpf_func_proto bpf_sock_addr_setsockopt_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE,
};
@@ -5135,7 +5135,7 @@ static const struct bpf_func_proto bpf_sock_ops_setsockopt_proto = {
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_ANYTHING,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE,
};
@@ -5310,7 +5310,7 @@ static const struct bpf_func_proto bpf_bind_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
};
@@ -5898,7 +5898,7 @@ static const struct bpf_func_proto bpf_lwt_in_push_encap_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE
};
@@ -5908,7 +5908,7 @@ static const struct bpf_func_proto bpf_lwt_xmit_push_encap_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE
};
@@ -5951,7 +5951,7 @@ static const struct bpf_func_proto bpf_lwt_seg6_store_bytes_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE
};
@@ -6039,7 +6039,7 @@ static const struct bpf_func_proto bpf_lwt_seg6_action_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE
};
@@ -6264,7 +6264,7 @@ static const struct bpf_func_proto bpf_skc_lookup_tcp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCK_COMMON_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6283,7 +6283,7 @@ static const struct bpf_func_proto bpf_sk_lookup_tcp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6302,7 +6302,7 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6339,7 +6339,7 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6362,7 +6362,7 @@ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCK_COMMON_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6385,7 +6385,7 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_tcp_proto = {
.pkt_access = true,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6404,7 +6404,7 @@ static const struct bpf_func_proto bpf_sock_addr_skc_lookup_tcp_proto = {
.gpl_only = false,
.ret_type = RET_PTR_TO_SOCK_COMMON_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6423,7 +6423,7 @@ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_tcp_proto = {
.gpl_only = false,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6442,7 +6442,7 @@ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_udp_proto = {
.gpl_only = false,
.ret_type = RET_PTR_TO_SOCKET_OR_NULL,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
.arg5_type = ARG_ANYTHING,
@@ -6755,9 +6755,9 @@ static const struct bpf_func_proto bpf_tcp_check_syncookie_proto = {
.pkt_access = true,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE,
};
@@ -6824,9 +6824,9 @@ static const struct bpf_func_proto bpf_tcp_gen_syncookie_proto = {
.pkt_access = true,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
- .arg4_type = ARG_PTR_TO_MEM,
+ .arg4_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg5_type = ARG_CONST_SIZE,
};
@@ -7055,7 +7055,7 @@ static const struct bpf_func_proto bpf_sock_ops_store_hdr_opt_proto = {
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
- .arg2_type = ARG_PTR_TO_MEM,
+ .arg2_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg3_type = ARG_CONST_SIZE,
.arg4_type = ARG_ANYTHING,
};
--
2.34.0.384.gca35af8252-goog
* [RFC PATCH bpf-next v2 9/9] bpf/selftests: Test PTR_TO_RDONLY_MEM
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
` (7 preceding siblings ...)
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem Hao Luo
@ 2021-11-30 1:29 ` Hao Luo
8 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-11-30 1:29 UTC (permalink / raw)
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf, Hao Luo
This test verifies that a ksym of a non-struct type cannot be
directly updated.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Hao Luo <haoluo@google.com>
---
.../selftests/bpf/prog_tests/ksyms_btf.c | 14 +++++++++
.../bpf/progs/test_ksyms_btf_write_check.c | 29 +++++++++++++++++++
2 files changed, 43 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
index 79f6bd1e50d6..f6933b06daf8 100644
--- a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
@@ -8,6 +8,7 @@
#include "test_ksyms_btf_null_check.skel.h"
#include "test_ksyms_weak.skel.h"
#include "test_ksyms_weak.lskel.h"
+#include "test_ksyms_btf_write_check.skel.h"
static int duration;
@@ -137,6 +138,16 @@ static void test_weak_syms_lskel(void)
test_ksyms_weak_lskel__destroy(skel);
}
+static void test_write_check(void)
+{
+ struct test_ksyms_btf_write_check *skel;
+
+ skel = test_ksyms_btf_write_check__open_and_load();
+ ASSERT_ERR_PTR(skel, "unexpected load of a prog writing to ksym memory\n");
+
+ test_ksyms_btf_write_check__destroy(skel);
+}
+
void test_ksyms_btf(void)
{
int percpu_datasec;
@@ -167,4 +178,7 @@ void test_ksyms_btf(void)
if (test__start_subtest("weak_ksyms_lskel"))
test_weak_syms_lskel();
+
+ if (test__start_subtest("write_check"))
+ test_write_check();
}
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
new file mode 100644
index 000000000000..2180c41cd890
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Google */
+
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+
+extern const int bpf_prog_active __ksym; /* int type global var. */
+
+SEC("raw_tp/sys_enter")
+int handler(const void *ctx)
+{
+ int *active;
+ __u32 cpu;
+
+ cpu = bpf_get_smp_processor_id();
+ active = (int *)bpf_per_cpu_ptr(&bpf_prog_active, cpu);
+ if (active) {
+ /* Kernel memory obtained from bpf_{per,this}_cpu_ptr
+ * is read-only, should _not_ pass verification.
+ */
+ /* WRITE_ONCE */
+ *(volatile int *)active = -1;
+ }
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.34.0.384.gca35af8252-goog
* Re: [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types.
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types Hao Luo
@ 2021-12-01 20:29 ` Alexei Starovoitov
2021-12-01 22:36 ` Hao Luo
0 siblings, 1 reply; 20+ messages in thread
From: Alexei Starovoitov @ 2021-12-01 20:29 UTC (permalink / raw)
To: Hao Luo
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Mon, Nov 29, 2021 at 05:29:40PM -0800, Hao Luo wrote:
> There are some common properties shared between bpf reg, ret and arg
> values. For instance, a value may be a NULL pointer, or a pointer to
> a read-only memory. Previously, to express these properties, enumeration
> was used. For example, in order to test whether a reg value can be NULL,
> reg_type_may_be_null() simply enumerates all types that are possibly
> NULL. The problem of this approach is that it's not scalable and causes
> a lot of duplication. These properties can be combined, for example, a
> type could be either MAYBE_NULL or RDONLY, or both.
>
> This patch series rewrites the layout of reg_type, arg_type and
> ret_type, so that common properties can be extracted and represented as
> composable flag. For example, one can write
>
> ARG_PTR_TO_MEM | PTR_MAYBE_NULL
>
> which is equivalent to the previous
>
> ARG_PTR_TO_MEM_OR_NULL
>
> The type ARG_PTR_TO_MEM are called "base types" in this patch. Base
> types can be extended with flags. A flag occupies the higher bits while
> base types sits in the lower bits.
>
> This patch in particular sets up a set of macro for this purpose. The
> followed patches rewrites arg_types, ret_types and reg_types
> respectively.
>
> Signed-off-by: Hao Luo <haoluo@google.com>
> ---
> include/linux/bpf.h | 50 +++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 50 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index cc7a0c36e7df..b592b3f7d223 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -297,6 +297,37 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
>
> extern const struct bpf_map_ops bpf_map_offload_ops;
>
> +/* bpf_type_flag contains a set of flags that are applicable to the values of
> + * arg_type, ret_type and reg_type. For example, a pointer value may be null,
> + * or a memory is read-only. We classify types into two categories: base types
> + * and extended types. Extended types are base types combined with a type flag.
> + *
> + * Currently there are no more than 32 base types in arg_type, ret_type and
> + * reg_types.
> + */
> +#define BPF_BASE_TYPE_BITS 8
> +
> +enum bpf_type_flag {
> + /* PTR may be NULL. */
> + PTR_MAYBE_NULL = BIT(0 + BPF_BASE_TYPE_BITS),
> +
> + __BPF_TYPE_LAST_FLAG = PTR_MAYBE_NULL,
> +};
> +
> +#define BPF_BASE_TYPE_MASK GENMASK(BPF_BASE_TYPE_BITS, 0)
> +
> +/* Max number of base types. */
> +#define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS)
> +
> +/* Max number of all types. */
> +#define BPF_TYPE_LIMIT (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1))
> +
> +/* extract base type. */
> +#define BPF_BASE_TYPE(x) ((x) & BPF_BASE_TYPE_MASK)
> +
> +/* extract flags from an extended type. */
> +#define BPF_TYPE_FLAG(x) ((enum bpf_type_flag)((x) & ~BPF_BASE_TYPE_MASK))
Overall I think it's really great.
The only suggestion is to use:
static inline u32 base_type(u32 x)
{
return x & BPF_BASE_TYPE_MASK;
}
and
static inline u32 type_flag(u32 x) ..
The capital letter macros are too loud.
wdyt?
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX | PTR_MAYBE_NULL
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX " Hao Luo
@ 2021-12-01 20:30 ` Alexei Starovoitov
2021-12-01 22:40 ` Hao Luo
0 siblings, 1 reply; 20+ messages in thread
From: Alexei Starovoitov @ 2021-12-01 20:30 UTC (permalink / raw)
To: Hao Luo
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Mon, Nov 29, 2021 at 05:29:42PM -0800, Hao Luo wrote:
> /* update return register (already marked as written above) */
> - if (fn->ret_type == RET_INTEGER) {
> + ret_type = fn->ret_type;
> + if (ret_type == RET_INTEGER) {
> /* sets type to SCALAR_VALUE */
> mark_reg_unknown(env, regs, BPF_REG_0);
> - } else if (fn->ret_type == RET_VOID) {
> + } else if (ret_type == RET_VOID) {
> regs[BPF_REG_0].type = NOT_INIT;
> - } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL ||
> - fn->ret_type == RET_PTR_TO_MAP_VALUE) {
> + } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_MAP_VALUE) {
> /* There is no offset yet applied, variable or fixed */
> mark_reg_known_zero(env, regs, BPF_REG_0);
> /* remember map_ptr, so that check_map_access()
> @@ -6530,28 +6536,27 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> }
> regs[BPF_REG_0].map_ptr = meta.map_ptr;
> regs[BPF_REG_0].map_uid = meta.map_uid;
> - if (fn->ret_type == RET_PTR_TO_MAP_VALUE) {
> + if (ret_type_may_be_null(fn->ret_type)) {
it should have been ret_type here?
> + regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
> + } else {
> regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem Hao Luo
@ 2021-12-01 20:34 ` Alexei Starovoitov
2021-12-01 22:21 ` Hao Luo
0 siblings, 1 reply; 20+ messages in thread
From: Alexei Starovoitov @ 2021-12-01 20:34 UTC (permalink / raw)
To: Hao Luo
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
>
> +
> struct bpf_reg_types {
> const enum bpf_reg_type types[10];
> u32 *btf_id;
> +
> + /* Certain types require customized type matching function. */
> + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> + enum bpf_reg_type type,
> + enum bpf_reg_type expected);
> };
>
> static const struct bpf_reg_types map_key_value_types = {
> @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> };
> #endif
>
> +static bool mem_type_match(enum bpf_arg_type arg_type,
> + enum bpf_reg_type type, enum bpf_reg_type expected)
> +{
> + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> + * comparison.
> + */
> + if ((arg_type & MEM_RDONLY) != 0)
> + type &= ~MEM_RDONLY;
> +
> + return type == expected;
> +}
> +
> static const struct bpf_reg_types mem_types = {
> .types = {
> PTR_TO_STACK,
> @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> PTR_TO_MAP_VALUE,
> PTR_TO_MEM,
> PTR_TO_BUF,
> - PTR_TO_BUF | MEM_RDONLY,
> },
> + .type_match_fn = mem_type_match,
why add a callback for this logic?
Isn't it a universal rule for MEM_RDONLY?
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-12-01 20:34 ` Alexei Starovoitov
@ 2021-12-01 22:21 ` Hao Luo
2021-12-02 3:53 ` Alexei Starovoitov
0 siblings, 1 reply; 20+ messages in thread
From: Hao Luo @ 2021-12-01 22:21 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Wed, Dec 1, 2021 at 12:34 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
> >
> > +
> > struct bpf_reg_types {
> > const enum bpf_reg_type types[10];
> > u32 *btf_id;
> > +
> > + /* Certain types require customized type matching function. */
> > + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> > + enum bpf_reg_type type,
> > + enum bpf_reg_type expected);
> > };
> >
> > static const struct bpf_reg_types map_key_value_types = {
> > @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> > };
> > #endif
> >
> > +static bool mem_type_match(enum bpf_arg_type arg_type,
> > + enum bpf_reg_type type, enum bpf_reg_type expected)
> > +{
> > + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> > + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> > + * comparison.
> > + */
> > + if ((arg_type & MEM_RDONLY) != 0)
> > + type &= ~MEM_RDONLY;
> > +
> > + return type == expected;
> > +}
> > +
> > static const struct bpf_reg_types mem_types = {
> > .types = {
> > PTR_TO_STACK,
> > @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> > PTR_TO_MAP_VALUE,
> > PTR_TO_MEM,
> > PTR_TO_BUF,
> > - PTR_TO_BUF | MEM_RDONLY,
> > },
> > + .type_match_fn = mem_type_match,
>
> why add a callback for this logic?
> Isn't it a universal rule for MEM_RDONLY?
Ah, good point, I didn't realize that. Maybe, not only MEM_RDONLY, but
all flags can be checked in the same way? Like the following
static const struct bpf_reg_types int_ptr_types = {
@@ -5097,6 +5116,13 @@ static int check_reg_type(struct
bpf_verifier_env *env, u32 regno,
if (expected == NOT_INIT)
break;
+ flag = bpf_type_flag(arg_type);
- if (type == expected)
+ if ((type & ~flag) == expected)
goto found;
}
* Re: [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types.
2021-12-01 20:29 ` Alexei Starovoitov
@ 2021-12-01 22:36 ` Hao Luo
0 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-12-01 22:36 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Wed, Dec 1, 2021 at 12:29 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Nov 29, 2021 at 05:29:40PM -0800, Hao Luo wrote:
> > There are some common properties shared between bpf reg, ret and arg
> > values. For instance, a value may be a NULL pointer, or a pointer to
> > a read-only memory. Previously, to express these properties, enumeration
> > was used. For example, in order to test whether a reg value can be NULL,
> > reg_type_may_be_null() simply enumerates all types that are possibly
> > NULL. The problem of this approach is that it's not scalable and causes
> > a lot of duplication. These properties can be combined, for example, a
> > type could be either MAYBE_NULL or RDONLY, or both.
> >
> > This patch series rewrites the layout of reg_type, arg_type and
> > ret_type, so that common properties can be extracted and represented as
> > composable flag. For example, one can write
> >
> > ARG_PTR_TO_MEM | PTR_MAYBE_NULL
> >
> > which is equivalent to the previous
> >
> > ARG_PTR_TO_MEM_OR_NULL
> >
> > The type ARG_PTR_TO_MEM are called "base types" in this patch. Base
> > types can be extended with flags. A flag occupies the higher bits while
> > base types sits in the lower bits.
> >
> > This patch in particular sets up a set of macro for this purpose. The
> > followed patches rewrites arg_types, ret_types and reg_types
> > respectively.
> >
> > Signed-off-by: Hao Luo <haoluo@google.com>
> > ---
> > include/linux/bpf.h | 50 +++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 50 insertions(+)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index cc7a0c36e7df..b592b3f7d223 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -297,6 +297,37 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
> >
> > extern const struct bpf_map_ops bpf_map_offload_ops;
> >
> > +/* bpf_type_flag contains a set of flags that are applicable to the values of
> > + * arg_type, ret_type and reg_type. For example, a pointer value may be null,
> > + * or a memory is read-only. We classify types into two categories: base types
> > + * and extended types. Extended types are base types combined with a type flag.
> > + *
> > + * Currently there are no more than 32 base types in arg_type, ret_type and
> > + * reg_types.
> > + */
> > +#define BPF_BASE_TYPE_BITS 8
> > +
> > +enum bpf_type_flag {
> > + /* PTR may be NULL. */
> > + PTR_MAYBE_NULL = BIT(0 + BPF_BASE_TYPE_BITS),
> > +
> > + __BPF_TYPE_LAST_FLAG = PTR_MAYBE_NULL,
> > +};
> > +
> > +#define BPF_BASE_TYPE_MASK GENMASK(BPF_BASE_TYPE_BITS, 0)
> > +
> > +/* Max number of base types. */
> > +#define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS)
> > +
> > +/* Max number of all types. */
> > +#define BPF_TYPE_LIMIT (__BPF_TYPE_LAST_FLAG | (__BPF_TYPE_LAST_FLAG - 1))
> > +
> > +/* extract base type. */
> > +#define BPF_BASE_TYPE(x) ((x) & BPF_BASE_TYPE_MASK)
> > +
> > +/* extract flags from an extended type. */
> > +#define BPF_TYPE_FLAG(x) ((enum bpf_type_flag)((x) & ~BPF_BASE_TYPE_MASK))
>
> Overall I think it's really great.
> The only suggestion is to use:
> static inline u32 base_type(u32 x)
> {
> return x & BPF_BASE_TYPE_MASK;
> }
> and
> static inline u32 type_flag(u32 x) ..
>
> The capital letter macros are too loud.
>
> wdyt?
>
Sounds good to me. Will do in the next iteration.
* Re: [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX | PTR_MAYBE_NULL
2021-12-01 20:30 ` Alexei Starovoitov
@ 2021-12-01 22:40 ` Hao Luo
0 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-12-01 22:40 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Wed, Dec 1, 2021 at 12:30 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Nov 29, 2021 at 05:29:42PM -0800, Hao Luo wrote:
> > /* update return register (already marked as written above) */
> > - if (fn->ret_type == RET_INTEGER) {
> > + ret_type = fn->ret_type;
> > + if (ret_type == RET_INTEGER) {
> > /* sets type to SCALAR_VALUE */
> > mark_reg_unknown(env, regs, BPF_REG_0);
> > - } else if (fn->ret_type == RET_VOID) {
> > + } else if (ret_type == RET_VOID) {
> > regs[BPF_REG_0].type = NOT_INIT;
> > - } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL ||
> > - fn->ret_type == RET_PTR_TO_MAP_VALUE) {
> > + } else if (BPF_BASE_TYPE(ret_type) == RET_PTR_TO_MAP_VALUE) {
> > /* There is no offset yet applied, variable or fixed */
> > mark_reg_known_zero(env, regs, BPF_REG_0);
> > /* remember map_ptr, so that check_map_access()
> > @@ -6530,28 +6536,27 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> > }
> > regs[BPF_REG_0].map_ptr = meta.map_ptr;
> > regs[BPF_REG_0].map_uid = meta.map_uid;
> > - if (fn->ret_type == RET_PTR_TO_MAP_VALUE) {
> > + if (ret_type_may_be_null(fn->ret_type)) {
>
> it should have been ret_type here?
>
Yes. I think fn->ret_type and ret_type are the same here. I can
replace it with 'ret_type' in the next version.
> > + regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
> > + } else {
> > regs[BPF_REG_0].type = PTR_TO_MAP_VALUE;
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-12-01 22:21 ` Hao Luo
@ 2021-12-02 3:53 ` Alexei Starovoitov
2021-12-02 18:42 ` Hao Luo
0 siblings, 1 reply; 20+ messages in thread
From: Alexei Starovoitov @ 2021-12-02 3:53 UTC (permalink / raw)
To: Hao Luo
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Wed, Dec 1, 2021 at 2:21 PM Hao Luo <haoluo@google.com> wrote:
>
> On Wed, Dec 1, 2021 at 12:34 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
> > >
> > > +
> > > struct bpf_reg_types {
> > > const enum bpf_reg_type types[10];
> > > u32 *btf_id;
> > > +
> > > + /* Certain types require customized type matching function. */
> > > + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> > > + enum bpf_reg_type type,
> > > + enum bpf_reg_type expected);
> > > };
> > >
> > > static const struct bpf_reg_types map_key_value_types = {
> > > @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> > > };
> > > #endif
> > >
> > > +static bool mem_type_match(enum bpf_arg_type arg_type,
> > > + enum bpf_reg_type type, enum bpf_reg_type expected)
> > > +{
> > > + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> > > + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> > > + * comparison.
> > > + */
> > > + if ((arg_type & MEM_RDONLY) != 0)
> > > + type &= ~MEM_RDONLY;
> > > +
> > > + return type == expected;
> > > +}
> > > +
> > > static const struct bpf_reg_types mem_types = {
> > > .types = {
> > > PTR_TO_STACK,
> > > @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> > > PTR_TO_MAP_VALUE,
> > > PTR_TO_MEM,
> > > PTR_TO_BUF,
> > > - PTR_TO_BUF | MEM_RDONLY,
> > > },
> > > + .type_match_fn = mem_type_match,
> >
> > why add a callback for this logic?
> > Isn't it a universal rule for MEM_RDONLY?
>
> Ah, good point, I didn't realize that. Maybe, not only MEM_RDONLY, but
> all flags can be checked in the same way? Like the following
>
> static const struct bpf_reg_types int_ptr_types = {
> @@ -5097,6 +5116,13 @@ static int check_reg_type(struct
> bpf_verifier_env *env, u32 regno,
> if (expected == NOT_INIT)
> break;
>
> + flag = bpf_type_flag(arg_type);
>
> - if (type == expected)
> + if ((type & ~flag) == expected)
> goto found;
> }
I think for MAYBE_NULL and MEM_RDONLY that would be correct,
but not necessarily apply to future flags.
I would open code it for these specific flags.
Also what do you think about dropping bpf_ prefix from type_flag()?
It won't conflict with anything and less verbose.
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-12-02 3:53 ` Alexei Starovoitov
@ 2021-12-02 18:42 ` Hao Luo
2021-12-02 21:13 ` Alexei Starovoitov
0 siblings, 1 reply; 20+ messages in thread
From: Hao Luo @ 2021-12-02 18:42 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Wed, Dec 1, 2021 at 7:53 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Dec 1, 2021 at 2:21 PM Hao Luo <haoluo@google.com> wrote:
> >
> > On Wed, Dec 1, 2021 at 12:34 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
> > > >
> > > > +
> > > > struct bpf_reg_types {
> > > > const enum bpf_reg_type types[10];
> > > > u32 *btf_id;
> > > > +
> > > > + /* Certain types require customized type matching function. */
> > > > + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> > > > + enum bpf_reg_type type,
> > > > + enum bpf_reg_type expected);
> > > > };
> > > >
> > > > static const struct bpf_reg_types map_key_value_types = {
> > > > @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> > > > };
> > > > #endif
> > > >
> > > > +static bool mem_type_match(enum bpf_arg_type arg_type,
> > > > + enum bpf_reg_type type, enum bpf_reg_type expected)
> > > > +{
> > > > + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> > > > + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> > > > + * comparison.
> > > > + */
> > > > + if ((arg_type & MEM_RDONLY) != 0)
> > > > + type &= ~MEM_RDONLY;
> > > > +
> > > > + return type == expected;
> > > > +}
> > > > +
> > > > static const struct bpf_reg_types mem_types = {
> > > > .types = {
> > > > PTR_TO_STACK,
> > > > @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> > > > PTR_TO_MAP_VALUE,
> > > > PTR_TO_MEM,
> > > > PTR_TO_BUF,
> > > > - PTR_TO_BUF | MEM_RDONLY,
> > > > },
> > > > + .type_match_fn = mem_type_match,
> > >
> > > why add a callback for this logic?
> > > Isn't it a universal rule for MEM_RDONLY?
> >
> > Ah, good point, I didn't realize that. Maybe, not only MEM_RDONLY, but
> > all flags can be checked in the same way? Like the following
> >
> > static const struct bpf_reg_types int_ptr_types = {
> > @@ -5097,6 +5116,13 @@ static int check_reg_type(struct
> > bpf_verifier_env *env, u32 regno,
> > if (expected == NOT_INIT)
> > break;
> >
> > + flag = bpf_type_flag(arg_type);
> >
> > - if (type == expected)
> > + if ((type & ~flag) == expected)
> > goto found;
> > }
>
> I think for MAYBE_NULL and MEM_RDONLY that would be correct,
> but not necessarily apply to future flags.
> I would open code it for these specific flags.
ack.
> Also what do you think about dropping bpf_ prefix from type_flag()?
> It won't conflict with anything and less verbose.
Sounds good as long as it won't conflict. IMO it would be good to have
an internal header in kernel/bpf. It appears to me that we put
everything in linux/bpf.h now. In sched, there is a
kernel/sched/sched.h used internally in kernel/sched and a
linux/sched.h that is public to other subsystems.
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-12-02 18:42 ` Hao Luo
@ 2021-12-02 21:13 ` Alexei Starovoitov
2021-12-03 0:14 ` Hao Luo
0 siblings, 1 reply; 20+ messages in thread
From: Alexei Starovoitov @ 2021-12-02 21:13 UTC (permalink / raw)
To: Hao Luo
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Thu, Dec 2, 2021 at 10:42 AM Hao Luo <haoluo@google.com> wrote:
>
> On Wed, Dec 1, 2021 at 7:53 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Wed, Dec 1, 2021 at 2:21 PM Hao Luo <haoluo@google.com> wrote:
> > >
> > > On Wed, Dec 1, 2021 at 12:34 PM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
> > > > >
> > > > > +
> > > > > struct bpf_reg_types {
> > > > > const enum bpf_reg_type types[10];
> > > > > u32 *btf_id;
> > > > > +
> > > > > + /* Certain types require customized type matching function. */
> > > > > + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> > > > > + enum bpf_reg_type type,
> > > > > + enum bpf_reg_type expected);
> > > > > };
> > > > >
> > > > > static const struct bpf_reg_types map_key_value_types = {
> > > > > @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> > > > > };
> > > > > #endif
> > > > >
> > > > > +static bool mem_type_match(enum bpf_arg_type arg_type,
> > > > > + enum bpf_reg_type type, enum bpf_reg_type expected)
> > > > > +{
> > > > > + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> > > > > + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> > > > > + * comparison.
> > > > > + */
> > > > > + if ((arg_type & MEM_RDONLY) != 0)
> > > > > + type &= ~MEM_RDONLY;
> > > > > +
> > > > > + return type == expected;
> > > > > +}
> > > > > +
> > > > > static const struct bpf_reg_types mem_types = {
> > > > > .types = {
> > > > > PTR_TO_STACK,
> > > > > @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> > > > > PTR_TO_MAP_VALUE,
> > > > > PTR_TO_MEM,
> > > > > PTR_TO_BUF,
> > > > > - PTR_TO_BUF | MEM_RDONLY,
> > > > > },
> > > > > + .type_match_fn = mem_type_match,
> > > >
> > > > why add a callback for this logic?
> > > > Isn't it a universal rule for MEM_RDONLY?
> > >
> > > Ah, good point, I didn't realize that. Maybe, not only MEM_RDONLY, but
> > > all flags can be checked in the same way? Like the following
> > >
> > > static const struct bpf_reg_types int_ptr_types = {
> > > @@ -5097,6 +5116,13 @@ static int check_reg_type(struct
> > > bpf_verifier_env *env, u32 regno,
> > > if (expected == NOT_INIT)
> > > break;
> > >
> > > + flag = bpf_type_flag(arg_type);
> > >
> > > - if (type == expected)
> > > + if ((type & ~flag) == expected)
> > > goto found;
> > > }
> >
> > I think for MAYBE_NULL and MEM_RDONLY that would be correct,
> > but not necessarily apply to future flags.
> > I would open code it for these specific flags.
>
> ack.
>
> > Also what do you think about dropping bpf_ prefix from type_flag()?
> > It won't conflict with anything and less verbose.
>
> Sounds good as long as it won't conflict. IMO it would be good to have
> an internal header in kernel/bpf. It appears to me that we put
> everything in linux/bpf.h now. In sched, there is a
> kernel/sched/sched.h used internally in kernel/sched and a
> linux/sched.h that is public to other subsystems.
yeah. That's long overdue. We have include/linux/bpf_verifier.h
that might work for this case too.
Or even directly in verifier.c (if possible).
* Re: [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem.
2021-12-02 21:13 ` Alexei Starovoitov
@ 2021-12-03 0:14 ` Hao Luo
0 siblings, 0 replies; 20+ messages in thread
From: Hao Luo @ 2021-12-03 0:14 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, bpf
On Thu, Dec 2, 2021 at 1:13 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Dec 2, 2021 at 10:42 AM Hao Luo <haoluo@google.com> wrote:
> >
> > On Wed, Dec 1, 2021 at 7:53 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Wed, Dec 1, 2021 at 2:21 PM Hao Luo <haoluo@google.com> wrote:
> > > >
> > > > On Wed, Dec 1, 2021 at 12:34 PM Alexei Starovoitov
> > > > <alexei.starovoitov@gmail.com> wrote:
> > > > >
> > > > > On Mon, Nov 29, 2021 at 05:29:47PM -0800, Hao Luo wrote:
> > > > > >
> > > > > > +
> > > > > > struct bpf_reg_types {
> > > > > > const enum bpf_reg_type types[10];
> > > > > > u32 *btf_id;
> > > > > > +
> > > > > > + /* Certain types require customized type matching function. */
> > > > > > + bool (*type_match_fn)(enum bpf_arg_type arg_type,
> > > > > > + enum bpf_reg_type type,
> > > > > > + enum bpf_reg_type expected);
> > > > > > };
> > > > > >
> > > > > > static const struct bpf_reg_types map_key_value_types = {
> > > > > > @@ -5013,6 +5019,19 @@ static const struct bpf_reg_types btf_id_sock_common_types = {
> > > > > > };
> > > > > > #endif
> > > > > >
> > > > > > +static bool mem_type_match(enum bpf_arg_type arg_type,
> > > > > > + enum bpf_reg_type type, enum bpf_reg_type expected)
> > > > > > +{
> > > > > > + /* If arg_type is tagged with MEM_RDONLY, type is compatible with both
> > > > > > + * RDONLY and RDWR mem, fold the MEM_RDONLY flag in 'type' before
> > > > > > + * comparison.
> > > > > > + */
> > > > > > + if ((arg_type & MEM_RDONLY) != 0)
> > > > > > + type &= ~MEM_RDONLY;
> > > > > > +
> > > > > > + return type == expected;
> > > > > > +}
> > > > > > +
> > > > > > static const struct bpf_reg_types mem_types = {
> > > > > > .types = {
> > > > > > PTR_TO_STACK,
> > > > > > @@ -5022,8 +5041,8 @@ static const struct bpf_reg_types mem_types = {
> > > > > > PTR_TO_MAP_VALUE,
> > > > > > PTR_TO_MEM,
> > > > > > PTR_TO_BUF,
> > > > > > - PTR_TO_BUF | MEM_RDONLY,
> > > > > > },
> > > > > > + .type_match_fn = mem_type_match,
> > > > >
> > > > > why add a callback for this logic?
> > > > > Isn't it a universal rule for MEM_RDONLY?
> > > >
> > > > Ah, good point, I didn't realize that. Maybe, not only MEM_RDONLY, but
> > > > all flags can be checked in the same way? Like the following
> > > >
> > > > static const struct bpf_reg_types int_ptr_types = {
> > > > @@ -5097,6 +5116,13 @@ static int check_reg_type(struct
> > > > bpf_verifier_env *env, u32 regno,
> > > > if (expected == NOT_INIT)
> > > > break;
> > > >
> > > > + flag = bpf_type_flag(arg_type);
> > > >
> > > > - if (type == expected)
> > > > + if ((type & ~flag) == expected)
> > > > goto found;
> > > > }
> > >
> > > I think for MAYBE_NULL and MEM_RDONLY that would be correct,
> > > but not necessarily apply to future flags.
> > > I would open code it for these specific flags.
> >
> > ack.
> >
> > > Also what do you think about dropping bpf_ prefix from type_flag()?
> > > It won't conflict with anything and less verbose.
> >
> > Sounds good as long as it won't conflict. IMO it would be good to have
> > an internal header in kernel/bpf. It appears to me that we put
> > everything in linux/bpf.h now. In sched, there is a
> > kernel/sched/sched.h used internally in kernel/sched and a
> > linux/sched.h that is public to other subsystems.
>
> yeah. That's long overdue. We have include/linux/bpf_verifier.h
> that might work for this case too.
> Or even directly in verifier.c (if possible).
Ack. Directly in verifier.c would be best, but bpf_verifier.h works.
There is only one place outside verifier.c (i.e. btf_ctx_access() in
btf.c) that needs base_type; the rest are all in verifier.c. Anyway, I
am going to put these functions in bpf_verifier.h for now.
end of thread, other threads:[~2021-12-03 0:14 UTC | newest]
Thread overview: 20+ messages
2021-11-30 1:29 [RFC PATCH bpf-next v2 0/9] Introduce composable bpf types Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 1/9] bpf: Introduce composable reg, ret and arg types Hao Luo
2021-12-01 20:29 ` Alexei Starovoitov
2021-12-01 22:36 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 2/9] bpf: Replace ARG_XXX_OR_NULL with ARG_XXX | PTR_MAYBE_NULL Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 3/9] bpf: Replace RET_XXX_OR_NULL with RET_XXX " Hao Luo
2021-12-01 20:30 ` Alexei Starovoitov
2021-12-01 22:40 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 4/9] bpf: Replace PTR_TO_XXX_OR_NULL with PTR_TO_XXX " Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 5/9] bpf: Introduce MEM_RDONLY flag Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 6/9] bpf: Convert PTR_TO_MEM_OR_NULL to composable types Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 7/9] bpf: Make per_cpu_ptr return rdonly PTR_TO_MEM Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 8/9] bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem Hao Luo
2021-12-01 20:34 ` Alexei Starovoitov
2021-12-01 22:21 ` Hao Luo
2021-12-02 3:53 ` Alexei Starovoitov
2021-12-02 18:42 ` Hao Luo
2021-12-02 21:13 ` Alexei Starovoitov
2021-12-03 0:14 ` Hao Luo
2021-11-30 1:29 ` [RFC PATCH bpf-next v2 9/9] bpf/selftests: Test PTR_TO_RDONLY_MEM Hao Luo