* [PATCH bpf-next 0/4] Clean up BPF permissions checks
@ 2023-06-13 22:35 Andrii Nakryiko
From: Andrii Nakryiko @ 2023-06-13 22:35 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
This patch set contains a few refactorings to BPF map and BPF program creation
permissions checks, which were originally part of BPF token patch set ([0]),
but are logically completely independent and useful in their own right.
[0] https://patchwork.kernel.org/project/netdevbpf/list/?series=755113&state=*
Andrii Nakryiko (4):
bpf: move unprivileged checks into map_create() and bpf_prog_load()
bpf: inline map creation logic in map_create() function
bpf: centralize permissions checks for all BPF map types
bpf: keep BPF_PROG_LOAD permission checks clear of validations
kernel/bpf/bloom_filter.c | 3 -
kernel/bpf/bpf_local_storage.c | 3 -
kernel/bpf/bpf_struct_ops.c | 3 -
kernel/bpf/cpumap.c | 4 -
kernel/bpf/devmap.c | 3 -
kernel/bpf/hashtab.c | 6 -
kernel/bpf/lpm_trie.c | 3 -
kernel/bpf/queue_stack_maps.c | 4 -
kernel/bpf/reuseport_array.c | 3 -
kernel/bpf/stackmap.c | 3 -
kernel/bpf/syscall.c | 155 +++++++++++-------
net/core/sock_map.c | 4 -
net/xdp/xskmap.c | 4 -
.../bpf/prog_tests/unpriv_bpf_disabled.c | 6 +-
14 files changed, 102 insertions(+), 102 deletions(-)
--
2.34.1
* [PATCH bpf-next 1/4] bpf: move unprivileged checks into map_create() and bpf_prog_load()
From: Andrii Nakryiko @ 2023-06-13 22:35 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Make each bpf() syscall command a bit more self-contained, making it
easier to further enhance each of them. To that end, move
sysctl_unprivileged_bpf_disabled handling down into map_create() and
bpf_prog_load(), the two commands that are special in this regard.
Also swap the order of checks, calling bpf_capable() only if
sysctl_unprivileged_bpf_disabled is true, avoiding unnecessary audit
messages.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/syscall.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 92a57efc77de..1cc590101e19 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1157,6 +1157,15 @@ static int map_create(union bpf_attr *attr)
!node_online(numa_node)))
return -EINVAL;
+ /* Intent here is for unprivileged_bpf_disabled to block BPF map
+ * creation for unprivileged users; other actions depend
+ * on fd availability and access to bpffs, so are dependent on
+ * object creation success. Even with unprivileged BPF disabled,
+ * capability checks are still carried out.
+ */
+ if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
+ return -EPERM;
+
/* find map type and init map: hashtable vs rbtree vs bloom vs ... */
map = find_and_alloc_map(attr);
if (IS_ERR(map))
@@ -2532,6 +2541,16 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
/* eBPF programs must be GPL compatible to use GPL-ed functions */
is_gpl = license_is_gpl_compatible(license);
+ /* Intent here is for unprivileged_bpf_disabled to block BPF program
+ * creation for unprivileged users; other actions depend
+ * on fd availability and access to bpffs, so are dependent on
+ * object creation success. Even with unprivileged BPF disabled,
+ * capability checks are still carried out for these
+ * and other operations.
+ */
+ if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
+ return -EPERM;
+
if (attr->insn_cnt == 0 ||
attr->insn_cnt > (bpf_capable() ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS))
return -E2BIG;
@@ -5027,23 +5046,8 @@ static int bpf_prog_bind_map(union bpf_attr *attr)
static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
{
union bpf_attr attr;
- bool capable;
int err;
- capable = bpf_capable() || !sysctl_unprivileged_bpf_disabled;
-
- /* Intent here is for unprivileged_bpf_disabled to block key object
- * creation commands for unprivileged users; other actions depend
- * of fd availability and access to bpffs, so are dependent on
- * object creation success. Capabilities are later verified for
- * operations such as load and map create, so even with unprivileged
- * BPF disabled, capability checks are still carried out for these
- * and other operations.
- */
- if (!capable &&
- (cmd == BPF_MAP_CREATE || cmd == BPF_PROG_LOAD))
- return -EPERM;
-
err = bpf_check_uarg_tail_zero(uattr, sizeof(attr), size);
if (err)
return err;
--
2.34.1
* [PATCH bpf-next 2/4] bpf: inline map creation logic in map_create() function
From: Andrii Nakryiko @ 2023-06-13 22:35 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Currently, find_and_alloc_map() performs two separate functions: argument
sanity checking and part of the map creation workflow. Neither part is
self-sufficient; both are augmented by further checks and initialization
logic in the caller (the map_create() function). So unify all the sanity
checks, permission checks, and creation and initialization logic into one
linear piece of code in map_create() instead. This also makes it easier
to further enhance permission checks and keep them located in one place.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/syscall.c | 57 +++++++++++++++++++-------------------------
1 file changed, 24 insertions(+), 33 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 1cc590101e19..be885d547cde 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -109,37 +109,6 @@ const struct bpf_map_ops bpf_map_offload_ops = {
.map_mem_usage = bpf_map_offload_map_mem_usage,
};
-static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)
-{
- const struct bpf_map_ops *ops;
- u32 type = attr->map_type;
- struct bpf_map *map;
- int err;
-
- if (type >= ARRAY_SIZE(bpf_map_types))
- return ERR_PTR(-EINVAL);
- type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types));
- ops = bpf_map_types[type];
- if (!ops)
- return ERR_PTR(-EINVAL);
-
- if (ops->map_alloc_check) {
- err = ops->map_alloc_check(attr);
- if (err)
- return ERR_PTR(err);
- }
- if (attr->map_ifindex)
- ops = &bpf_map_offload_ops;
- if (!ops->map_mem_usage)
- return ERR_PTR(-EINVAL);
- map = ops->map_alloc(attr);
- if (IS_ERR(map))
- return map;
- map->ops = ops;
- map->map_type = type;
- return map;
-}
-
static void bpf_map_write_active_inc(struct bpf_map *map)
{
atomic64_inc(&map->writecnt);
@@ -1127,7 +1096,9 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
/* called via syscall */
static int map_create(union bpf_attr *attr)
{
+ const struct bpf_map_ops *ops;
int numa_node = bpf_map_attr_numa_node(attr);
+ u32 map_type = attr->map_type;
struct bpf_map *map;
int f_flags;
int err;
@@ -1157,6 +1128,25 @@ static int map_create(union bpf_attr *attr)
!node_online(numa_node)))
return -EINVAL;
+ /* find map type and init map: hashtable vs rbtree vs bloom vs ... */
+ map_type = attr->map_type;
+ if (map_type >= ARRAY_SIZE(bpf_map_types))
+ return -EINVAL;
+ map_type = array_index_nospec(map_type, ARRAY_SIZE(bpf_map_types));
+ ops = bpf_map_types[map_type];
+ if (!ops)
+ return -EINVAL;
+
+ if (ops->map_alloc_check) {
+ err = ops->map_alloc_check(attr);
+ if (err)
+ return err;
+ }
+ if (attr->map_ifindex)
+ ops = &bpf_map_offload_ops;
+ if (!ops->map_mem_usage)
+ return -EINVAL;
+
/* Intent here is for unprivileged_bpf_disabled to block BPF map
* creation for unprivileged users; other actions depend
* on fd availability and access to bpffs, so are dependent on
@@ -1166,10 +1156,11 @@ static int map_create(union bpf_attr *attr)
if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
return -EPERM;
- /* find map type and init map: hashtable vs rbtree vs bloom vs ... */
- map = find_and_alloc_map(attr);
+ map = ops->map_alloc(attr);
if (IS_ERR(map))
return PTR_ERR(map);
+ map->ops = ops;
+ map->map_type = map_type;
err = bpf_obj_name_cpy(map->name, attr->map_name,
sizeof(attr->map_name));
--
2.34.1
* [PATCH bpf-next 3/4] bpf: centralize permissions checks for all BPF map types
From: Andrii Nakryiko @ 2023-06-13 22:35 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
This allows more centralized decisions to be made later on, and generally
makes it very explicit which map types are privileged and which are not
(e.g., LRU_HASH and LRU_PERCPU_HASH, which are privileged variants of
HASH, as opposed to the unprivileged HASH and PERCPU_HASH; now this is
explicit and easy to verify).
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/bloom_filter.c | 3 --
kernel/bpf/bpf_local_storage.c | 3 --
kernel/bpf/bpf_struct_ops.c | 3 --
kernel/bpf/cpumap.c | 4 --
kernel/bpf/devmap.c | 3 --
kernel/bpf/hashtab.c | 6 ---
kernel/bpf/lpm_trie.c | 3 --
kernel/bpf/queue_stack_maps.c | 4 --
kernel/bpf/reuseport_array.c | 3 --
kernel/bpf/stackmap.c | 3 --
kernel/bpf/syscall.c | 47 +++++++++++++++++++
net/core/sock_map.c | 4 --
net/xdp/xskmap.c | 4 --
.../bpf/prog_tests/unpriv_bpf_disabled.c | 6 ++-
14 files changed, 52 insertions(+), 44 deletions(-)
diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c
index 540331b610a9..addf3dd57b59 100644
--- a/kernel/bpf/bloom_filter.c
+++ b/kernel/bpf/bloom_filter.c
@@ -86,9 +86,6 @@ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr)
int numa_node = bpf_map_attr_numa_node(attr);
struct bpf_bloom_filter *bloom;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
if (attr->key_size != 0 || attr->value_size == 0 ||
attr->max_entries == 0 ||
attr->map_flags & ~BLOOM_CREATE_FLAG_MASK ||
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 47d9948d768f..b5149cfce7d4 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -723,9 +723,6 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
!attr->btf_key_type_id || !attr->btf_value_type_id)
return -EINVAL;
- if (!bpf_capable())
- return -EPERM;
-
if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE)
return -E2BIG;
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index d3f0a4825fa6..116a0ce378ec 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -655,9 +655,6 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
const struct btf_type *t, *vt;
struct bpf_map *map;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
if (!st_ops)
return ERR_PTR(-ENOTSUPP);
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 8ec18faa74ac..8a33e8747a0e 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -28,7 +28,6 @@
#include <linux/sched.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
-#include <linux/capability.h>
#include <trace/events/xdp.h>
#include <linux/btf_ids.h>
@@ -89,9 +88,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
u32 value_size = attr->value_size;
struct bpf_cpu_map *cmap;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
/* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 4 ||
(value_size != offsetofend(struct bpf_cpumap_val, qsize) &&
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 802692fa3905..49cc0b5671c6 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -160,9 +160,6 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
struct bpf_dtab *dtab;
int err;
- if (!capable(CAP_NET_ADMIN))
- return ERR_PTR(-EPERM);
-
dtab = bpf_map_area_alloc(sizeof(*dtab), NUMA_NO_NODE);
if (!dtab)
return ERR_PTR(-ENOMEM);
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 9901efee4339..56d3da7d0bc6 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -422,12 +422,6 @@ static int htab_map_alloc_check(union bpf_attr *attr)
BUILD_BUG_ON(offsetof(struct htab_elem, fnode.next) !=
offsetof(struct htab_elem, hash_node.pprev));
- if (lru && !bpf_capable())
- /* LRU implementation is much complicated than other
- * maps. Hence, limit to CAP_BPF.
- */
- return -EPERM;
-
if (zero_seed && !capable(CAP_SYS_ADMIN))
/* Guard against local DoS, and discourage production use. */
return -EPERM;
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index e0d3ddf2037a..17c7e7782a1f 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -544,9 +544,6 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
{
struct lpm_trie *trie;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
/* check sanity of attributes */
if (attr->max_entries == 0 ||
!(attr->map_flags & BPF_F_NO_PREALLOC) ||
diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
index 601609164ef3..8d2ddcb7566b 100644
--- a/kernel/bpf/queue_stack_maps.c
+++ b/kernel/bpf/queue_stack_maps.c
@@ -7,7 +7,6 @@
#include <linux/bpf.h>
#include <linux/list.h>
#include <linux/slab.h>
-#include <linux/capability.h>
#include <linux/btf_ids.h>
#include "percpu_freelist.h"
@@ -46,9 +45,6 @@ static bool queue_stack_map_is_full(struct bpf_queue_stack *qs)
/* Called from syscall */
static int queue_stack_map_alloc_check(union bpf_attr *attr)
{
- if (!bpf_capable())
- return -EPERM;
-
/* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 0 ||
attr->value_size == 0 ||
diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
index cbf2d8d784b8..4b4f9670f1a9 100644
--- a/kernel/bpf/reuseport_array.c
+++ b/kernel/bpf/reuseport_array.c
@@ -151,9 +151,6 @@ static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr)
int numa_node = bpf_map_attr_numa_node(attr);
struct reuseport_array *array;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
/* allocate all map elements and zero-initialize them */
array = bpf_map_area_alloc(struct_size(array, ptrs, attr->max_entries), numa_node);
if (!array)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index b25fce425b2c..458bb80b14d5 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -74,9 +74,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
u64 cost, n_buckets;
int err;
- if (!bpf_capable())
- return ERR_PTR(-EPERM);
-
if (attr->map_flags & ~STACK_CREATE_FLAG_MASK)
return ERR_PTR(-EINVAL);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index be885d547cde..8324c64e1e54 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1156,6 +1156,53 @@ static int map_create(union bpf_attr *attr)
if (sysctl_unprivileged_bpf_disabled && !bpf_capable())
return -EPERM;
+ /* check privileged map type permissions */
+ switch (map_type) {
+ case BPF_MAP_TYPE_ARRAY:
+ case BPF_MAP_TYPE_PERCPU_ARRAY:
+ case BPF_MAP_TYPE_PROG_ARRAY:
+ case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
+ case BPF_MAP_TYPE_CGROUP_ARRAY:
+ case BPF_MAP_TYPE_ARRAY_OF_MAPS:
+ case BPF_MAP_TYPE_HASH:
+ case BPF_MAP_TYPE_PERCPU_HASH:
+ case BPF_MAP_TYPE_HASH_OF_MAPS:
+ case BPF_MAP_TYPE_RINGBUF:
+ case BPF_MAP_TYPE_USER_RINGBUF:
+ case BPF_MAP_TYPE_CGROUP_STORAGE:
+ case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE:
+ /* unprivileged */
+ break;
+ case BPF_MAP_TYPE_SK_STORAGE:
+ case BPF_MAP_TYPE_INODE_STORAGE:
+ case BPF_MAP_TYPE_TASK_STORAGE:
+ case BPF_MAP_TYPE_CGRP_STORAGE:
+ case BPF_MAP_TYPE_BLOOM_FILTER:
+ case BPF_MAP_TYPE_LPM_TRIE:
+ case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
+ case BPF_MAP_TYPE_STACK_TRACE:
+ case BPF_MAP_TYPE_QUEUE:
+ case BPF_MAP_TYPE_STACK:
+ case BPF_MAP_TYPE_LRU_HASH:
+ case BPF_MAP_TYPE_LRU_PERCPU_HASH:
+ case BPF_MAP_TYPE_STRUCT_OPS:
+ case BPF_MAP_TYPE_CPUMAP:
+ if (!bpf_capable())
+ return -EPERM;
+ break;
+ case BPF_MAP_TYPE_SOCKMAP:
+ case BPF_MAP_TYPE_SOCKHASH:
+ case BPF_MAP_TYPE_DEVMAP:
+ case BPF_MAP_TYPE_DEVMAP_HASH:
+ case BPF_MAP_TYPE_XSKMAP:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ break;
+ default:
+ WARN(1, "unsupported map type %d", map_type);
+ return -EPERM;
+ }
+
map = ops->map_alloc(attr);
if (IS_ERR(map))
return PTR_ERR(map);
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 00afb66cd095..19538d628714 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -32,8 +32,6 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
{
struct bpf_stab *stab;
- if (!capable(CAP_NET_ADMIN))
- return ERR_PTR(-EPERM);
if (attr->max_entries == 0 ||
attr->key_size != 4 ||
(attr->value_size != sizeof(u32) &&
@@ -1085,8 +1083,6 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
struct bpf_shtab *htab;
int i, err;
- if (!capable(CAP_NET_ADMIN))
- return ERR_PTR(-EPERM);
if (attr->max_entries == 0 ||
attr->key_size == 0 ||
(attr->value_size != sizeof(u32) &&
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 2c1427074a3b..e1c526f97ce3 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -5,7 +5,6 @@
#include <linux/bpf.h>
#include <linux/filter.h>
-#include <linux/capability.h>
#include <net/xdp_sock.h>
#include <linux/slab.h>
#include <linux/sched.h>
@@ -68,9 +67,6 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
int numa_node;
u64 size;
- if (!capable(CAP_NET_ADMIN))
- return ERR_PTR(-EPERM);
-
if (attr->max_entries == 0 || attr->key_size != 4 ||
attr->value_size != 4 ||
attr->map_flags & ~(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY))
diff --git a/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c b/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c
index 8383a99f610f..0adf8d9475cb 100644
--- a/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c
+++ b/tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c
@@ -171,7 +171,11 @@ static void test_unpriv_bpf_disabled_negative(struct test_unpriv_bpf_disabled *s
prog_insns, prog_insn_cnt, &load_opts),
-EPERM, "prog_load_fails");
- for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_BLOOM_FILTER; i++)
+ /* some map types require particular correct parameters which could be
+ * sanity-checked before enforcing -EPERM, so only validate that
+ * the simple ARRAY and HASH maps are failing with -EPERM
+ */
+ for (i = BPF_MAP_TYPE_HASH; i <= BPF_MAP_TYPE_ARRAY; i++)
ASSERT_EQ(bpf_map_create(i, NULL, sizeof(int), sizeof(int), 1, NULL),
-EPERM, "map_create_fails");
--
2.34.1
* [PATCH bpf-next 4/4] bpf: keep BPF_PROG_LOAD permission checks clear of validations
From: Andrii Nakryiko @ 2023-06-13 22:35 UTC (permalink / raw)
To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team
Move flags validation and license checks out of the permission checks.
They were intermingled, which makes subsequent changes harder. Clean this
up: perform straightforward flag validation upfront, and fetch and check
the license later, right where we use it. Also consolidate the capability
checks in one block, right after the basic attribute sanity checks.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
kernel/bpf/syscall.c | 21 +++++++++------------
1 file changed, 9 insertions(+), 12 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 8324c64e1e54..1eaa16d3c6db 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2550,7 +2550,6 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
struct btf *attach_btf = NULL;
int err;
char license[128];
- bool is_gpl;
if (CHECK_ATTR(BPF_PROG_LOAD))
return -EINVAL;
@@ -2569,16 +2568,6 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
!bpf_capable())
return -EPERM;
- /* copy eBPF program license from user space */
- if (strncpy_from_bpfptr(license,
- make_bpfptr(attr->license, uattr.is_kernel),
- sizeof(license) - 1) < 0)
- return -EFAULT;
- license[sizeof(license) - 1] = 0;
-
- /* eBPF programs must be GPL compatible to use GPL-ed functions */
- is_gpl = license_is_gpl_compatible(license);
-
/* Intent here is for unprivileged_bpf_disabled to block BPF program
* creation for unprivileged users; other actions depend
* on fd availability and access to bpffs, so are dependent on
@@ -2671,12 +2660,20 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
make_bpfptr(attr->insns, uattr.is_kernel),
bpf_prog_insn_size(prog)) != 0)
goto free_prog_sec;
+ /* copy eBPF program license from user space */
+ if (strncpy_from_bpfptr(license,
+ make_bpfptr(attr->license, uattr.is_kernel),
+ sizeof(license) - 1) < 0)
+ goto free_prog_sec;
+ license[sizeof(license) - 1] = 0;
+
+ /* eBPF programs must be GPL compatible to use GPL-ed functions */
+ prog->gpl_compatible = license_is_gpl_compatible(license) ? 1 : 0;
prog->orig_prog = NULL;
prog->jited = 0;
atomic64_set(&prog->aux->refcnt, 1);
- prog->gpl_compatible = is_gpl ? 1 : 0;
if (bpf_prog_is_dev_bound(prog->aux)) {
err = bpf_prog_dev_bound_init(prog, attr);
--
2.34.1
* Re: [PATCH bpf-next 0/4] Clean up BPF permissions checks
From: Stanislav Fomichev @ 2023-06-14 22:08 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
On 06/13, Andrii Nakryiko wrote:
> This patch set contains a few refactorings to BPF map and BPF program creation
> permissions checks, which were originally part of BPF token patch set ([0]),
> but are logically completely independent and useful in their own right.
>
> [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=755113&state=*
>
> Andrii Nakryiko (4):
> bpf: move unprivileged checks into map_create() and bpf_prog_load()
> bpf: inline map creation logic in map_create() function
> bpf: centralize permissions checks for all BPF map types
> bpf: keep BPF_PROG_LOAD permission checks clear of validations
>
> kernel/bpf/bloom_filter.c | 3 -
> kernel/bpf/bpf_local_storage.c | 3 -
> kernel/bpf/bpf_struct_ops.c | 3 -
> kernel/bpf/cpumap.c | 4 -
> kernel/bpf/devmap.c | 3 -
> kernel/bpf/hashtab.c | 6 -
> kernel/bpf/lpm_trie.c | 3 -
> kernel/bpf/queue_stack_maps.c | 4 -
> kernel/bpf/reuseport_array.c | 3 -
> kernel/bpf/stackmap.c | 3 -
> kernel/bpf/syscall.c | 155 +++++++++++-------
> net/core/sock_map.c | 4 -
> net/xdp/xskmap.c | 4 -
> .../bpf/prog_tests/unpriv_bpf_disabled.c | 6 +-
> 14 files changed, 102 insertions(+), 102 deletions(-)
>
> --
> 2.34.1
>
Since I took a look at this as part of the original series, and these
are really non-controversial changes, feel free to use:
Acked-by: Stanislav Fomichev <sdf@google.com>
* Re: [PATCH bpf-next 0/4] Clean up BPF permissions checks
From: patchwork-bot+netdevbpf @ 2023-06-19 12:10 UTC (permalink / raw)
To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team
Hello:
This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:
On Tue, 13 Jun 2023 15:35:29 -0700 you wrote:
> This patch set contains a few refactorings to BPF map and BPF program creation
> permissions checks, which were originally part of BPF token patch set ([0]),
> but are logically completely independent and useful in their own right.
>
> [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=755113&state=*
>
> Andrii Nakryiko (4):
> bpf: move unprivileged checks into map_create() and bpf_prog_load()
> bpf: inline map creation logic in map_create() function
> bpf: centralize permissions checks for all BPF map types
> bpf: keep BPF_PROG_LOAD permission checks clear of validations
>
> [...]
Here is the summary with links:
- [bpf-next,1/4] bpf: move unprivileged checks into map_create() and bpf_prog_load()
https://git.kernel.org/bpf/bpf-next/c/1d28635abcf1
- [bpf-next,2/4] bpf: inline map creation logic in map_create() function
https://git.kernel.org/bpf/bpf-next/c/22db41226b67
- [bpf-next,3/4] bpf: centralize permissions checks for all BPF map types
https://git.kernel.org/bpf/bpf-next/c/6c3eba1c5e28
- [bpf-next,4/4] bpf: keep BPF_PROG_LOAD permission checks clear of validations
https://git.kernel.org/bpf/bpf-next/c/7f6719f7a866
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html