public inbox for linux-kernel@vger.kernel.org
* [PATCH bpf-next v2 0/2] bpf: Disallow BPF_F_LOCK with mixed special fields
@ 2026-01-23  5:56 Leon Hwang
  2026-01-23  5:56 ` [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks Leon Hwang
  2026-01-23  5:56 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add tests to verify BPF_F_LOCK restrictions Leon Hwang
  0 siblings, 2 replies; 6+ messages in thread
From: Leon Hwang @ 2026-01-23  5:56 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Leon Hwang, linux-kernel, linux-kselftest,
	kernel-patches-bot

Disallow combining BPF_F_LOCK with map values that mix bpf_spin_lock and
other special BTF fields, which may lead to undefined behavior. Such cases
are now rejected with -EOPNOTSUPP.

Map update flag validation is centralized in
bpf_map_check_op_flags() and reused across array, hash, local-storage,
and task-storage update paths, with selftests added to verify the new
behavior.
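
For readers skimming the cover letter, the centralized check's semantics can be sketched in plain userspace C. This is an illustrative model only, not the kernel code: the flag values mirror the uapi constants, and the `fields` bitmask stands in for the `map->record` btf_record lookup.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative stand-ins for the uapi flags and for btf_record fields. */
#define BPF_NOEXIST 1
#define BPF_EXIST   2
#define BPF_F_LOCK  4

#define HAS_SPIN_LOCK (1u << 0) /* value contains struct bpf_spin_lock */
#define HAS_OTHER     (1u << 1) /* any other special field: timer, kptr, uptr, ... */

/* Toy model of bpf_map_check_op_flags() after this series; fields
 * describes which special BTF fields the map value carries. */
static int check_op_flags(uint32_t fields, uint64_t flags, uint64_t allowed)
{
	if (flags & ~allowed)
		return -EINVAL;     /* unknown flag bits (new in v2) */
	if ((flags & (BPF_NOEXIST | BPF_EXIST)) == (BPF_NOEXIST | BPF_EXIST))
		return -EINVAL;     /* mutually exclusive existence flags */
	if ((flags & BPF_F_LOCK) && !(fields & HAS_SPIN_LOCK))
		return -EINVAL;     /* lock flag needs a bpf_spin_lock */
	if ((flags & BPF_F_LOCK) && (fields & ~HAS_SPIN_LOCK))
		return -EOPNOTSUPP; /* mixed special fields: now rejected */
	return 0;
}
```

The `allowed` argument is what lets each map type reuse the helper while still restricting which flags it accepts.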

Changes:
v1 -> v2:
* Reject any unknown flag bits in map_flags when updating maps (per AI review).
* v1: https://lore.kernel.org/bpf/20260122153120.69249-1-leon.hwang@linux.dev/

Leon Hwang (2):
  bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag
    checks
  selftests/bpf: Add tests to verify BPF_F_LOCK restrictions

 include/linux/bpf.h                           |  7 ++
 kernel/bpf/arraymap.c                         | 11 ++-
 kernel/bpf/bpf_local_storage.c                |  7 --
 kernel/bpf/bpf_task_storage.c                 |  3 -
 kernel/bpf/hashtab.c                          |  8 +--
 kernel/bpf/syscall.c                          |  4 +-
 .../selftests/bpf/prog_tests/map_lock.c       | 70 +++++++++++++++++++
 .../selftests/bpf/progs/test_map_lock.c       | 31 +++++++-
 8 files changed, 117 insertions(+), 24 deletions(-)

--
2.52.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks
  2026-01-23  5:56 [PATCH bpf-next v2 0/2] bpf: Disallow BPF_F_LOCK with mixed special fields Leon Hwang
@ 2026-01-23  5:56 ` Leon Hwang
  2026-01-28  2:27   ` Kumar Kartikeya Dwivedi
  2026-01-23  5:56 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add tests to verify BPF_F_LOCK restrictions Leon Hwang
  1 sibling, 1 reply; 6+ messages in thread
From: Leon Hwang @ 2026-01-23  5:56 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Leon Hwang, linux-kernel, linux-kselftest,
	kernel-patches-bot

Disallow combining BPF_F_LOCK with map values that contain special BTF
fields other than bpf_spin_lock (e.g. kptr or uptr). Such mixing may lead
to subtle or undefined behavior in map value handling. Reject these
combinations early by returning -EOPNOTSUPP.

Centralize map update flag validation in bpf_map_check_op_flags() and
reuse it across array, hash, local-storage, and task-storage map update
paths. Explicitly reject incompatible BPF_NOEXIST/BPF_EXIST combinations
and invalid BPF_F_LOCK usage to keep flag validation consistent and
eliminate duplicated per-map checks.

Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 include/linux/bpf.h            |  7 +++++++
 kernel/bpf/arraymap.c          | 11 ++++-------
 kernel/bpf/bpf_local_storage.c |  7 -------
 kernel/bpf/bpf_task_storage.c  |  3 ---
 kernel/bpf/hashtab.c           |  8 +++-----
 kernel/bpf/syscall.c           |  4 +++-
 6 files changed, 17 insertions(+), 23 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5936f8e2996f..c5863487ee73 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3935,9 +3935,16 @@ static inline int bpf_map_check_op_flags(struct bpf_map *map, u64 flags, u64 all
 	if ((u32)flags & ~allowed_flags)
 		return -EINVAL;
 
+	/* BPF_NOEXIST and BPF_EXIST are mutually exclusive. */
+	if ((flags & (BPF_NOEXIST | BPF_EXIST)) == (BPF_NOEXIST | BPF_EXIST))
+		return -EINVAL;
+
 	if ((flags & BPF_F_LOCK) && !btf_record_has_field(map->record, BPF_SPIN_LOCK))
 		return -EINVAL;
 
+	if ((flags & BPF_F_LOCK) && btf_record_has_field(map->record, ~BPF_SPIN_LOCK))
+		return -EOPNOTSUPP;
+
 	if (!(flags & BPF_F_CPU) && flags >> 32)
 		return -EINVAL;
 
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 67e9e811de3a..1cff40f109cd 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -366,10 +366,7 @@ static long array_map_update_elem(struct bpf_map *map, void *key, void *value,
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	u32 index = *(u32 *)key;
 	char *val;
-
-	if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST))
-		/* unknown flags */
-		return -EINVAL;
+	int err;
 
 	if (unlikely(index >= array->map.max_entries))
 		/* all elements were pre-allocated, cannot insert a new one */
@@ -379,9 +376,9 @@ static long array_map_update_elem(struct bpf_map *map, void *key, void *value,
 		/* all elements already exist */
 		return -EEXIST;
 
-	if (unlikely((map_flags & BPF_F_LOCK) &&
-		     !btf_record_has_field(map->record, BPF_SPIN_LOCK)))
-		return -EINVAL;
+	err = bpf_map_check_op_flags(map, map_flags, BPF_EXIST | BPF_F_LOCK);
+	if (unlikely(err))
+		return err;
 
 	if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
 		val = this_cpu_ptr(array->pptrs[index & array->index_mask]);
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index e2fe6c32822b..80b50091cbbf 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -493,13 +493,6 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 	unsigned long flags;
 	int err;
 
-	/* BPF_EXIST and BPF_NOEXIST cannot be both set */
-	if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST) ||
-	    /* BPF_F_LOCK can only be used in a value with spin_lock */
-	    unlikely((map_flags & BPF_F_LOCK) &&
-		     !btf_record_has_field(smap->map.record, BPF_SPIN_LOCK)))
-		return ERR_PTR(-EINVAL);
-
 	if (gfp_flags == GFP_KERNEL && (map_flags & ~BPF_F_LOCK) != BPF_NOEXIST)
 		return ERR_PTR(-EINVAL);
 
diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
index a1dc1bf0848a..21d84818e64e 100644
--- a/kernel/bpf/bpf_task_storage.c
+++ b/kernel/bpf/bpf_task_storage.c
@@ -125,9 +125,6 @@ static long bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key,
 	struct pid *pid;
 	int fd, err;
 
-	if ((map_flags & BPF_F_LOCK) && btf_record_has_field(map->record, BPF_UPTR))
-		return -EOPNOTSUPP;
-
 	fd = *(int *)key;
 	pid = pidfd_get_pid(fd, &f_flags);
 	if (IS_ERR(pid))
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3b9d297a53be..2f6ed3e80308 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1093,9 +1093,9 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 	u32 key_size, hash;
 	int ret;
 
-	if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST))
-		/* unknown flags */
-		return -EINVAL;
+	ret = bpf_map_check_op_flags(map, map_flags, BPF_NOEXIST | BPF_EXIST | BPF_F_LOCK);
+	if (unlikely(ret))
+		return ret;
 
 	WARN_ON_ONCE(!bpf_rcu_lock_held());
 
@@ -1107,8 +1107,6 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 	head = &b->head;
 
 	if (unlikely(map_flags & BPF_F_LOCK)) {
-		if (unlikely(!btf_record_has_field(map->record, BPF_SPIN_LOCK)))
-			return -EINVAL;
 		/* find an element without taking the bucket lock */
 		l_old = lookup_nulls_elem_raw(head, hash, key, key_size,
 					      htab->n_buckets);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 3c5c03d43f5f..49e424e5f492 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1780,6 +1780,7 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
 	bpfptr_t uvalue = make_bpfptr(attr->value, uattr.is_kernel);
 	struct bpf_map *map;
 	void *key, *value;
+	u64 allowed_flags;
 	u32 value_size;
 	int err;
 
@@ -1796,7 +1797,8 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
 		goto err_put;
 	}
 
-	err = bpf_map_check_op_flags(map, attr->flags, ~0);
+	allowed_flags = BPF_NOEXIST | BPF_EXIST | BPF_F_LOCK | BPF_F_CPU | BPF_F_ALL_CPUS;
+	err = bpf_map_check_op_flags(map, attr->flags, allowed_flags);
 	if (err)
 		goto err_put;
 
-- 
2.52.0



* [PATCH bpf-next v2 2/2] selftests/bpf: Add tests to verify BPF_F_LOCK restrictions
  2026-01-23  5:56 [PATCH bpf-next v2 0/2] bpf: Disallow BPF_F_LOCK with mixed special fields Leon Hwang
  2026-01-23  5:56 ` [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks Leon Hwang
@ 2026-01-23  5:56 ` Leon Hwang
  1 sibling, 0 replies; 6+ messages in thread
From: Leon Hwang @ 2026-01-23  5:56 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Leon Hwang, linux-kernel, linux-kselftest,
	kernel-patches-bot

Add a tc-based helper program that updates a map value containing both
bpf_spin_lock and bpf_timer and records errors for invalid flag
combinations.

Extend the existing map_lock tests to cover array and hash maps. Verify
that BPF_NOEXIST|BPF_EXIST is rejected (-EINVAL from the syscall path;
via the bpf_map_update_elem() helper, -EEXIST for array maps, which
refuse BPF_NOEXIST before flag validation since all elements pre-exist,
and -EINVAL for hash maps), and that BPF_F_LOCK returns -EOPNOTSUPP when
the value mixes bpf_spin_lock with other special fields.
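
The differing err_exist expectations are worth spelling out. In the in-kernel helper path, the array update answers BPF_NOEXIST with -EEXIST before flag validation runs (every element pre-exists), while the hash path reaches the centralized check first and reports the flag conflict as -EINVAL. A userspace sketch of that ordering, with illustrative constants rather than real kernel code:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative uapi-like flag values and special-field bits. */
#define BPF_NOEXIST 1
#define BPF_EXIST   2
#define BPF_F_LOCK  4

#define HAS_SPIN_LOCK (1u << 0)
#define HAS_TIMER     (1u << 1)

static int check_op_flags(uint32_t fields, uint64_t flags, uint64_t allowed)
{
	if (flags & ~allowed)
		return -EINVAL;
	if ((flags & (BPF_NOEXIST | BPF_EXIST)) == (BPF_NOEXIST | BPF_EXIST))
		return -EINVAL;
	if ((flags & BPF_F_LOCK) && !(fields & HAS_SPIN_LOCK))
		return -EINVAL;
	if ((flags & BPF_F_LOCK) && (fields & ~HAS_SPIN_LOCK))
		return -EOPNOTSUPP;
	return 0;
}

/* Array path: BPF_NOEXIST is rejected with -EEXIST before the
 * centralized flag check even runs, since all elements pre-exist. */
static int array_update(uint32_t fields, uint64_t flags)
{
	if (flags & BPF_NOEXIST)
		return -EEXIST;
	return check_op_flags(fields, flags, BPF_EXIST | BPF_F_LOCK);
}

/* Hash path: both existence flags are legal individually, so the
 * mutual-exclusion check fires first and yields -EINVAL. */
static int hash_update(uint32_t fields, uint64_t flags)
{
	return check_op_flags(fields, flags, BPF_NOEXIST | BPF_EXIST | BPF_F_LOCK);
}
```

Both paths agree on the new mixed-special-field rejection: BPF_F_LOCK on a value with both a spin lock and a timer yields -EOPNOTSUPP either way.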

Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
 .../selftests/bpf/prog_tests/map_lock.c       | 70 +++++++++++++++++++
 .../selftests/bpf/progs/test_map_lock.c       | 31 +++++++-
 2 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/map_lock.c b/tools/testing/selftests/bpf/prog_tests/map_lock.c
index 1d6726f01dd2..aafae8d02e1d 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_lock.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
 #include <network_helpers.h>
+#include "test_map_lock.skel.h"
 
 static void *spin_lock_thread(void *arg)
 {
@@ -90,3 +91,72 @@ void test_map_lock(void)
 close_prog:
 	bpf_object__close(obj);
 }
+
+struct map_value {
+	struct bpf_spin_lock lock;
+	struct bpf_timer timer;
+	__u64 payload;
+};
+
+static void test_map_lock_update_elem(enum bpf_map_type map_type, int err_exist)
+{
+	struct map_value val = {};
+	struct test_map_lock *skel;
+	int prog_fd, err;
+	u32 key = 0;
+	char buff[128] = {};
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = buff,
+		    .data_size_in = sizeof(buff),
+		    .repeat = 1,
+	);
+
+	skel = test_map_lock__open();
+	if (!ASSERT_OK_PTR(skel, "test_map_lock__open"))
+		return;
+
+	bpf_map__set_type(skel->maps.map, map_type);
+
+	err = test_map_lock__load(skel);
+	if (!ASSERT_OK(err, "test_map_lock__load"))
+		goto out;
+
+	err = bpf_map__update_elem(skel->maps.map, &key, sizeof(key), &val, sizeof(val),
+				   BPF_NOEXIST | BPF_EXIST);
+	if (!ASSERT_EQ(err, -EINVAL, "err_exist"))
+		goto out;
+
+	err = bpf_map__update_elem(skel->maps.map, &key, sizeof(key), &val, sizeof(val),
+				   BPF_F_LOCK);
+	if (!ASSERT_EQ(err, -EOPNOTSUPP, "err_lock"))
+		goto out;
+
+	prog_fd = bpf_program__fd(skel->progs.map_update);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
+		goto out;
+
+	ASSERT_EQ(skel->bss->err_exist, err_exist, "err_exist");
+	ASSERT_EQ(skel->bss->err_lock, -EOPNOTSUPP, "err_lock");
+
+out:
+	test_map_lock__destroy(skel);
+}
+
+static void test_array_map_lock_update_elem(void)
+{
+	test_map_lock_update_elem(BPF_MAP_TYPE_ARRAY, -EEXIST);
+}
+
+static void test_hash_map_lock_update_elem(void)
+{
+	test_map_lock_update_elem(BPF_MAP_TYPE_HASH, -EINVAL);
+}
+
+void test_map_lock_flag(void)
+{
+	if (test__start_subtest("array_map"))
+		test_array_map_lock_update_elem();
+	if (test__start_subtest("hash_map"))
+		test_hash_map_lock_update_elem();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_map_lock.c b/tools/testing/selftests/bpf/progs/test_map_lock.c
index 1c02511b73cd..f1b7b741795c 100644
--- a/tools/testing/selftests/bpf/progs/test_map_lock.c
+++ b/tools/testing/selftests/bpf/progs/test_map_lock.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 // Copyright (c) 2019 Facebook
-#include <linux/bpf.h>
+#include "vmlinux.h"
 #include <linux/version.h>
 #include <bpf/bpf_helpers.h>
 
@@ -59,4 +59,33 @@ int bpf_map_lock_test(struct __sk_buff *skb)
 err:
 	return err;
 }
+
+int err_exist;
+int err_lock;
+
+struct map_value {
+	struct bpf_spin_lock lock;
+	struct bpf_timer timer;
+	__u64 payload;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, u32);
+	__type(value, struct map_value);
+	__uint(max_entries, 1);
+} map SEC(".maps");
+
+SEC("tc")
+int map_update(struct __sk_buff *skb)
+{
+	struct map_value val = {};
+	u32 key = 0;
+
+	val.payload = 0xDEADBEEF;
+	err_exist = bpf_map_update_elem(&map, &key, &val, BPF_NOEXIST | BPF_EXIST);
+	err_lock = bpf_map_update_elem(&map, &key, &val, BPF_F_LOCK);
+	return BPF_OK;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.52.0



* Re: [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks
  2026-01-23  5:56 ` [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks Leon Hwang
@ 2026-01-28  2:27   ` Kumar Kartikeya Dwivedi
  2026-01-28 15:22     ` Leon Hwang
  0 siblings, 1 reply; 6+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-01-28  2:27 UTC (permalink / raw)
  To: Leon Hwang
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, linux-kernel, linux-kselftest, kernel-patches-bot

On Fri, 23 Jan 2026 at 06:58, Leon Hwang <leon.hwang@linux.dev> wrote:
>
> Disallow combining BPF_F_LOCK with map values that contain special BTF
> fields other than bpf_spin_lock (e.g. kptr or uptr). Such mixing may lead
> to subtle or undefined behavior in map value handling. Reject these
> combinations early by returning -EOPNOTSUPP.

The commit log is really suboptimal in giving context on why you're doing this.
You should summarize the discussion from [0]; otherwise, unless people
go dig up that thread, they'd have no clue.

Also, I would remove the 'undefined behavior' wording. It's just
semantically different, in that the update doesn't free fields,
but there's no undefined behavior.

  [0]: https://lore.kernel.org/bpf/CAADnVQLib8ebe8cmGRj98YZiArendX8u=dSKNUrUFz6NGq7LRg@mail.gmail.com
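
The "semantically different, not undefined" point can be illustrated with a toy model of what a BPF_F_LOCK update does: only the plain data region is copied under the lock, so a special field such as a kptr is neither replaced nor released. The layout and names below are hypothetical, purely to show the semantic gap the patch closes by rejecting the combination up front.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical value layout: a lock word, a slot a special field
 * (e.g. a kptr) would occupy, and plain payload data. */
struct toy_value {
	uint32_t lock;      /* stand-in for struct bpf_spin_lock */
	uint64_t kptr_slot; /* special field: skipped by the locked copy */
	uint64_t payload;   /* plain data: copied under the lock */
};

/* Toy model of a copy_map_value()-style locked update: special-field
 * regions are skipped, so the old pointer stays in place and the
 * caller's new pointer is silently ignored. The old object is not
 * freed; the semantics differ from a plain update, but nothing is
 * undefined. */
static void locked_copy(struct toy_value *dst, const struct toy_value *src)
{
	/* lock acquire/release elided in this sketch */
	dst->payload = src->payload;
	/* dst->kptr_slot intentionally untouched */
}
```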

Please also increase test coverage for other maps in patch 2. Even if
covering all local storages is not practical, we can definitely do
task local storage.

>
> Centralize map update flag validation in bpf_map_check_op_flags() and
> reuse it across array, hash, local-storage, and task-storage map update
> paths. Explicitly reject incompatible BPF_NOEXIST/BPF_EXIST combinations
> and invalid BPF_F_LOCK usage to keep flag validation consistent and
> eliminate duplicated per-map checks.
>
> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
> ---
>  include/linux/bpf.h            |  7 +++++++
>  kernel/bpf/arraymap.c          | 11 ++++-------
>  kernel/bpf/bpf_local_storage.c |  7 -------
>  kernel/bpf/bpf_task_storage.c  |  3 ---
>  kernel/bpf/hashtab.c           |  8 +++-----
>  kernel/bpf/syscall.c           |  4 +++-
>  6 files changed, 17 insertions(+), 23 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 5936f8e2996f..c5863487ee73 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3935,9 +3935,16 @@ static inline int bpf_map_check_op_flags(struct bpf_map *map, u64 flags, u64 all
>         if ((u32)flags & ~allowed_flags)
>                 return -EINVAL;
>
> +       /* BPF_NOEXIST and BPF_EXIST are mutually exclusive. */
> +       if ((flags & (BPF_NOEXIST | BPF_EXIST)) == (BPF_NOEXIST | BPF_EXIST))
> +               return -EINVAL;
> +
>         if ((flags & BPF_F_LOCK) && !btf_record_has_field(map->record, BPF_SPIN_LOCK))
>                 return -EINVAL;
>
> +       if ((flags & BPF_F_LOCK) && btf_record_has_field(map->record, ~BPF_SPIN_LOCK))
> +               return -EOPNOTSUPP;
> +
>         if (!(flags & BPF_F_CPU) && flags >> 32)
>                 return -EINVAL;
>
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index 67e9e811de3a..1cff40f109cd 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -366,10 +366,7 @@ static long array_map_update_elem(struct bpf_map *map, void *key, void *value,
>         struct bpf_array *array = container_of(map, struct bpf_array, map);
>         u32 index = *(u32 *)key;
>         char *val;
> -
> -       if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST))
> -               /* unknown flags */
> -               return -EINVAL;
> +       int err;
>
>         if (unlikely(index >= array->map.max_entries))
>                 /* all elements were pre-allocated, cannot insert a new one */
> @@ -379,9 +376,9 @@ static long array_map_update_elem(struct bpf_map *map, void *key, void *value,
>                 /* all elements already exist */
>                 return -EEXIST;
>
> -       if (unlikely((map_flags & BPF_F_LOCK) &&
> -                    !btf_record_has_field(map->record, BPF_SPIN_LOCK)))
> -               return -EINVAL;
> +       err = bpf_map_check_op_flags(map, map_flags, BPF_EXIST | BPF_F_LOCK);
> +       if (unlikely(err))
> +               return err;
>
>         if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
>                 val = this_cpu_ptr(array->pptrs[index & array->index_mask]);
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index e2fe6c32822b..80b50091cbbf 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -493,13 +493,6 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>         unsigned long flags;
>         int err;
>
> -       /* BPF_EXIST and BPF_NOEXIST cannot be both set */
> -       if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST) ||
> -           /* BPF_F_LOCK can only be used in a value with spin_lock */
> -           unlikely((map_flags & BPF_F_LOCK) &&
> -                    !btf_record_has_field(smap->map.record, BPF_SPIN_LOCK)))
> -               return ERR_PTR(-EINVAL);
> -
>         if (gfp_flags == GFP_KERNEL && (map_flags & ~BPF_F_LOCK) != BPF_NOEXIST)
>                 return ERR_PTR(-EINVAL);
>
> diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> index a1dc1bf0848a..21d84818e64e 100644
> --- a/kernel/bpf/bpf_task_storage.c
> +++ b/kernel/bpf/bpf_task_storage.c
> @@ -125,9 +125,6 @@ static long bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key,
>         struct pid *pid;
>         int fd, err;
>
> -       if ((map_flags & BPF_F_LOCK) && btf_record_has_field(map->record, BPF_UPTR))
> -               return -EOPNOTSUPP;
> -
>         fd = *(int *)key;
>         pid = pidfd_get_pid(fd, &f_flags);
>         if (IS_ERR(pid))
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 3b9d297a53be..2f6ed3e80308 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -1093,9 +1093,9 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
>         u32 key_size, hash;
>         int ret;
>
> -       if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST))
> -               /* unknown flags */
> -               return -EINVAL;
> +       ret = bpf_map_check_op_flags(map, map_flags, BPF_NOEXIST | BPF_EXIST | BPF_F_LOCK);
> +       if (unlikely(ret))
> +               return ret;
>
>         WARN_ON_ONCE(!bpf_rcu_lock_held());
>
> @@ -1107,8 +1107,6 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
>         head = &b->head;
>
>         if (unlikely(map_flags & BPF_F_LOCK)) {
> -               if (unlikely(!btf_record_has_field(map->record, BPF_SPIN_LOCK)))
> -                       return -EINVAL;
>                 /* find an element without taking the bucket lock */
>                 l_old = lookup_nulls_elem_raw(head, hash, key, key_size,
>                                               htab->n_buckets);
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 3c5c03d43f5f..49e424e5f492 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1780,6 +1780,7 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
>         bpfptr_t uvalue = make_bpfptr(attr->value, uattr.is_kernel);
>         struct bpf_map *map;
>         void *key, *value;
> +       u64 allowed_flags;
>         u32 value_size;
>         int err;
>
> @@ -1796,7 +1797,8 @@ static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
>                 goto err_put;
>         }
>
> -       err = bpf_map_check_op_flags(map, attr->flags, ~0);
> +       allowed_flags = BPF_NOEXIST | BPF_EXIST | BPF_F_LOCK | BPF_F_CPU | BPF_F_ALL_CPUS;
> +       err = bpf_map_check_op_flags(map, attr->flags, allowed_flags);
>         if (err)
>                 goto err_put;
>
> --
> 2.52.0
>
>


* Re: [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks
  2026-01-28  2:27   ` Kumar Kartikeya Dwivedi
@ 2026-01-28 15:22     ` Leon Hwang
  2026-02-02  5:54       ` Leon Hwang
  0 siblings, 1 reply; 6+ messages in thread
From: Leon Hwang @ 2026-01-28 15:22 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, linux-kernel, linux-kselftest, kernel-patches-bot



On 2026/1/28 10:27, Kumar Kartikeya Dwivedi wrote:
> On Fri, 23 Jan 2026 at 06:58, Leon Hwang <leon.hwang@linux.dev> wrote:
>>
>> Disallow combining BPF_F_LOCK with map values that contain special BTF
>> fields other than bpf_spin_lock (e.g. kptr or uptr). Such mixing may lead
>> to subtle or undefined behavior in map value handling. Reject these
>> combinations early by returning -EOPNOTSUPP.
> 
> The commit log is really suboptimal in giving context on why you're doing this.
> You should summarize the discussion from [0], otherwise unless people
> go dig that thread they'd have no clue.
> 
> Also, I would remove the 'undefined behavior' wording. It's just
> semantically different, in that the update doesn't free fields,
> but there's no undefined behavior.
> 
>   [0]: https://lore.kernel.org/bpf/CAADnVQLib8ebe8cmGRj98YZiArendX8u=dSKNUrUFz6NGq7LRg@mail.gmail.com
> 

Agreed.

The commit message needs more context. I'll summarize the prior
discussion and clearly explain why the BPF_F_LOCK + special-field
combination is being disallowed, without using “undefined behavior” wording.

> Please also increase test coverage for other maps in patch 2. Even if
> covering all local storages is not practical, we can definitely do
> task local storage.
> 
Ack.

I'll add a test covering the task local storage change.

Thanks,
Leon

[...]



* Re: [PATCH bpf-next v2 1/2] bpf: Disallow BPF_F_LOCK with mixed special fields and centralize flag checks
  2026-01-28 15:22     ` Leon Hwang
@ 2026-02-02  5:54       ` Leon Hwang
  0 siblings, 0 replies; 6+ messages in thread
From: Leon Hwang @ 2026-02-02  5:54 UTC (permalink / raw)
  To: Martin KaFai Lau, Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Eduard Zingerman, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Shuah Khan,
	linux-kernel, linux-kselftest, kernel-patches-bot



On 28/1/26 23:22, Leon Hwang wrote:
> 
> 
> On 2026/1/28 10:27, Kumar Kartikeya Dwivedi wrote:
>> On Fri, 23 Jan 2026 at 06:58, Leon Hwang <leon.hwang@linux.dev> wrote:
>>>
>>> Disallow combining BPF_F_LOCK with map values that contain special BTF
>>> fields other than bpf_spin_lock (e.g. kptr or uptr). Such mixing may lead
>>> to subtle or undefined behavior in map value handling. Reject these
>>> combinations early by returning -EOPNOTSUPP.
>>
>> The commit log is really suboptimal in giving context on why you're doing this.
>> You should summarize the discussion from [0], otherwise unless people
>> go dig that thread they'd have no clue.
>>
>> Also, I would remove the 'undefined behavior' wording. It's just
>> semantically different, in that the update doesn't free fields,
>> but there's no undefined behavior.
>>
>>   [0]: https://lore.kernel.org/bpf/CAADnVQLib8ebe8cmGRj98YZiArendX8u=dSKNUrUFz6NGq7LRg@mail.gmail.com
>>

Hi Martin,

Do you recall the original reasoning for disallowing BPF_F_LOCK together
with BPF_UPTR in 'bpf_task_storage.c::bpf_pid_task_storage_update_elem()'?

I didn’t find an explicit explanation in the commit message of
ba512b00e5ef (“bpf: Add uptr support in the map_value of the task local
storage”), and I’m trying to better understand the underlying concern.

This is in the context of addressing Alexei’s comment in the linked
discussion: I’d like to clearly articulate the risks of mixing
BPF_F_LOCK with other special fields, rather than relying on vague
phrasing.

Thanks,
Leon



