public inbox for bpf@vger.kernel.org
* [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key
@ 2022-10-20 16:07 Dave Marchevsky
  2022-10-20 16:07 ` [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args Dave Marchevsky
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Dave Marchevsky @ 2022-10-20 16:07 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
	Yonghong Song, Kumar Kartikeya Dwivedi, Dave Marchevsky

This patch adds support for the following pattern:

  struct some_data *data = bpf_ringbuf_reserve(&ringbuf, sizeof(struct some_data), 0);
  if (!data)
    return;
  bpf_map_lookup_elem(&another_map, &data->some_field);
  bpf_ringbuf_submit(data);

Currently the verifier does not consider bpf_ringbuf_reserve's
PTR_TO_MEM | MEM_ALLOC ret type a valid key input to bpf_map_lookup_elem.
Since PTR_TO_MEM is by definition a valid region of memory, it is safe
to use it as a key for lookups.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
v2->v3: lore.kernel.org/bpf/20220914123600.927632-1-davemarchevsky@fb.com

  * Add Yonghong ack, rebase

v1->v2: lore.kernel.org/bpf/20220912101106.2765921-1-davemarchevsky@fb.com

  * Move test changes into separate patch - patch 2 in this series.
    (Kumar, Yonghong). That patch's changelog enumerates specific
    changes from v1
  * Remove PTR_TO_MEM addition from this patch - patch 1 (Yonghong)
    * I don't have a usecase for PTR_TO_MEM w/o MEM_ALLOC
  * Add "if (!data)" error check to example pattern in this patch
    (Yonghong)
  * Remove patch 2 from v1's series, which removed map_key_value_types
    as it was more-or-less duplicate of mem_types
    * Now that PTR_TO_MEM isn't added here, there are more differences
      between map_key_value_types and mem_types, and no usecase for
      PTR_TO_BUF, so drop it for now.

 kernel/bpf/verifier.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..97351ae3e7a7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5641,6 +5641,7 @@ static const struct bpf_reg_types map_key_value_types = {
 		PTR_TO_PACKET_META,
 		PTR_TO_MAP_KEY,
 		PTR_TO_MAP_VALUE,
+		PTR_TO_MEM | MEM_ALLOC,
 	},
 };
 
-- 
2.30.2



* [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args
  2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
@ 2022-10-20 16:07 ` Dave Marchevsky
  2022-10-21 23:04   ` Andrii Nakryiko
  2022-10-20 16:07 ` [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops Dave Marchevsky
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Dave Marchevsky @ 2022-10-20 16:07 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
	Yonghong Song, Kumar Kartikeya Dwivedi, Dave Marchevsky

After the previous patch, which added the PTR_TO_MEM | MEM_ALLOC type to
map_key_value_types, the only differences between the map_key_value_types
and mem_types sets are PTR_TO_BUF and PTR_TO_MEM, which are in the latter
set but not the former.

Helpers which expect ARG_PTR_TO_MAP_KEY or ARG_PTR_TO_MAP_VALUE
already effectively expect a valid blob of arbitrary memory that isn't
necessarily explicitly associated with a map. When validating a
PTR_TO_MAP_{KEY,VALUE} arg, the verifier expects meta->map_ptr to have
already been set, either by an earlier ARG_CONST_MAP_PTR arg, or custom
logic like that in process_timer_func or process_kptr_func.

So let's get rid of map_key_value_types and just use mem_types for those
args.

This has the effect of adding PTR_TO_BUF and PTR_TO_MEM to the set of
compatible types for ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE.

PTR_TO_BUF is used by various bpf_iter implementations to represent a
chunk of valid r/w memory in ctx args for iter prog.

PTR_TO_MEM is used by networking, tracing, and ringbuf helpers to
represent a chunk of valid memory. The PTR_TO_MEM | MEM_ALLOC
type added in the previous commit is specific to ringbuf helpers.
Presence or absence of MEM_ALLOC doesn't change the validity of using
PTR_TO_MEM as a map_{key,val} input.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
v1 -> v5: lore.kernel.org/bpf/20220912101106.2765921-2-davemarchevsky@fb.com

  * This patch was dropped in v2 as I had no concrete usecase for
    PTR_TO_BUF and PTR_TO_MEM w/o MEM_ALLOC. Andrii encouraged me to
    re-add the patch as we both share desire to eventually cleanup all
    these separate "valid chunk of memory" types. Starting to treat them
    similarly is a good step in that direction.
    * A usecase for PTR_TO_BUF is now demonstrated in patch 4 of this
      series.
    * PTR_TO_MEM w/o MEM_ALLOC is returned by bpf_{this,per}_cpu_ptr
      helpers via RET_PTR_TO_MEM_OR_BTF_ID, but in both cases the return
      type is also tagged MEM_RDONLY, which map helpers don't currently
      accept (see patch 4 summary). So no selftest for this specific
      case is added in the series, but by logic in this patch summary
      there's no reason to treat it differently.

 kernel/bpf/verifier.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 97351ae3e7a7..ddc1452cf023 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5634,17 +5634,6 @@ struct bpf_reg_types {
 	u32 *btf_id;
 };
 
-static const struct bpf_reg_types map_key_value_types = {
-	.types = {
-		PTR_TO_STACK,
-		PTR_TO_PACKET,
-		PTR_TO_PACKET_META,
-		PTR_TO_MAP_KEY,
-		PTR_TO_MAP_VALUE,
-		PTR_TO_MEM | MEM_ALLOC,
-	},
-};
-
 static const struct bpf_reg_types sock_types = {
 	.types = {
 		PTR_TO_SOCK_COMMON,
@@ -5711,8 +5700,8 @@ static const struct bpf_reg_types dynptr_types = {
 };
 
 static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
-	[ARG_PTR_TO_MAP_KEY]		= &map_key_value_types,
-	[ARG_PTR_TO_MAP_VALUE]		= &map_key_value_types,
+	[ARG_PTR_TO_MAP_KEY]		= &mem_types,
+	[ARG_PTR_TO_MAP_VALUE]		= &mem_types,
 	[ARG_CONST_SIZE]		= &scalar_types,
 	[ARG_CONST_SIZE_OR_ZERO]	= &scalar_types,
 	[ARG_CONST_ALLOC_SIZE_OR_ZERO]	= &scalar_types,
-- 
2.30.2



* [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops
  2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
  2022-10-20 16:07 ` [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args Dave Marchevsky
@ 2022-10-20 16:07 ` Dave Marchevsky
  2022-10-21 23:04   ` Andrii Nakryiko
  2022-10-20 16:07 ` [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test Dave Marchevsky
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Dave Marchevsky @ 2022-10-20 16:07 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
	Yonghong Song, Kumar Kartikeya Dwivedi, Dave Marchevsky

Add a test_ringbuf_map_key test prog, borrowing heavily from extant
test_ringbuf.c. The program tries to use the result of
bpf_ringbuf_reserve as map_key, which was not possible before previous
commits in this series. The test runner added to prog_tests/ringbuf.c
verifies that the program loads and does basic sanity checks to confirm
that it runs as expected.

Also, refactor test_ringbuf such that runners for existing test_ringbuf
and newly-added test_ringbuf_map_key are subtests of 'ringbuf' top-level
test.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
v4->v5: lore.kernel.org/bpf/20220923060614.4025371-2-davemarchevsky@fb.com

* Fix some nits (Andrii)
  * migrating prog from fentry -> ksyscall wasn't done as lskel doesn't
    support the latter. Talked to Andrii about it offlist, he's fine with it.

v3->v4: lore.kernel.org/bpf/20220922142208.3009672-2-davemarchevsky@fb.com

* Fix some nits (Yonghong)
  * make subtest runner functions static
  * don't goto cleanup if -EDONE check fails
  * add 'workaround' to comment in test to ease future grepping
* Add Yonghong ack

v2->v3: lore.kernel.org/bpf/20220914123600.927632-2-davemarchevsky@fb.com

* Test that ring_buffer__poll returns -EDONE (Alexei)

v1->v2: lore.kernel.org/bpf/20220912101106.2765921-1-davemarchevsky@fb.com

* Actually run the program instead of just loading (Yonghong)
* Add a bpf_map_update_elem call to the test (Yonghong)
* Refactor runner such that existing test and newly-added test are
  subtests of 'ringbuf' top-level test (Yonghong)
* Remove unused globals in test prog (Yonghong)

 tools/testing/selftests/bpf/Makefile          |  8 ++-
 .../selftests/bpf/prog_tests/ringbuf.c        | 66 ++++++++++++++++-
 .../bpf/progs/test_ringbuf_map_key.c          | 70 +++++++++++++++++++
 3 files changed, 140 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index e6cf21fad69f..79edef1dbda4 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -359,9 +359,11 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h	\
 		test_usdt.skel.h
 
-LSKELS := fentry_test.c fexit_test.c fexit_sleep.c \
-	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \
-	map_ptr_kern.c core_kern.c core_kern_overflow.c
+LSKELS := fentry_test.c fexit_test.c fexit_sleep.c atomics.c 		\
+	trace_printk.c trace_vprintk.c map_ptr_kern.c 			\
+	core_kern.c core_kern_overflow.c test_ringbuf.c			\
+	test_ringbuf_map_key.c
+
 # Generate both light skeleton and libbpf skeleton for these
 LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \
 	kfunc_call_test_subprog.c
diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
index 9a80fe8a6427..ac104dc652e3 100644
--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
@@ -13,6 +13,7 @@
 #include <linux/perf_event.h>
 #include <linux/ring_buffer.h>
 #include "test_ringbuf.lskel.h"
+#include "test_ringbuf_map_key.lskel.h"
 
 #define EDONE 7777
 
@@ -58,6 +59,7 @@ static int process_sample(void *ctx, void *data, size_t len)
 	}
 }
 
+static struct test_ringbuf_map_key_lskel *skel_map_key;
 static struct test_ringbuf_lskel *skel;
 static struct ring_buffer *ringbuf;
 
@@ -81,7 +83,7 @@ static void *poll_thread(void *input)
 	return (void *)(long)ring_buffer__poll(ringbuf, timeout);
 }
 
-void test_ringbuf(void)
+static void ringbuf_subtest(void)
 {
 	const size_t rec_sz = BPF_RINGBUF_HDR_SZ + sizeof(struct sample);
 	pthread_t thread;
@@ -297,3 +299,65 @@ void test_ringbuf(void)
 	ring_buffer__free(ringbuf);
 	test_ringbuf_lskel__destroy(skel);
 }
+
+static int process_map_key_sample(void *ctx, void *data, size_t len)
+{
+	struct sample *s;
+	int err, val;
+
+	s = data;
+	switch (s->seq) {
+	case 1:
+		ASSERT_EQ(s->value, 42, "sample_value");
+		err = bpf_map_lookup_elem(skel_map_key->maps.hash_map.map_fd,
+					  s, &val);
+		ASSERT_OK(err, "hash_map bpf_map_lookup_elem");
+		ASSERT_EQ(val, 1, "hash_map val");
+		return -EDONE;
+	default:
+		return 0;
+	}
+}
+
+static void ringbuf_map_key_subtest(void)
+{
+	int err;
+
+	skel_map_key = test_ringbuf_map_key_lskel__open();
+	if (!ASSERT_OK_PTR(skel_map_key, "test_ringbuf_map_key_lskel__open"))
+		return;
+
+	skel_map_key->maps.ringbuf.max_entries = getpagesize();
+	skel_map_key->bss->pid = getpid();
+
+	err = test_ringbuf_map_key_lskel__load(skel_map_key);
+	if (!ASSERT_OK(err, "test_ringbuf_map_key_lskel__load"))
+		goto cleanup;
+
+	ringbuf = ring_buffer__new(skel_map_key->maps.ringbuf.map_fd,
+				   process_map_key_sample, NULL, NULL);
+	if (!ASSERT_OK_PTR(ringbuf, "ring_buffer__new"))
+		goto cleanup;
+
+	err = test_ringbuf_map_key_lskel__attach(skel_map_key);
+	if (!ASSERT_OK(err, "test_ringbuf_map_key_lskel__attach"))
+		goto cleanup_ringbuf;
+
+	syscall(__NR_getpgid);
+	ASSERT_EQ(skel_map_key->bss->seq, 1, "skel_map_key->bss->seq");
+	err = ring_buffer__poll(ringbuf, -1);
+	ASSERT_EQ(err, -EDONE, "ring_buffer__poll");
+
+cleanup_ringbuf:
+	ring_buffer__free(ringbuf);
+cleanup:
+	test_ringbuf_map_key_lskel__destroy(skel_map_key);
+}
+
+void test_ringbuf(void)
+{
+	if (test__start_subtest("ringbuf"))
+		ringbuf_subtest();
+	if (test__start_subtest("ringbuf_map_key"))
+		ringbuf_map_key_subtest();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c
new file mode 100644
index 000000000000..2760bf60d05a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct sample {
+	int pid;
+	int seq;
+	long value;
+	char comm[16];
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+} ringbuf SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1000);
+	__type(key, struct sample);
+	__type(value, int);
+} hash_map SEC(".maps");
+
+/* inputs */
+int pid = 0;
+
+/* inner state */
+long seq = 0;
+
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
+int test_ringbuf_mem_map_key(void *ctx)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+	struct sample *sample, sample_copy;
+	int *lookup_val;
+
+	if (cur_pid != pid)
+		return 0;
+
+	sample = bpf_ringbuf_reserve(&ringbuf, sizeof(*sample), 0);
+	if (!sample)
+		return 0;
+
+	sample->pid = pid;
+	bpf_get_current_comm(sample->comm, sizeof(sample->comm));
+	sample->seq = ++seq;
+	sample->value = 42;
+
+	/* test using 'sample' (PTR_TO_MEM | MEM_ALLOC) as map key arg
+	 */
+	lookup_val = (int *)bpf_map_lookup_elem(&hash_map, sample);
+
+	/* workaround - memcpy is necessary so that verifier doesn't
+	 * complain with:
+	 *   verifier internal error: more than one arg with ref_obj_id R3
+	 * when trying to do bpf_map_update_elem(&hash_map, sample, &sample->seq, BPF_ANY);
+	 *
+	 * Since bpf_map_lookup_elem above uses 'sample' as key, test using
+	 * sample field as value below
+	 */
+	__builtin_memcpy(&sample_copy, sample, sizeof(struct sample));
+	bpf_map_update_elem(&hash_map, &sample_copy, &sample->seq, BPF_ANY);
+
+	bpf_ringbuf_submit(sample, 0);
+	return 0;
+}
-- 
2.30.2



* [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test
  2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
  2022-10-20 16:07 ` [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args Dave Marchevsky
  2022-10-20 16:07 ` [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops Dave Marchevsky
@ 2022-10-20 16:07 ` Dave Marchevsky
  2022-10-21 23:04   ` Andrii Nakryiko
  2022-10-21 23:04 ` [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Andrii Nakryiko
  2022-10-22  2:30 ` patchwork-bot+netdevbpf
  4 siblings, 1 reply; 10+ messages in thread
From: Dave Marchevsky @ 2022-10-20 16:07 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Kernel Team,
	Yonghong Song, Kumar Kartikeya Dwivedi, Dave Marchevsky

Modify iter prog in existing bpf_iter_bpf_array_map.c, which currently
dumps arraymap key/val, to also do a write of (val, key) into a
newly-added hashmap. Confirm that the write succeeds as expected by
modifying the userspace runner program.

Before a change added in an earlier commit - considering PTR_TO_BUF reg
a valid input to helpers which expect MAP_{KEY,VAL} - the verifier
would've rejected this prog change due to type mismatch. Since using
current iter's key/val to access a separate map is a reasonable usecase,
let's add support for it.

Note that the test prog cannot directly write (val, key) into hashmap
via bpf_map_update_elem when both come from iter context because key is
marked MEM_RDONLY. This is due to bpf_map_update_elem - and other basic
map helpers - taking ARG_PTR_TO_MAP_{KEY,VALUE} w/o MEM_RDONLY type
flag. bpf_map_{lookup,update,delete}_elem don't modify their
input key/val so it should be possible to tag their args READONLY, but
due to the ubiquitous use of these helpers and verifier checks for
type == MAP_VALUE, such a change is nontrivial and seems better to
address in a followup series.

Also fixup some 'goto's in test runner's map checking loop.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 .../selftests/bpf/prog_tests/bpf_iter.c       | 20 ++++++++++++------
 .../bpf/progs/bpf_iter_bpf_array_map.c        | 21 ++++++++++++++++++-
 2 files changed, 34 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index c39d40f4b268..6f8ed61fc4b4 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -941,10 +941,10 @@ static void test_bpf_array_map(void)
 {
 	__u64 val, expected_val = 0, res_first_val, first_val = 0;
 	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
-	__u32 expected_key = 0, res_first_key;
+	__u32 key, expected_key = 0, res_first_key;
+	int err, i, map_fd, hash_fd, iter_fd;
 	struct bpf_iter_bpf_array_map *skel;
 	union bpf_iter_link_info linfo;
-	int err, i, map_fd, iter_fd;
 	struct bpf_link *link;
 	char buf[64] = {};
 	int len, start;
@@ -1001,12 +1001,20 @@ static void test_bpf_array_map(void)
 	if (!ASSERT_EQ(skel->bss->val_sum, expected_val, "val_sum"))
 		goto close_iter;
 
+	hash_fd = bpf_map__fd(skel->maps.hashmap1);
 	for (i = 0; i < bpf_map__max_entries(skel->maps.arraymap1); i++) {
 		err = bpf_map_lookup_elem(map_fd, &i, &val);
-		if (!ASSERT_OK(err, "map_lookup"))
-			goto out;
-		if (!ASSERT_EQ(i, val, "invalid_val"))
-			goto out;
+		if (!ASSERT_OK(err, "map_lookup arraymap1"))
+			goto close_iter;
+		if (!ASSERT_EQ(i, val, "invalid_val arraymap1"))
+			goto close_iter;
+
+		val = i + 4;
+		err = bpf_map_lookup_elem(hash_fd, &val, &key);
+		if (!ASSERT_OK(err, "map_lookup hashmap1"))
+			goto close_iter;
+		if (!ASSERT_EQ(key, val - 4, "invalid_val hashmap1"))
+			goto close_iter;
 	}
 
 close_iter:
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_array_map.c b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_array_map.c
index 6286023fd62b..c5969ca6f26b 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_bpf_array_map.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_bpf_array_map.c
@@ -19,13 +19,20 @@ struct {
 	__type(value, __u64);
 } arraymap1 SEC(".maps");
 
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 10);
+	__type(key, __u64);
+	__type(value, __u32);
+} hashmap1 SEC(".maps");
+
 __u32 key_sum = 0;
 __u64 val_sum = 0;
 
 SEC("iter/bpf_map_elem")
 int dump_bpf_array_map(struct bpf_iter__bpf_map_elem *ctx)
 {
-	__u32 *key = ctx->key;
+	__u32 *hmap_val, *key = ctx->key;
 	__u64 *val = ctx->value;
 
 	if (key == (void *)0 || val == (void *)0)
@@ -35,6 +42,18 @@ int dump_bpf_array_map(struct bpf_iter__bpf_map_elem *ctx)
 	bpf_seq_write(ctx->meta->seq, val, sizeof(__u64));
 	key_sum += *key;
 	val_sum += *val;
+
+	/* workaround - It's necessary to do this convoluted (val, key)
+	 * write into hashmap1, instead of simply doing
+	 *   bpf_map_update_elem(&hashmap1, val, key, BPF_ANY);
+	 * because key has MEM_RDONLY flag and bpf_map_update_elem expects
+	 * types without this flag
+	 */
+	bpf_map_update_elem(&hashmap1, val, val, BPF_ANY);
+	hmap_val = bpf_map_lookup_elem(&hashmap1, val);
+	if (hmap_val)
+		*hmap_val = *key;
+
 	*val = *key;
 	return 0;
 }
-- 
2.30.2



* Re: [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key
  2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
                   ` (2 preceding siblings ...)
  2022-10-20 16:07 ` [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test Dave Marchevsky
@ 2022-10-21 23:04 ` Andrii Nakryiko
  2022-10-22  2:30 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2022-10-21 23:04 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Kernel Team, Yonghong Song, Kumar Kartikeya Dwivedi

On Thu, Oct 20, 2022 at 9:07 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> This patch adds support for the following pattern:
>
>   struct some_data *data = bpf_ringbuf_reserve(&ringbuf, sizeof(struct some_data), 0);
>   if (!data)
>     return;
>   bpf_map_lookup_elem(&another_map, &data->some_field);
>   bpf_ringbuf_submit(data);
>
> Currently the verifier does not consider bpf_ringbuf_reserve's
> PTR_TO_MEM | MEM_ALLOC ret type a valid key input to bpf_map_lookup_elem.
> Since PTR_TO_MEM is by definition a valid region of memory, it is safe
> to use it as a key for lookups.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> Acked-by: Yonghong Song <yhs@fb.com>
> ---

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>


> v2->v3: lore.kernel.org/bpf/20220914123600.927632-1-davemarchevsky@fb.com
>
>   * Add Yonghong ack, rebase
>
> v1->v2: lore.kernel.org/bpf/20220912101106.2765921-1-davemarchevsky@fb.com
>
>   * Move test changes into separate patch - patch 2 in this series.
>     (Kumar, Yonghong). That patch's changelog enumerates specific
>     changes from v1
>   * Remove PTR_TO_MEM addition from this patch - patch 1 (Yonghong)
>     * I don't have a usecase for PTR_TO_MEM w/o MEM_ALLOC
>   * Add "if (!data)" error check to example pattern in this patch
>     (Yonghong)
>   * Remove patch 2 from v1's series, which removed map_key_value_types
>     as it was more-or-less duplicate of mem_types
>     * Now that PTR_TO_MEM isn't added here, more differences between
>       map_key_value_types and mem_types, and no usecase for PTR_TO_BUF,
>       so drop for now.
>
>  kernel/bpf/verifier.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6f6d2d511c06..97351ae3e7a7 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5641,6 +5641,7 @@ static const struct bpf_reg_types map_key_value_types = {
>                 PTR_TO_PACKET_META,
>                 PTR_TO_MAP_KEY,
>                 PTR_TO_MAP_VALUE,
> +               PTR_TO_MEM | MEM_ALLOC,
>         },
>  };
>
> --
> 2.30.2
>


* Re: [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args
  2022-10-20 16:07 ` [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args Dave Marchevsky
@ 2022-10-21 23:04   ` Andrii Nakryiko
  2022-10-22  2:26     ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2022-10-21 23:04 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Kernel Team, Yonghong Song, Kumar Kartikeya Dwivedi

On Thu, Oct 20, 2022 at 9:07 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> After the previous patch, which added PTR_TO_MEM | MEM_ALLOC type
> map_key_value_types, the only difference between map_key_value_types and
> mem_types sets is PTR_TO_BUF and PTR_TO_MEM, which are in the latter set
> but not the former.
>
> Helpers which expect ARG_PTR_TO_MAP_KEY or ARG_PTR_TO_MAP_VALUE
> already effectively expect a valid blob of arbitrary memory that isn't
> necessarily explicitly associated with a map. When validating a
> PTR_TO_MAP_{KEY,VALUE} arg, the verifier expects meta->map_ptr to have
> already been set, either by an earlier ARG_CONST_MAP_PTR arg, or custom
> logic like that in process_timer_func or process_kptr_func.
>
> So let's get rid of map_key_value_types and just use mem_types for those
> args.
>
> This has the effect of adding PTR_TO_BUF and PTR_TO_MEM to the set of
> compatible types for ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE.
>
> PTR_TO_BUF is used by various bpf_iter implementations to represent a
> chunk of valid r/w memory in ctx args for iter prog.
>
> PTR_TO_MEM is used by networking, tracing, and ringbuf helpers to
> represent a chunk of valid memory. The PTR_TO_MEM | MEM_ALLOC
> type added in previous commmit is specific to ringbuf helpers.

typo: s/commmit/commit/ (but not worth reposting just to fix this)

btw, I have a strong desire to change PTR_TO_MEM | MEM_ALLOC into its
own PTR_TO_RINGBUF_RECORD (or something less verbose), it's very
confusing that "MEM_ALLOC" is very crucially *a ringbuf record*
pointer. Can't be anything else, but name won't suggest this, we'll
trip ourselves over this in the future.

> Presence or absence of MEM_ALLOC doesn't change the validity of using
> PTR_TO_MEM as a map_{key,val} input.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
> v1 -> v5: lore.kernel.org/bpf/20220912101106.2765921-2-davemarchevsky@fb.com
>
>   * This patch was dropped in v2 as I had no concrete usecase for
>     PTR_TO_BUF and PTR_TO_MEM w/o MEM_ALLOC. Andrii encouraged me to
>     re-add the patch as we both share desire to eventually cleanup all
>     these separate "valid chunk of memory" types. Starting to treat them
>     similarly is a good step in that direction.

Yep, 100% agree. We should try to generalize code and types for
conceptually similar things to make things a bit more manageable. As
another example, seems like ARG_PTR_TO_MAP_KEY and
ARG_PTR_TO_MAP_VALUE handling inside check_func_arg() is basically
identical, we just need to pass meta->raw_mode = false for
ARG_PTR_TO_MAP_KEY case to mark "read-only" operation. Something for
future clean ups, though.

This patch looks great, thanks!

Acked-by: Andrii Nakryiko <andrii@kernel.org>



>     * A usecase for PTR_TO_BUF is now demonstrated in patch 4 of this
>       series.
>     * PTR_TO_MEM w/o MEM_ALLOC is returned by bpf_{this,per}_cpu_ptr
>       helpers via RET_PTR_TO_MEM_OR_BTF_ID, but in both cases the return
>       type is also tagged MEM_RDONLY, which map helpers don't currently
>       accept (see patch 4 summary). So no selftest for this specific
>       case is added in the series, but by logic in this patch summary
>       there's no reason to treat it differently.
>
>  kernel/bpf/verifier.c | 15 ++-------------
>  1 file changed, 2 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 97351ae3e7a7..ddc1452cf023 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5634,17 +5634,6 @@ struct bpf_reg_types {
>         u32 *btf_id;
>  };
>
> -static const struct bpf_reg_types map_key_value_types = {
> -       .types = {
> -               PTR_TO_STACK,
> -               PTR_TO_PACKET,
> -               PTR_TO_PACKET_META,
> -               PTR_TO_MAP_KEY,
> -               PTR_TO_MAP_VALUE,
> -               PTR_TO_MEM | MEM_ALLOC,
> -       },
> -};
> -
>  static const struct bpf_reg_types sock_types = {
>         .types = {
>                 PTR_TO_SOCK_COMMON,
> @@ -5711,8 +5700,8 @@ static const struct bpf_reg_types dynptr_types = {
>  };
>
>  static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
> -       [ARG_PTR_TO_MAP_KEY]            = &map_key_value_types,
> -       [ARG_PTR_TO_MAP_VALUE]          = &map_key_value_types,
> +       [ARG_PTR_TO_MAP_KEY]            = &mem_types,
> +       [ARG_PTR_TO_MAP_VALUE]          = &mem_types,
>         [ARG_CONST_SIZE]                = &scalar_types,
>         [ARG_CONST_SIZE_OR_ZERO]        = &scalar_types,
>         [ARG_CONST_ALLOC_SIZE_OR_ZERO]  = &scalar_types,
> --
> 2.30.2
>


* Re: [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops
  2022-10-20 16:07 ` [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops Dave Marchevsky
@ 2022-10-21 23:04   ` Andrii Nakryiko
  0 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2022-10-21 23:04 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Kernel Team, Yonghong Song, Kumar Kartikeya Dwivedi

On Thu, Oct 20, 2022 at 9:07 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> Add a test_ringbuf_map_key test prog, borrowing heavily from extant
> test_ringbuf.c. The program tries to use the result of
> bpf_ringbuf_reserve as map_key, which was not possible before previous
> commits in this series. The test runner added to prog_tests/ringbuf.c
> verifies that the program loads and does basic sanity checks to confirm
> that it runs as expected.
>
> Also, refactor test_ringbuf such that runners for existing test_ringbuf
> and newly-added test_ringbuf_map_key are subtests of 'ringbuf' top-level
> test.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> Acked-by: Yonghong Song <yhs@fb.com>
> ---

LGTM.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> v4->v5: lore.kernel.org/bpf/20220923060614.4025371-2-davemarchevsky@fb.com
>
> * Fix some nits (Andrii)
>   * migrating prog from fentry -> ksyscall wasn't done as lskel doesn't
>     support the latter. Talked to Andrii about it offlist, he's fine with it.
>
> v3->v4: lore.kernel.org/bpf/20220922142208.3009672-2-davemarchevsky@fb.com
>
> * Fix some nits (Yonghong)
>   * make subtest runner functions static
>   * don't goto cleanup if -EDONE check fails
>   * add 'workaround' to comment in test to ease future grepping
> * Add Yonghong ack
>
> v2->v3: lore.kernel.org/bpf/20220914123600.927632-2-davemarchevsky@fb.com
>
> * Test that ring_buffer__poll returns -EDONE (Alexei)
>
> v1->v2: lore.kernel.org/bpf/20220912101106.2765921-1-davemarchevsky@fb.com
>
> * Actually run the program instead of just loading (Yonghong)
> * Add a bpf_map_update_elem call to the test (Yonghong)
> * Refactor runner such that existing test and newly-added test are
>   subtests of 'ringbuf' top-level test (Yonghong)
> * Remove unused globals in test prog (Yonghong)
>
>  tools/testing/selftests/bpf/Makefile          |  8 ++-
>  .../selftests/bpf/prog_tests/ringbuf.c        | 66 ++++++++++++++++-
>  .../bpf/progs/test_ringbuf_map_key.c          | 70 +++++++++++++++++++
>  3 files changed, 140 insertions(+), 4 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c
>
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index e6cf21fad69f..79edef1dbda4 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -359,9 +359,11 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h              \
>                 test_subskeleton.skel.h test_subskeleton_lib.skel.h     \
>                 test_usdt.skel.h

[...]

> diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c
> new file mode 100644
> index 000000000000..2760bf60d05a
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c
> @@ -0,0 +1,70 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
> +
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +#include "bpf_misc.h"
> +
> +char _license[] SEC("license") = "GPL";
> +
> +struct sample {
> +       int pid;
> +       int seq;
> +       long value;
> +       char comm[16];
> +};
> +
> +struct {
> +       __uint(type, BPF_MAP_TYPE_RINGBUF);

btw, libbpf is smart enough now to auto-fix ringbuf size, so you could
have used __uint(max_entries, 4096) and that would work even on
architectures that have 64KB pages. Just FYI.
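
Concretely, a definition relying on that libbpf fix-up could look like this (a sketch; the map name matches the test's, and the comment reflects my understanding of libbpf's behavior):

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          /* libbpf rounds max_entries up to a multiple of the kernel
           * page size at load time, so 4096 works even on systems
           * with 64KB pages.
           */
          __uint(max_entries, 4096);
  } ringbuf SEC(".maps");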

> +} ringbuf SEC(".maps");
> +
> +struct {
> +       __uint(type, BPF_MAP_TYPE_HASH);
> +       __uint(max_entries, 1000);
> +       __type(key, struct sample);
> +       __type(value, int);
> +} hash_map SEC(".maps");
> +
> +/* inputs */
> +int pid = 0;
> +
> +/* inner state */
> +long seq = 0;
> +
> +SEC("fentry/" SYS_PREFIX "sys_getpgid")

it's fine as is, my suggestion to use ksyscall was to 1) avoid using
BPF trampoline (and so have these tests work on s390x) and 2) not have
to use the ugly SYS_PREFIX. SEC("kprobe/" SYS_PREFIX "sys_getpgid") would
solve 1), which is more important in practical terms. 2) is a wishlist
:)

I'm not insisting or asking to change this, just pointing out the
rationale for ksyscall suggestion in the first place.
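
For reference, the three attach flavors being discussed would look roughly
like this (a sketch, not tested here):

  /* fentry: uses BPF trampoline, so it doesn't work on s390x: */
  SEC("fentry/" SYS_PREFIX "sys_getpgid")

  /* plain kprobe: works on s390x, but still needs SYS_PREFIX: */
  SEC("kprobe/" SYS_PREFIX "sys_getpgid")

  /* ksyscall: libbpf resolves the arch-specific prefix itself: */
  SEC("ksyscall/getpgid")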


> +int test_ringbuf_mem_map_key(void *ctx)
> +{
> +       int cur_pid = bpf_get_current_pid_tgid() >> 32;
> +       struct sample *sample, sample_copy;
> +       int *lookup_val;
> +
> +       if (cur_pid != pid)
> +               return 0;
> +
> +       sample = bpf_ringbuf_reserve(&ringbuf, sizeof(*sample), 0);
> +       if (!sample)
> +               return 0;
> +
> +       sample->pid = pid;
> +       bpf_get_current_comm(sample->comm, sizeof(sample->comm));
> +       sample->seq = ++seq;
> +       sample->value = 42;
> +
> +       /* test using 'sample' (PTR_TO_MEM | MEM_ALLOC) as map key arg
> +        */
> +       lookup_val = (int *)bpf_map_lookup_elem(&hash_map, sample);
> +
> +       /* workaround - memcpy is necessary so that verifier doesn't
> +        * complain with:
> +        *   verifier internal error: more than one arg with ref_obj_id R3
> +        * when trying to do bpf_map_update_elem(&hash_map, sample, &sample->seq, BPF_ANY);
> +        *
> +        * Since bpf_map_lookup_elem above uses 'sample' as key, test using
> +        * sample field as value below
> +        */
> +       __builtin_memcpy(&sample_copy, sample, sizeof(struct sample));
> +       bpf_map_update_elem(&hash_map, &sample_copy, &sample->seq, BPF_ANY);
> +
> +       bpf_ringbuf_submit(sample, 0);
> +       return 0;
> +}
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test
  2022-10-20 16:07 ` [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test Dave Marchevsky
@ 2022-10-21 23:04   ` Andrii Nakryiko
  0 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2022-10-21 23:04 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Kernel Team, Yonghong Song, Kumar Kartikeya Dwivedi

On Thu, Oct 20, 2022 at 9:07 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
>
> Modify iter prog in existing bpf_iter_bpf_array_map.c, which currently
> dumps arraymap key/val, to also do a write of (val, key) into a
> newly-added hashmap. Confirm that the write succeeds as expected by
> modifying the userspace runner program.
>
> Before a change added in an earlier commit - considering PTR_TO_BUF reg
> a valid input to helpers which expect MAP_{KEY,VAL} - the verifier
> would've rejected this prog change due to type mismatch. Since using
> current iter's key/val to access a separate map is a reasonable usecase,
> let's add support for it.
>
> Note that the test prog cannot directly write (val, key) into hashmap
> via bpf_map_update_elem when both come from iter context because key is
> marked MEM_RDONLY. This is due to bpf_map_update_elem - and other basic
> map helpers - taking ARG_PTR_TO_MAP_{KEY,VALUE} w/o MEM_RDONLY type
> flag. bpf_map_{lookup,update,delete}_elem don't modify their
> input key/val so it should be possible to tag their args READONLY, but
> due to the ubiquitous use of these helpers and verifier checks for
> type == MAP_VALUE, such a change is nontrivial and seems better to
> address in a followup series.

Agree about addressing it separately, but I'm not sure what's
non-trivial or dangerous? If I remember correctly, MEM_RDONLY on
helper input arg just means that it accepts both read-only and
read-write views. While the input argument doesn't have MEM_RDONLY we
accept *only* read/write memory views. So basically adding MEM_RDONLY
in BPF helper proto makes it more general and permissive in what can
be passed into it. I think that's a good change, we added tons of
MEM_RDONLY to helpers that were accepting PTR_TO_MEM already.
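
As a sketch of what such a followup could look like in a helper proto
(hypothetical; the actual proto lives in kernel/bpf/helpers.c and the
surrounding verifier checks for type == MAP_VALUE would still need
auditing, which is the nontrivial part Dave mentions):

  const struct bpf_func_proto bpf_map_lookup_elem_proto = {
          .func           = bpf_map_lookup_elem,
          .gpl_only       = false,
          .pkt_access     = true,
          .ret_type       = RET_PTR_TO_MAP_VALUE_OR_NULL,
          .arg1_type      = ARG_CONST_MAP_PTR,
          /* adding MEM_RDONLY would let the helper also accept
           * read-only views of the key, since it never writes it
           */
          .arg2_type      = ARG_PTR_TO_MAP_KEY | MEM_RDONLY,
  };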

But anyways, patch looks good:

Acked-by: Andrii Nakryiko <andrii@kernel.org>


>
> Also fixup some 'goto's in test runner's map checking loop.
>
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> ---
>  .../selftests/bpf/prog_tests/bpf_iter.c       | 20 ++++++++++++------
>  .../bpf/progs/bpf_iter_bpf_array_map.c        | 21 ++++++++++++++++++-
>  2 files changed, 34 insertions(+), 7 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args
  2022-10-21 23:04   ` Andrii Nakryiko
@ 2022-10-22  2:26     ` Alexei Starovoitov
  0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2022-10-22  2:26 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Dave Marchevsky, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Kernel Team, Yonghong Song,
	Kumar Kartikeya Dwivedi

On Fri, Oct 21, 2022 at 4:04 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Oct 20, 2022 at 9:07 AM Dave Marchevsky <davemarchevsky@fb.com> wrote:
> >
> > After the previous patch, which added PTR_TO_MEM | MEM_ALLOC type
> > map_key_value_types, the only difference between map_key_value_types and
> > mem_types sets is PTR_TO_BUF and PTR_TO_MEM, which are in the latter set
> > but not the former.
> >
> > Helpers which expect ARG_PTR_TO_MAP_KEY or ARG_PTR_TO_MAP_VALUE
> > already effectively expect a valid blob of arbitrary memory that isn't
> > necessarily explicitly associated with a map. When validating a
> > PTR_TO_MAP_{KEY,VALUE} arg, the verifier expects meta->map_ptr to have
> > already been set, either by an earlier ARG_CONST_MAP_PTR arg, or custom
> > logic like that in process_timer_func or process_kptr_func.
> >
> > So let's get rid of map_key_value_types and just use mem_types for those
> > args.
> >
> > This has the effect of adding PTR_TO_BUF and PTR_TO_MEM to the set of
> > compatible types for ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE.
> >
> > PTR_TO_BUF is used by various bpf_iter implementations to represent a
> > chunk of valid r/w memory in ctx args for iter prog.
> >
> > PTR_TO_MEM is used by networking, tracing, and ringbuf helpers to
> > represent a chunk of valid memory. The PTR_TO_MEM | MEM_ALLOC
> > type added in previous commmit is specific to ringbuf helpers.
>
> typo: s/commmit/commit/ (but not worth reposting just to fix this)

Patched it up while applying.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key
  2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
                   ` (3 preceding siblings ...)
  2022-10-21 23:04 ` [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Andrii Nakryiko
@ 2022-10-22  2:30 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-10-22  2:30 UTC (permalink / raw)
  To: Dave Marchevsky; +Cc: bpf, ast, daniel, andrii, kernel-team, yhs, memxor

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Thu, 20 Oct 2022 09:07:18 -0700 you wrote:
> This patch adds support for the following pattern:
> 
>   struct some_data *data = bpf_ringbuf_reserve(&ringbuf, sizeof(struct some_data), 0);
>   if (!data)
>     return;
>   bpf_map_lookup_elem(&another_map, &data->some_field);
>   bpf_ringbuf_submit(data);
> 
> [...]

Here is the summary with links:
  - [v5,bpf-next,1/4] bpf: Allow ringbuf memory to be used as map key
    https://git.kernel.org/bpf/bpf-next/c/9ef40974a82a
  - [v5,bpf-next,2/4] bpf: Consider all mem_types compatible for map_{key,value} args
    https://git.kernel.org/bpf/bpf-next/c/d1673304097c
  - [v5,bpf-next,3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops
    https://git.kernel.org/bpf/bpf-next/c/51ee71d38d8c
  - [v5,bpf-next,4/4] selftests/bpf: Add write to hashmap to array_map iter test
    https://git.kernel.org/bpf/bpf-next/c/8f4bc15b9ad7

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2022-10-22  2:30 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-20 16:07 [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Dave Marchevsky
2022-10-20 16:07 ` [PATCH v5 bpf-next 2/4] bpf: Consider all mem_types compatible for map_{key,value} args Dave Marchevsky
2022-10-21 23:04   ` Andrii Nakryiko
2022-10-22  2:26     ` Alexei Starovoitov
2022-10-20 16:07 ` [PATCH v5 bpf-next 3/4] selftests/bpf: Add test verifying bpf_ringbuf_reserve retval use in map ops Dave Marchevsky
2022-10-21 23:04   ` Andrii Nakryiko
2022-10-20 16:07 ` [PATCH v5 bpf-next 4/4] selftests/bpf: Add write to hashmap to array_map iter test Dave Marchevsky
2022-10-21 23:04   ` Andrii Nakryiko
2022-10-21 23:04 ` [PATCH v5 bpf-next 1/4] bpf: Allow ringbuf memory to be used as map key Andrii Nakryiko
2022-10-22  2:30 ` patchwork-bot+netdevbpf
