BPF List
* [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf
@ 2024-06-21 14:08 Daniel Borkmann
  2024-06-21 14:08 ` [PATCH bpf v3 2/2] selftests/bpf: Add more ring buffer test coverage Daniel Borkmann
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Daniel Borkmann @ 2024-06-21 14:08 UTC (permalink / raw)
  To: ast
  Cc: bpf, Daniel Borkmann, Bing-Jhong Billy Jheng, Muhammad Ramdhan,
	Andrii Nakryiko

The BPF ring buffer is internally implemented as a power-of-2 sized circular
buffer with two logical, ever-increasing counters: consumer_pos is the
consumer counter denoting up to which logical position the consumer has
consumed the data, and producer_pos is the producer counter denoting the
amount of data reserved by all producers.

Each time a record is reserved, the producer that "owns" the record advances
the producer counter. Each time a record is read in user space, the consumer
advances the consumer counter once it has finished processing. Both counters
are stored in separate pages so that from user space, the producer counter
is read-only and the consumer counter is read-write.

One aspect that simplifies and thus speeds up the implementation of both
producers and consumers is that the data area is mapped twice, back to back,
in virtual memory. This avoids any special handling for samples that wrap
around at the end of the circular buffer data area: the page after the last
data page is the first data page again, so a wrapping sample still appears
completely contiguous in virtual memory.
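
The double-mapping trick can be illustrated with a small user-space model
(an illustrative sketch, not kernel code — the array stands in for the two
mmap'd views of the same data pages):

```c
#include <assert.h>
#include <string.h>

#define DATA_SZ 0x1000 /* power-of-2 data area size for the model */

static char area[2 * DATA_SZ]; /* stands in for the doubled mapping */

/* mirror each write so both "mappings" observe the same bytes */
static void write_byte(unsigned long pos, char c)
{
	area[pos % DATA_SZ] = c;
	area[(pos % DATA_SZ) + DATA_SZ] = c;
}

/* returns 1 if a record crossing the end of the data area still
 * reads back contiguously through the first mapping */
static int wrapped_record_contiguous(void)
{
	unsigned long pos = DATA_SZ - 2; /* starts 2 bytes before the end */
	const char rec[] = { 'a', 'b', 'c', 'd' };
	int i;

	for (i = 0; i < 4; i++)
		write_byte(pos + i, rec[i]);
	/* &area[pos] runs past DATA_SZ into the second "mapping" */
	return memcmp(&area[pos], rec, sizeof(rec)) == 0;
}
```

A record starting two bytes before the end of the data area can be read
through a single pointer without any wrap-around arithmetic.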

Each record has a struct bpf_ringbuf_hdr { u32 len; u32 pg_off; } header for
book-keeping the length and offset, and is inaccessible to the BPF program.
Helpers like bpf_ringbuf_reserve() return `(void *)hdr + BPF_RINGBUF_HDR_SZ`
for the BPF program to use. Bing-Jhong and Muhammad reported that it is,
however, possible to make a second allocated memory chunk overlap the first
one, with the result that the BPF program is able to edit the first chunk's
header.

For example, consider the creation of a BPF_MAP_TYPE_RINGBUF map with size
of 0x4000. Next, the consumer_pos is modified to 0x3000 /before/ a call to
bpf_ringbuf_reserve() is made. This will allocate a chunk A, which is in
[0x0,0x3008], and the BPF program is able to edit [0x8,0x3008]. Now, let's
allocate a chunk B with size 0x3000. This will succeed because consumer_pos
was edited ahead of time to pass the `new_prod_pos - cons_pos > rb->mask`
check. Chunk B will be in range [0x3008,0x6010], and the BPF program is able
to edit [0x3010,0x6010]. Due to the ring buffer memory layout mentioned
earlier, the ranges [0x0,0x4000] and [0x4000,0x8000] point to the same data
pages. This means that chunk B at [0x4000,0x4008] is chunk A's header.
bpf_ringbuf_submit() / bpf_ringbuf_discard() use the header's pg_off to then
locate the bpf_ringbuf itself via bpf_ringbuf_restore_from_rec(). Once chunk
B has modified chunk A's header, bpf_ringbuf_commit() refers to the wrong
page and can cause a crash.
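
The aliasing arithmetic above can be double-checked with a stand-alone
sketch (a user-space model with the constants from the example, not kernel
code): with mask = 0x3fff, chunk A's header at logical position 0x0 is also
backed by offset 0x4000, which falls inside chunk B's writable range
[0x3010,0x6010).

```c
#include <assert.h>

#define RB_SIZE 0x4000UL       /* ring buffer size from the example */
#define RB_MASK (RB_SIZE - 1)
#define HDR_SZ  8UL            /* sizeof(struct bpf_ringbuf_hdr) */

/* physical data-area offset backing a logical position */
static unsigned long phys_off(unsigned long pos)
{
	return pos & RB_MASK;
}

/* returns 1 if the writable range [start, end) of one chunk covers a
 * byte that physically aliases the header of a chunk at hdr_pos
 * (sketch only: assumes the header itself does not straddle the wrap) */
static int range_hits_header(unsigned long start, unsigned long end,
			     unsigned long hdr_pos)
{
	unsigned long p;

	for (p = start; p < end; p++)
		if (phys_off(p) >= phys_off(hdr_pos) &&
		    phys_off(p) <  phys_off(hdr_pos) + HDR_SZ)
			return 1;
	return 0;
}
```

Logical position 0x4000 maps to physical offset 0x0, so chunk B's writable
bytes alias chunk A's header exactly as described.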

Fix it by calculating the oldest pending_pos and checking whether the range
from the oldest outstanding record to the newest would span beyond the ring
buffer size. If that is the case, reject the request. We've tested with the
ring buffer benchmark in BPF selftests (./benchs/run_bench_ringbufs.sh)
before/after the fix and while it seems a bit slower on some benchmarks, the
slowdown is not significant enough to matter.
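
To make the effect of the fix concrete, here is a minimal user-space model
of the two space checks (an illustrative sketch using the constants from
the example above, not the kernel code itself; len is the rounded record
length including the header):

```c
#include <assert.h>

#define RB_MASK 0x3fffUL /* 0x4000 ring, mask = size - 1 */

/* pre-fix check: only bounds the producer vs consumer distance */
static int old_check_ok(unsigned long cons_pos, unsigned long prod_pos,
			unsigned long len)
{
	return prod_pos + len - cons_pos <= RB_MASK;
}

/* post-fix check: additionally bounds the span from the oldest
 * still-pending (uncommitted) record to the new producer position */
static int new_check_ok(unsigned long cons_pos, unsigned long pend_pos,
			unsigned long prod_pos, unsigned long len)
{
	unsigned long new_prod_pos = prod_pos + len;

	return new_prod_pos - cons_pos <= RB_MASK &&
	       new_prod_pos - pend_pos <= RB_MASK;
}
```

With cons_pos forged to 0x3000, chunk A still pending (pend_pos = 0x0) and
prod_pos at 0x3008, a second 0x3008-byte reservation passes the old check
but is rejected by the new one.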

Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier support for it")
Reported-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
Reported-by: Muhammad Ramdhan <ramdhan@starlabs.sg>
Co-developed-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
Signed-off-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 v1 -> v2:
   - Move pending_pos to the same cacheline as producer_pos
   - Force compiler to read hdr->len only once

 kernel/bpf/ringbuf.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 0ee653a936ea..87466de8316a 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -51,7 +51,8 @@ struct bpf_ringbuf {
 	 * This prevents a user-space application from modifying the
 	 * position and ruining in-kernel tracking. The permissions of the
 	 * pages depend on who is producing samples: user-space or the
-	 * kernel.
+	 * kernel. Note that the pending counter is placed in the same
+	 * page as the producer, so that it shares the same cacheline.
 	 *
 	 * Kernel-producer
 	 * ---------------
@@ -70,6 +71,7 @@ struct bpf_ringbuf {
 	 */
 	unsigned long consumer_pos __aligned(PAGE_SIZE);
 	unsigned long producer_pos __aligned(PAGE_SIZE);
+	unsigned long pending_pos;
 	char data[] __aligned(PAGE_SIZE);
 };
 
@@ -179,6 +181,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
 	rb->mask = data_sz - 1;
 	rb->consumer_pos = 0;
 	rb->producer_pos = 0;
+	rb->pending_pos = 0;
 
 	return rb;
 }
@@ -404,9 +407,10 @@ bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
 
 static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
 {
-	unsigned long cons_pos, prod_pos, new_prod_pos, flags;
-	u32 len, pg_off;
+	unsigned long cons_pos, prod_pos, new_prod_pos, pend_pos, flags;
 	struct bpf_ringbuf_hdr *hdr;
+	u64 tmp_size, hdr_len;
+	u32 len, pg_off;
 
 	if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
 		return NULL;
@@ -424,13 +428,29 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
 		spin_lock_irqsave(&rb->spinlock, flags);
 	}
 
+	pend_pos = rb->pending_pos;
 	prod_pos = rb->producer_pos;
 	new_prod_pos = prod_pos + len;
 
-	/* check for out of ringbuf space by ensuring producer position
-	 * doesn't advance more than (ringbuf_size - 1) ahead
+	while (pend_pos < prod_pos) {
+		hdr = (void *)rb->data + (pend_pos & rb->mask);
+		hdr_len = READ_ONCE(hdr->len);
+		if (hdr_len & BPF_RINGBUF_BUSY_BIT)
+			break;
+		tmp_size = hdr_len & ~BPF_RINGBUF_DISCARD_BIT;
+		tmp_size = round_up(tmp_size + BPF_RINGBUF_HDR_SZ, 8);
+		pend_pos += tmp_size;
+	}
+	rb->pending_pos = pend_pos;
+
+	/* check for out of ringbuf space:
+	 * - by ensuring producer position doesn't advance more than
+	 *   (ringbuf_size - 1) ahead
+	 * - by ensuring oldest not yet committed record until newest
+	 *   record does not span more than (ringbuf_size - 1)
 	 */
-	if (new_prod_pos - cons_pos > rb->mask) {
+	if ((new_prod_pos - cons_pos > rb->mask) ||
+	    (new_prod_pos - pend_pos > rb->mask)) {
 		spin_unlock_irqrestore(&rb->spinlock, flags);
 		return NULL;
 	}
-- 
2.43.0



* [PATCH bpf v3 2/2] selftests/bpf: Add more ring buffer test coverage
  2024-06-21 14:08 [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf Daniel Borkmann
@ 2024-06-21 14:08 ` Daniel Borkmann
  2024-06-21 20:04 ` [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf Andrii Nakryiko
  2024-06-21 20:10 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 4+ messages in thread
From: Daniel Borkmann @ 2024-06-21 14:08 UTC (permalink / raw)
  To: ast; +Cc: bpf, Daniel Borkmann

Add test coverage for reservations beyond the ring buffer size in order
to validate that bpf_ringbuf_reserve() rejects the request with NULL, all
other ring buffer tests keep passing as well:

  # ./vmtest.sh -- ./test_progs -t ringbuf
  [...]
  ./test_progs -t ringbuf
  [    1.165434] bpf_testmod: loading out-of-tree module taints kernel.
  [    1.165825] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  [    1.284001] tsc: Refined TSC clocksource calibration: 3407.982 MHz
  [    1.286871] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc34e357, max_idle_ns: 440795379773 ns
  [    1.289555] clocksource: Switched to clocksource tsc
  #274/1   ringbuf/ringbuf:OK
  #274/2   ringbuf/ringbuf_n:OK
  #274/3   ringbuf/ringbuf_map_key:OK
  #274/4   ringbuf/ringbuf_write:OK
  #274     ringbuf:OK
  #275     ringbuf_multi:OK
  [...]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 v1 -> v2:
   - Remove old-style CHECK asserts
 v2 -> v3:
   - Add missing munmap

 tools/testing/selftests/bpf/Makefile          |  2 +-
 .../selftests/bpf/prog_tests/ringbuf.c        | 46 +++++++++++++++++++
 .../selftests/bpf/progs/test_ringbuf_write.c  | 42 +++++++++++++++++
 3 files changed, 89 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_ringbuf_write.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 96c0af323341..037d2e0502ef 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -457,7 +457,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 LSKELS := fentry_test.c fexit_test.c fexit_sleep.c atomics.c 		\
 	trace_printk.c trace_vprintk.c map_ptr_kern.c 			\
 	core_kern.c core_kern_overflow.c test_ringbuf.c			\
-	test_ringbuf_n.c test_ringbuf_map_key.c
+	test_ringbuf_n.c test_ringbuf_map_key.c test_ringbuf_write.c
 
 # Generate both light skeleton and libbpf skeleton for these
 LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \
diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
index 4c6f42dae409..40fd60215353 100644
--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
@@ -12,9 +12,11 @@
 #include <sys/sysinfo.h>
 #include <linux/perf_event.h>
 #include <linux/ring_buffer.h>
+
 #include "test_ringbuf.lskel.h"
 #include "test_ringbuf_n.lskel.h"
 #include "test_ringbuf_map_key.lskel.h"
+#include "test_ringbuf_write.lskel.h"
 
 #define EDONE 7777
 
@@ -84,6 +86,48 @@ static void *poll_thread(void *input)
 	return (void *)(long)ring_buffer__poll(ringbuf, timeout);
 }
 
+static void ringbuf_write_subtest(void)
+{
+	struct test_ringbuf_write_lskel *skel;
+	int page_size = getpagesize();
+	size_t *mmap_ptr;
+	int err, rb_fd;
+
+	skel = test_ringbuf_write_lskel__open();
+	if (!ASSERT_OK_PTR(skel, "skel_load"))
+		return;
+
+	skel->maps.ringbuf.max_entries = 0x4000;
+
+	err = test_ringbuf_write_lskel__load(skel);
+	if (!ASSERT_OK(err, "ringbuf_write"))
+		goto cleanup;
+
+	rb_fd = skel->maps.ringbuf.map_fd;
+
+	mmap_ptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, rb_fd, 0);
+	ASSERT_OK_PTR(mmap_ptr, "rw_cons_pos");
+	*mmap_ptr = 0x3000;
+	ASSERT_OK(munmap(mmap_ptr, page_size), "unmap_rw");
+
+	skel->bss->pid = getpid();
+
+	ringbuf = ring_buffer__new(rb_fd, process_sample, NULL, NULL);
+	if (!ASSERT_OK_PTR(ringbuf, "ringbuf_create"))
+		goto cleanup;
+
+	err = test_ringbuf_write_lskel__attach(skel);
+	if (!ASSERT_OK(err, "ringbuf_write"))
+		goto cleanup;
+
+	trigger_samples();
+	ASSERT_GE(skel->bss->discarded, 1, "discarded");
+	ASSERT_EQ(skel->bss->passed, 0, "passed");
+cleanup:
+	ring_buffer__free(ringbuf);
+	test_ringbuf_write_lskel__destroy(skel);
+}
+
 static void ringbuf_subtest(void)
 {
 	const size_t rec_sz = BPF_RINGBUF_HDR_SZ + sizeof(struct sample);
@@ -451,4 +495,6 @@ void test_ringbuf(void)
 		ringbuf_n_subtest();
 	if (test__start_subtest("ringbuf_map_key"))
 		ringbuf_map_key_subtest();
+	if (test__start_subtest("ringbuf_write"))
+		ringbuf_write_subtest();
 }
diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_write.c b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
new file mode 100644
index 000000000000..c6c67238a7c8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+} ringbuf SEC(".maps");
+
+long discarded, passed;
+int pid;
+
+SEC("fentry/" SYS_PREFIX "sys_getpgid")
+int test_ringbuf_write(void *ctx)
+{
+	int *foo, cur_pid = bpf_get_current_pid_tgid() >> 32;
+	void *sample1, *sample2;
+
+	if (cur_pid != pid)
+		return 0;
+
+	sample1 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
+	if (!sample1)
+		return 0;
+	/* first one can pass */
+	sample2 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
+	if (!sample2) {
+		bpf_ringbuf_discard(sample1, 0);
+		__sync_fetch_and_add(&discarded, 1);
+		return 0;
+	}
+	/* second one must not */
+	__sync_fetch_and_add(&passed, 1);
+	foo = sample2 + 4084;
+	*foo = 256;
+	bpf_ringbuf_discard(sample1, 0);
+	bpf_ringbuf_discard(sample2, 0);
+	return 0;
+}
-- 
2.43.0



* Re: [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf
  2024-06-21 14:08 [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf Daniel Borkmann
  2024-06-21 14:08 ` [PATCH bpf v3 2/2] selftests/bpf: Add more ring buffer test coverage Daniel Borkmann
@ 2024-06-21 20:04 ` Andrii Nakryiko
  2024-06-21 20:10 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 4+ messages in thread
From: Andrii Nakryiko @ 2024-06-21 20:04 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: ast, bpf, Bing-Jhong Billy Jheng, Muhammad Ramdhan,
	Andrii Nakryiko

On Fri, Jun 21, 2024 at 7:25 AM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> The BPF ring buffer internally is implemented as a power-of-2 sized circular
> buffer, with two logical and ever-increasing counters: consumer_pos is the
> consumer counter to show which logical position the consumer consumed the
> data, and producer_pos which is the producer counter denoting the amount of
> data reserved by all producers.
>
> Each time a record is reserved, the producer that "owns" the record will
> successfully advance producer counter. In user space each time a record is
> read, the consumer of the data advanced the consumer counter once it finished
> processing. Both counters are stored in separate pages so that from user
> space, the producer counter is read-only and the consumer counter is read-write.
>
> One aspect that simplifies and thus speeds up the implementation of both
> producers and consumers is how the data area is mapped twice contiguously
> back-to-back in the virtual memory, allowing to not take any special measures
> for samples that have to wrap around at the end of the circular buffer data
> area, because the next page after the last data page would be first data page
> again, and thus the sample will still appear completely contiguous in virtual
> memory.
>
> Each record has a struct bpf_ringbuf_hdr { u32 len; u32 pg_off; } header for
> book-keeping the length and offset, and is inaccessible to the BPF program.
> Helpers like bpf_ringbuf_reserve() return `(void *)hdr + BPF_RINGBUF_HDR_SZ`
> for the BPF program to use. Bing-Jhong and Muhammad reported that it is however
> possible to make a second allocated memory chunk overlapping with the first
> chunk and as a result, the BPF program is now able to edit first chunk's
> header.
>
> For example, consider the creation of a BPF_MAP_TYPE_RINGBUF map with size
> of 0x4000. Next, the consumer_pos is modified to 0x3000 /before/ a call to
> bpf_ringbuf_reserve() is made. This will allocate a chunk A, which is in
> [0x0,0x3008], and the BPF program is able to edit [0x8,0x3008]. Now, lets
> allocate a chunk B with size 0x3000. This will succeed because consumer_pos
> was edited ahead of time to pass the `new_prod_pos - cons_pos > rb->mask`
> check. Chunk B will be in range [0x3008,0x6010], and the BPF program is able
> to edit [0x3010,0x6010]. Due to the ring buffer memory layout mentioned
> earlier, the ranges [0x0,0x4000] and [0x4000,0x8000] point to the same data
> pages. This means that chunk B at [0x4000,0x4008] is chunk A's header.
> bpf_ringbuf_submit() / bpf_ringbuf_discard() use the header's pg_off to then
> locate the bpf_ringbuf itself via bpf_ringbuf_restore_from_rec(). Once chunk
> B modified chunk A's header, then bpf_ringbuf_commit() refers to the wrong
> page and could cause a crash.
>
> Fix it by calculating the oldest pending_pos and check whether the range
> from the oldest outstanding record to the newest would span beyond the ring
> buffer size. If that is the case, then reject the request. We've tested with
> the ring buffer benchmark in BPF selftests (./benchs/run_bench_ringbufs.sh)
> before/after the fix and while it seems a bit slower on some benchmarks, it
> is still not significantly enough to matter.
>
> Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier support for it")
> Reported-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
> Reported-by: Muhammad Ramdhan <ramdhan@starlabs.sg>
> Co-developed-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
> Signed-off-by: Bing-Jhong Billy Jheng <billy@starlabs.sg>
> Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> ---
>  v1 -> v2:
>    - Move pending_pos to the same cacheline as producer_pos
>    - Force compiler to read hdr->len only once
>
>  kernel/bpf/ringbuf.c | 32 ++++++++++++++++++++++++++------
>  1 file changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> index 0ee653a936ea..87466de8316a 100644
> --- a/kernel/bpf/ringbuf.c
> +++ b/kernel/bpf/ringbuf.c
> @@ -51,7 +51,8 @@ struct bpf_ringbuf {
>          * This prevents a user-space application from modifying the
>          * position and ruining in-kernel tracking. The permissions of the
>          * pages depend on who is producing samples: user-space or the
> -        * kernel.
> +        * kernel. Note that the pending counter is placed in the same
> +        * page as the producer, so that it shares the same cacheline.
>          *
>          * Kernel-producer
>          * ---------------
> @@ -70,6 +71,7 @@ struct bpf_ringbuf {
>          */
>         unsigned long consumer_pos __aligned(PAGE_SIZE);
>         unsigned long producer_pos __aligned(PAGE_SIZE);
> +       unsigned long pending_pos;
>         char data[] __aligned(PAGE_SIZE);
>  };
>
> @@ -179,6 +181,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
>         rb->mask = data_sz - 1;
>         rb->consumer_pos = 0;
>         rb->producer_pos = 0;
> +       rb->pending_pos = 0;
>
>         return rb;
>  }
> @@ -404,9 +407,10 @@ bpf_ringbuf_restore_from_rec(struct bpf_ringbuf_hdr *hdr)
>
>  static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
>  {
> -       unsigned long cons_pos, prod_pos, new_prod_pos, flags;
> -       u32 len, pg_off;
> +       unsigned long cons_pos, prod_pos, new_prod_pos, pend_pos, flags;
>         struct bpf_ringbuf_hdr *hdr;
> +       u64 tmp_size, hdr_len;

these can be u32, so I fixed it up while applying, thanks!

> +       u32 len, pg_off;
>
>         if (unlikely(size > RINGBUF_MAX_RECORD_SZ))
>                 return NULL;
> @@ -424,13 +428,29 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
>                 spin_lock_irqsave(&rb->spinlock, flags);
>         }
>
> +       pend_pos = rb->pending_pos;
>         prod_pos = rb->producer_pos;
>         new_prod_pos = prod_pos + len;
>
> -       /* check for out of ringbuf space by ensuring producer position
> -        * doesn't advance more than (ringbuf_size - 1) ahead
> +       while (pend_pos < prod_pos) {
> +               hdr = (void *)rb->data + (pend_pos & rb->mask);
> +               hdr_len = READ_ONCE(hdr->len);
> +               if (hdr_len & BPF_RINGBUF_BUSY_BIT)
> +                       break;
> +               tmp_size = hdr_len & ~BPF_RINGBUF_DISCARD_BIT;
> +               tmp_size = round_up(tmp_size + BPF_RINGBUF_HDR_SZ, 8);
> +               pend_pos += tmp_size;
> +       }
> +       rb->pending_pos = pend_pos;
> +
> +       /* check for out of ringbuf space:
> +        * - by ensuring producer position doesn't advance more than
> +        *   (ringbuf_size - 1) ahead
> +        * - by ensuring oldest not yet committed record until newest
> +        *   record does not span more than (ringbuf_size - 1)
>          */
> -       if (new_prod_pos - cons_pos > rb->mask) {
> +       if ((new_prod_pos - cons_pos > rb->mask) ||
> +           (new_prod_pos - pend_pos > rb->mask)) {
>                 spin_unlock_irqrestore(&rb->spinlock, flags);
>                 return NULL;
>         }
> --
> 2.43.0
>


* Re: [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf
  2024-06-21 14:08 [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf Daniel Borkmann
  2024-06-21 14:08 ` [PATCH bpf v3 2/2] selftests/bpf: Add more ring buffer test coverage Daniel Borkmann
  2024-06-21 20:04 ` [PATCH bpf v3 1/2] bpf: Fix overrunning reservations in ringbuf Andrii Nakryiko
@ 2024-06-21 20:10 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 4+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-06-21 20:10 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: ast, bpf, billy, ramdhan, andrii

Hello:

This series was applied to bpf/bpf.git (master)
by Andrii Nakryiko <andrii@kernel.org>:

On Fri, 21 Jun 2024 16:08:27 +0200 you wrote:
> The BPF ring buffer internally is implemented as a power-of-2 sized circular
> buffer, with two logical and ever-increasing counters: consumer_pos is the
> consumer counter to show which logical position the consumer consumed the
> data, and producer_pos which is the producer counter denoting the amount of
> data reserved by all producers.
> 
> Each time a record is reserved, the producer that "owns" the record will
> successfully advance producer counter. In user space each time a record is
> read, the consumer of the data advanced the consumer counter once it finished
> processing. Both counters are stored in separate pages so that from user
> space, the producer counter is read-only and the consumer counter is read-write.
> 
> [...]

Here is the summary with links:
  - [bpf,v3,1/2] bpf: Fix overrunning reservations in ringbuf
    https://git.kernel.org/bpf/bpf/c/cfa1a2329a69
  - [bpf,v3,2/2] selftests/bpf: Add more ring buffer test coverage
    https://git.kernel.org/bpf/bpf/c/1d68f685a850

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



