* [PATCH v2 0/4] tracing/user_events: Fixes and improvements for 6.4
From: Beau Belgrave @ 2023-04-25 22:51 UTC
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

Now that user_events is in for-next, we have broadened our integration
of user_events. During this integration we found a few problems that
make issues hard to debug when user processes use the ABI directly;
this series addresses them.

The most important fix is for an out-of-bounds access via the write
index: if the index is negative, an out-of-bounds access is attempted.
This bug was introduced in one of the very first user_events patches
and remained unseen for a long time. Apologies for not catching it
sooner.

We think users will expect the kernel to always clear the registered
bit when events are unregistered, even if the event is still enabled
in a kernel tracer. The user process could do this itself after
unregistering, but it seems appropriate for the kernel side to attempt
it. We also discussed whether it makes sense for the kernel to allow
user processes to tie multiple events to the same value and bit. While
this doesn't cause any issues on the kernel side, it leads to undefined
behavior in the user process: whether the bit is set varies with which
event happens to be enabled or disabled at the time.
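
For reference, direct ABI usage looks roughly like the sketch below
(modeled on the selftests; the event name/format and bit choice are
arbitrary, and error handling is elided):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <sys/uio.h>
  #include <unistd.h>
  #include <linux/user_events.h>

  static int enabled; /* Kernel sets/clears the registered bit here */

  int main(void)
  {
          struct user_reg reg = {0};
          int data_fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

          reg.size = sizeof(reg);
          reg.enable_bit = 31;
          reg.enable_size = sizeof(enabled);
          reg.enable_addr = (__u64)&enabled;
          reg.name_args = (__u64)"example_event u32 count";

          /* Kernel fills in reg.write_index on success */
          ioctl(data_fd, DIAG_IOCSREG, &reg);

          if (enabled) { /* Nonzero once a tracer enables the event */
                  __u32 count = 1;
                  struct iovec io[2] = {
                          { &reg.write_index, sizeof(reg.write_index) },
                          { &count, sizeof(count) },
                  };

                  /* The first 4 bytes of every write are the index */
                  writev(data_fd, io, 2);
          }

          return 0;
  }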

Change history

V2:
Add brackets to the complex for_each line.

Add a patch to ensure fault-in attempts are limited to 10 in all cases,
since a user able to cause write failures paired with successful
fault-in sequences could otherwise loop indefinitely.

Beau Belgrave (4):
  tracing/user_events: Ensure write index cannot be negative
  tracing/user_events: Ensure bit is cleared on unregister
  tracing/user_events: Prevent same address and bit per process
  tracing/user_events: Limit max fault-in attempts

 kernel/trace/trace_events_user.c              | 123 ++++++++++++++++--
 .../testing/selftests/user_events/abi_test.c  |   9 +-
 .../selftests/user_events/ftrace_test.c       |  14 +-
 3 files changed, 130 insertions(+), 16 deletions(-)


base-commit: 88fe1ec75fcb296579e05eaf3807da3ee83137e4
-- 
2.25.1


* [PATCH v2 1/4] tracing/user_events: Ensure write index cannot be negative
From: Beau Belgrave @ 2023-04-25 22:51 UTC
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

The write index indicates which event the data is for and is used to
access a per-file array. The index is passed by user processes as the
first 4 bytes of each write() call. Ensure it cannot be negative by
returning -EINVAL, preventing out-of-bounds accesses.

Update ftrace self-test to ensure this occurs properly.
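
The failure mode, lightly paraphrasing the pre-existing code around the
hunk below: the index is a signed int, so the existing upper-bound
check alone does not reject negative values.

  int idx; /* First 4 bytes of the write, copied from userspace */

  if (likely(refs && idx < refs->count)) /* Passes for idx < 0 */
          user = refs->events[idx];      /* Out-of-bounds read */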

Fixes: 7f5a08c79df3 ("user_events: Add minimal support for trace_event into ftrace")
Reported-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c                  | 3 +++
 tools/testing/selftests/user_events/ftrace_test.c | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index cc8c6d8b69b5..e7dff24aa724 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1821,6 +1821,9 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
 	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
 		return -EFAULT;
 
+	if (idx < 0)
+		return -EINVAL;
+
 	rcu_read_lock_sched();
 
 	refs = rcu_dereference_sched(info->refs);
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index aceafacfb126..91272f9d6fce 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -296,6 +296,11 @@ TEST_F(user, write_events) {
 	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
 	after = trace_bytes();
 	ASSERT_GT(after, before);
+
+	/* Negative index should fail with EINVAL */
+	reg.write_index = -1;
+	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+	ASSERT_EQ(EINVAL, errno);
 }
 
 TEST_F(user, write_fault) {
-- 
2.25.1


* [PATCH v2 2/4] tracing/user_events: Ensure bit is cleared on unregister
From: Beau Belgrave @ 2023-04-25 22:51 UTC
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

If an event is still enabled when a user process unregisters it, the
bit is left set in the user process. Fix this by always clearing the
bit in the user process when the unregister succeeds.

Update abi self-test to ensure this occurs properly.
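
From the user process side, the expected behavior is roughly the
sketch below ('enabled' and 'data_fd' as in a typical registration,
error handling elided):

  struct user_unreg unreg = {0};

  unreg.size = sizeof(unreg);
  unreg.disable_bit = 31;
  unreg.disable_addr = (__u64)&enabled;

  /* On success the kernel now also clears bit 31 of 'enabled',
   * even if a tracer still has the event enabled.
   */
  ioctl(data_fd, DIAG_IOCSUNREG, &unreg);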

Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c              | 34 +++++++++++++++++++
 .../testing/selftests/user_events/abi_test.c  |  9 +++--
 2 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e7dff24aa724..eb195d697177 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -2146,6 +2146,35 @@ static long user_unreg_get(struct user_unreg __user *ureg,
 	return ret;
 }
 
+static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+				   unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler enabler;
+	int result;
+
+	memset(&enabler, 0, sizeof(enabler));
+	enabler.addr = uaddr;
+	enabler.values = bit;
+retry:
+	/* Prevents state changes from racing with new enablers */
+	mutex_lock(&event_mutex);
+
+	/* Force the bit to be cleared, since no event is attached */
+	mmap_read_lock(user_mm->mm);
+	result = user_event_enabler_write(user_mm, &enabler, false);
+	mmap_read_unlock(user_mm->mm);
+
+	mutex_unlock(&event_mutex);
+
+	if (result) {
+		/* Attempt to fault-in and retry if it worked */
+		if (!user_event_mm_fault_in(user_mm, uaddr))
+			goto retry;
+	}
+
+	return result;
+}
+
 /*
  * Unregisters an enablement address/bit within a task/user mm.
  */
@@ -2190,6 +2219,11 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 
 	mutex_unlock(&event_mutex);
 
+	/* Ensure bit is now cleared for user, regardless of event status */
+	if (!ret)
+		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+					      reg.disable_bit);
+
 	return ret;
 }
 
diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index e0323d3777a7..5125c42efe65 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -109,13 +109,16 @@ TEST_F(user, enablement) {
 	ASSERT_EQ(0, change_event(false));
 	ASSERT_EQ(0, self->check);
 
-	/* Should not change after disable */
+	/* Ensure kernel clears bit after disable */
 	ASSERT_EQ(0, change_event(true));
 	ASSERT_EQ(1, self->check);
 	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, self->check);
+
+	/* Ensure doesn't change after unreg */
+	ASSERT_EQ(0, change_event(true));
+	ASSERT_EQ(0, self->check);
 	ASSERT_EQ(0, change_event(false));
-	ASSERT_EQ(1, self->check);
-	self->check = 0;
 }
 
 TEST_F(user, bit_sizes) {
-- 
2.25.1


* [PATCH v2 3/4] tracing/user_events: Prevent same address and bit per process
From: Beau Belgrave @ 2023-04-25 22:51 UTC
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

User processes register an address and bit pair for events. If the
same address and bit pair are registered multiple times within the
same process, it can cause undefined behavior when events are enabled
or disabled: the bit could be turned off when one of the events is
disabled while another event sharing the pair is still enabled.

Prevent this undefined behavior by checking the current mm to see
whether any event has already been registered for the address and bit
pair. Return -EADDRINUSE to the user process if the pair is already in
use.

Update ftrace self-test to ensure this occurs properly.
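
From the user process side, the new behavior is roughly the sketch
below ('reg' and 'data_fd' as in a typical registration):

  /* First register succeeds, kernel fills in reg.write_index */
  ioctl(data_fd, DIAG_IOCSREG, &reg);

  /* Same enable_addr + enable_bit pair now fails */
  ioctl(data_fd, DIAG_IOCSREG, &reg); /* Returns -1, errno == EADDRINUSE */

  /* A different bit on the same address is still allowed and
   * yields the same write index for the same event name.
   */
  reg.enable_bit = 30;
  ioctl(data_fd, DIAG_IOCSREG, &reg);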

Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c              | 41 +++++++++++++++++++
 .../selftests/user_events/ftrace_test.c       |  9 +++-
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index eb195d697177..4fc099fc7637 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -419,6 +419,21 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	return 0;
 }
 
+static bool user_event_enabler_exists(struct user_event_mm *mm,
+				      unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler *enabler;
+	struct user_event_enabler *next;
+
+	list_for_each_entry_safe(enabler, next, &mm->enablers, link) {
+		if (enabler->addr == uaddr &&
+		    (enabler->values & ENABLE_VAL_BIT_MASK) == bit)
+			return true;
+	}
+
+	return false;
+}
+
 static void user_event_enabler_update(struct user_event *user)
 {
 	struct user_event_enabler *enabler;
@@ -657,6 +672,22 @@ void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm)
 	user_event_mm_remove(t);
 }
 
+static bool current_user_event_enabler_exists(unsigned long uaddr,
+					      unsigned char bit)
+{
+	struct user_event_mm *user_mm = current_user_event_mm();
+	bool exists;
+
+	if (!user_mm)
+		return false;
+
+	exists = user_event_enabler_exists(user_mm, uaddr, bit);
+
+	user_event_mm_put(user_mm);
+
+	return exists;
+}
+
 static struct user_event_enabler
 *user_event_enabler_create(struct user_reg *reg, struct user_event *user,
 			   int *write_result)
@@ -2045,6 +2076,16 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	if (ret)
 		return ret;
 
+	/*
+	 * Prevent users from using the same address and bit multiple times
+	 * within the same mm address space. This can cause unexpected behavior
+	 * for user processes that is far easier to debug if this is explicitly
+	 * an error upon registering.
+	 */
+	if (current_user_event_enabler_exists((unsigned long)reg.enable_addr,
+					      reg.enable_bit))
+		return -EADDRINUSE;
+
 	name = strndup_user((const char __user *)(uintptr_t)reg.name_args,
 			    MAX_EVENT_DESC);
 
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index 91272f9d6fce..7c99cef94a65 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -219,7 +219,12 @@ TEST_F(user, register_events) {
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
-	/* Multiple registers should result in same index */
+	/* Multiple registers to the same addr + bit should fail */
+	ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(EADDRINUSE, errno);
+
+	/* Multiple registers to same name should result in same index */
+	reg.enable_bit = 30;
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
@@ -242,6 +247,8 @@ TEST_F(user, register_events) {
 
 	/* Unregister */
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
+	unreg.disable_bit = 30;
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
 
 	/* Delete should work only after close and unregister */
 	close(self->data_fd);
-- 
2.25.1


* [PATCH v2 4/4] tracing/user_events: Limit max fault-in attempts
From: Beau Belgrave @ 2023-04-25 22:51 UTC
  To: rostedt, mhiramat, mathieu.desnoyers, dcook, alanau
  Cc: linux-kernel, linux-trace-kernel

When event enablement changes, user_events attempts to update a bit in
the user process. If a fault is hit, an attempt is made to fault-in the
page, and the write is retried if the page comes in. While this
normally takes only a couple of attempts, it is possible for a bad user
process to cause infinite loops.

Ensure fault-in attempts, whether sync or async, are limited to a
maximum of 10 per update. When the max is hit, return -EFAULT in all
cases so that no further attempt is made.
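
The resulting retry pattern, condensed from the changes below (callers
reset the attempt counter once per update):

  int attempt = 0;
retry:
  /* user_event_enabler_write() bumps *attempt on each try */
  result = user_event_enabler_write(mm, enabler, false, &attempt);

  if (result) {
          /* user_event_mm_fault_in() returns -EFAULT once attempt
           * exceeds 10, breaking the loop instead of retrying forever.
           */
          if (!user_event_mm_fault_in(mm, uaddr, attempt))
                  goto retry;
  }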

Suggested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 49 +++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 4fc099fc7637..cab2c5891758 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -123,6 +123,7 @@ struct user_event_enabler_fault {
 	struct work_struct		work;
 	struct user_event_mm		*mm;
 	struct user_event_enabler	*enabler;
+	int				attempt;
 };
 
 static struct kmem_cache *fault_cache;
@@ -266,11 +267,19 @@ static void user_event_enabler_destroy(struct user_event_enabler *enabler)
 	kfree(enabler);
 }
 
-static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
+static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr,
+				  int attempt)
 {
 	bool unlocked;
 	int ret;
 
+	/*
+	 * Normally this is low, ensure that it cannot be taken advantage of by
+	 * bad user processes to cause excessive looping.
+	 */
+	if (attempt > 10)
+		return -EFAULT;
+
 	mmap_read_lock(mm->mm);
 
 	/* Ensure MM has tasks, cannot use after exit_mm() */
@@ -289,7 +298,7 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault);
+				    bool fixup_fault, int *attempt);
 
 static void user_event_enabler_fault_fixup(struct work_struct *work)
 {
@@ -298,9 +307,10 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 	struct user_event_enabler *enabler = fault->enabler;
 	struct user_event_mm *mm = fault->mm;
 	unsigned long uaddr = enabler->addr;
+	int attempt = fault->attempt;
 	int ret;
 
-	ret = user_event_mm_fault_in(mm, uaddr);
+	ret = user_event_mm_fault_in(mm, uaddr, attempt);
 
 	if (ret && ret != -ENOENT) {
 		struct user_event *user = enabler->event;
@@ -329,7 +339,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	if (!ret) {
 		mmap_read_lock(mm->mm);
-		user_event_enabler_write(mm, enabler, true);
+		user_event_enabler_write(mm, enabler, true, &attempt);
 		mmap_read_unlock(mm->mm);
 	}
 out:
@@ -341,7 +351,8 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 }
 
 static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
-					   struct user_event_enabler *enabler)
+					   struct user_event_enabler *enabler,
+					   int attempt)
 {
 	struct user_event_enabler_fault *fault;
 
@@ -353,6 +364,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 	INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
 	fault->mm = user_event_mm_get(mm);
 	fault->enabler = enabler;
+	fault->attempt = attempt;
 
 	/* Don't try to queue in again while we have a pending fault */
 	set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
@@ -372,7 +384,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault)
+				    bool fixup_fault, int *attempt)
 {
 	unsigned long uaddr = enabler->addr;
 	unsigned long *ptr;
@@ -383,6 +395,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	lockdep_assert_held(&event_mutex);
 	mmap_assert_locked(mm->mm);
 
+	*attempt += 1;
+
 	/* Ensure MM has tasks, cannot use after exit_mm() */
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
@@ -398,7 +412,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 		if (!fixup_fault)
 			return -EFAULT;
 
-		if (!user_event_enabler_queue_fault(mm, enabler))
+		if (!user_event_enabler_queue_fault(mm, enabler, *attempt))
 			pr_warn("user_events: Unable to queue fault handler\n");
 
 		return -EFAULT;
@@ -439,15 +453,19 @@ static void user_event_enabler_update(struct user_event *user)
 	struct user_event_enabler *enabler;
 	struct user_event_mm *mm = user_event_mm_get_all(user);
 	struct user_event_mm *next;
+	int attempt;
 
 	while (mm) {
 		next = mm->next;
 		mmap_read_lock(mm->mm);
 		rcu_read_lock();
 
-		list_for_each_entry_rcu(enabler, &mm->enablers, link)
-			if (enabler->event == user)
-				user_event_enabler_write(mm, enabler, true);
+		list_for_each_entry_rcu(enabler, &mm->enablers, link) {
+			if (enabler->event == user) {
+				attempt = 0;
+				user_event_enabler_write(mm, enabler, true, &attempt);
+			}
+		}
 
 		rcu_read_unlock();
 		mmap_read_unlock(mm->mm);
@@ -695,6 +713,7 @@ static struct user_event_enabler
 	struct user_event_enabler *enabler;
 	struct user_event_mm *user_mm;
 	unsigned long uaddr = (unsigned long)reg->enable_addr;
+	int attempt = 0;
 
 	user_mm = current_user_event_mm();
 
@@ -715,7 +734,8 @@ static struct user_event_enabler
 
 	/* Attempt to reflect the current state within the process */
 	mmap_read_lock(user_mm->mm);
-	*write_result = user_event_enabler_write(user_mm, enabler, false);
+	*write_result = user_event_enabler_write(user_mm, enabler, false,
+						 &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	/*
@@ -735,7 +755,7 @@ static struct user_event_enabler
 
 	if (*write_result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 
 		kfree(enabler);
@@ -2192,6 +2212,7 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 {
 	struct user_event_enabler enabler;
 	int result;
+	int attempt = 0;
 
 	memset(&enabler, 0, sizeof(enabler));
 	enabler.addr = uaddr;
@@ -2202,14 +2223,14 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 
 	/* Force the bit to be cleared, since no event is attached */
 	mmap_read_lock(user_mm->mm);
-	result = user_event_enabler_write(user_mm, &enabler, false);
+	result = user_event_enabler_write(user_mm, &enabler, false, &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	mutex_unlock(&event_mutex);
 
 	if (result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 	}
 
-- 
2.25.1

