linux-trace-kernel.vger.kernel.org archive mirror
* [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure
@ 2025-07-17  0:49 Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 01/12] unwind_user: Add user space unwinding API with frame pointer support Steven Rostedt
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

This is the first patch series of a set that will make it possible to use
SFrames[1] in the Linux kernel. Here is a quick recap of the motivation for
doing this.

Currently, the only way to get a user space stack trace from a stack
walk (and not just copying a large amount of the user stack into the kernel
ring buffer) is to use frame pointers. This has a few issues. The biggest
one is that compiling frame pointers into every application and library
has been shown to cause performance overhead.

Another issue is that the format of the frames may not always be consistent
between different compilers, and some architectures (s390) have no defined
format for a reliable stack walk. The only way to perform user space
profiling on these architectures is to copy the user stack into the kernel
buffer.

SFrame is now supported in GNU binutils and will soon be supported by LLVM
as well. SFrame acts much like ORC and lives in the ELF executable file as
its own section. Like ORC, it has two tables. The first table is sorted by
instruction pointer (IP): looking up the current IP in it leads to an entry
in the second table, which describes where the return address of the
current function is located. That return address can then be looked up in
the first table to find the return address of its caller, and so on. This
performs a user space stack walk.
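The two-table lookup described above can be sketched in plain C. This is
only an illustration of the shape of the walk; the real SFrame format packs
its FDE/FRE tables far more compactly, and the field names here are made up:

```c
#include <stddef.h>

/* Hypothetical, flattened view of the two SFrame tables: the real format
 * encodes FDEs and FREs much more compactly. */
struct sframe_entry {
	unsigned long ip_start;	/* first table: sorted by IP range */
	unsigned long ip_end;
	long cfa_off;		/* second table: CFA offset from SP */
	long ra_off;		/* return address offset from the CFA */
};

/* Binary search the IP-sorted table for the entry covering @ip. */
static const struct sframe_entry *sframe_find(const struct sframe_entry *tbl,
					      size_t nr, unsigned long ip)
{
	size_t lo = 0, hi = nr;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (ip < tbl[mid].ip_start)
			hi = mid;
		else if (ip >= tbl[mid].ip_end)
			lo = mid + 1;
		else
			return &tbl[mid];
	}
	return NULL;	/* IP not covered: stop the walk */
}
```

A full walk would then read the return address from user memory at
cfa + ra_off and feed it back into sframe_find() until the lookup fails.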

Because the SFrame section lives in the ELF file, it needs to be faulted
into memory when it is used. This means that walking the user space stack
requires being in a faultable context. As profilers like perf request a
stack trace in interrupt or NMI context, the walk cannot be done at the
time of the request. Instead, it must be deferred until it is safe to fault
in user space pages. One place this is known to be safe is when the task is
about to return to user space.

Josh originally wrote the PoC of this code; the last version he posted was
back in January:

   https://lore.kernel.org/all/cover.1737511963.git.jpoimboe@kernel.org/

That series contained everything from adding the new faultable user space
stack-walking code, deferring the stack walk, implementing SFrame support,
and fixing up x86 (VDSO), to adding both the kernel and user space sides
of perf support. But Josh ran out of time to work on it and I picked it
up. As there are several parts to the series, I also broke it up,
especially since some of its parts do not depend on each other.

This series contains only the core infrastructure that all the rest needs.
Of the 12 patches, only 2 are x86 specific. The rest is simply the unwinding
code that s390 can build against. I also moved the 2 x86-specific patches
to the end of the series.

Since multiple tracers (like perf, ftrace, bpf, etc.) can attach to the
deferred unwinder, and each of these tracers can attach to some or all of
the tasks to trace, there is a many-to-many relationship. This relationship
needs to be established in interrupt or NMI context, so it cannot rely on
any allocation. To handle this, a bitmask is used. There is a global bitmask
the size of a long, in which a single bit is allocated when a tracer
registers for deferred stack traces. The task struct also has a bitmask:
when a request for a deferred stack trace comes in from one of the tracers,
the bit corresponding to that tracer is set in the task's mask. As two of
the bits are used internally, at most 30 tracers on 32-bit systems or 62 on
64-bit systems may be registered at a given time. This should not be an
issue, as only one perf application or ftrace instance should request a bit.
BPF should also use only one bit and handle any multiplexing for its users.
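The bit-allocation scheme can be sketched as plain (non-atomic) C. The
kernel code naturally has to do these updates atomically; the names below
are illustrative, not the series' actual API:

```c
#include <limits.h>

#define RESERVED_TRACER_BITS	2	/* e.g. internal PENDING/USED bits */
#define MAX_TRACERS (sizeof(unsigned long) * CHAR_BIT - RESERVED_TRACER_BITS)

static unsigned long tracer_mask;	/* global: one bit per registered tracer */

/* Allocate the lowest free bit; returns the bit number, or -1 when all
 * 30 (32-bit) / 62 (64-bit) tracer slots are taken. */
static int tracer_register(void)
{
	for (unsigned int bit = 0; bit < MAX_TRACERS; bit++) {
		if (!(tracer_mask & (1UL << bit))) {
			tracer_mask |= 1UL << bit;
			return (int)bit;
		}
	}
	return -1;
}

static void tracer_unregister(int bit)
{
	tracer_mask &= ~(1UL << bit);
}

/* A request from tracer @bit marks the per-task mask; before returning to
 * user space the task calls back every tracer whose bit is set. */
static void task_request(unsigned long *task_mask, int bit)
{
	*task_mask |= 1UL << bit;
}
```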

When the first request for a deferred stack trace is made for a task, a
unique cookie is generated. This cookie is used as the identifier for the
user space stack trace. As the user space stack trace does not change while
the task is in the kernel, requests that come in after the first request and
before the task goes back to user space get the same cookie. If events are
dropped, and the drop spans a task exiting to user space and coming back
into the kernel, the new stack trace taken on the next return to user space
should not be associated with the events from before the drop.

When a tracer makes a request, it gets this cookie, and the bit for the
requesting tracer is set in the task's bitmask. Task work is used to have
the task run the callbacks before it goes back to user space. When it does,
it scans its bitmask and calls the callbacks of all tracers whose bits are
set. Each callback receives the user space stack trace as well as the
cookie that was used. It is up to the tracer whether to use the cookie to
map the user space stack trace back to the events that requested it.
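Ignoring all the atomicity and task-work plumbing, the cookie lifetime
described above boils down to the following sketch (names are made up for
illustration):

```c
/* Hypothetical, simplified per-task state. */
struct task_unwind {
	unsigned long cookie;	/* 0 means "no trace requested yet" */
	unsigned long mask;	/* which tracers asked for a callback */
};

static unsigned long next_cookie = 1;

/* The first request in a kernel entry mints a cookie; later requests
 * before the return to user space reuse it. */
static unsigned long unwind_request(struct task_unwind *t, int tracer_bit)
{
	if (!t->cookie)
		t->cookie = next_cookie++;
	t->mask |= 1UL << tracer_bit;
	return t->cookie;
}

static void noop_cb(int bit, unsigned long cookie)
{
	(void)bit;
	(void)cookie;
}

/* On return to user space: one stack walk, then fan out the callbacks to
 * every tracer whose bit is set, each getting the same cookie. */
static void unwind_exit_to_user(struct task_unwind *t,
				void (*cb)(int bit, unsigned long cookie))
{
	for (int bit = 0; bit < (int)(sizeof(long) * 8); bit++)
		if (t->mask & (1UL << bit))
			cb(bit, t->cookie);
	t->mask = 0;
	t->cookie = 0;	/* the next kernel entry gets a fresh cookie */
}
```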

That's the basic idea. Obviously there's more to it than the above
explanation, but each patch explains what it is doing, and it is broken up
step by step.

I run two SFrame meetings a month (one in an Asia-friendly timezone and the
other in a Europe-friendly one). We have developers from Google, Oracle,
Red Hat, IBM, EfficiOS, Meta, Microsoft, and more that attend. (If anyone
is interested in attending, let me know.) I have been running these since
December of 2024. Last year at GNU Cauldron, a few of us got together to
discuss the design. We are pretty confident that the current design is
sound. We have working code on top of this and have been testing it.

Since the s390 folks want to start working on this (they already have
SFrame patches from working on the prototypes), I would like this series to
be a separate branch based on top of v6.16-rc2. Then all the subsystems
that want to build on top of it can do so, as there is no real dependency
between them.

I have more patches on top of this series that add perf support, ftrace
support, SFrame support, and the x86 fixups (for the VDSO). Each of those
patch series can be worked on independently, but they all depend on this
series (although the x86-specific patches at the end are not necessarily
needed, at least by other architectures).

Please review, and if you are happy with the patches, let's get them into a
branch that we all can use. I'm happy to take it in my tree if I can get
acks on the x86 code. Or it can be in the tip tree as a separate branch on
top of 6.16-rc4 and I'll just base my work on top of that. Either way works
for me.

[1] https://sourceware.org/binutils/wiki/sframe

The code for this series is located here:

  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
unwind/core

Head SHA1: f14e91fa8019acefb146869eb465966a88ef6f3b

Changes since v13: https://lore.kernel.org/linux-trace-kernel/20250708012239.268642741@kernel.org/


- Folded patch 1 ("unwind_user: Add user space unwinding API") into patch 2
  ("unwind_user: Add frame pointer support") as there was no real reason
  to separate the two.

- Only check the alignment of cfa instead of cfa + frame->ra_off; that way
  both ra_off and fp_off should also be aligned.

- Incorporated some of Mathieu's changes from:
  https://lore.kernel.org/all/20250710164301.3094-2-mathieu.desnoyers@efficios.com/

- Updated the get_cookie() to be more like what Peter Zijlstra proposed:
  https://lore.kernel.org/all/20250715102912.GQ1613200@noisy.programming.kicks-ass.net/

- Added comment to explain struct unwind_task id.
 
- Tweaked KernelDoc of unwind_deferred_request()

- Removed the update converting the standalone pending field over to
  local_t, as that field goes away in later patches.

- Added WARN_ON_ONCE when unwind_deferred_request() is called from NMI
  context when an architecture doesn't support it. (Peter Zijlstra).

- Always do the try_cmpxchg() in unwind_deferred_request() instead of
  having a special case for !CAN_USE_IN_NMI as that logic is replaced in
  later patches.

- Fold ("unwind: Clear unwind_mask on exit back to user space") into
  ("unwind deferred: Use bitmask to determine which callbacks to call").

- Move unwind_mask field to the beginning of unwind_task_info for better
  alignment.

- Have unwind_deferred_request() return 1 for both pending and already
  executed. Basically it now returns 0 - queued and callback will be
  called; 1 - it is already pending or has already executed; -1 - an error
  happened. (Peter Zijlstra)

- Use atomic_long_fetch_andnot() instead of a try_cmpxchg() loop on
  info->unwind_mask when clearing pending bit. (Peter Zijlstra)

- Added atomic_long_fetch_or() to update the pending bit and new requests.
  (Peter Zijlstra)

- Added RESERVED_BITS for the unwind_mask to make sure that no work being
  cancelled clears one of those bits.

- The UNWIND_USED and UNWIND_PENDING are now enums.

- Have the locking of the linked list walk use guard(srcu_lite)
  (Peter Zijlstra)
 
- Fixed up due to the new atomic_long logic.

- Added a new patch to prevent a callback from being called twice when
  another tracer requests a callback after other callbacks have already
  been called. By using atomic_long_fetch_or() to set the work bit and
  PENDING bit, other work bits whose callbacks have already been called are
  left set in the task's unwind_mask. Added a new unwind_completed mask in
  the cache that is used to keep track of which callbacks have been called,
  so they are not called again until the task has left the kernel.

- Removed the handling of compat code and instead added a patch that lets
  the architecture skip the deferred stacktrace if the task cannot support
  it. Specifically, x86 will not do the deferred stacktrace if the task is
  running in 32-bit mode. SFrame doesn't currently support 32-bit x86
  anyway, and likely never will.



Josh Poimboeuf (4):
      unwind_user: Add user space unwinding API with frame pointer support
      unwind_user/deferred: Add unwind cache
      unwind_user/deferred: Add deferred unwinding interface
      unwind_user/x86: Enable frame pointer unwinding on x86

Steven Rostedt (8):
      unwind_user/deferred: Add unwind_user_faultable()
      unwind_user/deferred: Make unwind deferral requests NMI-safe
      unwind deferred: Use bitmask to determine which callbacks to call
      unwind deferred: Add unwind_completed mask to stop spurious callbacks
      unwind: Add USED bit to only have one conditional on way back to user space
      unwind deferred: Use SRCU unwind_deferred_task_work()
      unwind: Finish up unwind when a task exits
      unwind deferred/x86: Do not defer stack tracing for compat tasks

----
 MAINTAINERS                           |   8 +
 arch/Kconfig                          |   7 +
 arch/x86/Kconfig                      |   1 +
 arch/x86/include/asm/unwind_user.h    |  22 ++
 include/asm-generic/Kbuild            |   1 +
 include/asm-generic/unwind_user.h     |   5 +
 include/linux/entry-common.h          |   2 +
 include/linux/sched.h                 |   5 +
 include/linux/srcu.h                  |   4 +
 include/linux/unwind_deferred.h       |  86 ++++++++
 include/linux/unwind_deferred_types.h |  39 ++++
 include/linux/unwind_user.h           |  14 ++
 include/linux/unwind_user_types.h     |  44 ++++
 kernel/Makefile                       |   1 +
 kernel/exit.c                         |   2 +
 kernel/fork.c                         |   4 +
 kernel/unwind/Makefile                |   1 +
 kernel/unwind/deferred.c              | 365 ++++++++++++++++++++++++++++++++++
 kernel/unwind/user.c                  | 128 ++++++++++++
 19 files changed, 739 insertions(+)
 create mode 100644 arch/x86/include/asm/unwind_user.h
 create mode 100644 include/asm-generic/unwind_user.h
 create mode 100644 include/linux/unwind_deferred.h
 create mode 100644 include/linux/unwind_deferred_types.h
 create mode 100644 include/linux/unwind_user.h
 create mode 100644 include/linux/unwind_user_types.h
 create mode 100644 kernel/unwind/Makefile
 create mode 100644 kernel/unwind/deferred.c
 create mode 100644 kernel/unwind/user.c

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v14 01/12] unwind_user: Add user space unwinding API with frame pointer support
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 02/12] unwind_user/deferred: Add unwind_user_faultable() Steven Rostedt
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Josh Poimboeuf <jpoimboe@kernel.org>

Introduce a generic API for unwinding user stacks.

In order to expand user space unwinding to handle more complex scenarios,
such as deferred unwinding and reading user space information, create a
generic interface that all architectures supporting the various unwinding
methods can use.

This is an alternative to the simple stack_trace_save_user() API for
handling user space stack traces. It does not replace that interface, but
it will be used to expand the functionality of user space stack walking.

None of the structures introduced will be exposed to user space tooling.

Support for frame pointer unwinding is added. For an architecture to
support frame pointer unwinding it needs to enable
CONFIG_HAVE_UNWIND_USER_FP and define ARCH_INIT_USER_FP_FRAME.

By encoding the frame offsets in struct unwind_user_frame, much of this
code can also be reused for future unwinder implementations like sframe.
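As a sketch, an architecture's ARCH_INIT_USER_FP_FRAME might describe a
classic "push %rbp; mov %rsp,%rbp" prologue like this (modeled on the
x86-64 frame layout; the values are illustrative, not necessarily the
actual patch):

```c
#include <stdbool.h>

typedef int s32;	/* stand-in for the kernel type */

struct unwind_user_frame {
	s32 cfa_off;
	s32 ra_off;
	s32 fp_off;
	bool use_fp;
};

/* Hypothetical arch definition: with a standard frame, the CFA sits two
 * words above the frame pointer, the return address one word below the
 * CFA, and the caller's saved FP two words below the CFA. */
#define ARCH_INIT_USER_FP_FRAME			\
	.cfa_off = (s32)sizeof(long) * 2,	\
	.ra_off  = (s32)sizeof(long) * -1,	\
	.fp_off  = (s32)sizeof(long) * -2,	\
	.use_fp  = true,

static struct unwind_user_frame fp_frame = {
	ARCH_INIT_USER_FP_FRAME
};
```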

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Co-developed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/all/20250710164301.3094-2-mathieu.desnoyers@efficios.com/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Co-developed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012357.982692711@kernel.org

- Folded patch 1 ("unwind_user: Add user space unwinding API") into patch 2
  ("unwind_user: Add frame pointer support") as there was no real reason
  to separate the two.

  v13 of patch 1: https://lore.kernel.org/all/20250708012357.814042587@kernel.org/

- Only check the alignment of cfa instead of cfa + frame->ra_off; that way
  both ra_off and fp_off should also be aligned.

- Incorporated some of Mathieu's changes (see Link below his name)

 MAINTAINERS                       |   8 ++
 arch/Kconfig                      |   7 ++
 include/asm-generic/Kbuild        |   1 +
 include/asm-generic/unwind_user.h |   5 ++
 include/linux/unwind_user.h       |  14 ++++
 include/linux/unwind_user_types.h |  44 ++++++++++
 kernel/Makefile                   |   1 +
 kernel/unwind/Makefile            |   1 +
 kernel/unwind/user.c              | 128 ++++++++++++++++++++++++++++++
 9 files changed, 209 insertions(+)
 create mode 100644 include/asm-generic/unwind_user.h
 create mode 100644 include/linux/unwind_user.h
 create mode 100644 include/linux/unwind_user_types.h
 create mode 100644 kernel/unwind/Makefile
 create mode 100644 kernel/unwind/user.c

diff --git a/MAINTAINERS b/MAINTAINERS
index fad6cb025a19..370d780fd5f8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -25928,6 +25928,14 @@ F:	Documentation/driver-api/uio-howto.rst
 F:	drivers/uio/
 F:	include/linux/uio_driver.h
 
+USERSPACE STACK UNWINDING
+M:	Josh Poimboeuf <jpoimboe@kernel.org>
+M:	Steven Rostedt <rostedt@goodmis.org>
+S:	Maintained
+F:	include/linux/unwind*.h
+F:	kernel/unwind/
+
+
 UTIL-LINUX PACKAGE
 M:	Karel Zak <kzak@redhat.com>
 L:	util-linux@vger.kernel.org
diff --git a/arch/Kconfig b/arch/Kconfig
index a3308a220f86..8e3fd723bd74 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -435,6 +435,13 @@ config HAVE_HARDLOCKUP_DETECTOR_ARCH
 	  It uses the same command line parameters, and sysctl interface,
 	  as the generic hardlockup detectors.
 
+config UNWIND_USER
+	bool
+
+config HAVE_UNWIND_USER_FP
+	bool
+	select UNWIND_USER
+
 config HAVE_PERF_REGS
 	bool
 	help
diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
index 8675b7b4ad23..295c94a3ccc1 100644
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -59,6 +59,7 @@ mandatory-y += tlbflush.h
 mandatory-y += topology.h
 mandatory-y += trace_clock.h
 mandatory-y += uaccess.h
+mandatory-y += unwind_user.h
 mandatory-y += vermagic.h
 mandatory-y += vga.h
 mandatory-y += video.h
diff --git a/include/asm-generic/unwind_user.h b/include/asm-generic/unwind_user.h
new file mode 100644
index 000000000000..b8882b909944
--- /dev/null
+++ b/include/asm-generic/unwind_user.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_GENERIC_UNWIND_USER_H
+#define _ASM_GENERIC_UNWIND_USER_H
+
+#endif /* _ASM_GENERIC_UNWIND_USER_H */
diff --git a/include/linux/unwind_user.h b/include/linux/unwind_user.h
new file mode 100644
index 000000000000..7f7282516bf5
--- /dev/null
+++ b/include/linux/unwind_user.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_UNWIND_USER_H
+#define _LINUX_UNWIND_USER_H
+
+#include <linux/unwind_user_types.h>
+#include <asm/unwind_user.h>
+
+#ifndef ARCH_INIT_USER_FP_FRAME
+ #define ARCH_INIT_USER_FP_FRAME
+#endif
+
+int unwind_user(struct unwind_stacktrace *trace, unsigned int max_entries);
+
+#endif /* _LINUX_UNWIND_USER_H */
diff --git a/include/linux/unwind_user_types.h b/include/linux/unwind_user_types.h
new file mode 100644
index 000000000000..a449f15be890
--- /dev/null
+++ b/include/linux/unwind_user_types.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_UNWIND_USER_TYPES_H
+#define _LINUX_UNWIND_USER_TYPES_H
+
+#include <linux/types.h>
+
+/*
+ * Unwind types, listed in priority order: lower numbers are attempted first if
+ * available.
+ */
+enum unwind_user_type_bits {
+	UNWIND_USER_TYPE_FP_BIT =		0,
+
+	NR_UNWIND_USER_TYPE_BITS,
+};
+
+enum unwind_user_type {
+	/* Type "none" for the start of stack walk iteration. */
+	UNWIND_USER_TYPE_NONE =			0,
+	UNWIND_USER_TYPE_FP =			BIT(UNWIND_USER_TYPE_FP_BIT),
+};
+
+struct unwind_stacktrace {
+	unsigned int	nr;
+	unsigned long	*entries;
+};
+
+struct unwind_user_frame {
+	s32 cfa_off;
+	s32 ra_off;
+	s32 fp_off;
+	bool use_fp;
+};
+
+struct unwind_user_state {
+	unsigned long				ip;
+	unsigned long				sp;
+	unsigned long				fp;
+	enum unwind_user_type			current_type;
+	unsigned int				available_types;
+	bool					done;
+};
+
+#endif /* _LINUX_UNWIND_USER_TYPES_H */
diff --git a/kernel/Makefile b/kernel/Makefile
index 32e80dd626af..541186050251 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -55,6 +55,7 @@ obj-y += rcu/
 obj-y += livepatch/
 obj-y += dma/
 obj-y += entry/
+obj-y += unwind/
 obj-$(CONFIG_MODULES) += module/
 
 obj-$(CONFIG_KCMP) += kcmp.o
diff --git a/kernel/unwind/Makefile b/kernel/unwind/Makefile
new file mode 100644
index 000000000000..349ce3677526
--- /dev/null
+++ b/kernel/unwind/Makefile
@@ -0,0 +1 @@
+ obj-$(CONFIG_UNWIND_USER) += user.o
diff --git a/kernel/unwind/user.c b/kernel/unwind/user.c
new file mode 100644
index 000000000000..85b8c764d2f7
--- /dev/null
+++ b/kernel/unwind/user.c
@@ -0,0 +1,128 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+* Generic interfaces for unwinding user space
+*/
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/sched/task_stack.h>
+#include <linux/unwind_user.h>
+#include <linux/uaccess.h>
+
+static struct unwind_user_frame fp_frame = {
+	ARCH_INIT_USER_FP_FRAME
+};
+
+#define for_each_user_frame(state) \
+	for (unwind_user_start(state); !(state)->done; unwind_user_next(state))
+
+static int unwind_user_next_fp(struct unwind_user_state *state)
+{
+	struct unwind_user_frame *frame = &fp_frame;
+	unsigned long cfa, fp, ra = 0;
+	unsigned int shift;
+
+	if (frame->use_fp) {
+		if (state->fp < state->sp)
+			return -EINVAL;
+		cfa = state->fp;
+	} else {
+		cfa = state->sp;
+	}
+
+	/* Get the Canonical Frame Address (CFA) */
+	cfa += frame->cfa_off;
+
+	/* stack going in wrong direction? */
+	if (cfa <= state->sp)
+		return -EINVAL;
+
+	/* Make sure that the address is word aligned */
+	shift = sizeof(long) == 4 ? 2 : 3;
+	if (cfa & ((1 << shift) - 1))
+		return -EINVAL;
+
+	/* Find the Return Address (RA) */
+	if (get_user(ra, (unsigned long *)(cfa + frame->ra_off)))
+		return -EINVAL;
+
+	if (frame->fp_off && get_user(fp, (unsigned long __user *)(cfa + frame->fp_off)))
+		return -EINVAL;
+
+	state->ip = ra;
+	state->sp = cfa;
+	if (frame->fp_off)
+		state->fp = fp;
+	return 0;
+}
+
+static int unwind_user_next(struct unwind_user_state *state)
+{
+	unsigned long iter_mask = state->available_types;
+	unsigned int bit;
+
+	if (state->done)
+		return -EINVAL;
+
+	for_each_set_bit(bit, &iter_mask, NR_UNWIND_USER_TYPE_BITS) {
+		enum unwind_user_type type = BIT(bit);
+
+		state->current_type = type;
+		switch (type) {
+		case UNWIND_USER_TYPE_FP:
+			if (!unwind_user_next_fp(state))
+				return 0;
+			continue;
+		default:
+			WARN_ONCE(1, "Undefined unwind bit %d", bit);
+			break;
+		}
+		break;
+	}
+
+	/* No successful unwind method. */
+	state->current_type = UNWIND_USER_TYPE_NONE;
+	state->done = true;
+	return -EINVAL;
+}
+
+static int unwind_user_start(struct unwind_user_state *state)
+{
+	struct pt_regs *regs = task_pt_regs(current);
+
+	memset(state, 0, sizeof(*state));
+
+	if ((current->flags & PF_KTHREAD) || !user_mode(regs)) {
+		state->done = true;
+		return -EINVAL;
+	}
+
+	if (IS_ENABLED(CONFIG_HAVE_UNWIND_USER_FP))
+		state->available_types |= UNWIND_USER_TYPE_FP;
+
+	state->ip = instruction_pointer(regs);
+	state->sp = user_stack_pointer(regs);
+	state->fp = frame_pointer(regs);
+
+	return 0;
+}
+
+int unwind_user(struct unwind_stacktrace *trace, unsigned int max_entries)
+{
+	struct unwind_user_state state;
+
+	trace->nr = 0;
+
+	if (!max_entries)
+		return -EINVAL;
+
+	if (current->flags & PF_KTHREAD)
+		return 0;
+
+	for_each_user_frame(&state) {
+		trace->entries[trace->nr++] = state.ip;
+		if (trace->nr >= max_entries)
+			break;
+	}
+
+	return 0;
+}
-- 
2.47.2




* [PATCH v14 02/12] unwind_user/deferred: Add unwind_user_faultable()
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 01/12] unwind_user: Add user space unwinding API with frame pointer support Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 03/12] unwind_user/deferred: Add unwind cache Steven Rostedt
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

Add a new API called unwind_user_faultable() to retrieve a user space
callstack. The difference between this and the current user space stack
tracer is that this one must be called from a faultable context, as it may
use routines that access user space data that needs to be faulted in.

It can safely be called when entering or exiting a system call, as user
space memory can still be faulted in there.

This code is based on Josh Poimboeuf's deferred unwinding work:

Link: https://lore.kernel.org/all/6052e8487746603bdb29b65f4033e739092d9925.1737511963.git.jpoimboe@kernel.org/

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/sched.h                 |  5 +++
 include/linux/unwind_deferred.h       | 24 +++++++++++
 include/linux/unwind_deferred_types.h |  9 ++++
 kernel/fork.c                         |  4 ++
 kernel/unwind/Makefile                |  2 +-
 kernel/unwind/deferred.c              | 60 +++++++++++++++++++++++++++
 6 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/unwind_deferred.h
 create mode 100644 include/linux/unwind_deferred_types.h
 create mode 100644 kernel/unwind/deferred.c

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4f78a64beb52..59fdf7d9bb1e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -46,6 +46,7 @@
 #include <linux/rv.h>
 #include <linux/uidgid_types.h>
 #include <linux/tracepoint-defs.h>
+#include <linux/unwind_deferred_types.h>
 #include <asm/kmap_size.h>
 
 /* task_struct member predeclarations (sorted alphabetically): */
@@ -1654,6 +1655,10 @@ struct task_struct {
 	struct user_event_mm		*user_event_mm;
 #endif
 
+#ifdef CONFIG_UNWIND_USER
+	struct unwind_task_info		unwind_info;
+#endif
+
 	/* CPU-specific state of this task: */
 	struct thread_struct		thread;
 
diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
new file mode 100644
index 000000000000..a5f6e8f8a1a2
--- /dev/null
+++ b/include/linux/unwind_deferred.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_UNWIND_USER_DEFERRED_H
+#define _LINUX_UNWIND_USER_DEFERRED_H
+
+#include <linux/unwind_user.h>
+#include <linux/unwind_deferred_types.h>
+
+#ifdef CONFIG_UNWIND_USER
+
+void unwind_task_init(struct task_struct *task);
+void unwind_task_free(struct task_struct *task);
+
+int unwind_user_faultable(struct unwind_stacktrace *trace);
+
+#else /* !CONFIG_UNWIND_USER */
+
+static inline void unwind_task_init(struct task_struct *task) {}
+static inline void unwind_task_free(struct task_struct *task) {}
+
+static inline int unwind_user_faultable(struct unwind_stacktrace *trace) { return -ENOSYS; }
+
+#endif /* !CONFIG_UNWIND_USER */
+
+#endif /* _LINUX_UNWIND_USER_DEFERRED_H */
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
new file mode 100644
index 000000000000..aa32db574e43
--- /dev/null
+++ b/include/linux/unwind_deferred_types.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_UNWIND_USER_DEFERRED_TYPES_H
+#define _LINUX_UNWIND_USER_DEFERRED_TYPES_H
+
+struct unwind_task_info {
+	unsigned long		*entries;
+};
+
+#endif /* _LINUX_UNWIND_USER_DEFERRED_TYPES_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index 1ee8eb11f38b..3341d50c61f2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -105,6 +105,7 @@
 #include <uapi/linux/pidfd.h>
 #include <linux/pidfs.h>
 #include <linux/tick.h>
+#include <linux/unwind_deferred.h>
 
 #include <asm/pgalloc.h>
 #include <linux/uaccess.h>
@@ -732,6 +733,7 @@ void __put_task_struct(struct task_struct *tsk)
 	WARN_ON(refcount_read(&tsk->usage));
 	WARN_ON(tsk == current);
 
+	unwind_task_free(tsk);
 	sched_ext_free(tsk);
 	io_uring_free(tsk);
 	cgroup_free(tsk);
@@ -2135,6 +2137,8 @@ __latent_entropy struct task_struct *copy_process(
 	p->bpf_ctx = NULL;
 #endif
 
+	unwind_task_init(p);
+
 	/* Perform scheduler related setup. Assign this task to a CPU. */
 	retval = sched_fork(clone_flags, p);
 	if (retval)
diff --git a/kernel/unwind/Makefile b/kernel/unwind/Makefile
index 349ce3677526..eae37bea54fd 100644
--- a/kernel/unwind/Makefile
+++ b/kernel/unwind/Makefile
@@ -1 +1 @@
- obj-$(CONFIG_UNWIND_USER) += user.o
+ obj-$(CONFIG_UNWIND_USER)	+= user.o deferred.o
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
new file mode 100644
index 000000000000..a0badbeb3cc1
--- /dev/null
+++ b/kernel/unwind/deferred.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Deferred user space unwinding
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/unwind_deferred.h>
+
+#define UNWIND_MAX_ENTRIES 512
+
+/**
+ * unwind_user_faultable - Produce a user stacktrace in faultable context
+ * @trace: The descriptor that will store the user stacktrace
+ *
+ * This must be called in a known faultable context (usually when entering
+ * or exiting user space). Depending on the available implementations
+ * the @trace will be loaded with the addresses of the user space stacktrace
+ * if it can be found.
+ *
+ * Return: 0 on success and negative on error
+ *         On success @trace will contain the user space stacktrace
+ */
+int unwind_user_faultable(struct unwind_stacktrace *trace)
+{
+	struct unwind_task_info *info = &current->unwind_info;
+
+	/* Should always be called from faultable context */
+	might_fault();
+
+	if (current->flags & PF_EXITING)
+		return -EINVAL;
+
+	if (!info->entries) {
+		info->entries = kmalloc_array(UNWIND_MAX_ENTRIES, sizeof(long),
+					      GFP_KERNEL);
+		if (!info->entries)
+			return -ENOMEM;
+	}
+
+	trace->nr = 0;
+	trace->entries = info->entries;
+	unwind_user(trace, UNWIND_MAX_ENTRIES);
+
+	return 0;
+}
+
+void unwind_task_init(struct task_struct *task)
+{
+	struct unwind_task_info *info = &task->unwind_info;
+
+	memset(info, 0, sizeof(*info));
+}
+
+void unwind_task_free(struct task_struct *task)
+{
+	struct unwind_task_info *info = &task->unwind_info;
+
+	kfree(info->entries);
+}
-- 
2.47.2




* [PATCH v14 03/12] unwind_user/deferred: Add unwind cache
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 01/12] unwind_user: Add user space unwinding API with frame pointer support Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 02/12] unwind_user/deferred: Add unwind_user_faultable() Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 04/12] unwind_user/deferred: Add deferred unwinding interface Steven Rostedt
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Josh Poimboeuf <jpoimboe@kernel.org>

Cache the results of the unwind to ensure the unwind is only performed
once, even when called by multiple tracers.

The cache's nr_entries gets cleared every time the task exits the kernel.
When a stacktrace is requested, nr_entries is set to the number of entries
in the stacktrace. If another stacktrace is requested while nr_entries is
not zero, the cache already contains the stacktrace that would be
retrieved, so the walk is not performed again and the cached entries are
given to the caller.
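The caching rule spelled out above amounts to the following standalone
sketch, where a fake walk stands in for the real unwinder and the structure
names are simplified for illustration:

```c
#define MAX_ENTRIES 512

/* Simplified stand-ins for the kernel structures (illustrative only). */
struct unwind_cache {
	unsigned int  nr_entries;	/* 0 => no trace taken this entry */
	unsigned long entries[MAX_ENTRIES];
};

struct stacktrace {
	unsigned int   nr;
	unsigned long *entries;
};

static int walks;	/* counts how often the (fake) stack walk runs */

static void fake_stack_walk(struct stacktrace *trace)
{
	walks++;
	trace->entries[0] = 0xdeadbeef;
	trace->nr = 1;
}

/* Only the first request per kernel entry does the real walk; later
 * requests reuse the cached entries until nr_entries is reset to 0
 * on the return to user space. */
static void get_trace(struct unwind_cache *cache, struct stacktrace *trace)
{
	trace->entries = cache->entries;

	if (cache->nr_entries) {
		trace->nr = cache->nr_entries;	/* cache hit */
		return;
	}

	fake_stack_walk(trace);
	cache->nr_entries = trace->nr;
}
```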

Co-developed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/entry-common.h          |  2 ++
 include/linux/unwind_deferred.h       |  8 +++++++
 include/linux/unwind_deferred_types.h |  7 +++++-
 kernel/unwind/deferred.c              | 31 +++++++++++++++++++++------
 4 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index f94f3fdf15fc..8908b8eeb99b 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -12,6 +12,7 @@
 #include <linux/resume_user_mode.h>
 #include <linux/tick.h>
 #include <linux/kmsan.h>
+#include <linux/unwind_deferred.h>
 
 #include <asm/entry-common.h>
 #include <asm/syscall.h>
@@ -362,6 +363,7 @@ static __always_inline void exit_to_user_mode(void)
 	lockdep_hardirqs_on_prepare();
 	instrumentation_end();
 
+	unwind_reset_info();
 	user_enter_irqoff();
 	arch_exit_to_user_mode();
 	lockdep_hardirqs_on(CALLER_ADDR0);
diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index a5f6e8f8a1a2..baacf4a1eb4c 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -12,6 +12,12 @@ void unwind_task_free(struct task_struct *task);
 
 int unwind_user_faultable(struct unwind_stacktrace *trace);
 
+static __always_inline void unwind_reset_info(void)
+{
+	if (unlikely(current->unwind_info.cache))
+		current->unwind_info.cache->nr_entries = 0;
+}
+
 #else /* !CONFIG_UNWIND_USER */
 
 static inline void unwind_task_init(struct task_struct *task) {}
@@ -19,6 +25,8 @@ static inline void unwind_task_free(struct task_struct *task) {}
 
 static inline int unwind_user_faultable(struct unwind_stacktrace *trace) { return -ENOSYS; }
 
+static inline void unwind_reset_info(void) {}
+
 #endif /* !CONFIG_UNWIND_USER */
 
 #endif /* _LINUX_UNWIND_USER_DEFERRED_H */
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
index aa32db574e43..db5b54b18828 100644
--- a/include/linux/unwind_deferred_types.h
+++ b/include/linux/unwind_deferred_types.h
@@ -2,8 +2,13 @@
 #ifndef _LINUX_UNWIND_USER_DEFERRED_TYPES_H
 #define _LINUX_UNWIND_USER_DEFERRED_TYPES_H
 
+struct unwind_cache {
+	unsigned int		nr_entries;
+	unsigned long		entries[];
+};
+
 struct unwind_task_info {
-	unsigned long		*entries;
+	struct unwind_cache	*cache;
 };
 
 #endif /* _LINUX_UNWIND_USER_DEFERRED_TYPES_H */
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index a0badbeb3cc1..96368a5aa522 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -4,10 +4,13 @@
  */
 #include <linux/kernel.h>
 #include <linux/sched.h>
+#include <linux/sizes.h>
 #include <linux/slab.h>
 #include <linux/unwind_deferred.h>
 
-#define UNWIND_MAX_ENTRIES 512
+/* Make the cache fit in a 4K page */
+#define UNWIND_MAX_ENTRIES					\
+	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
 
 /**
  * unwind_user_faultable - Produce a user stacktrace in faultable context
@@ -24,6 +27,7 @@
 int unwind_user_faultable(struct unwind_stacktrace *trace)
 {
 	struct unwind_task_info *info = &current->unwind_info;
+	struct unwind_cache *cache;
 
 	/* Should always be called from faultable context */
 	might_fault();
@@ -31,17 +35,30 @@ int unwind_user_faultable(struct unwind_stacktrace *trace)
 	if (current->flags & PF_EXITING)
 		return -EINVAL;
 
-	if (!info->entries) {
-		info->entries = kmalloc_array(UNWIND_MAX_ENTRIES, sizeof(long),
-					      GFP_KERNEL);
-		if (!info->entries)
+	if (!info->cache) {
+		info->cache = kzalloc(struct_size(cache, entries, UNWIND_MAX_ENTRIES),
+				      GFP_KERNEL);
+		if (!info->cache)
 			return -ENOMEM;
 	}
 
+	cache = info->cache;
+	trace->entries = cache->entries;
+
+	if (cache->nr_entries) {
+		/*
+		 * The user stack has already been previously unwound in this
+		 * entry context.  Skip the unwind and use the cache.
+		 */
+		trace->nr = cache->nr_entries;
+		return 0;
+	}
+
 	trace->nr = 0;
-	trace->entries = info->entries;
 	unwind_user(trace, UNWIND_MAX_ENTRIES);
 
+	cache->nr_entries = trace->nr;
+
 	return 0;
 }
 
@@ -56,5 +73,5 @@ void unwind_task_free(struct task_struct *task)
 {
 	struct unwind_task_info *info = &task->unwind_info;
 
-	kfree(info->entries);
+	kfree(info->cache);
 }
-- 
2.47.2




* [PATCH v14 04/12] unwind_user/deferred: Add deferred unwinding interface
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (2 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 03/12] unwind_user/deferred: Add unwind cache Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 05/12] unwind_user/deferred: Make unwind deferral requests NMI-safe Steven Rostedt
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Josh Poimboeuf <jpoimboe@kernel.org>

Add an interface for scheduling task work to unwind the user space stack
before returning to user space. This solves several problems for its
callers:

  - Ensure the unwind happens in task context even if the caller may be
    running in interrupt context.

  - Avoid duplicate unwinds, whether called multiple times by the same
    caller or by different callers.

  - Create a "context cookie" which allows trace post-processing to
    correlate kernel unwinds/traces with the user unwind.

A concept of a "cookie" is created to detect when the stacktrace is the
same. A cookie is generated the first time a user space stacktrace is
requested after the task enters the kernel. As the stacktrace is saved on
the task_struct while the task is in the kernel, if another request comes
in while the cookie is still the same, the saved stacktrace is used and
does not have to be regenerated.

The cookie is passed to the caller on request, and when the stacktrace is
generated upon returning to user space, the requester's callback is called
with the cookie as well as the stacktrace. The cookie is cleared when the
task goes back to user space. Note, this currently adds another
conditional to the unwind_reset_info() path that is always called when
returning to user space, but future changes will bring this back down to a
single conditional.

A global list is created and protected by a global mutex that holds
tracers that register with the unwind infrastructure. The number of
registered tracers will be limited in future changes. Each perf program or
ftrace instance will register its own descriptor to use for deferred
unwind stack traces.

Note, the function unwind_deferred_task_work() that gets called when
returning to user space uses a global mutex for synchronization, which
will cause a big bottleneck. This will be replaced by SRCU, but that
change adds some complex synchronization that deserves its own commit.

Co-developed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012358.659547758@kernel.org

- Updated the get_cookie() to be more like what Peter Zijlstra proposed:
  https://lore.kernel.org/all/20250715102912.GQ1613200@noisy.programming.kicks-ass.net/

- Added comment to explain struct unwind_task id.

- Tweaked KernelDoc of unwind_deferred_request()

 include/linux/unwind_deferred.h       |  24 ++++
 include/linux/unwind_deferred_types.h |  24 ++++
 kernel/unwind/deferred.c              | 156 +++++++++++++++++++++++++-
 3 files changed, 203 insertions(+), 1 deletion(-)

diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index baacf4a1eb4c..14efd8c027aa 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -2,9 +2,19 @@
 #ifndef _LINUX_UNWIND_USER_DEFERRED_H
 #define _LINUX_UNWIND_USER_DEFERRED_H
 
+#include <linux/task_work.h>
 #include <linux/unwind_user.h>
 #include <linux/unwind_deferred_types.h>
 
+struct unwind_work;
+
+typedef void (*unwind_callback_t)(struct unwind_work *work, struct unwind_stacktrace *trace, u64 cookie);
+
+struct unwind_work {
+	struct list_head		list;
+	unwind_callback_t		func;
+};
+
 #ifdef CONFIG_UNWIND_USER
 
 void unwind_task_init(struct task_struct *task);
@@ -12,8 +22,19 @@ void unwind_task_free(struct task_struct *task);
 
 int unwind_user_faultable(struct unwind_stacktrace *trace);
 
+int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func);
+int unwind_deferred_request(struct unwind_work *work, u64 *cookie);
+void unwind_deferred_cancel(struct unwind_work *work);
+
 static __always_inline void unwind_reset_info(void)
 {
+	if (unlikely(current->unwind_info.id.id))
+		current->unwind_info.id.id = 0;
+	/*
+	 * As unwind_user_faultable() can be called directly and
+	 * depends on nr_entries being cleared on exit to user,
+	 * this needs to be a separate conditional.
+	 */
 	if (unlikely(current->unwind_info.cache))
 		current->unwind_info.cache->nr_entries = 0;
 }
@@ -24,6 +45,9 @@ static inline void unwind_task_init(struct task_struct *task) {}
 static inline void unwind_task_free(struct task_struct *task) {}
 
 static inline int unwind_user_faultable(struct unwind_stacktrace *trace) { return -ENOSYS; }
+static inline int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func) { return -ENOSYS; }
+static inline int unwind_deferred_request(struct unwind_work *work, u64 *cookie) { return -ENOSYS; }
+static inline void unwind_deferred_cancel(struct unwind_work *work) {}
 
 static inline void unwind_reset_info(void) {}
 
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
index db5b54b18828..104c477d5609 100644
--- a/include/linux/unwind_deferred_types.h
+++ b/include/linux/unwind_deferred_types.h
@@ -7,8 +7,32 @@ struct unwind_cache {
 	unsigned long		entries[];
 };
 
+/*
+ * The unwind_task_id is a unique identifier that maps to a user space
+ * stacktrace. It is generated the first time a deferred user space
+ * stacktrace is requested after a task has entered the kernel and
+ * is cleared to zero when it exits. The mapped id will be a non-zero
+ * number.
+ *
+ * To simplify the generation of the 64 bit number, 32 bits will be
+ * the CPU it was generated on, and the other 32 bits will be a per
+ * cpu counter that gets incremented by two every time a new identifier
+ * is generated. The LSB will always be set to keep the value
+ * from being zero.
+ */
+union unwind_task_id {
+	struct {
+		u32		cpu;
+		u32		cnt;
+	};
+	u64			id;
+};
+
 struct unwind_task_info {
 	struct unwind_cache	*cache;
+	struct callback_head	work;
+	union unwind_task_id	id;
+	int			pending;
 };
 
 #endif /* _LINUX_UNWIND_USER_DEFERRED_TYPES_H */
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 96368a5aa522..2cbae2ada309 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -2,16 +2,63 @@
 /*
  * Deferred user space unwinding
  */
+#include <linux/sched/task_stack.h>
+#include <linux/unwind_deferred.h>
+#include <linux/sched/clock.h>
+#include <linux/task_work.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
-#include <linux/unwind_deferred.h>
+#include <linux/mm.h>
 
 /* Make the cache fit in a 4K page */
 #define UNWIND_MAX_ENTRIES					\
 	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
 
+/* Guards adding to and reading the list of callbacks */
+static DEFINE_MUTEX(callback_mutex);
+static LIST_HEAD(callbacks);
+
+/*
+ * This is a unique percpu identifier for a given task entry context.
+ * Conceptually, it's incremented every time the CPU enters the kernel from
+ * user space, so that each "entry context" on the CPU gets a unique ID.  In
+ * reality, as an optimization, it's only incremented on demand for the first
+ * deferred unwind request after a given entry-from-user.
+ *
+ * It's combined with the CPU id to make a systemwide-unique "context cookie".
+ */
+static DEFINE_PER_CPU(u32, unwind_ctx_ctr);
+
+/*
+ * The context cookie is a unique identifier that is assigned to a user
+ * space stacktrace. As the user space stacktrace remains the same while
+ * the task is in the kernel, the cookie is an identifier for the stacktrace.
+ * Although it is possible for the stacktrace to get another cookie if another
+ * request is made after the cookie was cleared and before reentering user
+ * space.
+ */
+static u64 get_cookie(struct unwind_task_info *info)
+{
+	u32 cnt = 1;
+	u32 old = 0;
+
+	if (info->id.cpu)
+		return info->id.id;
+
+	/* LSB is always set to ensure 0 is an invalid value */
+	cnt |= __this_cpu_read(unwind_ctx_ctr) + 2;
+	if (try_cmpxchg(&info->id.cnt, &old, cnt)) {
+		/* Update the per cpu counter */
+		__this_cpu_write(unwind_ctx_ctr, cnt);
+	}
+	/* Interrupts are disabled, the CPU will always be the same */
+	info->id.cpu = smp_processor_id() + 1; /* Must be non zero */
+
+	return info->id.id;
+}
+
 /**
  * unwind_user_faultable - Produce a user stacktrace in faultable context
  * @trace: The descriptor that will store the user stacktrace
@@ -62,11 +109,117 @@ int unwind_user_faultable(struct unwind_stacktrace *trace)
 	return 0;
 }
 
+static void unwind_deferred_task_work(struct callback_head *head)
+{
+	struct unwind_task_info *info = container_of(head, struct unwind_task_info, work);
+	struct unwind_stacktrace trace;
+	struct unwind_work *work;
+	u64 cookie;
+
+	if (WARN_ON_ONCE(!info->pending))
+		return;
+
+	/* Allow work to come in again */
+	WRITE_ONCE(info->pending, 0);
+
+	/*
+	 * From here on out, the callback must always be called, even if it's
+	 * just an empty trace.
+	 */
+	trace.nr = 0;
+	trace.entries = NULL;
+
+	unwind_user_faultable(&trace);
+
+	cookie = info->id.id;
+
+	guard(mutex)(&callback_mutex);
+	list_for_each_entry(work, &callbacks, list) {
+		work->func(work, &trace, cookie);
+	}
+}
+
+/**
+ * unwind_deferred_request - Request a user stacktrace on task kernel exit
+ * @work: Unwind descriptor requesting the trace
+ * @cookie: The cookie of the first request made for this task
+ *
+ * Schedule a user space unwind to be done in task work before exiting the
+ * kernel.
+ *
+ * The returned @cookie output is the generated cookie of the very first
+ * request for a user space stacktrace for this task since it entered the
+ * kernel. It can be from a request by any caller of this infrastructure.
+ * Its value will also be passed to the callback function.  It can be
+ * used to stitch kernel and user stack traces together in post-processing.
+ *
+ * It's valid to call this function multiple times for the same @work within
+ * the same task entry context.  Each call will return the same cookie
+ * while the task hasn't left the kernel. If the callback is not pending
+ * because it has already been previously called for the same entry context,
+ * it will be called again with the same stack trace and cookie.
+ *
+ * Return: 1 if the the callback was already queued.
+ *         0 if the callback successfully was queued.
+ *         Negative if there's an error.
+ *         @cookie holds the cookie of the first request by any user
+ */
+int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
+{
+	struct unwind_task_info *info = &current->unwind_info;
+	int ret;
+
+	*cookie = 0;
+
+	if (WARN_ON_ONCE(in_nmi()))
+		return -EINVAL;
+
+	if ((current->flags & (PF_KTHREAD | PF_EXITING)) ||
+	    !user_mode(task_pt_regs(current)))
+		return -EINVAL;
+
+	guard(irqsave)();
+
+	*cookie = get_cookie(info);
+
+	/* callback already pending? */
+	if (info->pending)
+		return 1;
+
+	/* The work has been claimed, now schedule it. */
+	ret = task_work_add(current, &info->work, TWA_RESUME);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
+	info->pending = 1;
+	return 0;
+}
+
+void unwind_deferred_cancel(struct unwind_work *work)
+{
+	if (!work)
+		return;
+
+	guard(mutex)(&callback_mutex);
+	list_del(&work->list);
+}
+
+int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
+{
+	memset(work, 0, sizeof(*work));
+
+	guard(mutex)(&callback_mutex);
+	list_add(&work->list, &callbacks);
+	work->func = func;
+	return 0;
+}
+
 void unwind_task_init(struct task_struct *task)
 {
 	struct unwind_task_info *info = &task->unwind_info;
 
 	memset(info, 0, sizeof(*info));
+	init_task_work(&info->work, unwind_deferred_task_work);
 }
 
 void unwind_task_free(struct task_struct *task)
@@ -74,4 +227,5 @@ void unwind_task_free(struct task_struct *task)
 	struct unwind_task_info *info = &task->unwind_info;
 
 	kfree(info->cache);
+	task_work_cancel(task, &info->work);
 }
-- 
2.47.2




* [PATCH v14 05/12] unwind_user/deferred: Make unwind deferral requests NMI-safe
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (3 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 04/12] unwind_user/deferred: Add deferred unwinding interface Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 06/12] unwind deferred: Use bitmask to determine which callbacks to call Steven Rostedt
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

Make unwind_deferred_request() NMI-safe so tracers in NMI context can
call it and safely request a user space stacktrace when the task exits.

Note, this is only allowed for architectures that implement a safe
cmpxchg. If an architecture requests a deferred stack trace from NMI
context that does not support a safe NMI cmpxchg, it will get an -EINVAL
and trigger a warning. For those architectures, another method (perhaps
an irqwork) would be needed to request a deferred user space stack trace.
That can be dealt with later if one of these architectures requires this
feature.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012358.831631671@kernel.org

- Removed update to convert pending over to local_t as the standalone
  pending field is going away in subsequent patches.

- Added WARN_ON when unwind_deferred_request() is called from NMI context
  when an architecture doesn't support it. (Peter Zijlstra).

- Always do the try_cmpxchg() in unwind_deferred_request() instead of
  having a special case for !CAN_USE_IN_NMI as that logic will be
  replaced in coming patches (this simplifies the code).

 kernel/unwind/deferred.c | 52 +++++++++++++++++++++++++++++++++-------
 1 file changed, 44 insertions(+), 8 deletions(-)

diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 2cbae2ada309..c5ac087d2396 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -12,6 +12,31 @@
 #include <linux/slab.h>
 #include <linux/mm.h>
 
+/*
+ * For requesting a deferred user space stack trace from NMI context
+ * the architecture must support a safe cmpxchg in NMI context.
+ * For those architectures that do not have that, then it cannot ask
+ * for a deferred user space stack trace from an NMI context. If it
+ * does, then it will get -EINVAL.
+ */
+#if defined(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG)
+# define CAN_USE_IN_NMI		1
+static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
+{
+	u32 old = 0;
+
+	return try_cmpxchg(&info->id.cnt, &old, cnt);
+}
+#else
+# define CAN_USE_IN_NMI		0
+/* When NMIs are not allowed, this always succeeds */
+static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
+{
+	info->id.cnt = cnt;
+	return true;
+}
+#endif
+
 /* Make the cache fit in a 4K page */
 #define UNWIND_MAX_ENTRIES					\
 	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
@@ -42,14 +67,13 @@ static DEFINE_PER_CPU(u32, unwind_ctx_ctr);
 static u64 get_cookie(struct unwind_task_info *info)
 {
 	u32 cnt = 1;
-	u32 old = 0;
 
 	if (info->id.cpu)
 		return info->id.id;
 
 	/* LSB is always set to ensure 0 is an invalid value */
 	cnt |= __this_cpu_read(unwind_ctx_ctr) + 2;
-	if (try_cmpxchg(&info->id.cnt, &old, cnt)) {
+	if (try_assign_cnt(info, cnt)) {
 		/* Update the per cpu counter */
 		__this_cpu_write(unwind_ctx_ctr, cnt);
 	}
@@ -167,31 +191,43 @@ static void unwind_deferred_task_work(struct callback_head *head)
 int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 {
 	struct unwind_task_info *info = &current->unwind_info;
+	long pending;
 	int ret;
 
 	*cookie = 0;
 
-	if (WARN_ON_ONCE(in_nmi()))
-		return -EINVAL;
-
 	if ((current->flags & (PF_KTHREAD | PF_EXITING)) ||
 	    !user_mode(task_pt_regs(current)))
 		return -EINVAL;
 
+	/*
+	 * NMI requires having safe cmpxchg operations.
+	 * Trigger a warning to make it obvious that an architecture
+	 * is using this in NMI when it should not be.
+	 */
+	if (WARN_ON_ONCE(!CAN_USE_IN_NMI && in_nmi()))
+		return -EINVAL;
+
 	guard(irqsave)();
 
 	*cookie = get_cookie(info);
 
 	/* callback already pending? */
-	if (info->pending)
+	pending = READ_ONCE(info->pending);
+	if (pending)
+		return 1;
+
+	/* Claim the work unless an NMI just now swooped in to do so. */
+	if (!try_cmpxchg(&info->pending, &pending, 1))
 		return 1;
 
 	/* The work has been claimed, now schedule it. */
 	ret = task_work_add(current, &info->work, TWA_RESUME);
-	if (WARN_ON_ONCE(ret))
+	if (WARN_ON_ONCE(ret)) {
+		WRITE_ONCE(info->pending, 0);
 		return ret;
+	}
 
-	info->pending = 1;
 	return 0;
 }
 
-- 
2.47.2




* [PATCH v14 06/12] unwind deferred: Use bitmask to determine which callbacks to call
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (4 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 05/12] unwind_user/deferred: Make unwind deferral requests NMI-safe Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 07/12] unwind deferred: Add unwind_completed mask to stop spurious callbacks Steven Rostedt
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

In order to know which registered callback requested a stacktrace for when
the task goes back to user space, add a bitmask to keep track of all
registered tracers. The bitmask is the size of a long, which means that on
a 32 bit machine it can have at most 32 registered tracers, and on 64 bit,
at most 64. This should not be an issue as there should not be more than
10 (unless BPF can abuse this?).

When a tracer registers with unwind_deferred_init() it will get a bit
number assigned to it. When a tracer requests a stacktrace, it will have
its bit set within the task_struct. When the task returns back to user
space, it will call the callbacks for all the registered tracers where
their bits are set in the task's mask.

When a tracer is removed by the unwind_deferred_cancel() all current tasks
will clear the associated bit, just in case another tracer gets registered
immediately afterward and then gets their callback called unexpectedly.

To prevent live locks that could happen if an event occurring between the
task_work and the task going back to user space triggers another deferred
unwind, have the unwind_mask get cleared on exit to user space and not
after the callback is made.

Move the pending bit from a value on the task_struct to bit zero of the
unwind_mask (saves space on the task_struct). This will allow modifying
the pending bit along with the work bits atomically.

Instead of clearing a work's bit after its callback is called, clearing is
delayed until exit. If the work is requested again, the task_work is not
queued again and the requester is notified that the callback has already
been called by the return of a positive number (the same as if it was
already pending).

The pending bit is cleared before calling the callback functions but the
current work bits remain. If one of the called works registers again, it
will not trigger a task_work if its bit is still present in the task's
unwind_mask.

If a new work requests a deferred unwind, then it will set both the
pending bit and its own bit. Note this will also cause any work that was
previously queued and had their callback already executed to be executed
again. Future work will remove these spurious callbacks.

The use of atomic_long bit operations were suggested by Peter Zijlstra:
Link: https://lore.kernel.org/all/20250715102912.GQ1613200@noisy.programming.kicks-ass.net/
The unwind_mask could not be converted to atomic_long_t due to atomic_long
not having all the bit operations needed by unwind_mask. Instead, it
follows other use cases in the kernel and just typecasts the unwind_mask
to atomic_long_t when using the two atomic_long functions.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012359.002924698@kernel.org

- Fold ("unwind: Clear unwind_mask on exit back to user space") into this
  commit.

- Move unwind_mask field to the beginning of unwind_task_info for better
  alignment.

Changes since v13 (of folded patch): https://lore.kernel.org/20250708012359.345060579@kernel.org

- Have unwind_deferred_request() return 1 for both pending and already
  executed. Basically it now returns 0 - queued and callback will be
  called; 1 - it is already pending or has already executed; -1 - an error
  happened. (Peter Zijlstra)

- Use atomic_long_fetch_andnot() instead of a try_cmpxchg() loop on
  info->unwind_mask when clearing pending bit. (Peter Zijlstra)

- Added atomic_long_fetch_or() to update the pending bit and new requests.
  (Peter Zijlstra)

- Added a RESERVED_BITS to assign unwind_mask and make sure that no work
  being cancelled would clear one of those bits.

 include/linux/unwind_deferred.h       | 26 +++++++-
 include/linux/unwind_deferred_types.h |  2 +-
 kernel/unwind/deferred.c              | 87 +++++++++++++++++++++------
 3 files changed, 92 insertions(+), 23 deletions(-)

diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index 14efd8c027aa..337ead927d4d 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -13,10 +13,19 @@ typedef void (*unwind_callback_t)(struct unwind_work *work, struct unwind_stackt
 struct unwind_work {
 	struct list_head		list;
 	unwind_callback_t		func;
+	int				bit;
 };
 
 #ifdef CONFIG_UNWIND_USER
 
+enum {
+	UNWIND_PENDING_BIT = 0,
+};
+
+enum {
+	UNWIND_PENDING		= BIT(UNWIND_PENDING_BIT),
+};
+
 void unwind_task_init(struct task_struct *task);
 void unwind_task_free(struct task_struct *task);
 
@@ -28,15 +37,26 @@ void unwind_deferred_cancel(struct unwind_work *work);
 
 static __always_inline void unwind_reset_info(void)
 {
-	if (unlikely(current->unwind_info.id.id))
+	struct unwind_task_info *info = &current->unwind_info;
+	unsigned long bits;
+
+	/* Was there any unwinding? */
+	if (unlikely(info->unwind_mask)) {
+		bits = info->unwind_mask;
+		do {
+			/* Is a task_work going to run again before going back */
+			if (bits & UNWIND_PENDING)
+				return;
+		} while (!try_cmpxchg(&info->unwind_mask, &bits, 0UL));
 		current->unwind_info.id.id = 0;
+	}
 	/*
 	 * As unwind_user_faultable() can be called directly and
 	 * depends on nr_entries being cleared on exit to user,
 	 * this needs to be a separate conditional.
 	 */
-	if (unlikely(current->unwind_info.cache))
-		current->unwind_info.cache->nr_entries = 0;
+	if (unlikely(info->cache))
+		info->cache->nr_entries = 0;
 }
 
 #else /* !CONFIG_UNWIND_USER */
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
index 104c477d5609..5dc9cda141ff 100644
--- a/include/linux/unwind_deferred_types.h
+++ b/include/linux/unwind_deferred_types.h
@@ -29,10 +29,10 @@ union unwind_task_id {
 };
 
 struct unwind_task_info {
+	unsigned long		unwind_mask;
 	struct unwind_cache	*cache;
 	struct callback_head	work;
 	union unwind_task_id	id;
-	int			pending;
 };
 
 #endif /* _LINUX_UNWIND_USER_DEFERRED_TYPES_H */
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index c5ac087d2396..e19f02ef416d 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -45,6 +45,16 @@ static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
 static DEFINE_MUTEX(callback_mutex);
 static LIST_HEAD(callbacks);
 
+#define RESERVED_BITS	(UNWIND_PENDING)
+
+/* Zero'd bits are available for assigning callback users */
+static unsigned long unwind_mask = RESERVED_BITS;
+
+static inline bool unwind_pending(struct unwind_task_info *info)
+{
+	return test_bit(UNWIND_PENDING_BIT, &info->unwind_mask);
+}
+
 /*
  * This is a unique percpu identifier for a given task entry context.
  * Conceptually, it's incremented every time the CPU enters the kernel from
@@ -138,14 +148,15 @@ static void unwind_deferred_task_work(struct callback_head *head)
 	struct unwind_task_info *info = container_of(head, struct unwind_task_info, work);
 	struct unwind_stacktrace trace;
 	struct unwind_work *work;
+	unsigned long bits;
 	u64 cookie;
 
-	if (WARN_ON_ONCE(!info->pending))
+	if (WARN_ON_ONCE(!unwind_pending(info)))
 		return;
 
-	/* Allow work to come in again */
-	WRITE_ONCE(info->pending, 0);
-
+	/* Clear pending bit but make sure to have the current bits */
+	bits = atomic_long_fetch_andnot(UNWIND_PENDING,
+				  (atomic_long_t *)&info->unwind_mask);
 	/*
 	 * From here on out, the callback must always be called, even if it's
 	 * just an empty trace.
@@ -159,7 +170,8 @@ static void unwind_deferred_task_work(struct callback_head *head)
 
 	guard(mutex)(&callback_mutex);
 	list_for_each_entry(work, &callbacks, list) {
-		work->func(work, &trace, cookie);
+		if (test_bit(work->bit, &bits))
+			work->func(work, &trace, cookie);
 	}
 }
 
@@ -183,15 +195,16 @@ static void unwind_deferred_task_work(struct callback_head *head)
  * because it has already been previously called for the same entry context,
  * it will be called again with the same stack trace and cookie.
  *
- * Return: 1 if the the callback was already queued.
- *         0 if the callback successfully was queued.
+ * Return: 0 if the callback successfully was queued.
+ *         1 if the callback is pending or was already executed.
  *         Negative if there's an error.
  *         @cookie holds the cookie of the first request by any user
  */
 int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 {
 	struct unwind_task_info *info = &current->unwind_info;
-	long pending;
+	unsigned long old, bits;
+	unsigned long bit = BIT(work->bit);
 	int ret;
 
 	*cookie = 0;
@@ -212,32 +225,59 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 
 	*cookie = get_cookie(info);
 
-	/* callback already pending? */
-	pending = READ_ONCE(info->pending);
-	if (pending)
-		return 1;
+	old = READ_ONCE(info->unwind_mask);
 
-	/* Claim the work unless an NMI just now swooped in to do so. */
-	if (!try_cmpxchg(&info->pending, &pending, 1))
+	/* Is this already queued or executed */
+	if (old & bit)
 		return 1;
 
+	/*
+	 * This work's bit hasn't been set yet. Now set it with the PENDING
+	 * bit and fetch the current value of unwind_mask. If either the
+	 * work's bit or PENDING was already set, then this is already queued
+	 * to have a callback.
+	 */
+	bits = UNWIND_PENDING | bit;
+	old = atomic_long_fetch_or(bits, (atomic_long_t *)&info->unwind_mask);
+	if (old & bits) {
+		/*
+		 * If the work's bit was set, whatever set it had better
+		 * have also set pending and queued a callback.
+		 */
+		WARN_ON_ONCE(!(old & UNWIND_PENDING));
+		return old & bit;
+	}
+
 	/* The work has been claimed, now schedule it. */
 	ret = task_work_add(current, &info->work, TWA_RESUME);
-	if (WARN_ON_ONCE(ret)) {
-		WRITE_ONCE(info->pending, 0);
-		return ret;
-	}
 
-	return 0;
+	if (WARN_ON_ONCE(ret))
+		WRITE_ONCE(info->unwind_mask, 0);
+
+	return ret;
 }
 
 void unwind_deferred_cancel(struct unwind_work *work)
 {
+	struct task_struct *g, *t;
+
 	if (!work)
 		return;
 
+	/* No work should be using a reserved bit */
+	if (WARN_ON_ONCE(BIT(work->bit) & RESERVED_BITS))
+		return;
+
 	guard(mutex)(&callback_mutex);
 	list_del(&work->list);
+
+	__clear_bit(work->bit, &unwind_mask);
+
+	guard(rcu)();
+	/* Clear this bit from all threads */
+	for_each_process_thread(g, t) {
+		clear_bit(work->bit, &t->unwind_info.unwind_mask);
+	}
 }
 
 int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
@@ -245,6 +285,14 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
 	memset(work, 0, sizeof(*work));
 
 	guard(mutex)(&callback_mutex);
+
+	/* See if there's a bit in the mask available */
+	if (unwind_mask == ~0UL)
+		return -EBUSY;
+
+	work->bit = ffz(unwind_mask);
+	__set_bit(work->bit, &unwind_mask);
+
 	list_add(&work->list, &callbacks);
 	work->func = func;
 	return 0;
@@ -256,6 +304,7 @@ void unwind_task_init(struct task_struct *task)
 
 	memset(info, 0, sizeof(*info));
 	init_task_work(&info->work, unwind_deferred_task_work);
+	info->unwind_mask = 0;
 }
 
 void unwind_task_free(struct task_struct *task)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v14 07/12] unwind deferred: Add unwind_completed mask to stop spurious callbacks
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (5 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 06/12] unwind deferred: Use bitmask to determine which callbacks to call Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 08/12] unwind: Add USED bit to only have one conditional on way back to user space Steven Rostedt
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

If there's more than one registered tracer to the unwind deferred
infrastructure, it is currently possible that one tracer could cause extra
callbacks to happen for another tracer if the former requests a deferred
stacktrace after the latter's callback was executed and before the task
went back to user space.

Here's an example of how this could occur:

  [Task enters kernel]
    tracer 1 request -> add cookie to its buffer
    tracer 1 request -> add cookie to its buffer
    <..>
    [ task work executes ]
    tracer 1 callback -> add trace + cookie to its buffer

    [tracer 2 requests and triggers the task work again]
    [ task work executes again ]
    tracer 1 callback -> add trace + cookie to its buffer
    tracer 2 callback -> add trace + cookie to its buffer
 [Task exits back to user space]

This is because the bit for tracer 1 is set in the task's unwind_mask
when it makes its request and does not get cleared until the task returns
back to user space. If another tracer then requests a deferred
stacktrace, the next task work will execute the callbacks of all tracers
that have their bits set in the task's unwind_mask.

To fix this issue, add another mask called unwind_completed and place it
into the task's info->cache structure. The cache structure is allocated
on the first occurrence of a deferred stacktrace and this unwind_completed
mask is not needed until then. It's better to have it in the cache than to
permanently waste space in the task_struct.

After a tracer's callback is executed, its bit gets set in this
unwind_completed mask. When the task_work runs, it will AND the task's
unwind_mask with the inverse of unwind_completed, which will eliminate
any work that already had its callback executed since the task entered
the kernel.

When the task leaves the kernel, it will reset this unwind_completed mask
just like it resets the other values as it enters user space.

Link: https://lore.kernel.org/all/20250716142609.47f0e4a5@batman.local.home/

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/unwind_deferred.h       |  4 +++-
 include/linux/unwind_deferred_types.h |  1 +
 kernel/unwind/deferred.c              | 19 +++++++++++++++----
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index 337ead927d4d..b9ec4c8515c7 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -55,8 +55,10 @@ static __always_inline void unwind_reset_info(void)
 	 * depends on nr_entries being cleared on exit to user,
 	 * this needs to be a separate conditional.
 	 */
-	if (unlikely(info->cache))
+	if (unlikely(info->cache)) {
 		info->cache->nr_entries = 0;
+		info->cache->unwind_completed = 0;
+	}
 }
 
 #else /* !CONFIG_UNWIND_USER */
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
index 5dc9cda141ff..33b62ac25c86 100644
--- a/include/linux/unwind_deferred_types.h
+++ b/include/linux/unwind_deferred_types.h
@@ -3,6 +3,7 @@
 #define _LINUX_UNWIND_USER_DEFERRED_TYPES_H
 
 struct unwind_cache {
+	unsigned long		unwind_completed;
 	unsigned int		nr_entries;
 	unsigned long		entries[];
 };
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index e19f02ef416d..a3d26014a2e6 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -166,12 +166,18 @@ static void unwind_deferred_task_work(struct callback_head *head)
 
 	unwind_user_faultable(&trace);
 
+	if (info->cache)
+		bits &= ~(info->cache->unwind_completed);
+
 	cookie = info->id.id;
 
 	guard(mutex)(&callback_mutex);
 	list_for_each_entry(work, &callbacks, list) {
-		if (test_bit(work->bit, &bits))
+		if (test_bit(work->bit, &bits)) {
 			work->func(work, &trace, cookie);
+			if (info->cache)
+				info->cache->unwind_completed |= BIT(work->bit);
+		}
 	}
 }
 
@@ -260,23 +266,28 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 void unwind_deferred_cancel(struct unwind_work *work)
 {
 	struct task_struct *g, *t;
+	int bit;
 
 	if (!work)
 		return;
 
+	bit = work->bit;
+
 	/* No work should be using a reserved bit */
-	if (WARN_ON_ONCE(BIT(work->bit) & RESERVED_BITS))
+	if (WARN_ON_ONCE(BIT(bit) & RESERVED_BITS))
 		return;
 
 	guard(mutex)(&callback_mutex);
 	list_del(&work->list);
 
-	__clear_bit(work->bit, &unwind_mask);
+	__clear_bit(bit, &unwind_mask);
 
 	guard(rcu)();
 	/* Clear this bit from all threads */
 	for_each_process_thread(g, t) {
-		clear_bit(work->bit, &t->unwind_info.unwind_mask);
+		clear_bit(bit, &t->unwind_info.unwind_mask);
+		if (t->unwind_info.cache)
+			clear_bit(bit, &t->unwind_info.cache->unwind_completed);
 	}
 }
 
-- 
2.47.2




* [PATCH v14 08/12] unwind: Add USED bit to only have one conditional on way back to user space
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (6 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 07/12] unwind deferred: Add unwind_completed mask to stop spurious callbacks Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work() Steven Rostedt
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

On the way back to user space, the function unwind_reset_info() is called
unconditionally (but always inlined). It currently has two conditionals.
One checks the unwind_mask, which is set whenever a deferred trace is
requested and is used to know that the mask needs to be cleared. The
other checks if the cache has been allocated, and if so, resets
nr_entries so that the unwinder knows it needs to do the work to get a
new user space stack trace again (it only does the unwind once per entry
into the kernel).

Use one of the bits in the unwind mask as a "USED" bit that gets set
whenever a trace is created. This makes it possible to check only the
unwind_mask in unwind_reset_info() to know if there is work to do, and
eliminates a conditional that happens every time the task goes back to
user space.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012359.516909531@kernel.org

- The UNWIND_USED is now an enum and not a define

 include/linux/unwind_deferred.h | 18 +++++++++---------
 kernel/unwind/deferred.c        |  5 ++++-
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index b9ec4c8515c7..2efbda01e959 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -20,10 +20,14 @@ struct unwind_work {
 
 enum {
 	UNWIND_PENDING_BIT = 0,
+	UNWIND_USED_BIT,
 };
 
 enum {
 	UNWIND_PENDING		= BIT(UNWIND_PENDING_BIT),
+
+	/* Set if the unwinding was used (directly or deferred) */
+	UNWIND_USED		= BIT(UNWIND_USED_BIT)
 };
 
 void unwind_task_init(struct task_struct *task);
@@ -49,15 +53,11 @@ static __always_inline void unwind_reset_info(void)
 				return;
 		} while (!try_cmpxchg(&info->unwind_mask, &bits, 0UL));
 		current->unwind_info.id.id = 0;
-	}
-	/*
-	 * As unwind_user_faultable() can be called directly and
-	 * depends on nr_entries being cleared on exit to user,
-	 * this needs to be a separate conditional.
-	 */
-	if (unlikely(info->cache)) {
-		info->cache->nr_entries = 0;
-		info->cache->unwind_completed = 0;
+
+		if (unlikely(info->cache)) {
+			info->cache->nr_entries = 0;
+			info->cache->unwind_completed = 0;
+		}
 	}
 }
 
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index a3d26014a2e6..2311b725d691 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -45,7 +45,7 @@ static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
 static DEFINE_MUTEX(callback_mutex);
 static LIST_HEAD(callbacks);
 
-#define RESERVED_BITS	(UNWIND_PENDING)
+#define RESERVED_BITS	(UNWIND_PENDING | UNWIND_USED)
 
 /* Zero'd bits are available for assigning callback users */
 static unsigned long unwind_mask = RESERVED_BITS;
@@ -140,6 +140,9 @@ int unwind_user_faultable(struct unwind_stacktrace *trace)
 
 	cache->nr_entries = trace->nr;
 
+	/* Clear nr_entries on way back to user space */
+	set_bit(UNWIND_USED_BIT, &info->unwind_mask);
+
 	return 0;
 }
 
-- 
2.47.2




* [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (7 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 08/12] unwind: Add USED bit to only have one conditional on way back to user space Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  4:43   ` Paul E. McKenney
  2025-07-17  0:49 ` [PATCH v14 10/12] unwind: Finish up unwind when a task exits Steven Rostedt
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James, Paul E. McKenney

From: Steven Rostedt <rostedt@goodmis.org>

Instead of using the callback_mutex to protect the linked list of
callbacks in unwind_deferred_task_work(), use SRCU instead. This function
is called every time a task with a requested stack trace exits back to
user space. This can happen for many tasks on several CPUs at the same
time. A mutex is a bottleneck and can cause contention and slow down
performance.

As the callbacks themselves are allowed to sleep, regular RCU cannot be
used to protect the list. Instead use SRCU, as that still allows the
callbacks to sleep and the list can be read without needing to hold the
callback_mutex.

Link: https://lore.kernel.org/all/ca9bd83a-6c80-4ee0-a83c-224b9d60b755@efficios.com/

Also added a new guard (srcu_lite) written by Peter Zijlstra

Link: https://lore.kernel.org/all/20250715102912.GQ1613200@noisy.programming.kicks-ass.net/

Cc: "Paul E. McKenney" <paulmck@kernel.org>
Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v13: https://lore.kernel.org/20250708012359.172959778@kernel.org

- Have the locking of the link list walk use guard(srcu_lite)
  (Peter Zijlstra)

- Fixed up due to the new atomic_long logic.

 include/linux/srcu.h     |  4 ++++
 kernel/unwind/deferred.c | 27 +++++++++++++++++++++------
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 900b0d5c05f5..879054b8bf87 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -524,4 +524,8 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
 		    srcu_read_unlock(_T->lock, _T->idx),
 		    int idx)
 
+DEFINE_LOCK_GUARD_1(srcu_lite, struct srcu_struct,
+		    _T->idx = srcu_read_lock_lite(_T->lock),
+		    srcu_read_unlock_lite(_T->lock, _T->idx),
+		    int idx)
 #endif
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 2311b725d691..353f7af610bf 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -41,7 +41,7 @@ static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
 #define UNWIND_MAX_ENTRIES					\
 	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
 
-/* Guards adding to and reading the list of callbacks */
+/* Guards adding to or removing from the list of callbacks */
 static DEFINE_MUTEX(callback_mutex);
 static LIST_HEAD(callbacks);
 
@@ -49,6 +49,7 @@ static LIST_HEAD(callbacks);
 
 /* Zero'd bits are available for assigning callback users */
 static unsigned long unwind_mask = RESERVED_BITS;
+DEFINE_STATIC_SRCU(unwind_srcu);
 
 static inline bool unwind_pending(struct unwind_task_info *info)
 {
@@ -174,8 +175,9 @@ static void unwind_deferred_task_work(struct callback_head *head)
 
 	cookie = info->id.id;
 
-	guard(mutex)(&callback_mutex);
-	list_for_each_entry(work, &callbacks, list) {
+	guard(srcu_lite)(&unwind_srcu);
+	list_for_each_entry_srcu(work, &callbacks, list,
+				 srcu_read_lock_held(&unwind_srcu)) {
 		if (test_bit(work->bit, &bits)) {
 			work->func(work, &trace, cookie);
 			if (info->cache)
@@ -213,7 +215,7 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 {
 	struct unwind_task_info *info = &current->unwind_info;
 	unsigned long old, bits;
-	unsigned long bit = BIT(work->bit);
+	unsigned long bit;
 	int ret;
 
 	*cookie = 0;
@@ -230,6 +232,14 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 	if (WARN_ON_ONCE(!CAN_USE_IN_NMI && in_nmi()))
 		return -EINVAL;
 
+	/* Do not allow cancelled works to request again */
+	bit = READ_ONCE(work->bit);
+	if (WARN_ON_ONCE(bit < 0))
+		return -EINVAL;
+
+	/* Only need the mask now */
+	bit = BIT(bit);
+
 	guard(irqsave)();
 
 	*cookie = get_cookie(info);
@@ -281,10 +291,15 @@ void unwind_deferred_cancel(struct unwind_work *work)
 		return;
 
 	guard(mutex)(&callback_mutex);
-	list_del(&work->list);
+	list_del_rcu(&work->list);
+
+	/* Do not allow any more requests and prevent callbacks */
+	work->bit = -1;
 
 	__clear_bit(bit, &unwind_mask);
 
+	synchronize_srcu(&unwind_srcu);
+
 	guard(rcu)();
 	/* Clear this bit from all threads */
 	for_each_process_thread(g, t) {
@@ -307,7 +322,7 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
 	work->bit = ffz(unwind_mask);
 	__set_bit(work->bit, &unwind_mask);
 
-	list_add(&work->list, &callbacks);
+	list_add_rcu(&work->list, &callbacks);
 	work->func = func;
 	return 0;
 }
-- 
2.47.2




* [PATCH v14 10/12] unwind: Finish up unwind when a task exits
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (8 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work() Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 11/12] unwind_user/x86: Enable frame pointer unwinding on x86 Steven Rostedt
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

In do_exit(), when a task is exiting, if an unwind was requested and the
deferred user stacktrace is deferred via the task_work, the task_work
callback is called after exit_mm() has run. This means that the user
stack trace cannot be retrieved and an empty stack is recorded.

Instead, add a function unwind_deferred_task_exit() and call it just
before exit_mm() so that the unwinder can call the requested callbacks
with the user space stack.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/unwind_deferred.h |  3 +++
 kernel/exit.c                   |  2 ++
 kernel/unwind/deferred.c        | 23 ++++++++++++++++++++---
 3 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index 2efbda01e959..26122d00708a 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -39,6 +39,8 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func);
 int unwind_deferred_request(struct unwind_work *work, u64 *cookie);
 void unwind_deferred_cancel(struct unwind_work *work);
 
+void unwind_deferred_task_exit(struct task_struct *task);
+
 static __always_inline void unwind_reset_info(void)
 {
 	struct unwind_task_info *info = &current->unwind_info;
@@ -71,6 +73,7 @@ static inline int unwind_deferred_init(struct unwind_work *work, unwind_callback
 static inline int unwind_deferred_request(struct unwind_work *work, u64 *timestamp) { return -ENOSYS; }
 static inline void unwind_deferred_cancel(struct unwind_work *work) {}
 
+static inline void unwind_deferred_task_exit(struct task_struct *task) {}
 static inline void unwind_reset_info(void) {}
 
 #endif /* !CONFIG_UNWIND_USER */
diff --git a/kernel/exit.c b/kernel/exit.c
index bb184a67ac73..1d8c8ac33c4f 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -68,6 +68,7 @@
 #include <linux/rethook.h>
 #include <linux/sysfs.h>
 #include <linux/user_events.h>
+#include <linux/unwind_deferred.h>
 #include <linux/uaccess.h>
 #include <linux/pidfs.h>
 
@@ -938,6 +939,7 @@ void __noreturn do_exit(long code)
 
 	tsk->exit_code = code;
 	taskstats_exit(tsk, group_dead);
+	unwind_deferred_task_exit(tsk);
 	trace_sched_process_exit(tsk, group_dead);
 
 	/*
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 353f7af610bf..53a75f8f9b7e 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -114,7 +114,7 @@ int unwind_user_faultable(struct unwind_stacktrace *trace)
 	/* Should always be called from faultable context */
 	might_fault();
 
-	if (current->flags & PF_EXITING)
+	if (!current->mm)
 		return -EINVAL;
 
 	if (!info->cache) {
@@ -147,9 +147,9 @@ int unwind_user_faultable(struct unwind_stacktrace *trace)
 	return 0;
 }
 
-static void unwind_deferred_task_work(struct callback_head *head)
+static void process_unwind_deferred(struct task_struct *task)
 {
-	struct unwind_task_info *info = container_of(head, struct unwind_task_info, work);
+	struct unwind_task_info *info = &task->unwind_info;
 	struct unwind_stacktrace trace;
 	struct unwind_work *work;
 	unsigned long bits;
@@ -186,6 +186,23 @@ static void unwind_deferred_task_work(struct callback_head *head)
 	}
 }
 
+static void unwind_deferred_task_work(struct callback_head *head)
+{
+	process_unwind_deferred(current);
+}
+
+void unwind_deferred_task_exit(struct task_struct *task)
+{
+	struct unwind_task_info *info = &current->unwind_info;
+
+	if (!unwind_pending(info))
+		return;
+
+	process_unwind_deferred(task);
+
+	task_work_cancel(task, &info->work);
+}
+
 /**
  * unwind_deferred_request - Request a user stacktrace on task kernel exit
  * @work: Unwind descriptor requesting the trace
-- 
2.47.2




* [PATCH v14 11/12] unwind_user/x86: Enable frame pointer unwinding on x86
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (9 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 10/12] unwind: Finish up unwind when a task exits Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:49 ` [PATCH v14 12/12] unwind deferred/x86: Do not defer stack tracing for compat tasks Steven Rostedt
  2025-07-17  0:51 ` [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Josh Poimboeuf <jpoimboe@kernel.org>

Use ARCH_INIT_USER_FP_FRAME to describe how frame pointers are unwound
on x86, and enable CONFIG_HAVE_UNWIND_USER_FP accordingly so the
unwind_user interfaces can be used.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 arch/x86/Kconfig                   |  1 +
 arch/x86/include/asm/unwind_user.h | 11 +++++++++++
 2 files changed, 12 insertions(+)
 create mode 100644 arch/x86/include/asm/unwind_user.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 71019b3b54ea..5862433c81e1 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -302,6 +302,7 @@ config X86
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UACCESS_VALIDATION		if HAVE_OBJTOOL
 	select HAVE_UNSTABLE_SCHED_CLOCK
+	select HAVE_UNWIND_USER_FP		if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
 	select VDSO_GETRANDOM			if X86_64
diff --git a/arch/x86/include/asm/unwind_user.h b/arch/x86/include/asm/unwind_user.h
new file mode 100644
index 000000000000..8597857bf896
--- /dev/null
+++ b/arch/x86/include/asm/unwind_user.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_UNWIND_USER_H
+#define _ASM_X86_UNWIND_USER_H
+
+#define ARCH_INIT_USER_FP_FRAME							\
+	.cfa_off	= (s32)sizeof(long) *  2,				\
+	.ra_off		= (s32)sizeof(long) * -1,				\
+	.fp_off		= (s32)sizeof(long) * -2,				\
+	.use_fp		= true,
+
+#endif /* _ASM_X86_UNWIND_USER_H */
-- 
2.47.2




* [PATCH v14 12/12] unwind deferred/x86: Do not defer stack tracing for compat tasks
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (10 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 11/12] unwind_user/x86: Enable frame pointer unwinding on x86 Steven Rostedt
@ 2025-07-17  0:49 ` Steven Rostedt
  2025-07-17  0:51 ` [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:49 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf, x86
  Cc: Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

From: Steven Rostedt <rostedt@goodmis.org>

Currently compat tasks are not supported. If a deferred user space stack
trace is requested on a compat task, it should fail and return an error so
that the profiler can use an alternative approach (whatever it uses
today).

Add an arch_unwind_can_defer() macro that is called in
unwind_deferred_request(). Have x86 define it to a function that returns
true only if the current task is running in 64-bit mode. For a compat
task it returns false, which causes unwind_deferred_request() to error
out so the caller can fall back to the current method of user space
stack tracing.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 arch/x86/include/asm/unwind_user.h | 11 +++++++++++
 include/linux/unwind_deferred.h    |  5 +++++
 kernel/unwind/deferred.c           |  3 +++
 3 files changed, 19 insertions(+)

diff --git a/arch/x86/include/asm/unwind_user.h b/arch/x86/include/asm/unwind_user.h
index 8597857bf896..220fd0a6e175 100644
--- a/arch/x86/include/asm/unwind_user.h
+++ b/arch/x86/include/asm/unwind_user.h
@@ -2,6 +2,17 @@
 #ifndef _ASM_X86_UNWIND_USER_H
 #define _ASM_X86_UNWIND_USER_H
 
+#ifdef CONFIG_IA32_EMULATION
+/* Currently compat mode is not supported for deferred stack trace */
+static inline bool arch_unwind_can_defer(void)
+{
+	struct pt_regs *regs = task_pt_regs(current);
+
+	return user_64bit_mode(regs);
+}
+# define arch_unwind_can_defer	arch_unwind_can_defer
+#endif /* CONFIG_IA32_EMULATION */
+
 #define ARCH_INIT_USER_FP_FRAME							\
 	.cfa_off	= (s32)sizeof(long) *  2,				\
 	.ra_off		= (s32)sizeof(long) * -1,				\
diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index 26122d00708a..0124865aaab4 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -16,6 +16,11 @@ struct unwind_work {
 	int				bit;
 };
 
+/* Architectures can add a test to not defer unwinding */
+#ifndef arch_unwind_can_defer
+# define arch_unwind_can_defer()	(true)
+#endif
+
 #ifdef CONFIG_UNWIND_USER
 
 enum {
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 53a75f8f9b7e..9972096e93e8 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -237,6 +237,9 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 
 	*cookie = 0;
 
+	if (!arch_unwind_can_defer())
+		return -EINVAL;
+
 	if ((current->flags & (PF_KTHREAD | PF_EXITING)) ||
 	    !user_mode(task_pt_regs(current)))
 		return -EINVAL;
-- 
2.47.2




* Re: [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure
  2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
                   ` (11 preceding siblings ...)
  2025-07-17  0:49 ` [PATCH v14 12/12] unwind deferred/x86: Do not defer stack tracing for compat tasks Steven Rostedt
@ 2025-07-17  0:51 ` Steven Rostedt
  12 siblings, 0 replies; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17  0:51 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, bpf, x86, Masami Hiramatsu,
	Mathieu Desnoyers, Josh Poimboeuf, Peter Zijlstra, Ingo Molnar,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Andrii Nakryiko,
	Indu Bhagat, Jose E. Marchesi, Beau Belgrave, Jens Remus,
	Linus Torvalds, Andrew Morton, Jens Axboe, Florian Weimer,
	Sam James

On Wed, 16 Jul 2025 20:49:10 -0400
Steven Rostedt <rostedt@kernel.org> wrote:

> The code for this series is located here:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
> unwind/core
> 
> Head SHA1: f14e91fa8019acefb146869eb465966a88ef6f3b
> 
> Changes since v13: https://lore.kernel.org/linux-trace-kernel/20250708012239.268642741@kernel.org/

Here's a diff between v13 and v14:

diff --git a/arch/Kconfig b/arch/Kconfig
index 2c41d3072910..8e3fd723bd74 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -442,10 +442,6 @@ config HAVE_UNWIND_USER_FP
 	bool
 	select UNWIND_USER
 
-config HAVE_UNWIND_USER_COMPAT_FP
-	bool
-	depends on HAVE_UNWIND_USER_FP
-
 config HAVE_PERF_REGS
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 17d4094c821b..5862433c81e1 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -302,7 +302,6 @@ config X86
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UACCESS_VALIDATION		if HAVE_OBJTOOL
 	select HAVE_UNSTABLE_SCHED_CLOCK
-	select HAVE_UNWIND_USER_COMPAT_FP	if IA32_EMULATION
 	select HAVE_UNWIND_USER_FP		if X86_64
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
diff --git a/arch/x86/include/asm/unwind_user.h b/arch/x86/include/asm/unwind_user.h
index 19634a73612d..220fd0a6e175 100644
--- a/arch/x86/include/asm/unwind_user.h
+++ b/arch/x86/include/asm/unwind_user.h
@@ -2,41 +2,21 @@
 #ifndef _ASM_X86_UNWIND_USER_H
 #define _ASM_X86_UNWIND_USER_H
 
-#include <linux/unwind_user_types.h>
-
-#define ARCH_INIT_USER_FP_FRAME							\
-	.cfa_off	= (s32)sizeof(long) *  2,				\
-	.ra_off		= (s32)sizeof(long) * -1,				\
-	.fp_off		= (s32)sizeof(long) * -2,				\
-	.use_fp		= true,
-
 #ifdef CONFIG_IA32_EMULATION
-
-#define ARCH_INIT_USER_COMPAT_FP_FRAME						\
-	.cfa_off	= (s32)sizeof(u32)  *  2,				\
-	.ra_off		= (s32)sizeof(u32)  * -1,				\
-	.fp_off		= (s32)sizeof(u32)  * -2,				\
-	.use_fp		= true,
-
-#define in_compat_mode(regs) !user_64bit_mode(regs)
-
-void arch_unwind_user_init(struct unwind_user_state *state,
-			   struct pt_regs *regs);
-
-static inline void arch_unwind_user_next(struct unwind_user_state *state)
+/* Currently compat mode is not supported for deferred stack trace */
+static inline bool arch_unwind_can_defer(void)
 {
-	if (state->type != UNWIND_USER_TYPE_COMPAT_FP)
-		return;
+	struct pt_regs *regs = task_pt_regs(current);
 
-	state->ip += state->arch.cs_base;
-	state->fp += state->arch.ss_base;
+	return user_64bit_mode(regs);
 }
-
-#define arch_unwind_user_init arch_unwind_user_init
-#define arch_unwind_user_next arch_unwind_user_next
-
+# define arch_unwind_can_defer	arch_unwind_can_defer
 #endif /* CONFIG_IA32_EMULATION */
 
-#include <asm-generic/unwind_user.h>
+#define ARCH_INIT_USER_FP_FRAME							\
+	.cfa_off	= (s32)sizeof(long) *  2,				\
+	.ra_off		= (s32)sizeof(long) * -1,				\
+	.fp_off		= (s32)sizeof(long) * -2,				\
+	.use_fp		= true,
 
 #endif /* _ASM_X86_UNWIND_USER_H */
diff --git a/arch/x86/include/asm/unwind_user_types.h b/arch/x86/include/asm/unwind_user_types.h
deleted file mode 100644
index f93d535f900e..000000000000
--- a/arch/x86/include/asm/unwind_user_types.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_UNWIND_USER_TYPES_H
-#define _ASM_X86_UNWIND_USER_TYPES_H
-
-#ifdef CONFIG_IA32_EMULATION
-
-struct arch_unwind_user_state {
-	unsigned long ss_base;
-	unsigned long cs_base;
-};
-#define arch_unwind_user_state arch_unwind_user_state
-
-#endif /* CONFIG_IA32_EMULATION */
-
-#include <asm-generic/unwind_user_types.h>
-
-#endif /* _ASM_UNWIND_USER_TYPES_H */
diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 8ef9d8c71df9..ee117fcf46ed 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -9,10 +9,7 @@
 #include <linux/stacktrace.h>
 #include <linux/export.h>
 #include <linux/uaccess.h>
-#include <asm/unwind_user.h>
 #include <asm/stacktrace.h>
-#include <asm/insn.h>
-#include <asm/insn-eval.h>
 #include <asm/unwind.h>
 
 void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
@@ -131,28 +128,3 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
 	}
 }
 
-#ifdef CONFIG_IA32_EMULATION
-void arch_unwind_user_init(struct unwind_user_state *state,
-			   struct pt_regs *regs)
-{
-	unsigned long cs_base, ss_base;
-
-	if (state->type != UNWIND_USER_TYPE_COMPAT_FP)
-		return;
-
-	cs_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
-	ss_base = insn_get_seg_base(regs, INAT_SEG_REG_SS);
-
-	if (cs_base == -1)
-		cs_base = 0;
-	if (ss_base == -1)
-		ss_base = 0;
-
-	state->arch.cs_base = cs_base;
-	state->arch.ss_base = ss_base;
-
-	state->ip += cs_base;
-	state->sp += ss_base;
-	state->fp += ss_base;
-}
-#endif /* CONFIG_IA32_EMULATION */
diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild
index b797a2434396..295c94a3ccc1 100644
--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -60,7 +60,6 @@ mandatory-y += topology.h
 mandatory-y += trace_clock.h
 mandatory-y += uaccess.h
 mandatory-y += unwind_user.h
-mandatory-y += unwind_user_types.h
 mandatory-y += vermagic.h
 mandatory-y += vga.h
 mandatory-y += video.h
diff --git a/include/asm-generic/unwind_user_types.h b/include/asm-generic/unwind_user_types.h
deleted file mode 100644
index f568b82e52cd..000000000000
--- a/include/asm-generic/unwind_user_types.h
+++ /dev/null
@@ -1,5 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_GENERIC_UNWIND_USER_TYPES_H
-#define _ASM_GENERIC_UNWIND_USER_TYPES_H
-
-#endif /* _ASM_GENERIC_UNWIND_USER_TYPES_H */
diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 900b0d5c05f5..879054b8bf87 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -524,4 +524,8 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
 		    srcu_read_unlock(_T->lock, _T->idx),
 		    int idx)
 
+DEFINE_LOCK_GUARD_1(srcu_lite, struct srcu_struct,
+		    _T->idx = srcu_read_lock_lite(_T->lock),
+		    srcu_read_unlock_lite(_T->lock, _T->idx),
+		    int idx)
 #endif
diff --git a/include/linux/unwind_deferred.h b/include/linux/unwind_deferred.h
index a9d5b100d6b2..0124865aaab4 100644
--- a/include/linux/unwind_deferred.h
+++ b/include/linux/unwind_deferred.h
@@ -16,18 +16,23 @@ struct unwind_work {
 	int				bit;
 };
 
-#ifdef CONFIG_UNWIND_USER
+/* Architectures can add a test to not defer unwinding */
+#ifndef arch_unwind_can_defer
+# define arch_unwind_can_defer()	(true)
+#endif
 
-#define UNWIND_PENDING_BIT	(BITS_PER_LONG - 1)
-#define UNWIND_PENDING		BIT(UNWIND_PENDING_BIT)
+#ifdef CONFIG_UNWIND_USER
 
-/* Set if the unwinding was used (directly or deferred) */
-#define UNWIND_USED_BIT		(UNWIND_PENDING_BIT - 1)
-#define UNWIND_USED		BIT(UNWIND_USED_BIT)
+enum {
+	UNWIND_PENDING_BIT = 0,
+	UNWIND_USED_BIT,
+};
 
 enum {
-	UNWIND_ALREADY_PENDING	= 1,
-	UNWIND_ALREADY_EXECUTED	= 2,
+	UNWIND_PENDING		= BIT(UNWIND_PENDING_BIT),
+
+	/* Set if the unwinding was used (directly or deferred) */
+	UNWIND_USED		= BIT(UNWIND_USED_BIT)
 };
 
 void unwind_task_init(struct task_struct *task);
@@ -56,8 +61,10 @@ static __always_inline void unwind_reset_info(void)
 		} while (!try_cmpxchg(&info->unwind_mask, &bits, 0UL));
 		current->unwind_info.id.id = 0;
 
-		if (unlikely(info->cache))
+		if (unlikely(info->cache)) {
 			info->cache->nr_entries = 0;
+			info->cache->unwind_completed = 0;
+		}
 	}
 }
 
diff --git a/include/linux/unwind_deferred_types.h b/include/linux/unwind_deferred_types.h
index db6c65daf185..33b62ac25c86 100644
--- a/include/linux/unwind_deferred_types.h
+++ b/include/linux/unwind_deferred_types.h
@@ -3,11 +3,24 @@
 #define _LINUX_UNWIND_USER_DEFERRED_TYPES_H
 
 struct unwind_cache {
+	unsigned long		unwind_completed;
 	unsigned int		nr_entries;
 	unsigned long		entries[];
 };
 
-
+/*
+ * The unwind_task_id is a unique identifier that maps to a user space
+ * stacktrace. It is generated the first time a deferred user space
+ * stacktrace is requested after a task has entered the kerenl and
+ * is cleared to zero when it exits. The mapped id will be a non-zero
+ * number.
+ *
+ * To simplify the generation of the 64 bit number, 32 bits will be
+ * the CPU it was generated on, and the other 32 bits will be a per
+ * cpu counter that gets incremented by two every time a new identifier
+ * is generated. The LSB will always be set to keep the value
+ * from being zero.
+ */
 union unwind_task_id {
 	struct {
 		u32		cpu;
@@ -17,9 +30,9 @@ union unwind_task_id {
 };
 
 struct unwind_task_info {
+	unsigned long		unwind_mask;
 	struct unwind_cache	*cache;
 	struct callback_head	work;
-	unsigned long		unwind_mask;
 	union unwind_task_id	id;
 };
 
diff --git a/include/linux/unwind_user.h b/include/linux/unwind_user.h
index 8a4af0214ecb..7f7282516bf5 100644
--- a/include/linux/unwind_user.h
+++ b/include/linux/unwind_user.h
@@ -9,31 +9,6 @@
  #define ARCH_INIT_USER_FP_FRAME
 #endif
 
-#ifndef ARCH_INIT_USER_COMPAT_FP_FRAME
- #define ARCH_INIT_USER_COMPAT_FP_FRAME
- #define in_compat_mode(regs) false
-#endif
-
-/*
- * If an architecture needs to initialize the state for a specific
- * reason, for example, it may need to do something different
- * in compat mode, it can define a macro named arch_unwind_user_init
- * with the name of the function that will perform this initialization.
- */
-#ifndef arch_unwind_user_init
-static inline void arch_unwind_user_init(struct unwind_user_state *state, struct pt_regs *reg) {}
-#endif
-
-/*
- * If an architecture requires some more updates to the state between
- * stack frames, it can define a macro named arch_unwind_user_next
- * with the name of the function that will update the state between
- * reading stack frames during the user space stack walk.
- */
-#ifndef arch_unwind_user_next
-static inline void arch_unwind_user_next(struct unwind_user_state *state) {}
-#endif
-
 int unwind_user(struct unwind_stacktrace *trace, unsigned int max_entries);
 
 #endif /* _LINUX_UNWIND_USER_H */
diff --git a/include/linux/unwind_user_types.h b/include/linux/unwind_user_types.h
index 0b6563951ca4..a449f15be890 100644
--- a/include/linux/unwind_user_types.h
+++ b/include/linux/unwind_user_types.h
@@ -3,16 +3,21 @@
 #define _LINUX_UNWIND_USER_TYPES_H
 
 #include <linux/types.h>
-#include <asm/unwind_user_types.h>
 
-#ifndef arch_unwind_user_state
-struct arch_unwind_user_state {};
-#endif
+/*
+ * Unwind types, listed in priority order: lower numbers are attempted first if
+ * available.
+ */
+enum unwind_user_type_bits {
+	UNWIND_USER_TYPE_FP_BIT =		0,
+
+	NR_UNWIND_USER_TYPE_BITS,
+};
 
 enum unwind_user_type {
-	UNWIND_USER_TYPE_NONE,
-	UNWIND_USER_TYPE_FP,
-	UNWIND_USER_TYPE_COMPAT_FP,
+	/* Type "none" for the start of stack walk iteration. */
+	UNWIND_USER_TYPE_NONE =			0,
+	UNWIND_USER_TYPE_FP =			BIT(UNWIND_USER_TYPE_FP_BIT),
 };
 
 struct unwind_stacktrace {
@@ -28,12 +33,12 @@ struct unwind_user_frame {
 };
 
 struct unwind_user_state {
-	unsigned long ip;
-	unsigned long sp;
-	unsigned long fp;
-	struct arch_unwind_user_state arch;
-	enum unwind_user_type type;
-	bool done;
+	unsigned long				ip;
+	unsigned long				sp;
+	unsigned long				fp;
+	enum unwind_user_type			current_type;
+	unsigned int				available_types;
+	bool					done;
 };
 
 #endif /* _LINUX_UNWIND_USER_TYPES_H */
diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
index 039e12700d49..9972096e93e8 100644
--- a/kernel/unwind/deferred.c
+++ b/kernel/unwind/deferred.c
@@ -44,7 +44,11 @@ static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
 /* Guards adding to or removing from the list of callbacks */
 static DEFINE_MUTEX(callback_mutex);
 static LIST_HEAD(callbacks);
-static unsigned long unwind_mask;
+
+#define RESERVED_BITS	(UNWIND_PENDING | UNWIND_USED)
+
+/* Zero'd bits are available for assigning callback users */
+static unsigned long unwind_mask = RESERVED_BITS;
 DEFINE_STATIC_SRCU(unwind_srcu);
 
 static inline bool unwind_pending(struct unwind_task_info *info)
@@ -73,19 +77,16 @@ static DEFINE_PER_CPU(u32, unwind_ctx_ctr);
  */
 static u64 get_cookie(struct unwind_task_info *info)
 {
-	u32 cpu_cnt;
-	u32 cnt;
+	u32 cnt = 1;
 
 	if (info->id.cpu)
 		return info->id.id;
 
-	cpu_cnt = __this_cpu_read(unwind_ctx_ctr);
-	cpu_cnt += 2;
-	cnt = cpu_cnt | 1; /* Always make non zero */
-
+	/* LSB is always set to ensure 0 is an invalid value */
+	cnt |= __this_cpu_read(unwind_ctx_ctr) + 2;
 	if (try_assign_cnt(info, cnt)) {
 		/* Update the per cpu counter */
-		__this_cpu_write(unwind_ctx_ctr, cpu_cnt);
+		__this_cpu_write(unwind_ctx_ctr, cnt);
 	}
 	/* Interrupts are disabled, the CPU will always be same */
 	info->id.cpu = smp_processor_id() + 1; /* Must be non zero */
@@ -153,16 +154,13 @@ static void process_unwind_deferred(struct task_struct *task)
 	struct unwind_work *work;
 	unsigned long bits;
 	u64 cookie;
-	int idx;
 
 	if (WARN_ON_ONCE(!unwind_pending(info)))
 		return;
 
 	/* Clear pending bit but make sure to have the current bits */
-	bits = READ_ONCE(info->unwind_mask);
-	while (!try_cmpxchg(&info->unwind_mask, &bits, bits & ~UNWIND_PENDING))
-		;
-
+	bits = atomic_long_fetch_andnot(UNWIND_PENDING,
+				  (atomic_long_t *)&info->unwind_mask);
 	/*
 	 * From here on out, the callback must always be called, even if it's
 	 * just an empty trace.
@@ -172,15 +170,20 @@ static void process_unwind_deferred(struct task_struct *task)
 
 	unwind_user_faultable(&trace);
 
+	if (info->cache)
+		bits &= ~(info->cache->unwind_completed);
+
 	cookie = info->id.id;
 
-	idx = srcu_read_lock(&unwind_srcu);
+	guard(srcu_lite)(&unwind_srcu);
 	list_for_each_entry_srcu(work, &callbacks, list,
 				 srcu_read_lock_held(&unwind_srcu)) {
-		if (test_bit(work->bit, &bits))
+		if (test_bit(work->bit, &bits)) {
 			work->func(work, &trace, cookie);
+			if (info->cache)
+				info->cache->unwind_completed |= BIT(work->bit);
+		}
 	}
-	srcu_read_unlock(&unwind_srcu, idx);
 }
 
 static void unwind_deferred_task_work(struct callback_head *head)
@@ -201,7 +204,7 @@ void unwind_deferred_task_exit(struct task_struct *task)
 }
 
 /**
- * unwind_deferred_request - Request a user stacktrace on task exit
+ * unwind_deferred_request - Request a user stacktrace on task kernel exit
  * @work: Unwind descriptor requesting the trace
  * @cookie: The cookie of the first request made for this task
  *
@@ -221,9 +224,7 @@ void unwind_deferred_task_exit(struct task_struct *task)
  * it will be called again with the same stack trace and cookie.
  *
  * Return: 0 if the callback successfully was queued.
- *         UNWIND_ALREADY_PENDING if the the callback was already queued.
- *         UNWIND_ALREADY_EXECUTED if the callback was already called
- *                (and will not be called again)
+ *         1 if the callback is pending or was already executed.
  *         Negative if there's an error.
  *         @cookie holds the cookie of the first request by any user
  */
@@ -231,17 +232,24 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 {
 	struct unwind_task_info *info = &current->unwind_info;
 	unsigned long old, bits;
-	int bit;
+	unsigned long bit;
 	int ret;
 
 	*cookie = 0;
 
+	if (!arch_unwind_can_defer())
+		return -EINVAL;
+
 	if ((current->flags & (PF_KTHREAD | PF_EXITING)) ||
 	    !user_mode(task_pt_regs(current)))
 		return -EINVAL;
 
-	/* NMI requires having safe cmpxchg operations */
-	if (!CAN_USE_IN_NMI && in_nmi())
+	/*
+	 * NMI requires having safe cmpxchg operations.
+	 * Trigger a warning to make it obvious that an architecture
+	 * is using this in NMI when it should not be.
+	 */
+	if (WARN_ON_ONCE(!CAN_USE_IN_NMI && in_nmi()))
 		return -EINVAL;
 
 	/* Do not allow cancelled works to request again */
@@ -249,44 +257,34 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 	if (WARN_ON_ONCE(bit < 0))
 		return -EINVAL;
 
+	/* Only need the mask now */
+	bit = BIT(bit);
+
 	guard(irqsave)();
 
 	*cookie = get_cookie(info);
 
 	old = READ_ONCE(info->unwind_mask);
 
-	/* Is this already queued */
-	if (test_bit(bit, &old)) {
-		/*
-		 * If pending is not set, it means this work's callback
-		 * was already called.
-		 */
-		return old & UNWIND_PENDING ? UNWIND_ALREADY_PENDING :
-			UNWIND_ALREADY_EXECUTED;
-	}
-
-	if (unwind_pending(info))
-		goto out;
-
-	/*
-	 * This is the first to enable another task_work for this task since
-	 * the task entered the kernel, or had already called the callbacks.
-	 * Set only the bit for this work and clear all others as they have
-	 * already had their callbacks called, and do not need to call them
-	 * again because of this work.
-	 */
-	bits = UNWIND_PENDING | BIT(bit);
+	/* Is this already queued or executed */
+	if (old & bit)
+		return 1;
 
 	/*
-	 * If the cmpxchg() fails, it means that an NMI came in and set
-	 * the pending bit as well as cleared the other bits. Just
-	 * jump to setting the bit for this work.
+	 * This work's bit hasn't been set yet. Now set it with the PENDING
+	 * bit and fetch the current value of unwind_mask. If either the
+	 * work's bit or PENDING was already set, then this is already queued
+	 * to have a callback.
 	 */
-	if (CAN_USE_IN_NMI) {
-		if (!try_cmpxchg(&info->unwind_mask, &old, bits))
-			goto out;
-	} else {
-		info->unwind_mask = bits;
+	bits = UNWIND_PENDING | bit;
+	old = atomic_long_fetch_or(bits, (atomic_long_t *)&info->unwind_mask);
+	if (old & bits) {
+		/*
+		 * If the work's bit was set, whatever set it had better
+		 * have also set pending and queued a callback.
+		 */
+		WARN_ON_ONCE(!(old & UNWIND_PENDING));
+		return old & bit;
 	}
 
 	/* The work has been claimed, now schedule it. */
@@ -296,9 +294,6 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
 		WRITE_ONCE(info->unwind_mask, 0);
 
 	return ret;
- out:
-	return test_and_set_bit(bit, &info->unwind_mask) ?
-		UNWIND_ALREADY_PENDING : 0;
 }
 
 void unwind_deferred_cancel(struct unwind_work *work)
@@ -309,9 +304,14 @@ void unwind_deferred_cancel(struct unwind_work *work)
 	if (!work)
 		return;
 
+	bit = work->bit;
+
+	/* No work should be using a reserved bit */
+	if (WARN_ON_ONCE(BIT(bit) & RESERVED_BITS))
+		return;
+
 	guard(mutex)(&callback_mutex);
 	list_del_rcu(&work->list);
-	bit = work->bit;
 
 	/* Do not allow any more requests and prevent callbacks */
 	work->bit = -1;
@@ -324,6 +324,8 @@ void unwind_deferred_cancel(struct unwind_work *work)
 	/* Clear this bit from all threads */
 	for_each_process_thread(g, t) {
 		clear_bit(bit, &t->unwind_info.unwind_mask);
+		if (t->unwind_info.cache)
+			clear_bit(bit, &t->unwind_info.cache->unwind_completed);
 	}
 }
 
@@ -334,7 +336,7 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
 	guard(mutex)(&callback_mutex);
 
 	/* See if there's a bit in the mask available */
-	if (unwind_mask == ~(UNWIND_PENDING|UNWIND_USED))
+	if (unwind_mask == ~0UL)
 		return -EBUSY;
 
 	work->bit = ffz(unwind_mask);
diff --git a/kernel/unwind/user.c b/kernel/unwind/user.c
index 249d9e32fad7..85b8c764d2f7 100644
--- a/kernel/unwind/user.c
+++ b/kernel/unwind/user.c
@@ -12,54 +12,18 @@ static struct unwind_user_frame fp_frame = {
 	ARCH_INIT_USER_FP_FRAME
 };
 
-static struct unwind_user_frame compat_fp_frame = {
-	ARCH_INIT_USER_COMPAT_FP_FRAME
-};
-
-static inline bool fp_state(struct unwind_user_state *state)
-{
-	return IS_ENABLED(CONFIG_HAVE_UNWIND_USER_FP) &&
-	       state->type == UNWIND_USER_TYPE_FP;
-}
-
 #define for_each_user_frame(state) \
 	for (unwind_user_start(state); !(state)->done; unwind_user_next(state))
 
-static inline bool compat_fp_state(struct unwind_user_state *state)
+static int unwind_user_next_fp(struct unwind_user_state *state)
 {
-	return IS_ENABLED(CONFIG_HAVE_UNWIND_USER_COMPAT_FP) &&
-	       state->type == UNWIND_USER_TYPE_COMPAT_FP;
-}
-
-#define unwind_get_user_long(to, from, state)				\
-({									\
-	int __ret;							\
-	if (compat_fp_state(state))					\
-		__ret = get_user(to, (u32 __user *)(from));		\
-	else								\
-		__ret = get_user(to, (unsigned long __user *)(from));	\
-	__ret;								\
-})
-
-static int unwind_user_next(struct unwind_user_state *state)
-{
-	struct unwind_user_frame *frame;
-	unsigned long cfa = 0, fp, ra = 0;
+	struct unwind_user_frame *frame = &fp_frame;
+	unsigned long cfa, fp, ra = 0;
 	unsigned int shift;
 
-	if (state->done)
-		return -EINVAL;
-
-	if (compat_fp_state(state))
-		frame = &compat_fp_frame;
-	else if (fp_state(state))
-		frame = &fp_frame;
-	else
-		goto done;
-
 	if (frame->use_fp) {
 		if (state->fp < state->sp)
-			goto done;
+			return -EINVAL;
 		cfa = state->fp;
 	} else {
 		cfa = state->sp;
@@ -70,30 +34,53 @@ static int unwind_user_next(struct unwind_user_state *state)
 
 	/* stack going in wrong direction? */
 	if (cfa <= state->sp)
-		goto done;
+		return -EINVAL;
 
 	/* Make sure that the address is word aligned */
-	shift = sizeof(long) == 4 || compat_fp_state(state) ? 2 : 3;
-	if ((cfa + frame->ra_off) & ((1 << shift) - 1))
-		goto done;
+	shift = sizeof(long) == 4 ? 2 : 3;
+	if (cfa & ((1 << shift) - 1))
+		return -EINVAL;
 
 	/* Find the Return Address (RA) */
-	if (unwind_get_user_long(ra, cfa + frame->ra_off, state))
-		goto done;
+	if (get_user(ra, (unsigned long __user *)(cfa + frame->ra_off)))
+		return -EINVAL;
 
-	if (frame->fp_off && unwind_get_user_long(fp, cfa + frame->fp_off, state))
-		goto done;
+	if (frame->fp_off && get_user(fp, (unsigned long __user *)(cfa + frame->fp_off)))
+		return -EINVAL;
 
 	state->ip = ra;
 	state->sp = cfa;
 	if (frame->fp_off)
 		state->fp = fp;
+	return 0;
+}
+
+static int unwind_user_next(struct unwind_user_state *state)
+{
+	unsigned long iter_mask = state->available_types;
+	unsigned int bit;
 
-	arch_unwind_user_next(state);
+	if (state->done)
+		return -EINVAL;
 
-	return 0;
+	for_each_set_bit(bit, &iter_mask, NR_UNWIND_USER_TYPE_BITS) {
+		enum unwind_user_type type = BIT(bit);
+
+		state->current_type = type;
+		switch (type) {
+		case UNWIND_USER_TYPE_FP:
+			if (!unwind_user_next_fp(state))
+				return 0;
+			continue;
+		default:
+			WARN_ONCE(1, "Undefined unwind bit %d", bit);
+			break;
+		}
+		break;
+	}
 
-done:
+	/* No successful unwind method. */
+	state->current_type = UNWIND_USER_TYPE_NONE;
 	state->done = true;
 	return -EINVAL;
 }
@@ -109,19 +96,13 @@ static int unwind_user_start(struct unwind_user_state *state)
 		return -EINVAL;
 	}
 
-	if (IS_ENABLED(CONFIG_HAVE_UNWIND_USER_COMPAT_FP) && in_compat_mode(regs))
-		state->type = UNWIND_USER_TYPE_COMPAT_FP;
-	else if (IS_ENABLED(CONFIG_HAVE_UNWIND_USER_FP))
-		state->type = UNWIND_USER_TYPE_FP;
-	else
-		state->type = UNWIND_USER_TYPE_NONE;
+	if (IS_ENABLED(CONFIG_HAVE_UNWIND_USER_FP))
+		state->available_types |= UNWIND_USER_TYPE_FP;
 
 	state->ip = instruction_pointer(regs);
 	state->sp = user_stack_pointer(regs);
 	state->fp = frame_pointer(regs);
 
-	arch_unwind_user_init(state, regs);
-
 	return 0;
 }
 
-- Steve

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17  0:49 ` [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work() Steven Rostedt
@ 2025-07-17  4:43   ` Paul E. McKenney
  2025-07-17 12:25     ` Steven Rostedt
  0 siblings, 1 reply; 21+ messages in thread
From: Paul E. McKenney @ 2025-07-17  4:43 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, bpf, x86, Masami Hiramatsu,
	Mathieu Desnoyers, Josh Poimboeuf, Peter Zijlstra, Ingo Molnar,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Andrii Nakryiko,
	Indu Bhagat, Jose E. Marchesi, Beau Belgrave, Jens Remus,
	Linus Torvalds, Andrew Morton, Jens Axboe, Florian Weimer,
	Sam James

On Wed, Jul 16, 2025 at 08:49:19PM -0400, Steven Rostedt wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
> 
> Instead of using the callback_mutex to protect the linked list of callbacks
> in unwind_deferred_task_work(), use SRCU instead. This gets called every
> time a task exits that has to record a stack trace that was requested.
> This can happen for many tasks on several CPUs at the same time. A mutex
> is a bottleneck and can cause a bit of contention and slow down performance.
> 
> As the callbacks themselves are allowed to sleep, regular RCU cannot be
> used to protect the list. Instead use SRCU, as that still allows the
> callbacks to sleep and the list can be read without needing to hold the
> callback_mutex.
> 
> Link: https://lore.kernel.org/all/ca9bd83a-6c80-4ee0-a83c-224b9d60b755@efficios.com/
> 
> Also added a new guard (srcu_lite) written by Peter Zijlstra
> 
> Link: https://lore.kernel.org/all/20250715102912.GQ1613200@noisy.programming.kicks-ass.net/
> 
> Cc: "Paul E. McKenney" <paulmck@kernel.org>
> Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
> Changes since v13: https://lore.kernel.org/20250708012359.172959778@kernel.org
> 
> - Have the locking of the linked-list walk use guard(srcu_lite)
>   (Peter Zijlstra)
> 
> - Fixed up due to the new atomic_long logic.
> 
>  include/linux/srcu.h     |  4 ++++
>  kernel/unwind/deferred.c | 27 +++++++++++++++++++++------
>  2 files changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index 900b0d5c05f5..879054b8bf87 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -524,4 +524,8 @@ DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct,
>  		    srcu_read_unlock(_T->lock, _T->idx),
>  		    int idx)
>  
> +DEFINE_LOCK_GUARD_1(srcu_lite, struct srcu_struct,

You need srcu_fast because srcu_lite is being removed.  They are quite
similar, but srcu_fast is faster and is NMI-safe.  (This last might or
might not matter here.)

See https://lore.kernel.org/all/20250716225418.3014815-3-paulmck@kernel.org/
for a srcu_fast_notrace, so something like this:

DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
		    _T->scp = srcu_read_lock_fast(_T->lock),
		    srcu_read_unlock_fast(_T->lock, _T->scp),
		    struct srcu_ctr __percpu *scp)

Other than that, it looks plausible.

							Thanx, Paul

> +		    _T->idx = srcu_read_lock_lite(_T->lock),
> +		    srcu_read_unlock_lite(_T->lock, _T->idx),
> +		    int idx)
>  #endif
> diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
> index 2311b725d691..353f7af610bf 100644
> --- a/kernel/unwind/deferred.c
> +++ b/kernel/unwind/deferred.c
> @@ -41,7 +41,7 @@ static inline bool try_assign_cnt(struct unwind_task_info *info, u32 cnt)
>  #define UNWIND_MAX_ENTRIES					\
>  	((SZ_4K - sizeof(struct unwind_cache)) / sizeof(long))
>  
> -/* Guards adding to and reading the list of callbacks */
> +/* Guards adding to or removing from the list of callbacks */
>  static DEFINE_MUTEX(callback_mutex);
>  static LIST_HEAD(callbacks);
>  
> @@ -49,6 +49,7 @@ static LIST_HEAD(callbacks);
>  
>  /* Zero'd bits are available for assigning callback users */
>  static unsigned long unwind_mask = RESERVED_BITS;
> +DEFINE_STATIC_SRCU(unwind_srcu);
>  
>  static inline bool unwind_pending(struct unwind_task_info *info)
>  {
> @@ -174,8 +175,9 @@ static void unwind_deferred_task_work(struct callback_head *head)
>  
>  	cookie = info->id.id;
>  
> -	guard(mutex)(&callback_mutex);
> -	list_for_each_entry(work, &callbacks, list) {
> +	guard(srcu_lite)(&unwind_srcu);
> +	list_for_each_entry_srcu(work, &callbacks, list,
> +				 srcu_read_lock_held(&unwind_srcu)) {
>  		if (test_bit(work->bit, &bits)) {
>  			work->func(work, &trace, cookie);
>  			if (info->cache)
> @@ -213,7 +215,7 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
>  {
>  	struct unwind_task_info *info = &current->unwind_info;
>  	unsigned long old, bits;
> -	unsigned long bit = BIT(work->bit);
> +	unsigned long bit;
>  	int ret;
>  
>  	*cookie = 0;
> @@ -230,6 +232,14 @@ int unwind_deferred_request(struct unwind_work *work, u64 *cookie)
>  	if (WARN_ON_ONCE(!CAN_USE_IN_NMI && in_nmi()))
>  		return -EINVAL;
>  
> +	/* Do not allow cancelled works to request again */
> +	bit = READ_ONCE(work->bit);
> +	if (WARN_ON_ONCE(bit < 0))
> +		return -EINVAL;
> +
> +	/* Only need the mask now */
> +	bit = BIT(bit);
> +
>  	guard(irqsave)();
>  
>  	*cookie = get_cookie(info);
> @@ -281,10 +291,15 @@ void unwind_deferred_cancel(struct unwind_work *work)
>  		return;
>  
>  	guard(mutex)(&callback_mutex);
> -	list_del(&work->list);
> +	list_del_rcu(&work->list);
> +
> +	/* Do not allow any more requests and prevent callbacks */
> +	work->bit = -1;
>  
>  	__clear_bit(bit, &unwind_mask);
>  
> +	synchronize_srcu(&unwind_srcu);
> +
>  	guard(rcu)();
>  	/* Clear this bit from all threads */
>  	for_each_process_thread(g, t) {
> @@ -307,7 +322,7 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
>  	work->bit = ffz(unwind_mask);
>  	__set_bit(work->bit, &unwind_mask);
>  
> -	list_add(&work->list, &callbacks);
> +	list_add_rcu(&work->list, &callbacks);
>  	work->func = func;
>  	return 0;
>  }
> -- 
> 2.47.2
> 
> 


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17  4:43   ` Paul E. McKenney
@ 2025-07-17 12:25     ` Steven Rostedt
  2025-07-17 15:48       ` Paul E. McKenney
  0 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17 12:25 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Wed, 16 Jul 2025 21:43:47 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> > +DEFINE_LOCK_GUARD_1(srcu_lite, struct srcu_struct,  
> 
> You need srcu_fast because srcu_lite is being removed.  They are quite
> similar, but srcu_fast is faster and is NMI-safe.  (This last might or
> might not matter here.)
> 
> See https://lore.kernel.org/all/20250716225418.3014815-3-paulmck@kernel.org/
> for a srcu_fast_notrace, so something like this:

Yeah, I already saw that patch.

> 
> DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
> 		    _T->scp = srcu_read_lock_fast(_T->lock),
> 		    srcu_read_unlock_fast(_T->lock, _T->scp),
> 		    struct srcu_ctr __percpu *scp)
> 
> Other than that, it looks plausible.

Using srcu_lite or srcu_fast is an optimization here. And since I saw you
adding the guard for srcu_fast in that other thread, I'll just use normal
SRCU here for this series, and in the future we could convert it over to
srcu_fast.

Thanks!

-- Steve


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17 12:25     ` Steven Rostedt
@ 2025-07-17 15:48       ` Paul E. McKenney
  2025-07-17 16:10         ` Steven Rostedt
  0 siblings, 1 reply; 21+ messages in thread
From: Paul E. McKenney @ 2025-07-17 15:48 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Thu, Jul 17, 2025 at 08:25:26AM -0400, Steven Rostedt wrote:
> On Wed, 16 Jul 2025 21:43:47 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > > +DEFINE_LOCK_GUARD_1(srcu_lite, struct srcu_struct,  
> > 
> > You need srcu_fast because srcu_lite is being removed.  They are quite
> > similar, but srcu_fast is faster and is NMI-safe.  (This last might or
> > might not matter here.)
> > 
> > See https://lore.kernel.org/all/20250716225418.3014815-3-paulmck@kernel.org/
> > for a srcu_fast_notrace, so something like this:
> 
> Yeah, I already saw that patch.
> 
> > 
> > DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct,
> > 		    _T->scp = srcu_read_lock_fast(_T->lock),
> > 		    srcu_read_unlock_fast(_T->lock, _T->scp),
> > 		    struct srcu_ctr __percpu *scp)
> > 
> > Other than that, it looks plausible.
> 
> Using srcu_lite or srcu_fast is an optimization here. And since I saw you
> adding the guard for srcu_fast in that other thread, I'll just use normal
> SRCU here for this series, and in the future we could convert it over to
> srcu_fast.

Works for me!

That said, "in the future" started in -next some time back and is slated
to start in mainline in the upcoming v6.17 merge window.  SRCU-lite is
being removed from the kernel, and has been deprecated via checkpatch.pl.

So if there is some reason that you absolutely cannot immediately convert
to SRCU-fast, let's please discuss.

							Thanx, Paul


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17 15:48       ` Paul E. McKenney
@ 2025-07-17 16:10         ` Steven Rostedt
  2025-07-17 16:27           ` Paul E. McKenney
  0 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17 16:10 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Thu, 17 Jul 2025 08:48:40 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> So if there is some reason that you absolutely cannot immediately convert
> to SRCU-fast, let's please discuss.

There's two reasons I wouldn't add it immediately.

One, is the guard(srcu_fast) isn't in mainline yet. I would either need
to open code it, or play the tricks of basing code off your tree.

Two, I'm still grasping at the concept of srcu_fast (and srcu_lite for
that matter), where I'd rather be slow and safe than optimized and
unsafe. The code where this is used may be faulting in user space
memory, so it doesn't need the micro-optimizations now.

-- Steve


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17 16:10         ` Steven Rostedt
@ 2025-07-17 16:27           ` Paul E. McKenney
  2025-07-17 16:38             ` Steven Rostedt
  0 siblings, 1 reply; 21+ messages in thread
From: Paul E. McKenney @ 2025-07-17 16:27 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Thu, Jul 17, 2025 at 12:10:10PM -0400, Steven Rostedt wrote:
> On Thu, 17 Jul 2025 08:48:40 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > So if there is some reason that you absolutely cannot immediately convert
> > to SRCU-fast, let's please discuss.
> 
> There are two reasons I wouldn't adopt it immediately.
> 
> One, guard(srcu_fast) isn't in mainline yet. I would either need to
> open code it or play the trick of basing my code off your tree.

Fair point!  But guard(srcu_fast) isn't in my tree, either, just
guard(srcu_fast_nopreempt).  So why not add guard(srcu_fast) in your
tree, and we can ack it.  Yes, that means we will have a merge conflict
at some point, but it will be a trivial one.

> Two, I'm still grasping the concept of srcu_fast (and srcu_lite, for
> that matter), and I'd rather be slow and safe than optimized and
> unsafe. The code where this is used may be faulting in user space
> memory, so it doesn't need the micro-optimization now.

Straight-up SRCU and guard(srcu), then?  Both are already in mainline.

Or are those read-side smp_mb() calls a no-go for this code?

							Thanx, Paul


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17 16:27           ` Paul E. McKenney
@ 2025-07-17 16:38             ` Steven Rostedt
  2025-07-17 16:54               ` Paul E. McKenney
  0 siblings, 1 reply; 21+ messages in thread
From: Steven Rostedt @ 2025-07-17 16:38 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Thu, 17 Jul 2025 09:27:34 -0700
"Paul E. McKenney" <paulmck@kernel.org> wrote:

> > Two, I'm still grasping the concept of srcu_fast (and srcu_lite, for
> > that matter), and I'd rather be slow and safe than optimized and
> > unsafe. The code where this is used may be faulting in user space
> > memory, so it doesn't need the micro-optimization now.
> 
> Straight-up SRCU and guard(srcu), then?  Both are already in mainline.
> 
> Or are those read-side smp_mb() calls a no-go for this code?

As I stated, the read-side is likely going to be faulting in user space
memory. I don't think one or two smp_mb() calls will really make much
of a difference ;-)

It's not urgent. If it can be switched to srcu_fast, we can do it later.

Thanks,

-- Steve


* Re: [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work()
  2025-07-17 16:38             ` Steven Rostedt
@ 2025-07-17 16:54               ` Paul E. McKenney
  0 siblings, 0 replies; 21+ messages in thread
From: Paul E. McKenney @ 2025-07-17 16:54 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, bpf, x86,
	Masami Hiramatsu, Mathieu Desnoyers, Josh Poimboeuf,
	Peter Zijlstra, Ingo Molnar, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Andrii Nakryiko, Indu Bhagat, Jose E. Marchesi,
	Beau Belgrave, Jens Remus, Linus Torvalds, Andrew Morton,
	Jens Axboe, Florian Weimer, Sam James

On Thu, Jul 17, 2025 at 12:38:35PM -0400, Steven Rostedt wrote:
> On Thu, 17 Jul 2025 09:27:34 -0700
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
> 
> > > Two, I'm still grasping the concept of srcu_fast (and srcu_lite, for
> > > that matter), and I'd rather be slow and safe than optimized and
> > > unsafe. The code where this is used may be faulting in user space
> > > memory, so it doesn't need the micro-optimization now.
> > 
> > Straight-up SRCU and guard(srcu), then?  Both are already in mainline.
> > 
> > Or are those read-side smp_mb() calls a no-go for this code?
> 
> As I stated, the read-side is likely going to be faulting in user space
> memory. I don't think one or two smp_mb() calls will really make much
> of a difference ;-)
> 
> It's not urgent. If it can be switched to srcu_fast, we can do it later.

Very good, we will continue with our removal of SRCU-lite, and I might
as well add guard(srcu_fast) in my current series.

							Thanx, Paul


end of thread, other threads:[~2025-07-17 16:54 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-17  0:49 [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 01/12] unwind_user: Add user space unwinding API with frame pointer support Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 02/12] unwind_user/deferred: Add unwind_user_faultable() Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 03/12] unwind_user/deferred: Add unwind cache Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 04/12] unwind_user/deferred: Add deferred unwinding interface Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 05/12] unwind_user/deferred: Make unwind deferral requests NMI-safe Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 06/12] unwind deferred: Use bitmask to determine which callbacks to call Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 07/12] unwind deferred: Add unwind_completed mask to stop spurious callbacks Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 08/12] unwind: Add USED bit to only have one conditional on way back to user space Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 09/12] unwind deferred: Use SRCU unwind_deferred_task_work() Steven Rostedt
2025-07-17  4:43   ` Paul E. McKenney
2025-07-17 12:25     ` Steven Rostedt
2025-07-17 15:48       ` Paul E. McKenney
2025-07-17 16:10         ` Steven Rostedt
2025-07-17 16:27           ` Paul E. McKenney
2025-07-17 16:38             ` Steven Rostedt
2025-07-17 16:54               ` Paul E. McKenney
2025-07-17  0:49 ` [PATCH v14 10/12] unwind: Finish up unwind when a task exits Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 11/12] unwind_user/x86: Enable frame pointer unwinding on x86 Steven Rostedt
2025-07-17  0:49 ` [PATCH v14 12/12] unwind deferred/x86: Do not defer stack tracing for compat tasks Steven Rostedt
2025-07-17  0:51 ` [PATCH v14 00/12] unwind_user: x86: Deferred unwinding infrastructure Steven Rostedt
