From: Frederic Weisbecker <frederic@kernel.org>
To: Valentin Schneider <vschneid@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
rcu@vger.kernel.org, x86@kernel.org,
linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
linux-riscv@lists.infradead.org, linux-arch@vger.kernel.org,
linux-trace-kernel@vger.kernel.org,
Nicolas Saenz Julienne <nsaenzju@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Paolo Bonzini <pbonzini@redhat.com>,
Arnd Bergmann <arnd@arndb.de>,
"Paul E. McKenney" <paulmck@kernel.org>,
Jason Baron <jbaron@akamai.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ard Biesheuvel <ardb@kernel.org>,
Sami Tolvanen <samitolvanen@google.com>,
"David S. Miller" <davem@davemloft.net>,
Neeraj Upadhyay <neeraj.upadhyay@kernel.org>,
Joel Fernandes <joelagnelf@nvidia.com>,
Josh Triplett <josh@joshtriplett.org>,
Boqun Feng <boqun.feng@gmail.com>,
Uladzislau Rezki <urezki@gmail.com>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Mel Gorman <mgorman@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
Masahiro Yamada <masahiroy@kernel.org>,
Han Shen <shenhan@google.com>, Rik van Riel <riel@surriel.com>,
Jann Horn <jannh@google.com>,
Dan Carpenter <dan.carpenter@linaro.org>,
Oleg Nesterov <oleg@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Clark Williams <williams@redhat.com>,
Yair Podemsky <ypodemsk@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Daniel Wagner <dwagner@suse.de>, Petr Tesarik <ptesarik@suse.com>
Subject: Re: [PATCH v6 23/29] context-tracking: Introduce work deferral infrastructure
Date: Tue, 28 Oct 2025 15:00:30 +0100
Message-ID: <aQDMfu0tzecFzoGr@localhost.localdomain>
In-Reply-To: <20251010153839.151763-24-vschneid@redhat.com>
On Fri, Oct 10, 2025 at 05:38:33PM +0200, Valentin Schneider wrote:
> smp_call_function() & friends have the unfortunate habit of sending IPIs to
> isolated, NOHZ_FULL, in-userspace CPUs, as they blindly target all online
> CPUs.
>
> Some callsites can be bent into doing the right thing, as done by commit:
>
> cc9e303c91f5 ("x86/cpu: Disable frequency requests via aperfmperf IPI for nohz_full CPUs")
>
> Unfortunately, not all SMP callbacks can be omitted in this
> fashion. However, some of them only affect execution in kernelspace, which
> means they don't have to be executed *immediately* if the target CPU is in
> userspace: stashing the callback and executing it upon the next kernel entry
> would suffice. x86 kernel instruction patching or kernel TLB invalidation
> are prime examples of this.
>
> Reduce the RCU dynticks counter width to free up some bits to be used as a
> deferred callback bitmask. Add some build-time checks to validate that
> setup.
>
> Presence of CT_RCU_WATCHING in the ct_state prevents queuing deferred work.
>
> Later commits introduce the bit:callback mappings.
>
> Link: https://lore.kernel.org/all/20210929151723.162004989@infradead.org/
> Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> Signed-off-by: Valentin Schneider <vschneid@redhat.com>
> ---
> arch/Kconfig | 9 +++
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/context_tracking_work.h | 16 +++++
> include/linux/context_tracking.h | 21 ++++++
> include/linux/context_tracking_state.h | 30 ++++++---
> include/linux/context_tracking_work.h | 26 ++++++++
> kernel/context_tracking.c | 69 +++++++++++++++++++-
> kernel/time/Kconfig | 5 ++
> 8 files changed, 165 insertions(+), 12 deletions(-)
> create mode 100644 arch/x86/include/asm/context_tracking_work.h
> create mode 100644 include/linux/context_tracking_work.h
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index d1b4ffd6e0856..a33229e017467 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -968,6 +968,15 @@ config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
> - No use of instrumentation, unless instrumentation_begin() got
> called.
>
> +config HAVE_CONTEXT_TRACKING_WORK
> + bool
> + help
> + Architecture supports deferring work while not in kernel context.
> + This is especially useful on setups with isolated CPUs that might
> + want to avoid being interrupted to perform housekeeping tasks (e.g.
> + TLB invalidation or icache invalidation). The housekeeping
> + operations are performed upon re-entering the kernel.
> +
> config HAVE_TIF_NOHZ
> bool
> help
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 05880301212e3..3f1557b7acd8f 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -222,6 +222,7 @@ config X86
> select HAVE_CMPXCHG_LOCAL
> select HAVE_CONTEXT_TRACKING_USER if X86_64
> select HAVE_CONTEXT_TRACKING_USER_OFFSTACK if HAVE_CONTEXT_TRACKING_USER
> + select HAVE_CONTEXT_TRACKING_WORK if X86_64
> select HAVE_C_RECORDMCOUNT
> select HAVE_OBJTOOL_MCOUNT if HAVE_OBJTOOL
> select HAVE_OBJTOOL_NOP_MCOUNT if HAVE_OBJTOOL_MCOUNT
> diff --git a/arch/x86/include/asm/context_tracking_work.h b/arch/x86/include/asm/context_tracking_work.h
> new file mode 100644
> index 0000000000000..5f3b2d0977235
> --- /dev/null
> +++ b/arch/x86/include/asm/context_tracking_work.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_X86_CONTEXT_TRACKING_WORK_H
> +#define _ASM_X86_CONTEXT_TRACKING_WORK_H
> +
> +static __always_inline void arch_context_tracking_work(enum ct_work work)
> +{
> + switch (work) {
> + case CT_WORK_n:
> + // Do work...
> + break;
> + case CT_WORK_MAX:
> + WARN_ON_ONCE(true);
> + }
> +}
> +
> +#endif
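Side note for readers: CT_WORK_n is just a placeholder at this point in the
series. Judging by the text patching patch later on, I'd expect a real hook to
end up looking something like the sketch below (CT_WORK_SYNC and its mapping
to sync_core() are my guess, not something this patch defines):

	static __always_inline void arch_context_tracking_work(enum ct_work work)
	{
		switch (work) {
		case CT_WORK_SYNC:
			/* Serialize this CPU after text patching, instead of an IPI. */
			sync_core();
			break;
		case CT_WORK_MAX:
			WARN_ON_ONCE(true);
		}
	}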
> diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
> index af9fe87a09225..0b0faa040e9b5 100644
> --- a/include/linux/context_tracking.h
> +++ b/include/linux/context_tracking.h
> @@ -5,6 +5,7 @@
> #include <linux/sched.h>
> #include <linux/vtime.h>
> #include <linux/context_tracking_state.h>
> +#include <linux/context_tracking_work.h>
> #include <linux/instrumentation.h>
>
> #include <asm/ptrace.h>
> @@ -137,6 +138,26 @@ static __always_inline unsigned long ct_state_inc(int incby)
> return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
> }
>
> +#ifdef CONFIG_CONTEXT_TRACKING_WORK
> +static __always_inline unsigned long ct_state_inc_clear_work(int incby)
> +{
> + struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> + unsigned long new, old, state;
> +
> + state = arch_atomic_read(&ct->state);
> + do {
> + old = state;
> + new = old & ~CT_WORK_MASK;
> + new += incby;
> + state = arch_atomic_cmpxchg(&ct->state, old, new);
> + } while (old != state);
> +
> + return new;
> +}
> +#else
> +#define ct_state_inc_clear_work(x) ct_state_inc(x)
> +#endif
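Nit: the open-coded loop above is the usual try_cmpxchg() pattern; a minimal
equivalent sketch (untested, and assuming the int-wide atomic_t state that
struct context_tracking uses):

	static __always_inline unsigned long ct_state_inc_clear_work(int incby)
	{
		struct context_tracking *ct = this_cpu_ptr(&context_tracking);
		int old = arch_atomic_read(&ct->state);
		int new;

		do {
			/* Clear the work bits, bump the state/watching counter. */
			new = (old & ~CT_WORK_MASK) + incby;
		} while (!arch_atomic_try_cmpxchg(&ct->state, &old, new));

		return new;
	}

arch_atomic_try_cmpxchg() refreshes @old on failure, which spares the
explicit reload on every retry.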
> +
> static __always_inline bool warn_rcu_enter(void)
> {
> bool ret = false;
> diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
> index 0b81248aa03e2..d2c302133672f 100644
> --- a/include/linux/context_tracking_state.h
> +++ b/include/linux/context_tracking_state.h
> @@ -5,6 +5,7 @@
> #include <linux/percpu.h>
> #include <linux/static_key.h>
> #include <linux/context_tracking_irq.h>
> +#include <linux/context_tracking_work.h>
>
> /* Offset to allow distinguishing irq vs. task-based idle entry/exit. */
> #define CT_NESTING_IRQ_NONIDLE ((LONG_MAX / 2) + 1)
> @@ -39,16 +40,19 @@ struct context_tracking {
> };
>
> /*
> - * We cram two different things within the same atomic variable:
> + * We cram up to three different things within the same atomic variable:
> *
> - * CT_RCU_WATCHING_START CT_STATE_START
> - * | |
> - * v v
> - * MSB [ RCU watching counter ][ context_state ] LSB
> - * ^ ^
> - * | |
> - * CT_RCU_WATCHING_END CT_STATE_END
> + * CT_RCU_WATCHING_START CT_STATE_START
> + * | CT_WORK_START |
> + * | | |
> + * v v v
> + * MSB [ RCU watching counter ][ context work ][ context_state ] LSB
> + * ^ ^ ^
> + * | | |
> + * | CT_WORK_END |
> + * CT_RCU_WATCHING_END CT_STATE_END
> *
> + * The [ context work ] region spans 0 bits if CONFIG_CONTEXT_TRACKING_WORK=n.
> * Bits are used from the LSB upwards, so unused bits (if any) will always be in
> * upper bits of the variable.
> */
> @@ -59,18 +63,24 @@ struct context_tracking {
> #define CT_STATE_START 0
> #define CT_STATE_END (CT_STATE_START + CT_STATE_WIDTH - 1)
>
> -#define CT_RCU_WATCHING_MAX_WIDTH (CT_SIZE - CT_STATE_WIDTH)
> +#define CT_WORK_WIDTH (IS_ENABLED(CONFIG_CONTEXT_TRACKING_WORK) ? CT_WORK_MAX_OFFSET : 0)
> +#define CT_WORK_START (CT_STATE_END + 1)
> +#define CT_WORK_END (CT_WORK_START + CT_WORK_WIDTH - 1)
> +
> +#define CT_RCU_WATCHING_MAX_WIDTH (CT_SIZE - CT_WORK_WIDTH - CT_STATE_WIDTH)
> #define CT_RCU_WATCHING_WIDTH (IS_ENABLED(CONFIG_RCU_DYNTICKS_TORTURE) ? 2 : CT_RCU_WATCHING_MAX_WIDTH)
> -#define CT_RCU_WATCHING_START (CT_STATE_END + 1)
> +#define CT_RCU_WATCHING_START (CT_WORK_END + 1)
> #define CT_RCU_WATCHING_END (CT_RCU_WATCHING_START + CT_RCU_WATCHING_WIDTH - 1)
> #define CT_RCU_WATCHING BIT(CT_RCU_WATCHING_START)
>
> #define CT_STATE_MASK GENMASK(CT_STATE_END, CT_STATE_START)
> +#define CT_WORK_MASK GENMASK(CT_WORK_END, CT_WORK_START)
> #define CT_RCU_WATCHING_MASK GENMASK(CT_RCU_WATCHING_END, CT_RCU_WATCHING_START)
>
> #define CT_UNUSED_WIDTH (CT_RCU_WATCHING_MAX_WIDTH - CT_RCU_WATCHING_WIDTH)
>
> static_assert(CT_STATE_WIDTH +
> + CT_WORK_WIDTH +
> CT_RCU_WATCHING_WIDTH +
> CT_UNUSED_WIDTH ==
> CT_SIZE);
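To make the layout concrete: assuming CT_STATE_WIDTH == 2 and the single
CT_WORK_n bit from this patch (CT_WORK_MAX_OFFSET == 1), the macros above
work out to:

	CT_STATE_START = 0, CT_STATE_END = 1	-> CT_STATE_MASK = 0x3
	CT_WORK_START  = 2, CT_WORK_END  = 2	-> CT_WORK_MASK  = 0x4
	CT_RCU_WATCHING_START = 3		-> CT_RCU_WATCHING = BIT(3)

IOW each deferred-work type costs the RCU watching counter exactly one bit
of width.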
> diff --git a/include/linux/context_tracking_work.h b/include/linux/context_tracking_work.h
> new file mode 100644
> index 0000000000000..c68245f8d77c5
> --- /dev/null
> +++ b/include/linux/context_tracking_work.h
> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_CONTEXT_TRACKING_WORK_H
> +#define _LINUX_CONTEXT_TRACKING_WORK_H
> +
> +#include <linux/bitops.h>
> +
> +enum {
> + CT_WORK_n_OFFSET,
> + CT_WORK_MAX_OFFSET
> +};
> +
> +enum ct_work {
> + CT_WORK_n = BIT(CT_WORK_n_OFFSET),
> + CT_WORK_MAX = BIT(CT_WORK_MAX_OFFSET)
> +};
> +
> +#include <asm/context_tracking_work.h>
> +
> +#ifdef CONFIG_CONTEXT_TRACKING_WORK
> +extern bool ct_set_cpu_work(unsigned int cpu, enum ct_work work);
> +#else
> +static inline bool
> +ct_set_cpu_work(unsigned int cpu, enum ct_work work) { return false; }
> +#endif
> +
> +#endif
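Adding a new deferred-work type then only takes a pair of enum entries plus a
case in the arch hook; hypothetically (CT_WORK_FOO is made up):

	enum {
		CT_WORK_n_OFFSET,
		CT_WORK_FOO_OFFSET,
		CT_WORK_MAX_OFFSET
	};

	enum ct_work {
		CT_WORK_n   = BIT(CT_WORK_n_OFFSET),
		CT_WORK_FOO = BIT(CT_WORK_FOO_OFFSET),
		CT_WORK_MAX = BIT(CT_WORK_MAX_OFFSET)
	};

Each new offset widens CT_WORK_WIDTH by one and shrinks the RCU watching
counter accordingly, which the static_assert in context_tracking_state.h
keeps honest.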
> diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
> index fb5be6e9b423f..3238bb1f41ff4 100644
> --- a/kernel/context_tracking.c
> +++ b/kernel/context_tracking.c
> @@ -72,6 +72,70 @@ static __always_inline void rcu_task_trace_heavyweight_exit(void)
> #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
> }
>
> +#ifdef CONFIG_CONTEXT_TRACKING_WORK
> +static noinstr void ct_work_flush(unsigned long seq)
> +{
> + int bit;
> +
> + seq = (seq & CT_WORK_MASK) >> CT_WORK_START;
> +
> + /*
> + * arch_context_tracking_work() must be noinstr, non-blocking,
> + * and NMI safe.
> + */
> + for_each_set_bit(bit, &seq, CT_WORK_MAX)
> + arch_context_tracking_work(BIT(bit));
> +}
> +
> +/**
> + * ct_set_cpu_work - set work to be run at next kernel context entry
> + *
> + * If @cpu is not currently executing in kernelspace, it will execute the
> + * callback mapped to @work (see arch_context_tracking_work()) at its next
> + * entry into ct_kernel_enter_state().
> + *
> + * If it is already executing in kernelspace, this will be a no-op.
> + */
> +bool ct_set_cpu_work(unsigned int cpu, enum ct_work work)
> +{
> + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> + unsigned int old;
> + bool ret = false;
> +
> + if (!ct->active)
> + return false;
> +
> + preempt_disable();
> +
> + old = atomic_read(&ct->state);
> +
> + /*
> + * The work bit must only be set if the target CPU is not executing
> + * in kernelspace.
> + * CT_RCU_WATCHING is used as a proxy for that - if the bit is set, we
> + * know for sure the CPU is executing in the kernel whether that be in
> + * NMI, IRQ or process context.
> + * Set CT_RCU_WATCHING here and let the cmpxchg do the check for us;
> + * the state could change between the atomic_read() and the cmpxchg().
> + */
> + old |= CT_RCU_WATCHING;
Most of the time, the target CPU should be either idle or in userspace. I'm
still not sure why you start with the bet that the CPU is in the kernel with
RCU watching.
> + /*
> + * Try setting the work until either
> + * - the target CPU has entered kernelspace
> + * - the work has been set
> + */
> + do {
> + ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> + } while (!ret && !(old & CT_RCU_WATCHING));
So this applies blindly to idle as well, right? It should work, but note that
the idle entry code that runs before RCU starts watching is also fragile.
The rest looks good.
Thanks!
> +
> + preempt_enable();
> + return ret;
> +}
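As for the intended calling pattern, IIUC users of this are expected to try
the deferral and fall back to an IPI when the target is already in the
kernel, along the lines of (hypothetical caller, do_work_fn is made up):

	if (!ct_set_cpu_work(cpu, CT_WORK_n))
		smp_call_function_single(cpu, do_work_fn, NULL, 1);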
> +#else
> +static __always_inline void ct_work_flush(unsigned long work) { }
> +static __always_inline void ct_work_clear(struct context_tracking *ct) { }
> +#endif
> +
> /*
> * Record entry into an extended quiescent state. This is only to be
> * called when not already in an extended quiescent state, that is,
> @@ -88,7 +152,7 @@ static noinstr void ct_kernel_exit_state(int offset)
> rcu_task_trace_heavyweight_enter(); // Before CT state update!
> // RCU is still watching. Better not be in extended quiescent state!
> WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !rcu_is_watching_curr_cpu());
> - (void)ct_state_inc(offset);
> + (void)ct_state_inc_clear_work(offset);
> // RCU is no longer watching.
> }
>
> @@ -99,7 +163,7 @@ static noinstr void ct_kernel_exit_state(int offset)
> */
> static noinstr void ct_kernel_enter_state(int offset)
> {
> - int seq;
> + unsigned long seq;
>
> /*
> * CPUs seeing atomic_add_return() must see prior idle sojourns,
> @@ -107,6 +171,7 @@ static noinstr void ct_kernel_enter_state(int offset)
> * critical section.
> */
> seq = ct_state_inc(offset);
> + ct_work_flush(seq);
> // RCU is now watching. Better not be in an extended quiescent state!
> rcu_task_trace_heavyweight_exit(); // After CT state update!
> WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & CT_RCU_WATCHING));
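For the record, my understanding of the end-to-end flow:

1. A remote CPU calls ct_set_cpu_work(); the cmpxchg only succeeds while the
   target's CT_RCU_WATCHING bit is clear, i.e. while it sits in userspace or
   idle.
2. On the target's next kernel entry, ct_state_inc() atomically picks the
   work bits up as part of @seq and ct_work_flush() runs the matching arch
   handlers.
3. The bits stay harmlessly set while in the kernel (no new work can be
   queued since CT_RCU_WATCHING is set) and get cleared on the next kernel
   exit via ct_state_inc_clear_work().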
> diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
> index 7c6a52f7836ce..1a0c027aad141 100644
> --- a/kernel/time/Kconfig
> +++ b/kernel/time/Kconfig
> @@ -181,6 +181,11 @@ config CONTEXT_TRACKING_USER_FORCE
> Say N otherwise, this option brings an overhead that you
> don't want in production.
>
> +config CONTEXT_TRACKING_WORK
> + bool
> + depends on HAVE_CONTEXT_TRACKING_WORK && CONTEXT_TRACKING_USER
> + default y
> +
> config NO_HZ
> bool "Old Idle dynticks config"
> help
> --
> 2.51.0
>
--
Frederic Weisbecker
SUSE Labs