From: Randy Dunlap <rdunlap@infradead.org>
To: Suren Baghdasaryan <surenb@google.com>, akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz,
hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
corbet@lwn.net, void@manifault.com, peterz@infradead.org,
juri.lelli@redhat.com, ldufour@linux.ibm.com,
catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de,
tglx@linutronix.de, mingo@redhat.com,
dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com,
david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org,
masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org,
tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org,
paulmck@kernel.org, pasha.tatashin@soleen.com,
yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com,
hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org,
ndesaulniers@google.com, gregkh@linuxfoundation.org,
ebiggers@google.com, ytcoode@gmail.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com,
vschneid@redhat.com, cl@linux.com, penberg@kernel.org,
iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com,
elver@google.com, dvyukov@google.com, shakeelb@google.com,
songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com,
minchan@google.com, kaleshsingh@google.com,
kernel-team@android.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-modules@vger.kernel.org,
kasan-dev@googlegroups.com, cgroups@vger.kernel.org
Subject: Re: [PATCH 07/40] Lazy percpu counters
Date: Mon, 1 May 2023 12:17:42 -0700
Message-ID: <91a360d3-c876-0a57-5cb1-e3a5f419080d@infradead.org>
In-Reply-To: <20230501165450.15352-8-surenb@google.com>
Hi--
On 5/1/23 09:54, Suren Baghdasaryan wrote:
> From: Kent Overstreet <kent.overstreet@linux.dev>
>
> This patch adds lib/lazy-percpu-counter.c, which implements counters
> that start out as atomics, but lazily switch to percpu mode if the
> update rate crosses some threshold (arbitrarily set at 256 per second).
>
from submitting-patches.rst:
Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
to do frotz", as if you are giving orders to the codebase to change
its behaviour.
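For this patch the first sentence could read e.g. (just a suggested
rewording of the existing text, not new wording of mine):

  Add lib/lazy-percpu-counter.c, which implements counters that start
  out as atomics but lazily switch to percpu mode if the update rate
  crosses some threshold (arbitrarily set at 256 per second).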
> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
> include/linux/lazy-percpu-counter.h | 102 ++++++++++++++++++++++
> lib/Kconfig | 3 +
> lib/Makefile | 2 +
> lib/lazy-percpu-counter.c | 127 ++++++++++++++++++++++++++++
> 4 files changed, 234 insertions(+)
> create mode 100644 include/linux/lazy-percpu-counter.h
> create mode 100644 lib/lazy-percpu-counter.c
>
> diff --git a/include/linux/lazy-percpu-counter.h b/include/linux/lazy-percpu-counter.h
> new file mode 100644
> index 000000000000..45ca9e2ce58b
> --- /dev/null
> +++ b/include/linux/lazy-percpu-counter.h
> @@ -0,0 +1,102 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Lazy percpu counters:
> + * (C) 2022 Kent Overstreet
> + *
> + * Lazy percpu counters start out in atomic mode, then switch to percpu mode if
> + * the update rate crosses some threshold.
> + *
> + * This means we don't have to decide between low memory overhead atomic
> + * counters and higher performance percpu counters - we can have our cake and
> + * eat it, too!
> + *
> + * Internally we use an atomic64_t, where the low bit indicates whether we're in
> + * percpu mode, and the high 8 bits are a secondary counter that's incremented
> + * when the counter is modified - meaning 55 bits of precision are available for
> + * the counter itself.
> + */
> +
> +#ifndef _LINUX_LAZY_PERCPU_COUNTER_H
> +#define _LINUX_LAZY_PERCPU_COUNTER_H
> +
> +#include <linux/atomic.h>
> +#include <asm/percpu.h>
> +
> +struct lazy_percpu_counter {
> + atomic64_t v;
> + unsigned long last_wrap;
> +};
> +
> +void lazy_percpu_counter_exit(struct lazy_percpu_counter *c);
> +void lazy_percpu_counter_add_slowpath(struct lazy_percpu_counter *c, s64 i);
> +void lazy_percpu_counter_add_slowpath_noupgrade(struct lazy_percpu_counter *c, s64 i);
> +s64 lazy_percpu_counter_read(struct lazy_percpu_counter *c);
> +
> +/*
> + * We use the high bits of the atomic counter for a secondary counter, which is
> + * incremented every time the counter is touched. When the secondary counter
> + * wraps, we check the time the counter last wrapped, and if it was recent
> + * enough that means the update frequency has crossed our threshold and we
> + * switch to percpu mode:
> + */
> +#define COUNTER_MOD_BITS 8
> +#define COUNTER_MOD_MASK ~(~0ULL >> COUNTER_MOD_BITS)
> +#define COUNTER_MOD_BITS_START (64 - COUNTER_MOD_BITS)
> +
> +/*
> + * We use the low bit of the counter to indicate whether we're in atomic mode
> + * (low bit clear), or percpu mode (low bit set, counter is a pointer to actual
> + * percpu counters:
> + */
> +#define COUNTER_IS_PCPU_BIT 1
> +
> +static inline u64 __percpu *lazy_percpu_counter_is_pcpu(u64 v)
> +{
> + if (!(v & COUNTER_IS_PCPU_BIT))
> + return NULL;
> +
> + v ^= COUNTER_IS_PCPU_BIT;
> + return (u64 __percpu *)(unsigned long)v;
> +}
> +
> +/**
> + * lazy_percpu_counter_add: Add a value to a lazy_percpu_counter
For kernel-doc, the function name should be followed by '-', not ':'.
(many places)
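E.g.:

 * lazy_percpu_counter_add - Add a value to a lazy_percpu_counter

so that scripts/kernel-doc picks up the short description correctly.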
> + *
> + * @c: counter to modify
> + * @i: value to add
> + */
> +static inline void lazy_percpu_counter_add(struct lazy_percpu_counter *c, s64 i)
> +{
> + u64 v = atomic64_read(&c->v);
> + u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
> +
> + if (likely(pcpu_v))
> + this_cpu_add(*pcpu_v, i);
> + else
> + lazy_percpu_counter_add_slowpath(c, i);
> +}
> +
> +/**
> + * lazy_percpu_counter_add_noupgrade: Add a value to a lazy_percpu_counter,
> + * without upgrading to percpu mode
> + *
> + * @c: counter to modify
> + * @i: value to add
> + */
> +static inline void lazy_percpu_counter_add_noupgrade(struct lazy_percpu_counter *c, s64 i)
> +{
> + u64 v = atomic64_read(&c->v);
> + u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
> +
> + if (likely(pcpu_v))
> + this_cpu_add(*pcpu_v, i);
> + else
> + lazy_percpu_counter_add_slowpath_noupgrade(c, i);
> +}
> +
> +static inline void lazy_percpu_counter_sub(struct lazy_percpu_counter *c, s64 i)
> +{
> + lazy_percpu_counter_add(c, -i);
> +}
> +
> +#endif /* _LINUX_LAZY_PERCPU_COUNTER_H */
> diff --git a/lib/lazy-percpu-counter.c b/lib/lazy-percpu-counter.c
> new file mode 100644
> index 000000000000..4f4e32c2dc09
> --- /dev/null
> +++ b/lib/lazy-percpu-counter.c
> @@ -0,0 +1,127 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/atomic.h>
> +#include <linux/gfp.h>
> +#include <linux/jiffies.h>
> +#include <linux/lazy-percpu-counter.h>
> +#include <linux/percpu.h>
> +
> +static inline s64 lazy_percpu_counter_atomic_val(s64 v)
> +{
> + /* Ensure output is sign extended properly: */
> + return (v << COUNTER_MOD_BITS) >>
> + (COUNTER_MOD_BITS + COUNTER_IS_PCPU_BIT);
> +}
> +
...
> +
> +/**
> + * lazy_percpu_counter_exit: Free resources associated with a
> + * lazy_percpu_counter
Same kernel-doc comment.
> + *
> + * @c: counter to exit
> + */
> +void lazy_percpu_counter_exit(struct lazy_percpu_counter *c)
> +{
> + free_percpu(lazy_percpu_counter_is_pcpu(atomic64_read(&c->v)));
> +}
> +EXPORT_SYMBOL_GPL(lazy_percpu_counter_exit);
> +
> +/**
> + * lazy_percpu_counter_read: Read current value of a lazy_percpu_counter
> + *
> + * @c: counter to read
> + */
> +s64 lazy_percpu_counter_read(struct lazy_percpu_counter *c)
> +{
> + s64 v = atomic64_read(&c->v);
> + u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
> +
> + if (pcpu_v) {
> + int cpu;
> +
> + v = 0;
> + for_each_possible_cpu(cpu)
> + v += *per_cpu_ptr(pcpu_v, cpu);
> + } else {
> + v = lazy_percpu_counter_atomic_val(v);
> + }
> +
> + return v;
> +}
> +EXPORT_SYMBOL_GPL(lazy_percpu_counter_read);
> +
> +void lazy_percpu_counter_add_slowpath(struct lazy_percpu_counter *c, s64 i)
> +{
> + u64 atomic_i;
> + u64 old, v = atomic64_read(&c->v);
> + u64 __percpu *pcpu_v;
> +
> + atomic_i = i << COUNTER_IS_PCPU_BIT;
> + atomic_i &= ~COUNTER_MOD_MASK;
> + atomic_i |= 1ULL << COUNTER_MOD_BITS_START;
> +
> + do {
> + pcpu_v = lazy_percpu_counter_is_pcpu(v);
> + if (pcpu_v) {
> + this_cpu_add(*pcpu_v, i);
> + return;
> + }
> +
> + old = v;
> + } while ((v = atomic64_cmpxchg(&c->v, old, old + atomic_i)) != old);
> +
> + if (unlikely(!(v & COUNTER_MOD_MASK))) {
> + unsigned long now = jiffies;
> +
> + if (c->last_wrap &&
> + unlikely(time_after(c->last_wrap + HZ, now)))
> + lazy_percpu_counter_switch_to_pcpu(c);
> + else
> + c->last_wrap = now;
> + }
> +}
> +EXPORT_SYMBOL(lazy_percpu_counter_add_slowpath);
> +
> +void lazy_percpu_counter_add_slowpath_noupgrade(struct lazy_percpu_counter *c, s64 i)
> +{
> + u64 atomic_i;
> + u64 old, v = atomic64_read(&c->v);
> + u64 __percpu *pcpu_v;
> +
> + atomic_i = i << COUNTER_IS_PCPU_BIT;
> + atomic_i &= ~COUNTER_MOD_MASK;
> +
> + do {
> + pcpu_v = lazy_percpu_counter_is_pcpu(v);
> + if (pcpu_v) {
> + this_cpu_add(*pcpu_v, i);
> + return;
> + }
> +
> + old = v;
> + } while ((v = atomic64_cmpxchg(&c->v, old, old + atomic_i)) != old);
> +}
> +EXPORT_SYMBOL(lazy_percpu_counter_add_slowpath_noupgrade);
These last 2 exported functions could use some comments, preferably in
kernel-doc format.
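Something along these lines, maybe (rough sketch only; the wording is
mine, based on reading the code above, so please adjust):

/**
 * lazy_percpu_counter_add_slowpath - add to a counter still in atomic mode
 * @c: counter to modify
 * @i: value to add
 *
 * Adds to the atomic counter and switches @c to percpu mode if the
 * update rate crosses the threshold.
 */

and similarly for the _noupgrade variant.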
Thanks.
--
~Randy