From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
linux-mm@kvack.org, Lorenzo Stoakes <ljs@kernel.org>,
Shakeel Butt <shakeel.butt@linux.dev>,
Suren Baghdasaryan <surenb@google.com>,
Vlastimil Babka <vbabka@kernel.org>
Subject: [PATCH 1/6] mm: Make per-VMA locks available universally
Date: Wed, 29 Apr 2026 11:19:55 -0700
Message-ID: <20260429181955.0C443845@davehans-spike.ostc.intel.com>
In-Reply-To: <20260429181954.F50224AE@davehans-spike.ostc.intel.com>

The per-VMA locks have been around for several years. They've had some
bugs worked out of them and have seen quite wide use. However, they
are still only available when architectures explicitly enable them.
Remove the conditional compilation around the per-VMA locks, making
them available on all architectures and configs.

The approach up to now seemed to be to add ARCH_SUPPORTS_PER_VMA_LOCK
when the architecture started using per-VMA locks in the fault
handler. But, contrary to the naming, the Kconfig option does not
really indicate whether the architecture supports per-VMA locks or
not. It is more of a marker for whether the architecture is likely to
benefit from per-VMA locks.

To me, the most important side effect of universal availability
is letting per-VMA locks be used in SMP=n configs. This lets us use
per-VMA locking in all x86 code without fallbacks.

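To make the "without fallbacks" point concrete, here is a stubbed
userspace-C sketch of the two-path pattern that conditional availability
forces on callers today. This is not kernel code: lock_vma_under_rcu()
is a real kernel function, but the stub bodies, the helper names, and
the have_per_vma_locks flag are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

struct vma { int data; };

static struct vma the_vma = { 42 };

/* Stand-in for the kernel's lock_vma_under_rcu(); in a config
 * built without per-VMA locks it can only ever fail. */
static int have_per_vma_locks = 1;

static struct vma *lock_vma_under_rcu_stub(unsigned long addr)
{
	(void)addr;
	return have_per_vma_locks ? &the_vma : NULL;
}

/* Stand-in for the mmap_read_lock() fallback path. */
static struct vma *mmap_lock_fallback_stub(unsigned long addr)
{
	(void)addr;
	return &the_vma;
}

/* With conditional per-VMA locks, every caller carries both paths.
 * Once the locks are universal, the NULL check and the fallback
 * path can simply go away. */
static struct vma *fault_lookup(unsigned long addr)
{
	struct vma *vma = lock_vma_under_rcu_stub(addr);

	if (!vma)
		vma = mmap_lock_fallback_stub(addr);
	return vma;
}
```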
Overall, this just generally makes the kernel simpler. Just look at
the diffstat. It also opens the door to users that want to use the
per-VMA locks in common code. Doing *that* can bring additional
simplifications.

The downside of this is adding some fields to vm_area_struct and
mm_struct. I suspect there are some very simple ways to implement the
per-VMA locks that don't require any additional fields, especially if
such an approach was limited to SMP=n configs*. For now, do the
simplest thing: use the same implementation everywhere.

* For example, since SMP=n configs don't care much about scalability or
false sharing, there could be a single, global VMA seqcount that is
bumped when any VMA is modified instead of having space in each VMA
for a seqcount.

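As a rough userspace-C sketch of that footnote (assumptions: plain C11
atomics stand in for the kernel's seqcount_t, and writer-vs-writer
exclusion is handled elsewhere, as the mmap lock would in the kernel),
a single global counter can replace the per-VMA one:

```c
#include <assert.h>
#include <stdatomic.h>

/* One global sequence counter shared by all VMAs. Fine for SMP=n,
 * where false sharing and scalability are non-issues. Writers hold
 * it at an odd value while any VMA is being modified. */
static atomic_uint global_vma_seq;

static unsigned vma_read_begin(void)
{
	unsigned seq;

	/* Spin past any in-progress writer (odd count). */
	while ((seq = atomic_load(&global_vma_seq)) & 1)
		;
	return seq;
}

static int vma_read_retry(unsigned seq)
{
	/* Retry if any writer started or finished since we began. */
	return atomic_load(&global_vma_seq) != seq;
}

static void vma_write_begin(void) { atomic_fetch_add(&global_vma_seq, 1); }
static void vma_write_end(void)   { atomic_fetch_add(&global_vma_seq, 1); }
```

The obvious cost is that modifying any VMA invalidates every reader's
speculation, which is exactly the trade-off the footnote accepts for
SMP=n.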
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Lorenzo Stoakes <ljs@kernel.org>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: linux-mm@kvack.org
---
b/arch/arm/Kconfig | 1
b/arch/arm64/Kconfig | 1
b/arch/loongarch/Kconfig | 1
b/arch/powerpc/platforms/powernv/Kconfig | 1
b/arch/powerpc/platforms/pseries/Kconfig | 1
b/arch/riscv/Kconfig | 1
b/arch/s390/Kconfig | 1
b/arch/x86/Kconfig | 2 -
b/fs/proc/internal.h | 2 -
b/fs/proc/task_mmu.c | 51 -------------------------------
b/include/linux/mm.h | 12 -------
b/include/linux/mm_types.h | 7 ----
b/include/linux/mmap_lock.h | 48 -----------------------------
b/kernel/fork.c | 2 -
b/mm/Kconfig | 13 -------
b/mm/mmap_lock.c | 2 -
16 files changed, 1 insertion(+), 145 deletions(-)
diff -puN arch/arm64/Kconfig~unconditional-vma-locks arch/arm64/Kconfig
--- a/arch/arm64/Kconfig~unconditional-vma-locks 2026-04-29 11:18:47.795519653 -0700
+++ b/arch/arm64/Kconfig 2026-04-29 11:18:49.088569421 -0700
@@ -80,7 +80,6 @@ config ARM64
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_PAGE_TABLE_CHECK
- select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
select ARCH_SUPPORTS_RT
select ARCH_SUPPORTS_SCHED_SMT
diff -puN arch/arm/Kconfig~unconditional-vma-locks arch/arm/Kconfig
--- a/arch/arm/Kconfig~unconditional-vma-locks 2026-04-29 11:18:47.915524272 -0700
+++ b/arch/arm/Kconfig 2026-04-29 11:18:49.088569421 -0700
@@ -41,7 +41,6 @@ config ARM
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_CFI
select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
- select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_SUPPORTS_RT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
diff -puN arch/loongarch/Kconfig~unconditional-vma-locks arch/loongarch/Kconfig
--- a/arch/loongarch/Kconfig~unconditional-vma-locks 2026-04-29 11:18:47.956525850 -0700
+++ b/arch/loongarch/Kconfig 2026-04-29 11:18:49.088569421 -0700
@@ -68,7 +68,6 @@ config LOONGARCH
select ARCH_SUPPORTS_LTO_CLANG_THIN
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
select ARCH_SUPPORTS_NUMA_BALANCING if NUMA
- select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_SUPPORTS_RT
select ARCH_SUPPORTS_SCHED_SMT if SMP
select ARCH_SUPPORTS_SCHED_MC if SMP
diff -puN arch/powerpc/platforms/powernv/Kconfig~unconditional-vma-locks arch/powerpc/platforms/powernv/Kconfig
--- a/arch/powerpc/platforms/powernv/Kconfig~unconditional-vma-locks 2026-04-29 11:18:47.969526350 -0700
+++ b/arch/powerpc/platforms/powernv/Kconfig 2026-04-29 11:18:49.089569460 -0700
@@ -17,7 +17,6 @@ config PPC_POWERNV
select PPC_DOORBELL
select MMU_NOTIFIER
select FORCE_SMP
- select ARCH_SUPPORTS_PER_VMA_LOCK
select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU
default y
diff -puN arch/powerpc/platforms/pseries/Kconfig~unconditional-vma-locks arch/powerpc/platforms/pseries/Kconfig
--- a/arch/powerpc/platforms/pseries/Kconfig~unconditional-vma-locks 2026-04-29 11:18:47.972526466 -0700
+++ b/arch/powerpc/platforms/pseries/Kconfig 2026-04-29 11:18:49.089569460 -0700
@@ -23,7 +23,6 @@ config PPC_PSERIES
select HOTPLUG_CPU
select FORCE_SMP
select SWIOTLB
- select ARCH_SUPPORTS_PER_VMA_LOCK
select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU
default y
diff -puN arch/riscv/Kconfig~unconditional-vma-locks arch/riscv/Kconfig
--- a/arch/riscv/Kconfig~unconditional-vma-locks 2026-04-29 11:18:48.060529854 -0700
+++ b/arch/riscv/Kconfig 2026-04-29 11:18:49.089569460 -0700
@@ -70,7 +70,6 @@ config RISCV
select ARCH_SUPPORTS_LTO_CLANG_THIN
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS if 64BIT && MMU
select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
- select ARCH_SUPPORTS_PER_VMA_LOCK if MMU
select ARCH_SUPPORTS_RT
select ARCH_SUPPORTS_SHADOW_CALL_STACK if HAVE_SHADOW_CALL_STACK
select ARCH_SUPPORTS_SCHED_MC if SMP
diff -puN arch/s390/Kconfig~unconditional-vma-locks arch/s390/Kconfig
--- a/arch/s390/Kconfig~unconditional-vma-locks 2026-04-29 11:18:48.125532357 -0700
+++ b/arch/s390/Kconfig 2026-04-29 11:18:49.089569460 -0700
@@ -153,7 +153,6 @@ config S390
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_PAGE_TABLE_CHECK
- select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_USE_SYM_ANNOTATIONS
diff -puN arch/x86/Kconfig~unconditional-vma-locks arch/x86/Kconfig
--- a/arch/x86/Kconfig~unconditional-vma-locks 2026-04-29 11:18:48.128532472 -0700
+++ b/arch/x86/Kconfig 2026-04-29 11:18:49.090569499 -0700
@@ -27,7 +27,6 @@ config X86_64
select ARCH_HAS_GIGANTIC_PAGE
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
- select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
select HAVE_ARCH_SOFT_DIRTY
select MODULES_USE_ELF_RELA
@@ -1885,7 +1884,6 @@ config X86_USER_SHADOW_STACK
bool "X86 userspace shadow stack"
depends on AS_WRUSS
depends on X86_64
- depends on PER_VMA_LOCK
select ARCH_USES_HIGH_VMA_FLAGS
select ARCH_HAS_USER_SHADOW_STACK
select X86_CET
diff -puN fs/proc/internal.h~unconditional-vma-locks fs/proc/internal.h
--- a/fs/proc/internal.h~unconditional-vma-locks 2026-04-29 11:18:48.305539283 -0700
+++ b/fs/proc/internal.h 2026-04-29 11:18:49.090569499 -0700
@@ -382,10 +382,8 @@ struct mem_size_stats;
struct proc_maps_locking_ctx {
struct mm_struct *mm;
-#ifdef CONFIG_PER_VMA_LOCK
bool mmap_locked;
struct vm_area_struct *locked_vma;
-#endif
};
struct proc_maps_private {
diff -puN fs/proc/task_mmu.c~unconditional-vma-locks fs/proc/task_mmu.c
--- a/fs/proc/task_mmu.c~unconditional-vma-locks 2026-04-29 11:18:48.346540861 -0700
+++ b/fs/proc/task_mmu.c 2026-04-29 11:18:49.090569499 -0700
@@ -130,8 +130,6 @@ static void release_task_mempolicy(struc
}
#endif
-#ifdef CONFIG_PER_VMA_LOCK
-
static void reset_lock_ctx(struct proc_maps_locking_ctx *lock_ctx)
{
lock_ctx->locked_vma = NULL;
@@ -213,33 +211,6 @@ static inline bool fallback_to_mmap_lock
return true;
}
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool lock_vma_range(struct seq_file *m,
- struct proc_maps_locking_ctx *lock_ctx)
-{
- return mmap_read_lock_killable(lock_ctx->mm) == 0;
-}
-
-static inline void unlock_vma_range(struct proc_maps_locking_ctx *lock_ctx)
-{
- mmap_read_unlock(lock_ctx->mm);
-}
-
-static struct vm_area_struct *get_next_vma(struct proc_maps_private *priv,
- loff_t last_pos)
-{
- return vma_next(&priv->iter);
-}
-
-static inline bool fallback_to_mmap_lock(struct proc_maps_private *priv,
- loff_t pos)
-{
- return false;
-}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
static struct vm_area_struct *proc_get_vma(struct seq_file *m, loff_t *ppos)
{
struct proc_maps_private *priv = m->private;
@@ -527,8 +498,6 @@ static int pid_maps_open(struct inode *i
PROCMAP_QUERY_VMA_FLAGS \
)
-#ifdef CONFIG_PER_VMA_LOCK
-
static int query_vma_setup(struct proc_maps_locking_ctx *lock_ctx)
{
reset_lock_ctx(lock_ctx);
@@ -581,26 +550,6 @@ static struct vm_area_struct *query_vma_
return vma;
}
-#else /* CONFIG_PER_VMA_LOCK */
-
-static int query_vma_setup(struct proc_maps_locking_ctx *lock_ctx)
-{
- return mmap_read_lock_killable(lock_ctx->mm);
-}
-
-static void query_vma_teardown(struct proc_maps_locking_ctx *lock_ctx)
-{
- mmap_read_unlock(lock_ctx->mm);
-}
-
-static struct vm_area_struct *query_vma_find_by_addr(struct proc_maps_locking_ctx *lock_ctx,
- unsigned long addr)
-{
- return find_vma(lock_ctx->mm, addr);
-}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
static struct vm_area_struct *query_matching_vma(struct proc_maps_locking_ctx *lock_ctx,
unsigned long addr, u32 flags)
{
diff -puN include/linux/mmap_lock.h~unconditional-vma-locks include/linux/mmap_lock.h
--- a/include/linux/mmap_lock.h~unconditional-vma-locks 2026-04-29 11:18:48.700554487 -0700
+++ b/include/linux/mmap_lock.h 2026-04-29 11:18:49.091569537 -0700
@@ -76,8 +76,6 @@ static inline void mmap_assert_write_loc
rwsem_assert_held_write(&mm->mmap_lock);
}
-#ifdef CONFIG_PER_VMA_LOCK
-
#ifdef CONFIG_LOCKDEP
#define __vma_lockdep_map(vma) (&vma->vmlock_dep_map)
#else
@@ -484,52 +482,6 @@ struct vm_area_struct *lock_next_vma(str
struct vma_iterator *iter,
unsigned long address);
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
-static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
-static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
-
-static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
-{
- return false;
-}
-
-static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
-{
- return true;
-}
-static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline void vma_end_read(struct vm_area_struct *vma) {}
-static inline void vma_start_write(struct vm_area_struct *vma) {}
-static inline __must_check
-int vma_start_write_killable(struct vm_area_struct *vma) { return 0; }
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
- { mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_assert_attached(struct vm_area_struct *vma) {}
-static inline void vma_assert_detached(struct vm_area_struct *vma) {}
-static inline void vma_mark_attached(struct vm_area_struct *vma) {}
-static inline void vma_mark_detached(struct vm_area_struct *vma) {}
-
-static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
- unsigned long address)
-{
- return NULL;
-}
-
-static inline void vma_assert_locked(struct vm_area_struct *vma)
-{
- mmap_assert_locked(vma->vm_mm);
-}
-
-static inline void vma_assert_stabilised(struct vm_area_struct *vma)
-{
- /* If no VMA locks, then either mmap lock suffices to stabilise. */
- mmap_assert_locked(vma->vm_mm);
-}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
static inline void mmap_write_lock(struct mm_struct *mm)
{
__mmap_lock_trace_start_locking(mm, true);
diff -puN include/linux/mm.h~unconditional-vma-locks include/linux/mm.h
--- a/include/linux/mm.h~unconditional-vma-locks 2026-04-29 11:18:48.714555026 -0700
+++ b/include/linux/mm.h 2026-04-29 11:18:49.091569537 -0700
@@ -890,7 +890,6 @@ static inline void vma_numab_state_free(
* These must be here rather than mmap_lock.h as dependent on vm_fault type,
* declared in this header.
*/
-#ifdef CONFIG_PER_VMA_LOCK
static inline void release_fault_lock(struct vm_fault *vmf)
{
if (vmf->flags & FAULT_FLAG_VMA_LOCK)
@@ -906,17 +905,6 @@ static inline void assert_fault_locked(c
else
mmap_assert_locked(vmf->vma->vm_mm);
}
-#else
-static inline void release_fault_lock(struct vm_fault *vmf)
-{
- mmap_read_unlock(vmf->vma->vm_mm);
-}
-
-static inline void assert_fault_locked(const struct vm_fault *vmf)
-{
- mmap_assert_locked(vmf->vma->vm_mm);
-}
-#endif /* CONFIG_PER_VMA_LOCK */
static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
{
diff -puN include/linux/mm_types.h~unconditional-vma-locks include/linux/mm_types.h
--- a/include/linux/mm_types.h~unconditional-vma-locks 2026-04-29 11:18:48.761556836 -0700
+++ b/include/linux/mm_types.h 2026-04-29 11:18:49.092569576 -0700
@@ -959,7 +959,6 @@ struct vm_area_struct {
vma_flags_t flags;
};
-#ifdef CONFIG_PER_VMA_LOCK
/*
* Can only be written (using WRITE_ONCE()) while holding both:
* - mmap_lock (in write mode)
@@ -975,7 +974,7 @@ struct vm_area_struct {
* slowpath.
*/
unsigned int vm_lock_seq;
-#endif
+
/*
* A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
* list, after a COW of one of the file pages. A MAP_SHARED vma
@@ -1007,7 +1006,6 @@ struct vm_area_struct {
#ifdef CONFIG_NUMA_BALANCING
struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
-#ifdef CONFIG_PER_VMA_LOCK
/*
* Used to keep track of firstly, whether the VMA is attached, secondly,
* if attached, how many read locks are taken, and thirdly, if the
@@ -1050,7 +1048,6 @@ struct vm_area_struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map vmlock_dep_map;
#endif
-#endif
/*
* For areas with an address space and backing store,
* linkage into the address_space->i_mmap interval tree.
@@ -1249,7 +1246,6 @@ struct mm_struct {
* init_mm.mmlist, and are protected
* by mmlist_lock
*/
-#ifdef CONFIG_PER_VMA_LOCK
struct rcuwait vma_writer_wait;
/*
* This field has lock-like semantics, meaning it is sometimes
@@ -1269,7 +1265,6 @@ struct mm_struct {
* mmap_lock.
*/
seqcount_t mm_lock_seq;
-#endif
#ifdef CONFIG_FUTEX_PRIVATE_HASH
struct mutex futex_hash_lock;
struct futex_private_hash __rcu *futex_phash;
diff -puN kernel/fork.c~unconditional-vma-locks kernel/fork.c
--- a/kernel/fork.c~unconditional-vma-locks 2026-04-29 11:18:48.774557336 -0700
+++ b/kernel/fork.c 2026-04-29 11:18:49.092569576 -0700
@@ -1067,9 +1067,7 @@ static void mmap_init_lock(struct mm_str
{
init_rwsem(&mm->mmap_lock);
mm_lock_seqcount_init(mm);
-#ifdef CONFIG_PER_VMA_LOCK
rcuwait_init(&mm->vma_writer_wait);
-#endif
}
static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
diff -puN mm/Kconfig~unconditional-vma-locks mm/Kconfig
--- a/mm/Kconfig~unconditional-vma-locks 2026-04-29 11:18:48.838559801 -0700
+++ b/mm/Kconfig 2026-04-29 11:18:49.093569614 -0700
@@ -1394,19 +1394,6 @@ config LRU_GEN_STATS
config LRU_GEN_WALKS_MMU
def_bool y
depends on LRU_GEN && ARCH_HAS_HW_PTE_YOUNG
-# }
-
-config ARCH_SUPPORTS_PER_VMA_LOCK
- def_bool n
-
-config PER_VMA_LOCK
- def_bool y
- depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
- help
- Allow per-vma locking during page fault handling.
-
- This feature allows locking each virtual memory area separately when
- handling page faults instead of taking mmap_lock.
config LOCK_MM_AND_FIND_VMA
bool
diff -puN mm/mmap_lock.c~unconditional-vma-locks mm/mmap_lock.c
--- a/mm/mmap_lock.c~unconditional-vma-locks 2026-04-29 11:18:49.084569267 -0700
+++ b/mm/mmap_lock.c 2026-04-29 11:18:49.093569614 -0700
@@ -44,7 +44,6 @@ EXPORT_SYMBOL(__mmap_lock_do_trace_relea
#endif /* CONFIG_TRACING */
#ifdef CONFIG_MMU
-#ifdef CONFIG_PER_VMA_LOCK
/* State shared across __vma_[start, end]_exclude_readers. */
struct vma_exclude_readers_state {
@@ -431,7 +430,6 @@ fallback:
return vma;
}
-#endif /* CONFIG_PER_VMA_LOCK */
#ifdef CONFIG_LOCK_MM_AND_FIND_VMA
#include <linux/extable.h>
_
2026-04-29 18:19 [PATCH 0/6] mm: Make per-VMA locks available in all builds Dave Hansen
2026-04-29 18:19 ` Dave Hansen [this message]
2026-04-29 18:19 ` [PATCH 2/6] binder: Make shrinker rely solely on per-VMA lock Dave Hansen
2026-04-29 18:19 ` [PATCH 3/6] mm: Add RCU-based VMA lookup that waits for writers Dave Hansen
2026-04-29 18:20 ` [PATCH 4/6] binder: Remove mmap_lock fallback Dave Hansen
2026-04-29 18:20 ` [PATCH 5/6] tcp: Remove mmap_lock fallback path Dave Hansen
2026-04-29 18:20 ` [PATCH 6/6] x86/mm: Avoid mmap lock for shadow stack pop fast path Dave Hansen
2026-05-04 23:15 ` Edgecombe, Rick P
2026-05-05 16:39 ` Dave Hansen
2026-04-29 18:22 ` [PATCH 0/6] mm: Make per-VMA locks available in all builds Dave Hansen
2026-04-30 8:11 ` Lorenzo Stoakes
2026-04-30 17:17 ` Suren Baghdasaryan
2026-04-30 17:20 ` Dave Hansen
2026-04-30 7:55 ` [syzbot ci] " syzbot ci
2026-04-30 16:59 ` Dave Hansen
[not found] ` <20260430072053.e0be1b431bcff02831f07e9d@linux-foundation.org>
2026-04-30 16:52 ` [PATCH 0/6] " Dave Hansen