From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, Linus Torvalds <torvalds@linux-foundation.org>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Dave Hansen <dave.hansen@intel.com>,
Borislav Petkov <bpetkov@suse.de>,
Greg KH <gregkh@linuxfoundation.org>,
keescook@google.com, hughd@google.com,
Brian Gerst <brgerst@gmail.com>,
Josh Poimboeuf <jpoimboe@redhat.com>,
Denys Vlasenko <dvlasenk@redhat.com>,
Rik van Riel <riel@redhat.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Juergen Gross <jgross@suse.com>,
David Laight <David.Laight@aculab.com>,
Eduardo Valentin <eduval@amazon.com>,
aliguori@amazon.com, Will Deacon <will.deacon@arm.com>,
daniel.gruss@iaik.tugraz.at,
Dave Hansen <dave.hansen@linux.intel.com>,
Ingo Molnar <mingo@kernel.org>, Borislav Petkov <bp@suse.de>,
Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
linux-mm@kvack.org
Subject: [patch V163 22/51] x86/mm/pti: Prepare the x86/entry assembly code for entry/exit CR3 switching
Date: Mon, 18 Dec 2017 12:42:37 +0100
Message-ID: <20171218115255.418928476@linutronix.de>
In-Reply-To: <20171218114215.239543034@linutronix.de>
From: Dave Hansen <dave.hansen@linux.intel.com>
PAGE_TABLE_ISOLATION needs to switch to a different CR3 value when it
enters the kernel and switch back when it exits. This essentially needs to
be done before leaving assembly code.
This is extra challenging because the switching context is tricky: the set
of registers that may be clobbered varies from entry point to entry point.
It is also hard to store things on the stack, because either an established
ABI (ptregs) dictates the layout or the stack is entirely unsafe to use.
Establish a set of macros that allow changing to the user and kernel CR3
values.
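To illustrate the idea (an editorial C sketch only, not part of the patch;
the SKETCH_* names are made up), the kernel and user PGDs sit in one 8k
allocation, so switching between them amounts to flipping bit PAGE_SHIFT
(bit 12) in the CR3 value:

	#include <stdint.h>

	#define SKETCH_PAGE_SHIFT	12
	#define SKETCH_PTI_SWITCH_MASK	(1UL << SKETCH_PAGE_SHIFT)

	/* Clear the switch bit: CR3 points at the kernel half of the PGD pair. */
	static inline uint64_t sketch_kernel_cr3(uint64_t cr3)
	{
		return cr3 & ~SKETCH_PTI_SWITCH_MASK;
	}

	/* Set the switch bit: CR3 points at the user half of the PGD pair. */
	static inline uint64_t sketch_user_cr3(uint64_t cr3)
	{
		return cr3 | SKETCH_PTI_SWITCH_MASK;
	}

The ADJUST_KERNEL_CR3 / ADJUST_USER_CR3 macros in the patch below perform
the same AND/OR directly on the value read from %cr3.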
Interactions with SWAPGS:
Previous versions of the PAGE_TABLE_ISOLATION code relied on having
per-CPU scratch space to save/restore a register that can be used for the
CR3 MOV. The %GS register is used to index into our per-CPU space, so
SWAPGS *had* to be done before the CR3 switch. That scratch space is gone
now, but the semantic that SWAPGS must be done before the CR3 MOV is
retained. This is worth keeping because it is not hard to do and it allows
us to do things like add per-CPU debugging information.
What this does in the NMI code is worth pointing out. NMIs can interrupt
*any* context and they can also be nested with NMIs interrupting other
NMIs. The comments below ".Lnmi_from_kernel" explain the stack format in
this situation; changing that format is hard. So, instead of storing the
old CR3 value on the stack, the code relies on the *regular* register
save/restore mechanism and uses %r14 to hold CR3 across the NMI. %r14 is
callee-saved and will not be clobbered by the C NMI handlers that get
called.
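A rough C analogue of that NMI flow (again an editorial sketch with
hypothetical stand-ins, not kernel code): the incoming CR3 value is kept in
a local variable, just as the assembly keeps it in callee-saved %r14, and
written back unconditionally on the way out:

	#include <stdint.h>

	/* Hypothetical stand-ins for the real CR3 accessors and the C handler. */
	extern uint64_t sketch_read_cr3(void);
	extern void sketch_write_cr3(uint64_t val);
	extern void sketch_do_nmi(void);

	#define SKETCH_PTI_SWITCH_MASK	(1UL << 12)

	static void sketch_nmi_cr3(void)
	{
		/* SAVE_AND_SWITCH_TO_KERNEL_CR3: stash the interrupted CR3 (the "%r14"). */
		uint64_t saved_cr3 = sketch_read_cr3();

		/* Only write CR3 if we actually came in on the user page tables. */
		if (saved_cr3 & SKETCH_PTI_SWITCH_MASK)
			sketch_write_cr3(saved_cr3 & ~SKETCH_PTI_SWITCH_MASK);

		sketch_do_nmi();

		/* RESTORE_CR3: put back whatever the interrupted context was using. */
		sketch_write_cr3(saved_cr3);
	}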
[ PeterZ: ESPFIX optimization ]
Based-on-code-from: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: linux-mm@kvack.org
---
arch/x86/entry/calling.h | 66 +++++++++++++++++++++++++++++++++++++++
arch/x86/entry/entry_64.S | 45 +++++++++++++++++++++++---
arch/x86/entry/entry_64_compat.S | 24 +++++++++++++-
3 files changed, 128 insertions(+), 7 deletions(-)
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -1,6 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/jump_label.h>
#include <asm/unwind_hints.h>
+#include <asm/cpufeatures.h>
+#include <asm/page_types.h>
/*
@@ -187,6 +189,70 @@ For 32-bit we have the following convent
#endif
.endm
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+
+/* PAGE_TABLE_ISOLATION PGDs are 8k. Flip bit 12 to switch between the two halves: */
+#define PTI_SWITCH_MASK (1<<PAGE_SHIFT)
+
+.macro ADJUST_KERNEL_CR3 reg:req
+ /* Clear "PAGE_TABLE_ISOLATION bit", point CR3 at kernel pagetables: */
+ andq $(~PTI_SWITCH_MASK), \reg
+.endm
+
+.macro ADJUST_USER_CR3 reg:req
+ /* Move CR3 up a page to the user page tables: */
+ orq $(PTI_SWITCH_MASK), \reg
+.endm
+
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+ mov %cr3, \scratch_reg
+ ADJUST_KERNEL_CR3 \scratch_reg
+ mov \scratch_reg, %cr3
+.endm
+
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+ mov %cr3, \scratch_reg
+ ADJUST_USER_CR3 \scratch_reg
+ mov \scratch_reg, %cr3
+.endm
+
+.macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+ movq %cr3, \scratch_reg
+ movq \scratch_reg, \save_reg
+ /*
+	 * Is the switch bit already zero? Then CR3 already points at the
+	 * kernel page tables and the CR3 write below can be skipped.
+ */
+ testq $(PTI_SWITCH_MASK), \scratch_reg
+ jz .Ldone_\@
+
+ ADJUST_KERNEL_CR3 \scratch_reg
+ movq \scratch_reg, %cr3
+
+.Ldone_\@:
+.endm
+
+.macro RESTORE_CR3 save_reg:req
+ /*
+ * The CR3 write could be avoided when not changing its value,
+ * but would require a CR3 read *and* a scratch register.
+ */
+ movq \save_reg, %cr3
+.endm
+
+#else /* CONFIG_PAGE_TABLE_ISOLATION=n: */
+
+.macro SWITCH_TO_KERNEL_CR3 scratch_reg:req
+.endm
+.macro SWITCH_TO_USER_CR3 scratch_reg:req
+.endm
+.macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
+.endm
+.macro RESTORE_CR3 save_reg:req
+.endm
+
+#endif
+
#endif /* CONFIG_X86_64 */
/*
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -164,6 +164,9 @@ ENTRY(entry_SYSCALL_64_trampoline)
/* Stash the user RSP. */
movq %rsp, RSP_SCRATCH
+ /* Note: using %rsp as a scratch reg. */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
/* Load the top of the task stack into RSP */
movq CPU_ENTRY_AREA_tss + TSS_sp1 + CPU_ENTRY_AREA, %rsp
@@ -203,6 +206,10 @@ ENTRY(entry_SYSCALL_64)
*/
swapgs
+ /*
+	 * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
+	 * is not required to switch CR3.
+ */
movq %rsp, PER_CPU_VAR(rsp_scratch)
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
@@ -399,6 +406,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
* We are on the trampoline stack. All regs except RDI are live.
* We can do future final exit work right here.
*/
+ SWITCH_TO_USER_CR3 scratch_reg=%rdi
popq %rdi
popq %rsp
@@ -736,6 +744,8 @@ GLOBAL(swapgs_restore_regs_and_return_to
* We can do future final exit work right here.
*/
+ SWITCH_TO_USER_CR3 scratch_reg=%rdi
+
/* Restore RDI. */
popq %rdi
SWAPGS
@@ -818,7 +828,9 @@ ENTRY(native_iret)
*/
pushq %rdi /* Stash user RDI */
- SWAPGS
+ SWAPGS /* to kernel GS */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi /* to kernel CR3 */
+
movq PER_CPU_VAR(espfix_waddr), %rdi
movq %rax, (0*8)(%rdi) /* user RAX */
movq (1*8)(%rsp), %rax /* user RIP */
@@ -834,7 +846,6 @@ ENTRY(native_iret)
/* Now RAX == RSP. */
andl $0xffff0000, %eax /* RAX = (RSP & 0xffff0000) */
- popq %rdi /* Restore user RDI */
/*
* espfix_stack[31:16] == 0. The page tables are set up such that
@@ -845,7 +856,11 @@ ENTRY(native_iret)
* still points to an RO alias of the ESPFIX stack.
*/
orq PER_CPU_VAR(espfix_stack), %rax
- SWAPGS
+
+ SWITCH_TO_USER_CR3 scratch_reg=%rdi /* to user CR3 */
+ SWAPGS /* to user GS */
+ popq %rdi /* Restore user RDI */
+
movq %rax, %rsp
UNWIND_HINT_IRET_REGS offset=8
@@ -945,6 +960,8 @@ ENTRY(switch_to_thread_stack)
UNWIND_HINT_FUNC
pushq %rdi
+ /* Need to switch before accessing the thread stack. */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
movq %rsp, %rdi
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
UNWIND_HINT sp_offset=16 sp_reg=ORC_REG_DI
@@ -1244,7 +1261,11 @@ ENTRY(paranoid_entry)
js 1f /* negative -> in kernel */
SWAPGS
xorl %ebx, %ebx
-1: ret
+
+1:
+ SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
+
+ ret
END(paranoid_entry)
/*
@@ -1266,6 +1287,7 @@ ENTRY(paranoid_exit)
testl %ebx, %ebx /* swapgs needed? */
jnz .Lparanoid_exit_no_swapgs
TRACE_IRQS_IRETQ
+ RESTORE_CR3 save_reg=%r14
SWAPGS_UNSAFE_STACK
jmp .Lparanoid_exit_restore
.Lparanoid_exit_no_swapgs:
@@ -1293,6 +1315,8 @@ ENTRY(error_entry)
* from user mode due to an IRET fault.
*/
SWAPGS
+ /* We have user CR3. Change to kernel CR3. */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
.Lerror_entry_from_usermode_after_swapgs:
/* Put us onto the real thread stack. */
@@ -1339,6 +1363,7 @@ ENTRY(error_entry)
* .Lgs_change's error handler with kernel gsbase.
*/
SWAPGS
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
jmp .Lerror_entry_done
.Lbstep_iret:
@@ -1348,10 +1373,11 @@ ENTRY(error_entry)
.Lerror_bad_iret:
/*
- * We came from an IRET to user mode, so we have user gsbase.
- * Switch to kernel gsbase:
+ * We came from an IRET to user mode, so we have user
+ * gsbase and CR3. Switch to kernel gsbase and CR3:
*/
SWAPGS
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
/*
* Pretend that the exception came from user mode: set up pt_regs
@@ -1383,6 +1409,10 @@ END(error_exit)
/*
* Runs on exception stack. Xen PV does not go through this path at all,
* so we can use real assembly here.
+ *
+ * Registers:
+ * %r14: Used to save/restore the CR3 of the interrupted context
+ * when PAGE_TABLE_ISOLATION is in use. Do not clobber.
*/
ENTRY(nmi)
UNWIND_HINT_IRET_REGS
@@ -1446,6 +1476,7 @@ ENTRY(nmi)
swapgs
cld
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
UNWIND_HINT_IRET_REGS base=%rdx offset=8
@@ -1698,6 +1729,8 @@ ENTRY(nmi)
movq $-1, %rsi
call do_nmi
+ RESTORE_CR3 save_reg=%r14
+
testl %ebx, %ebx /* swapgs needed? */
jnz nmi_restore
nmi_swapgs:
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -49,6 +49,10 @@
ENTRY(entry_SYSENTER_compat)
/* Interrupts are off on entry. */
SWAPGS
+
+ /* We are about to clobber %rsp anyway, clobbering here is OK */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
/*
@@ -216,6 +220,12 @@ GLOBAL(entry_SYSCALL_compat_after_hwfram
pushq $0 /* pt_regs->r15 = 0 */
/*
+ * We just saved %rdi so it is safe to clobber. It is not
+ * preserved during the C calls inside TRACE_IRQS_OFF anyway.
+ */
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+
+ /*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
@@ -256,10 +266,22 @@ GLOBAL(entry_SYSCALL_compat_after_hwfram
* when the system call started, which is already known to user
* code. We zero R8-R10 to avoid info leaks.
*/
+ movq RSP-ORIG_RAX(%rsp), %rsp
+
+ /*
+ * The original userspace %rsp (RSP-ORIG_RAX(%rsp)) is stored
+ * on the process stack which is not mapped to userspace and
+ * not readable after we SWITCH_TO_USER_CR3. Delay the CR3
+	 * switch until after the last reference to the process
+ * stack.
+ *
+ * %r8 is zeroed before the sysret, thus safe to clobber.
+ */
+ SWITCH_TO_USER_CR3 scratch_reg=%r8
+
xorq %r8, %r8
xorq %r9, %r9
xorq %r10, %r10
- movq RSP-ORIG_RAX(%rsp), %rsp
swapgs
sysretl
END(entry_SYSCALL_compat)