* [PATCH v11 0/4] Machine check recovery when kernel accesses poison
@ 2016-02-11 21:34 Tony Luck
2016-02-11 21:34 ` [PATCH v11 1/4] x86: Expand exception table to allow new handling options Tony Luck
` (4 more replies)
0 siblings, 5 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-11 21:34 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
This series is initially targeted at the folks doing filesystems
on top of NVDIMMs. They really want to be able to return -EIO
when there is a h/w error (just like spinning rust and SSDs do).
I plan to use the same infrastructure to write a machine check aware
"copy_from_user()" that will SIGBUS the calling application when a
syscall touches poison in user space (just like we do when the application
touches the poison itself).
I've dropped the "reviewed-by" tags that I collected back prior to
adding the new field to the exception table. Please send new ones
if you can.
Changes
V10->V11
Boris: Optimize for aligned case in __mcsafe_copy()
Boris: Add whitespace and comments to __mcsafe_copy() for readability
Boris: Move Xeon E7 check to Intel quirks
Boris: Simpler description for mce=recovery command line option
V9->V10
Andy: Commit comment in part 2 is stale - refers to "EXTABLE_CLASS_FAULT"
Boris: Part1 - Numerous spelling, grammar, etc. fixes
Boris: Part2 - No longer need #include <linux/module.h> (in either file).
V8->V9
Boris: Create a synthetic cpu capability for machine check recovery.
Changes V7-V8
Boris: Would be so much cleaner if we added a new field to the exception table
instead of squeezing bits into the fixup field. New field added
Tony: Documentation needs to be updated. Done
Changes V6-V7:
Boris: Why add/subtract 0x20000000? Added better comment provided by Andy
Boris: Churn. Part2 changes things only introduced in part1.
Merged parts 1&2 into one patch.
Ingo: Missing my sign off on part1. Added.
Changes V5-V6
Andy: Provoked massive re-write by providing what is now part1 of this
patch series. This frees up two bits in the exception table
fixup field that can be used to tag exception table entries
as different "classes". This means we don't need my separate
exception table for machine checks. Also avoids duplicating
fixup actions for #PF and #MC cases that were in version 5.
Andy: Use C99 array initializers to tie the various class fixup
functions back to the definitions of each class. Also give the
functions meaningful names (not fixup_class0() etc.).
Boris: Cleaned up my lousy assembly code removing many spurious 'l'
modifiers on instructions.
Boris: Provided some helper functions for the machine check severity
calculation that make the code more readable.
Boris: Have __mcsafe_copy() return a structure with the 'remaining bytes'
in a separate field from the fault indicator. Boris had suggested
Linux -EFAULT/-EINVAL ... but I thought it made more sense to return
the exception number (X86_TRAP_MC, etc.). This finally kills off
BIT(63) which has been controversial throughout all the early versions
of this patch series.
Changes V4-V5
Tony: Extended __mcsafe_copy() to have fixup entries for both machine
check and page fault.
Changes V3-V4:
Andy: Simplify fixup_mcexception() by dropping used-once local variable
Andy: "Reviewed-by" tag added to part1
Boris: Moved new functions to memcpy_64.S and declaration to asm/string_64.h
Boris: Changed name s/mcsafe_memcpy/__mcsafe_copy/ to make it clear that this
is an internal function and that return value doesn't follow memcpy() semantics.
Boris: "Reviewed-by" tag added to parts 1&2
Changes V2-V3:
Andy: Don't hack "regs->ax = BIT(63) | addr;" in the machine check
handler. Now have better fixup code that computes the number
of remaining bytes (just like page-fault fixup).
Andy: #define for BIT(63). Done, plus couple of extra macros using it.
Boris: Don't clutter up generic code (like mm/extable.c) with this.
I moved everything under arch/x86 (the asm-generic change is
a more generic #define).
Boris: Dependencies for CONFIG_MCE_KERNEL_RECOVERY are too generic.
I made it a real menu item with default "n". Dan Williams
will use "select MCE_KERNEL_RECOVERY" from his persistent
filesystem code.
Boris: Simplify conditionals in mce.c by moving tolerant/kill_it
checks earlier, with a skip to end if they aren't set.
Boris: Miscellaneous grammar/punctuation. Fixed.
Boris: Don't leak spurious __start_mcextable symbols into kernels
that didn't configure MCE_KERNEL_RECOVERY. Done.
Tony: New code doesn't belong in user_copy_64.S/uaccess*.h. Moved
to new .S/.h files
Elliott:Caching behavior non-optimal. Could use movntdqa or vmovntdqa
on source addresses. I didn't fix this yet. Think
of the current mcsafe_memcpy() as the first of several functions.
This one is useful for small copies (meta-data) where the overhead
of saving SSE/AVX state isn't justified.
Changes V1->V2:
0-day: Reported build errors and warnings on 32-bit systems. Fixed
0-day: Reported bloat to tinyconfig. Fixed
Boris: Suggestions to use extra macros to reduce code duplication in _ASM_*EXTABLE. Done
Boris: Re-write "tolerant==3" check to reduce indentation level. See below.
Andy: Check IP is valid before searching kernel exception tables. Done.
Andy: Explain use of BIT(63) on return value from mcsafe_memcpy(). Done (added decode macros).
Andy: Untangle mess of code in tail of do_machine_check() to make it
clear what is going on (e.g. that we only enter the ist_begin_non_atomic()
if we were called from user code, not from kernel!). Done.
Tony Luck (4):
x86: Expand exception table to allow new handling options
x86, mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception
table entries
x86, mce: Add __mcsafe_copy()
x86: Create a new synthetic cpu capability for machine check recovery
Documentation/x86/exception-tables.txt | 35 +++++++
Documentation/x86/x86_64/boot-options.txt | 2 +
arch/x86/include/asm/asm.h | 40 ++++----
arch/x86/include/asm/cpufeature.h | 1 +
arch/x86/include/asm/mce.h | 1 +
arch/x86/include/asm/string_64.h | 8 ++
arch/x86/include/asm/uaccess.h | 16 ++--
arch/x86/kernel/cpu/mcheck/mce-severity.c | 22 ++++-
arch/x86/kernel/cpu/mcheck/mce.c | 83 +++++++++-------
arch/x86/kernel/kprobes/core.c | 2 +-
arch/x86/kernel/traps.c | 6 +-
arch/x86/kernel/x8664_ksyms_64.c | 2 +
arch/x86/lib/memcpy_64.S | 151 ++++++++++++++++++++++++++++++
arch/x86/mm/extable.c | 100 ++++++++++++++------
arch/x86/mm/fault.c | 2 +-
scripts/sortextable.c | 32 +++++++
16 files changed, 410 insertions(+), 93 deletions(-)
--
2.5.0
* [PATCH v11 1/4] x86: Expand exception table to allow new handling options
2016-02-11 21:34 [PATCH v11 0/4] Machine check recovery when kernel accesses poison Tony Luck
@ 2016-02-11 21:34 ` Tony Luck
2016-02-11 21:34 ` [PATCH v11 2/4] x86, mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception table entries Tony Luck
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-11 21:34 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
Huge amounts of help from Andy Lutomirski and Borislav Petkov to
produce this. Andy provided the inspiration to add classes to the
exception table with a clever bit-squeezing trick, Boris pointed
out how much cleaner it would all be if we just had a new field.
Linus Torvalds blessed the expansion with:
I'd rather not be clever in order to save just a tiny amount of space
in the exception table, which isn't really critical for anybody.
The third field is another relative function pointer, this one to a
handler that executes the actions.
We start out with three handlers:
1: Legacy - just jumps to the fixup IP
2: Fault - provide the trap number in %ax to the fixup code
3: Cleaned up legacy for the uaccess error hack
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
Documentation/x86/exception-tables.txt | 35 ++++++++++++
arch/x86/include/asm/asm.h | 40 +++++++------
arch/x86/include/asm/uaccess.h | 16 +++---
arch/x86/kernel/kprobes/core.c | 2 +-
arch/x86/kernel/traps.c | 6 +-
arch/x86/mm/extable.c | 100 ++++++++++++++++++++++++---------
arch/x86/mm/fault.c | 2 +-
scripts/sortextable.c | 32 +++++++++++
8 files changed, 176 insertions(+), 57 deletions(-)
diff --git a/Documentation/x86/exception-tables.txt b/Documentation/x86/exception-tables.txt
index 32901aa36f0a..fed18187a8b8 100644
--- a/Documentation/x86/exception-tables.txt
+++ b/Documentation/x86/exception-tables.txt
@@ -290,3 +290,38 @@ Due to the way that the exception table is built and needs to be ordered,
only use exceptions for code in the .text section. Any other section
will cause the exception table to not be sorted correctly, and the
exceptions will fail.
+
+Things changed when 64-bit support was added to x86 Linux. Rather than
+double the size of the exception table by expanding the two entries
+from 32 bits to 64 bits, a clever trick was used to store addresses
+as relative offsets from the table itself. The assembly code changed
+from:
+ .long 1b,3b
+to:
+ .long (from) - .
+ .long (to) - .
+
+and the C-code that uses these values converts back to absolute addresses
+like this:
+
+ ex_insn_addr(const struct exception_table_entry *x)
+ {
+ return (unsigned long)&x->insn + x->insn;
+ }
+
+In v4.5 the exception table entry was given a new field "handler".
+This is also 32-bits wide and contains a third relative function
+pointer which points to one of:
+
+1) int ex_handler_default(const struct exception_table_entry *fixup)
+ This is the legacy case that just jumps to the fixup code
+2) int ex_handler_fault(const struct exception_table_entry *fixup)
+ This case provides the fault number of the trap that occurred at
+ entry->insn. It is used to distinguish page faults from machine
+ checks.
+3) int ex_handler_ext(const struct exception_table_entry *fixup)
+ This case is used for uaccess_err ... we need to set a flag
+ in the task structure. Before the handler functions existed this
+ case was handled by adding a large offset to the fixup to tag
+ it as special.
+More functions can easily be added.
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 189679aba703..f5063b6659eb 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -44,19 +44,22 @@
/* Exception table entry */
#ifdef __ASSEMBLY__
-# define _ASM_EXTABLE(from,to) \
+# define _ASM_EXTABLE_HANDLE(from, to, handler) \
.pushsection "__ex_table","a" ; \
- .balign 8 ; \
+ .balign 4 ; \
.long (from) - . ; \
.long (to) - . ; \
+ .long (handler) - . ; \
.popsection
-# define _ASM_EXTABLE_EX(from,to) \
- .pushsection "__ex_table","a" ; \
- .balign 8 ; \
- .long (from) - . ; \
- .long (to) - . + 0x7ffffff0 ; \
- .popsection
+# define _ASM_EXTABLE(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
+
+# define _ASM_EXTABLE_FAULT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
+
+# define _ASM_EXTABLE_EX(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
# define _ASM_NOKPROBE(entry) \
.pushsection "_kprobe_blacklist","aw" ; \
@@ -89,19 +92,24 @@
.endm
#else
-# define _ASM_EXTABLE(from,to) \
+# define _EXPAND_EXTABLE_HANDLE(x) #x
+# define _ASM_EXTABLE_HANDLE(from, to, handler) \
" .pushsection \"__ex_table\",\"a\"\n" \
- " .balign 8\n" \
+ " .balign 4\n" \
" .long (" #from ") - .\n" \
" .long (" #to ") - .\n" \
+ " .long (" _EXPAND_EXTABLE_HANDLE(handler) ") - .\n" \
" .popsection\n"
-# define _ASM_EXTABLE_EX(from,to) \
- " .pushsection \"__ex_table\",\"a\"\n" \
- " .balign 8\n" \
- " .long (" #from ") - .\n" \
- " .long (" #to ") - . + 0x7ffffff0\n" \
- " .popsection\n"
+# define _ASM_EXTABLE(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
+
+# define _ASM_EXTABLE_FAULT(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
+
+# define _ASM_EXTABLE_EX(from, to) \
+ _ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
+
/* For C file, we already have NOKPROBE_SYMBOL macro */
#endif
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a4a30e4b2d34..c0f27d7ea7ff 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -90,12 +90,11 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
likely(!__range_not_ok(addr, size, user_addr_max()))
/*
- * The exception table consists of pairs of addresses relative to the
- * exception table enty itself: the first is the address of an
- * instruction that is allowed to fault, and the second is the address
- * at which the program should continue. No registers are modified,
- * so it is entirely up to the continuation code to figure out what to
- * do.
+ * The exception table consists of triples of addresses relative to the
+ * exception table entry itself. The first address is of an instruction
+ * that is allowed to fault, the second is the target at which the program
+ * should continue. The third is a handler function to deal with the fault
+ * caused by the instruction in the first field.
*
* All the routines below use bits of fixup code that are out of line
* with the main instruction path. This means when everything is well,
@@ -104,13 +103,14 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
*/
struct exception_table_entry {
- int insn, fixup;
+ int insn, fixup, handler;
};
/* This is not the generic standard exception_table_entry format */
#define ARCH_HAS_SORT_EXTABLE
#define ARCH_HAS_SEARCH_EXTABLE
-extern int fixup_exception(struct pt_regs *regs);
+extern int fixup_exception(struct pt_regs *regs, int trapnr);
+extern bool ex_has_fault_handler(unsigned long ip);
extern int early_fixup_exception(unsigned long *ip);
/*
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 1deffe6cc873..0f05deeff5ce 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -988,7 +988,7 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
* In case the user-specified fault handler returned
* zero, try to fix up.
*/
- if (fixup_exception(regs))
+ if (fixup_exception(regs, trapnr))
return 1;
/*
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index ade185a46b1d..211c11c7bba4 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -199,7 +199,7 @@ do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
}
if (!user_mode(regs)) {
- if (!fixup_exception(regs)) {
+ if (!fixup_exception(regs, trapnr)) {
tsk->thread.error_code = error_code;
tsk->thread.trap_nr = trapnr;
die(str, regs, error_code);
@@ -453,7 +453,7 @@ do_general_protection(struct pt_regs *regs, long error_code)
tsk = current;
if (!user_mode(regs)) {
- if (fixup_exception(regs))
+ if (fixup_exception(regs, X86_TRAP_GP))
return;
tsk->thread.error_code = error_code;
@@ -699,7 +699,7 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
conditional_sti(regs);
if (!user_mode(regs)) {
- if (!fixup_exception(regs)) {
+ if (!fixup_exception(regs, trapnr)) {
task->thread.error_code = error_code;
task->thread.trap_nr = trapnr;
die(str, regs, error_code);
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 903ec1e9c326..9dd7e4b7fcde 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -3,6 +3,9 @@
#include <linux/sort.h>
#include <asm/uaccess.h>
+typedef bool (*ex_handler_t)(const struct exception_table_entry *,
+ struct pt_regs *, int);
+
static inline unsigned long
ex_insn_addr(const struct exception_table_entry *x)
{
@@ -13,11 +16,56 @@ ex_fixup_addr(const struct exception_table_entry *x)
{
return (unsigned long)&x->fixup + x->fixup;
}
+static inline ex_handler_t
+ex_fixup_handler(const struct exception_table_entry *x)
+{
+ return (ex_handler_t)((unsigned long)&x->handler + x->handler);
+}
-int fixup_exception(struct pt_regs *regs)
+bool ex_handler_default(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
{
- const struct exception_table_entry *fixup;
- unsigned long new_ip;
+ regs->ip = ex_fixup_addr(fixup);
+ return true;
+}
+EXPORT_SYMBOL(ex_handler_default);
+
+bool ex_handler_fault(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ regs->ip = ex_fixup_addr(fixup);
+ regs->ax = trapnr;
+ return true;
+}
+EXPORT_SYMBOL_GPL(ex_handler_fault);
+
+bool ex_handler_ext(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ /* Special hack for uaccess_err */
+ current_thread_info()->uaccess_err = 1;
+ regs->ip = ex_fixup_addr(fixup);
+ return true;
+}
+EXPORT_SYMBOL(ex_handler_ext);
+
+bool ex_has_fault_handler(unsigned long ip)
+{
+ const struct exception_table_entry *e;
+ ex_handler_t handler;
+
+ e = search_exception_tables(ip);
+ if (!e)
+ return false;
+ handler = ex_fixup_handler(e);
+
+ return handler == ex_handler_fault;
+}
+
+int fixup_exception(struct pt_regs *regs, int trapnr)
+{
+ const struct exception_table_entry *e;
+ ex_handler_t handler;
#ifdef CONFIG_PNPBIOS
if (unlikely(SEGMENT_IS_PNP_CODE(regs->cs))) {
@@ -33,42 +81,34 @@ int fixup_exception(struct pt_regs *regs)
}
#endif
- fixup = search_exception_tables(regs->ip);
- if (fixup) {
- new_ip = ex_fixup_addr(fixup);
-
- if (fixup->fixup - fixup->insn >= 0x7ffffff0 - 4) {
- /* Special hack for uaccess_err */
- current_thread_info()->uaccess_err = 1;
- new_ip -= 0x7ffffff0;
- }
- regs->ip = new_ip;
- return 1;
- }
+ e = search_exception_tables(regs->ip);
+ if (!e)
+ return 0;
- return 0;
+ handler = ex_fixup_handler(e);
+ return handler(e, regs, trapnr);
}
/* Restricted version used during very early boot */
int __init early_fixup_exception(unsigned long *ip)
{
- const struct exception_table_entry *fixup;
+ const struct exception_table_entry *e;
unsigned long new_ip;
+ ex_handler_t handler;
- fixup = search_exception_tables(*ip);
- if (fixup) {
- new_ip = ex_fixup_addr(fixup);
+ e = search_exception_tables(*ip);
+ if (!e)
+ return 0;
- if (fixup->fixup - fixup->insn >= 0x7ffffff0 - 4) {
- /* uaccess handling not supported during early boot */
- return 0;
- }
+ new_ip = ex_fixup_addr(e);
+ handler = ex_fixup_handler(e);
- *ip = new_ip;
- return 1;
- }
+ /* special handling not supported during early boot */
+ if (handler != ex_handler_default)
+ return 0;
- return 0;
+ *ip = new_ip;
+ return 1;
}
/*
@@ -133,6 +173,8 @@ void sort_extable(struct exception_table_entry *start,
i += 4;
p->fixup += i;
i += 4;
+ p->handler += i;
+ i += 4;
}
sort(start, finish - start, sizeof(struct exception_table_entry),
@@ -145,6 +187,8 @@ void sort_extable(struct exception_table_entry *start,
i += 4;
p->fixup -= i;
i += 4;
+ p->handler -= i;
+ i += 4;
}
}
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index eef44d9a3f77..495946c3f9dd 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -656,7 +656,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
int sig;
/* Are we prepared to handle this kernel fault? */
- if (fixup_exception(regs)) {
+ if (fixup_exception(regs, X86_TRAP_PF)) {
/*
* Any interrupt that takes a fault gets the fixup. This makes
* the below recursive fault logic only apply to a faults from
diff --git a/scripts/sortextable.c b/scripts/sortextable.c
index c2423d913b46..7b29fb14f870 100644
--- a/scripts/sortextable.c
+++ b/scripts/sortextable.c
@@ -209,6 +209,35 @@ static int compare_relative_table(const void *a, const void *b)
return 0;
}
+static void x86_sort_relative_table(char *extab_image, int image_size)
+{
+ int i;
+
+ i = 0;
+ while (i < image_size) {
+ uint32_t *loc = (uint32_t *)(extab_image + i);
+
+ w(r(loc) + i, loc);
+ w(r(loc + 1) + i + 4, loc + 1);
+ w(r(loc + 2) + i + 8, loc + 2);
+
+ i += sizeof(uint32_t) * 3;
+ }
+
+ qsort(extab_image, image_size / 12, 12, compare_relative_table);
+
+ i = 0;
+ while (i < image_size) {
+ uint32_t *loc = (uint32_t *)(extab_image + i);
+
+ w(r(loc) - i, loc);
+ w(r(loc + 1) - (i + 4), loc + 1);
+ w(r(loc + 2) - (i + 8), loc + 2);
+
+ i += sizeof(uint32_t) * 3;
+ }
+}
+
static void sort_relative_table(char *extab_image, int image_size)
{
int i;
@@ -281,6 +310,9 @@ do_file(char const *const fname)
break;
case EM_386:
case EM_X86_64:
+ custom_sort = x86_sort_relative_table;
+ break;
+
case EM_S390:
custom_sort = sort_relative_table;
break;
--
2.5.0
* [PATCH v11 2/4] x86, mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception table entries
2016-02-11 21:34 [PATCH v11 0/4] Machine check recovery when kernel accesses poison Tony Luck
2016-02-11 21:34 ` [PATCH v11 1/4] x86: Expand exception table to allow new handling options Tony Luck
@ 2016-02-11 21:34 ` Tony Luck
2016-02-11 21:34 ` [PATCH v11 3/4] x86, mce: Add __mcsafe_copy() Tony Luck
` (2 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-11 21:34 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
Extend the severity checking code to add a new context IN_KERNEL_RECOV
which is used to indicate that the machine check was triggered by code
in the kernel tagged with _ASM_EXTABLE_FAULT() so that the ex_handler_fault()
handler will provide the fixup code with the trap number.
Major re-work of the tail code in do_machine_check() to make all this
readable/maintainable. One functional change: tolerant=3 no longer
stops recovery actions; it now only skips sending SIGBUS to the
current process.
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
arch/x86/kernel/cpu/mcheck/mce-severity.c | 22 +++++++++-
arch/x86/kernel/cpu/mcheck/mce.c | 70 ++++++++++++++++---------------
2 files changed, 56 insertions(+), 36 deletions(-)
diff --git a/arch/x86/kernel/cpu/mcheck/mce-severity.c b/arch/x86/kernel/cpu/mcheck/mce-severity.c
index 9c682c222071..5119766d9889 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-severity.c
+++ b/arch/x86/kernel/cpu/mcheck/mce-severity.c
@@ -14,6 +14,7 @@
#include <linux/init.h>
#include <linux/debugfs.h>
#include <asm/mce.h>
+#include <asm/uaccess.h>
#include "mce-internal.h"
@@ -29,7 +30,7 @@
* panic situations)
*/
-enum context { IN_KERNEL = 1, IN_USER = 2 };
+enum context { IN_KERNEL = 1, IN_USER = 2, IN_KERNEL_RECOV = 3 };
enum ser { SER_REQUIRED = 1, NO_SER = 2 };
enum exception { EXCP_CONTEXT = 1, NO_EXCP = 2 };
@@ -48,6 +49,7 @@ static struct severity {
#define MCESEV(s, m, c...) { .sev = MCE_ ## s ## _SEVERITY, .msg = m, ## c }
#define KERNEL .context = IN_KERNEL
#define USER .context = IN_USER
+#define KERNEL_RECOV .context = IN_KERNEL_RECOV
#define SER .ser = SER_REQUIRED
#define NOSER .ser = NO_SER
#define EXCP .excp = EXCP_CONTEXT
@@ -87,6 +89,10 @@ static struct severity {
EXCP, KERNEL, MCGMASK(MCG_STATUS_RIPV, 0)
),
MCESEV(
+ PANIC, "In kernel and no restart IP",
+ EXCP, KERNEL_RECOV, MCGMASK(MCG_STATUS_RIPV, 0)
+ ),
+ MCESEV(
DEFERRED, "Deferred error",
NOSER, MASK(MCI_STATUS_UC|MCI_STATUS_DEFERRED|MCI_STATUS_POISON, MCI_STATUS_DEFERRED)
),
@@ -123,6 +129,11 @@ static struct severity {
MCGMASK(MCG_STATUS_RIPV|MCG_STATUS_EIPV, MCG_STATUS_RIPV)
),
MCESEV(
+ AR, "Action required: data load in error recoverable area of kernel",
+ SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA),
+ KERNEL_RECOV
+ ),
+ MCESEV(
AR, "Action required: data load error in a user process",
SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA),
USER
@@ -170,6 +181,9 @@ static struct severity {
) /* always matches. keep at end */
};
+#define mc_recoverable(mcg) (((mcg) & (MCG_STATUS_RIPV|MCG_STATUS_EIPV)) == \
+ (MCG_STATUS_RIPV|MCG_STATUS_EIPV))
+
/*
* If mcgstatus indicated that ip/cs on the stack were
* no good, then "m->cs" will be zero and we will have
@@ -183,7 +197,11 @@ static struct severity {
*/
static int error_context(struct mce *m)
{
- return ((m->cs & 3) == 3) ? IN_USER : IN_KERNEL;
+ if ((m->cs & 3) == 3)
+ return IN_USER;
+ if (mc_recoverable(m->mcgstatus) && ex_has_fault_handler(m->ip))
+ return IN_KERNEL_RECOV;
+ return IN_KERNEL;
}
/*
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index a006f4cd792b..905f3070f412 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -961,6 +961,20 @@ static void mce_clear_state(unsigned long *toclear)
}
}
+static int do_memory_failure(struct mce *m)
+{
+ int flags = MF_ACTION_REQUIRED;
+ int ret;
+
+ pr_err("Uncorrected hardware memory error in user-access at %llx", m->addr);
+ if (!(m->mcgstatus & MCG_STATUS_RIPV))
+ flags |= MF_MUST_KILL;
+ ret = memory_failure(m->addr >> PAGE_SHIFT, MCE_VECTOR, flags);
+ if (ret)
+ pr_err("Memory error not recovered");
+ return ret;
+}
+
/*
* The actual machine check handler. This only handles real
* exceptions when something got corrupted coming in through int 18.
@@ -998,8 +1012,6 @@ void do_machine_check(struct pt_regs *regs, long error_code)
DECLARE_BITMAP(toclear, MAX_NR_BANKS);
DECLARE_BITMAP(valid_banks, MAX_NR_BANKS);
char *msg = "Unknown";
- u64 recover_paddr = ~0ull;
- int flags = MF_ACTION_REQUIRED;
int lmce = 0;
/* If this CPU is offline, just bail out. */
@@ -1136,22 +1148,13 @@ void do_machine_check(struct pt_regs *regs, long error_code)
}
/*
- * At insane "tolerant" levels we take no action. Otherwise
- * we only die if we have no other choice. For less serious
- * issues we try to recover, or limit damage to the current
- * process.
+ * If tolerant is at an insane level we drop requests to kill
+ * processes and continue even when there is no way out.
*/
- if (cfg->tolerant < 3) {
- if (no_way_out)
- mce_panic("Fatal machine check on current CPU", &m, msg);
- if (worst == MCE_AR_SEVERITY) {
- recover_paddr = m.addr;
- if (!(m.mcgstatus & MCG_STATUS_RIPV))
- flags |= MF_MUST_KILL;
- } else if (kill_it) {
- force_sig(SIGBUS, current);
- }
- }
+ if (cfg->tolerant == 3)
+ kill_it = 0;
+ else if (no_way_out)
+ mce_panic("Fatal machine check on current CPU", &m, msg);
if (worst > 0)
mce_report_event(regs);
@@ -1159,25 +1162,24 @@ void do_machine_check(struct pt_regs *regs, long error_code)
out:
sync_core();
- if (recover_paddr == ~0ull)
- goto done;
+ if (worst != MCE_AR_SEVERITY && !kill_it)
+ goto out_ist;
- pr_err("Uncorrected hardware memory error in user-access at %llx",
- recover_paddr);
- /*
- * We must call memory_failure() here even if the current process is
- * doomed. We still need to mark the page as poisoned and alert any
- * other users of the page.
- */
- ist_begin_non_atomic(regs);
- local_irq_enable();
- if (memory_failure(recover_paddr >> PAGE_SHIFT, MCE_VECTOR, flags) < 0) {
- pr_err("Memory error not recovered");
- force_sig(SIGBUS, current);
+ /* Fault was in user mode and we need to take some action */
+ if ((m.cs & 3) == 3) {
+ ist_begin_non_atomic(regs);
+ local_irq_enable();
+
+ if (kill_it || do_memory_failure(&m))
+ force_sig(SIGBUS, current);
+ local_irq_disable();
+ ist_end_non_atomic();
+ } else {
+ if (!fixup_exception(regs, X86_TRAP_MC))
+ mce_panic("Failed kernel mode recovery", &m, NULL);
}
- local_irq_disable();
- ist_end_non_atomic();
-done:
+
+out_ist:
ist_exit(regs);
}
EXPORT_SYMBOL_GPL(do_machine_check);
--
2.5.0
* [PATCH v11 3/4] x86, mce: Add __mcsafe_copy()
2016-02-11 21:34 [PATCH v11 0/4] Machine check recovery when kernel accesses poison Tony Luck
2016-02-11 21:34 ` [PATCH v11 1/4] x86: Expand exception table to allow new handling options Tony Luck
2016-02-11 21:34 ` [PATCH v11 2/4] x86, mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception table entries Tony Luck
@ 2016-02-11 21:34 ` Tony Luck
2016-02-11 21:34 ` [PATCH v11 4/4] x86: Create a new synthetic cpu capability for machine check recovery Tony Luck
2016-02-11 22:02 ` [PATCH v11 0/4] Machine check recovery when kernel accesses poison Borislav Petkov
4 siblings, 0 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-11 21:34 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
Make use of the EXTABLE_FAULT exception table entries. This routine
returns a structure to indicate the result of the copy:
struct mcsafe_ret {
u64 trapnr;
u64 remain;
};
If the copy is successful, then both 'trapnr' and 'remain' are zero.
If we faulted during the copy, then 'trapnr' will say which type
of trap (X86_TRAP_PF or X86_TRAP_MC) and 'remain' says how many
bytes were not copied.
Note that this is probably the first of several copy functions.
We can make new ones for non-temporal cache handling etc.
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
arch/x86/include/asm/string_64.h | 8 +++
arch/x86/kernel/x8664_ksyms_64.c | 2 +
arch/x86/lib/memcpy_64.S | 151 +++++++++++++++++++++++++++++++++++++++
3 files changed, 161 insertions(+)
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index ff8b9a17dc4b..5b24039463a4 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -78,6 +78,14 @@ int strcmp(const char *cs, const char *ct);
#define memset(s, c, n) __memset(s, c, n)
#endif
+struct mcsafe_ret {
+ u64 trapnr;
+ u64 remain;
+};
+
+struct mcsafe_ret __mcsafe_copy(void *dst, const void __user *src, size_t cnt);
+extern void __mcsafe_copy_end(void);
+
#endif /* __KERNEL__ */
#endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/kernel/x8664_ksyms_64.c b/arch/x86/kernel/x8664_ksyms_64.c
index a0695be19864..fff245462a8c 100644
--- a/arch/x86/kernel/x8664_ksyms_64.c
+++ b/arch/x86/kernel/x8664_ksyms_64.c
@@ -37,6 +37,8 @@ EXPORT_SYMBOL(__copy_user_nocache);
EXPORT_SYMBOL(_copy_from_user);
EXPORT_SYMBOL(_copy_to_user);
+EXPORT_SYMBOL_GPL(__mcsafe_copy);
+
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(clear_page);
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 16698bba87de..7f967a9ed0e4 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -177,3 +177,154 @@ ENTRY(memcpy_orig)
.Lend:
retq
ENDPROC(memcpy_orig)
+
+#ifndef CONFIG_UML
+/*
+ * __mcsafe_copy - memory copy with machine check exception handling
+ * Note that we only catch machine checks when reading the source addresses.
+ * Writes to target are posted and don't generate machine checks.
+ */
+ENTRY(__mcsafe_copy)
+ cmpl $8,%edx
+ jb 20f /* less than 8 bytes, go to byte copy loop */
+
+ /* check for bad alignment of source */
+ testl $7,%esi
+ /* already aligned */
+ jz 102f
+
+ /* copy one byte at a time until source is 8-byte aligned */
+ movl %esi,%ecx
+ andl $7,%ecx
+ subl $8,%ecx
+ negl %ecx
+ subl %ecx,%edx
+0: movb (%rsi),%al
+ movb %al,(%rdi)
+ incq %rsi
+ incq %rdi
+ decl %ecx
+ jnz 0b
+
+102:
+ /* Figure out how many whole cache lines (64-bytes) to copy */
+ movl %edx,%ecx
+ andl $63,%edx
+ shrl $6,%ecx
+ jz 17f
+
+ /* Loop copying whole cache lines */
+1: movq (%rsi),%r8
+2: movq 1*8(%rsi),%r9
+3: movq 2*8(%rsi),%r10
+4: movq 3*8(%rsi),%r11
+ movq %r8,(%rdi)
+ movq %r9,1*8(%rdi)
+ movq %r10,2*8(%rdi)
+ movq %r11,3*8(%rdi)
+9: movq 4*8(%rsi),%r8
+10: movq 5*8(%rsi),%r9
+11: movq 6*8(%rsi),%r10
+12: movq 7*8(%rsi),%r11
+ movq %r8,4*8(%rdi)
+ movq %r9,5*8(%rdi)
+ movq %r10,6*8(%rdi)
+ movq %r11,7*8(%rdi)
+ leaq 64(%rsi),%rsi
+ leaq 64(%rdi),%rdi
+ decl %ecx
+ jnz 1b
+
+ /* Are there any trailing 8-byte words? */
+17: movl %edx,%ecx
+ andl $7,%edx
+ shrl $3,%ecx
+ jz 20f
+
+ /* Copy trailing words */
+18: movq (%rsi),%r8
+ mov %r8,(%rdi)
+ leaq 8(%rsi),%rsi
+ leaq 8(%rdi),%rdi
+ decl %ecx
+ jnz 18b
+
+ /* Any trailing bytes? */
+20: andl %edx,%edx
+ jz 23f
+
+ /* copy trailing bytes */
+ movl %edx,%ecx
+21: movb (%rsi),%al
+ movb %al,(%rdi)
+ incq %rsi
+ incq %rdi
+ decl %ecx
+ jnz 21b
+
+ /* Copy successful. Return .remain = 0, .trapnr = 0 */
+23: xorq %rax, %rax
+ xorq %rdx, %rdx
+ ret
+
+ .section .fixup,"ax"
+ /*
+ * The machine check handler loaded %rax with the trap
+ * number. We just need to make sure %edx has the number
+ * of bytes remaining.
+ */
+30:
+ add %ecx,%edx
+ ret
+31:
+ shl $6,%ecx
+ add %ecx,%edx
+ ret
+32:
+ shl $6,%ecx
+ lea -8(%ecx,%edx),%edx
+ ret
+33:
+ shl $6,%ecx
+ lea -16(%ecx,%edx),%edx
+ ret
+34:
+ shl $6,%ecx
+ lea -24(%ecx,%edx),%edx
+ ret
+35:
+ shl $6,%ecx
+ lea -32(%ecx,%edx),%edx
+ ret
+36:
+ shl $6,%ecx
+ lea -40(%ecx,%edx),%edx
+ ret
+37:
+ shl $6,%ecx
+ lea -48(%ecx,%edx),%edx
+ ret
+38:
+ shl $6,%ecx
+ lea -56(%ecx,%edx),%edx
+ ret
+39:
+ lea (%rdx,%rcx,8),%rdx
+ ret
+40:
+ mov %ecx,%edx
+ ret
+ .previous
+
+ _ASM_EXTABLE_FAULT(0b,30b)
+ _ASM_EXTABLE_FAULT(1b,31b)
+ _ASM_EXTABLE_FAULT(2b,32b)
+ _ASM_EXTABLE_FAULT(3b,33b)
+ _ASM_EXTABLE_FAULT(4b,34b)
+ _ASM_EXTABLE_FAULT(9b,35b)
+ _ASM_EXTABLE_FAULT(10b,36b)
+ _ASM_EXTABLE_FAULT(11b,37b)
+ _ASM_EXTABLE_FAULT(12b,38b)
+ _ASM_EXTABLE_FAULT(18b,39b)
+ _ASM_EXTABLE_FAULT(21b,40b)
+#endif
--
2.5.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v11 4/4] x86: Create a new synthetic cpu capability for machine check recovery
2016-02-11 21:34 [PATCH v11 0/4] Machine check recovery when kernel accesses poison Tony Luck
` (2 preceding siblings ...)
2016-02-11 21:34 ` [PATCH v11 3/4] x86, mce: Add __mcsafe_copy() Tony Luck
@ 2016-02-11 21:34 ` Tony Luck
2016-02-11 22:02 ` [PATCH v11 0/4] Machine check recovery when kernel accesses poison Borislav Petkov
4 siblings, 0 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-11 21:34 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
The Intel Software Developer Manual describes bit 24 in the MCG_CAP
MSR:
MCG_SER_P (software error recovery support present) flag,
bit 24 — Indicates (when set) that the processor supports
software error recovery
But only some models with this capability bit set will actually
generate recoverable machine checks.
Check the model name and set a synthetic capability bit. Provide
a command line option to set this bit anyway in case the kernel
doesn't recognise the model name.
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
Documentation/x86/x86_64/boot-options.txt | 2 ++
arch/x86/include/asm/cpufeature.h | 1 +
arch/x86/include/asm/mce.h | 1 +
arch/x86/kernel/cpu/mcheck/mce.c | 13 +++++++++++++
4 files changed, 17 insertions(+)
diff --git a/Documentation/x86/x86_64/boot-options.txt b/Documentation/x86/x86_64/boot-options.txt
index 68ed3114c363..0965a71f9942 100644
--- a/Documentation/x86/x86_64/boot-options.txt
+++ b/Documentation/x86/x86_64/boot-options.txt
@@ -60,6 +60,8 @@ Machine check
threshold to 1. Enabling this may make memory predictive failure
analysis less effective if the bios sets thresholds for memory
errors since we will not see details for all errors.
+ mce=recovery
+ Force-enable recoverable machine check code paths
nomce (for compatibility with i386): same as mce=off
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 7ad8c9464297..06c6c2d2fea0 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -106,6 +106,7 @@
#define X86_FEATURE_APERFMPERF ( 3*32+28) /* APERFMPERF */
#define X86_FEATURE_EAGER_FPU ( 3*32+29) /* "eagerfpu" Non lazy FPU restore */
#define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* TSC doesn't stop in S3 state */
+#define X86_FEATURE_MCE_RECOVERY ( 3*32+31) /* cpu has recoverable machine checks */
/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
#define X86_FEATURE_XMM3 ( 4*32+ 0) /* "pni" SSE-3 */
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 2ea4527e462f..18d2ba9c8e44 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -113,6 +113,7 @@ struct mca_config {
bool ignore_ce;
bool disabled;
bool ser;
+ bool recovery;
bool bios_cmci_threshold;
u8 banks;
s8 bootlog;
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 905f3070f412..15ff6f07bd92 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1578,6 +1578,17 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
if (c->x86 == 6 && c->x86_model == 45)
quirk_no_way_out = quirk_sandybridge_ifu;
+ /*
+ * MCG_CAP.MCG_SER_P is necessary but not sufficient to know
+ * whether this processor will actually generate recoverable
+ * machine checks. Check to see if this is an E7 model Xeon.
+ * We can't do a model number check because E5 and E7 use the
+ * same model number. E5 doesn't support recovery, E7 does.
+ */
+ if (mca_cfg.recovery || (mca_cfg.ser &&
+ !strncmp(c->x86_model_id,
+ "Intel(R) Xeon(R) CPU E7-", 24)))
+ set_cpu_cap(c, X86_FEATURE_MCE_RECOVERY);
}
if (cfg->monarch_timeout < 0)
cfg->monarch_timeout = 0;
@@ -2030,6 +2041,8 @@ static int __init mcheck_enable(char *str)
cfg->bootlog = (str[0] == 'b');
else if (!strcmp(str, "bios_cmci_threshold"))
cfg->bios_cmci_threshold = true;
+ else if (!strcmp(str, "recovery"))
+ cfg->recovery = true;
else if (isdigit(str[0])) {
if (get_option(&str, &cfg->tolerant) == 2)
get_option(&str, &(cfg->monarch_timeout));
--
2.5.0
* Re: [PATCH v11 0/4] Machine check recovery when kernel accesses poison
2016-02-11 21:34 [PATCH v11 0/4] Machine check recovery when kernel accesses poison Tony Luck
` (3 preceding siblings ...)
2016-02-11 21:34 ` [PATCH v11 4/4] x86: Create a new synthetic cpu capability for machine check recovery Tony Luck
@ 2016-02-11 22:02 ` Borislav Petkov
2016-02-11 22:16 ` Luck, Tony
4 siblings, 1 reply; 9+ messages in thread
From: Borislav Petkov @ 2016-02-11 22:02 UTC (permalink / raw)
To: Tony Luck
Cc: Ingo Molnar, Andrew Morton, Andy Lutomirski, Dan Williams,
elliott, Brian Gerst, linux-kernel, linux-mm, linux-nvdimm, x86
On Thu, Feb 11, 2016 at 01:34:10PM -0800, Tony Luck wrote:
> This series is initially targeted at the folks doing filesystems
> on top of NVDIMMs. They really want to be able to return -EIO
> when there is a h/w error (just like spinning rust, and SSD does).
>
> I plan to use the same infrastructure to write a machine check aware
> "copy_from_user()" that will SIGBUS the calling application when a
> syscall touches poison in user space (just like we do when the application
> touches the poison itself).
>
> I've dropped off the "reviewed-by" tags that I collected back prior to
> adding the new field to the exception table. Please send new ones
> if you can.
>
> Changes
That's some changelog, I tell ya. Well, it took us long enough so for
all 4:
Reviewed-by: Borislav Petkov <bp@suse.de>
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.
* RE: [PATCH v11 0/4] Machine check recovery when kernel accesses poison
2016-02-11 22:02 ` [PATCH v11 0/4] Machine check recovery when kernel accesses poison Borislav Petkov
@ 2016-02-11 22:16 ` Luck, Tony
2016-02-11 22:33 ` Borislav Petkov
0 siblings, 1 reply; 9+ messages in thread
From: Luck, Tony @ 2016-02-11 22:16 UTC (permalink / raw)
To: Ingo Molnar
Cc: Borislav Petkov, Andrew Morton, Andy Lutomirski, Williams, Dan J,
elliott@hpe.com, Brian Gerst, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-nvdimm@ml01.01.org, x86@kernel.org
> That's some changelog, I tell ya. Well, it took us long enough so for all 4:
I'll see if Peter Jackson wants to turn it into a series of movies.
> Reviewed-by: Borislav Petkov <bp@suse.de>
Ingo: Boris is happy ... your turn to find things for me to fix (or is it ready for 4.6 now??)
-Tony
* Re: [PATCH v11 0/4] Machine check recovery when kernel accesses poison
2016-02-11 22:16 ` Luck, Tony
@ 2016-02-11 22:33 ` Borislav Petkov
0 siblings, 0 replies; 9+ messages in thread
From: Borislav Petkov @ 2016-02-11 22:33 UTC (permalink / raw)
To: Luck, Tony
Cc: Ingo Molnar, Andrew Morton, Andy Lutomirski, Williams, Dan J,
elliott@hpe.com, Brian Gerst, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-nvdimm@ml01.01.org, x86@kernel.org
On Thu, Feb 11, 2016 at 10:16:56PM +0000, Luck, Tony wrote:
> > That's some changelog, I tell ya. Well, it took us long enough so for all 4:
>
> I'll see if Peter Jackson wants to turn it into a series of movies.
LOL. A passing title might be "The Fellowship of the MCA"!
:-)
--
Regards/Gruss,
Boris.
ECO tip #101: Trim your mails when you reply.
* [PATCH v11 0/4] Machine check recovery when kernel accesses poison
@ 2016-02-17 18:20 Tony Luck
0 siblings, 0 replies; 9+ messages in thread
From: Tony Luck @ 2016-02-17 18:20 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
[Resend of v11 with Boris' "Reviewed-by" tags added. For Ingo's workflow]
-Tony
Tony Luck (4):
x86: Expand exception table to allow new handling options
x86, mce: Check for faults tagged in EXTABLE_CLASS_FAULT exception
table entries
x86, mce: Add __mcsafe_copy()
x86: Create a new synthetic cpu capability for machine check recovery
Documentation/x86/exception-tables.txt | 35 +++++++
Documentation/x86/x86_64/boot-options.txt | 2 +
arch/x86/include/asm/asm.h | 40 ++++----
arch/x86/include/asm/cpufeature.h | 1 +
arch/x86/include/asm/mce.h | 1 +
arch/x86/include/asm/string_64.h | 8 ++
arch/x86/include/asm/uaccess.h | 16 ++--
arch/x86/kernel/cpu/mcheck/mce-severity.c | 22 ++++-
arch/x86/kernel/cpu/mcheck/mce.c | 83 +++++++++-------
arch/x86/kernel/kprobes/core.c | 2 +-
arch/x86/kernel/traps.c | 6 +-
arch/x86/kernel/x8664_ksyms_64.c | 2 +
arch/x86/lib/memcpy_64.S | 151 ++++++++++++++++++++++++++++++
arch/x86/mm/extable.c | 100 ++++++++++++++------
arch/x86/mm/fault.c | 2 +-
scripts/sortextable.c | 32 +++++++
16 files changed, 410 insertions(+), 93 deletions(-)
--
2.5.0