* [PATCH] perf/x86: Further optimize copy_from_user_nmi()
@ 2015-05-19 4:08 Zhiqiang Zhang
2015-05-19 7:39 ` Peter Zijlstra
From: Zhiqiang Zhang @ 2015-05-19 4:08 UTC
To: stable, gregkh, peterz, mingo; +Cc: morgan.wang
commit e00b12e64be9a34ef071de7b6052ca9ea29dd460 upstream
Now that we can deal with nested NMI due to IRET re-enabling NMIs and
can deal with faults from NMI by making sure we preserve CR2 over NMIs
we can in fact simply access user-space memory from NMI context.
So rewrite copy_from_user_nmi() to use __copy_from_user_inatomic() and
rework the fault path to do the minimal required work before taking
the in_atomic() fault handler.
In particular avoid perf_sw_event() which would make perf recurse on
itself (it should be harmless as our recursion protections should be
able to deal with this -- but why tempt fate).
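(For readers of this backport, a simplified, illustrative sketch of the resulting ordering in __do_page_fault() follows; it is not the literal 3.10 source and omits most of the function. The in_atomic()/!mm check and bad_area_nosemaphore() call are recalled from the 3.10-era fault handler and shown only for context.)

	/*
	 * Illustrative sketch only -- not the literal kernel source.  It shows
	 * the new ordering: bail out with minimal work while atomic, and only
	 * then enable IRQs and recurse into perf.
	 */
	if (unlikely(kprobes_fault(regs)))
		return;

	if (unlikely(error_code & PF_RSVD))
		pgtable_bad(regs, error_code, address);

	/* Minimal required work if we must not take the fault: */
	if (unlikely(in_atomic() || !mm)) {
		bad_area_nosemaphore(regs, error_code, address);
		return;
	}

	/* Only now is it safe to enable IRQs and to call perf_sw_event(): */
	if (user_mode_vm(regs)) {
		local_irq_enable();
		error_code |= PF_USER;
		flags |= FAULT_FLAG_USER;
	} else {
		if (regs->flags & X86_EFLAGS_IF)
			local_irq_enable();
	}

	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);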
Also rename notify_page_fault() to kprobes_fault() as that is a much
better name; there is no notifier in it and it's specific to kprobes.
Don measured that his worst case NMI path shrunk from ~300K cycles to
~150K cycles.
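(Likewise, purely as a usage illustration and not part of the patch: a caller in NMI context -- e.g. perf copying a chunk of the user stack -- treats the return value as the number of bytes actually copied. The function and variable names below are made up for the example.)

	static void sample_user_bytes(struct pt_regs *regs)
	{
		const void __user *uptr = (const void __user *)regs->sp;
		char buf[64];
		unsigned long copied;

		/*
		 * copy_from_user_nmi() returns the number of bytes actually
		 * copied; 0 if the range is bogus or the very first byte
		 * faults with pagefaults disabled.
		 */
		copied = copy_from_user_nmi(buf, uptr, sizeof(buf));
		if (copied < sizeof(buf))
			memset(buf + copied, 0, sizeof(buf) - copied);

		/* ... emit buf[0..copied) into the sample ... */
	}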
Cc: Stephane Eranian <eranian@google.com>
Cc: jmario@redhat.com
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: dave.hansen@linux.intel.com
Tested-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131024105206.GM2490@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[zhangzhiqiang: backport to 3.10:
- adjust context
- the context around __do_page_fault() in arch/x86/mm/fault.c was modified in
mainline, so arch/x86/mm/fault.c is adjusted accordingly.
- After the above adjustments, the result matches the original upstream patch:
https://github.com/torvalds/linux/commit/
]
Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
---
arch/x86/lib/usercopy.c | 43 +++++++++++++++----------------------------
arch/x86/mm/fault.c | 41 +++++++++++++++++++++--------------------
2 files changed, 36 insertions(+), 48 deletions(-)
diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index 4f74d94..5465b86 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -11,39 +11,26 @@
#include <linux/sched.h>
/*
- * best effort, GUP based copy_from_user() that is NMI-safe
+ * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+ * nested NMI paths are careful to preserve CR2.
*/
unsigned long
copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
{
- unsigned long offset, addr = (unsigned long)from;
- unsigned long size, len = 0;
- struct page *page;
- void *map;
- int ret;
+ unsigned long ret;
if (__range_not_ok(from, n, TASK_SIZE))
- return len;
-
- do {
- ret = __get_user_pages_fast(addr, 1, 0, &page);
- if (!ret)
- break;
-
- offset = addr & (PAGE_SIZE - 1);
- size = min(PAGE_SIZE - offset, n - len);
-
- map = kmap_atomic(page);
- memcpy(to, map+offset, size);
- kunmap_atomic(map);
- put_page(page);
-
- len += size;
- to += size;
- addr += size;
-
- } while (len < n);
-
- return len;
+ return 0;
+
+ /*
+ * Even though this function is typically called from NMI/IRQ context
+ * disable pagefaults so that its behaviour is consistent even when
+ * called from other contexts.
+ */
+ pagefault_disable();
+ ret = __copy_from_user_inatomic(to, from, n);
+ pagefault_enable();
+
+ return n - ret;
}
EXPORT_SYMBOL_GPL(copy_from_user_nmi);
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index d8b1ff6..9cc2f7a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -51,7 +51,7 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
return 0;
}
-static inline int __kprobes notify_page_fault(struct pt_regs *regs)
+static inline int __kprobes kprobes_fault(struct pt_regs *regs)
{
int ret = 0;
@@ -1054,7 +1054,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
return;
/* kprobes don't want to hook the spurious faults: */
- if (notify_page_fault(regs))
+ if (kprobes_fault(regs))
return;
/*
* Don't take the mm semaphore here. If we fixup a prefetch
@@ -1066,23 +1066,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
}
/* kprobes don't want to hook the spurious faults: */
- if (unlikely(notify_page_fault(regs)))
+ if (unlikely(kprobes_fault(regs)))
return;
- /*
- * It's safe to allow irq's after cr2 has been saved and the
- * vmalloc fault has been handled.
- *
- * User-mode registers count as a user access even for any
- * potential system fault or CPU buglet:
- */
- if (user_mode_vm(regs)) {
- local_irq_enable();
- error_code |= PF_USER;
- flags |= FAULT_FLAG_USER;
- } else {
- if (regs->flags & X86_EFLAGS_IF)
- local_irq_enable();
- }
if (unlikely(error_code & PF_RSVD))
pgtable_bad(regs, error_code, address);
@@ -1092,8 +1077,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
return;
}
- perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
-
/*
* If we're in an interrupt, have no user context or are running
* in an atomic region then we must not take the fault:
@@ -1103,6 +1086,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
return;
}
+ /*
+ * It's safe to allow irq's after cr2 has been saved and the
+ * vmalloc fault has been handled.
+ *
+ * User-mode registers count as a user access even for any
+ * potential system fault or CPU buglet:
+ */
+ if (user_mode_vm(regs)) {
+ local_irq_enable();
+ error_code |= PF_USER;
+ flags |= FAULT_FLAG_USER;
+ } else {
+ if (regs->flags & X86_EFLAGS_IF)
+ local_irq_enable();
+ }
+
+ perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
if (error_code & PF_WRITE)
flags |= FAULT_FLAG_WRITE;
--
1.9.0
* Re: [PATCH] perf/x86: Further optimize copy_from_user_nmi()
2015-05-19 4:08 [PATCH] perf/x86: Further optimize copy_from_user_nmi() Zhiqiang Zhang
@ 2015-05-19 7:39 ` Peter Zijlstra
2015-05-19 11:18 ` zhangzhiqiang
From: Peter Zijlstra @ 2015-05-19 7:39 UTC
To: Zhiqiang Zhang; +Cc: stable, gregkh, mingo, morgan.wang
On Tue, May 19, 2015 at 12:08:39PM +0800, Zhiqiang Zhang wrote:
> commit e00b12e64be9a34ef071de7b6052ca9ea29dd460 upstream
>
> Now that we can deal with nested NMI due to IRET re-enabling NMIs and
> can deal with faults from NMI by making sure we preserve CR2 over NMIs
> we can in fact simply access user-space memory from NMI context.
>
> So rewrite copy_from_user_nmi() to use __copy_from_user_inatomic() and
> rework the fault path to do the minimal required work before taking
> the in_atomic() fault handler.
>
> In particular avoid perf_sw_event() which would make perf recurse on
> itself (it should be harmless as our recursion protections should be
> able to deal with this -- but why tempt fate).
>
> Also rename notify_page_fault() to kprobes_fault() as that is a much
> better name; there is no notifier in it and it's specific to kprobes.
>
> Don measured that his worst case NMI path shrunk from ~300K cycles to
> ~150K cycles.
>
> Cc: Stephane Eranian <eranian@google.com>
> Cc: jmario@redhat.com
> Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: dave.hansen@linux.intel.com
> Tested-by: Don Zickus <dzickus@redhat.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/20131024105206.GM2490@laptop.programming.kicks-ass.net
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> [zhangzhiqiang: backport to 3.10:
Did you make sure all the nested NMI fixes are in 3.10?
* Re: [PATCH] perf/x86: Further optimize copy_from_user_nmi()
2015-05-19 7:39 ` Peter Zijlstra
@ 2015-05-19 11:18 ` zhangzhiqiang
From: zhangzhiqiang @ 2015-05-19 11:18 UTC
To: Peter Zijlstra; +Cc: stable, gregkh, mingo, morgan.wang
On 2015/5/19 15:39, Peter Zijlstra wrote:
> On Tue, May 19, 2015 at 12:08:39PM +0800, Zhiqiang Zhang wrote:
>> commit e00b12e64be9a34ef071de7b6052ca9ea29dd460 upstream
>>
>> Now that we can deal with nested NMI due to IRET re-enabling NMIs and
>> can deal with faults from NMI by making sure we preserve CR2 over NMIs
>> we can in fact simply access user-space memory from NMI context.
>>
>> So rewrite copy_from_user_nmi() to use __copy_from_user_inatomic() and
>> rework the fault path to do the minimal required work before taking
>> the in_atomic() fault handler.
>>
>> In particular avoid perf_sw_event() which would make perf recurse on
>> itself (it should be harmless as our recursion protections should be
>> able to deal with this -- but why tempt fate).
>>
>> Also rename notify_page_fault() to kprobes_fault() as that is a much
>> better name; there is no notifier in it and it's specific to kprobes.
>>
>> Don measured that his worst case NMI path shrunk from ~300K cycles to
>> ~150K cycles.
>>
>> Cc: Stephane Eranian <eranian@google.com>
>> Cc: jmario@redhat.com
>> Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> Cc: Andi Kleen <ak@linux.intel.com>
>> Cc: dave.hansen@linux.intel.com
>> Tested-by: Don Zickus <dzickus@redhat.com>
>> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
>> Link: http://lkml.kernel.org/r/20131024105206.GM2490@laptop.programming.kicks-ass.net
>> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>> [zhangzhiqiang: backport to 3.10:
>
> Did you make sure all the nested NMI fixes are in 3.10?
>
> .
>
Sorry, I am not quite sure about that; I just used it to fix page faults from the PMI.
I will look into it more carefully.
Thanks very much.