From: Ingo Molnar <mingo@kernel.org>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Jason Low <jason.low2@hp.com>,
Peter Zijlstra <peterz@infradead.org>,
Davidlohr Bueso <dave@stgolabs.net>,
Tim Chen <tim.c.chen@linux.intel.com>,
Aswin Chandramouleeswaran <aswin@hp.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH] x86/uaccess: Implement get_kernel()
Date: Fri, 10 Apr 2015 13:14:27 +0200
Message-ID: <20150410111427.GA30477@gmail.com>
In-Reply-To: <20150410092152.GA21332@gmail.com>
* Ingo Molnar <mingo@kernel.org> wrote:
> * Ingo Molnar <mingo@kernel.org> wrote:
>
> > Yeah, so what I missed here are those nops: placeholders for the
> > STAC/CLAC instructions on x86... and this is what Linus mentioned
> > about the clac() overhead.
> >
> > But this could be solved I think: by adding a
> > copy_from_kernel_inatomic() primitive which simply leaves out the
> > STAC/CLAC sequence: as these are always guaranteed to be kernel
> > addresses, the SMAP fault should not be generated.
>
> So the first step would be to introduce a generic
> __copy_from_kernel_inatomic() primitive as attached below.
>
> The next patch will implement efficient __copy_from_kernel_inatomic()
> for x86.
The patch below does that. Note: for simplicity I've changed the
interface to 'get_kernel()' (I will propagate this through the other
patches as well).
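The interface mirrors get_user(): the destination is an lvalue, the
source a (kernel) pointer. A minimal usage sketch, matching the
mutex.c call site in the patch below:

	int on_cpu;

	/* One MOV plus an extable entry, no STAC/CLAC pair: */
	get_kernel(on_cpu, &owner->on_cpu);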
Minimally boot tested. The owner-spinning code looks sane now:
0000000000000030 <mutex_spin_on_owner.isra.4>:
30: 48 3b 37 cmp (%rdi),%rsi
33: 55 push %rbp
34: 48 89 e5 mov %rsp,%rbp
37: 75 4f jne 88 <mutex_spin_on_owner.isra.4+0x58>
39: 48 8d 56 28 lea 0x28(%rsi),%rdx
3d: 8b 46 28 mov 0x28(%rsi),%eax
40: 85 c0 test %eax,%eax
42: 74 3c je 80 <mutex_spin_on_owner.isra.4+0x50>
44: 65 48 8b 04 25 00 00 mov %gs:0x0,%rax
4b: 00 00
4d: 48 8b 80 10 c0 ff ff mov -0x3ff0(%rax),%rax
54: a8 08 test $0x8,%al
56: 75 28 jne 80 <mutex_spin_on_owner.isra.4+0x50>
58: 65 48 8b 0c 25 00 00 mov %gs:0x0,%rcx
5f: 00 00
61: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
68: f3 90 pause
6a: 48 3b 37 cmp (%rdi),%rsi
6d: 75 19 jne 88 <mutex_spin_on_owner.isra.4+0x58>
6f: 8b 02 mov (%rdx),%eax
71: 85 c0 test %eax,%eax
73: 74 0b je 80 <mutex_spin_on_owner.isra.4+0x50>
75: 48 8b 81 10 c0 ff ff mov -0x3ff0(%rcx),%rax
7c: a8 08 test $0x8,%al
7e: 74 e8 je 68 <mutex_spin_on_owner.isra.4+0x38>
80: 31 c0 xor %eax,%eax
82: 5d pop %rbp
83: c3 retq
84: 0f 1f 40 00 nopl 0x0(%rax)
88: b8 01 00 00 00 mov $0x1,%eax
8d: 5d pop %rbp
8e: c3 retq
8f: 90 nop
although the double unrolled need_resched() check looks silly:
4d: 48 8b 80 10 c0 ff ff mov -0x3ff0(%rax),%rax
54: a8 08 test $0x8,%al
75: 48 8b 81 10 c0 ff ff mov -0x3ff0(%rcx),%rax
7c: a8 08 test $0x8,%al
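That duplication is just GCC rotating the while () loop into an
if () + do-while (), so the on_cpu and need_resched() tests are each
emitted once ahead of the loop and once at its bottom. Roughly, as a
sketch reconstructed from the mutex.c hunk below:

	while (lock->owner == owner) {
		get_kernel(on_cpu, &owner->on_cpu);
		if (!on_cpu || need_resched())	/* emitted twice: once   */
			return false;		/* ahead of, once inside */
		cpu_relax_lowlatency();		/* the spin loop         */
	}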
Thanks,
Ingo
=============================>
From 7f5917c7f9ee3598b353adbd3d0849ecf4485748 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Fri, 10 Apr 2015 13:01:39 +0200
Subject: [PATCH] x86/uaccess: Implement get_kernel()
Implement an optimized get_kernel() primitive on x86: it avoids the
STAC/CLAC overhead. Only a 64-bit variant is offered for now; 32-bit
falls back to get_user().
Not-Yet-Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/include/asm/uaccess_64.h | 32 ++++++++++++++++++++++++++++++++
include/linux/uaccess.h | 8 ++------
kernel/locking/mutex.c | 3 +--
3 files changed, 35 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index f2f9b39b274a..3ef75f0addac 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -207,6 +207,38 @@ __copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
return __copy_from_user_nocheck(dst, src, size);
}
+
+#define __get_kernel_asm_ex(dst, src, int_type, reg_type, lh_type) \
+ asm volatile("1: mov"int_type" %1,%"reg_type"0\n" \
+ "2:\n" \
+ _ASM_EXTABLE_EX(1b, 2b) \
+ : lh_type(dst) : "m" (__m(src)))
+
+extern void __get_kernel_BUILD_ERROR(void);
+
+/*
+ * Simple copy-from-possibly-faulting-kernel-addresses method that
+ * avoids the STAC/CLAC SMAP overhead.
+ *
+ * NOTE: this does not propagate the error code of faulting kernel
+ * addresses properly. You can recover it via uaccess_catch()
+ * if you really need to.
+ */
+#define get_kernel(dst, src) \
+do { \
+ typeof(*(src)) __val; \
+ \
+ switch (sizeof(__val)) { \
+ case 1: __get_kernel_asm_ex(__val, src, "b", "b", "=q"); break; \
+ case 2: __get_kernel_asm_ex(__val, src, "w", "w", "=r"); break; \
+ case 4: __get_kernel_asm_ex(__val, src, "l", "k", "=r"); break; \
+ case 8: __get_kernel_asm_ex(__val, src, "q", " ", "=r"); break; \
+ default: __get_kernel_BUILD_ERROR(); \
+ } \
+ (dst) = __val; \
+} while (0)
+#define ARCH_HAS_GET_KERNEL
+
static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 885eea43b69f..03a025b98d2e 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -111,12 +111,8 @@ extern long notrace __probe_kernel_write(void *dst, const void *src, size_t size
* Generic wrapper, most architectures can just use __copy_from_user_inatomic()
* to implement __copy_from_kernel_inatomic():
*/
-#ifndef ARCH_HAS_COPY_FROM_KERNEL_INATOMIC
-static __must_check __always_inline int
-__copy_from_kernel_inatomic(void *dst, const void __user *src, unsigned size)
-{
- return __copy_from_user_inatomic(dst, src, size);
-}
+#ifndef ARCH_HAS_GET_KERNEL
+# define get_kernel(dst, src) get_user(dst, src)
#endif
#endif /* __LINUX_UACCESS_H__ */
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a4f74cda9fc4..68c750f4e8e8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -226,7 +226,6 @@ static noinline
bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
int on_cpu;
- int ret;
while (lock->owner == owner) {
/*
@@ -252,7 +251,7 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
* NOTE2: We ignore failed copies, as the next iteration will clean
* up after us. This saves an extra branch in the common case.
*/
- ret = __copy_from_kernel_inatomic(&on_cpu, &owner->on_cpu, sizeof(on_cpu));
+ get_kernel(on_cpu, &owner->on_cpu);
if (!on_cpu || need_resched())
return false;
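For completeness, here is what the 8-byte case of the macro boils down
to, hand-expanded (illustration only, not part of the patch):

	unsigned long __val;

	/*
	 * A single MOVQ with an exception table entry: a faulting
	 * kernel address resumes at the 2: label via the extable
	 * fixup, with no STAC/CLAC framing around the access:
	 */
	asm volatile("1:	movq %1,%0\n"
		     "2:\n"
		     _ASM_EXTABLE_EX(1b, 2b)
		     : "=r" (__val)
		     : "m" (*(unsigned long *)src));
	(dst) = __val;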