* Patch "x86: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic" has been added to the 3.10-stable tree
From: gregkh @ 2015-10-18  0:51 UTC
To: ak, gregkh, hpa, tglx; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
x86: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic
to the 3.10-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
x86-add-1-2-4-8-byte-optimization-to-64bit-__copy_-from-to-_user_inatomic.patch
and it can be found in the queue-3.10 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From ff47ab4ff3cddfa7bc1b25b990e24abe2ae474ff Mon Sep 17 00:00:00 2001
From: Andi Kleen <ak@linux.intel.com>
Date: Fri, 16 Aug 2013 14:17:19 -0700
Subject: x86: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic
From: Andi Kleen <ak@linux.intel.com>
commit ff47ab4ff3cddfa7bc1b25b990e24abe2ae474ff upstream.
The 64-bit __copy_{from,to}_user_inatomic variants always called
copy_user_generic(), but skipped the special optimizations for
1/2/4/8 byte accesses.
This especially hurts the futex code, which accesses the 4-byte futex
value in user space through a complicated fast-string operation in a
function call, instead of a single movl.
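For illustration only (this is not part of the patch): the reason a
constant-size dispatch helps is that the compiler can collapse a known
4-byte copy into a single move instead of a call into the string-copy
routine. A minimal sketch of the idea, assuming a hypothetical
get_user_4() helper in place of the kernel's real __get_user_asm()
exception-table machinery:

    /*
     * Simplified sketch: get_user_4() is hypothetical; the real code in
     * uaccess_64.h uses __get_user_asm() with exception tables.
     */
    static __always_inline int
    copy4_or_generic(void *dst, const void __user *src, unsigned size)
    {
        if (__builtin_constant_p(size) && size == 4)
            return get_user_4(dst, src);  /* compiles down to one movl */
        return copy_user_generic(dst, (__force void *)src, size);
    }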
Use __copy_{from,to}_user for the _inatomic variants instead, so they
get the same optimizations. The only problem was the might_fault() call
in those functions, so move it into new __copy_{from,to}_user() wrappers
and call __copy_{from,to}_user_nocheck() from the *_inatomic variants
directly, as the diff below does.
The 32-bit code already did this correctly by duplicating the code.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1376687844-19857-2-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/include/asm/uaccess_64.h | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -77,11 +77,10 @@ int copy_to_user(void __user *dst, const
}
static __always_inline __must_check
-int __copy_from_user(void *dst, const void __user *src, unsigned size)
+int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
{
int ret = 0;
- might_fault();
if (!__builtin_constant_p(size))
return copy_user_generic(dst, (__force void *)src, size);
switch (size) {
@@ -121,11 +120,17 @@ int __copy_from_user(void *dst, const vo
}
static __always_inline __must_check
-int __copy_to_user(void __user *dst, const void *src, unsigned size)
+int __copy_from_user(void *dst, const void __user *src, unsigned size)
+{
+ might_fault();
+ return __copy_from_user_nocheck(dst, src, size);
+}
+
+static __always_inline __must_check
+int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
{
int ret = 0;
- might_fault();
if (!__builtin_constant_p(size))
return copy_user_generic((__force void *)dst, src, size);
switch (size) {
@@ -165,6 +170,13 @@ int __copy_to_user(void __user *dst, con
}
static __always_inline __must_check
+int __copy_to_user(void __user *dst, const void *src, unsigned size)
+{
+ might_fault();
+ return __copy_to_user_nocheck(dst, src, size);
+}
+
+static __always_inline __must_check
int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
{
int ret = 0;
@@ -220,13 +232,13 @@ int __copy_in_user(void __user *dst, con
static __must_check __always_inline int
__copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
{
- return copy_user_generic(dst, (__force const void *)src, size);
+ return __copy_from_user_nocheck(dst, (__force const void *)src, size);
}
static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
- return copy_user_generic((__force void *)dst, src, size);
+ return __copy_to_user_nocheck((__force void *)dst, src, size);
}
extern long __copy_user_nocache(void *dst, const void __user *src,
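For context, the *_inatomic variants are meant to be called with page
faults disabled, which is why might_fault() has to stay in the checked
wrappers and out of the _nocheck paths used above. A rough sketch of a
typical caller, in the spirit of the futex fast path (the variable names
and error handling here are illustrative, not taken from this patch):

    u32 val;
    int ret;

    pagefault_disable();
    /* no sleeping and no might_fault() allowed on this path */
    ret = __copy_from_user_inatomic(&val, uaddr, sizeof(val));
    pagefault_enable();

    if (ret)
        return -EFAULT;  /* caller falls back to a faulting slow path */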
Patches currently in stable-queue which might be from ak@linux.intel.com are
queue-3.10/x86-apic-serialize-lvtt-and-tsc_deadline-writes.patch
queue-3.10/x86-add-1-2-4-8-byte-optimization-to-64bit-__copy_-from-to-_user_inatomic.patch