* [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior
2013-05-26 14:21 [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior Michael S. Tsirkin
@ 2013-05-26 14:21 ` Michael S. Tsirkin
2013-05-26 14:30 ` [PATCH v3-resend 01/11] asm-generic: uaccess s/might_sleep/might_fault/ Michael S. Tsirkin
2013-05-27 16:35 ` [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior Peter Zijlstra
2 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2013-05-26 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: Ingo Molnar, Peter Zijlstra, Arnd Bergmann, linux-arch, linux-mm,
kvm
I seem to have mis-sent v3. Trying again with same patches after
fixing the message id for the cover letter. I hope the duplicates
that are thus created don't inconvenience people too much.
If they do, I apologize.
I have pared down the Cc list to reduce the noise.
sched maintainers are Cc'd on all patches since that's
the tree I aim for with these patches.
This improves the might_fault annotations used
by uaccess routines:
1. The only reason uaccess routines might sleep
is if they fault. Make this explicit for
all architectures.
2. A voluntary preempt point in uaccess functions
means the compiler can't inline them efficiently;
this breaks the assumption, which e.g. net code
seems to make, that they are very fast and small.
Remove this preempt point so behaviour
matches what callers assume.
3. Accesses (e.g. through socket ops) to kernel memory
with KERNEL_DS, like net/sunrpc does, will never sleep.
Remove an unconditional might_sleep in the inline
might_fault in kernel.h
(used when PROVE_LOCKING is not set).
4. Accesses with pagefault_disable return -EFAULT
but won't cause the caller to sleep.
Check for that and avoid might_sleep when
PROVE_LOCKING is set.
I'd like these changes to go in for 3.11:
besides a general benefit of improved
consistency and performance, I would also like them
for the vhost driver, where we want to call socket ops
under a spinlock and fall back on a slower thread handler
on error.
If the changes look good, would sched maintainers
please consider merging them through sched/core because of the
interaction with the scheduler?
Please review, and consider for 3.11.
Note on arch code updates:
I tested x86_64 code.
Other architectures were build-tested.
I don't have cross-build environment for arm64, tile, microblaze and
mn10300 architectures. arm64 and tile got acks.
The arch changes look generally safe enough,
but I would appreciate review/acks from arch maintainers.
Core changes naturally need acks from sched maintainers.
Version 1 of this change was titled
x86: uaccess s/might_sleep/might_fault/
Changes from v2:
add a patch removing a voluntary preempt point
in uaccess functions when PREEMPT_VOLUNTARY is set.
Addresses comments by Arnd Bergmann,
and Peter Zijlstra.
comment on future possible simplifications in the git log
for the powerpc patch. Addresses a comment
by Arnd Bergmann.
Changes from v1:
add more architectures
fix might_fault() scheduling differently depending
on CONFIG_PROVE_LOCKING, as suggested by Ingo
Michael S. Tsirkin (11):
asm-generic: uaccess s/might_sleep/might_fault/
arm64: uaccess s/might_sleep/might_fault/
frv: uaccess s/might_sleep/might_fault/
m32r: uaccess s/might_sleep/might_fault/
microblaze: uaccess s/might_sleep/might_fault/
mn10300: uaccess s/might_sleep/might_fault/
powerpc: uaccess s/might_sleep/might_fault/
tile: uaccess s/might_sleep/might_fault/
x86: uaccess s/might_sleep/might_fault/
kernel: drop voluntary schedule from might_fault
kernel: uaccess in atomic with pagefault_disable
arch/arm64/include/asm/uaccess.h | 4 ++--
arch/frv/include/asm/uaccess.h | 4 ++--
arch/m32r/include/asm/uaccess.h | 12 ++++++------
arch/microblaze/include/asm/uaccess.h | 6 +++---
arch/mn10300/include/asm/uaccess.h | 4 ++--
arch/powerpc/include/asm/uaccess.h | 16 ++++++++--------
arch/tile/include/asm/uaccess.h | 2 +-
arch/x86/include/asm/uaccess_64.h | 2 +-
include/asm-generic/uaccess.h | 10 +++++-----
include/linux/kernel.h | 7 ++-----
mm/memory.c | 10 +++++++---
11 files changed, 39 insertions(+), 38 deletions(-)
--
MST
* [PATCH v3-resend 01/11] asm-generic: uaccess s/might_sleep/might_fault/
2013-05-26 14:21 [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior Michael S. Tsirkin
2013-05-26 14:21 ` Michael S. Tsirkin
@ 2013-05-26 14:30 ` Michael S. Tsirkin
2013-05-27 16:35 ` [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior Peter Zijlstra
2 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2013-05-26 14:30 UTC (permalink / raw)
To: linux-kernel; +Cc: Ingo Molnar, Peter Zijlstra, Arnd Bergmann, linux-arch
The only reason uaccess routines might sleep
is if they fault. Make this explicit.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
include/asm-generic/uaccess.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index c184aa8..dc1269c 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -163,7 +163,7 @@ static inline __must_check long __copy_to_user(void __user *to,
#define put_user(x, ptr) \
({ \
- might_sleep(); \
+ might_fault(); \
access_ok(VERIFY_WRITE, ptr, sizeof(*ptr)) ? \
__put_user(x, ptr) : \
-EFAULT; \
@@ -225,7 +225,7 @@ extern int __put_user_bad(void) __attribute__((noreturn));
#define get_user(x, ptr) \
({ \
- might_sleep(); \
+ might_fault(); \
access_ok(VERIFY_READ, ptr, sizeof(*ptr)) ? \
__get_user(x, ptr) : \
-EFAULT; \
@@ -255,7 +255,7 @@ extern int __get_user_bad(void) __attribute__((noreturn));
static inline long copy_from_user(void *to,
const void __user * from, unsigned long n)
{
- might_sleep();
+ might_fault();
if (access_ok(VERIFY_READ, from, n))
return __copy_from_user(to, from, n);
else
@@ -265,7 +265,7 @@ static inline long copy_from_user(void *to,
static inline long copy_to_user(void __user *to,
const void *from, unsigned long n)
{
- might_sleep();
+ might_fault();
if (access_ok(VERIFY_WRITE, to, n))
return __copy_to_user(to, from, n);
else
@@ -336,7 +336,7 @@ __clear_user(void __user *to, unsigned long n)
static inline __must_check unsigned long
clear_user(void __user *to, unsigned long n)
{
- might_sleep();
+ might_fault();
if (!access_ok(VERIFY_WRITE, to, n))
return n;
--
MST
* Re: [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior
2013-05-26 14:21 [PATCH v3-resend 00/11] uaccess: better might_sleep/might_fault behavior Michael S. Tsirkin
2013-05-26 14:21 ` Michael S. Tsirkin
2013-05-26 14:30 ` [PATCH v3-resend 01/11] asm-generic: uaccess s/might_sleep/might_fault/ Michael S. Tsirkin
@ 2013-05-27 16:35 ` Peter Zijlstra
2013-05-27 16:35 ` Peter Zijlstra
2 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2013-05-27 16:35 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-kernel, Ingo Molnar, Arnd Bergmann, linux-arch, linux-mm,
kvm
On Sun, May 26, 2013 at 05:21:30PM +0300, Michael S. Tsirkin wrote:
> If the changes look good, would sched maintainers
> please consider merging them through sched/core because of the
> interaction with the scheduler?
>
> Please review, and consider for 3.11.
I'll stick them in my queue, we'll see if anything falls over ;-)