* [PATCH v3 0/2] x86 FPU API
@ 2010-05-06 8:45 Avi Kivity
2010-05-06 8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
` (2 more replies)
0 siblings, 3 replies; 24+ messages in thread
From: Avi Kivity @ 2010-05-06 8:45 UTC (permalink / raw)
To: H. Peter Anvin, Ingo Molnar
Cc: kvm, linux-kernel, Brian Gerst, Dexuan Cui, Sheng Yang,
Suresh Siddha
Currently all fpu accessors are wedded to task_struct. However kvm also uses
the fpu in a different context. Introduce an FPU API, and replace the
current uses with the new API.
While this patchset is oriented towards deeper changes, as a first step it
simplifies xsave handling for kvm.
v3:
use u8 instead of bool in asm to avoid bad code generation on older
gccs.
v2:
eliminate useless padding in use_xsave() by using a larger instruction
Avi Kivity (2):
x86: eliminate TS_XSAVE
x86: Introduce 'struct fpu' and related API
arch/x86/include/asm/i387.h | 135 +++++++++++++++++++++++++++---------
arch/x86/include/asm/processor.h | 6 ++-
arch/x86/include/asm/thread_info.h | 1 -
arch/x86/include/asm/xsave.h | 7 +-
arch/x86/kernel/cpu/common.c | 5 +-
arch/x86/kernel/i387.c | 107 ++++++++++++++---------------
arch/x86/kernel/process.c | 20 +++---
arch/x86/kernel/process_32.c | 2 +-
arch/x86/kernel/process_64.c | 2 +-
arch/x86/kernel/xsave.c | 8 +-
arch/x86/math-emu/fpu_aux.c | 6 +-
11 files changed, 181 insertions(+), 118 deletions(-)
* [PATCH v3 1/2] x86: eliminate TS_XSAVE
  2010-05-06  8:45 [PATCH v3 0/2] x86 FPU API Avi Kivity
@ 2010-05-06  8:45 ` Avi Kivity
  2010-05-10 20:39   ` [tip:x86/fpu] x86: Eliminate TS_XSAVE tip-bot for Avi Kivity
                     ` (3 more replies)
  2010-05-06  8:45 ` [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API Avi Kivity
  2010-05-10  8:48 ` [PATCH v3 0/2] x86 FPU API Avi Kivity
  2 siblings, 4 replies; 24+ messages in thread
From: Avi Kivity @ 2010-05-06  8:45 UTC (permalink / raw)
  To: H. Peter Anvin, Ingo Molnar
  Cc: kvm, linux-kernel, Brian Gerst, Dexuan Cui, Sheng Yang,
	Suresh Siddha

The fpu code currently uses current->thread_info->status & TS_XSAVE as
a way to distinguish between XSAVE-capable processors and older
processors.  The decision is not really task specific; instead we use
the task status to avoid a global memory reference - the value should
be the same across all threads.

Eliminate this tie-in to the task structure by using an alternative
instruction keyed off the XSAVE cpu feature; this results in shorter
and faster code, without introducing a global memory reference.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/include/asm/i387.h        |   20 ++++++++++++++++----
 arch/x86/include/asm/thread_info.h |    1 -
 arch/x86/kernel/cpu/common.c       |    5 +----
 arch/x86/kernel/i387.c             |    5 +----
 arch/x86/kernel/xsave.c            |    6 +++---
 5 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index da29309..a301a68 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -56,6 +56,18 @@ extern int restore_i387_xstate_ia32(void __user *buf);

 #define X87_FSW_ES	(1 << 7)	/* Exception Summary */

+static inline bool use_xsave(void)
+{
+	u8 has_xsave;
+
+	alternative_io("mov $0, %0",
+		       "mov $1, %0",
+		       X86_FEATURE_XSAVE,
+		       "=g"(has_xsave));
+
+	return has_xsave;
+}
+
 #ifdef CONFIG_X86_64

 /* Ignore delayed exceptions from user space */
@@ -99,7 +111,7 @@ static inline void clear_fpu_state(struct task_struct *tsk)
 	/*
 	 * xsave header may indicate the init state of the FP.
 	 */
-	if ((task_thread_info(tsk)->status & TS_XSAVE) &&
+	if (use_xsave() &&
 	    !(xstate->xsave_hdr.xstate_bv & XSTATE_FP))
 		return;
@@ -164,7 +176,7 @@ static inline void fxsave(struct task_struct *tsk)

 static inline void __save_init_fpu(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		xsave(tsk);
 	else
 		fxsave(tsk);
@@ -218,7 +230,7 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
  */
 static inline void __save_init_fpu(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE) {
+	if (use_xsave()) {
 		struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
 		struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
@@ -266,7 +278,7 @@ end:

 static inline int restore_fpu_checking(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		return xrstor_checking(&tsk->thread.xstate->xsave);
 	else
 		return fxrstor_checking(&tsk->thread.xstate->fxsave);
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index d017ed5..d4092fa 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -242,7 +242,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TS_POLLING		0x0004	/* true if in idle loop and not sleeping */
 #define TS_RESTORE_SIGMASK	0x0008	/* restore signal mask in do_signal() */
-#define TS_XSAVE		0x0010	/* Use xsave/xrstor */

 #define tsk_is_polling(t) (task_thread_info(t)->status & TS_POLLING)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 4868e4a..c1c00d0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1243,10 +1243,7 @@ void __cpuinit cpu_init(void)
 	/*
 	 * Force FPU initialization:
 	 */
-	if (cpu_has_xsave)
-		current_thread_info()->status = TS_XSAVE;
-	else
-		current_thread_info()->status = 0;
+	current_thread_info()->status = 0;
 	clear_used_math();
 	mxcsr_feature_mask_init();
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 54c31c2..14ca1dc 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -102,10 +102,7 @@ void __cpuinit fpu_init(void)
 	mxcsr_feature_mask_init();
 	/* clean state in init */
-	if (cpu_has_xsave)
-		current_thread_info()->status = TS_XSAVE;
-	else
-		current_thread_info()->status = 0;
+	current_thread_info()->status = 0;
 	clear_used_math();
 }
 #endif	/* CONFIG_X86_64 */
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 782c3a3..c1b0a11 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -99,7 +99,7 @@ int save_i387_xstate(void __user *buf)
 	if (err)
 		return err;

-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		err = xsave_user(buf);
 	else
 		err = fxsave_user(buf);
@@ -116,7 +116,7 @@ int save_i387_xstate(void __user *buf)
 	clear_used_math(); /* trigger finit */

-	if (task_thread_info(tsk)->status & TS_XSAVE) {
+	if (use_xsave()) {
 		struct _fpstate __user *fx = buf;
 		struct _xstate __user *x = buf;
 		u64 xstate_bv;
@@ -225,7 +225,7 @@ int restore_i387_xstate(void __user *buf)
 		clts();
 		task_thread_info(current)->status |= TS_USEDFPU;
 	}
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		err = restore_user_xstate(buf);
 	else
 		err = fxrstor_checking((__force struct i387_fxsave_struct *)
--
1.7.0.4
* [tip:x86/fpu] x86: Eliminate TS_XSAVE
  2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
@ 2010-05-10 20:39   ` tip-bot for Avi Kivity
  2010-05-12  0:18   ` [tip:x86/fpu] x86, fpu: Use the proper asm constraint in use_xsave() tip-bot for H. Peter Anvin
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 24+ messages in thread
From: tip-bot for Avi Kivity @ 2010-05-10 20:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  c9ad488289144ae5ef53b012e15895ef1f5e4bb6
Gitweb:     http://git.kernel.org/tip/c9ad488289144ae5ef53b012e15895ef1f5e4bb6
Author:     Avi Kivity <avi@redhat.com>
AuthorDate: Thu, 6 May 2010 11:45:45 +0300
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Mon, 10 May 2010 10:39:33 -0700

x86: Eliminate TS_XSAVE

The fpu code currently uses current->thread_info->status & TS_XSAVE as
a way to distinguish between XSAVE-capable processors and older
processors.  The decision is not really task specific; instead we use
the task status to avoid a global memory reference - the value should
be the same across all threads.

Eliminate this tie-in to the task structure by using an alternative
instruction keyed off the XSAVE cpu feature; this results in shorter
and faster code, without introducing a global memory reference.

[ hpa: in the future, this probably should use an asm jmp ]

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
 arch/x86/include/asm/i387.h        |   20 ++++++++++++++++----
 arch/x86/include/asm/thread_info.h |    1 -
 arch/x86/kernel/cpu/common.c       |    5 +----
 arch/x86/kernel/i387.c             |    5 +----
 arch/x86/kernel/xsave.c            |    6 +++---
 5 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index da29309..a301a68 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -56,6 +56,18 @@ extern int restore_i387_xstate_ia32(void __user *buf);

 #define X87_FSW_ES	(1 << 7)	/* Exception Summary */

+static inline bool use_xsave(void)
+{
+	u8 has_xsave;
+
+	alternative_io("mov $0, %0",
+		       "mov $1, %0",
+		       X86_FEATURE_XSAVE,
+		       "=g"(has_xsave));
+
+	return has_xsave;
+}
+
 #ifdef CONFIG_X86_64

 /* Ignore delayed exceptions from user space */
@@ -99,7 +111,7 @@ static inline void clear_fpu_state(struct task_struct *tsk)
 	/*
 	 * xsave header may indicate the init state of the FP.
 	 */
-	if ((task_thread_info(tsk)->status & TS_XSAVE) &&
+	if (use_xsave() &&
 	    !(xstate->xsave_hdr.xstate_bv & XSTATE_FP))
 		return;
@@ -164,7 +176,7 @@ static inline void fxsave(struct task_struct *tsk)

 static inline void __save_init_fpu(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		xsave(tsk);
 	else
 		fxsave(tsk);
@@ -218,7 +230,7 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
  */
 static inline void __save_init_fpu(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE) {
+	if (use_xsave()) {
 		struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
 		struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
@@ -266,7 +278,7 @@ end:

 static inline int restore_fpu_checking(struct task_struct *tsk)
 {
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		return xrstor_checking(&tsk->thread.xstate->xsave);
 	else
 		return fxrstor_checking(&tsk->thread.xstate->fxsave);
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index e0d2890..e9e3415 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -244,7 +244,6 @@ static inline struct thread_info *current_thread_info(void)
 #define TS_POLLING		0x0004	/* true if in idle loop and not sleeping */
 #define TS_RESTORE_SIGMASK	0x0008	/* restore signal mask in do_signal() */
-#define TS_XSAVE		0x0010	/* Use xsave/xrstor */

 #define tsk_is_polling(t) (task_thread_info(t)->status & TS_POLLING)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 4868e4a..c1c00d0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1243,10 +1243,7 @@ void __cpuinit cpu_init(void)
 	/*
 	 * Force FPU initialization:
 	 */
-	if (cpu_has_xsave)
-		current_thread_info()->status = TS_XSAVE;
-	else
-		current_thread_info()->status = 0;
+	current_thread_info()->status = 0;
 	clear_used_math();
 	mxcsr_feature_mask_init();
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 54c31c2..14ca1dc 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -102,10 +102,7 @@ void __cpuinit fpu_init(void)
 	mxcsr_feature_mask_init();
 	/* clean state in init */
-	if (cpu_has_xsave)
-		current_thread_info()->status = TS_XSAVE;
-	else
-		current_thread_info()->status = 0;
+	current_thread_info()->status = 0;
 	clear_used_math();
 }
 #endif	/* CONFIG_X86_64 */
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index 782c3a3..c1b0a11 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -99,7 +99,7 @@ int save_i387_xstate(void __user *buf)
 	if (err)
 		return err;

-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		err = xsave_user(buf);
 	else
 		err = fxsave_user(buf);
@@ -116,7 +116,7 @@ int save_i387_xstate(void __user *buf)
 	clear_used_math(); /* trigger finit */

-	if (task_thread_info(tsk)->status & TS_XSAVE) {
+	if (use_xsave()) {
 		struct _fpstate __user *fx = buf;
 		struct _xstate __user *x = buf;
 		u64 xstate_bv;
@@ -225,7 +225,7 @@ int restore_i387_xstate(void __user *buf)
 		clts();
 		task_thread_info(current)->status |= TS_USEDFPU;
 	}
-	if (task_thread_info(tsk)->status & TS_XSAVE)
+	if (use_xsave())
 		err = restore_user_xstate(buf);
 	else
 		err = fxrstor_checking((__force struct i387_fxsave_struct *)
* [tip:x86/fpu] x86, fpu: Use the proper asm constraint in use_xsave()
  2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
  2010-05-10 20:39   ` [tip:x86/fpu] x86: Eliminate TS_XSAVE tip-bot for Avi Kivity
@ 2010-05-12  0:18   ` tip-bot for H. Peter Anvin
  2010-05-12  1:06   ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives tip-bot for H. Peter Anvin
  2010-05-12  1:06   ` [tip:x86/fpu] x86, fpu: Use static_cpu_has() to implement use_xsave() tip-bot for H. Peter Anvin
  3 siblings, 0 replies; 24+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-12  0:18 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  dce8bf4e115aa44d590802ce3554e926840c9042
Gitweb:     http://git.kernel.org/tip/dce8bf4e115aa44d590802ce3554e926840c9042
Author:     H. Peter Anvin <hpa@zytor.com>
AuthorDate: Mon, 10 May 2010 13:41:41 -0700
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Mon, 10 May 2010 13:41:41 -0700

x86, fpu: Use the proper asm constraint in use_xsave()

The proper constraint for a receiving 8-bit variable is "=qm", not
"=g" which equals "=rim"; even though the "i" will never match, bugs
can and do happen due to the difference between "q" and "r".

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
---
 arch/x86/include/asm/i387.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 1a8cca3..8002e9c 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -64,7 +64,7 @@ static inline bool use_xsave(void)
 	alternative_io("mov $0, %0",
 		       "mov $1, %0",
 		       X86_FEATURE_XSAVE,
-		       "=g"(has_xsave));
+		       "=qm" (has_xsave));

 	return has_xsave;
 }
* [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
  2010-05-10 20:39   ` [tip:x86/fpu] x86: Eliminate TS_XSAVE tip-bot for Avi Kivity
  2010-05-12  0:18   ` [tip:x86/fpu] x86, fpu: Use the proper asm constraint in use_xsave() tip-bot for H. Peter Anvin
@ 2010-05-12  1:06   ` tip-bot for H. Peter Anvin
  2010-05-18 20:10     ` Eric Dumazet
  2010-05-12  1:06   ` [tip:x86/fpu] x86, fpu: Use static_cpu_has() to implement use_xsave() tip-bot for H. Peter Anvin
  3 siblings, 1 reply; 24+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-12  1:06 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  a3c8acd04376d604370dcb6cd2143c9c14078a50
Gitweb:     http://git.kernel.org/tip/a3c8acd04376d604370dcb6cd2143c9c14078a50
Author:     H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 11 May 2010 17:47:07 -0700
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Tue, 11 May 2010 17:47:07 -0700

x86: Add new static_cpu_has() function using alternatives

For CPU-feature-specific code that touches performance-critical paths,
introduce a static patching version of [boot_]cpu_has().  This is run
at alternatives time and is therefore not appropriate for most
initialization code, but on the other hand initialization code is
generally not performance critical.

On gcc 4.5+ this uses the new "asm goto" feature.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
---
 arch/x86/include/asm/cpufeature.h |   57 +++++++++++++++++++++++++++++++++++++
 1 files changed, 57 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 0cd82d0..9b11a5c 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -175,6 +175,7 @@
 #if defined(__KERNEL__) && !defined(__ASSEMBLY__)

+#include <asm/asm.h>
 #include <linux/bitops.h>

 extern const char * const x86_cap_flags[NCAPINTS*32];
@@ -283,6 +284,62 @@ extern const char * const x86_power_flags[32];

 #endif /* CONFIG_X86_64 */

+/*
+ * Static testing of CPU features.  Used the same as boot_cpu_has().
+ * These are only valid after alternatives have run, but will statically
+ * patch the target code for additional performance.
+ *
+ */
+static __always_inline __pure bool __static_cpu_has(u8 bit)
+{
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)
+	asm goto("1: jmp %l[t_no]\n"
+		 "2:\n"
+		 ".section .altinstructions,\"a\"\n"
+		 _ASM_ALIGN "\n"
+		 _ASM_PTR "1b\n"
+		 _ASM_PTR "0\n"			/* no replacement */
+		 " .byte %P0\n"			/* feature bit */
+		 " .byte 2b - 1b\n"		/* source len */
+		 " .byte 0\n"			/* replacement len */
+		 " .byte 0xff + 0 - (2b-1b)\n"	/* padding */
+		 ".previous\n"
+		 : : "i" (bit) : : t_no);
+	return true;
+t_no:
+	return false;
+#else
+	u8 flag;
+	/* Open-coded due to __stringify() in ALTERNATIVE() */
+	asm volatile("1: movb $0,%0\n"
+		     "2:\n"
+		     ".section .altinstructions,\"a\"\n"
+		     _ASM_ALIGN "\n"
+		     _ASM_PTR "1b\n"
+		     _ASM_PTR "3f\n"
+		     " .byte %P1\n"			/* feature bit */
+		     " .byte 2b - 1b\n"			/* source len */
+		     " .byte 4f - 3f\n"			/* replacement len */
+		     " .byte 0xff + (4f-3f) - (2b-1b)\n" /* padding */
+		     ".previous\n"
+		     ".section .altinstr_replacement,\"ax\"\n"
+		     "3: movb $1,%0\n"
+		     "4:\n"
+		     ".previous\n"
+		     : "=qm" (flag) : "i" (bit));
+	return flag;
+#endif
+}
+
+#define static_cpu_has(bit)					\
+(								\
+	__builtin_constant_p(boot_cpu_has(bit)) ?		\
+		boot_cpu_has(bit) :				\
+	(__builtin_constant_p(bit) && !((bit) & ~0xff)) ?	\
+		__static_cpu_has(bit) :				\
+		boot_cpu_has(bit)				\
+)
+
 #endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */

 #endif /* _ASM_X86_CPUFEATURE_H */
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-12  1:06 ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives tip-bot for H. Peter Anvin
@ 2010-05-18 20:10   ` Eric Dumazet
  2010-05-18 20:43     ` H. Peter Anvin
                       ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Eric Dumazet @ 2010-05-18 20:10 UTC (permalink / raw)
  To: mingo, hpa, linux-kernel, suresh.b.siddha, tglx, avi; +Cc: linux-tip-commits

On Wednesday 12 May 2010 at 01:06 +0000, tip-bot for H. Peter Anvin wrote:
> Commit-ID:  a3c8acd04376d604370dcb6cd2143c9c14078a50
> Gitweb:     http://git.kernel.org/tip/a3c8acd04376d604370dcb6cd2143c9c14078a50
> Author:     H. Peter Anvin <hpa@zytor.com>
> AuthorDate: Tue, 11 May 2010 17:47:07 -0700
> Committer:  H. Peter Anvin <hpa@zytor.com>
> CommitDate: Tue, 11 May 2010 17:47:07 -0700
>
> x86: Add new static_cpu_has() function using alternatives
>
> For CPU-feature-specific code that touches performance-critical paths,
> introduce a static patching version of [boot_]cpu_has().  This is run
> at alternatives time and is therefore not appropriate for most
> initialization code, but on the other hand initialization code is
> generally not performance critical.
>
> On gcc 4.5+ this uses the new "asm goto" feature.

Might be time to change Documentation/Changes about gcc requirements ...

# make
  CHK     include/linux/version.h
  CHK     include/generated/utsrelease.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  CC      arch/x86/kernel/process_32.o
/usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h: In function `prepare_to_copy':
/usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: warning: asm operand 1 probably doesn't match constraints
/usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: error: impossible constraint in `asm'
/usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:313: warning: 'flag' might be used uninitialized in this function
make[2]: *** [arch/x86/kernel/process_32.o] Error 1
make[1]: *** [arch/x86/kernel] Error 2
make: *** [arch/x86] Error 2
# gcc -v
Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
 --infodir=/usr/share/info --enable-shared --enable-threads=posix
 --disable-checking --with-system-zlib --enable-__cxa_atexit
 --disable-libunwind-exceptions --enable-java-awt=gtk
 --host=i386-redhat-linux
Thread model: posix
gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 20:10 ` Eric Dumazet
@ 2010-05-18 20:43   ` H. Peter Anvin
  2010-05-18 20:57   ` H. Peter Anvin
  2010-05-18 20:58   ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives H. Peter Anvin
  2 siblings, 0 replies; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-18 20:43 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

On 05/18/2010 01:10 PM, Eric Dumazet wrote:
> On Wednesday 12 May 2010 at 01:06 +0000, tip-bot for H. Peter Anvin wrote:
>> Commit-ID:  a3c8acd04376d604370dcb6cd2143c9c14078a50
>> Gitweb:     http://git.kernel.org/tip/a3c8acd04376d604370dcb6cd2143c9c14078a50
>> Author:     H. Peter Anvin <hpa@zytor.com>
>> AuthorDate: Tue, 11 May 2010 17:47:07 -0700
>> Committer:  H. Peter Anvin <hpa@zytor.com>
>> CommitDate: Tue, 11 May 2010 17:47:07 -0700
>>
>> x86: Add new static_cpu_has() function using alternatives
>>
>> For CPU-feature-specific code that touches performance-critical paths,
>> introduce a static patching version of [boot_]cpu_has().  This is run
>> at alternatives time and is therefore not appropriate for most
>> initialization code, but on the other hand initialization code is
>> generally not performance critical.
>>
>> On gcc 4.5+ this uses the new "asm goto" feature.
>
> Might be time to change Documentation/Changes about gcc requirements ...

Well, this failure is in the fallback code.  What this seems to imply
is that gcc 3.4.6 doesn't know how to propagate what is already known
to be constant in an inline into an immediate.  This is a pretty big
fail, and yes, if it really is this broken it might just be time to
declare gcc 3.4 dead.

> # make
>   CHK     include/linux/version.h
>   CHK     include/generated/utsrelease.h
>   CALL    scripts/checksyscalls.sh
>   CHK     include/generated/compile.h
>   CC      arch/x86/kernel/process_32.o
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h: In function
> `prepare_to_copy':
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: warning: asm
> operand 1 probably doesn't match constraints
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: error:
> impossible constraint in `asm'
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:313: warning:
> 'flag' might be used uninitialized in this function
> make[2]: *** [arch/x86/kernel/process_32.o] Error 1
> make[1]: *** [arch/x86/kernel] Error 2
> make: *** [arch/x86] Error 2
> # gcc -v
> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --disable-checking --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-java-awt=gtk
> --host=i386-redhat-linux
> Thread model: posix
> gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)

	-hpa
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 20:10 ` Eric Dumazet
  2010-05-18 20:43   ` H. Peter Anvin
@ 2010-05-18 20:57   ` H. Peter Anvin
  2010-05-18 21:11     ` Eric Dumazet
  2010-05-18 20:58   ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives H. Peter Anvin
  2 siblings, 1 reply; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-18 20:57 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

On 05/18/2010 01:10 PM, Eric Dumazet wrote:
> # gcc -v
> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --disable-checking --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-java-awt=gtk
> --host=i386-redhat-linux
> Thread model: posix
> gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)

I just implemented a fallback for gcc 3, but the real question is to
which degree we still care about gcc 3 support for x86 specifically
(other architectures might have other needs, but this is x86-specific
code.)

Lately the number of issues with gcc 3 support seems to have gone way
up, and at some point we're going to have to cut it loose -- when would
depend largely on what the usage case is; e.g. why are you, yourself,
using gcc 3.4 to compile a state of the art kernel?

	-hpa
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 20:57 ` H. Peter Anvin
@ 2010-05-18 21:11   ` Eric Dumazet
  2010-05-18 21:31     ` H. Peter Anvin
  2010-05-18 21:38     ` Does anyone care about gcc 3.x support for x86 anymore? H. Peter Anvin
  0 siblings, 2 replies; 24+ messages in thread
From: Eric Dumazet @ 2010-05-18 21:11 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

On Tuesday 18 May 2010 at 13:57 -0700, H. Peter Anvin wrote:
> On 05/18/2010 01:10 PM, Eric Dumazet wrote:
>> # gcc -v
>> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
>> --infodir=/usr/share/info --enable-shared --enable-threads=posix
>> --disable-checking --with-system-zlib --enable-__cxa_atexit
>> --disable-libunwind-exceptions --enable-java-awt=gtk
>> --host=i386-redhat-linux
>> Thread model: posix
>> gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)
>
> I just implemented a fallback for gcc 3, but the real question is to
> which degree we still care about gcc 3 support for x86 specifically
> (other architectures might have other needs, but this is x86-specific code.)
>
> Lately the number of issues with gcc 3 support seems to have gone way
> up, and at some point we're going to have to cut it loose -- when would
> depend largely on what the usage case is; e.g. why are you, yourself,
> using gcc 3.4 to compile a state of the art kernel?

I use many different machines to compile kernels, and found this one
using gcc-3.4.6, but still running original 2.6.9.something RHEL
kernel ;)

For kernels I actually boot, I use gcc-4.5, 4.4.x, 4.3.x, 4.2.x,
4.1.2 (cross compiler)
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 21:11 ` Eric Dumazet
@ 2010-05-18 21:31   ` H. Peter Anvin
  2010-05-18 21:38   ` Does anyone care about gcc 3.x support for x86 anymore? H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-18 21:31 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

On 05/18/2010 02:11 PM, Eric Dumazet wrote:
>
> I use many different machines to compile kernels, and found this one
> using gcc-3.4.6, but still running original 2.6.9.something RHEL
> kernel ;)
>

OK, so this is "testing only, not actually using."  I guess I'm trying
to figure out if *anyone* would actually care if gcc 3.x support is
discontinued.

	-hpa
* Does anyone care about gcc 3.x support for x86 anymore?
  2010-05-18 21:11 ` Eric Dumazet
  2010-05-18 21:31   ` H. Peter Anvin
@ 2010-05-18 21:38   ` H. Peter Anvin
  2010-05-19 23:10     ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-18 21:38 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

Recently, we have seen an increasing number of problems with gcc 3.4
on x86; mostly due to poor constant propagation producing not just bad
code but failing to properly eliminate what should be dead code.

I'm wondering if there is any remaining real use of gcc 3.4 on x86 for
compiling current kernels (as opposed to residual use for compiling
applications on old enterprise distros.)

I'm specifically not referring to other architectures here -- most of
these issues have been in relation to low-level arch-specific code,
and as such only affects the x86 architectures.  Other architectures
may very well have a much stronger need for continued support of an
older toolchain.

If there isn't a reason to preserve support, I would like to consider
discontinue support for using gcc 3 to compile x86 kernels.  If there
is a valid use case, it would be good to know what it is.

	-hpa
* Re: Does anyone care about gcc 3.x support for x86 anymore?
  2010-05-18 21:38 ` H. Peter Anvin
@ 2010-05-19 23:10   ` Mauro Carvalho Chehab
  2010-05-20  0:39     ` H. Peter Anvin
  2010-05-20  0:42     ` H. Peter Anvin
  0 siblings, 2 replies; 24+ messages in thread
From: Mauro Carvalho Chehab @ 2010-05-19 23:10 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Eric Dumazet, mingo, linux-kernel, suresh.b.siddha, tglx, avi,
	linux-tip-commits

H. Peter Anvin wrote:
> Recently, we have seen an increasing number of problems with gcc 3.4 on
> x86; mostly due to poor constant propagation producing not just bad code
> but failing to properly eliminate what should be dead code.

I don't see any problem, as, if people are using gcc3, they are probably
not interested in the bleeding-edge kernel.

However, if the problems are just performance/dead code removal, I would
just add a big warning if someone tries to compile x86 with it.  I don't
like very much the idea of having different minimum gcc requirements for
each architecture, except if gcc is producing broken code.

Currently, Documentation/Changes lists just a common minimal set of
requirements for everything - although the text describing gcc says that
the "version requirements" may vary for each CPU type.

--
Cheers,
Mauro
* Re: Does anyone care about gcc 3.x support for x86 anymore?
  2010-05-19 23:10 ` Mauro Carvalho Chehab
@ 2010-05-20  0:39   ` H. Peter Anvin
  2010-05-20  0:42   ` H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-20  0:39 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Eric Dumazet, mingo, linux-kernel, suresh.b.siddha, tglx, avi,
	linux-tip-commits

On 05/19/2010 04:10 PM, Mauro Carvalho Chehab wrote:
> H. Peter Anvin wrote:
>> Recently, we have seen an increasing number of problems with gcc 3.4 on
>> x86; mostly due to poor constant propagation producing not just bad code
>> but failing to properly eliminate what should be dead code.
>
> I don't see any problem, as, if people are using gcc3, they are probably
> not interested on the bleeding edge kernel.
>
> However, if the problems are just performance/dead code removal, I would
> just add a big warning if someone tries to compile x86 with it.  I don't
> like very much the idea of having different minimum gcc requirements
> for each architecture, except if gcc is producing a broken code.
>
> Currently, Documentation/Changes list just a common minimal list for
> everything - although the text describing gcc say that the "version
> requirements" may vary for each CPU type.
>

We already have different gcc version requirements, whether or not
they're written down is another matter...

	-hpa
* Re: Does anyone care about gcc 3.x support for x86 anymore?
  2010-05-19 23:10 ` Mauro Carvalho Chehab
  2010-05-20  0:39   ` H. Peter Anvin
@ 2010-05-20  0:42   ` H. Peter Anvin
  2010-05-20 12:44     ` Ingo Molnar
  1 sibling, 1 reply; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-20 0:42 UTC (permalink / raw)
To: Mauro Carvalho Chehab
Cc: Eric Dumazet, mingo, linux-kernel, suresh.b.siddha, tglx, avi,
	linux-tip-commits

On 05/19/2010 04:10 PM, Mauro Carvalho Chehab wrote:
>
> However, if the problems are limited to performance and dead-code removal,
> I would just add a big warning when someone tries to compile x86 with it.
> I don't much like the idea of having a different minimum gcc requirement
> for each architecture, except where gcc produces broken code.

I should clarify the problem.  The problems we have seen are related to
constant propagation, which causes gcc3 to die when there is an assembly
constraint like:

	asm("..." : : "i" (foo));

... since "foo" isn't constant as far as it is concerned.  We can put in
workarounds, but keeping it alive is real effort that probably isn't
well spent.

Similarly, lack of constant propagation can cause code that should have
been compile-time removed to still be there, causing link failures.

	-hpa

^ permalink raw reply	[flat|nested] 24+ messages in thread
* Re: Does anyone care about gcc 3.x support for x86 anymore?
  2010-05-20  0:42 ` H. Peter Anvin
@ 2010-05-20 12:44   ` Ingo Molnar
  0 siblings, 0 replies; 24+ messages in thread
From: Ingo Molnar @ 2010-05-20 12:44 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Mauro Carvalho Chehab, Eric Dumazet, mingo, linux-kernel,
	suresh.b.siddha, tglx, avi, linux-tip-commits

* H. Peter Anvin <hpa@zytor.com> wrote:

> On 05/19/2010 04:10 PM, Mauro Carvalho Chehab wrote:
> >
> > However, if the problems are just performance/dead
> > code removal, I would just add a big warning if
> > someone tries to compile x86 with it. I don't like
> > very much the idea of having different minimum gcc
> > requirements for each architecture, except if gcc is
> > producing a broken code.
>
> I should clarify the problem.  The problems we have seen
> are related to constant propagation, which causes gcc3
> to die when there is an assembly constraint like:
>
>	asm("..." : : "i" (foo));
>
> ... since "foo" isn't constant as far as it is
> concerned.  We can put in workarounds, but it's real
> effort to keep it alive that probably isn't well spent.
>
> Similarly, lack of constant propagation can cause code
> that should have been compile-time removed to still be
> there, causing link failures.

Put in a deprecation warning first perhaps?

	Ingo

^ permalink raw reply	[flat|nested] 24+ messages in thread
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 20:10 ` Eric Dumazet
  2010-05-18 20:43   ` H. Peter Anvin
  2010-05-18 20:57   ` H. Peter Anvin
@ 2010-05-18 20:58   ` H. Peter Anvin
  2010-05-18 21:31     ` Eric Dumazet
  2010-05-27 20:12     ` [tip:x86/urgent] x86, cpufeature: Unbreak compile with gcc 3.x tip-bot for H. Peter Anvin
  2 siblings, 2 replies; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-18 20:58 UTC (permalink / raw)
To: Eric Dumazet
Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

[-- Attachment #1: Type: text/plain, Size: 1377 bytes --]

On 05/18/2010 01:10 PM, Eric Dumazet wrote:
>
> Might be time to change Documentation/Changes about gcc requirements ...
>
> # make
>   CHK     include/linux/version.h
>   CHK     include/generated/utsrelease.h
>   CALL    scripts/checksyscalls.sh
>   CHK     include/generated/compile.h
>   CC      arch/x86/kernel/process_32.o
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h: In function `prepare_to_copy':
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: warning: asm operand 1 probably doesn't match constraints
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:315: error: impossible constraint in `asm'
> /usr/src/linux-2.6/arch/x86/include/asm/cpufeature.h:313: warning: 'flag' might be used uninitialized in this function
> make[2]: *** [arch/x86/kernel/process_32.o] Error 1
> make[1]: *** [arch/x86/kernel] Error 2
> make: *** [arch/x86] Error 2
> # gcc -v
> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --disable-checking --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-java-awt=gtk
> --host=i386-redhat-linux
> Thread model: posix
> gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)

Here is the fallback patch if you want to test it.
	-hpa

[-- Attachment #2: diff --]
[-- Type: text/plain, Size: 747 bytes --]

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index dca9c54..4681459 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -332,6 +332,7 @@ static __always_inline __pure bool __static_cpu_has(u8 bit)
 #endif
 }
 
+#if __GNUC__ >= 4
 #define static_cpu_has(bit)					\
 (								\
 	__builtin_constant_p(boot_cpu_has(bit)) ?		\
@@ -340,6 +341,12 @@ static __always_inline __pure bool __static_cpu_has(u8 bit)
 		__static_cpu_has(bit) :				\
 		boot_cpu_has(bit)				\
 )
+#else
+/*
+ * gcc 3.x is too stupid to do the static test; fall back to dynamic.
+ */
+#define static_cpu_has(bit)	boot_cpu_has(bit)
+#endif
 
 #endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* Re: [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives
  2010-05-18 20:58 ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives H. Peter Anvin
@ 2010-05-18 21:31   ` Eric Dumazet
  2010-05-27 20:12   ` [tip:x86/urgent] x86, cpufeature: Unbreak compile with gcc 3.x tip-bot for H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: Eric Dumazet @ 2010-05-18 21:31 UTC (permalink / raw)
To: H. Peter Anvin
Cc: mingo, linux-kernel, suresh.b.siddha, tglx, avi, linux-tip-commits

On Tuesday, 18 May 2010 at 13:58 -0700, H. Peter Anvin wrote:
> Here is the fallback patch if you want to test it.

Thanks, the kernel compiled just fine with this patch.

^ permalink raw reply	[flat|nested] 24+ messages in thread
* [tip:x86/urgent] x86, cpufeature: Unbreak compile with gcc 3.x
  2010-05-18 20:58 ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives H. Peter Anvin
  2010-05-18 21:31   ` Eric Dumazet
@ 2010-05-27 20:12   ` tip-bot for H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-27 20:12 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, eric.dumazet, tglx, hpa

Commit-ID:  1ba4f22c426ba04b00fd717318d50620c621a0e1
Gitweb:     http://git.kernel.org/tip/1ba4f22c426ba04b00fd717318d50620c621a0e1
Author:     H. Peter Anvin <hpa@linux.intel.com>
AuthorDate: Thu, 27 May 2010 12:02:00 -0700
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Thu, 27 May 2010 12:02:00 -0700

x86, cpufeature: Unbreak compile with gcc 3.x

gcc 3 is too braindamaged to be able to compile static_cpu_has() --
apparently it can't tell that a constant passed to an inline function
is still a constant -- so if we're using gcc 3, just use the dynamic
test.

This is bad for performance, but if you care about performance, don't
use an ancient, known-to-optimize-poorly compiler.

Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
LKML-Reference: <4BF2FF82.7090005@zytor.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/include/asm/cpufeature.h |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index dca9c54..4681459 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -332,6 +332,7 @@ static __always_inline __pure bool __static_cpu_has(u8 bit)
 #endif
 }
 
+#if __GNUC__ >= 4
 #define static_cpu_has(bit)					\
 (								\
 	__builtin_constant_p(boot_cpu_has(bit)) ?		\
@@ -340,6 +341,12 @@ static __always_inline __pure bool __static_cpu_has(u8 bit)
 		__static_cpu_has(bit) :				\
 		boot_cpu_has(bit)				\
 )
+#else
+/*
+ * gcc 3.x is too stupid to do the static test; fall back to dynamic.
+ */
+#define static_cpu_has(bit)	boot_cpu_has(bit)
+#endif
 
 #endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [tip:x86/fpu] x86, fpu: Use static_cpu_has() to implement use_xsave()
  2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
                     ` (2 preceding siblings ...)
  2010-05-12  1:06   ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives tip-bot for H. Peter Anvin
@ 2010-05-12  1:06   ` tip-bot for H. Peter Anvin
  3 siblings, 0 replies; 24+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-12 1:06 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  c9775b4cc522e5f1b40b1366a993f0f05f600f39
Gitweb:     http://git.kernel.org/tip/c9775b4cc522e5f1b40b1366a993f0f05f600f39
Author:     H. Peter Anvin <hpa@zytor.com>
AuthorDate: Tue, 11 May 2010 17:49:54 -0700
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Tue, 11 May 2010 17:49:54 -0700

x86, fpu: Use static_cpu_has() to implement use_xsave()

use_xsave() is now just a special case of static_cpu_has(), so use
static_cpu_has().

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
---
 arch/x86/include/asm/i387.h |   12 +++---------
 1 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 8002e9c..c991b3a 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -18,6 +18,7 @@
 #include <linux/hardirq.h>
 #include <linux/slab.h>
 #include <asm/asm.h>
+#include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/sigcontext.h>
 #include <asm/user.h>
@@ -57,16 +58,9 @@ extern int restore_i387_xstate_ia32(void __user *buf);
 
 #define X87_FSW_ES	(1 << 7)	/* Exception Summary */
 
-static inline bool use_xsave(void)
+static __always_inline __pure bool use_xsave(void)
 {
-	u8 has_xsave;
-
-	alternative_io("mov $0, %0",
-		       "mov $1, %0",
-		       X86_FEATURE_XSAVE,
-		       "=qm" (has_xsave));
-
-	return has_xsave;
+	return static_cpu_has(X86_FEATURE_XSAVE);
}
 
 #ifdef CONFIG_X86_64

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API 2010-05-06 8:45 [PATCH v3 0/2] x86 FPU API Avi Kivity 2010-05-06 8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity @ 2010-05-06 8:45 ` Avi Kivity 2010-05-10 20:39 ` [tip:x86/fpu] " tip-bot for Avi Kivity 2010-05-10 20:40 ` [tip:x86/fpu] x86, fpu: Unbreak FPU emulation tip-bot for H. Peter Anvin 2010-05-10 8:48 ` [PATCH v3 0/2] x86 FPU API Avi Kivity 2 siblings, 2 replies; 24+ messages in thread From: Avi Kivity @ 2010-05-06 8:45 UTC (permalink / raw) To: H. Peter Anvin, Ingo Molnar Cc: kvm, linux-kernel, Brian Gerst, Dexuan Cui, Sheng Yang, Suresh Siddha Currently all fpu state access is through tsk->thread.xstate. Since we wish to generalize fpu access to non-task contexts, wrap the state in a new 'struct fpu' and convert existing access to use an fpu API. Signal frame handlers are not converted to the API since they will remain task context only things. Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/include/asm/i387.h | 115 ++++++++++++++++++++++++++++---------- arch/x86/include/asm/processor.h | 6 ++- arch/x86/include/asm/xsave.h | 7 +- arch/x86/kernel/i387.c | 102 +++++++++++++++++----------------- arch/x86/kernel/process.c | 20 +++---- arch/x86/kernel/process_32.c | 2 +- arch/x86/kernel/process_64.c | 2 +- arch/x86/kernel/xsave.c | 2 +- arch/x86/math-emu/fpu_aux.c | 6 +- 9 files changed, 160 insertions(+), 102 deletions(-) diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h index a301a68..1a8cca3 100644 --- a/arch/x86/include/asm/i387.h +++ b/arch/x86/include/asm/i387.h @@ -16,6 +16,7 @@ #include <linux/kernel_stat.h> #include <linux/regset.h> #include <linux/hardirq.h> +#include <linux/slab.h> #include <asm/asm.h> #include <asm/processor.h> #include <asm/sigcontext.h> @@ -103,10 +104,10 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx) values. The kernel data segment can be sometimes 0 and sometimes new user value. Both should be ok. 
Use the PDA as safe address because it should be already in L1. */ -static inline void clear_fpu_state(struct task_struct *tsk) +static inline void fpu_clear(struct fpu *fpu) { - struct xsave_struct *xstate = &tsk->thread.xstate->xsave; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct xsave_struct *xstate = &fpu->state->xsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; /* * xsave header may indicate the init state of the FP. @@ -123,6 +124,11 @@ static inline void clear_fpu_state(struct task_struct *tsk) X86_FEATURE_FXSAVE_LEAK); } +static inline void clear_fpu_state(struct task_struct *tsk) +{ + fpu_clear(&tsk->thread.fpu); +} + static inline int fxsave_user(struct i387_fxsave_struct __user *fx) { int err; @@ -147,7 +153,7 @@ static inline int fxsave_user(struct i387_fxsave_struct __user *fx) return err; } -static inline void fxsave(struct task_struct *tsk) +static inline void fpu_fxsave(struct fpu *fpu) { /* Using "rex64; fxsave %0" is broken because, if the memory operand uses any extended registers for addressing, a second REX prefix @@ -157,42 +163,45 @@ static inline void fxsave(struct task_struct *tsk) /* Using "fxsaveq %0" would be the ideal choice, but is only supported starting with gas 2.16. */ __asm__ __volatile__("fxsaveq %0" - : "=m" (tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave)); #elif 0 /* Using, as a workaround, the properly prefixed form below isn't accepted by any binutils version so far released, complaining that the same type of prefix is used twice if an extended register is needed for addressing (fix submitted to mainline 2005-11-21). */ __asm__ __volatile__("rex64/fxsave %0" - : "=m" (tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave)); #else /* This, however, we can work around by forcing the compiler to select an addressing mode that doesn't require extended registers. 
*/ __asm__ __volatile__("rex64/fxsave (%1)" - : "=m" (tsk->thread.xstate->fxsave) - : "cdaSDb" (&tsk->thread.xstate->fxsave)); + : "=m" (fpu->state->fxsave) + : "cdaSDb" (&fpu->state->fxsave)); #endif } -static inline void __save_init_fpu(struct task_struct *tsk) +static inline void fpu_save_init(struct fpu *fpu) { if (use_xsave()) - xsave(tsk); + fpu_xsave(fpu); else - fxsave(tsk); + fpu_fxsave(fpu); - clear_fpu_state(tsk); + fpu_clear(fpu); +} + +static inline void __save_init_fpu(struct task_struct *tsk) +{ + fpu_save_init(&tsk->thread.fpu); task_thread_info(tsk)->status &= ~TS_USEDFPU; } #else /* CONFIG_X86_32 */ #ifdef CONFIG_MATH_EMULATION -extern void finit_task(struct task_struct *tsk); +extern void finit_soft_fpu(struct i387_soft_struct *soft); #else -static inline void finit_task(struct task_struct *tsk) -{ -} +static inline void finit_soft_fpu(struct i387_soft_struct *soft) {} #endif static inline void tolerant_fwait(void) @@ -228,13 +237,13 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx) /* * These must be called with preempt disabled */ -static inline void __save_init_fpu(struct task_struct *tsk) +static inline void fpu_save_init(struct fpu *fpu) { if (use_xsave()) { - struct xsave_struct *xstate = &tsk->thread.xstate->xsave; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct xsave_struct *xstate = &fpu->state->xsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; - xsave(tsk); + fpu_xsave(fpu); /* * xsave header may indicate the init state of the FP. @@ -258,8 +267,8 @@ static inline void __save_init_fpu(struct task_struct *tsk) "fxsave %[fx]\n" "bt $7,%[fsw] ; jnc 1f ; fnclex\n1:", X86_FEATURE_FXSR, - [fx] "m" (tsk->thread.xstate->fxsave), - [fsw] "m" (tsk->thread.xstate->fxsave.swd) : "memory"); + [fx] "m" (fpu->state->fxsave), + [fsw] "m" (fpu->state->fxsave.swd) : "memory"); clear_state: /* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is pending. 
Clear the x87 state here by setting it to fixed @@ -271,17 +280,34 @@ clear_state: X86_FEATURE_FXSAVE_LEAK, [addr] "m" (safe_address)); end: + ; +} + +static inline void __save_init_fpu(struct task_struct *tsk) +{ + fpu_save_init(&tsk->thread.fpu); task_thread_info(tsk)->status &= ~TS_USEDFPU; } + #endif /* CONFIG_X86_64 */ -static inline int restore_fpu_checking(struct task_struct *tsk) +static inline int fpu_fxrstor_checking(struct fpu *fpu) +{ + return fxrstor_checking(&fpu->state->fxsave); +} + +static inline int fpu_restore_checking(struct fpu *fpu) { if (use_xsave()) - return xrstor_checking(&tsk->thread.xstate->xsave); + return fpu_xrstor_checking(fpu); else - return fxrstor_checking(&tsk->thread.xstate->fxsave); + return fpu_fxrstor_checking(fpu); +} + +static inline int restore_fpu_checking(struct task_struct *tsk) +{ + return fpu_restore_checking(&tsk->thread.fpu); } /* @@ -409,30 +435,59 @@ static inline void clear_fpu(struct task_struct *tsk) static inline unsigned short get_fpu_cwd(struct task_struct *tsk) { if (cpu_has_fxsr) { - return tsk->thread.xstate->fxsave.cwd; + return tsk->thread.fpu.state->fxsave.cwd; } else { - return (unsigned short)tsk->thread.xstate->fsave.cwd; + return (unsigned short)tsk->thread.fpu.state->fsave.cwd; } } static inline unsigned short get_fpu_swd(struct task_struct *tsk) { if (cpu_has_fxsr) { - return tsk->thread.xstate->fxsave.swd; + return tsk->thread.fpu.state->fxsave.swd; } else { - return (unsigned short)tsk->thread.xstate->fsave.swd; + return (unsigned short)tsk->thread.fpu.state->fsave.swd; } } static inline unsigned short get_fpu_mxcsr(struct task_struct *tsk) { if (cpu_has_xmm) { - return tsk->thread.xstate->fxsave.mxcsr; + return tsk->thread.fpu.state->fxsave.mxcsr; } else { return MXCSR_DEFAULT; } } +static bool fpu_allocated(struct fpu *fpu) +{ + return fpu->state != NULL; +} + +static inline int fpu_alloc(struct fpu *fpu) +{ + if (fpu_allocated(fpu)) + return 0; + fpu->state = 
kmem_cache_alloc(task_xstate_cachep, GFP_KERNEL); + if (!fpu->state) + return -ENOMEM; + WARN_ON((unsigned long)fpu->state & 15); + return 0; +} + +static inline void fpu_free(struct fpu *fpu) +{ + if (fpu->state) { + kmem_cache_free(task_xstate_cachep, fpu->state); + fpu->state = NULL; + } +} + +static inline void fpu_copy(struct fpu *dst, struct fpu *src) +{ + memcpy(dst->state, src->state, xstate_size); +} + #endif /* __ASSEMBLY__ */ #define PSHUFB_XMM5_XMM0 .byte 0x66, 0x0f, 0x38, 0x00, 0xc5 diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 32428b4..1e248a6 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -380,6 +380,10 @@ union thread_xstate { struct xsave_struct xsave; }; +struct fpu { + union thread_xstate *state; +}; + #ifdef CONFIG_X86_64 DECLARE_PER_CPU(struct orig_ist, orig_ist); @@ -457,7 +461,7 @@ struct thread_struct { unsigned long trap_no; unsigned long error_code; /* floating point and extended processor state */ - union thread_xstate *xstate; + struct fpu fpu; #ifdef CONFIG_X86_32 /* Virtual 86 mode info */ struct vm86_struct __user *vm86_info; diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h index ddc04cc..2c4390c 100644 --- a/arch/x86/include/asm/xsave.h +++ b/arch/x86/include/asm/xsave.h @@ -37,8 +37,9 @@ extern int check_for_xstate(struct i387_fxsave_struct __user *buf, void __user *fpstate, struct _fpx_sw_bytes *sw); -static inline int xrstor_checking(struct xsave_struct *fx) +static inline int fpu_xrstor_checking(struct fpu *fpu) { + struct xsave_struct *fx = &fpu->state->xsave; int err; asm volatile("1: .byte " REX_PREFIX "0x0f,0xae,0x2f\n\t" @@ -110,12 +111,12 @@ static inline void xrstor_state(struct xsave_struct *fx, u64 mask) : "memory"); } -static inline void xsave(struct task_struct *tsk) +static inline void fpu_xsave(struct fpu *fpu) { /* This, however, we can work around by forcing the compiler to select an addressing mode that 
doesn't require extended registers. */ __asm__ __volatile__(".byte " REX_PREFIX "0x0f,0xae,0x27" - : : "D" (&(tsk->thread.xstate->xsave)), + : : "D" (&(fpu->state->xsave)), "a" (-1), "d"(-1) : "memory"); } #endif diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c index 14ca1dc..86cef6b 100644 --- a/arch/x86/kernel/i387.c +++ b/arch/x86/kernel/i387.c @@ -107,57 +107,57 @@ void __cpuinit fpu_init(void) } #endif /* CONFIG_X86_64 */ -/* - * The _current_ task is using the FPU for the first time - * so initialize it and set the mxcsr to its default - * value at reset if we support XMM instructions and then - * remeber the current task has used the FPU. - */ -int init_fpu(struct task_struct *tsk) +static void fpu_finit(struct fpu *fpu) { - if (tsk_used_math(tsk)) { - if (HAVE_HWFP && tsk == current) - unlazy_fpu(tsk); - return 0; - } - - /* - * Memory allocation at the first usage of the FPU and other state. - */ - if (!tsk->thread.xstate) { - tsk->thread.xstate = kmem_cache_alloc(task_xstate_cachep, - GFP_KERNEL); - if (!tsk->thread.xstate) - return -ENOMEM; - } - #ifdef CONFIG_X86_32 if (!HAVE_HWFP) { - memset(tsk->thread.xstate, 0, xstate_size); - finit_task(tsk); - set_stopped_child_used_math(tsk); - return 0; + finit_soft_fpu(&fpu->state->soft); + return; } #endif if (cpu_has_fxsr) { - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fx = &fpu->state->fxsave; memset(fx, 0, xstate_size); fx->cwd = 0x37f; if (cpu_has_xmm) fx->mxcsr = MXCSR_DEFAULT; } else { - struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave; + struct i387_fsave_struct *fp = &fpu->state->fsave; memset(fp, 0, xstate_size); fp->cwd = 0xffff037fu; fp->swd = 0xffff0000u; fp->twd = 0xffffffffu; fp->fos = 0xffff0000u; } +} + +/* + * The _current_ task is using the FPU for the first time + * so initialize it and set the mxcsr to its default + * value at reset if we support XMM instructions and then + * remeber the current task has used the FPU. 
+ */ +int init_fpu(struct task_struct *tsk) +{ + int ret; + + if (tsk_used_math(tsk)) { + if (HAVE_HWFP && tsk == current) + unlazy_fpu(tsk); + return 0; + } + /* - * Only the device not available exception or ptrace can call init_fpu. + * Memory allocation at the first usage of the FPU and other state. */ + ret = fpu_alloc(&tsk->thread.fpu); + if (ret) + return ret; + + fpu_finit(&tsk->thread.fpu); + set_stopped_child_used_math(tsk); return 0; } @@ -191,7 +191,7 @@ int xfpregs_get(struct task_struct *target, const struct user_regset *regset, return ret; return user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fxsave, 0, -1); + &target->thread.fpu.state->fxsave, 0, -1); } int xfpregs_set(struct task_struct *target, const struct user_regset *regset, @@ -208,19 +208,19 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset, return ret; ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fxsave, 0, -1); + &target->thread.fpu.state->fxsave, 0, -1); /* * mxcsr reserved bits must be masked to zero for security reasons. */ - target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; /* * update the header bits in the xsave header, indicating the * presence of FP and SSE state. */ if (cpu_has_xsave) - target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; + target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; return ret; } @@ -243,14 +243,14 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset, * memory layout in the thread struct, so that we can copy the entire * xstateregs to the user using one user_regset_copyout(). */ - memcpy(&target->thread.xstate->fxsave.sw_reserved, + memcpy(&target->thread.fpu.state->fxsave.sw_reserved, xstate_fx_sw_bytes, sizeof(xstate_fx_sw_bytes)); /* * Copy the xstate memory layout. 
*/ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->xsave, 0, -1); + &target->thread.fpu.state->xsave, 0, -1); return ret; } @@ -269,14 +269,14 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset, return ret; ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->xsave, 0, -1); + &target->thread.fpu.state->xsave, 0, -1); /* * mxcsr reserved bits must be masked to zero for security reasons. */ - target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; - xsave_hdr = &target->thread.xstate->xsave.xsave_hdr; + xsave_hdr = &target->thread.fpu.state->xsave.xsave_hdr; xsave_hdr->xstate_bv &= pcntxt_mask; /* @@ -362,7 +362,7 @@ static inline u32 twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave) static void convert_from_fxsr(struct user_i387_ia32_struct *env, struct task_struct *tsk) { - struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave; struct _fpreg *to = (struct _fpreg *) &env->st_space[0]; struct _fpxreg *from = (struct _fpxreg *) &fxsave->st_space[0]; int i; @@ -402,7 +402,7 @@ static void convert_to_fxsr(struct task_struct *tsk, const struct user_i387_ia32_struct *env) { - struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave; struct _fpreg *from = (struct _fpreg *) &env->st_space[0]; struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0]; int i; @@ -442,7 +442,7 @@ int fpregs_get(struct task_struct *target, const struct user_regset *regset, if (!cpu_has_fxsr) { return user_regset_copyout(&pos, &count, &kbuf, &ubuf, - &target->thread.xstate->fsave, 0, + &target->thread.fpu.state->fsave, 0, -1); } @@ -472,7 +472,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, if (!cpu_has_fxsr) { return user_regset_copyin(&pos, 
&count, &kbuf, &ubuf, - &target->thread.xstate->fsave, 0, -1); + &target->thread.fpu.state->fsave, 0, -1); } if (pos > 0 || count < sizeof(env)) @@ -487,7 +487,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, * presence of FP. */ if (cpu_has_xsave) - target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FP; + target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP; return ret; } @@ -498,7 +498,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset, static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave; + struct i387_fsave_struct *fp = &tsk->thread.fpu.state->fsave; fp->status = fp->swd; if (__copy_to_user(buf, fp, sizeof(struct i387_fsave_struct))) @@ -509,7 +509,7 @@ static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf) static int save_i387_fxsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave; + struct i387_fxsave_struct *fx = &tsk->thread.fpu.state->fxsave; struct user_i387_ia32_struct env; int err = 0; @@ -544,7 +544,7 @@ static int save_i387_xsave(void __user *buf) * header as well as change any contents in the memory layout. * xrestore as part of sigreturn will capture all the changes. 
*/ - tsk->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; + tsk->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE; if (save_i387_fxsave(fx) < 0) return -1; @@ -596,7 +596,7 @@ static inline int restore_i387_fsave(struct _fpstate_ia32 __user *buf) { struct task_struct *tsk = current; - return __copy_from_user(&tsk->thread.xstate->fsave, buf, + return __copy_from_user(&tsk->thread.fpu.state->fsave, buf, sizeof(struct i387_fsave_struct)); } @@ -607,10 +607,10 @@ static int restore_i387_fxsave(struct _fpstate_ia32 __user *buf, struct user_i387_ia32_struct env; int err; - err = __copy_from_user(&tsk->thread.xstate->fxsave, &buf->_fxsr_env[0], + err = __copy_from_user(&tsk->thread.fpu.state->fxsave, &buf->_fxsr_env[0], size); /* mxcsr reserved bits must be masked to zero for security reasons */ - tsk->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask; + tsk->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask; if (err || __copy_from_user(&env, buf, sizeof(env))) return 1; convert_to_fxsr(tsk, &env); @@ -626,7 +626,7 @@ static int restore_i387_xsave(void __user *buf) struct i387_fxsave_struct __user *fx = (struct i387_fxsave_struct __user *) &fx_user->_fxsr_env[0]; struct xsave_hdr_struct *xsave_hdr = - ¤t->thread.xstate->xsave.xsave_hdr; + ¤t->thread.fpu.state->xsave.xsave_hdr; u64 mask; int err; diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index eccdb57..8bcc21f 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -31,24 +31,22 @@ struct kmem_cache *task_xstate_cachep; int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) { + int ret; + *dst = *src; - if (src->thread.xstate) { - dst->thread.xstate = kmem_cache_alloc(task_xstate_cachep, - GFP_KERNEL); - if (!dst->thread.xstate) - return -ENOMEM; - WARN_ON((unsigned long)dst->thread.xstate & 15); - memcpy(dst->thread.xstate, src->thread.xstate, xstate_size); + if (fpu_allocated(&src->thread.fpu)) { + memset(&dst->thread.fpu, 0, 
sizeof(dst->thread.fpu)); + ret = fpu_alloc(&dst->thread.fpu); + if (ret) + return ret; + fpu_copy(&dst->thread.fpu, &src->thread.fpu); } return 0; } void free_thread_xstate(struct task_struct *tsk) { - if (tsk->thread.xstate) { - kmem_cache_free(task_xstate_cachep, tsk->thread.xstate); - tsk->thread.xstate = NULL; - } + fpu_free(&tsk->thread.fpu); } void free_thread_info(struct thread_info *ti) diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index 75090c5..8d12878 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -309,7 +309,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) /* we're going to use this soon, after a few expensive things */ if (preload_fpu) - prefetch(next->xstate); + prefetch(next->fpu.state); /* * Reload esp0. diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index cc4258f..758de3b 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -388,7 +388,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) /* we're going to use this soon, after a few expensive things */ if (preload_fpu) - prefetch(next->xstate); + prefetch(next->fpu.state); /* * Reload esp0, LDT and the page table pointer: diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c index c1b0a11..37e68fc 100644 --- a/arch/x86/kernel/xsave.c +++ b/arch/x86/kernel/xsave.c @@ -109,7 +109,7 @@ int save_i387_xstate(void __user *buf) task_thread_info(tsk)->status &= ~TS_USEDFPU; stts(); } else { - if (__copy_to_user(buf, &tsk->thread.xstate->fxsave, + if (__copy_to_user(buf, &tsk->thread.fpu.state->fxsave, xstate_size)) return -1; } diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c index aa09870..62797f9 100644 --- a/arch/x86/math-emu/fpu_aux.c +++ b/arch/x86/math-emu/fpu_aux.c @@ -30,10 +30,10 @@ static void fclex(void) } /* Needs to be externally visible */ -void finit_task(struct task_struct *tsk) +void finit_soft_fpu(struct 
i387_soft_struct *soft) { - struct i387_soft_struct *soft = &tsk->thread.xstate->soft; struct address *oaddr, *iaddr; + memset(soft, 0, sizeof(*soft)); soft->cwd = 0x037f; soft->swd = 0; soft->ftop = 0; /* We don't keep top in the status word internally. */ @@ -52,7 +52,7 @@ void finit_task(struct task_struct *tsk) void finit(void) { - finit_task(current); + finit_task(¤t->thread.fpu); } /* -- 1.7.0.4 ^ permalink raw reply related [flat|nested] 24+ messages in thread
* [tip:x86/fpu] x86: Introduce 'struct fpu' and related API
  2010-05-06  8:45 ` [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API Avi Kivity
@ 2010-05-10 20:39   ` tip-bot for Avi Kivity
  2010-05-10 20:40   ` [tip:x86/fpu] x86, fpu: Unbreak FPU emulation tip-bot for H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for Avi Kivity @ 2010-05-10 20:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  86603283326c9e95e5ad4e9fdddeec93cac5d9ad
Gitweb:     http://git.kernel.org/tip/86603283326c9e95e5ad4e9fdddeec93cac5d9ad
Author:     Avi Kivity <avi@redhat.com>
AuthorDate: Thu, 6 May 2010 11:45:46 +0300
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Mon, 10 May 2010 10:48:55 -0700

x86: Introduce 'struct fpu' and related API

Currently all fpu state access is through tsk->thread.xstate.  Since
we wish to generalize fpu access to non-task contexts, wrap the state
in a new 'struct fpu' and convert existing access to use an fpu API.

Signal frame handlers are not converted to the API since they will
remain task context only things.

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-3-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
---
 arch/x86/include/asm/i387.h      |  115 ++++++++++++++++++++++++++++----------
 arch/x86/include/asm/processor.h |    6 ++-
 arch/x86/include/asm/xsave.h     |    7 +-
 arch/x86/kernel/i387.c           |  102 +++++++++++++++++-----------------
 arch/x86/kernel/process.c        |   21 +++----
 arch/x86/kernel/process_32.c     |    2 +-
 arch/x86/kernel/process_64.c     |    2 +-
 arch/x86/kernel/xsave.c          |    2 +-
 arch/x86/math-emu/fpu_aux.c      |    6 +-
 9 files changed, 160 insertions(+), 103 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index a301a68..1a8cca3 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -16,6 +16,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/regset.h>
 #include <linux/hardirq.h>
+#include <linux/slab.h>
 #include <asm/asm.h>
 #include <asm/processor.h>
 #include <asm/sigcontext.h>
@@ -103,10 +104,10 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
    values. The kernel data segment can be sometimes 0 and sometimes
    new user value. Both should be ok.
    Use the PDA as safe address because it should be already in L1. */
-static inline void clear_fpu_state(struct task_struct *tsk)
+static inline void fpu_clear(struct fpu *fpu)
 {
-	struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
-	struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+	struct xsave_struct *xstate = &fpu->state->xsave;
+	struct i387_fxsave_struct *fx = &fpu->state->fxsave;

 	/*
 	 * xsave header may indicate the init state of the FP.
@@ -123,6 +124,11 @@ static inline void clear_fpu_state(struct task_struct *tsk)
 		   X86_FEATURE_FXSAVE_LEAK);
 }

+static inline void clear_fpu_state(struct task_struct *tsk)
+{
+	fpu_clear(&tsk->thread.fpu);
+}
+
 static inline int fxsave_user(struct i387_fxsave_struct __user *fx)
 {
 	int err;
@@ -147,7 +153,7 @@ static inline int fxsave_user(struct i387_fxsave_struct __user *fx)
 	return err;
 }

-static inline void fxsave(struct task_struct *tsk)
+static inline void fpu_fxsave(struct fpu *fpu)
 {
 	/* Using "rex64; fxsave %0" is broken because, if the memory operand
 	   uses any extended registers for addressing, a second REX prefix
@@ -157,42 +163,45 @@
 	/* Using "fxsaveq %0" would be the ideal choice, but is only supported
 	   starting with gas 2.16. */
 	__asm__ __volatile__("fxsaveq %0"
-			     : "=m" (tsk->thread.xstate->fxsave));
+			     : "=m" (fpu->state->fxsave));
 #elif 0
 	/* Using, as a workaround, the properly prefixed form below isn't
 	   accepted by any binutils version so far released, complaining that
 	   the same type of prefix is used twice if an extended register is
 	   needed for addressing (fix submitted to mainline 2005-11-21). */
 	__asm__ __volatile__("rex64/fxsave %0"
-			     : "=m" (tsk->thread.xstate->fxsave));
+			     : "=m" (fpu->state->fxsave));
 #else
 	/* This, however, we can work around by forcing the compiler to select
 	   an addressing mode that doesn't require extended registers. */
 	__asm__ __volatile__("rex64/fxsave (%1)"
-			     : "=m" (tsk->thread.xstate->fxsave)
-			     : "cdaSDb" (&tsk->thread.xstate->fxsave));
+			     : "=m" (fpu->state->fxsave)
+			     : "cdaSDb" (&fpu->state->fxsave));
 #endif
 }

-static inline void __save_init_fpu(struct task_struct *tsk)
+static inline void fpu_save_init(struct fpu *fpu)
 {
 	if (use_xsave())
-		xsave(tsk);
+		fpu_xsave(fpu);
 	else
-		fxsave(tsk);
+		fpu_fxsave(fpu);

-	clear_fpu_state(tsk);
+	fpu_clear(fpu);
+}
+
+static inline void __save_init_fpu(struct task_struct *tsk)
+{
+	fpu_save_init(&tsk->thread.fpu);
 	task_thread_info(tsk)->status &= ~TS_USEDFPU;
 }

 #else  /* CONFIG_X86_32 */

 #ifdef CONFIG_MATH_EMULATION
-extern void finit_task(struct task_struct *tsk);
+extern void finit_soft_fpu(struct i387_soft_struct *soft);
 #else
-static inline void finit_task(struct task_struct *tsk)
-{
-}
+static inline void finit_soft_fpu(struct i387_soft_struct *soft) {}
 #endif

 static inline void tolerant_fwait(void)
@@ -228,13 +237,13 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
 /*
  * These must be called with preempt disabled
  */
-static inline void __save_init_fpu(struct task_struct *tsk)
+static inline void fpu_save_init(struct fpu *fpu)
 {
 	if (use_xsave()) {
-		struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
-		struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+		struct xsave_struct *xstate = &fpu->state->xsave;
+		struct i387_fxsave_struct *fx = &fpu->state->fxsave;

-		xsave(tsk);
+		fpu_xsave(fpu);

 		/*
 		 * xsave header may indicate the init state of the FP.
@@ -258,8 +267,8 @@ static inline void __save_init_fpu(struct task_struct *tsk)
 		"fxsave %[fx]\n"
 		"bt $7,%[fsw] ; jnc 1f ; fnclex\n1:",
 		X86_FEATURE_FXSR,
-		[fx] "m" (tsk->thread.xstate->fxsave),
-		[fsw] "m" (tsk->thread.xstate->fxsave.swd) : "memory");
+		[fx] "m" (fpu->state->fxsave),
+		[fsw] "m" (fpu->state->fxsave.swd) : "memory");
 clear_state:
 	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
 	   is pending.  Clear the x87 state here by setting it to fixed
@@ -271,17 +280,34 @@ clear_state:
 		X86_FEATURE_FXSAVE_LEAK,
 		[addr] "m" (safe_address));
 end:
+	;
+}
+
+static inline void __save_init_fpu(struct task_struct *tsk)
+{
+	fpu_save_init(&tsk->thread.fpu);
 	task_thread_info(tsk)->status &= ~TS_USEDFPU;
 }
+
 #endif	/* CONFIG_X86_64 */

-static inline int restore_fpu_checking(struct task_struct *tsk)
+static inline int fpu_fxrstor_checking(struct fpu *fpu)
+{
+	return fxrstor_checking(&fpu->state->fxsave);
+}
+
+static inline int fpu_restore_checking(struct fpu *fpu)
 {
 	if (use_xsave())
-		return xrstor_checking(&tsk->thread.xstate->xsave);
+		return fpu_xrstor_checking(fpu);
 	else
-		return fxrstor_checking(&tsk->thread.xstate->fxsave);
+		return fpu_fxrstor_checking(fpu);
+}
+
+static inline int restore_fpu_checking(struct task_struct *tsk)
+{
+	return fpu_restore_checking(&tsk->thread.fpu);
 }

 /*
@@ -409,30 +435,59 @@ static inline void clear_fpu(struct task_struct *tsk)
 static inline unsigned short get_fpu_cwd(struct task_struct *tsk)
 {
 	if (cpu_has_fxsr) {
-		return tsk->thread.xstate->fxsave.cwd;
+		return tsk->thread.fpu.state->fxsave.cwd;
 	} else {
-		return (unsigned short)tsk->thread.xstate->fsave.cwd;
+		return (unsigned short)tsk->thread.fpu.state->fsave.cwd;
 	}
 }

 static inline unsigned short get_fpu_swd(struct task_struct *tsk)
 {
 	if (cpu_has_fxsr) {
-		return tsk->thread.xstate->fxsave.swd;
+		return tsk->thread.fpu.state->fxsave.swd;
 	} else {
-		return (unsigned short)tsk->thread.xstate->fsave.swd;
+		return (unsigned short)tsk->thread.fpu.state->fsave.swd;
 	}
 }

 static inline unsigned short get_fpu_mxcsr(struct task_struct *tsk)
 {
 	if (cpu_has_xmm) {
-		return tsk->thread.xstate->fxsave.mxcsr;
+		return tsk->thread.fpu.state->fxsave.mxcsr;
 	} else {
 		return MXCSR_DEFAULT;
 	}
 }

+static bool fpu_allocated(struct fpu *fpu)
+{
+	return fpu->state != NULL;
+}
+
+static inline int fpu_alloc(struct fpu *fpu)
+{
+	if (fpu_allocated(fpu))
+		return 0;
+	fpu->state = kmem_cache_alloc(task_xstate_cachep, GFP_KERNEL);
+	if (!fpu->state)
+		return -ENOMEM;
+	WARN_ON((unsigned long)fpu->state & 15);
+	return 0;
+}
+
+static inline void fpu_free(struct fpu *fpu)
+{
+	if (fpu->state) {
+		kmem_cache_free(task_xstate_cachep, fpu->state);
+		fpu->state = NULL;
+	}
+}
+
+static inline void fpu_copy(struct fpu *dst, struct fpu *src)
+{
+	memcpy(dst->state, src->state, xstate_size);
+}
+
 #endif	/* __ASSEMBLY__ */

 #define PSHUFB_XMM5_XMM0 .byte 0x66, 0x0f, 0x38, 0x00, 0xc5
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b753ea5..b684f58 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -380,6 +380,10 @@ union thread_xstate {
 	struct xsave_struct xsave;
 };

+struct fpu {
+	union thread_xstate *state;
+};
+
 #ifdef CONFIG_X86_64
 DECLARE_PER_CPU(struct orig_ist, orig_ist);
@@ -457,7 +461,7 @@ struct thread_struct {
 	unsigned long		trap_no;
 	unsigned long		error_code;
 	/* floating point and extended processor state */
-	union thread_xstate	*xstate;
+	struct fpu		fpu;
 #ifdef CONFIG_X86_32
 	/* Virtual 86 mode info */
 	struct vm86_struct __user *vm86_info;
diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h
index ddc04cc..2c4390c 100644
--- a/arch/x86/include/asm/xsave.h
+++ b/arch/x86/include/asm/xsave.h
@@ -37,8 +37,9 @@ extern int check_for_xstate(struct i387_fxsave_struct __user *buf,
 			    void __user *fpstate,
 			    struct _fpx_sw_bytes *sw);

-static inline int xrstor_checking(struct xsave_struct *fx)
+static inline int fpu_xrstor_checking(struct fpu *fpu)
 {
+	struct xsave_struct *fx = &fpu->state->xsave;
 	int err;

 	asm volatile("1: .byte " REX_PREFIX "0x0f,0xae,0x2f\n\t"
@@ -110,12 +111,12 @@ static inline void xrstor_state(struct xsave_struct *fx, u64 mask)
 		     :   "memory");
 }

-static inline void xsave(struct task_struct *tsk)
+static inline void fpu_xsave(struct fpu *fpu)
 {
 	/* This, however, we can work around by forcing the compiler to select
 	   an addressing mode that doesn't require extended registers. */
 	__asm__ __volatile__(".byte " REX_PREFIX "0x0f,0xae,0x27"
-			     : : "D" (&(tsk->thread.xstate->xsave)),
+			     : : "D" (&(fpu->state->xsave)),
 				 "a" (-1), "d"(-1) :  "memory");
 }
 #endif
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 14ca1dc..86cef6b 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -107,57 +107,57 @@ void __cpuinit fpu_init(void)
 }
 #endif	/* CONFIG_X86_64 */

-/*
- * The _current_ task is using the FPU for the first time
- * so initialize it and set the mxcsr to its default
- * value at reset if we support XMM instructions and then
- * remeber the current task has used the FPU.
- */
-int init_fpu(struct task_struct *tsk)
+static void fpu_finit(struct fpu *fpu)
 {
-	if (tsk_used_math(tsk)) {
-		if (HAVE_HWFP && tsk == current)
-			unlazy_fpu(tsk);
-		return 0;
-	}
-
-	/*
-	 * Memory allocation at the first usage of the FPU and other state.
-	 */
-	if (!tsk->thread.xstate) {
-		tsk->thread.xstate = kmem_cache_alloc(task_xstate_cachep,
-						      GFP_KERNEL);
-		if (!tsk->thread.xstate)
-			return -ENOMEM;
-	}
-
 #ifdef CONFIG_X86_32
 	if (!HAVE_HWFP) {
-		memset(tsk->thread.xstate, 0, xstate_size);
-		finit_task(tsk);
-		set_stopped_child_used_math(tsk);
-		return 0;
+		finit_soft_fpu(&fpu->state->soft);
+		return;
 	}
 #endif

 	if (cpu_has_fxsr) {
-		struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+		struct i387_fxsave_struct *fx = &fpu->state->fxsave;

 		memset(fx, 0, xstate_size);
 		fx->cwd = 0x37f;
 		if (cpu_has_xmm)
 			fx->mxcsr = MXCSR_DEFAULT;
 	} else {
-		struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave;
+		struct i387_fsave_struct *fp = &fpu->state->fsave;
 		memset(fp, 0, xstate_size);
 		fp->cwd = 0xffff037fu;
 		fp->swd = 0xffff0000u;
 		fp->twd = 0xffffffffu;
 		fp->fos = 0xffff0000u;
 	}
+}
+
+/*
+ * The _current_ task is using the FPU for the first time
+ * so initialize it and set the mxcsr to its default
+ * value at reset if we support XMM instructions and then
+ * remeber the current task has used the FPU.
+ */
+int init_fpu(struct task_struct *tsk)
+{
+	int ret;
+
+	if (tsk_used_math(tsk)) {
+		if (HAVE_HWFP && tsk == current)
+			unlazy_fpu(tsk);
+		return 0;
+	}
+
 	/*
-	 * Only the device not available exception or ptrace can call init_fpu.
+	 * Memory allocation at the first usage of the FPU and other state.
 	 */
+	ret = fpu_alloc(&tsk->thread.fpu);
+	if (ret)
+		return ret;
+
+	fpu_finit(&tsk->thread.fpu);
+
 	set_stopped_child_used_math(tsk);
 	return 0;
 }
@@ -191,7 +191,7 @@ int xfpregs_get(struct task_struct *target, const struct user_regset *regset,
 		return ret;

 	return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
-				   &target->thread.xstate->fxsave, 0, -1);
+				   &target->thread.fpu.state->fxsave, 0, -1);
 }

 int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
@@ -208,19 +208,19 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
 		return ret;

 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-				 &target->thread.xstate->fxsave, 0, -1);
+				 &target->thread.fpu.state->fxsave, 0, -1);

 	/*
 	 * mxcsr reserved bits must be masked to zero for security reasons.
 	 */
-	target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask;
+	target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask;

 	/*
 	 * update the header bits in the xsave header, indicating the
 	 * presence of FP and SSE state.
 	 */
 	if (cpu_has_xsave)
-		target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE;
+		target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE;

 	return ret;
 }
@@ -243,14 +243,14 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset,
 	 * memory layout in the thread struct, so that we can copy the entire
 	 * xstateregs to the user using one user_regset_copyout().
 	 */
-	memcpy(&target->thread.xstate->fxsave.sw_reserved,
+	memcpy(&target->thread.fpu.state->fxsave.sw_reserved,
 	       xstate_fx_sw_bytes, sizeof(xstate_fx_sw_bytes));

 	/*
 	 * Copy the xstate memory layout.
 	 */
 	ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
-				  &target->thread.xstate->xsave, 0, -1);
+				  &target->thread.fpu.state->xsave, 0, -1);
 	return ret;
 }
@@ -269,14 +269,14 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
 		return ret;

 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-				 &target->thread.xstate->xsave, 0, -1);
+				 &target->thread.fpu.state->xsave, 0, -1);

 	/*
 	 * mxcsr reserved bits must be masked to zero for security reasons.
 	 */
-	target->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask;
+	target->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask;

-	xsave_hdr = &target->thread.xstate->xsave.xsave_hdr;
+	xsave_hdr = &target->thread.fpu.state->xsave.xsave_hdr;

 	xsave_hdr->xstate_bv &= pcntxt_mask;
 	/*
@@ -362,7 +362,7 @@ static inline u32 twd_fxsr_to_i387(struct i387_fxsave_struct *fxsave)
 static void
 convert_from_fxsr(struct user_i387_ia32_struct *env, struct task_struct *tsk)
 {
-	struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave;
+	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave;
 	struct _fpreg *to = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *from = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@@ -402,7 +402,7 @@ static void convert_to_fxsr(struct task_struct *tsk,
 			    const struct user_i387_ia32_struct *env)
 {
-	struct i387_fxsave_struct *fxsave = &tsk->thread.xstate->fxsave;
+	struct i387_fxsave_struct *fxsave = &tsk->thread.fpu.state->fxsave;
 	struct _fpreg *from = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@@ -442,7 +442,7 @@ int fpregs_get(struct task_struct *target, const struct user_regset *regset,

 	if (!cpu_has_fxsr) {
 		return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
-					   &target->thread.xstate->fsave, 0,
+					   &target->thread.fpu.state->fsave, 0,
 					   -1);
 	}

@@ -472,7 +472,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,

 	if (!cpu_has_fxsr) {
 		return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-					  &target->thread.xstate->fsave, 0, -1);
+					  &target->thread.fpu.state->fsave, 0, -1);
 	}

 	if (pos > 0 || count < sizeof(env))
@@ -487,7 +487,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
 	 * presence of FP.
 	 */
 	if (cpu_has_xsave)
-		target->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
+		target->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
 	return ret;
 }

@@ -498,7 +498,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
 static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf)
 {
 	struct task_struct *tsk = current;
-	struct i387_fsave_struct *fp = &tsk->thread.xstate->fsave;
+	struct i387_fsave_struct *fp = &tsk->thread.fpu.state->fsave;

 	fp->status = fp->swd;
 	if (__copy_to_user(buf, fp, sizeof(struct i387_fsave_struct)))
@@ -509,7 +509,7 @@ static inline int save_i387_fsave(struct _fpstate_ia32 __user *buf)
 static int save_i387_fxsave(struct _fpstate_ia32 __user *buf)
 {
 	struct task_struct *tsk = current;
-	struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+	struct i387_fxsave_struct *fx = &tsk->thread.fpu.state->fxsave;
 	struct user_i387_ia32_struct env;
 	int err = 0;

@@ -544,7 +544,7 @@ static int save_i387_xsave(void __user *buf)
 	 * header as well as change any contents in the memory layout.
 	 * xrestore as part of sigreturn will capture all the changes.
 	 */
-	tsk->thread.xstate->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE;
+	tsk->thread.fpu.state->xsave.xsave_hdr.xstate_bv |= XSTATE_FPSSE;

 	if (save_i387_fxsave(fx) < 0)
 		return -1;
@@ -596,7 +596,7 @@ static inline int restore_i387_fsave(struct _fpstate_ia32 __user *buf)
 {
 	struct task_struct *tsk = current;

-	return __copy_from_user(&tsk->thread.xstate->fsave, buf,
+	return __copy_from_user(&tsk->thread.fpu.state->fsave, buf,
 				sizeof(struct i387_fsave_struct));
 }

@@ -607,10 +607,10 @@ static int restore_i387_fxsave(struct _fpstate_ia32 __user *buf,
 	struct user_i387_ia32_struct env;
 	int err;

-	err = __copy_from_user(&tsk->thread.xstate->fxsave, &buf->_fxsr_env[0],
+	err = __copy_from_user(&tsk->thread.fpu.state->fxsave, &buf->_fxsr_env[0],
 			       size);
 	/* mxcsr reserved bits must be masked to zero for security reasons */
-	tsk->thread.xstate->fxsave.mxcsr &= mxcsr_feature_mask;
+	tsk->thread.fpu.state->fxsave.mxcsr &= mxcsr_feature_mask;
 	if (err || __copy_from_user(&env, buf, sizeof(env)))
 		return 1;
 	convert_to_fxsr(tsk, &env);
@@ -626,7 +626,7 @@ static int restore_i387_xsave(void __user *buf)
 	struct i387_fxsave_struct __user *fx =
 		(struct i387_fxsave_struct __user *) &fx_user->_fxsr_env[0];
 	struct xsave_hdr_struct *xsave_hdr =
-				&current->thread.xstate->xsave.xsave_hdr;
+				&current->thread.fpu.state->xsave.xsave_hdr;
 	u64 mask;
 	int err;

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 28ad9f4..f18fd9c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -32,25 +32,22 @@ struct kmem_cache *task_xstate_cachep;

 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 {
+	int ret;
+
 	*dst = *src;
-	if (src->thread.xstate) {
-		dst->thread.xstate = kmem_cache_alloc(task_xstate_cachep,
-						      GFP_KERNEL);
-		if (!dst->thread.xstate)
-			return -ENOMEM;
-		WARN_ON((unsigned long)dst->thread.xstate & 15);
-		memcpy(dst->thread.xstate, src->thread.xstate, xstate_size);
+	if (fpu_allocated(&src->thread.fpu)) {
+		memset(&dst->thread.fpu, 0, sizeof(dst->thread.fpu));
+		ret = fpu_alloc(&dst->thread.fpu);
+		if (ret)
+			return ret;
+		fpu_copy(&dst->thread.fpu, &src->thread.fpu);
 	}
 	return 0;
 }

 void free_thread_xstate(struct task_struct *tsk)
 {
-	if (tsk->thread.xstate) {
-		kmem_cache_free(task_xstate_cachep, tsk->thread.xstate);
-		tsk->thread.xstate = NULL;
-	}
-
+	fpu_free(&tsk->thread.fpu);
 	WARN(tsk->thread.ds_ctx, "leaking DS context\n");
 }

diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index f6c6266..0a7a4f5 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -317,7 +317,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)

 	/* we're going to use this soon, after a few expensive things */
 	if (preload_fpu)
-		prefetch(next->xstate);
+		prefetch(next->fpu.state);

 	/*
 	 * Reload esp0.
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 17cb329..979215f 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -396,7 +396,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)

 	/* we're going to use this soon, after a few expensive things */
 	if (preload_fpu)
-		prefetch(next->xstate);
+		prefetch(next->fpu.state);

 	/*
 	 * Reload esp0, LDT and the page table pointer:
diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
index c1b0a11..37e68fc 100644
--- a/arch/x86/kernel/xsave.c
+++ b/arch/x86/kernel/xsave.c
@@ -109,7 +109,7 @@ int save_i387_xstate(void __user *buf)
 		task_thread_info(tsk)->status &= ~TS_USEDFPU;
 		stts();
 	} else {
-		if (__copy_to_user(buf, &tsk->thread.xstate->fxsave,
+		if (__copy_to_user(buf, &tsk->thread.fpu.state->fxsave,
 				   xstate_size))
 			return -1;
 	}
diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c
index aa09870..62797f9 100644
--- a/arch/x86/math-emu/fpu_aux.c
+++ b/arch/x86/math-emu/fpu_aux.c
@@ -30,10 +30,10 @@ static void fclex(void)
 }

 /* Needs to be externally visible */
-void finit_task(struct task_struct *tsk)
+void finit_soft_fpu(struct i387_soft_struct *soft)
 {
-	struct i387_soft_struct *soft = &tsk->thread.xstate->soft;
 	struct address *oaddr, *iaddr;
+	memset(soft, 0, sizeof(*soft));
 	soft->cwd = 0x037f;
 	soft->swd = 0;
 	soft->ftop = 0;	/* We don't keep top in the status word internally. */
@@ -52,7 +52,7 @@ void finit_task(struct task_struct *tsk)

 void finit(void)
 {
-	finit_task(current);
+	finit_task(&current->thread.fpu);
 }

 /*

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* [tip:x86/fpu] x86, fpu: Unbreak FPU emulation
  2010-05-06  8:45 ` [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API Avi Kivity
  2010-05-10 20:39   ` [tip:x86/fpu] " tip-bot for Avi Kivity
@ 2010-05-10 20:40   ` tip-bot for H. Peter Anvin
  1 sibling, 0 replies; 24+ messages in thread
From: tip-bot for H. Peter Anvin @ 2010-05-10 20:40 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, suresh.b.siddha, tglx, avi

Commit-ID:  c3f8978ea332cd4be88e12574452a025892ac9af
Gitweb:     http://git.kernel.org/tip/c3f8978ea332cd4be88e12574452a025892ac9af
Author:     H. Peter Anvin <hpa@zytor.com>
AuthorDate: Mon, 10 May 2010 13:37:16 -0700
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Mon, 10 May 2010 13:37:16 -0700

x86, fpu: Unbreak FPU emulation

Unbreak FPU emulation, broken by checkin
86603283326c9e95e5ad4e9fdddeec93cac5d9ad:

x86: Introduce 'struct fpu' and related API

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-3-git-send-email-avi@redhat.com>
---
 arch/x86/math-emu/fpu_aux.c    |    2 +-
 arch/x86/math-emu/fpu_entry.c  |    4 ++--
 arch/x86/math-emu/fpu_system.h |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/math-emu/fpu_aux.c b/arch/x86/math-emu/fpu_aux.c
index 62797f9..dc8adad 100644
--- a/arch/x86/math-emu/fpu_aux.c
+++ b/arch/x86/math-emu/fpu_aux.c
@@ -52,7 +52,7 @@ void finit_soft_fpu(struct i387_soft_struct *soft)

 void finit(void)
 {
-	finit_task(&current->thread.fpu);
+	finit_soft_fpu(&current->thread.fpu.state->soft);
 }

 /*
diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c
index 5d87f58..7718541 100644
--- a/arch/x86/math-emu/fpu_entry.c
+++ b/arch/x86/math-emu/fpu_entry.c
@@ -681,7 +681,7 @@ int fpregs_soft_set(struct task_struct *target,
 		    unsigned int pos, unsigned int count,
 		    const void *kbuf, const void __user *ubuf)
 {
-	struct i387_soft_struct *s387 = &target->thread.xstate->soft;
+	struct i387_soft_struct *s387 = &target->thread.fpu.state->soft;
 	void *space = s387->st_space;
 	int ret;
 	int offset, other, i, tags, regnr, tag, newtop;
@@ -733,7 +733,7 @@ int fpregs_soft_get(struct task_struct *target,
 		    unsigned int pos, unsigned int count,
 		    void *kbuf, void __user *ubuf)
 {
-	struct i387_soft_struct *s387 = &target->thread.xstate->soft;
+	struct i387_soft_struct *s387 = &target->thread.fpu.state->soft;
 	const void *space = s387->st_space;
 	int ret;
 	int offset = (S387->ftop & 7) * 10, other = 80 - offset;
diff --git a/arch/x86/math-emu/fpu_system.h b/arch/x86/math-emu/fpu_system.h
index 50fa0ec..2c61441 100644
--- a/arch/x86/math-emu/fpu_system.h
+++ b/arch/x86/math-emu/fpu_system.h
@@ -31,7 +31,7 @@
 #define SEG_EXPAND_DOWN(s)	(((s).b & ((1 << 11) | (1 << 10))) \
 				 == (1 << 10))

-#define I387			(current->thread.xstate)
+#define I387			(current->thread.fpu.state)
 #define FPU_info		(I387->soft.info)

 #define FPU_CS			(*(unsigned short *) &(FPU_info->regs->cs))

^ permalink raw reply related	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 0/2] x86 FPU API
  2010-05-06  8:45 [PATCH v3 0/2] x86 FPU API Avi Kivity
  2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
  2010-05-06  8:45 ` [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API Avi Kivity
@ 2010-05-10  8:48 ` Avi Kivity
  2010-05-10 15:24   ` H. Peter Anvin
  2 siblings, 1 reply; 24+ messages in thread
From: Avi Kivity @ 2010-05-10 8:48 UTC (permalink / raw)
  To: H. Peter Anvin, Ingo Molnar
  Cc: kvm, linux-kernel, Brian Gerst, Dexuan Cui, Sheng Yang, Suresh Siddha

On 05/06/2010 11:45 AM, Avi Kivity wrote:
> Currently all fpu accessors are wedded to task_struct.  However kvm also uses
> the fpu in a different context.  Introduce an FPU API, and replace the
> current uses with the new API.
>
> While this patchset is oriented towards deeper changes, as a first step it
> simlifies xsave for kvm.
>

Peter/Ingo, what are the plans for merging it?  The kvm xsave work
depends on this.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 24+ messages in thread
* Re: [PATCH v3 0/2] x86 FPU API
  2010-05-10  8:48 ` [PATCH v3 0/2] x86 FPU API Avi Kivity
@ 2010-05-10 15:24   ` H. Peter Anvin
  0 siblings, 0 replies; 24+ messages in thread
From: H. Peter Anvin @ 2010-05-10 15:24 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Ingo Molnar, kvm, linux-kernel, Brian Gerst, Dexuan Cui,
	Sheng Yang, Suresh Siddha

On 05/10/2010 01:48 AM, Avi Kivity wrote:
> On 05/06/2010 11:45 AM, Avi Kivity wrote:
>> Currently all fpu accessors are wedded to task_struct.  However kvm
>> also uses
>> the fpu in a different context.  Introduce an FPU API, and replace the
>> current uses with the new API.
>>
>> While this patchset is oriented towards deeper changes, as a first
>> step it
>> simlifies xsave for kvm.
>>
>
> Peter/Ingo, what are the plans for merging it?  The kvm xsave work
> depends on this.
>

Going to look at it today.  Looks good, but I want to go over it in
detail to catch any gotchas.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.

^ permalink raw reply	[flat|nested] 24+ messages in thread
end of thread, other threads:[~2010-05-27 20:13 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz  follow: Atom feed
-- links below jump to the message on this page --
2010-05-06  8:45 [PATCH v3 0/2] x86 FPU API Avi Kivity
2010-05-06  8:45 ` [PATCH v3 1/2] x86: eliminate TS_XSAVE Avi Kivity
2010-05-10 20:39   ` [tip:x86/fpu] x86: Eliminate TS_XSAVE tip-bot for Avi Kivity
2010-05-12  0:18   ` [tip:x86/fpu] x86, fpu: Use the proper asm constraint in use_xsave() tip-bot for H. Peter Anvin
2010-05-12  1:06   ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives tip-bot for H. Peter Anvin
2010-05-18 20:10     ` Eric Dumazet
2010-05-18 20:43       ` H. Peter Anvin
2010-05-18 20:57       ` H. Peter Anvin
2010-05-18 21:11         ` Eric Dumazet
2010-05-18 21:31           ` H. Peter Anvin
2010-05-18 21:38             ` Does anyone care about gcc 3.x support for x86 anymore? H. Peter Anvin
2010-05-19 23:10               ` Mauro Carvalho Chehab
2010-05-20  0:39                 ` H. Peter Anvin
2010-05-20  0:42                 ` H. Peter Anvin
2010-05-20 12:44                 ` Ingo Molnar
2010-05-18 20:58       ` [tip:x86/fpu] x86: Add new static_cpu_has() function using alternatives H. Peter Anvin
2010-05-18 21:31         ` Eric Dumazet
2010-05-27 20:12       ` [tip:x86/urgent] x86, cpufeature: Unbreak compile with gcc 3.x tip-bot for H. Peter Anvin
2010-05-12  1:06   ` [tip:x86/fpu] x86, fpu: Use static_cpu_has() to implement use_xsave() tip-bot for H. Peter Anvin
2010-05-06  8:45 ` [PATCH v3 2/2] x86: Introduce 'struct fpu' and related API Avi Kivity
2010-05-10 20:39   ` [tip:x86/fpu] " tip-bot for Avi Kivity
2010-05-10 20:40   ` [tip:x86/fpu] x86, fpu: Unbreak FPU emulation tip-bot for H. Peter Anvin
2010-05-10  8:48 ` [PATCH v3 0/2] x86 FPU API Avi Kivity
2010-05-10 15:24   ` H. Peter Anvin