From: Greg KH <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: torvalds@linux-foundation.org, akpm@linux-foundation.org,
	alan@lxorguk.ukuu.org.uk
Subject: [ 55/73] i387: re-introduce FPU state preloading at context switch time
Date: Mon, 27 Feb 2012 17:02:58 -0800
Message-ID: <20120228010213.357750296@linuxfoundation.org>
In-Reply-To: <20120228010246.GA24299@kroah.com>

3.0-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Linus Torvalds <torvalds@linux-foundation.org>

commit 34ddc81a230b15c0e345b6b253049db731499f7e upstream.

After all the FPU state cleanups and finally finding the problem that
caused all our FPU save/restore problems, this re-introduces the
preloading of FPU state that was removed in commit b3b0870ef3ff ("i387:
do not preload FPU state at task switch time").

However, instead of simply reverting the removal, this reimplements
preloading with several fixes, most notably:

 - properly abstracted as a true FPU state switch, rather than as
   open-coded save and restore with various hacks.

   In particular, implementing it as a proper FPU state switch allows us
   to optimize the CR0.TS flag accesses: there is no reason to set the
   TS bit only to then almost immediately clear it again.  CR0 accesses
   are quite slow and expensive, so don't flip the bit back and forth
   for no good reason.

 - Make sure that the same model works for both x86-32 and x86-64, so
   that there are no gratuitous differences between the two arising from
   the way they save and restore segment state differently, an
   architectural difference that really doesn't matter to the FPU state.

 - Avoid exposing the "preload" state to the context switch routines,
   and in particular allow the concept of lazy state restore: if nothing
   else has used the FPU in the meantime, and the process is still on
   the same CPU, we can avoid restoring state from memory entirely, just
   re-expose the state that is still in the FPU unit.

   That optimized lazy restore isn't actually implemented here, but the
   infrastructure is set up for it.  Of course, older CPUs that use
   'fnsave' to save the state cannot take advantage of this, since the
   state saving also trashes the state.

In other words, there is now an actual _design_ to the FPU state saving,
rather than just random historical baggage.  Hopefully it's easier to
follow as a result.
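
To make the two-stage design concrete, the caller pattern in __switch_to()
ends up looking roughly like this (a simplified sketch of the process_32.c
and process_64.c hunks below, not a verbatim excerpt):

	fpu_switch_t fpu;

	/* In the outgoing task's context: save its FPU state and leave
	   CR0.TS in the right state for the incoming task. */
	fpu = switch_fpu_prepare(prev_p, next_p);

	/* ... reload esp0, segments, TLS, and so on ... */

	/* In the incoming task's context: restore the register state,
	   but only if we decided to preload it. */
	switch_fpu_finish(next_p, fpu);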

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/include/asm/i387.h  |  110 ++++++++++++++++++++++++++++++++++++-------
 arch/x86/kernel/process_32.c |    5 +
 arch/x86/kernel/process_64.c |    5 +
 arch/x86/kernel/traps.c      |   55 ++++++++++++---------
 4 files changed, 133 insertions(+), 42 deletions(-)

--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -29,6 +29,7 @@ extern unsigned int sig_xstate_size;
 extern void fpu_init(void);
 extern void mxcsr_feature_mask_init(void);
 extern int init_fpu(struct task_struct *child);
+extern void __math_state_restore(struct task_struct *);
 extern void math_state_restore(void);
 extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
 
@@ -212,9 +213,10 @@ static inline void fpu_fxsave(struct fpu
 #endif	/* CONFIG_X86_64 */
 
 /*
- * These must be called with preempt disabled
+ * These must be called with preempt disabled. Returns
+ * 'true' if the FPU state is still intact.
  */
-static inline void fpu_save_init(struct fpu *fpu)
+static inline int fpu_save_init(struct fpu *fpu)
 {
 	if (use_xsave()) {
 		fpu_xsave(fpu);
@@ -223,22 +225,33 @@ static inline void fpu_save_init(struct
 		 * xsave header may indicate the init state of the FP.
 		 */
 		if (!(fpu->state->xsave.xsave_hdr.xstate_bv & XSTATE_FP))
-			return;
+			return 1;
 	} else if (use_fxsr()) {
 		fpu_fxsave(fpu);
 	} else {
 		asm volatile("fnsave %[fx]; fwait"
 			     : [fx] "=m" (fpu->state->fsave));
-		return;
+		return 0;
 	}
 
-	if (unlikely(fpu->state->fxsave.swd & X87_FSW_ES))
+	/*
+	 * If exceptions are pending, we need to clear them so
+	 * that we don't randomly get exceptions later.
+	 *
+	 * FIXME! Is this perhaps only true for the old-style
+	 * irq13 case? Maybe we could leave the x87 state
+	 * intact otherwise?
+	 */
+	if (unlikely(fpu->state->fxsave.swd & X87_FSW_ES)) {
 		asm volatile("fnclex");
+		return 0;
+	}
+	return 1;
 }
 
-static inline void __save_init_fpu(struct task_struct *tsk)
+static inline int __save_init_fpu(struct task_struct *tsk)
 {
-	fpu_save_init(&tsk->thread.fpu);
+	return fpu_save_init(&tsk->thread.fpu);
 }
 
 static inline int fpu_fxrstor_checking(struct fpu *fpu)
@@ -301,20 +314,79 @@ static inline void __thread_fpu_begin(st
 }
 
 /*
- * Signal frame handlers...
+ * FPU state switching for scheduling.
+ *
+ * This is a two-stage process:
+ *
+ *  - switch_fpu_prepare() saves the old state and
+ *    sets the new state of the CR0.TS bit. This is
+ *    done within the context of the old process.
+ *
+ *  - switch_fpu_finish() restores the new state as
+ *    necessary.
  */
-extern int save_i387_xstate(void __user *buf);
-extern int restore_i387_xstate(void __user *buf);
+typedef struct { int preload; } fpu_switch_t;
+
+/*
+ * FIXME! We could do a totally lazy restore, but we need to
+ * add a per-cpu "this was the task that last touched the FPU
+ * on this CPU" variable, and the task needs to have a "I last
+ * touched the FPU on this CPU" and check them.
+ *
+ * We don't do that yet, so "fpu_lazy_restore()" always returns
+ * false, but some day..
+ */
+#define fpu_lazy_restore(tsk) (0)
+#define fpu_lazy_state_intact(tsk) do { } while (0)
 
-static inline void __unlazy_fpu(struct task_struct *tsk)
+static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct task_struct *new)
 {
-	if (__thread_has_fpu(tsk)) {
-		__save_init_fpu(tsk);
-		__thread_fpu_end(tsk);
-	} else
-		tsk->fpu_counter = 0;
+	fpu_switch_t fpu;
+
+	fpu.preload = tsk_used_math(new) && new->fpu_counter > 5;
+	if (__thread_has_fpu(old)) {
+		if (__save_init_fpu(old))
+			fpu_lazy_state_intact(old);
+		__thread_clear_has_fpu(old);
+		old->fpu_counter++;
+
+		/* Don't change CR0.TS if we just switch! */
+		if (fpu.preload) {
+			__thread_set_has_fpu(new);
+			prefetch(new->thread.fpu.state);
+		} else
+			stts();
+	} else {
+		old->fpu_counter = 0;
+		if (fpu.preload) {
+			if (fpu_lazy_restore(new))
+				fpu.preload = 0;
+			else
+				prefetch(new->thread.fpu.state);
+			__thread_fpu_begin(new);
+		}
+	}
+	return fpu;
+}
+
+/*
+ * By the time this gets called, we've already cleared CR0.TS and
+ * given the process the FPU if we are going to preload the FPU
+ * state - all we need to do is to conditionally restore the register
+ * state itself.
+ */
+static inline void switch_fpu_finish(struct task_struct *new, fpu_switch_t fpu)
+{
+	if (fpu.preload)
+		__math_state_restore(new);
 }
 
+/*
+ * Signal frame handlers...
+ */
+extern int save_i387_xstate(void __user *buf);
+extern int restore_i387_xstate(void __user *buf);
+
 static inline void __clear_fpu(struct task_struct *tsk)
 {
 	if (__thread_has_fpu(tsk)) {
@@ -474,7 +546,11 @@ static inline void save_init_fpu(struct
 static inline void unlazy_fpu(struct task_struct *tsk)
 {
 	preempt_disable();
-	__unlazy_fpu(tsk);
+	if (__thread_has_fpu(tsk)) {
+		__save_init_fpu(tsk);
+		__thread_fpu_end(tsk);
+	} else
+		tsk->fpu_counter = 0;
 	preempt_enable();
 }
 
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -293,10 +293,11 @@ __switch_to(struct task_struct *prev_p,
 				 *next = &next_p->thread;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	fpu_switch_t fpu;
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
-	__unlazy_fpu(prev_p);
+	fpu = switch_fpu_prepare(prev_p, next_p);
 
 	/*
 	 * Reload esp0.
@@ -351,6 +352,8 @@ __switch_to(struct task_struct *prev_p,
 	if (prev->gs | next->gs)
 		lazy_load_gs(next->gs);
 
+	switch_fpu_finish(next_p, fpu);
+
 	percpu_write(current_task, next_p);
 
 	return prev_p;
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -377,8 +377,9 @@ __switch_to(struct task_struct *prev_p,
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
 	unsigned fsindex, gsindex;
+	fpu_switch_t fpu;
 
-	__unlazy_fpu(prev_p);
+	fpu = switch_fpu_prepare(prev_p, next_p);
 
 	/*
 	 * Reload esp0, LDT and the page table pointer:
@@ -448,6 +449,8 @@ __switch_to(struct task_struct *prev_p,
 		wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
 	prev->gsindex = gsindex;
 
+	switch_fpu_finish(next_p, fpu);
+
 	/*
 	 * Switch the PDA and FPU contexts.
 	 */
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -717,6 +717,37 @@ asmlinkage void __attribute__((weak)) sm
 }
 
 /*
+ * This gets called with the process already owning the
+ * FPU state, and with CR0.TS cleared. It just needs to
+ * restore the FPU register state.
+ */
+void __math_state_restore(struct task_struct *tsk)
+{
+	/* We need a safe address that is cheap to find and that is already
+	   in L1. We've just brought in "tsk->thread.has_fpu", so use that */
+#define safe_address (tsk->thread.has_fpu)
+
+	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+	   is pending.  Clear the x87 state here by setting it to fixed
+	   values. safe_address is a random variable that should be in L1 */
+	alternative_input(
+		ASM_NOP8 ASM_NOP2,
+		"emms\n\t"	  	/* clear stack tags */
+		"fildl %P[addr]",	/* set F?P to defined value */
+		X86_FEATURE_FXSAVE_LEAK,
+		[addr] "m" (safe_address));
+
+	/*
+	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
+	 */
+	if (unlikely(restore_fpu_checking(tsk))) {
+		__thread_fpu_end(tsk);
+		force_sig(SIGSEGV, tsk);
+		return;
+	}
+}
+
+/*
  * 'math_state_restore()' saves the current math information in the
  * old math state array, and gets the new ones from the current task
  *
@@ -730,10 +761,6 @@ void math_state_restore(void)
 {
 	struct task_struct *tsk = current;
 
-	/* We need a safe address that is cheap to find and that is already
-	   in L1. We're just bringing in "tsk->thread.has_fpu", so use that */
-#define safe_address (tsk->thread.has_fpu)
-
 	if (!tsk_used_math(tsk)) {
 		local_irq_enable();
 		/*
@@ -750,25 +777,7 @@ void math_state_restore(void)
 	}
 
 	__thread_fpu_begin(tsk);
-
-	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
-	   is pending.  Clear the x87 state here by setting it to fixed
-	   values. safe_address is a random variable that should be in L1 */
-	alternative_input(
-		ASM_NOP8 ASM_NOP2,
-		"emms\n\t"	  	/* clear stack tags */
-		"fildl %P[addr]",	/* set F?P to defined value */
-		X86_FEATURE_FXSAVE_LEAK,
-		[addr] "m" (safe_address));
-
-	/*
-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
-	 */
-	if (unlikely(restore_fpu_checking(tsk))) {
-		__thread_fpu_end(tsk);
-		force_sig(SIGSEGV, tsk);
-		return;
-	}
+	__math_state_restore(tsk);
 
 	tsk->fpu_counter++;
 }
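
Note that the lazy-restore path described in the changelog is only stubbed
out by this patch: fpu_lazy_restore() is hard-coded to 0. A hypothetical
implementation along the lines of the FIXME comment above (a per-CPU
"last FPU owner" variable plus a per-task "last CPU" field; the names
below are illustrative and not part of this patch) might look like:

	/* Hypothetical sketch only -- not implemented by this patch. */
	DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);

	static inline int fpu_lazy_restore(struct task_struct *new, unsigned int cpu)
	{
		/*
		 * The registers still held in the FPU are valid if this task
		 * was the last one to touch the FPU on this CPU and it has
		 * not run on any other CPU since.
		 */
		return new == percpu_read(fpu_owner_task) &&
		       cpu == new->thread.fpu.last_cpu;
	}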


