From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Andy Lutomirski <luto@kernel.org>,
	Borislav Petkov <bp@alien8.de>, Borislav Petkov <bpetkov@suse.de>,
	Brian Gerst <brgerst@gmail.com>,
	Chang Seok <chang.seok.bae@intel.com>,
	Denys Vlasenko <dvlasenk@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [PATCH 4.13 19/52] x86/switch_to/64: Rewrite FS/GS switching yet again to fix AMD CPUs
Date: Mon, 18 Sep 2017 11:09:47 +0200
Message-ID: <20170918090906.890754188@linuxfoundation.org>
In-Reply-To: <20170918090904.072766209@linuxfoundation.org>

4.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <luto@kernel.org>

commit e137a4d8f4dd2e277e355495b6b2cb241a8693c3 upstream.

Switching FS and GS is a mess, and the current code is still subtly
wrong: it assumes that "Loading a nonzero value into FS sets the
index and base", which is false on AMD CPUs if the value being
loaded is 1, 2, or 3.
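
For illustration (an editorial sketch in the spirit of the kernel's x86
selftests, not part of the original changelog), the difference can be
observed from userspace.  GS is used rather than FS because 64-bit
userspace normally leaves GS free, and 0x200000 is just an arbitrary
nonzero test value:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/prctl.h>

int main(void)
{
	unsigned long base = 0;
	unsigned short sel = 3;		/* null selector with RPL 3 */

	/* Give GS a nonzero base via the ARCH_SET_GS arch_prctl. */
	syscall(SYS_arch_prctl, ARCH_SET_GS, 0x200000UL);

	/* Load a "nonzero" selector value of 1, 2, or 3 into GS. */
	asm volatile("mov %0, %%gs" : : "rm" (sel));

	/* Intel CPUs zero the base on that load; AMD CPUs may not. */
	syscall(SYS_arch_prctl, ARCH_GET_GS, &base);
	printf("GS base after loading selector 3: %#lx\n", base);
	return 0;
}

On an unfixed kernel the value read back can also vary depending on
whether the task was scheduled in between, which is exactly the
inconsistency being addressed.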

(The current code came from commit 3e2b68d752c9 ("x86/asm,
sched/x86: Rewrite the FS and GS context switch code"), which made
it better but didn't fully fix it.)

Rewrite it to be much simpler and more obviously correct.  This
should fix it fully on AMD CPUs and shouldn't adversely affect
performance.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Chang Seok <chang.seok.bae@intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kernel/process_64.c |  227 +++++++++++++++++++++++--------------------
 1 file changed, 122 insertions(+), 105 deletions(-)

--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -149,6 +149,123 @@ void release_thread(struct task_struct *
 	}
 }
 
+enum which_selector {
+	FS,
+	GS
+};
+
+/*
+ * Saves the FS or GS base for an outgoing thread if FSGSBASE extensions are
+ * not available.  The goal is to be reasonably fast on non-FSGSBASE systems.
+ * It's forcibly inlined because it'll generate better code and this function
+ * is hot.
+ */
+static __always_inline void save_base_legacy(struct task_struct *prev_p,
+					     unsigned short selector,
+					     enum which_selector which)
+{
+	if (likely(selector == 0)) {
+		/*
+		 * On Intel (without X86_BUG_NULL_SEG), the segment base could
+		 * be the pre-existing saved base or it could be zero.  On AMD
+		 * (with X86_BUG_NULL_SEG), the segment base could be almost
+		 * anything.
+		 *
+		 * This branch is very hot (it's hit twice on almost every
+		 * context switch between 64-bit programs), and avoiding
+		 * the RDMSR helps a lot, so we just assume that whatever
+		 * value is already saved is correct.  This matches historical
+		 * Linux behavior, so it won't break existing applications.
+		 *
+		 * To avoid leaking state, on non-X86_BUG_NULL_SEG CPUs, if we
+		 * report that the base is zero, it needs to actually be zero:
+		 * see the corresponding logic in load_seg_legacy.
+		 */
+	} else {
+		/*
+		 * If the selector is 1, 2, or 3, then the base is zero on
+		 * !X86_BUG_NULL_SEG CPUs and could be anything on
+		 * X86_BUG_NULL_SEG CPUs.  In the latter case, Linux
+		 * has never attempted to preserve the base across context
+		 * switches.
+		 *
+		 * If selector > 3, then it refers to a real segment, and
+		 * saving the base isn't necessary.
+		 */
+		if (which == FS)
+			prev_p->thread.fsbase = 0;
+		else
+			prev_p->thread.gsbase = 0;
+	}
+}
+
+static __always_inline void save_fsgs(struct task_struct *task)
+{
+	savesegment(fs, task->thread.fsindex);
+	savesegment(gs, task->thread.gsindex);
+	save_base_legacy(task, task->thread.fsindex, FS);
+	save_base_legacy(task, task->thread.gsindex, GS);
+}
+
+static __always_inline void loadseg(enum which_selector which,
+				    unsigned short sel)
+{
+	if (which == FS)
+		loadsegment(fs, sel);
+	else
+		load_gs_index(sel);
+}
+
+static __always_inline void load_seg_legacy(unsigned short prev_index,
+					    unsigned long prev_base,
+					    unsigned short next_index,
+					    unsigned long next_base,
+					    enum which_selector which)
+{
+	if (likely(next_index <= 3)) {
+		/*
+		 * The next task is using 64-bit TLS, is not using this
+		 * segment at all, or is having fun with arcane CPU features.
+		 */
+		if (next_base == 0) {
+			/*
+			 * Nasty case: on AMD CPUs, we need to forcibly zero
+			 * the base.
+			 */
+			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
+				loadseg(which, __USER_DS);
+				loadseg(which, next_index);
+			} else {
+				/*
+				 * We could try to exhaustively detect cases
+				 * under which we can skip the segment load,
+				 * but there's really only one case that matters
+				 * for performance: if both the previous and
+				 * next states are fully zeroed, we can skip
+				 * the load.
+				 *
+				 * (This assumes that prev_base == 0 has no
+				 * false positives.  This is the case on
+				 * Intel-style CPUs.)
+				 */
+				if (likely(prev_index | next_index | prev_base))
+					loadseg(which, next_index);
+			}
+		} else {
+			if (prev_index != next_index)
+				loadseg(which, next_index);
+			wrmsrl(which == FS ? MSR_FS_BASE : MSR_KERNEL_GS_BASE,
+			       next_base);
+		}
+	} else {
+		/*
+		 * The next task is using a real segment.  Loading the selector
+		 * is sufficient.
+		 */
+		loadseg(which, next_index);
+	}
+}
+
 int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
 		unsigned long arg, struct task_struct *p, unsigned long tls)
 {
@@ -286,7 +403,6 @@ __switch_to(struct task_struct *prev_p,
 	struct fpu *next_fpu = &next->fpu;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
-	unsigned prev_fsindex, prev_gsindex;
 
 	switch_fpu_prepare(prev_fpu, cpu);
 
@@ -295,8 +411,7 @@ __switch_to(struct task_struct *prev_p,
 	 *
 	 * (e.g. xen_load_tls())
 	 */
-	savesegment(fs, prev_fsindex);
-	savesegment(gs, prev_gsindex);
+	save_fsgs(prev_p);
 
 	/*
 	 * Load TLS before restoring any segments so that segment loads
@@ -335,108 +450,10 @@ __switch_to(struct task_struct *prev_p,
 	if (unlikely(next->ds | prev->ds))
 		loadsegment(ds, next->ds);
 
-	/*
-	 * Switch FS and GS.
-	 *
-	 * These are even more complicated than DS and ES: they have
-	 * 64-bit bases are that controlled by arch_prctl.  The bases
-	 * don't necessarily match the selectors, as user code can do
-	 * any number of things to cause them to be inconsistent.
-	 *
-	 * We don't promise to preserve the bases if the selectors are
-	 * nonzero.  We also don't promise to preserve the base if the
-	 * selector is zero and the base doesn't match whatever was
-	 * most recently passed to ARCH_SET_FS/GS.  (If/when the
-	 * FSGSBASE instructions are enabled, we'll need to offer
-	 * stronger guarantees.)
-	 *
-	 * As an invariant,
-	 * (fsbase != 0 && fsindex != 0) || (gsbase != 0 && gsindex != 0) is
-	 * impossible.
-	 */
-	if (next->fsindex) {
-		/* Loading a nonzero value into FS sets the index and base. */
-		loadsegment(fs, next->fsindex);
-	} else {
-		if (next->fsbase) {
-			/* Next index is zero but next base is nonzero. */
-			if (prev_fsindex)
-				loadsegment(fs, 0);
-			wrmsrl(MSR_FS_BASE, next->fsbase);
-		} else {
-			/* Next base and index are both zero. */
-			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
-				/*
-				 * We don't know the previous base and can't
-				 * find out without RDMSR.  Forcibly clear it.
-				 */
-				loadsegment(fs, __USER_DS);
-				loadsegment(fs, 0);
-			} else {
-				/*
-				 * If the previous index is zero and ARCH_SET_FS
-				 * didn't change the base, then the base is
-				 * also zero and we don't need to do anything.
-				 */
-				if (prev->fsbase || prev_fsindex)
-					loadsegment(fs, 0);
-			}
-		}
-	}
-	/*
-	 * Save the old state and preserve the invariant.
-	 * NB: if prev_fsindex == 0, then we can't reliably learn the base
-	 * without RDMSR because Intel user code can zero it without telling
-	 * us and AMD user code can program any 32-bit value without telling
-	 * us.
-	 */
-	if (prev_fsindex)
-		prev->fsbase = 0;
-	prev->fsindex = prev_fsindex;
-
-	if (next->gsindex) {
-		/* Loading a nonzero value into GS sets the index and base. */
-		load_gs_index(next->gsindex);
-	} else {
-		if (next->gsbase) {
-			/* Next index is zero but next base is nonzero. */
-			if (prev_gsindex)
-				load_gs_index(0);
-			wrmsrl(MSR_KERNEL_GS_BASE, next->gsbase);
-		} else {
-			/* Next base and index are both zero. */
-			if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
-				/*
-				 * We don't know the previous base and can't
-				 * find out without RDMSR.  Forcibly clear it.
-				 *
-				 * This contains a pointless SWAPGS pair.
-				 * Fixing it would involve an explicit check
-				 * for Xen or a new pvop.
-				 */
-				load_gs_index(__USER_DS);
-				load_gs_index(0);
-			} else {
-				/*
-				 * If the previous index is zero and ARCH_SET_GS
-				 * didn't change the base, then the base is
-				 * also zero and we don't need to do anything.
-				 */
-				if (prev->gsbase || prev_gsindex)
-					load_gs_index(0);
-			}
-		}
-	}
-	/*
-	 * Save the old state and preserve the invariant.
-	 * NB: if prev_gsindex == 0, then we can't reliably learn the base
-	 * without RDMSR because Intel user code can zero it without telling
-	 * us and AMD user code can program any 32-bit value without telling
-	 * us.
-	 */
-	if (prev_gsindex)
-		prev->gsbase = 0;
-	prev->gsindex = prev_gsindex;
+	load_seg_legacy(prev->fsindex, prev->fsbase,
+			next->fsindex, next->fsbase, FS);
+	load_seg_legacy(prev->gsindex, prev->gsbase,
+			next->gsindex, next->gsbase, GS);
 
 	switch_fpu_finish(next_fpu, cpu);
 

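For review, the case analysis that load_seg_legacy() implements boils
down to the following (a restatement of the code above, not additional
changes):

  next_index <= 3, next_base == 0, X86_BUG_NULL_SEG present:
	load __USER_DS and then next_index to forcibly zero the base
  next_index <= 3, next_base == 0, bug absent:
	skip the segment load only when prev_index, next_index and
	prev_base are all zero; otherwise load next_index
  next_index <= 3, next_base != 0:
	reload the selector only if it changed, then write the base
	with wrmsrl() to MSR_FS_BASE or MSR_KERNEL_GS_BASE
  next_index > 3:
	a real segment; loading the selector also sets the base
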
