From: KarimAllah Ahmed <karahmed@amazon.de>
To: linux-kernel@vger.kernel.org
Cc: KarimAllah Ahmed, Andi Kleen, Andrea Arcangeli, Andy Lutomirski,
	Arjan van de Ven, Ashok Raj, Asit Mallick, Borislav Petkov,
	Dan Williams, Dave Hansen, David Woodhouse, Greg Kroah-Hartman,
	"H. Peter Anvin", Ingo Molnar, Janakarajan Natarajan, Joerg Roedel,
	Jun Nakajima, Laura Abbott, Linus Torvalds, Masami Hiramatsu,
	Paolo Bonzini, Peter Zijlstra, Radim Krčmář, Thomas Gleixner,
	Tim Chen, Tom Lendacky, kvm@vger.kernel.org, x86@kernel.org
Subject: [RFC 04/10] x86/mm: Only flush indirect branches when switching into non-dumpable process
Date: Sat, 20 Jan 2018 20:22:55 +0100
Message-Id: <1516476182-5153-5-git-send-email-karahmed@amazon.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1516476182-5153-1-git-send-email-karahmed@amazon.de>
References: <1516476182-5153-1-git-send-email-karahmed@amazon.de>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Tim Chen

Flush indirect branches when switching into a process that has marked
itself non-dumpable. This protects high-value processes like gpg
better, without incurring too high a performance overhead.

Signed-off-by: Andi Kleen
Signed-off-by: David Woodhouse
Signed-off-by: KarimAllah Ahmed
---
 arch/x86/mm/tlb.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 304de7d..f64e80c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -225,8 +225,19 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	 * Avoid user/user BTB poisoning by flushing the branch predictor
 	 * when switching between processes. This stops one process from
 	 * doing Spectre-v2 attacks on another.
+	 *
+	 * As an optimization: Flush indirect branches only when
+	 * switching into processes that disable dumping.
+	 *
+	 * This will not flush when switching into kernel threads.
+	 * But it would flush when switching into idle and back.
+	 *
+	 * It might be useful to have a one-off cache here
+	 * to also not flush the idle case, but we would need some
+	 * kind of stable sequence number to remember the previous mm.
 	 */
-	indirect_branch_prediction_barrier();
+	if (tsk && tsk->mm && get_dumpable(tsk->mm) != SUID_DUMP_USER)
+		indirect_branch_prediction_barrier();
 
 	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 		/*
-- 
2.7.4