From: Joerg Roedel
Subject: [PATCH 06/31] x86/entry/32: Split off return-to-kernel path
Date: Fri, 9 Feb 2018 10:25:15 +0100
Message-Id: <1518168340-9392-7-git-send-email-joro@8bytes.org>
In-Reply-To: <1518168340-9392-1-git-send-email-joro@8bytes.org>
References: <1518168340-9392-1-git-send-email-joro@8bytes.org>
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, jroedel@suse.de,
    joro@8bytes.org

Use a separate return path when we know we are returning to the
kernel. This allows us to put the PTI cr3-switch and the switch to
the entry-stack into the return-to-user path without further
checking.

Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 00ae759..9bd7718 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -65,7 +65,7 @@
 # define preempt_stop(clobbers)	DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
 #else
 # define preempt_stop(clobbers)
-# define resume_kernel		restore_all
+# define resume_kernel		restore_all_kernel
 #endif
 
 .macro TRACE_IRQS_IRET
@@ -400,9 +400,9 @@ ENTRY(resume_kernel)
 	DISABLE_INTERRUPTS(CLBR_ANY)
 .Lneed_resched:
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
-	jnz	restore_all
+	jnz	restore_all_kernel
 	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)	# interrupts off (exception path) ?
-	jz	restore_all
+	jz	restore_all_kernel
 	call	preempt_schedule_irq
 	jmp	.Lneed_resched
 END(resume_kernel)
@@ -602,6 +602,11 @@ restore_all:
 .Lirq_return:
 	INTERRUPT_RETURN
 
+restore_all_kernel:
+	TRACE_IRQS_IRET
+	RESTORE_REGS 4
+	jmp	.Lirq_return
+
 .section .fixup, "ax"
 ENTRY(iret_exc )
 	pushl	$0				# no error code
-- 
2.7.4
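
For readers following the series: the value of the split is that the PTI
work (switching back to the user cr3 and onto the entry stack) only has
to happen on the return-to-user path, so restore_all_kernel can stay a
plain register restore. Below is a minimal sketch of the intended shape
of the two paths; SWITCH_TO_ENTRY_STACK and SWITCH_TO_USER_CR3 are
placeholder names standing in for helpers introduced by later patches in
this series, not something this patch adds.

	/*
	 * Return to user mode: only this path needs the PTI-specific
	 * work, so it can be added here without any conditional checks.
	 */
restore_all:
	TRACE_IRQS_IRET
	SWITCH_TO_ENTRY_STACK			# placeholder: leave the task stack
	SWITCH_TO_USER_CR3 scratch_reg=%eax	# placeholder: switch to user cr3
	RESTORE_REGS 4				# skip orig_eax/error_code
.Lirq_return:
	INTERRUPT_RETURN

	/* Return to kernel mode: no entry-stack or cr3 switch required. */
restore_all_kernel:
	TRACE_IRQS_IRET
	RESTORE_REGS 4				# skip orig_eax/error_code
	jmp	.Lirq_return

Both paths funnel through .Lirq_return, so they share the single
INTERRUPT_RETURN and the iret_exc fixup that handles a faulting iret.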