From: Andy Lutomirski <luto@amacapital.net>
To: Andy Lutomirski <luto@kernel.org>
Cc: "X86 ML" <x86@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Brian Gerst" <brgerst@gmail.com>,
"Borislav Petkov" <bp@alien8.de>,
"Frédéric Weisbecker" <fweisbec@gmail.com>,
"Denys Vlasenko" <dvlasenk@redhat.com>,
"Linus Torvalds" <torvalds@linux-foundation.org>
Subject: Re: [PATCH 00/12] x86: Rewrite 64-bit syscall code
Date: Mon, 7 Dec 2015 14:55:27 -0800
Message-ID: <CALCETrVi9nW9FvCVNd38usJ_SU81MChyzeFfqs+i3jFqKrtm4w@mail.gmail.com>
In-Reply-To: <cover.1449522077.git.luto@kernel.org>
[-- Attachment #1: Type: text/plain, Size: 588 bytes --]
On Mon, Dec 7, 2015 at 1:51 PM, Andy Lutomirski <luto@kernel.org> wrote:
> This is kind of like the 32-bit and compat code, except that I
> preserved the fast path this time. I was unable to measure any
> significant performance change on my laptop in the fast path.
>
> What do you all think?
For completeness, if I zap the fast path entirely (see attached), I
lose 20 cycles (148 cycles vs 128 cycles) on Skylake. Switching
between movq and pushq for stack setup makes no difference whatsoever,
interestingly. I haven't tried to figure out exactly where those 20
cycles go.
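
For reference, a minimal sketch of the two stack-setup variants being
compared (illustrative only, not taken verbatim from either version of
the patch; offsets assume the usual pt_regs layout, with r15 in the
lowest of the six slots):

	/* Variant A: fill the callee-saved pt_regs slots with pushes */
	pushq	%rbx			/* pt_regs->bx */
	pushq	%rbp			/* pt_regs->bp */
	pushq	%r12			/* pt_regs->r12 */
	pushq	%r13			/* pt_regs->r13 */
	pushq	%r14			/* pt_regs->r14 */
	pushq	%r15			/* pt_regs->r15 */

	/* Variant B: reserve the six slots, then store with movq */
	subq	$(6*8), %rsp
	movq	%rbx, 5*8(%rsp)		/* pt_regs->bx */
	movq	%rbp, 4*8(%rsp)		/* pt_regs->bp */
	movq	%r12, 3*8(%rsp)		/* pt_regs->r12 */
	movq	%r13, 2*8(%rsp)		/* pt_regs->r13 */
	movq	%r14, 1*8(%rsp)		/* pt_regs->r14 */
	movq	%r15, 0*8(%rsp)		/* pt_regs->r15 */

Both variants leave %rsp and the saved register slots in the same
state; the difference, if any, would only be in how the CPU's stack
engine handles them, and I measured none.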
--Andy
[-- Attachment #2: zap_fastpath.patch --]
[-- Type: text/x-patch, Size: 2878 bytes --]
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1ab5362f241d..a97981f0d9ce 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -163,71 +163,17 @@ GLOBAL(entry_SYSCALL_64_after_swapgs)
pushq %r9 /* pt_regs->r9 */
pushq %r10 /* pt_regs->r10 */
pushq %r11 /* pt_regs->r11 */
- sub $(6*8), %rsp /* pt_regs->bp, bx, r12-15 not saved */
+ pushq %rbx /* pt_regs->rbx */
+ pushq %rbp /* pt_regs->rbp */
+ pushq %r12 /* pt_regs->r12 */
+ pushq %r13 /* pt_regs->r13 */
+ pushq %r14 /* pt_regs->r14 */
+ pushq %r15 /* pt_regs->r15 */
- /*
- * If we need to do entry work or if we guess we'll need to do
- * exit work, go straight to the slow path.
- */
- testl $_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
- jnz entry_SYSCALL64_slow_path
-
-entry_SYSCALL_64_fastpath:
- /*
- * Easy case: enable interrupts and issue the syscall. If the syscall
- * needs pt_regs, we'll call a stub that disables interrupts again
- * and jumps to the slow path.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_NONE)
-#if __SYSCALL_MASK == ~0
- cmpq $__NR_syscall_max, %rax
-#else
- andl $__SYSCALL_MASK, %eax
- cmpl $__NR_syscall_max, %eax
-#endif
- ja 1f /* return -ENOSYS (already in pt_regs->ax) */
- movq %r10, %rcx
- call *sys_call_table_fastpath_64(, %rax, 8)
- movq %rax, RAX(%rsp)
-1:
-
- /*
- * If we get here, then we know that pt_regs is clean for SYSRET64.
- * If we see that no exit work is required (which we are required
- * to check with IRQs off), then we can go straight to SYSRET64.
- */
- DISABLE_INTERRUPTS(CLBR_NONE)
- TRACE_IRQS_OFF
- testl $_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
- jnz 1f
-
- LOCKDEP_SYS_EXIT
- TRACE_IRQS_ON /* user mode is traced as IRQs on */
- RESTORE_C_REGS
- movq RSP(%rsp), %rsp
- USERGS_SYSRET64
-
-1:
- /*
- * The fast path looked good when we started, but something changed
- * along the way and we need to switch to the slow path. Calling
- * raise(3) will trigger this, for example. IRQs are off.
- */
- TRACE_IRQS_ON
- ENABLE_INTERRUPTS(CLBR_NONE)
- SAVE_EXTRA_REGS
- movq %rsp, %rdi
- call syscall_return_slowpath /* returns with IRQs disabled */
- jmp return_from_SYSCALL_64
-
-entry_SYSCALL64_slow_path:
/* IRQs are off. */
- SAVE_EXTRA_REGS
movq %rsp, %rdi
call do_syscall_64 /* returns with IRQs disabled */
-return_from_SYSCALL_64:
RESTORE_EXTRA_REGS
TRACE_IRQS_IRETQ /* we're about to change IF */
@@ -305,14 +251,7 @@ opportunistic_sysret_failed:
END(entry_SYSCALL_64)
ENTRY(stub_ptregs_64)
- /*
- * Syscalls marked as needing ptregs that go through the fast path
- * land here. We transfer to the slow path.
- */
- DISABLE_INTERRUPTS(CLBR_NONE)
- TRACE_IRQS_OFF
- addq $8, %rsp
- jmp entry_SYSCALL64_slow_path
+ ud2
END(stub_ptregs_64)
/*
Thread overview: 47+ messages
2015-12-07 21:51 [PATCH 00/12] x86: Rewrite 64-bit syscall code Andy Lutomirski
2015-12-07 21:51 ` [PATCH 01/12] selftests/x86: Extend Makefile to allow 64-bit only tests Andy Lutomirski
2015-12-08 9:34 ` Borislav Petkov
2015-12-09 18:55 ` Andy Lutomirski
2015-12-09 19:11 ` Shuah Khan
2015-12-09 19:22 ` Andy Lutomirski
2015-12-09 19:58 ` Shuah Khan
2015-12-07 21:51 ` [PATCH 02/12] selftests/x86: Add check_initial_reg_state Andy Lutomirski
2015-12-08 9:54 ` Borislav Petkov
2015-12-09 18:56 ` Andy Lutomirski
2015-12-09 19:09 ` Borislav Petkov
2015-12-09 19:20 ` Andy Lutomirski
2015-12-09 19:28 ` Borislav Petkov
2015-12-07 21:51 ` [PATCH 03/12] x86/syscalls: Refactor syscalltbl.sh Andy Lutomirski
2015-12-07 21:51 ` [PATCH 04/12] x86/syscalls: Remove __SYSCALL_COMMON and __SYSCALL_X32 Andy Lutomirski
2015-12-07 21:51 ` [PATCH 05/12] x86/syscalls: Move compat syscall entry handling into syscalltbl.sh Andy Lutomirski
2015-12-07 21:51 ` [PATCH 06/12] x86/syscalls: Add syscall entry qualifiers Andy Lutomirski
2015-12-07 21:51 ` [PATCH 07/12] x86/entry/64: Always run ptregs-using syscalls on the slow path Andy Lutomirski
2015-12-08 0:50 ` Brian Gerst
2015-12-08 0:54 ` Brian Gerst
2015-12-08 1:12 ` Andy Lutomirski
2015-12-08 13:07 ` Brian Gerst
2015-12-08 18:56 ` Ingo Molnar
2015-12-08 21:51 ` Andy Lutomirski
2015-12-09 4:43 ` Brian Gerst
2015-12-09 5:45 ` Andy Lutomirski
2015-12-09 6:21 ` Andy Lutomirski
2015-12-09 12:52 ` Brian Gerst
2015-12-09 13:02 ` [PATCH] x86/entry/64: Remove duplicate syscall table for fast path Brian Gerst
2015-12-09 18:53 ` Andy Lutomirski
2015-12-09 21:08 ` Brian Gerst
2015-12-09 21:15 ` Andy Lutomirski
2015-12-09 23:50 ` Andy Lutomirski
2015-12-10 5:42 ` Brian Gerst
2015-12-10 5:54 ` Andy Lutomirski
2015-12-09 19:30 ` Andy Lutomirski
2015-12-07 21:51 ` [PATCH 08/12] x86/entry/64: Call all native slow-path syscalls with full pt-regs Andy Lutomirski
2015-12-07 21:51 ` [PATCH 09/12] x86/entry/64: Stop using int_ret_from_sys_call in ret_from_fork Andy Lutomirski
2015-12-07 21:51 ` [PATCH 10/12] x86/entry/64: Migrate the 64-bit syscall slow path to C Andy Lutomirski
2015-12-07 21:51 ` [PATCH 11/12] x86/entry/32: Change INT80 to be an interrupt gate Andy Lutomirski
2016-04-01 1:45 ` Rusty Russell
2016-04-01 7:40 ` [tip:x86/urgent] lguest, x86/entry/32: Fix handling of guest syscalls using interrupt gates tip-bot for Rusty Russell
2015-12-07 21:51 ` [PATCH 12/12] x86/entry: Do enter_from_user_mode with IRQs off Andy Lutomirski
2015-12-07 22:55 ` Andy Lutomirski [this message]
2015-12-08 4:42 ` [PATCH 00/12] x86: Rewrite 64-bit syscall code Ingo Molnar
2015-12-08 5:42 ` Andy Lutomirski
2015-12-08 7:00 ` Ingo Molnar