From: tip-bot for Andy Lutomirski <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, luto@amacapital.net,
mingo@kernel.org, tglx@linutronix.de, hpa@zytor.com
Subject: [tip:x86/vdso] x86_64/vsyscall: Move all of the gate_area code to vsyscall_64.c
Date: Tue, 28 Oct 2014 04:14:34 -0700 [thread overview]
Message-ID: <tip-b93590901a01a6d036b3b7c856bcc5724fdb9911@git.kernel.org> (raw)
In-Reply-To: <a7ee266773671a05f00b7175ca65a0dd812d2e4b.1411494540.git.luto@amacapital.net>
Commit-ID: b93590901a01a6d036b3b7c856bcc5724fdb9911
Gitweb: http://git.kernel.org/tip/b93590901a01a6d036b3b7c856bcc5724fdb9911
Author: Andy Lutomirski <luto@amacapital.net>
AuthorDate: Tue, 23 Sep 2014 10:50:51 -0700
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 28 Oct 2014 11:22:08 +0100
x86_64/vsyscall: Move all of the gate_area code to vsyscall_64.c
This code exists for the sole purpose of making the vsyscall
page look sort of like real userspace memory. Move it so that
it lives with the rest of the vsyscall code.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/a7ee266773671a05f00b7175ca65a0dd812d2e4b.1411494540.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/kernel/vsyscall_64.c | 49 +++++++++++++++++++++++++++++++++++++++++++
arch/x86/mm/init_64.c | 49 -------------------------------------------
2 files changed, 49 insertions(+), 49 deletions(-)
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index 957779f..521d5ed 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -284,6 +284,55 @@ sigsegv:
}
/*
+ * A pseudo VMA to allow ptrace access for the vsyscall page. This only
+ * covers the 64bit vsyscall page now. 32bit has a real VMA now and does
+ * not need special handling anymore:
+ */
+static const char *gate_vma_name(struct vm_area_struct *vma)
+{
+ return "[vsyscall]";
+}
+static struct vm_operations_struct gate_vma_ops = {
+ .name = gate_vma_name,
+};
+static struct vm_area_struct gate_vma = {
+ .vm_start = VSYSCALL_ADDR,
+ .vm_end = VSYSCALL_ADDR + PAGE_SIZE,
+ .vm_page_prot = PAGE_READONLY_EXEC,
+ .vm_flags = VM_READ | VM_EXEC,
+ .vm_ops = &gate_vma_ops,
+};
+
+struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+{
+#ifdef CONFIG_IA32_EMULATION
+ if (!mm || mm->context.ia32_compat)
+ return NULL;
+#endif
+ return &gate_vma;
+}
+
+int in_gate_area(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma = get_gate_vma(mm);
+
+ if (!vma)
+ return 0;
+
+ return (addr >= vma->vm_start) && (addr < vma->vm_end);
+}
+
+/*
+ * Use this when you have no reliable mm, typically from interrupt
+ * context. It is less reliable than using a task's mm and may give
+ * false positives.
+ */
+int in_gate_area_no_mm(unsigned long addr)
+{
+ return (addr & PAGE_MASK) == VSYSCALL_ADDR;
+}
+
+/*
* Assume __initcall executes before all user space. Hopefully kmod
* doesn't violate that. We'll find out if it does.
*/
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4cb8763..dd9ca9b 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1193,55 +1193,6 @@ int kern_addr_valid(unsigned long addr)
return pfn_valid(pte_pfn(*pte));
}
-/*
- * A pseudo VMA to allow ptrace access for the vsyscall page. This only
- * covers the 64bit vsyscall page now. 32bit has a real VMA now and does
- * not need special handling anymore:
- */
-static const char *gate_vma_name(struct vm_area_struct *vma)
-{
- return "[vsyscall]";
-}
-static struct vm_operations_struct gate_vma_ops = {
- .name = gate_vma_name,
-};
-static struct vm_area_struct gate_vma = {
- .vm_start = VSYSCALL_ADDR,
- .vm_end = VSYSCALL_ADDR + PAGE_SIZE,
- .vm_page_prot = PAGE_READONLY_EXEC,
- .vm_flags = VM_READ | VM_EXEC,
- .vm_ops = &gate_vma_ops,
-};
-
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
-{
-#ifdef CONFIG_IA32_EMULATION
- if (!mm || mm->context.ia32_compat)
- return NULL;
-#endif
- return &gate_vma;
-}
-
-int in_gate_area(struct mm_struct *mm, unsigned long addr)
-{
- struct vm_area_struct *vma = get_gate_vma(mm);
-
- if (!vma)
- return 0;
-
- return (addr >= vma->vm_start) && (addr < vma->vm_end);
-}
-
-/*
- * Use this when you have no reliable mm, typically from interrupt
- * context. It is less reliable than using a task's mm and may give
- * false positives.
- */
-int in_gate_area_no_mm(unsigned long addr)
-{
- return (addr & PAGE_MASK) == VSYSCALL_ADDR;
-}
-
static unsigned long probe_memory_block_size(void)
{
/* start from 2g */
Thread overview: 22+ messages
2014-09-23 17:50 [PATCH 0/8] x86: Disentangle the vdso and clean it up Andy Lutomirski
2014-09-23 17:50 ` [PATCH 1/8] x86_64,vsyscall: Move all of the gate_area code to vsyscall_64.c Andy Lutomirski
2014-10-28 11:14 ` tip-bot for Andy Lutomirski [this message]
2014-09-23 17:50 ` [PATCH 2/8] x86_64: Move getcpu code from vsyscall_64.c to vdso/vma.c Andy Lutomirski
2014-10-28 11:14 ` [tip:x86/vdso] x86_64/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 3/8] x86,vdso: Change the PER_CPU segment to use struct desc_struct Andy Lutomirski
2014-10-28 11:15 ` [tip:x86/vdso] x86/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 4/8] x86,vdso: Make the PER_CPU segment start out accessed Andy Lutomirski
2014-10-28 11:15 ` [tip:x86/vdso] x86/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 5/8] x86,vdso: Make the PER_CPU segment 32 bits Andy Lutomirski
2014-10-28 11:15 ` [tip:x86/vdso] x86/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 6/8] x86_64,vdso: Remove jiffies from the vvar page Andy Lutomirski
2014-10-28 11:15 ` [tip:x86/vdso] x86_64/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 7/8] x86_64,vdso: Clean up vgetcpu init and merge the vdso initcalls Andy Lutomirski
2014-10-28 11:16 ` [tip:x86/vdso] x86_64/vdso: " tip-bot for Andy Lutomirski
2014-09-23 17:50 ` [PATCH 8/8] x86,vdso: Replace vgetcpu_mode with static_cpu_has Andy Lutomirski
2014-10-20 17:44 ` [PATCH 0/8] x86: Disentangle the vdso and clean it up Andy Lutomirski
2014-10-20 21:41 ` H. Peter Anvin
2014-10-20 21:57 ` Andy Lutomirski
2014-10-20 22:03 ` H. Peter Anvin
2014-10-20 22:41 ` Andy Lutomirski
2014-10-21 4:38 ` Andy Lutomirski