linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH v3 00/11] arm64: Add a compat vDSO
@ 2016-12-06 16:03 Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 01/11] arm64: compat: Remove leftover variable declaration Kevin Brodsky
                   ` (11 more replies)
  0 siblings, 12 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

This series adds support for a compat (AArch32) vDSO, providing two
userspace functionalities to compat processes:

* "Virtual" time syscalls (gettimeofday and clock_gettime). The
  implementation is an adaptation of the arm vDSO (vgettimeofday.c),
  sharing the data page with the 64-bit vDSO.

* sigreturn trampolines, following the example of the 64-bit vDSO
  (sigreturn.S), but slightly more complicated because we provide A32
  and T32 variants for both sigreturn and rt_sigreturn, and appropriate
  arm-specific unwinding directives.

The first point brings the performance improvement expected of a vDSO,
by implementing the time syscalls directly in userspace. The second
point makes it possible to provide unwinding information for the
sigreturn trampolines, achieving feature parity with the trampolines
provided by glibc.
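
To give an idea of the intended use: a trivial 32-bit test program like
the one below (purely illustrative, not part of the series) stops
trapping into the kernel on every iteration once the compat vDSO is
present, provided the C library picks up the vDSO symbols (older glibc
may need -lrt for clock_gettime):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;
	int i;

	/* With the compat vDSO, these calls are serviced in userspace. */
	for (i = 0; i < 1000000; i++)
		clock_gettime(CLOCK_MONOTONIC, &ts);

	printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
	return 0;
}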

Unfortunately, this time we cannot escape using a 32-bit toolchain. To
build the compat vDSO, CONFIG_COMPAT_VDSO must be set *and*
CROSS_COMPILE_ARM32 must be defined to the prefix of a 32-bit compiler.
Failure to do so will not prevent building the kernel, but a warning
will be printed and the compat vDSO will not be built.
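
For instance (the toolchain prefixes below are only an example):

  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
       CROSS_COMPILE_ARM32=arm-linux-gnueabihf- Image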

v3 is a major refactor of the series. The main change is that the kuser
helpers are no longer mutually exclusive with the 32-bit vDSO, and can
be disabled independently of the 32-bit vDSO (they are kept enabled by
default). To this end, the "old" sigreturn trampolines have been moved
out of the [vectors] page into an independent [sigreturn] page, similar
to [sigpage] on arm. The [vectors] page is now [kuserhelpers], and its
presence is controlled by CONFIG_KUSER_HELPERS. The [sigreturn] page is
only present when the 32-bit vDSO is not included (the latter provides
its own sigreturn trampolines with unwinding information). The following
table summarises which pages are now mapped into 32-bit processes:

+----------------+----------------+----------------+
|     CONFIG     | !VDSO32        | VDSO32         |
+----------------+----------------+----------------+
| !KUSER_HELPERS | [sigreturn]    | [vvar]         |
|                |                | [vdso]         |
+----------------+----------------+----------------+
| KUSER_HELPERS  | [sigreturn]    | [vvar]         |
|                | [kuserhelpers] | [vdso]         |
|                |                | [kuserhelpers] |
+----------------+----------------+----------------+

Additionally, the 32-bit vDSO no longer requires a 32-bit compiler
supporting ARMv8; any compiler supporting ARMv7-A can now be used. The
only code change is to introduce separate AArch32 barriers, to stop
generating dmb ishld when using an ARMv7 assembler (which does not
accept it). However, a major rework of the 32-bit vDSO Makefile was
necessary, because mismatching 32/64-bit compiler versions prevent
sharing flags (e.g. recently introduced warning flags). Copy/pasting
flags is not nice or beautiful, but filtering out flags is not
practicable and may break when top-level Makefiles are changed.

Patches overview:
*  1..3: split [vectors] -> [sigreturn] + [kuserhelpers], add
         CONFIG_KUSER_HELPERS
*  4..6: preparation patches
*     7: the 32-bit vDSO itself
* 8..10: plumbing for the 32-bit vDSO
*    11: Kconfig/Makefile wiring

Thanks,
Kevin

Changelog v2..v3:
* kuser helpers / sigreturn split.
* Rework/debug of the sigreturn trampolines in the 32-bit vDSO. The CFI
  directives have been removed, as they only provide debug information
  on arm (which is stripped from vdso.so). After adding a missing nop,
  unwinding now works properly with all the trampolines (see comment in
  patch 7).
* Some cleanup in vdso.c, to make the 32-bit and 64-bit setup more
  consistent.
* 32-bit vDSO Makefile refactor.
* AArch32 barriers, selected by the 32-bit compiler mode
  (armv7-a/armv8-a).
* Use PROVIDE_HIDDEN() instead of HIDDEN() in vdso32/vdso.lds.S, as
  HIDDEN() only exists since ld 2.23.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>

Kevin Brodsky (11):
  arm64: compat: Remove leftover variable declaration
  arm64: compat: Split the sigreturn trampolines and kuser helpers
  arm64: compat: Add CONFIG_KUSER_HELPERS
  arm64: Refactor vDSO init/setup
  arm64: compat: Add time-related syscall numbers
  arm64: compat: Expose offset to registers in sigframes
  arm64: compat: Add a 32-bit vDSO
  arm64: compat: 32-bit vDSO setup
  arm64: elf: Set AT_SYSINFO_EHDR in compat processes
  arm64: compat: Use vDSO sigreturn trampolines if available
  arm64: Wire up and expose the new compat vDSO

 arch/arm64/Kconfig                         |  52 +++++
 arch/arm64/Makefile                        |  28 ++-
 arch/arm64/include/asm/elf.h               |  15 +-
 arch/arm64/include/asm/processor.h         |   4 +-
 arch/arm64/include/asm/signal32.h          |  46 ++++-
 arch/arm64/include/asm/unistd.h            |   2 +
 arch/arm64/include/asm/vdso.h              |   3 +
 arch/arm64/kernel/Makefile                 |   9 +-
 arch/arm64/kernel/asm-offsets.c            |  13 ++
 arch/arm64/kernel/kuser32.S                |  48 +----
 arch/arm64/kernel/signal32.c               |  66 ++----
 arch/arm64/kernel/sigreturn32.S            |  59 ++++++
 arch/arm64/kernel/vdso.c                   | 315 ++++++++++++++++++++---------
 arch/arm64/kernel/vdso32/Makefile          | 166 +++++++++++++++
 arch/arm64/kernel/vdso32/aarch32-barrier.h |  33 +++
 arch/arm64/kernel/vdso32/sigreturn.S       |  76 +++++++
 arch/arm64/kernel/vdso32/vdso.S            |  32 +++
 arch/arm64/kernel/vdso32/vdso.lds.S        |  93 +++++++++
 arch/arm64/kernel/vdso32/vgettimeofday.c   | 295 +++++++++++++++++++++++++++
 19 files changed, 1147 insertions(+), 208 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S
 create mode 100644 arch/arm64/kernel/vdso32/Makefile
 create mode 100644 arch/arm64/kernel/vdso32/aarch32-barrier.h
 create mode 100644 arch/arm64/kernel/vdso32/sigreturn.S
 create mode 100644 arch/arm64/kernel/vdso32/vdso.S
 create mode 100644 arch/arm64/kernel/vdso32/vdso.lds.S
 create mode 100644 arch/arm64/kernel/vdso32/vgettimeofday.c

-- 
2.10.2

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 01/11] arm64: compat: Remove leftover variable declaration
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 02/11] arm64: compat: Split the sigreturn trampolines and kuser helpers Kevin Brodsky
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Commit a1d5ebaf8ccd ("arm64: big-endian: don't treat code as data when
copying sigret code") moved the 32-bit sigreturn trampoline code from
the aarch32_sigret_code array to kuser32.S. The commit removed the
array definition from signal32.c, but not its declaration in
signal32.h. Remove the leftover declaration.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/signal32.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index eeaa97559bab..81abea0b7650 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -22,8 +22,6 @@
 
 #define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
 
-extern const compat_ulong_t aarch32_sigret_code[6];
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 02/11] arm64: compat: Split the sigreturn trampolines and kuser helpers
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 01/11] arm64: compat: Remove leftover variable declaration Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 03/11] arm64: compat: Add CONFIG_KUSER_HELPERS Kevin Brodsky
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

AArch32 processes currently have a special [vectors] page installed,
containing the sigreturn trampolines and the kuser helpers, at the
fixed address mandated by the kuser helpers ABI.

Having both functionalities in the same page is becoming problematic,
because:

* It makes it impossible to disable the kuser helpers (since the
  sigreturn trampolines cannot be removed), whereas this is possible
  on arm.

* A future 32-bit vDSO would provide the sigreturn trampolines itself,
  making those in [vectors] redundant.

This patch addresses the problem by moving the sigreturn trampolines to
a separate [sigreturn] page, in similar fashion to [sigpage] in arm.

[vectors] has always been a misnomer on arm64/compat, as no AArch32
vectors are mapped there. Now that only the kuser helpers are left, we
can rename it to [kuserhelpers].

mm->context.vdso used to point to the [vectors] page, which is
unnecessary (as its address is fixed). It now points to the [sigreturn]
page (whose address is randomized like a vDSO).

Finally, aarch32_setup_vectors_page() has been renamed to the more
generic aarch32_setup_additional_pages().

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/elf.h       |   6 +-
 arch/arm64/include/asm/processor.h |   4 +-
 arch/arm64/include/asm/signal32.h  |   2 -
 arch/arm64/kernel/signal32.c       |   5 +-
 arch/arm64/kernel/vdso.c           | 135 ++++++++++++++++++++++++++-----------
 5 files changed, 104 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index a55384f4a5d7..7da9452596ad 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -185,10 +185,10 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
 #define compat_start_thread		compat_start_thread
 #define COMPAT_SET_PERSONALITY(ex)	set_thread_flag(TIF_32BIT);
 #define COMPAT_ARCH_DLINFO
-extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
-				      int uses_interp);
+extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+					  int uses_interp);
 #define compat_arch_setup_additional_pages \
-					aarch32_setup_vectors_page
+					aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 60e34824e18c..b976060b1113 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -39,9 +39,9 @@
 
 #define STACK_TOP_MAX		TASK_SIZE_64
 #ifdef CONFIG_COMPAT
-#define AARCH32_VECTORS_BASE	0xffff0000
+#define AARCH32_KUSER_HELPERS_BASE 0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
-				AARCH32_VECTORS_BASE : STACK_TOP_MAX)
+				AARCH32_KUSER_HELPERS_BASE : STACK_TOP_MAX)
 #else
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
-#define AARCH32_KERN_SIGRET_CODE_OFFSET	0x500
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index b7063de792f7..49396a2b6e11 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -484,14 +484,13 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 		retcode = ptr_to_compat(ka->sa.sa_restorer);
 	} else {
 		/* Set up sigreturn pointer */
+		void *sigreturn_base = current->mm->context.vdso;
 		unsigned int idx = thumb << 1;
 
 		if (ka->sa.sa_flags & SA_SIGINFO)
 			idx += 3;
 
-		retcode = AARCH32_VECTORS_BASE +
-			  AARCH32_KERN_SIGRET_CODE_OFFSET +
-			  (idx << 2) + thumb;
+		retcode = ptr_to_compat(sigreturn_base) + (idx << 2) + thumb;
 	}
 
 	regs->regs[0]	= usig;
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index a2c2478e7d78..6208b7ba4593 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
 /*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * Additional userspace pages setup for AArch64 and AArch32.
+ * - AArch64: vDSO pages setup, vDSO data page update.
+ * - AArch32: sigreturn and kuser helpers pages setup.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -50,64 +52,121 @@ static union {
 struct vdso_data *vdso_data = &vdso_data_store.data;
 
 #ifdef CONFIG_COMPAT
-/*
- * Create and map the vectors page for AArch32 tasks.
- */
-static struct page *vectors_page[1] __ro_after_init;
 
-static int __init alloc_vectors_page(void)
-{
-	extern char __kuser_helper_start[], __kuser_helper_end[];
-	extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+/* sigreturn trampolines page */
+static struct page *sigreturn_page __ro_after_init;
+static const struct vm_special_mapping sigreturn_spec = {
+	.name	= "[sigreturn]",
+	.pages	= &sigreturn_page,
+};
 
-	int kuser_sz = __kuser_helper_end - __kuser_helper_start;
-	int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-	unsigned long vpage;
+static int __init aarch32_sigreturn_init(void)
+{
+	extern char __aarch32_sigret_code_start, __aarch32_sigret_code_end;
 
-	vpage = get_zeroed_page(GFP_ATOMIC);
+	size_t sigret_sz =
+		&__aarch32_sigret_code_end - &__aarch32_sigret_code_start;
+	struct page *page;
+	unsigned long page_addr;
 
-	if (!vpage)
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page)
 		return -ENOMEM;
+	page_addr = (unsigned long)page_address(page);
 
-	/* kuser helpers */
-	memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
-		kuser_sz);
-
-	/* sigreturn code */
-	memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
-               __aarch32_sigret_code_start, sigret_sz);
+	memcpy((void *)page_addr, &__aarch32_sigret_code_start, sigret_sz);
 
-	flush_icache_range(vpage, vpage + PAGE_SIZE);
-	vectors_page[0] = virt_to_page(vpage);
+	flush_icache_range(page_addr, page_addr + PAGE_SIZE);
 
+	sigreturn_page = page;
 	return 0;
 }
-arch_initcall(alloc_vectors_page);
+arch_initcall(aarch32_sigreturn_init);
 
-int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+static int sigreturn_setup(struct mm_struct *mm)
 {
-	struct mm_struct *mm = current->mm;
-	unsigned long addr = AARCH32_VECTORS_BASE;
-	static const struct vm_special_mapping spec = {
-		.name	= "[vectors]",
-		.pages	= vectors_page,
+	unsigned long addr;
+	void *ret;
+
+	addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+	if (IS_ERR_VALUE(addr)) {
+		ret = ERR_PTR(addr);
+		goto out;
+	}
+
+	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
+				       VM_READ|VM_EXEC|
+				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+				       &sigreturn_spec);
+	if (IS_ERR(ret))
+		goto out;
+
+	mm->context.vdso = (void *)addr;
+
+out:
+	return PTR_ERR_OR_ZERO(ret);
+}
+
+/* kuser helpers page */
+static struct page *kuser_helpers_page __ro_after_init;
+static const struct vm_special_mapping kuser_helpers_spec = {
+	.name	= "[kuserhelpers]",
+	.pages	= &kuser_helpers_page,
+};
+
+static int __init aarch32_kuser_helpers_init(void)
+{
+	extern char __kuser_helper_start, __kuser_helper_end;
 
-	};
+	size_t kuser_sz = &__kuser_helper_end - &__kuser_helper_start;
+	struct page *page;
+	unsigned long page_addr;
+
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page)
+		return -ENOMEM;
+	page_addr = (unsigned long)page_address(page);
+
+	memcpy((void *)(page_addr + 0x1000 - kuser_sz), &__kuser_helper_start,
+	       kuser_sz);
+
+	flush_icache_range(page_addr, page_addr + PAGE_SIZE);
+
+	kuser_helpers_page = page;
+	return 0;
+}
+arch_initcall(aarch32_kuser_helpers_init);
+
+static int kuser_helpers_setup(struct mm_struct *mm)
+{
 	void *ret;
 
+	/* Map the kuser helpers at the ABI-defined high address */
+	ret = _install_special_mapping(mm, AARCH32_KUSER_HELPERS_BASE, PAGE_SIZE,
+				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
+				       &kuser_helpers_spec);
+	return PTR_ERR_OR_ZERO(ret);
+}
+
+int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+	struct mm_struct *mm = current->mm;
+	int ret;
+
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
-	current->mm->context.vdso = (void *)addr;
 
-	/* Map vectors page at the high address. */
-	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
-				       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
-				       &spec);
+	ret = sigreturn_setup(mm);
+	if (ret)
+		goto out;
 
-	up_write(&mm->mmap_sem);
+	ret = kuser_helpers_setup(mm);
 
-	return PTR_ERR_OR_ZERO(ret);
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
 }
+
 #endif /* CONFIG_COMPAT */
 
 static struct vm_special_mapping vdso_spec[2] __ro_after_init = {
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 03/11] arm64: compat: Add CONFIG_KUSER_HELPERS
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 01/11] arm64: compat: Remove leftover variable declaration Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 02/11] arm64: compat: Split the sigreturn trampolines and kuser helpers Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 04/11] arm64: Refactor vDSO init/setup Kevin Brodsky
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Make it possible to disable the kuser helpers by adding a KUSER_HELPERS
config option (enabled by default). When disabled, all kuser
helper-related code is removed from the kernel and no mapping is
created at the fixed high address (0xffff0000); any attempt to use a
kuser helper from a 32-bit process will then result in a segfault.
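
As an illustration of what stops working when this option is disabled:
a 32-bit program using the documented kuser helper ABI (see
Documentation/arm/kernel_user_helpers.txt) does something like the
sketch below, and the call now faults instead of executing the helper
(the address is the documented __kuser_memory_barrier slot):

typedef void (*kuser_memory_barrier_t)(void);
#define __kuser_memory_barrier ((kuser_memory_barrier_t)0xffff0fa0)

int main(void)
{
	/* Faults with SIGSEGV if the [kuserhelpers] page is not mapped. */
	__kuser_memory_barrier();
	return 0;
}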

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/Kconfig              | 29 ++++++++++++++++++++
 arch/arm64/kernel/Makefile      |  3 ++-
 arch/arm64/kernel/kuser32.S     | 48 +++------------------------------
 arch/arm64/kernel/sigreturn32.S | 59 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/vdso.c        |  6 +++++
 5 files changed, 99 insertions(+), 46 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 969ef880d234..265d88c44cfc 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1013,6 +1013,35 @@ config COMPAT
 
 	  If you want to execute 32-bit userspace applications, say Y.
 
+config KUSER_HELPERS
+	bool "Enable the kuser helpers page in 32-bit processes"
+	depends on COMPAT
+	default y
+	help
+	  Warning: disabling this option may break 32-bit applications.
+
+	  Provide kuser helpers in a special purpose fixed-address page. The
+	  kernel provides helper code to userspace in read-only form at a fixed
+	  location to allow userspace to be independent of the CPU type fitted
+	  to the system. This permits 32-bit binaries to be run on ARMv4 through
+	  to ARMv8 without modification.
+
+	  See Documentation/arm/kernel_user_helpers.txt for details.
+
+	  However, the fixed-address nature of these helpers can be used by ROP
+	  (return-orientated programming) authors when creating exploits.
+
+	  If all of the 32-bit binaries and libraries which run on your platform
+	  are built specifically for your platform, and make no use of these
+	  helpers, then you can turn this option off to hinder such exploits.
+	  However, in that case, if a binary or library relying on those helpers
+	  is run, it will receive a SIGSEGV signal, which will terminate the
+	  program. Typically, binaries compiled for ARMv7 or later do not use
+	  the kuser helpers.
+
+	  Say N here only if you are absolutely certain that you do not need
+	  these helpers; otherwise, the safe option is to say Y.
+
 config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 7d66bbaafc0c..0850506d0217 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
 $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
-arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
+arm64-obj-$(CONFIG_COMPAT)		+= sys32.o sigreturn32.o signal32.o	\
 					   sys_compat.o entry32.o
+arm64-obj-$(CONFIG_KUSER_HELPERS)	+= kuser32.o
 arm64-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o entry-ftrace.o
 arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
 arm64-obj-$(CONFIG_ARM64_MODULE_PLTS)	+= module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index 997e6b27ff6a..d15b5c2935b3 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -20,16 +20,13 @@
  *
  * AArch32 user helpers.
  *
- * Each segment is 32-byte aligned and will be moved to the top of the high
- * vector page.  New segments (if ever needed) must be added in front of
- * existing ones.  This mechanism should be used only for things that are
- * really small and justified, and not be abused freely.
+ * These helpers are provided for compatibility with AArch32 binaries that
+ * still need them. They are installed at a fixed address by
+ * aarch32_setup_additional_pages().
  *
  * See Documentation/arm/kernel_user_helpers.txt for formal definitions.
  */
 
-#include <asm/unistd.h>
-
 	.align	5
 	.globl	__kuser_helper_start
 __kuser_helper_start:
@@ -77,42 +74,3 @@ __kuser_helper_version:			// 0xffff0ffc
 	.word	((__kuser_helper_end - __kuser_helper_start) >> 5)
 	.globl	__kuser_helper_end
 __kuser_helper_end:
-
-/*
- * AArch32 sigreturn code
- *
- * For ARM syscalls, the syscall number has to be loaded into r7.
- * We do not support an OABI userspace.
- *
- * For Thumb syscalls, we also pass the syscall number via r7. We therefore
- * need two 16-bit instructions.
- */
-	.globl __aarch32_sigret_code_start
-__aarch32_sigret_code_start:
-
-	/*
-	 * ARM Code
-	 */
-	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
-	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
-
-	/*
-	 * ARM code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
-
-	/*
-	 * Thumb code
-	 */
-	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
-	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
-
-        .globl __aarch32_sigret_code_end
-__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/sigreturn32.S b/arch/arm64/kernel/sigreturn32.S
new file mode 100644
index 000000000000..f2615e2286c5
--- /dev/null
+++ b/arch/arm64/kernel/sigreturn32.S
@@ -0,0 +1,59 @@
+/*
+ * sigreturn trampolines for AArch32.
+ *
+ * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ *
+ * AArch32 sigreturn code
+ *
+ * For ARM syscalls, the syscall number has to be loaded into r7.
+ * We do not support an OABI userspace.
+ *
+ * For Thumb syscalls, we also pass the syscall number via r7. We therefore
+ * need two 16-bit instructions.
+ */
+
+#include <asm/unistd.h>
+
+	.globl __aarch32_sigret_code_start
+__aarch32_sigret_code_start:
+
+	/*
+	 * ARM Code
+	 */
+	.byte	__NR_compat_sigreturn, 0x70, 0xa0, 0xe3		// mov	r7, #__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0x00, 0x00, 0xef		// svc	#__NR_compat_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_sigreturn, 0x27			// svc	#__NR_compat_sigreturn
+	.byte	__NR_compat_sigreturn, 0xdf			// mov	r7, #__NR_compat_sigreturn
+
+	/*
+	 * ARM code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3	// mov	r7, #__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0x00, 0x00, 0xef	// svc	#__NR_compat_rt_sigreturn
+
+	/*
+	 * Thumb code
+	 */
+	.byte	__NR_compat_rt_sigreturn, 0x27			// svc	#__NR_compat_rt_sigreturn
+	.byte	__NR_compat_rt_sigreturn, 0xdf			// mov	r7, #__NR_compat_rt_sigreturn
+
+        .globl __aarch32_sigret_code_end
+__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 6208b7ba4593..d189e5be5039 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -107,6 +107,8 @@ static int sigreturn_setup(struct mm_struct *mm)
 	return PTR_ERR_OR_ZERO(ret);
 }
 
+#ifdef CONFIG_KUSER_HELPERS
+
 /* kuser helpers page */
 static struct page *kuser_helpers_page __ro_after_init;
 static const struct vm_special_mapping kuser_helpers_spec = {
@@ -148,6 +150,8 @@ static int kuser_helpers_setup(struct mm_struct *mm)
 	return PTR_ERR_OR_ZERO(ret);
 }
 
+#endif /* CONFIG_KUSER_HELPERS */
+
 int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
@@ -160,7 +164,9 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	if (ret)
 		goto out;
 
+#ifdef CONFIG_KUSER_HELPERS
 	ret = kuser_helpers_setup(mm);
+#endif
 
 out:
 	up_write(&mm->mmap_sem);
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 04/11] arm64: Refactor vDSO init/setup
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (2 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 03/11] arm64: compat: Add CONFIG_KUSER_HELPERS Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 05/11] arm64: compat: Add time-related syscall numbers Kevin Brodsky
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Move the logic for setting up mappings and pages for the vDSO into
static functions. This makes the vDSO setup code more consistent with
the compat side and will allow it to be reused for the future compat
vDSO.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/kernel/vdso.c | 170 +++++++++++++++++++++++++++--------------------
 1 file changed, 99 insertions(+), 71 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index d189e5be5039..4d3048619be8 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -39,8 +39,10 @@
 #include <asm/vdso.h>
 #include <asm/vdso_datapage.h>
 
-extern char vdso_start, vdso_end;
-static unsigned long vdso_pages __ro_after_init;
+struct vdso_mappings {
+	unsigned long num_code_pages;
+	struct vm_special_mapping data_mapping, code_mapping;
+};
 
 /*
  * The vDSO data page.
@@ -51,6 +53,93 @@ static union {
 } vdso_data_store __page_aligned_data;
 struct vdso_data *vdso_data = &vdso_data_store.data;
 
+static int __init vdso_mappings_init(const char *name,
+				      const char *code_start,
+				      const char *code_end,
+				      struct vdso_mappings *mappings)
+{
+	unsigned long i, num_code_pages;
+	struct page **pages;
+
+	if (memcmp(code_start, "\177ELF", 4)) {
+		pr_err("%s is not a valid ELF object!\n", name);
+		return -EINVAL;
+	}
+
+	num_code_pages = (code_end - code_start) >> PAGE_SHIFT;
+	pr_info("%s: %ld pages (%ld code @ %p, %ld data @ %p)\n",
+		name, num_code_pages + 1, num_code_pages, code_start, 1L,
+		vdso_data);
+
+	/*
+	 * Allocate space for storing pointers to the vDSO code pages + the
+	 * data page. The pointers must have the same lifetime as the mappings,
+	 * which are static, so there is no need to keep track of the pointer
+	 * array to free it.
+	 */
+	pages = kmalloc_array(num_code_pages + 1, sizeof(struct page *),
+			      GFP_KERNEL);
+	if (pages == NULL)
+		return -ENOMEM;
+
+	/* Grab the vDSO data page */
+	pages[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
+
+	/* Grab the vDSO code pages */
+	for (i = 0; i < num_code_pages; i++)
+		pages[i + 1] = pfn_to_page(PHYS_PFN(__pa(code_start)) + i);
+
+	/* Populate the special mapping structures */
+	mappings->data_mapping = (struct vm_special_mapping) {
+		.name	= "[vvar]",
+		.pages	= &pages[0],
+	};
+
+	mappings->code_mapping = (struct vm_special_mapping) {
+		.name	= "[vdso]",
+		.pages	= &pages[1],
+	};
+
+	mappings->num_code_pages = num_code_pages;
+	return 0;
+}
+
+static int vdso_setup(struct mm_struct *mm,
+		      const struct vdso_mappings *mappings)
+{
+	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
+	void *ret;
+
+	vdso_text_len = mappings->num_code_pages << PAGE_SHIFT;
+	/* Be sure to map the data page */
+	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
+
+	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
+	if (IS_ERR_VALUE(vdso_base)) {
+		ret = ERR_PTR(vdso_base);
+		goto out;
+	}
+
+	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
+				       VM_READ|VM_MAYREAD,
+				       &mappings->data_mapping);
+	if (IS_ERR(ret))
+		goto out;
+
+	vdso_base += PAGE_SIZE;
+	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
+				       VM_READ|VM_EXEC|
+				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+				       &mappings->code_mapping);
+	if (IS_ERR(ret))
+		goto out;
+
+	mm->context.vdso = (void *)vdso_base;
+
+out:
+	return PTR_ERR_OR_ZERO(ret);
+}
+
 #ifdef CONFIG_COMPAT
 
 /* sigreturn trampolines page */
@@ -175,90 +264,29 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 
 #endif /* CONFIG_COMPAT */
 
-static struct vm_special_mapping vdso_spec[2] __ro_after_init = {
-	{
-		.name	= "[vvar]",
-	},
-	{
-		.name	= "[vdso]",
-	},
-};
+static struct vdso_mappings vdso_mappings __ro_after_init;
 
 static int __init vdso_init(void)
 {
-	int i;
-	struct page **vdso_pagelist;
-
-	if (memcmp(&vdso_start, "\177ELF", 4)) {
-		pr_err("vDSO is not a valid ELF object!\n");
-		return -EINVAL;
-	}
+	extern char vdso_start, vdso_end;
 
-	vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
-	pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
-		vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data);
-
-	/* Allocate the vDSO pagelist, plus a page for the data. */
-	vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
-				GFP_KERNEL);
-	if (vdso_pagelist == NULL)
-		return -ENOMEM;
-
-	/* Grab the vDSO data page. */
-	vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
-
-	/* Grab the vDSO code pages. */
-	for (i = 0; i < vdso_pages; i++)
-		vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
-
-	vdso_spec[0].pages = &vdso_pagelist[0];
-	vdso_spec[1].pages = &vdso_pagelist[1];
-
-	return 0;
+	return vdso_mappings_init("vdso", &vdso_start, &vdso_end,
+				   &vdso_mappings);
 }
 arch_initcall(vdso_init);
 
-int arch_setup_additional_pages(struct linux_binprm *bprm,
-				int uses_interp)
+int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
-	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
-	void *ret;
-
-	vdso_text_len = vdso_pages << PAGE_SHIFT;
-	/* Be sure to map the data page */
-	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
+	int ret;
 
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
-	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
-	if (IS_ERR_VALUE(vdso_base)) {
-		ret = ERR_PTR(vdso_base);
-		goto up_fail;
-	}
-	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
-				       VM_READ|VM_MAYREAD,
-				       &vdso_spec[0]);
-	if (IS_ERR(ret))
-		goto up_fail;
-
-	vdso_base += PAGE_SIZE;
-	mm->context.vdso = (void *)vdso_base;
-	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
-				       VM_READ|VM_EXEC|
-				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-				       &vdso_spec[1]);
-	if (IS_ERR(ret))
-		goto up_fail;
 
+	ret = vdso_setup(mm, &vdso_mappings);
 
 	up_write(&mm->mmap_sem);
-	return 0;
-
-up_fail:
-	mm->context.vdso = NULL;
-	up_write(&mm->mmap_sem);
-	return PTR_ERR(ret);
+	return ret;
 }
 
 /*
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 05/11] arm64: compat: Add time-related syscall numbers
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (3 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 04/11] arm64: Refactor vDSO init/setup Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 06/11] arm64: compat: Expose offset to registers in sigframes Kevin Brodsky
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

These syscall numbers will be used by the future compat vDSO.

The compat syscall numbers correspond to the arm syscall numbers, see
arch/arm/include/uapi/asm/unistd.h.
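
For illustration, the vDSO's syscall fallback path can then use these
numbers in the same way as the arm vDSO does (a sketch modelled on
arch/arm/vdso/vgettimeofday.c, not the final code):

#include <linux/time.h>
#include <asm/unistd.h>

static long clock_gettime_fallback(clockid_t _clkid, struct timespec *_ts)
{
	register struct timespec *ts asm("r1") = _ts;
	register clockid_t clkid asm("r0") = _clkid;
	register long ret asm("r0");
	register long nr asm("r7") = __NR_compat_clock_gettime;

	/* Fall back to the real syscall when the vDSO cannot help. */
	asm volatile("svc #0\n"
		     : "=r" (ret)
		     : "r" (clkid), "r" (ts), "r" (nr)
		     : "memory");

	return ret;
}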

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/unistd.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index e78ac26324bd..8d1c5f5e58f3 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -34,8 +34,10 @@
 #define __NR_compat_exit		1
 #define __NR_compat_read		3
 #define __NR_compat_write		4
+#define __NR_compat_gettimeofday	78
 #define __NR_compat_sigreturn		119
 #define __NR_compat_rt_sigreturn	173
+#define __NR_compat_clock_gettime	263
 
 /*
  * The following SVCs are ARM private.
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 06/11] arm64: compat: Expose offset to registers in sigframes
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (4 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 05/11] arm64: compat: Add time-related syscall numbers Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO Kevin Brodsky
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

This will be needed to provide unwinding information in the compat
sigreturn trampolines, part of the future compat vDSO. There is no
obvious header the compat_sig* structs should be moved to, so let's
put them in signal32.h.

Also fix minor style issues reported by checkpatch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/signal32.h | 46 +++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c   | 13 +++++++++++
 arch/arm64/kernel/signal32.c      | 46 ---------------------------------------
 3 files changed, 59 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 58e288aaf0ba..bcd0e139ee4a 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,6 +20,52 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
+struct compat_sigcontext {
+	/* We always set these two fields to 0 */
+	compat_ulong_t			trap_no;
+	compat_ulong_t			error_code;
+
+	compat_ulong_t			oldmask;
+	compat_ulong_t			arm_r0;
+	compat_ulong_t			arm_r1;
+	compat_ulong_t			arm_r2;
+	compat_ulong_t			arm_r3;
+	compat_ulong_t			arm_r4;
+	compat_ulong_t			arm_r5;
+	compat_ulong_t			arm_r6;
+	compat_ulong_t			arm_r7;
+	compat_ulong_t			arm_r8;
+	compat_ulong_t			arm_r9;
+	compat_ulong_t			arm_r10;
+	compat_ulong_t			arm_fp;
+	compat_ulong_t			arm_ip;
+	compat_ulong_t			arm_sp;
+	compat_ulong_t			arm_lr;
+	compat_ulong_t			arm_pc;
+	compat_ulong_t			arm_cpsr;
+	compat_ulong_t			fault_address;
+};
+
+struct compat_ucontext {
+	compat_ulong_t			uc_flags;
+	compat_uptr_t			uc_link;
+	compat_stack_t			uc_stack;
+	struct compat_sigcontext	uc_mcontext;
+	compat_sigset_t			uc_sigmask;
+	int __unused[32 - (sizeof(compat_sigset_t) / sizeof(int))];
+	compat_ulong_t			uc_regspace[128] __aligned(8);
+};
+
+struct compat_sigframe {
+	struct compat_ucontext		uc;
+	compat_ulong_t			retcode[2];
+};
+
+struct compat_rt_sigframe {
+	struct compat_siginfo		info;
+	struct compat_sigframe		sig;
+};
+
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
 		       struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 4a2f0f0fef32..965371186a5a 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -26,6 +26,7 @@
 #include <asm/cpufeature.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
+#include <asm/signal32.h>
 #include <asm/smp_plat.h>
 #include <asm/suspend.h>
 #include <asm/vdso_datapage.h>
@@ -75,6 +76,18 @@ int main(void)
   DEFINE(S_ORIG_ADDR_LIMIT,	offsetof(struct pt_regs, orig_addr_limit));
   DEFINE(S_FRAME_SIZE,		sizeof(struct pt_regs));
   BLANK();
+#ifdef CONFIG_COMPAT
+  DEFINE(COMPAT_SIGFRAME_REGS_OFFSET,
+				offsetof(struct compat_sigframe, uc) +
+				offsetof(struct compat_ucontext, uc_mcontext) +
+				offsetof(struct compat_sigcontext, arm_r0));
+  DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET,
+				offsetof(struct compat_rt_sigframe, sig) +
+				offsetof(struct compat_sigframe, uc) +
+				offsetof(struct compat_ucontext, uc_mcontext) +
+				offsetof(struct compat_sigcontext, arm_r0));
+  BLANK();
+#endif
   DEFINE(MM_CONTEXT_ID,		offsetof(struct mm_struct, context.id.counter));
   BLANK();
   DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 49396a2b6e11..281df761e208 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -29,42 +29,6 @@
 #include <asm/uaccess.h>
 #include <asm/unistd.h>
 
-struct compat_sigcontext {
-	/* We always set these two fields to 0 */
-	compat_ulong_t			trap_no;
-	compat_ulong_t			error_code;
-
-	compat_ulong_t			oldmask;
-	compat_ulong_t			arm_r0;
-	compat_ulong_t			arm_r1;
-	compat_ulong_t			arm_r2;
-	compat_ulong_t			arm_r3;
-	compat_ulong_t			arm_r4;
-	compat_ulong_t			arm_r5;
-	compat_ulong_t			arm_r6;
-	compat_ulong_t			arm_r7;
-	compat_ulong_t			arm_r8;
-	compat_ulong_t			arm_r9;
-	compat_ulong_t			arm_r10;
-	compat_ulong_t			arm_fp;
-	compat_ulong_t			arm_ip;
-	compat_ulong_t			arm_sp;
-	compat_ulong_t			arm_lr;
-	compat_ulong_t			arm_pc;
-	compat_ulong_t			arm_cpsr;
-	compat_ulong_t			fault_address;
-};
-
-struct compat_ucontext {
-	compat_ulong_t			uc_flags;
-	compat_uptr_t			uc_link;
-	compat_stack_t			uc_stack;
-	struct compat_sigcontext	uc_mcontext;
-	compat_sigset_t			uc_sigmask;
-	int		__unused[32 - (sizeof (compat_sigset_t) / sizeof (int))];
-	compat_ulong_t	uc_regspace[128] __attribute__((__aligned__(8)));
-};
-
 struct compat_vfp_sigframe {
 	compat_ulong_t	magic;
 	compat_ulong_t	size;
@@ -91,16 +55,6 @@ struct compat_aux_sigframe {
 	unsigned long			end_magic;
 } __attribute__((__aligned__(8)));
 
-struct compat_sigframe {
-	struct compat_ucontext	uc;
-	compat_ulong_t		retcode[2];
-};
-
-struct compat_rt_sigframe {
-	struct compat_siginfo info;
-	struct compat_sigframe sig;
-};
-
 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
 
 static inline int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set)
-- 
2.10.2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (5 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 06/11] arm64: compat: Expose offset to registers in sigframes Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-07 18:51   ` Nathan Lynch
  2016-12-06 16:03 ` [RFC PATCH v3 08/11] arm64: compat: 32-bit vDSO setup Kevin Brodsky
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Provide the files necessary for building a compat (AArch32) vDSO in
kernel/vdso32.

This is mostly an adaptation of the arm vDSO. The most significant
change in vgettimeofday.c is the use of the arm64 vdso_data struct,
allowing the vDSO data page to be shared between the 32 and 64-bit
vDSOs. Additionally, a different set of barrier macros is used (see
aarch32-barrier.h), as we want to support old 32-bit compilers that
may not support ARMv8 and its new barrier arguments (*ld).

In addition to the time functions, sigreturn trampolines are also
provided, aiming to replace those in the sigreturn page, as the latter
don't provide any unwinding information (and it's easier to have just
one "user code" page). The arm-specific unwinding directives are used,
based on glibc's implementation. Symbol offsets are made available to
the kernel using the same method as the 64-bit vDSO.

There is unfortunately an important caveat: we cannot get away with
hand-coding 32-bit instructions like in kernel/kuser32.S; this time we
really need a 32-bit compiler. The compat vDSO Makefile relies on
CROSS_COMPILE_ARM32 to provide a 32-bit compiler; appropriate logic
will be added to the arm64 Makefile later on to ensure that an attempt
to build the compat vDSO is made only if this variable has been set
properly.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
Note regarding the sigreturn trampolines: they should now have proper unwinding
information. I tested them by recompiling glibc to make it use the kernel
sigreturn (instead of setting its own sa_restorer), and then using glibc's
backtrace() inside a signal handler, or, in C++, throwing an exception from a
signal handler (hopefully nobody actually uses this "feature", but who
knows...). Note that in C, you must compile your application with
-fasynchronous-unwind-tables (which is the default on x86 but not arm), so that
backtrace() actually uses the unwinder to derive the stack trace. gdb is not
very useful in this case, because it has some magic to detect sigreturn
trampolines based on the exact sequence of instructions, and can derive the
stack trace from there, without debug/unwinding information.
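
For reference, the test boils down to something like this (assuming the
patched glibc described above, and built with -fasynchronous-unwind-tables
so that backtrace() goes through the unwinder):

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void handler(int sig)
{
	void *frames[16];
	int n = backtrace(frames, 16);

	/* With working unwind info, this lists the handler, the vDSO
	 * sigreturn trampoline and main(). */
	backtrace_symbols_fd(frames, n, STDOUT_FILENO);
	_exit(0);
}

int main(void)
{
	struct sigaction sa = { .sa_handler = handler };

	/* No sa_restorer here: the kernel-provided trampoline is used. */
	sigaction(SIGUSR1, &sa, NULL);
	raise(SIGUSR1);
	return 0;
}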

The only remaining issue (AFAIK) is that backtrace() doesn't manage to print
the name of the Thumb trampolines, and gdb is also confused (the frame shows up
as "#1  0x... in ?? ()"). This seems to be a bug somewhere in the unwinder (and
gdb can't use its magic instruction matching because it expects slightly
different instructions), but I can't tell for sure. Given their limited use, an
option would be to only keep the ARM trampolines; but for now they will remain
there for compatibility with arch/arm.

 arch/arm64/kernel/vdso32/Makefile          | 166 ++++++++++++++++
 arch/arm64/kernel/vdso32/aarch32-barrier.h |  33 ++++
 arch/arm64/kernel/vdso32/sigreturn.S       |  76 ++++++++
 arch/arm64/kernel/vdso32/vdso.S            |  32 ++++
 arch/arm64/kernel/vdso32/vdso.lds.S        |  93 +++++++++
 arch/arm64/kernel/vdso32/vgettimeofday.c   | 295 +++++++++++++++++++++++++++++
 6 files changed, 695 insertions(+)
 create mode 100644 arch/arm64/kernel/vdso32/Makefile
 create mode 100644 arch/arm64/kernel/vdso32/aarch32-barrier.h
 create mode 100644 arch/arm64/kernel/vdso32/sigreturn.S
 create mode 100644 arch/arm64/kernel/vdso32/vdso.S
 create mode 100644 arch/arm64/kernel/vdso32/vdso.lds.S
 create mode 100644 arch/arm64/kernel/vdso32/vgettimeofday.c

diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
new file mode 100644
index 000000000000..87cb8f01d8b7
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/Makefile
@@ -0,0 +1,166 @@
+#
+# Building a vDSO image for AArch32.
+#
+# Author: Kevin Brodsky <kevin.brodsky@arm.com>
+# A mix between the arm64 and arm vDSO Makefiles.
+
+CC_ARM32 := $(CROSS_COMPILE_ARM32)gcc
+
+# Same as cc-*option, but using CC_ARM32 instead of CC
+cc32-option = $(call try-run,\
+        $(CC_ARM32) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
+cc32-disable-warning = $(call try-run,\
+	$(CC_ARM32) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+cc32-ldoption = $(call try-run,\
+        $(CC_ARM32) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
+
+# We cannot use the global flags to compile the vDSO files, the main reason
+# being that the 32-bit compiler may be older than the main (64-bit) compiler
+# and therefore may not understand flags set using $(cc-option ...). Besides,
+# arch-specific options should be taken from the arm Makefile instead of the
+# arm64 one.
+# As a result we set our own flags here.
+
+# From top-level Makefile
+# NOSTDINC_FLAGS
+VDSO_CPPFLAGS := -nostdinc -isystem $(shell $(CC_ARM32) -print-file-name=include)
+VDSO_CPPFLAGS += $(LINUXINCLUDE)
+VDSO_CPPFLAGS += $(KBUILD_CPPFLAGS)
+
+# Common C and assembly flags
+# From top-level Makefile
+VDSO_CAFLAGS := $(VDSO_CPPFLAGS)
+VDSO_CAFLAGS += $(call cc32-option,-fno-PIE)
+ifdef CONFIG_DEBUG_INFO
+VDSO_CAFLAGS += -g
+endif
+ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC_ARM32)), y)
+VDSO_CAFLAGS += -DCC_HAVE_ASM_GOTO
+endif
+
+# From arm Makefile
+VDSO_CAFLAGS += $(call cc32-option,-fno-dwarf2-cfi-asm)
+VDSO_CAFLAGS += -mabi=aapcs-linux -mfloat-abi=soft
+ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
+VDSO_CAFLAGS += -mbig-endian
+else
+VDSO_CAFLAGS += -mlittle-endian
+endif
+
+# From arm vDSO Makefile
+VDSO_CAFLAGS += -fPIC -fno-builtin -fno-stack-protector
+VDSO_CAFLAGS += -DDISABLE_BRANCH_PROFILING
+
+# Try to compile for ARMv8. If the compiler is too old and doesn't support it,
+# fall back to v7. There is no easy way to check for what architecture the code
+# is being compiled, so define a macro specifying that (see arch/arm/Makefile).
+VDSO_CAFLAGS += $(call cc32-option,-march=armv8-a -D__LINUX_ARM_ARCH__=8,\
+                                   -march=armv7-a -D__LINUX_ARM_ARCH__=7)
+
+VDSO_CFLAGS := $(VDSO_CAFLAGS)
+# KBUILD_CFLAGS from top-level Makefile
+VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+               -fno-strict-aliasing -fno-common \
+               -Werror-implicit-function-declaration \
+               -Wno-format-security \
+               -std=gnu89
+VDSO_CFLAGS  += -O2
+# Some useful compiler-dependent flags from top-level Makefile
+VDSO_CFLAGS += $(call cc32-option,-Wdeclaration-after-statement,)
+VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+VDSO_CFLAGS += $(call cc32-option,-fno-strict-overflow)
+VDSO_CFLAGS += $(call cc32-option,-Werror=strict-prototypes)
+VDSO_CFLAGS += $(call cc32-option,-Werror=date-time)
+VDSO_CFLAGS += $(call cc32-option,-Werror=incompatible-pointer-types)
+
+# The 32-bit compiler does not provide 128-bit integers, which are used in
+# some headers that are indirectly included from the vDSO code.
+# This hack makes the compiler happy and should trigger a warning/error if
+# variables of such type are referenced.
+VDSO_CFLAGS += -D__uint128_t='void*'
+# Silence some warnings coming from headers that operate on long's
+# (on GCC 4.8 or older, there is unfortunately no way to silence this warning)
+VDSO_CFLAGS += $(call cc32-disable-warning,shift-count-overflow)
+VDSO_CFLAGS += -Wno-int-to-pointer-cast
+
+VDSO_AFLAGS := $(VDSO_CAFLAGS)
+VDSO_AFLAGS += -D__ASSEMBLY__
+
+VDSO_LDFLAGS := $(VDSO_CPPFLAGS)
+# From arm vDSO Makefile
+VDSO_LDFLAGS += -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1
+VDSO_LDFLAGS += -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
+VDSO_LDFLAGS += -nostdlib -shared -mfloat-abi=soft
+VDSO_LDFLAGS += $(call cc32-ldoption,-Wl$(comma)--hash-style=sysv)
+VDSO_LDFLAGS += $(call cc32-ldoption,-Wl$(comma)--build-id)
+VDSO_LDFLAGS += $(call cc32-ldoption,-fuse-ld=bfd)
+
+
+# Borrow vdsomunge.c from the arm vDSO
+munge := arch/arm/vdso/vdsomunge
+hostprogs-y := $(srctree)/$(munge)
+
+c-obj-vdso := vgettimeofday.o
+asm-obj-vdso := sigreturn.o
+
+# Build rules
+targets := $(c-obj-vdso) $(asm-obj-vdso) vdso.so vdso.so.dbg vdso.so.raw
+c-obj-vdso := $(addprefix $(obj)/, $(c-obj-vdso))
+asm-obj-vdso := $(addprefix $(obj)/, $(asm-obj-vdso))
+obj-vdso := $(c-obj-vdso) $(asm-obj-vdso)
+
+obj-y += vdso.o
+extra-y += vdso.lds
+CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+# Force dependency (vdso.s includes vdso.so through incbin)
+$(obj)/vdso.o: $(obj)/vdso.so
+
+include/generated/vdso32-offsets.h: $(obj)/vdso.so.dbg FORCE
+	$(call if_changed,vdsosym)
+
+# Strip rule for vdso.so
+$(obj)/vdso.so: OBJCOPYFLAGS := -S
+$(obj)/vdso.so: $(obj)/vdso.so.dbg FORCE
+	$(call if_changed,objcopy)
+
+$(obj)/vdso.so.dbg: $(obj)/vdso.so.raw $(objtree)/$(munge) FORCE
+	$(call if_changed,vdsomunge)
+
+# Link rule for the .so file, .lds has to be first
+$(obj)/vdso.so.raw: $(src)/vdso.lds $(obj-vdso) FORCE
+	$(call if_changed,vdsold)
+
+# Compilation rules for the vDSO sources
+$(c-obj-vdso): %.o: %.c FORCE
+	$(call if_changed_dep,vdsocc)
+$(asm-obj-vdso): %.o: %.S FORCE
+	$(call if_changed_dep,vdsoas)
+
+# Actual build commands
+quiet_cmd_vdsold = VDSOL   $@
+      cmd_vdsold = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_LDFLAGS) \
+                   -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) -o $@
+quiet_cmd_vdsocc = VDSOC   $@
+      cmd_vdsocc = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_CFLAGS) -c -o $@ $<
+quiet_cmd_vdsoas = VDSOA   $@
+      cmd_vdsoas = $(CC_ARM32) -Wp,-MD,$(depfile) $(VDSO_AFLAGS) -c -o $@ $<
+
+quiet_cmd_vdsomunge = MUNGE   $@
+      cmd_vdsomunge = $(objtree)/$(munge) $< $@
+
+# Generate vDSO offsets using helper script (borrowed from the 64-bit vDSO)
+gen-vdsosym := $(srctree)/$(src)/../vdso/gen_vdso_offsets.sh
+quiet_cmd_vdsosym = VDSOSYM $@
+# The AArch64 nm should be able to read an AArch32 binary
+      cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
+
+# Install commands for the unstripped file
+quiet_cmd_vdso_install = INSTALL $@
+      cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/vdso32.so
+
+vdso.so: $(obj)/vdso.so.dbg
+	@mkdir -p $(MODLIB)/vdso
+	$(call cmd,vdso_install)
+
+vdso_install: vdso.so
diff --git a/arch/arm64/kernel/vdso32/aarch32-barrier.h b/arch/arm64/kernel/vdso32/aarch32-barrier.h
new file mode 100644
index 000000000000..31bfee63d59d
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/aarch32-barrier.h
@@ -0,0 +1,33 @@
+/*
+ * Barriers for AArch32 code.
+ *
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __VDSO32_AARCH32_BARRIER_H
+#define __VDSO32_AARCH32_BARRIER_H
+
+#include <asm/barrier.h>
+
+#if __LINUX_ARM_ARCH__ >= 8
+#define aarch32_smp_mb()	dmb(ish)
+#define aarch32_smp_rmb()	dmb(ishld)
+#define aarch32_smp_wmb()	dmb(ishst)
+#else
+#define aarch32_smp_mb()	dmb(ish)
+#define aarch32_smp_rmb()	dmb(ish) /* ishld does not exist on ARMv7 */
+#define aarch32_smp_wmb()	dmb(ishst)
+#endif
+
+#endif	/* __VDSO32_AARCH32_BARRIER_H */
diff --git a/arch/arm64/kernel/vdso32/sigreturn.S b/arch/arm64/kernel/vdso32/sigreturn.S
new file mode 100644
index 000000000000..9267a4d78404
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/sigreturn.S
@@ -0,0 +1,76 @@
+/*
+ * Sigreturn trampolines for returning from a signal when the SA_RESTORER
+ * flag is not set.
+ *
+ * Copyright (C) 2016 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Based on glibc's arm sa_restorer. While this is not strictly necessary, we
+ * provide both A32 and T32 versions, in accordance with the arm sigreturn
+ * code.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/unistd.h>
+
+.macro sigreturn_trampoline name, syscall, regs_offset
+	/*
+	 * We provide directives for enabling stack unwinding through the
+	 * trampoline. On arm, CFI directives are only used for debugging (and
+	 * the vDSO is stripped of debug information), so only the arm-specific
+	 * unwinding directives are useful here.
+	 */
+	.fnstart
+	.save {r0-r15}
+	.pad #\regs_offset
+	/*
+	 * It is necessary to start the unwind tables at least one instruction
+	 * before the trampoline, as the unwinder will assume that the signal
+	 * handler has been called from the trampoline, that is just before
+	 * where the signal handler returns (mov r7, ...).
+	 */
+	nop
+ENTRY(\name)
+	mov	r7, #\syscall
+	svc	#0
+	.fnend
+	/*
+	 * We would like to use ENDPROC, but the macro uses @ which is a
+	 * comment symbol for arm assemblers, so directly use .type with %
+	 * instead.
+	 */
+	.type \name, %function
+END(\name)
+.endm
+
+	.text
+
+	.arm
+	sigreturn_trampoline __kernel_sigreturn_arm, \
+			     __NR_compat_sigreturn, \
+			     COMPAT_SIGFRAME_REGS_OFFSET
+
+	sigreturn_trampoline __kernel_rt_sigreturn_arm, \
+			     __NR_compat_rt_sigreturn, \
+			     COMPAT_RT_SIGFRAME_REGS_OFFSET
+
+	.thumb
+	sigreturn_trampoline __kernel_sigreturn_thumb, \
+			     __NR_compat_sigreturn, \
+			     COMPAT_SIGFRAME_REGS_OFFSET
+
+	sigreturn_trampoline __kernel_rt_sigreturn_thumb, \
+			     __NR_compat_rt_sigreturn, \
+			     COMPAT_RT_SIGFRAME_REGS_OFFSET
diff --git a/arch/arm64/kernel/vdso32/vdso.S b/arch/arm64/kernel/vdso32/vdso.S
new file mode 100644
index 000000000000..fe19ff70eb76
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vdso.S
@@ -0,0 +1,32 @@
+/*
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/page.h>
+
+	.globl vdso32_start, vdso32_end
+	.section .rodata
+	.balign PAGE_SIZE
+vdso32_start:
+	.incbin "arch/arm64/kernel/vdso32/vdso.so"
+	.balign PAGE_SIZE
+vdso32_end:
+
+	.previous
diff --git a/arch/arm64/kernel/vdso32/vdso.lds.S b/arch/arm64/kernel/vdso32/vdso.lds.S
new file mode 100644
index 000000000000..89560e80bd21
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vdso.lds.S
@@ -0,0 +1,93 @@
+/*
+ * Adapted from arm64 version.
+ *
+ * GNU linker script for the VDSO library.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ * Heavily based on the vDSO linker scripts for other archs.
+ */
+
+#include <linux/const.h>
+#include <asm/page.h>
+#include <asm/vdso.h>
+
+OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", "elf32-littlearm")
+OUTPUT_ARCH(arm)
+
+SECTIONS
+{
+	PROVIDE_HIDDEN(_vdso_data = . - PAGE_SIZE);
+	. = VDSO_LBASE + SIZEOF_HEADERS;
+
+	.hash		: { *(.hash) }			:text
+	.gnu.hash	: { *(.gnu.hash) }
+	.dynsym		: { *(.dynsym) }
+	.dynstr		: { *(.dynstr) }
+	.gnu.version	: { *(.gnu.version) }
+	.gnu.version_d	: { *(.gnu.version_d) }
+	.gnu.version_r	: { *(.gnu.version_r) }
+
+	.note		: { *(.note.*) }		:text	:note
+
+	.dynamic	: { *(.dynamic) }		:text	:dynamic
+
+	.rodata		: { *(.rodata*) }		:text
+
+	.text		: { *(.text*) }			:text	=0xe7f001f2
+
+	.got		: { *(.got) }
+	.rel.plt	: { *(.rel.plt) }
+
+	/DISCARD/	: {
+		*(.note.GNU-stack)
+		*(.data .data.* .gnu.linkonce.d.* .sdata*)
+		*(.bss .sbss .dynbss .dynsbss)
+	}
+}
+
+/*
+ * We must supply the ELF program headers explicitly to get just one
+ * PT_LOAD segment, and set the flags explicitly to make segments read-only.
+ */
+PHDRS
+{
+	text		PT_LOAD		FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
+	dynamic		PT_DYNAMIC	FLAGS(4);		/* PF_R */
+	note		PT_NOTE		FLAGS(4);		/* PF_R */
+}
+
+VERSION
+{
+	LINUX_2.6 {
+	global:
+		__vdso_clock_gettime;
+		__vdso_gettimeofday;
+		__kernel_sigreturn_arm;
+		__kernel_sigreturn_thumb;
+		__kernel_rt_sigreturn_arm;
+		__kernel_rt_sigreturn_thumb;
+	local: *;
+	};
+}
+
+/*
+ * Make the sigreturn code visible to the kernel.
+ */
+VDSO_compat_sigreturn_arm	= __kernel_sigreturn_arm;
+VDSO_compat_sigreturn_thumb	= __kernel_sigreturn_thumb;
+VDSO_compat_rt_sigreturn_arm	= __kernel_rt_sigreturn_arm;
+VDSO_compat_rt_sigreturn_thumb	= __kernel_rt_sigreturn_thumb;
diff --git a/arch/arm64/kernel/vdso32/vgettimeofday.c b/arch/arm64/kernel/vdso32/vgettimeofday.c
new file mode 100644
index 000000000000..53c3d1f82b26
--- /dev/null
+++ b/arch/arm64/kernel/vdso32/vgettimeofday.c
@@ -0,0 +1,295 @@
+/*
+ * Copyright 2015 Mentor Graphics Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2 of the
+ * License.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clocksource.h>
+#include <linux/compiler.h>
+#include <linux/time.h>
+#include <asm/unistd.h>
+#include <asm/vdso_datapage.h>
+
+#include "aarch32-barrier.h"
+
+/*
+ * We use the hidden visibility to prevent the compiler from generating a GOT
+ * relocation. Not only is going through a GOT useless (the entry couldn't and
+ * mustn't be overridden by another library), it does not even work: the linker
+ * cannot generate an absolute address to the data page.
+ *
+ * With the hidden visibility, the compiler simply generates a PC-relative
+ * relocation (R_ARM_REL32), and this is what we need.
+ */
+extern const struct vdso_data _vdso_data __attribute__((visibility("hidden")));
+
+static inline const struct vdso_data *get_vdso_data(void)
+{
+	const struct vdso_data *ret;
+	/*
+	 * This simply puts &_vdso_data into ret. The reason why we don't use
+	 * `ret = &_vdso_data` is that the compiler tends to optimise this in a
+	 * very suboptimal way: instead of keeping &_vdso_data in a register,
+	 * it goes through a relocation almost every time _vdso_data must be
+	 * accessed (even in subfunctions). This is both time and space
+	 * consuming: each relocation uses a word in the code section, and it
+	 * has to be loaded at runtime.
+	 *
+	 * This trick hides the assignment from the compiler. Since it cannot
+	 * track where the pointer comes from, it will only use one relocation
+	 * where get_vdso_data() is called, and then keep the result in a
+	 * register.
+	 */
+	asm("mov %0, %1" : "=r"(ret) : "r"(&_vdso_data));
+	return ret;
+}
+
+static notrace u32 __vdso_read_begin(const struct vdso_data *vdata)
+{
+	u32 seq;
+repeat:
+	seq = ACCESS_ONCE(vdata->tb_seq_count);
+	if (seq & 1) {
+		cpu_relax();
+		goto repeat;
+	}
+	return seq;
+}
+
+static notrace u32 vdso_read_begin(const struct vdso_data *vdata)
+{
+	u32 seq;
+
+	seq = __vdso_read_begin(vdata);
+
+	aarch32_smp_rmb();
+	return seq;
+}
+
+static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start)
+{
+	aarch32_smp_rmb();
+	return vdata->tb_seq_count != start;
+}
+
+/*
+ * Note: only AEABI is supported by the compat layer, we can assume AEABI
+ * syscall conventions are used.
+ */
+static notrace long clock_gettime_fallback(clockid_t _clkid,
+					   struct timespec *_ts)
+{
+	register struct timespec *ts asm("r1") = _ts;
+	register clockid_t clkid asm("r0") = _clkid;
+	register long ret asm ("r0");
+	register long nr asm("r7") = __NR_compat_clock_gettime;
+
+	asm volatile(
+	"	svc #0\n"
+	: "=r" (ret)
+	: "r" (clkid), "r" (ts), "r" (nr)
+	: "memory");
+
+	return ret;
+}
+
+static notrace int do_realtime_coarse(struct timespec *ts,
+				      const struct vdso_data *vdata)
+{
+	u32 seq;
+
+	do {
+		seq = vdso_read_begin(vdata);
+
+		ts->tv_sec = vdata->xtime_coarse_sec;
+		ts->tv_nsec = vdata->xtime_coarse_nsec;
+
+	} while (vdso_read_retry(vdata, seq));
+
+	return 0;
+}
+
+static notrace int do_monotonic_coarse(struct timespec *ts,
+				       const struct vdso_data *vdata)
+{
+	struct timespec tomono;
+	u32 seq;
+
+	do {
+		seq = vdso_read_begin(vdata);
+
+		ts->tv_sec = vdata->xtime_coarse_sec;
+		ts->tv_nsec = vdata->xtime_coarse_nsec;
+
+		tomono.tv_sec = vdata->wtm_clock_sec;
+		tomono.tv_nsec = vdata->wtm_clock_nsec;
+
+	} while (vdso_read_retry(vdata, seq));
+
+	ts->tv_sec += tomono.tv_sec;
+	timespec_add_ns(ts, tomono.tv_nsec);
+
+	return 0;
+}
+
+static notrace u64 get_ns(const struct vdso_data *vdata)
+{
+	u64 cycle_delta;
+	u64 cycle_now;
+	u64 nsec;
+
+	/* AArch32 implementation of arch_counter_get_cntvct() */
+	isb();
+	asm volatile("mrrc p15, 1, %Q0, %R0, c14" : "=r" (cycle_now));
+
+	/* The virtual counter provides 56 significant bits. */
+	cycle_delta = (cycle_now - vdata->cs_cycle_last) & CLOCKSOURCE_MASK(56);
+
+	nsec = (cycle_delta * vdata->cs_mono_mult) + vdata->xtime_clock_nsec;
+	nsec >>= vdata->cs_shift;
+
+	return nsec;
+}
+
+static notrace int do_realtime(struct timespec *ts,
+			       const struct vdso_data *vdata)
+{
+	u64 nsecs;
+	u32 seq;
+
+	do {
+		seq = vdso_read_begin(vdata);
+
+		if (vdata->use_syscall)
+			return -1;
+
+		ts->tv_sec = vdata->xtime_clock_sec;
+		nsecs = get_ns(vdata);
+
+	} while (vdso_read_retry(vdata, seq));
+
+	ts->tv_nsec = 0;
+	timespec_add_ns(ts, nsecs);
+
+	return 0;
+}
+
+static notrace int do_monotonic(struct timespec *ts,
+				const struct vdso_data *vdata)
+{
+	struct timespec tomono;
+	u64 nsecs;
+	u32 seq;
+
+	do {
+		seq = vdso_read_begin(vdata);
+
+		if (vdata->use_syscall)
+			return -1;
+
+		ts->tv_sec = vdata->xtime_clock_sec;
+		nsecs = get_ns(vdata);
+
+		tomono.tv_sec = vdata->wtm_clock_sec;
+		tomono.tv_nsec = vdata->wtm_clock_nsec;
+
+	} while (vdso_read_retry(vdata, seq));
+
+	ts->tv_sec += tomono.tv_sec;
+	ts->tv_nsec = 0;
+	timespec_add_ns(ts, nsecs + tomono.tv_nsec);
+
+	return 0;
+}
+
+notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
+{
+	const struct vdso_data *vdata = get_vdso_data();
+	int ret = -1;
+
+	switch (clkid) {
+	case CLOCK_REALTIME_COARSE:
+		ret = do_realtime_coarse(ts, vdata);
+		break;
+	case CLOCK_MONOTONIC_COARSE:
+		ret = do_monotonic_coarse(ts, vdata);
+		break;
+	case CLOCK_REALTIME:
+		ret = do_realtime(ts, vdata);
+		break;
+	case CLOCK_MONOTONIC:
+		ret = do_monotonic(ts, vdata);
+		break;
+	default:
+		break;
+	}
+
+	if (ret)
+		ret = clock_gettime_fallback(clkid, ts);
+
+	return ret;
+}
+
+static notrace long gettimeofday_fallback(struct timeval *_tv,
+					  struct timezone *_tz)
+{
+	register struct timezone *tz asm("r1") = _tz;
+	register struct timeval *tv asm("r0") = _tv;
+	register long ret asm ("r0");
+	register long nr asm("r7") = __NR_compat_gettimeofday;
+
+	asm volatile(
+	"	svc #0\n"
+	: "=r" (ret)
+	: "r" (tv), "r" (tz), "r" (nr)
+	: "memory");
+
+	return ret;
+}
+
+notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+	struct timespec ts;
+	const struct vdso_data *vdata = get_vdso_data();
+	int ret;
+
+	ret = do_realtime(&ts, vdata);
+	if (ret)
+		return gettimeofday_fallback(tv, tz);
+
+	if (tv) {
+		tv->tv_sec = ts.tv_sec;
+		tv->tv_usec = ts.tv_nsec / 1000;
+	}
+	if (tz) {
+		tz->tz_minuteswest = vdata->tz_minuteswest;
+		tz->tz_dsttime = vdata->tz_dsttime;
+	}
+
+	return ret;
+}
+
+/* Avoid unresolved references emitted by GCC */
+
+void __aeabi_unwind_cpp_pr0(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr1(void)
+{
+}
+
+void __aeabi_unwind_cpp_pr2(void)
+{
+}
-- 
2.10.2


* [RFC PATCH v3 08/11] arm64: compat: 32-bit vDSO setup
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (6 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 09/11] arm64: elf: Set AT_SYSINFO_EHDR in compat processes Kevin Brodsky
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

If the compat vDSO is enabled, install it in compat processes. In this
case, the compat vDSO replaces the sigreturn page (it provides its own
sigreturn trampolines).

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/kernel/vdso.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 4d3048619be8..473492f3ab2a 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -141,6 +141,20 @@ static int vdso_setup(struct mm_struct *mm,
 }
 
 #ifdef CONFIG_COMPAT
+#ifdef CONFIG_VDSO32
+
+static struct vdso_mappings vdso32_mappings __ro_after_init;
+
+static int __init vdso32_init(void)
+{
+	extern char vdso32_start, vdso32_end;
+
+	return vdso_mappings_init("vdso32", &vdso32_start, &vdso32_end,
+				  &vdso32_mappings);
+}
+arch_initcall(vdso32_init);
+
+#else /* CONFIG_VDSO32 */
 
 /* sigreturn trampolines page */
 static struct page *sigreturn_page __ro_after_init;
@@ -196,6 +210,8 @@ static int sigreturn_setup(struct mm_struct *mm)
 	return PTR_ERR_OR_ZERO(ret);
 }
 
+#endif /* CONFIG_VDSO32 */
+
 #ifdef CONFIG_KUSER_HELPERS
 
 /* kuser helpers page */
@@ -249,7 +265,11 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	if (down_write_killable(&mm->mmap_sem))
 		return -EINTR;
 
+#ifdef CONFIG_VDSO32
+	ret = vdso_setup(mm, &vdso32_mappings);
+#else
 	ret = sigreturn_setup(mm);
+#endif
 	if (ret)
 		goto out;
 
-- 
2.10.2


* [RFC PATCH v3 09/11] arm64: elf: Set AT_SYSINFO_EHDR in compat processes
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (7 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 08/11] arm64: compat: 32-bit vDSO setup Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 10/11] arm64: compat: Use vDSO sigreturn trampolines if available Kevin Brodsky
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

If the compat vDSO is enabled, we need to set AT_SYSINFO_EHDR in the
auxiliary vector of compat processes to the address of the vDSO code
page, so that the dynamic linker can find it (just like the regular vDSO).

Note that we cast context.vdso to unsigned long, instead of elf_addr_t,
because elf_addr_t is 32-bit in compat_binfmt_elf.c, and casting to u32
would trigger a pointer narrowing warning.
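
For illustration, a minimal userspace check could look like the following
(hypothetical example, not part of this patch; it only assumes a libc that
provides getauxval()):

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* AT_SYSINFO_EHDR holds the address of the mapped vDSO's ELF header. */
	unsigned long ehdr = getauxval(AT_SYSINFO_EHDR);

	if (ehdr)
		printf("vDSO mapped at 0x%lx\n", ehdr);
	else
		printf("no vDSO advertised in the auxiliary vector\n");

	return 0;
}

A 32-bit libc typically does the same lookup before resolving the __vdso_*
symbols from the mapped object.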

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/elf.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 7da9452596ad..765c633950b7 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -141,11 +141,12 @@ typedef struct user_fpsimd_state elf_fpregset_t;
 #define SET_PERSONALITY(ex)		clear_thread_flag(TIF_32BIT);
 
 /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
-#define ARCH_DLINFO							\
+#define _SET_AUX_ENT_VDSO						\
 do {									\
 	NEW_AUX_ENT(AT_SYSINFO_EHDR,					\
-		    (elf_addr_t)current->mm->context.vdso);		\
+		    (unsigned long)current->mm->context.vdso);		\
 } while (0)
+#define ARCH_DLINFO _SET_AUX_ENT_VDSO
 
 #define ARCH_HAS_SETUP_ADDITIONAL_PAGES
 struct linux_binprm;
@@ -184,7 +185,11 @@ typedef compat_elf_greg_t		compat_elf_gregset_t[COMPAT_ELF_NGREG];
 
 #define compat_start_thread		compat_start_thread
 #define COMPAT_SET_PERSONALITY(ex)	set_thread_flag(TIF_32BIT);
+#ifdef CONFIG_VDSO32
+#define COMPAT_ARCH_DLINFO		_SET_AUX_ENT_VDSO
+#else
 #define COMPAT_ARCH_DLINFO
+#endif
 extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
 					  int uses_interp);
 #define compat_arch_setup_additional_pages \
-- 
2.10.2


* [RFC PATCH v3 10/11] arm64: compat: Use vDSO sigreturn trampolines if available
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (8 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 09/11] arm64: elf: Set AT_SYSINFO_EHDR in compat processes Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2016-12-06 16:03 ` [RFC PATCH v3 11/11] arm64: Wire up and expose the new compat vDSO Kevin Brodsky
  2017-02-17 17:16 ` [RFC PATCH v3 00/11] arm64: Add a " Will Deacon
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

If the compat vDSO is enabled, it replaces the sigreturn page.
Therefore, we use the sigreturn trampolines the vDSO provides instead.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/vdso.h |  3 +++
 arch/arm64/kernel/signal32.c  | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/vdso.h b/arch/arm64/include/asm/vdso.h
index 839ce0031bd5..f2a952338f1e 100644
--- a/arch/arm64/include/asm/vdso.h
+++ b/arch/arm64/include/asm/vdso.h
@@ -28,6 +28,9 @@
 #ifndef __ASSEMBLY__
 
 #include <generated/vdso-offsets.h>
+#ifdef CONFIG_VDSO32
+#include <generated/vdso32-offsets.h>
+#endif
 
 #define VDSO_SYMBOL(base, name)						   \
 ({									   \
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 281df761e208..b6e8ff7949a4 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -28,6 +28,7 @@
 #include <asm/signal32.h>
 #include <asm/uaccess.h>
 #include <asm/unistd.h>
+#include <asm/vdso.h>
 
 struct compat_vfp_sigframe {
 	compat_ulong_t	magic;
@@ -438,6 +439,19 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 		retcode = ptr_to_compat(ka->sa.sa_restorer);
 	} else {
 		/* Set up sigreturn pointer */
+#ifdef CONFIG_VDSO32
+		void *vdso_base = current->mm->context.vdso;
+		void *trampoline =
+			(ka->sa.sa_flags & SA_SIGINFO
+			 ? (thumb
+			    ? VDSO_SYMBOL(vdso_base, compat_rt_sigreturn_thumb)
+			    : VDSO_SYMBOL(vdso_base, compat_rt_sigreturn_arm))
+			 : (thumb
+			    ? VDSO_SYMBOL(vdso_base, compat_sigreturn_thumb)
+			    : VDSO_SYMBOL(vdso_base, compat_sigreturn_arm)));
+
+		retcode = ptr_to_compat(trampoline) + thumb;
+#else
 		void *sigreturn_base = current->mm->context.vdso;
 		unsigned int idx = thumb << 1;
 
@@ -445,6 +459,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
 			idx += 3;
 
 		retcode = ptr_to_compat(sigreturn_base) + (idx << 2) + thumb;
+#endif
 	}
 
 	regs->regs[0]	= usig;
-- 
2.10.2


* [RFC PATCH v3 11/11] arm64: Wire up and expose the new compat vDSO
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (9 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 10/11] arm64: compat: Use vDSO sigreturn trampolines if available Kevin Brodsky
@ 2016-12-06 16:03 ` Kevin Brodsky
  2017-02-17 17:16 ` [RFC PATCH v3 00/11] arm64: Add a " Will Deacon
  11 siblings, 0 replies; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-06 16:03 UTC (permalink / raw)
  To: linux-arm-kernel

Expose the new compat vDSO via the COMPAT_VDSO config option.

The option is not enabled in defconfig because we really need a 32-bit
compiler this time, and we rely on the user to provide it themselves
by setting CROSS_COMPILE_ARM32. Therefore enabling the option by
default would make little sense, since the user must explicitly set a
non-standard environment variable anyway.

CONFIG_COMPAT_VDSO is not directly used in the code, because we want
to ignore it (build as if it were not set) if the user didn't set
CROSS_COMPILE_ARM32 properly. If the variable has been set to a valid
prefix, CONFIG_VDSO32 will be set; this is the option that the code
and Makefiles test.

For more flexibility, like CROSS_COMPILE, CROSS_COMPILE_ARM32 can also
be set via CONFIG_CROSS_COMPILE_ARM32 (the environment variable
overrides the config option, as expected).
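
For example, a hypothetical build invocation could look like this (the
toolchain prefixes are illustrative; any AArch64/AArch32 GCC pair on PATH
should do):

  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
       CROSS_COMPILE_ARM32=arm-linux-gnueabihf- Image

If CROSS_COMPILE_ARM32 is missing or does not point to a working gcc, the
build simply proceeds as if COMPAT_VDSO had not been set.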

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/Kconfig         | 23 +++++++++++++++++++++++
 arch/arm64/Makefile        | 28 ++++++++++++++++++++++++++--
 arch/arm64/kernel/Makefile |  8 ++++++--
 3 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 265d88c44cfc..721b8151ff8c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1046,6 +1046,29 @@ config SYSVIPC_COMPAT
 	def_bool y
 	depends on COMPAT && SYSVIPC
 
+config COMPAT_VDSO
+	bool "32-bit vDSO"
+	depends on COMPAT
+	default n
+	help
+	  Warning: a 32-bit toolchain is necessary to build the vDSO. You
+	  must explicitly define which toolchain should be used by setting
+	  CROSS_COMPILE_ARM32 to the prefix of the 32-bit toolchain (same format
+	  as CROSS_COMPILE). If a 32-bit compiler cannot be found, a warning
+	  will be printed and the kernel will be built as if COMPAT_VDSO had not
+	  been set.
+
+	  Provide a vDSO to 32-bit processes. It includes the symbols provided
+	  by the vDSO from the 32-bit kernel, so that a 32-bit libc can use
+	  the compat vDSO without modification. It also provides sigreturn
+	  trampolines, replacing the sigreturn page.
+
+config CROSS_COMPILE_ARM32
+	string "32-bit toolchain prefix"
+	help
+	  Same as setting CROSS_COMPILE_ARM32 in the environment, but saved for
+	  future builds. The environment variable overrides this config option.
+
 endmenu
 
 menu "Power management options"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 3635b8662724..370d8de0c100 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -37,10 +37,32 @@ $(warning LSE atomics not supported by binutils)
   endif
 endif
 
-KBUILD_CFLAGS	+= -mgeneral-regs-only $(lseinstr)
+ifeq ($(CONFIG_COMPAT_VDSO), y)
+  CROSS_COMPILE_ARM32 ?= $(CONFIG_CROSS_COMPILE_ARM32:"%"=%)
+
+  # Check that the user has provided a valid prefix for the 32-bit toolchain.
+  # To prevent selecting the system gcc by default, the prefix is not allowed to
+  # be empty, unlike CROSS_COMPILE. In the unlikely event that the system gcc
+  # is actually the 32-bit ARM compiler to be used, the variable can be set to
+  # the dirname (e.g. CROSS_COMPILE_ARM32=/usr/bin/).
+  # Note: this Makefile is read both before and after regenerating the
+  # config (if needed). Any warning appearing before the config has been
+  # regenerated should be ignored.
+  ifeq ($(CROSS_COMPILE_ARM32),)
+    $(warning CROSS_COMPILE_ARM32 not defined or empty, the compat vDSO will not be built)
+  else ifeq ($(shell which $(CROSS_COMPILE_ARM32)gcc 2> /dev/null),)
+    $(warning $(CROSS_COMPILE_ARM32)gcc not found, the compat vDSO will not be built)
+  else
+    export CROSS_COMPILE_ARM32
+    export CONFIG_VDSO32 := y
+    vdso32 := -DCONFIG_VDSO32=1
+  endif
+endif
+
+KBUILD_CFLAGS	+= -mgeneral-regs-only $(lseinstr) $(vdso32)
 KBUILD_CFLAGS	+= -fno-asynchronous-unwind-tables
 KBUILD_CFLAGS	+= $(call cc-option, -mpc-relative-literal-loads)
-KBUILD_AFLAGS	+= $(lseinstr)
+KBUILD_AFLAGS	+= $(lseinstr) $(vdso32)
 
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
@@ -139,6 +161,8 @@ archclean:
 prepare: vdso_prepare
 vdso_prepare: prepare0
 	$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso include/generated/vdso-offsets.h
+	$(if $(CONFIG_VDSO32),$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \
+					  include/generated/vdso32-offsets.h)
 
 define archhelp
   echo  '* Image.gz      - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 0850506d0217..b0e1005037cd 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,11 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
 $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
-arm64-obj-$(CONFIG_COMPAT)		+= sys32.o sigreturn32.o signal32.o	\
-					   sys_compat.o entry32.o
+arm64-obj-$(CONFIG_COMPAT)		+= sys32.o signal32.o sys_compat.o	\
+					   entry32.o
+ifneq ($(CONFIG_VDSO32),y)
+arm64-obj-$(CONFIG_COMPAT)		+= sigreturn32.o
+endif
 arm64-obj-$(CONFIG_KUSER_HELPERS)	+= kuser32.o
 arm64-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o entry-ftrace.o
 arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
@@ -53,6 +56,7 @@ arm64-obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o	\
 					   cpu-reset.o
 
 obj-y					+= $(arm64-obj-y) vdso/ probes/
+obj-$(CONFIG_VDSO32)			+= vdso32/
 obj-m					+= $(arm64-obj-m)
 head-y					:= head.o
 extra-y					+= $(head-y) vmlinux.lds
-- 
2.10.2


* [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO
  2016-12-06 16:03 ` [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO Kevin Brodsky
@ 2016-12-07 18:51   ` Nathan Lynch
  2016-12-08 10:59     ` Kevin Brodsky
  0 siblings, 1 reply; 16+ messages in thread
From: Nathan Lynch @ 2016-12-07 18:51 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kevin,

Kevin Brodsky <kevin.brodsky@arm.com> writes:
> diff --git a/arch/arm64/kernel/vdso32/vgettimeofday.c b/arch/arm64/kernel/vdso32/vgettimeofday.c
> new file mode 100644
> index 000000000000..53c3d1f82b26
> --- /dev/null
> +++ b/arch/arm64/kernel/vdso32/vgettimeofday.c

As I said at LPC last month, I'm not excited to have arch/arm's
vgettimeofday.c copied into arch/arm64 and tweaked; I'd rather see this
part of the implementation shared between arch/arm and arch/arm64
somehow, even if there's not an elegant way to do so.

The situation which must be avoided is one where the arch/arm64 compat
VDSO incompatibly diverges from the arch/arm VDSO.  That becomes much
less likely if there's only one copy of the userspace-exposed code to
maintain.


[...]

> +/*
> + * We use the hidden visibility to prevent the compiler from generating a GOT
> + * relocation. Not only is going through a GOT useless (the entry couldn't and
> + * musn't be overridden by another library), it does not even work: the linker
> + * cannot generate an absolute address to the data page.
> + *
> + * With the hidden visibility, the compiler simply generates a PC-relative
> + * relocation (R_ARM_REL32), and this is what we need.
> + */
> +extern const struct vdso_data _vdso_data __attribute__((visibility("hidden")));
> +
> +static inline const struct vdso_data *get_vdso_data(void)
> +{
> +	const struct vdso_data *ret;
> +	/*
> +	 * This simply puts &_vdso_data into ret. The reason why we don't use
> +	 * `ret = &_vdso_data` is that the compiler tends to optimise this in a
> +	 * very suboptimal way: instead of keeping &_vdso_data in a register,
> +	 * it goes through a relocation almost every time _vdso_data must be
> +	 * accessed (even in subfunctions). This is both time and space
> +	 * consuming: each relocation uses a word in the code section, and it
> +	 * has to be loaded at runtime.
> +	 *
> +	 * This trick hides the assignment from the compiler. Since it cannot
> +	 * track where the pointer comes from, it will only use one relocation
> +	 * where get_vdso_data() is called, and then keep the result in a
> +	 * register.
> +	 */
> +	asm("mov %0, %1" : "=r"(ret) : "r"(&_vdso_data));
> +	return ret;
> +}

I think this is an instructive case: this code looks like an improvement
over how the arch/arm VDSO accesses the data page.  Enhancements like
this (not to mention fixes) shouldn't require a patch-per-architecture;
things inevitably get out of sync and it's more of a maintenance burden
in the long term.


* [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO
  2016-12-07 18:51   ` Nathan Lynch
@ 2016-12-08 10:59     ` Kevin Brodsky
  2016-12-08 17:02       ` Nathan Lynch
  0 siblings, 1 reply; 16+ messages in thread
From: Kevin Brodsky @ 2016-12-08 10:59 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/12/16 18:51, Nathan Lynch wrote:
> Hi Kevin,

Hi Nathan,

Thanks for taking a look at these patches!

> Kevin Brodsky <kevin.brodsky@arm.com> writes:
>> diff --git a/arch/arm64/kernel/vdso32/vgettimeofday.c b/arch/arm64/kernel/vdso32/vgettimeofday.c
>> new file mode 100644
>> index 000000000000..53c3d1f82b26
>> --- /dev/null
>> +++ b/arch/arm64/kernel/vdso32/vgettimeofday.c
> As I said at LPC last month, I'm not excited to have arch/arm's
> vgettimeofday.c copied into arch/arm64 and tweaked; I'd rather see this
> part of the implementation shared between arch/arm and arch/arm64
> somehow, even if there's not an elegant way to do so.
>
> The situation which must be avoided is one where the arch/arm64 compat
> VDSO incompatibly diverges from the arch/arm VDSO.  That becomes much
> less likely if there's only one copy of the userspace-exposed code to
> maintain.

I still agree this is very suboptimal. However, I also think this is by far the most 
straightforward solution, and I would like to stick to it *as a first step*. If you 
diff this "tweaked" vgettimeofday.c with the original one, you'll see that it is 
really not obvious how to share even parts of vgettimeofday.c in the current state of arm 
and arm64. Besides, arm's vDSO hasn't changed much since it was introduced last 
year, and vgettimeofday.c hasn't changed at all, so I think it's fair to say 
divergences are unlikely to happen in the near future.

Nevertheless, I completely agree this is not a good situation in the long run, and I 
am not happy with leaving things in this state. After this series is merged, my plan 
is to try and align arm and arm64 both in terms of vDSO features and kernel-vDSO 
interface. This would notably involve:
* Aligning the vDSO data structs (especially vdso_data's member names), so that 
vgettimeofday.c can be (mostly) shared.
* Non-trivial Makefile work, so that the arm vDSO and the arm64 compat vDSO can 
share the same Makefile. The refactoring I did in v3 is a first step, as not using 
top-level flags makes such sharing more feasible.
* Adding support for CLOCK_MONOTONIC_RAW, as I did in the 64-bit vDSO.
* Moving arm's sigreturn trampolines to its vDSO (if enabled), and convincing glibc to 
use the kernel's trampolines like on arm64.

This represents a substantial amount of work, which I think should be in a separate 
series (this one has already grown bigger than I expected).

The final step would be to investigate whether the 64-bit vDSO could reasonably move 
to a C implementation, and if so, unify the three vDSOs.

> [...]
>
>> +/*
>> + * We use the hidden visibility to prevent the compiler from generating a GOT
>> + * relocation. Not only is going through a GOT useless (the entry couldn't and
>> + * musn't be overridden by another library), it does not even work: the linker
>> + * cannot generate an absolute address to the data page.
>> + *
>> + * With the hidden visibility, the compiler simply generates a PC-relative
>> + * relocation (R_ARM_REL32), and this is what we need.
>> + */
>> +extern const struct vdso_data _vdso_data __attribute__((visibility("hidden")));
>> +
>> +static inline const struct vdso_data *get_vdso_data(void)
>> +{
>> +	const struct vdso_data *ret;
>> +	/*
>> +	 * This simply puts &_vdso_data into ret. The reason why we don't use
>> +	 * `ret = &_vdso_data` is that the compiler tends to optimise this in a
>> +	 * very suboptimal way: instead of keeping &_vdso_data in a register,
>> +	 * it goes through a relocation almost every time _vdso_data must be
>> +	 * accessed (even in subfunctions). This is both time and space
>> +	 * consuming: each relocation uses a word in the code section, and it
>> +	 * has to be loaded at runtime.
>> +	 *
>> +	 * This trick hides the assignment from the compiler. Since it cannot
>> +	 * track where the pointer comes from, it will only use one relocation
>> +	 * where get_vdso_data() is called, and then keep the result in a
>> +	 * register.
>> +	 */
>> +	asm("mov %0, %1" : "=r"(ret) : "r"(&_vdso_data));
>> +	return ret;
>> +}
> I think this is an instructive case: this code looks like an improvement
> over how the arch/arm VDSO accesses the data page.  Enhancements like
> this (not to mention fixes) shouldn't require a patch-per-architecture;
> things inevitably get out of sync and it's more of a maintenance burden
> in the long term.

What arm does is suboptimal indeed, as it uses a whole function to derive vdso_data's 
address instead of a simple relocation. However, both methods are functionally 
equivalent. The approach I used here is simply consistent with how vdso_data is 
accessed in the 64-bit vDSO (with the two tricks above added because of an assembler 
difference, and because we are using C here and GCC is not being smart). Again, I 
agree the arm vDSO should use the same approach, but unification requires other 
changes beyond the scope of this first series.

Thanks,
Kevin


* [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO
  2016-12-08 10:59     ` Kevin Brodsky
@ 2016-12-08 17:02       ` Nathan Lynch
  0 siblings, 0 replies; 16+ messages in thread
From: Nathan Lynch @ 2016-12-08 17:02 UTC (permalink / raw)
  To: linux-arm-kernel

Kevin Brodsky <kevin.brodsky@arm.com> writes:
> On 07/12/16 18:51, Nathan Lynch wrote:
>> Kevin Brodsky <kevin.brodsky@arm.com> writes:
>>> diff --git a/arch/arm64/kernel/vdso32/vgettimeofday.c b/arch/arm64/kernel/vdso32/vgettimeofday.c
>>> new file mode 100644
>>> index 000000000000..53c3d1f82b26
>>> --- /dev/null
>>> +++ b/arch/arm64/kernel/vdso32/vgettimeofday.c
>> As I said at LPC last month, I'm not excited to have arch/arm's
>> vgettimeofday.c copied into arch/arm64 and tweaked; I'd rather see this
>> part of the implementation shared between arch/arm and arch/arm64
>> somehow, even if there's not an elegant way to do so.
>>
>> The situation which must be avoided is one where the arch/arm64 compat
>> VDSO incompatibly diverges from the arch/arm VDSO.  That becomes much
>> less likely if there's only one copy of the userspace-exposed code to
>> maintain.
>
> I still agree this is very suboptimal. However, I also think this is by far the most 
> straightforward solution, and I would like to stick to it *as a first step*. If you 
> diff this "tweaked" vgettimeofday.c with the original one, you'll see that it is 
> really not obvious to share even parts of vgettimeofday.c in the current state of arm 
> and arm64.

After adapting your get_vdso_data() to arch/arm/vdso, I came up with the
differences below, which seem surmountable mostly with the addition of
appropriate accessor functions for the data page, or changing the field
names to match.  So I can't say I'm persuaded.

I'm happy to review and facilitate changes to code in arch/arm/vdso to
make it possible to share it with arm64 for its compat VDSO.

 vgettimeofday.c |   84 ++++++++++++++++++++++++--------------------------------
 1 file changed, 37 insertions(+), 47 deletions(-)

--- arch/arm/vdso/vgettimeofday.c	2016-12-07 11:24:35.043856904 -0600
+++ kb-vgettimeofday.c	2016-12-08 10:02:14.031896779 -0600
@@ -15,20 +15,23 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/clocksource.h>
 #include <linux/compiler.h>
-#include <linux/hrtimer.h>
 #include <linux/time.h>
-#include <asm/arch_timer.h>
-#include <asm/barrier.h>
-#include <asm/bug.h>
-#include <asm/page.h>
 #include <asm/unistd.h>
 #include <asm/vdso_datapage.h>
 
-#ifndef CONFIG_AEABI
-#error This code depends on AEABI system call conventions
-#endif
+#include "aarch32-barrier.h"
 
+/*
+ * We use the hidden visibility to prevent the compiler from generating a GOT
+ * relocation. Not only is going through a GOT useless (the entry couldn't and
+ * musn't be overridden by another library), it does not even work: the linker
+ * cannot generate an absolute address to the data page.
+ *
+ * With the hidden visibility, the compiler simply generates a PC-relative
+ * relocation (R_ARM_REL32), and this is what we need.
+ */
 extern const struct vdso_data _vdso_data __attribute__((visibility("hidden")));
 
 static inline const struct vdso_data *get_vdso_data(void)
@@ -52,13 +55,11 @@
 	return ret;
 }
 
-#define __get_datapage() get_vdso_data()
-
 static notrace u32 __vdso_read_begin(const struct vdso_data *vdata)
 {
 	u32 seq;
 repeat:
-	seq = ACCESS_ONCE(vdata->seq_count);
+	seq = ACCESS_ONCE(vdata->tb_seq_count);
 	if (seq & 1) {
 		cpu_relax();
 		goto repeat;
@@ -72,26 +73,30 @@
 
 	seq = __vdso_read_begin(vdata);
 
-	smp_rmb(); /* Pairs with smp_wmb in vdso_write_end */
+	aarch32_smp_rmb();
 	return seq;
 }
 
 static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start)
 {
-	smp_rmb(); /* Pairs with smp_wmb in vdso_write_begin */
-	return vdata->seq_count != start;
+	aarch32_smp_rmb();
+	return vdata->tb_seq_count != start;
 }
 
+/*
+ * Note: only AEABI is supported by the compat layer, we can assume AEABI
+ * syscall conventions are used.
+ */
 static notrace long clock_gettime_fallback(clockid_t _clkid,
 					   struct timespec *_ts)
 {
 	register struct timespec *ts asm("r1") = _ts;
 	register clockid_t clkid asm("r0") = _clkid;
 	register long ret asm ("r0");
-	register long nr asm("r7") = __NR_clock_gettime;
+	register long nr asm("r7") = __NR_compat_clock_gettime;
 
 	asm volatile(
-	"	swi #0\n"
+	"	svc #0\n"
 	: "=r" (ret)
 	: "r" (clkid), "r" (ts), "r" (nr)
 	: "memory");
@@ -138,25 +143,27 @@
 	return 0;
 }
 
-#ifdef CONFIG_ARM_ARCH_TIMER
-
 static notrace u64 get_ns(const struct vdso_data *vdata)
 {
 	u64 cycle_delta;
 	u64 cycle_now;
 	u64 nsec;
 
-	cycle_now = arch_counter_get_cntvct();
+	/* AArch32 implementation of arch_counter_get_cntvct() */
+	isb();
+	asm volatile("mrrc p15, 1, %Q0, %R0, c14" : "=r" (cycle_now));
 
-	cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
+	/* The virtual counter provides 56 significant bits. */
+	cycle_delta = (cycle_now - vdata->cs_cycle_last) & CLOCKSOURCE_MASK(56);
 
-	nsec = (cycle_delta * vdata->cs_mult) + vdata->xtime_clock_snsec;
+	nsec = (cycle_delta * vdata->cs_mono_mult) + vdata->xtime_clock_nsec;
 	nsec >>= vdata->cs_shift;
 
 	return nsec;
 }
 
-static notrace int do_realtime(struct timespec *ts, const struct vdso_data *vdata)
+static notrace int do_realtime(struct timespec *ts,
+			       const struct vdso_data *vdata)
 {
 	u64 nsecs;
 	u32 seq;
@@ -164,7 +171,7 @@
 	do {
 		seq = vdso_read_begin(vdata);
 
-		if (!vdata->tk_is_cntvct)
+		if (vdata->use_syscall)
 			return -1;
 
 		ts->tv_sec = vdata->xtime_clock_sec;
@@ -178,7 +185,8 @@
 	return 0;
 }
 
-static notrace int do_monotonic(struct timespec *ts, const struct vdso_data *vdata)
+static notrace int do_monotonic(struct timespec *ts,
+				const struct vdso_data *vdata)
 {
 	struct timespec tomono;
 	u64 nsecs;
@@ -187,7 +195,7 @@
 	do {
 		seq = vdso_read_begin(vdata);
 
-		if (!vdata->tk_is_cntvct)
+		if (vdata->use_syscall)
 			return -1;
 
 		ts->tv_sec = vdata->xtime_clock_sec;
@@ -205,27 +213,11 @@
 	return 0;
 }
 
-#else /* CONFIG_ARM_ARCH_TIMER */
-
-static notrace int do_realtime(struct timespec *ts, const struct vdso_data *vdata)
-{
-	return -1;
-}
-
-static notrace int do_monotonic(struct timespec *ts, const struct vdso_data *vdata)
-{
-	return -1;
-}
-
-#endif /* CONFIG_ARM_ARCH_TIMER */
-
 notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
 {
-	const struct vdso_data *vdata;
+	const struct vdso_data *vdata = get_vdso_data();
 	int ret = -1;
 
-	vdata = __get_datapage();
-
 	switch (clkid) {
 	case CLOCK_REALTIME_COARSE:
 		ret = do_realtime_coarse(ts, vdata);
@@ -255,10 +247,10 @@
 	register struct timezone *tz asm("r1") = _tz;
 	register struct timeval *tv asm("r0") = _tv;
 	register long ret asm ("r0");
-	register long nr asm("r7") = __NR_gettimeofday;
+	register long nr asm("r7") = __NR_compat_gettimeofday;
 
 	asm volatile(
-	"	swi #0\n"
+	"	svc #0\n"
 	: "=r" (ret)
 	: "r" (tv), "r" (tz), "r" (nr)
 	: "memory");
@@ -269,11 +261,9 @@
 notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
 {
 	struct timespec ts;
-	const struct vdso_data *vdata;
+	const struct vdso_data *vdata = get_vdso_data();
 	int ret;
 
-	vdata = __get_datapage();
-
 	ret = do_realtime(&ts, vdata);
 	if (ret)
 		return gettimeofday_fallback(tv, tz);


* [RFC PATCH v3 00/11] arm64: Add a compat vDSO
  2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
                   ` (10 preceding siblings ...)
  2016-12-06 16:03 ` [RFC PATCH v3 11/11] arm64: Wire up and expose the new compat vDSO Kevin Brodsky
@ 2017-02-17 17:16 ` Will Deacon
  11 siblings, 0 replies; 16+ messages in thread
From: Will Deacon @ 2017-02-17 17:16 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Dec 06, 2016 at 04:03:42PM +0000, Kevin Brodsky wrote:
> Hi,

Hi Kevin,

> This series adds support for a compat (AArch32) vDSO, providing two
> userspace functionalities to compat processes:
> 
> * "Virtual" time syscalls (gettimeofday and clock_gettime). The
>   implementation is an adaptation of the arm vDSO (vgettimeofday.c),
>   sharing the data page with the 64-bit vDSO.
> 
> * sigreturn trampolines, following the example of the 64-bit vDSO
>   (sigreturn.S), but slightly more complicated because we provide A32
>   and T32 variants for both sigreturn and rt_sigreturn, and appropriate
>   arm-specific unwinding directives.
> 
> The first point brings the performance improvement expected of a vDSO,
> by implementing time syscalls directly in userspace. The second point
> allows to provide unwinding information for sigreturn trampolines,
> achieving feature parity with the trampolines provided by glibc.
> 
> Unfortunately, this time we cannot escape using a 32-bit toolchain. To
> build the compat VDSO, CONFIG_COMPAT_VDSO must be set *and*
> CROSS_COMPILE_ARM32 must be defined to the prefix of a 32-bit compiler.
> Failure to do so will not prevent building the kernel, but a warning
> will be printed and the compat vDSO will not be built.
> 
> v3 is a major refactor of the series. The main change is that the kuser
> helpers are no longer mutually exclusive with the 32-bit vDSO, and can
> be disabled independently of the 32-bit vDSO (they are kept enabled by
> default). To this end, the "old" sigreturn trampolines have been moved
> out of the [vectors] page into an independent [sigreturn] page, similar
> to [sigpage] on arm. The [vectors] page is now [kuserhelpers], and its
> presence is controlled by CONFIG_KUSER_HELPERS. The [sigreturn] page is
> only present when the 32-bit vDSO is not included (the latter provides
> its own sigreturn trampolines with unwinding information). The following
> table summarises which pages are now added in 32-bit processes:
> 
> +----------------+----------------+----------------+
> |     CONFIG     | !VDSO32        | VDSO32         |
> +----------------+----------------+----------------+
> | !KUSER_HELPERS | [sigreturn]    | [vvar]         |
> |                |                | [vdso]         |
> +----------------+----------------+----------------+
> | KUSER_HELPERS  | [sigreturn]    | [vvar]         |
> |                | [kuserhelpers] | [vdso]         |
> |                |                | [kuserhelpers] |
> +----------------+----------------+----------------+

Why is this different to arch/arm/? The idea of the COMPAT layer is to
do the same thing as arch/arm, so we should use [sigpage] and [vectors]
instead of [sigreturn] and [kuserhelpers]. I also think that the /proc/maps
code assumes the "gate vma" is at the end, but that's more of a niggle
than anything else.

The big issue with this series is that I share Nathan's concerns that we're
needlessly forking code from arch/arm/, without a commitment to follow up
with the refactoring effort.  So, whilst I can see that technically this is
looking pretty solid, it's a big ask to merge this into mainline as-is.

Will


Thread overview: 16+ messages
2016-12-06 16:03 [RFC PATCH v3 00/11] arm64: Add a compat vDSO Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 01/11] arm64: compat: Remove leftover variable declaration Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 02/11] arm64: compat: Split the sigreturn trampolines and kuser helpers Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 03/11] arm64: compat: Add CONFIG_KUSER_HELPERS Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 04/11] arm64: Refactor vDSO init/setup Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 05/11] arm64: compat: Add time-related syscall numbers Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 06/11] arm64: compat: Expose offset to registers in sigframes Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 07/11] arm64: compat: Add a 32-bit vDSO Kevin Brodsky
2016-12-07 18:51   ` Nathan Lynch
2016-12-08 10:59     ` Kevin Brodsky
2016-12-08 17:02       ` Nathan Lynch
2016-12-06 16:03 ` [RFC PATCH v3 08/11] arm64: compat: 32-bit vDSO setup Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 09/11] arm64: elf: Set AT_SYSINFO_EHDR in compat processes Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 10/11] arm64: compat: Use vDSO sigreturn trampolines if available Kevin Brodsky
2016-12-06 16:03 ` [RFC PATCH v3 11/11] arm64: Wire up and expose the new compat vDSO Kevin Brodsky
2017-02-17 17:16 ` [RFC PATCH v3 00/11] arm64: Add a " Will Deacon
