* [PATCH v6 0/7] Increased address space for 64 bit
@ 2024-06-26 13:53 Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 1/7] um: Add generic stub_syscall6 function Benjamin Berg
` (6 more replies)
0 siblings, 7 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
The new version of the patchset uses execveat on a memfd instead of
cloning twice to disable rseq. This should be much more robust going
forward as it will also avoid issues with other new features like mseal.
This patchset fixes a few bugs, adds a new method of discovering the
host task size and finally adds four-level page table support. All of
this makes the userspace TASK_SIZE much larger, which in turn permits
userspace applications that need a lot of virtual address space to work
fine.
One such application is ASAN, which uses a fixed address in memory that
would otherwise not be addressable.
v6:
* Apply fixes pointed out by Tiwei Bie
* Add temporary file fallback as memfd is not always supported
v5:
* Use execveat with memfd instead of double clone
v4:
* Do not use WNOHANG in wait for CLONE_VFORK
v3:
* Undo incorrect change in child wait loop
v2:
* Improved double clone logic using CLONE_VFORK
* Kconfig fixes pointed out by Tiwei Bie
Benjamin Berg (7):
um: Add generic stub_syscall6 function
um: Add generic stub_syscall1 function
um: use execveat to create userspace MMs
um: Fix stub_start address calculation
um: Limit TASK_SIZE to the addressable range
um: Discover host_task_size from envp
um: Add 4 level page table support
arch/um/Kconfig | 1 +
arch/um/include/asm/page.h | 14 +-
arch/um/include/asm/pgalloc.h | 11 +-
arch/um/include/asm/pgtable-4level.h | 119 +++++++++++++++++
arch/um/include/asm/pgtable.h | 6 +-
arch/um/include/shared/as-layout.h | 2 +-
arch/um/include/shared/os.h | 2 +-
arch/um/include/shared/skas/stub-data.h | 11 ++
arch/um/kernel/mem.c | 17 ++-
arch/um/kernel/um_arch.c | 14 +-
arch/um/os-Linux/main.c | 9 +-
arch/um/os-Linux/skas/process.c | 171 ++++++++++++++++--------
arch/x86/um/.gitignore | 2 +
arch/x86/um/Kconfig | 38 ++++--
arch/x86/um/Makefile | 32 ++++-
arch/x86/um/os-Linux/task_size.c | 19 ++-
arch/x86/um/shared/sysdep/stub_32.h | 22 +++
arch/x86/um/shared/sysdep/stub_64.h | 27 ++++
arch/x86/um/stub_elf.c | 86 ++++++++++++
arch/x86/um/stub_elf_embed.S | 11 ++
20 files changed, 528 insertions(+), 86 deletions(-)
create mode 100644 arch/um/include/asm/pgtable-4level.h
create mode 100644 arch/x86/um/.gitignore
create mode 100644 arch/x86/um/stub_elf.c
create mode 100644 arch/x86/um/stub_elf_embed.S
--
2.45.2
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v6 1/7] um: Add generic stub_syscall6 function
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 2/7] um: Add generic stub_syscall1 function Benjamin Berg
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
This function will be used by the new static stub binary.
Signed-off-by: Benjamin Berg <benjamin@sipsolutions.net>
---
arch/x86/um/shared/sysdep/stub_32.h | 22 ++++++++++++++++++++++
arch/x86/um/shared/sysdep/stub_64.h | 16 ++++++++++++++++
2 files changed, 38 insertions(+)
diff --git a/arch/x86/um/shared/sysdep/stub_32.h b/arch/x86/um/shared/sysdep/stub_32.h
index ea8b5a2d67af..ca2dd9263cf2 100644
--- a/arch/x86/um/shared/sysdep/stub_32.h
+++ b/arch/x86/um/shared/sysdep/stub_32.h
@@ -79,6 +79,28 @@ static __always_inline long stub_syscall5(long syscall, long arg1, long arg2,
return ret;
}
+static __always_inline long stub_syscall6(long syscall, long arg1, long arg2,
+ long arg3, long arg4, long arg5,
+ long arg6)
+{
+ struct syscall_args {
+ int ebx, ebp;
+ } args = { arg1, arg6 };
+ long ret;
+
+ __asm__ volatile ("pushl %%ebp;"
+ "movl 0x4(%%ebx),%%ebp;"
+ "movl (%%ebx),%%ebx;"
+ "int $0x80;"
+ "popl %%ebp"
+ : "=a" (ret)
+ : "0" (syscall), "b" (&args),
+ "c" (arg2), "d" (arg3), "S" (arg4), "D" (arg5)
+ : "memory");
+
+ return ret;
+}
+
static __always_inline void trap_myself(void)
{
__asm("int3");
diff --git a/arch/x86/um/shared/sysdep/stub_64.h b/arch/x86/um/shared/sysdep/stub_64.h
index b24168ef0ac4..c99ea6e06f96 100644
--- a/arch/x86/um/shared/sysdep/stub_64.h
+++ b/arch/x86/um/shared/sysdep/stub_64.h
@@ -79,6 +79,22 @@ static __always_inline long stub_syscall5(long syscall, long arg1, long arg2,
return ret;
}
+static __always_inline long stub_syscall6(long syscall, long arg1, long arg2,
+ long arg3, long arg4, long arg5,
+ long arg6)
+{
+ long ret;
+
+ __asm__ volatile ("movq %5,%%r10 ; movq %6,%%r8 ; movq %7,%%r9 ; "
+ __syscall
+ : "=a" (ret)
+ : "0" (syscall), "D" (arg1), "S" (arg2), "d" (arg3),
+ "g" (arg4), "g" (arg5), "g" (arg6)
+ : __syscall_clobber, "r10", "r8", "r9");
+
+ return ret;
+}
+
static __always_inline void trap_myself(void)
{
__asm("int3");
--
2.45.2
* [PATCH v6 2/7] um: Add generic stub_syscall1 function
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 1/7] um: Add generic stub_syscall6 function Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 3/7] um: use execveat to create userspace MMs Benjamin Berg
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
The 64-bit version did not have a stub_syscall1 function yet. Add it,
as it will be useful for implementing a static binary for stub loading.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/x86/um/shared/sysdep/stub_64.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/x86/um/shared/sysdep/stub_64.h b/arch/x86/um/shared/sysdep/stub_64.h
index c99ea6e06f96..6ed2ce4b54ba 100644
--- a/arch/x86/um/shared/sysdep/stub_64.h
+++ b/arch/x86/um/shared/sysdep/stub_64.h
@@ -27,6 +27,17 @@ static __always_inline long stub_syscall0(long syscall)
return ret;
}
+static __always_inline long stub_syscall1(long syscall, long arg1)
+{
+ long ret;
+
+ __asm__ volatile (__syscall
+ : "=a" (ret)
+ : "0" (syscall), "D" (arg1) : __syscall_clobber );
+
+ return ret;
+}
+
static __always_inline long stub_syscall2(long syscall, long arg1, long arg2)
{
long ret;
--
2.45.2
* [PATCH v6 3/7] um: use execveat to create userspace MMs
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 1/7] um: Add generic stub_syscall6 function Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 2/7] um: Add generic stub_syscall1 function Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-07-01 20:20 ` Johannes Berg
2024-06-26 13:53 ` [PATCH v6 4/7] um: Fix stub_start address calculation Benjamin Berg
` (3 subsequent siblings)
6 siblings, 1 reply; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
Using clone will not undo features that have been enabled by libc. An
example of this already happening is rseq, which could cause the kernel
to read/write memory of the userspace process. In the future the
standard library might also use mseal by default to protect itself,
which would also thwart our attempts at unmapping everything.
Solve all this by taking a step back and doing an execve into a tiny
static binary that sets up the minimal environment required for the
stub without using any standard library. That way we have a clean
execution environment that is fully under the control of UML.
Note that this changes things a bit, as the FDs are no longer shared
with the kernel. Instead, we explicitly share the FDs for the physical
memory and all existing iomem regions. Doing this is fine, as iomem
regions cannot be added at runtime.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
v6:
- Apply fixes pointed out by Tiwei Bie
- Add temporary file fallback as memfd is not always supported
---
arch/um/include/shared/skas/stub-data.h | 11 ++
arch/um/os-Linux/skas/process.c | 171 ++++++++++++++++--------
arch/x86/um/.gitignore | 2 +
arch/x86/um/Makefile | 32 ++++-
arch/x86/um/stub_elf.c | 86 ++++++++++++
arch/x86/um/stub_elf_embed.S | 11 ++
6 files changed, 255 insertions(+), 58 deletions(-)
create mode 100644 arch/x86/um/.gitignore
create mode 100644 arch/x86/um/stub_elf.c
create mode 100644 arch/x86/um/stub_elf_embed.S
diff --git a/arch/um/include/shared/skas/stub-data.h b/arch/um/include/shared/skas/stub-data.h
index 5e3ade3fb38b..83d210f59956 100644
--- a/arch/um/include/shared/skas/stub-data.h
+++ b/arch/um/include/shared/skas/stub-data.h
@@ -8,6 +8,17 @@
#ifndef __STUB_DATA_H
#define __STUB_DATA_H
+struct stub_init_data {
+ unsigned long stub_start;
+
+ int stub_code_fd;
+ unsigned long stub_code_offset;
+ int stub_data_fd;
+ unsigned long stub_data_offset;
+
+ unsigned long segv_handler;
+};
+
struct stub_data {
unsigned long offset;
int fd;
diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
index 41a288dcfc34..1ba117325bc2 100644
--- a/arch/um/os-Linux/skas/process.c
+++ b/arch/um/os-Linux/skas/process.c
@@ -23,6 +23,9 @@
#include <skas.h>
#include <sysdep/stub.h>
#include <linux/threads.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <mem_user.h>
#include "../internal.h"
int is_skas_winch(int pid, int fd, void *data)
@@ -188,69 +191,125 @@ static void handle_trap(int pid, struct uml_pt_regs *regs)
extern char __syscall_stub_start[];
-/**
- * userspace_tramp() - userspace trampoline
- * @stack: pointer to the new userspace stack page
- *
- * The userspace trampoline is used to setup a new userspace process in start_userspace() after it was clone()'ed.
- * This function will run on a temporary stack page.
- * It ptrace()'es itself, then
- * Two pages are mapped into the userspace address space:
- * - STUB_CODE (with EXEC), which contains the skas stub code
- * - STUB_DATA (with R/W), which contains a data page that is used to transfer certain data between the UML userspace process and the UML kernel.
- * Also for the userspace process a SIGSEGV handler is installed to catch pagefaults in the userspace process.
- * And last the process stops itself to give control to the UML kernel for this userspace process.
- *
- * Return: Always zero, otherwise the current userspace process is ended with non null exit() call
- */
+static int stub_exec_fd;
+
static int userspace_tramp(void *stack)
{
- struct sigaction sa;
- void *addr;
- int fd;
+ char *const argv[] = { "uml-userspace", NULL };
+ int pipe_fds[2];
unsigned long long offset;
- unsigned long segv_handler = STUB_CODE +
- (unsigned long) stub_segv_handler -
- (unsigned long) __syscall_stub_start;
-
- ptrace(PTRACE_TRACEME, 0, 0, 0);
-
- signal(SIGTERM, SIG_DFL);
- signal(SIGWINCH, SIG_IGN);
-
- fd = phys_mapping(uml_to_phys(__syscall_stub_start), &offset);
- addr = mmap64((void *) STUB_CODE, UM_KERN_PAGE_SIZE,
- PROT_EXEC, MAP_FIXED | MAP_PRIVATE, fd, offset);
- if (addr == MAP_FAILED) {
- os_info("mapping mmap stub at 0x%lx failed, errno = %d\n",
- STUB_CODE, errno);
- exit(1);
+ struct stub_init_data init_data = {
+ .stub_start = STUB_START,
+ .segv_handler = STUB_CODE +
+ (unsigned long) stub_segv_handler -
+ (unsigned long) __syscall_stub_start,
+ };
+ struct iomem_region *iomem;
+ int ret;
+
+ init_data.stub_code_fd = phys_mapping(uml_to_phys(__syscall_stub_start),
+ &offset);
+ init_data.stub_code_offset = MMAP_OFFSET(offset);
+
+ init_data.stub_data_fd = phys_mapping(uml_to_phys(stack), &offset);
+ init_data.stub_data_offset = MMAP_OFFSET(offset);
+
+ /* Set CLOEXEC on all FDs and then unset on all memory related FDs */
+ close_range(0, ~0U, CLOSE_RANGE_CLOEXEC);
+
+ fcntl(init_data.stub_data_fd, F_SETFD, 0);
+ for (iomem = iomem_regions; iomem; iomem = iomem->next)
+ fcntl(iomem->fd, F_SETFD, 0);
+
+ /* Create a pipe for init_data (no CLOEXEC) and dup2 to STDIN */
+ if (pipe2(pipe_fds, 0))
+ exit(2);
+
+ close(0);
+ if (dup2(pipe_fds[0], 0) < 0) {
+ close(pipe_fds[0]);
+ close(pipe_fds[1]);
+ exit(3);
}
+ close(pipe_fds[0]);
+
+ /* Write init_data and close write side */
+ ret = write(pipe_fds[1], &init_data, sizeof(init_data));
+ close(pipe_fds[1]);
+
+ if (ret != sizeof(init_data))
+ exit(4);
+
+ execveat(stub_exec_fd, "", argv, NULL, AT_EMPTY_PATH);
+
+ close(0);
+
+ exit(5);
+}
+
+extern char stub_elf_start[];
+extern char stub_elf_end[];
- fd = phys_mapping(uml_to_phys(stack), &offset);
- addr = mmap((void *) STUB_DATA,
- STUB_DATA_PAGES * UM_KERN_PAGE_SIZE, PROT_READ | PROT_WRITE,
- MAP_FIXED | MAP_SHARED, fd, offset);
- if (addr == MAP_FAILED) {
- os_info("mapping segfault stack at 0x%lx failed, errno = %d\n",
- STUB_DATA, errno);
- exit(1);
+static int __init init_stub_exec_fd(void)
+{
+ size_t len = 0;
+ int res;
+ char tmpfile[] = "/tmp/uml-userspace-XXXXXX";
+
+ stub_exec_fd = memfd_create("uml-userspace",
+ MFD_EXEC | MFD_CLOEXEC | MFD_ALLOW_SEALING);
+
+ if (stub_exec_fd < 0) {
+ printk(UM_KERN_INFO "Could not create executable memfd, using temporary file!\n");
+
+ stub_exec_fd = mkostemp(tmpfile, O_CLOEXEC);
+ if (stub_exec_fd < 0)
+ panic("Could not create temporary file for stub binary: %d",
+ errno);
+ } else {
+ tmpfile[0] = '\0';
}
- set_sigstack((void *) STUB_DATA, STUB_DATA_PAGES * UM_KERN_PAGE_SIZE);
- sigemptyset(&sa.sa_mask);
- sa.sa_flags = SA_ONSTACK | SA_NODEFER | SA_SIGINFO;
- sa.sa_sigaction = (void *) segv_handler;
- sa.sa_restorer = NULL;
- if (sigaction(SIGSEGV, &sa, NULL) < 0) {
- os_info("%s - setting SIGSEGV handler failed - errno = %d\n",
- __func__, errno);
- exit(1);
+ while (len < stub_elf_end - stub_elf_start) {
+ res = write(stub_exec_fd, stub_elf_start + len,
+ stub_elf_end - stub_elf_start - len);
+ if (res < 0) {
+ if (errno == EINTR)
+ continue;
+
+ if (tmpfile[0])
+ unlink(tmpfile);
+ panic("%s: Failed write to memfd: %d", __func__, errno);
+ }
+
+ len += res;
+ }
+
+ if (!tmpfile[0]) {
+ fcntl(stub_exec_fd, F_ADD_SEALS,
+ F_SEAL_WRITE | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_SEAL);
+ } else {
+ /* Only executable by us */
+ if (fchmod(stub_exec_fd, 00100) < 0) {
+ unlink(tmpfile);
+ panic("Could not make stub binary executable: %d",
+ errno);
+ }
+
+ close(stub_exec_fd);
+ stub_exec_fd = open(tmpfile, O_CLOEXEC);
+ if (stub_exec_fd < 0) {
+ unlink(tmpfile);
+ panic("Could not reopen stub binary: %d",
+ errno);
+ }
+
+ unlink(tmpfile);
}
- kill(os_getpid(), SIGSTOP);
return 0;
}
+__initcall(init_stub_exec_fd);
int userspace_pid[NR_CPUS];
int kill_userspace_mm[NR_CPUS];
@@ -270,7 +329,7 @@ int start_userspace(unsigned long stub_stack)
{
void *stack;
unsigned long sp;
- int pid, status, n, flags, err;
+ int pid, status, n, err;
/* setup a temporary stack page */
stack = mmap(NULL, UM_KERN_PAGE_SIZE,
@@ -286,10 +345,10 @@ int start_userspace(unsigned long stub_stack)
/* set stack pointer to the end of the stack page, so it can grow downwards */
sp = (unsigned long)stack + UM_KERN_PAGE_SIZE;
- flags = CLONE_FILES | SIGCHLD;
-
/* clone into new userspace process */
- pid = clone(userspace_tramp, (void *) sp, flags, (void *) stub_stack);
+ pid = clone(userspace_tramp, (void *) sp,
+ CLONE_VFORK | CLONE_VM | SIGCHLD,
+ (void *)stub_stack);
if (pid < 0) {
err = -errno;
printk(UM_KERN_ERR "%s : clone failed, errno = %d\n",
diff --git a/arch/x86/um/.gitignore b/arch/x86/um/.gitignore
new file mode 100644
index 000000000000..91f9df29d1c3
--- /dev/null
+++ b/arch/x86/um/.gitignore
@@ -0,0 +1,2 @@
+stub_elf
+stub_elf.dbg
diff --git a/arch/x86/um/Makefile b/arch/x86/um/Makefile
index 8bc72a51b257..6e8b59498b64 100644
--- a/arch/x86/um/Makefile
+++ b/arch/x86/um/Makefile
@@ -11,10 +11,37 @@ endif
obj-y = bugs_$(BITS).o delay.o fault.o ldt.o \
ptrace_$(BITS).o ptrace_user.o setjmp_$(BITS).o signal.o \
- stub_$(BITS).o stub_segv.o \
+ stub_$(BITS).o stub_segv.o stub_elf_embed.o \
sys_call_table_$(BITS).o sysrq_$(BITS).o tls_$(BITS).o \
mem_$(BITS).o subarch.o os-Linux/
+# Stub executable
+
+stub_elf_objs-y := stub_elf.o
+
+stub_elf_objs := $(foreach F,$(stub_elf_objs-y),$(obj)/$F)
+
+# Object file containing the ELF executable
+$(obj)/stub_elf_embed.o: $(src)/stub_elf_embed.S $(obj)/stub_elf
+
+$(obj)/stub_elf.dbg: $(stub_elf_objs) FORCE
+ $(call if_changed,stub_elf)
+
+$(obj)/stub_elf: OBJCOPYFLAGS := -S
+$(obj)/stub_elf: $(obj)/stub_elf.dbg FORCE
+ $(call if_changed,objcopy)
+
+quiet_cmd_stub_elf = STUB_ELF $@
+ cmd_stub_elf = $(CC) -nostdlib -o $@ \
+ $(CC_FLAGS_LTO) $(STUB_ELF_LDFLAGS) \
+ $(filter %.o,$^)
+
+STUB_ELF_LDFLAGS = -n -static
+
+targets += stub_elf.dbg stub_elf $(stub_elf_objs-y)
+
+# end
+
ifeq ($(CONFIG_X86_32),y)
obj-y += syscalls_32.o
@@ -46,7 +73,8 @@ targets += user-offsets.s
include/generated/user_constants.h: $(obj)/user-offsets.s FORCE
$(call filechk,offsets,__USER_CONSTANT_H__)
-UNPROFILE_OBJS := stub_segv.o
+UNPROFILE_OBJS := stub_segv.o stub_elf.o
CFLAGS_stub_segv.o := $(CFLAGS_NO_HARDENING)
+CFLAGS_stub_elf.o := $(CFLAGS_NO_HARDENING)
include $(srctree)/arch/um/scripts/Makefile.rules
diff --git a/arch/x86/um/stub_elf.c b/arch/x86/um/stub_elf.c
new file mode 100644
index 000000000000..2bf1a717065d
--- /dev/null
+++ b/arch/x86/um/stub_elf.c
@@ -0,0 +1,86 @@
+#include <sys/ptrace.h>
+#include <sys/prctl.h>
+#include <asm/unistd.h>
+#include <sysdep/stub.h>
+#include <stub-data.h>
+
+void _start(void);
+
+static void real_init(void)
+{
+ struct stub_init_data init_data;
+ unsigned long res;
+ struct {
+ void *ss_sp;
+ int ss_flags;
+ size_t ss_size;
+ } stack;
+ struct {
+ void *sa_handler_;
+ unsigned long sa_flags;
+ void *sa_restorer;
+ unsigned long sa_mask;
+ } sa = {};
+
+ /* set a nice name */
+ stub_syscall2(__NR_prctl, PR_SET_NAME, (unsigned long)"uml-userspace");
+
+ /* read information from STDIN and close it */
+ res = stub_syscall3(__NR_read, 0,
+ (unsigned long)&init_data, sizeof(init_data));
+ if (res != sizeof(init_data))
+ stub_syscall1(__NR_exit, 10);
+
+ stub_syscall1(__NR_close, 0);
+
+ /* map stub code + data */
+ res = stub_syscall6(STUB_MMAP_NR,
+ init_data.stub_start, UM_KERN_PAGE_SIZE,
+ PROT_READ | PROT_EXEC, MAP_FIXED | MAP_SHARED,
+ init_data.stub_code_fd, init_data.stub_code_offset);
+ if (res != init_data.stub_start)
+ stub_syscall1(__NR_exit, 11);
+
+ res = stub_syscall6(STUB_MMAP_NR,
+ init_data.stub_start + UM_KERN_PAGE_SIZE,
+ STUB_DATA_PAGES * UM_KERN_PAGE_SIZE,
+ PROT_READ | PROT_WRITE, MAP_FIXED | MAP_SHARED,
+ init_data.stub_data_fd, init_data.stub_data_offset);
+ if (res != init_data.stub_start + UM_KERN_PAGE_SIZE)
+ stub_syscall1(__NR_exit, 12);
+
+ /* setup signal stack inside stub data */
+ stack.ss_flags = 0;
+ stack.ss_size = STUB_DATA_PAGES * UM_KERN_PAGE_SIZE;
+ stack.ss_sp = (void *)init_data.stub_start + UM_KERN_PAGE_SIZE;
+ stub_syscall2(__NR_sigaltstack, (unsigned long)&stack, 0);
+
+ /* register SIGSEGV handler (SA_RESTORER, the handler never returns) */
+ sa.sa_flags = SA_ONSTACK | SA_NODEFER | SA_SIGINFO | 0x04000000;
+ sa.sa_handler_ = (void *) init_data.segv_handler;
+ sa.sa_restorer = NULL;
+ sa.sa_mask = 0L; /* No need to mask anything */
+ res = stub_syscall4(__NR_rt_sigaction, SIGSEGV, (unsigned long)&sa, 0,
+ sizeof(sa.sa_mask));
+ if (res < 0)
+ stub_syscall1(__NR_exit, 13);
+
+ stub_syscall4(__NR_ptrace, PTRACE_TRACEME, 0, 0, 0);
+
+ stub_syscall2(__NR_kill, stub_syscall0(__NR_getpid), SIGSTOP);
+
+ stub_syscall1(__NR_exit, 14);
+
+ __builtin_unreachable();
+}
+
+void _start(void)
+{
+ char *alloc;
+
+ /* bump the stack pointer as the stub is mapped into our stack */
+ alloc = __builtin_alloca((1 + STUB_DATA_PAGES) * UM_KERN_PAGE_SIZE);
+ asm volatile("" : "+r,m"(alloc) : : "memory");
+
+ real_init();
+}
diff --git a/arch/x86/um/stub_elf_embed.S b/arch/x86/um/stub_elf_embed.S
new file mode 100644
index 000000000000..e39321b4c313
--- /dev/null
+++ b/arch/x86/um/stub_elf_embed.S
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/init.h>
+#include <linux/linkage.h>
+
+__INITDATA
+
+SYM_DATA_START(stub_elf_start)
+ .incbin "arch/x86/um/stub_elf"
+SYM_DATA_END_LABEL(stub_elf_start, SYM_L_GLOBAL, stub_elf_end)
+
+__FINIT
--
2.45.2
* [PATCH v6 4/7] um: Fix stub_start address calculation
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
` (2 preceding siblings ...)
2024-06-26 13:53 ` [PATCH v6 3/7] um: use execveat to create userspace MMs Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 5/7] um: Limit TASK_SIZE to the addressable range Benjamin Berg
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
The calculation was wrong, as it only subtracted one and then rounded
down for alignment. When host_task_size is not already aligned, the
resulting stub_start is too high and the stub pages would extend past
host_task_size.
This probably worked fine so far because, on 64-bit, host_task_size is
bigger than the value returned by os_get_top_address.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/kernel/um_arch.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index e95f805e5004..0d8b1a73cd5b 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -331,7 +331,8 @@ int __init linux_main(int argc, char **argv)
/* reserve a few pages for the stubs (taking care of data alignment) */
/* align the data portion */
BUILD_BUG_ON(!is_power_of_2(STUB_DATA_PAGES));
- stub_start = (host_task_size - 1) & ~(STUB_DATA_PAGES * PAGE_SIZE - 1);
+ stub_start = (host_task_size - STUB_DATA_PAGES * PAGE_SIZE) &
+ ~(STUB_DATA_PAGES * PAGE_SIZE - 1);
/* another page for the code portion */
stub_start -= PAGE_SIZE;
host_task_size = stub_start;
--
2.45.2
* [PATCH v6 5/7] um: Limit TASK_SIZE to the addressable range
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
` (3 preceding siblings ...)
2024-06-26 13:53 ` [PATCH v6 4/7] um: Fix stub_start address calculation Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 6/7] um: Discover host_task_size from envp Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 7/7] um: Add 4 level page table support Benjamin Berg
6 siblings, 0 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
We may have a TASK_SIZE from the host that is bigger than UML is able to
address with a three-level pagetable. Guard against that by clipping the
maximum TASK_SIZE to the maximum addressable area.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/kernel/um_arch.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 0d8b1a73cd5b..5ab1a92b6bf7 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -337,11 +337,16 @@ int __init linux_main(int argc, char **argv)
stub_start -= PAGE_SIZE;
host_task_size = stub_start;
+ /* Limit TASK_SIZE to what is addressable by the page table */
+ task_size = host_task_size;
+ if (task_size > PTRS_PER_PGD * PGDIR_SIZE)
+ task_size = PTRS_PER_PGD * PGDIR_SIZE;
+
/*
* TASK_SIZE needs to be PGDIR_SIZE aligned or else exit_mmap craps
* out
*/
- task_size = host_task_size & PGDIR_MASK;
+ task_size = task_size & PGDIR_MASK;
/* OS sanity checks that need to happen before the kernel runs */
os_early_checks();
--
2.45.2
* [PATCH v6 6/7] um: Discover host_task_size from envp
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
` (4 preceding siblings ...)
2024-06-26 13:53 ` [PATCH v6 5/7] um: Limit TASK_SIZE to the addressable range Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 7/7] um: Add 4 level page table support Benjamin Berg
6 siblings, 0 replies; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
When loading the UML binary, the host kernel will place the stack at the
highest possible address. It will then map the program name and
environment variables onto the start of the stack.
As such, an easy way to figure out the host_task_size is to use the
highest pointer to an environment variable as a reference.
Ensure that this works by disabling address space layout randomization
and re-executing UML in case it was enabled.
This increases the available TASK_SIZE for 64-bit UML considerably.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/include/shared/as-layout.h | 2 +-
arch/um/include/shared/os.h | 2 +-
arch/um/kernel/um_arch.c | 4 ++--
arch/um/os-Linux/main.c | 9 ++++++++-
arch/x86/um/os-Linux/task_size.c | 19 +++++++++++++++----
5 files changed, 27 insertions(+), 9 deletions(-)
diff --git a/arch/um/include/shared/as-layout.h b/arch/um/include/shared/as-layout.h
index c22f46a757dc..480bb44ea1f2 100644
--- a/arch/um/include/shared/as-layout.h
+++ b/arch/um/include/shared/as-layout.h
@@ -48,7 +48,7 @@ extern unsigned long brk_start;
extern unsigned long host_task_size;
extern unsigned long stub_start;
-extern int linux_main(int argc, char **argv);
+extern int linux_main(int argc, char **argv, char **envp);
extern void uml_finishsetup(void);
struct siginfo;
diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
index aff8906304ea..db644fc67069 100644
--- a/arch/um/include/shared/os.h
+++ b/arch/um/include/shared/os.h
@@ -327,7 +327,7 @@ extern int __ignore_sigio_fd(int fd);
extern int get_pty(void);
/* sys-$ARCH/task_size.c */
-extern unsigned long os_get_top_address(void);
+extern unsigned long os_get_top_address(char **envp);
long syscall(long number, ...);
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 5ab1a92b6bf7..046eaf356b28 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -305,7 +305,7 @@ static void parse_cache_line(char *line)
}
}
-int __init linux_main(int argc, char **argv)
+int __init linux_main(int argc, char **argv, char **envp)
{
unsigned long avail, diff;
unsigned long virtmem_size, max_physmem;
@@ -327,7 +327,7 @@ int __init linux_main(int argc, char **argv)
if (have_console == 0)
add_arg(DEFAULT_COMMAND_LINE_CONSOLE);
- host_task_size = os_get_top_address();
+ host_task_size = os_get_top_address(envp);
/* reserve a few pages for the stubs (taking care of data alignment) */
/* align the data portion */
BUILD_BUG_ON(!is_power_of_2(STUB_DATA_PAGES));
diff --git a/arch/um/os-Linux/main.c b/arch/um/os-Linux/main.c
index f98ff79cdbf7..9a61b1767795 100644
--- a/arch/um/os-Linux/main.c
+++ b/arch/um/os-Linux/main.c
@@ -11,6 +11,7 @@
#include <signal.h>
#include <string.h>
#include <sys/resource.h>
+#include <sys/personality.h>
#include <as-layout.h>
#include <init.h>
#include <kern_util.h>
@@ -108,6 +109,12 @@ int __init main(int argc, char **argv, char **envp)
char **new_argv;
int ret, i, err;
+ /* Disable randomization and re-exec if it was changed successfully */
+ ret = personality(PER_LINUX | ADDR_NO_RANDOMIZE);
+ if (ret >= 0 && (ret & (PER_LINUX | ADDR_NO_RANDOMIZE)) !=
+ (PER_LINUX | ADDR_NO_RANDOMIZE))
+ execve("/proc/self/exe", argv, envp);
+
set_stklim();
setup_env_path();
@@ -140,7 +147,7 @@ int __init main(int argc, char **argv, char **envp)
#endif
change_sig(SIGPIPE, 0);
- ret = linux_main(argc, argv);
+ ret = linux_main(argc, argv, envp);
/*
* Disable SIGPROF - I have no idea why libc doesn't do this or turn
diff --git a/arch/x86/um/os-Linux/task_size.c b/arch/x86/um/os-Linux/task_size.c
index 1dc9adc20b1c..33c26291545a 100644
--- a/arch/x86/um/os-Linux/task_size.c
+++ b/arch/x86/um/os-Linux/task_size.c
@@ -65,7 +65,7 @@ static int page_ok(unsigned long page)
return ok;
}
-unsigned long os_get_top_address(void)
+unsigned long os_get_top_address(char **envp)
{
struct sigaction sa, old;
unsigned long bottom = 0;
@@ -142,10 +142,21 @@ unsigned long os_get_top_address(void)
#else
-unsigned long os_get_top_address(void)
+unsigned long os_get_top_address(char **envp)
{
- /* The old value of CONFIG_TOP_ADDR */
- return 0x7fc0002000;
+ unsigned long top_addr = (unsigned long) &top_addr;
+ int i;
+
+ /* The earliest variable should be after the program name in ELF */
+ for (i = 0; envp[i]; i++) {
+ if ((unsigned long) envp[i] > top_addr)
+ top_addr = (unsigned long) envp[i];
+ }
+
+ top_addr &= ~(UM_KERN_PAGE_SIZE - 1);
+ top_addr += UM_KERN_PAGE_SIZE;
+
+ return top_addr;
}
#endif
--
2.45.2
* [PATCH v6 7/7] um: Add 4 level page table support
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
` (5 preceding siblings ...)
2024-06-26 13:53 ` [PATCH v6 6/7] um: Discover host_task_size from envp Benjamin Berg
@ 2024-06-26 13:53 ` Benjamin Berg
2024-07-03 11:32 ` Johannes Berg
6 siblings, 1 reply; 10+ messages in thread
From: Benjamin Berg @ 2024-06-26 13:53 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
The larger memory space is useful for supporting more applications
inside UML. One example of this is ASAN instrumentation of userspace
applications, which requires addresses that would otherwise not be
available.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
v2:
- Do not hide option behind the EXPERT flag
- Fix typo in new "Two-level pagetables" option
---
arch/um/Kconfig | 1 +
arch/um/include/asm/page.h | 14 +++-
arch/um/include/asm/pgalloc.h | 11 ++-
arch/um/include/asm/pgtable-4level.h | 119 +++++++++++++++++++++++++++
arch/um/include/asm/pgtable.h | 6 +-
arch/um/kernel/mem.c | 17 +++-
arch/x86/um/Kconfig | 38 ++++++---
7 files changed, 189 insertions(+), 17 deletions(-)
create mode 100644 arch/um/include/asm/pgtable-4level.h
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 93a5a8999b07..5d111fc8ccb7 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -208,6 +208,7 @@ config MMAPPER
config PGTABLE_LEVELS
int
+ default 4 if 4_LEVEL_PGTABLES
default 3 if 3_LEVEL_PGTABLES
default 2
diff --git a/arch/um/include/asm/page.h b/arch/um/include/asm/page.h
index 9ef9a8aedfa6..c3b2ae03b60c 100644
--- a/arch/um/include/asm/page.h
+++ b/arch/um/include/asm/page.h
@@ -57,14 +57,22 @@ typedef unsigned long long phys_t;
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pgd; } pgd_t;
-#ifdef CONFIG_3_LEVEL_PGTABLES
+#if CONFIG_PGTABLE_LEVELS > 2
+
typedef struct { unsigned long pmd; } pmd_t;
#define pmd_val(x) ((x).pmd)
#define __pmd(x) ((pmd_t) { (x) } )
-#endif
-#define pte_val(x) ((x).pte)
+#if CONFIG_PGTABLE_LEVELS > 3
+typedef struct { unsigned long pud; } pud_t;
+#define pud_val(x) ((x).pud)
+#define __pud(x) ((pud_t) { (x) } )
+
+#endif /* CONFIG_PGTABLE_LEVELS > 3 */
+#endif /* CONFIG_PGTABLE_LEVELS > 2 */
+
+#define pte_val(x) ((x).pte)
#define pte_get_bits(p, bits) ((p).pte & (bits))
#define pte_set_bits(p, bits) ((p).pte |= (bits))
diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h
index de5e31c64793..04fb4e6969a4 100644
--- a/arch/um/include/asm/pgalloc.h
+++ b/arch/um/include/asm/pgalloc.h
@@ -31,7 +31,7 @@ do { \
tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \
} while (0)
-#ifdef CONFIG_3_LEVEL_PGTABLES
+#if CONFIG_PGTABLE_LEVELS > 2
#define __pmd_free_tlb(tlb, pmd, address) \
do { \
@@ -39,6 +39,15 @@ do { \
tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \
} while (0)
+#if CONFIG_PGTABLE_LEVELS > 3
+
+#define __pud_free_tlb(tlb, pud, address) \
+do { \
+ pagetable_pud_dtor(virt_to_ptdesc(pud)); \
+ tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \
+} while (0)
+
+#endif
#endif
#endif
diff --git a/arch/um/include/asm/pgtable-4level.h b/arch/um/include/asm/pgtable-4level.h
new file mode 100644
index 000000000000..f912fcc16b7a
--- /dev/null
+++ b/arch/um/include/asm/pgtable-4level.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2003 PathScale Inc
+ * Derived from include/asm-i386/pgtable.h
+ */
+
+#ifndef __UM_PGTABLE_4LEVEL_H
+#define __UM_PGTABLE_4LEVEL_H
+
+#include <asm-generic/pgtable-nop4d.h>
+
+/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
+
+#define PGDIR_SHIFT 39
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/* PUD_SHIFT determines the size of the area a third-level page table can
+ * map
+ */
+
+#define PUD_SHIFT 30
+#define PUD_SIZE (1UL << PUD_SHIFT)
+#define PUD_MASK (~(PUD_SIZE-1))
+
+/* PMD_SHIFT determines the size of the area a second-level page table can
+ * map
+ */
+
+#define PMD_SHIFT 21
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/*
+ * entries per page directory level
+ */
+
+#define PTRS_PER_PTE 512
+#define PTRS_PER_PMD 512
+#define PTRS_PER_PUD 512
+#define PTRS_PER_PGD 512
+
+#define USER_PTRS_PER_PGD ((TASK_SIZE + (PGDIR_SIZE - 1)) / PGDIR_SIZE)
+
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pte_val(e))
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pmd_val(e))
+#define pud_ERROR(e) \
+ printk("%s:%d: bad pud %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pud_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pgd_val(e))
+
+#define pud_none(x) (!(pud_val(x) & ~_PAGE_NEWPAGE))
+#define pud_bad(x) ((pud_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define pud_present(x) (pud_val(x) & _PAGE_PRESENT)
+#define pud_populate(mm, pud, pmd) \
+ set_pud(pud, __pud(_PAGE_TABLE + __pa(pmd)))
+
+#define set_pud(pudptr, pudval) (*(pudptr) = (pudval))
+
+#define p4d_none(x) (!(p4d_val(x) & ~_PAGE_NEWPAGE))
+#define p4d_bad(x) ((p4d_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define p4d_present(x) (p4d_val(x) & _PAGE_PRESENT)
+#define p4d_populate(mm, p4d, pud) \
+ set_p4d(p4d, __p4d(_PAGE_TABLE + __pa(pud)))
+
+#define set_p4d(p4dptr, p4dval) (*(p4dptr) = (p4dval))
+
+
+static inline int pgd_newpage(pgd_t pgd)
+{
+ return(pgd_val(pgd) & _PAGE_NEWPAGE);
+}
+
+static inline void pgd_mkuptodate(pgd_t pgd) { pgd_val(pgd) &= ~_PAGE_NEWPAGE; }
+
+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
+
+static inline void pud_clear (pud_t *pud)
+{
+ set_pud(pud, __pud(_PAGE_NEWPAGE));
+}
+
+static inline void p4d_clear (p4d_t *p4d)
+{
+ set_p4d(p4d, __p4d(_PAGE_NEWPAGE));
+}
+
+#define pud_page(pud) phys_to_page(pud_val(pud) & PAGE_MASK)
+#define pud_pgtable(pud) ((pmd_t *) __va(pud_val(pud) & PAGE_MASK))
+
+#define p4d_page(p4d) phys_to_page(p4d_val(p4d) & PAGE_MASK)
+#define p4d_pgtable(p4d) ((pud_t *) __va(p4d_val(p4d) & PAGE_MASK))
+
+static inline unsigned long pte_pfn(pte_t pte)
+{
+ return phys_to_pfn(pte_val(pte));
+}
+
+static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+{
+ pte_t pte;
+ phys_t phys = pfn_to_phys(page_nr);
+
+ pte_set_val(pte, phys, pgprot);
+ return pte;
+}
+
+static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+{
+ return __pmd((page_nr << PAGE_SHIFT) | pgprot_val(pgprot));
+}
+
+#endif
diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index e1ece21dbe3f..71a7651e2db7 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -24,9 +24,11 @@
/* We borrow bit 10 to store the exclusive marker in swap PTEs. */
#define _PAGE_SWP_EXCLUSIVE 0x400
-#ifdef CONFIG_3_LEVEL_PGTABLES
+#if CONFIG_PGTABLE_LEVELS == 4
+#include <asm/pgtable-4level.h>
+#elif CONFIG_PGTABLE_LEVELS == 3
#include <asm/pgtable-3level.h>
-#else
+#elif CONFIG_PGTABLE_LEVELS == 2
#include <asm/pgtable-2level.h>
#endif
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index ca91accd64fc..2dc0d90c0550 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -99,7 +99,7 @@ static void __init one_page_table_init(pmd_t *pmd)
static void __init one_md_table_init(pud_t *pud)
{
-#ifdef CONFIG_3_LEVEL_PGTABLES
+#if CONFIG_PGTABLE_LEVELS > 2
pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
if (!pmd_table)
panic("%s: Failed to allocate %lu bytes align=%lx\n",
@@ -110,6 +110,19 @@ static void __init one_md_table_init(pud_t *pud)
#endif
}
+static void __init one_ud_table_init(p4d_t *p4d)
+{
+#if CONFIG_PGTABLE_LEVELS > 3
+ pud_t *pud_table = (pud_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+ if (!pud_table)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
+ set_p4d(p4d, __p4d(_KERNPG_TABLE + (unsigned long) __pa(pud_table)));
+ BUG_ON(pud_table != pud_offset(p4d, 0));
+#endif
+}
+
static void __init fixrange_init(unsigned long start, unsigned long end,
pgd_t *pgd_base)
{
@@ -127,6 +140,8 @@ static void __init fixrange_init(unsigned long start, unsigned long end,
for ( ; (i < PTRS_PER_PGD) && (vaddr < end); pgd++, i++) {
p4d = p4d_offset(pgd, vaddr);
+ if (p4d_none(*p4d))
+ one_ud_table_init(p4d);
pud = pud_offset(p4d, vaddr);
if (pud_none(*pud))
one_md_table_init(pud);
diff --git a/arch/x86/um/Kconfig b/arch/x86/um/Kconfig
index 186f13268401..454ad560f627 100644
--- a/arch/x86/um/Kconfig
+++ b/arch/x86/um/Kconfig
@@ -28,16 +28,34 @@ config X86_64
def_bool 64BIT
select MODULES_USE_ELF_RELA
-config 3_LEVEL_PGTABLES
- bool "Three-level pagetables" if !64BIT
- default 64BIT
- help
- Three-level pagetables will let UML have more than 4G of physical
- memory. All the memory that can't be mapped directly will be treated
- as high memory.
-
- However, this it experimental on 32-bit architectures, so if unsure say
- N (on x86-64 it's automatically enabled, instead, as it's safe there).
+choice
+ prompt "Pagetable levels"
+ default 2_LEVEL_PGTABLES if !64BIT
+ default 4_LEVEL_PGTABLES if 64BIT
+
+ config 2_LEVEL_PGTABLES
+ bool "Two-level pagetables" if !64BIT
+ depends on !64BIT
+ help
+ Two-level page table for 32-bit architectures.
+
+ config 3_LEVEL_PGTABLES
+ bool "Three-level pagetables" if 64BIT
+ help
+ Three-level pagetables will let UML have more than 4G of physical
+ memory. All the memory that can't be mapped directly will be treated
+ as high memory.
+
+ However, this is experimental on 32-bit architectures, so if unsure say
+ N (on x86-64 it's automatically enabled, instead, as it's safe there).
+
+ config 4_LEVEL_PGTABLES
+ bool "Four-level pagetables" if 64BIT
+ depends on 64BIT
+ help
+ Four-level pagetables give a bigger address space, which can be
+ useful for some applications (e.g. ASAN).
+endchoice
config ARCH_HAS_SC_SIGNALS
def_bool !64BIT
--
2.45.2
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v6 3/7] um: use execveat to create userspace MMs
2024-06-26 13:53 ` [PATCH v6 3/7] um: use execveat to create userspace MMs Benjamin Berg
@ 2024-07-01 20:20 ` Johannes Berg
0 siblings, 0 replies; 10+ messages in thread
From: Johannes Berg @ 2024-07-01 20:20 UTC (permalink / raw)
To: Benjamin Berg, linux-um; +Cc: Benjamin Berg
On Wed, 2024-06-26 at 15:53 +0200, Benjamin Berg wrote:
>
> +static int __init init_stub_exec_fd(void)
> +{
> + size_t len = 0;
> + int res;
> + char tmpfile[] = "/tmp/uml-userspace-XXXXXX";
That seems awkward, perhaps it should use make_tempfile() from mem.c?
> + stub_exec_fd = mkostemp(tmpfile, O_CLOEXEC);
mkostemp() also requires _GNU_SOURCE according to the man page? It also
doesn't matter since you reopen the file anyway.
> + /* Only executable by us */
> + if (fchmod(stub_exec_fd, 00100) < 0) {
> + unlink(tmpfile);
> + panic("Could not make stub binary executable: %d",
> + errno);
> + }
> +
> + close(stub_exec_fd);
> + stub_exec_fd = open(tmpfile, O_CLOEXEC);
Hmm. Technically, I think you _have_ to open for reading, writing, or
both; not none? But then I guess you have to make it readable?
Might also want O_NOFOLLOW here?
> diff --git a/arch/x86/um/stub_elf.c b/arch/x86/um/stub_elf.c
> new file mode 100644
> index 000000000000..2bf1a717065d
> --- /dev/null
> +++ b/arch/x86/um/stub_elf.c
Is stub_elf really the right name? In practice it's going to be an ELF
file, but ... who cares? Not sure why it should be called "elf" vs. just
"stub" or "exec_stub" or something like that.
Also, is it really x86-specific?
> +++ b/arch/x86/um/stub_elf_embed.S
surely that isn't x86-specific?
johannes
* Re: [PATCH v6 7/7] um: Add 4 level page table support
2024-06-26 13:53 ` [PATCH v6 7/7] um: Add 4 level page table support Benjamin Berg
@ 2024-07-03 11:32 ` Johannes Berg
0 siblings, 0 replies; 10+ messages in thread
From: Johannes Berg @ 2024-07-03 11:32 UTC (permalink / raw)
To: Benjamin Berg, linux-um; +Cc: Benjamin Berg
On Wed, 2024-06-26 at 15:53 +0200, Benjamin Berg wrote:
>
> +choice
> + prompt "Pagetable levels"
> + default 2_LEVEL_PGTABLES if !64BIT
> + default 4_LEVEL_PGTABLES if 64BIT
> +
> + config 2_LEVEL_PGTABLES
> + bool "Two-level pagetables" if !64BIT
> + depends on !64BIT
> + help
> + Two-level page table for 32-bit architectures.
> +
> + config 3_LEVEL_PGTABLES
> + bool "Three-level pagetables" if 64BIT
> + help
> + Three-level pagetables will let UML have more than 4G of physical
> + memory. All the memory that can't be mapped directly will be treated
> + as high memory.
> +
> + However, this is experimental on 32-bit architectures, so if unsure say
> + N (on x86-64 it's automatically enabled, instead, as it's safe there).
You copied this but it's not actually true any more - three-level is
never default now.
johannes
end of thread, other threads:[~2024-07-03 11:33 UTC | newest]
Thread overview: 10+ messages
2024-06-26 13:53 [PATCH v6 0/7] Increased address space for 64 bit Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 1/7] um: Add generic stub_syscall6 function Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 2/7] um: Add generic stub_syscall1 function Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 3/7] um: use execveat to create userspace MMs Benjamin Berg
2024-07-01 20:20 ` Johannes Berg
2024-06-26 13:53 ` [PATCH v6 4/7] um: Fix stub_start address calculation Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 5/7] um: Limit TASK_SIZE to the addressable range Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 6/7] um: Discover host_task_size from envp Benjamin Berg
2024-06-26 13:53 ` [PATCH v6 7/7] um: Add 4 level page table support Benjamin Berg
2024-07-03 11:32 ` Johannes Berg