* [PULL 00/39] tcg and linux-user patch queue
@ 2024-02-22 20:42 Richard Henderson
2024-02-22 20:42 ` [PULL 01/39] tcg/aarch64: Apple does not align __int128_t in even registers Richard Henderson
` (39 more replies)
0 siblings, 40 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel
The following changes since commit 6630bc04bccadcf868165ad6bca5a964bb69b067:
Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging (2024-02-22 12:42:52 +0000)
are available in the Git repository at:
https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20240222
for you to fetch changes up to a06efc2615a1283e139e35ae8a8875925766268f:
linux-user: Remove pgb_dynamic alignment assertion (2024-02-22 09:04:05 -1000)
----------------------------------------------------------------
tcg/aarch64: Apple does not align __int128_t in even registers
accel/tcg: Fixes for page tables in mmio memory
linux-user: Remove qemu_host_page_{size,mask}, HOST_PAGE_ALIGN
migration: Remove qemu_host_page_size
hw/tpm: Remove qemu_host_page_size
softmmu: Remove qemu_host_page_{size,mask}, HOST_PAGE_ALIGN
linux-user: Split and reorganize target_mmap.
*-user: Deprecate and disable -p pagesize
linux-user: Allow TARGET_PAGE_BITS_VARY
target/alpha: Enable TARGET_PAGE_BITS_VARY for user-only
target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only
target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only
linux-user: Remove pgb_dynamic alignment assertion
----------------------------------------------------------------
Jonathan Cameron (1):
tcg: Avoid double lock if page tables happen to be in mmio memory.
Peter Maydell (1):
accel/tcg: Set can_do_io at start of lookup_tb_ptr helper
Richard Henderson (37):
tcg/aarch64: Apple does not align __int128_t in even registers
accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect
linux-user: Adjust SVr4 NULL page mapping
linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base
linux-user: Remove qemu_host_page_size from create_elf_tables
linux-user/hppa: Simplify init_guest_commpage
linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage
linux-user/arm: Remove qemu_host_page_size from init_guest_commpage
linux-user: Remove qemu_host_page_{size, mask} from mmap.c
linux-user: Remove REAL_HOST_PAGE_ALIGN from mmap.c
linux-user: Remove HOST_PAGE_ALIGN from mmap.c
migration: Remove qemu_host_page_size
hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init
softmmu/physmem: Remove qemu_host_page_size
softmmu/physmem: Remove HOST_PAGE_ALIGN
linux-user: Remove qemu_host_page_size from main
linux-user: Split out target_mmap__locked
linux-user: Move some mmap checks outside the lock
linux-user: Fix sub-host-page mmap
linux-user: Split out mmap_end
linux-user: Do early mmap placement only for reserved_va
linux-user: Split out do_munmap
linux-user: Use do_munmap for target_mmap failure
linux-user: Split out mmap_h_eq_g
linux-user: Split out mmap_h_lt_g
linux-user: Split out mmap_h_gt_g
tests/tcg: Remove run-test-mmap-*
tests/tcg: Extend file in linux-madvise.c
*-user: Deprecate and disable -p pagesize
cpu: Remove page_size_init
accel/tcg: Disconnect TargetPageDataNode from page size
linux-user: Allow TARGET_PAGE_BITS_VARY
target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only
linux-user: Bound mmap_min_addr by host page size
target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only
target/alpha: Enable TARGET_PAGE_BITS_VARY for user-only
linux-user: Remove pgb_dynamic alignment assertion
docs/about/deprecated.rst | 10 +
docs/user/main.rst | 3 -
bsd-user/qemu.h | 7 +
include/exec/cpu-common.h | 7 -
include/hw/core/cpu.h | 2 -
target/alpha/cpu-param.h | 16 +-
target/arm/cpu-param.h | 6 +-
target/ppc/cpu-param.h | 9 +-
tcg/aarch64/tcg-target.h | 6 +-
accel/tcg/cpu-exec.c | 8 +
accel/tcg/cputlb.c | 34 +-
accel/tcg/translate-all.c | 1 -
accel/tcg/user-exec.c | 31 +-
bsd-user/main.c | 22 +-
cpu-target.c | 13 -
hw/tpm/tpm_ppi.c | 6 +-
linux-user/elfload.c | 68 +--
linux-user/main.c | 34 +-
linux-user/mmap.c | 767 ++++++++++++++++++------------
migration/ram.c | 22 +-
system/physmem.c | 17 +-
system/vl.c | 1 -
target/arm/cpu.c | 51 +-
tests/tcg/multiarch/linux/linux-madvise.c | 2 +
tests/tcg/alpha/Makefile.target | 3 -
tests/tcg/arm/Makefile.target | 3 -
tests/tcg/hppa/Makefile.target | 3 -
tests/tcg/i386/Makefile.target | 3 -
tests/tcg/m68k/Makefile.target | 3 -
tests/tcg/multiarch/Makefile.target | 9 -
tests/tcg/ppc/Makefile.target | 12 -
tests/tcg/sh4/Makefile.target | 3 -
tests/tcg/sparc64/Makefile.target | 6 -
33 files changed, 700 insertions(+), 488 deletions(-)
delete mode 100644 tests/tcg/ppc/Makefile.target
delete mode 100644 tests/tcg/sparc64/Makefile.target
* [PULL 01/39] tcg/aarch64: Apple does not align __int128_t in even registers
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 02/39] accel/tcg: Set can_do_io at start of lookup_tb_ptr helper Richard Henderson
` (38 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-stable, Philippe Mathieu-Daudé
From https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms
When passing an argument with 16-byte alignment in integer registers,
Apple platforms allow the argument to start in an odd-numbered xN
register. The standard ABI requires it to begin in an even-numbered
xN register.
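As a hypothetical illustration (not from the patch or the Apple document), the difference only affects which register pair receives a 16-byte-aligned integer argument:

/* Illustrative prototype and register assignments, not QEMU code.
 *
 *   Standard AAPCS64: a -> w0, x1 is skipped so the pair starts on an
 *                     even register, b -> x2:x3.
 *   Apple arm64:      a -> w0, b -> x1:x2, no padding register.
 */
void f(int a, __int128_t b);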
Cc: qemu-stable@nongnu.org
Fixes: 5427a9a7604 ("tcg: Add TCG_TARGET_CALL_{RET,ARG}_I128")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2169
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <9fc0c2c7-dd57-459e-aecb-528edb74b4a7@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
tcg/aarch64/tcg-target.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
index ef5ebe91bd..85d5746e47 100644
--- a/tcg/aarch64/tcg-target.h
+++ b/tcg/aarch64/tcg-target.h
@@ -55,7 +55,11 @@ typedef enum {
#define TCG_TARGET_CALL_STACK_OFFSET 0
#define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_NORMAL
#define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_NORMAL
-#define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN
+#ifdef CONFIG_DARWIN
+# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL
+#else
+# define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_EVEN
+#endif
#define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL
#define have_lse (cpuinfo & CPUINFO_LSE)
--
2.34.1
* [PULL 02/39] accel/tcg: Set can_do_io at start of lookup_tb_ptr helper
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
2024-02-22 20:42 ` [PULL 01/39] tcg/aarch64: Apple does not align __int128_t in even registers Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 03/39] tcg: Avoid double lock if page tables happen to be in mmio memory Richard Henderson
` (37 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Maydell, Jonathan Cameron
From: Peter Maydell <peter.maydell@linaro.org>
If a page table is in IO memory and lookup_tb_ptr probes
the TLB, it can result in a page table walk for the instruction
fetch. If this hits IO memory, io_prepare falsely assumes
it needs to do a TLB recompile.
Avoid that by setting can_do_io at the start of lookup_tb_ptr.
Link: https://lore.kernel.org/qemu-devel/CAFEAcA_a_AyQ=Epz3_+CheAT8Crsk9mOu894wbNW_FywamkZiw@mail.gmail.com/#t
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20240219173153.12114-2-Jonathan.Cameron@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cpu-exec.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 977576ca14..52239a441f 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -396,6 +396,14 @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
uint64_t cs_base;
uint32_t flags, cflags;
+ /*
+ * By definition we've just finished a TB, so I/O is OK.
+ * Avoid the possibility of calling cpu_io_recompile() if
+ * a page table walk triggered by tb_lookup() calling
+ * probe_access_internal() happens to touch an MMIO device.
+ * The next TB, if we chain to it, will clear the flag again.
+ */
+ cpu->neg.can_do_io = true;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
cflags = curr_cflags(cpu);
--
2.34.1
* [PULL 03/39] tcg: Avoid double lock if page tables happen to be in mmio memory.
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
2024-02-22 20:42 ` [PULL 01/39] tcg/aarch64: Apple does not align __int128_t in even registers Richard Henderson
2024-02-22 20:42 ` [PULL 02/39] accel/tcg: Set can_do_io at start of lookup_tb_ptr helper Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 04/39] accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect Richard Henderson
` (36 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Jonathan Cameron, Peter Maydell
From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
On i386, after fixing the page walking code to work with pages in
MMIO memory (specifically CXL emulated interleaved memory),
a crash was seen in an interrupt handling path.
Useful part of the backtrace:
7 0x0000555555ab1929 in bql_lock_impl (file=0x555556049122 "../../accel/tcg/cputlb.c", line=2033) at ../../system/cpus.c:524
8 bql_lock_impl (file=file@entry=0x555556049122 "../../accel/tcg/cputlb.c", line=line@entry=2033) at ../../system/cpus.c:520
9 0x0000555555c9f7d6 in do_ld_mmio_beN (cpu=0x5555578e0cb0, full=0x7ffe88012950, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2033
10 0x0000555555ca0fbd in do_ld_8 (cpu=cpu@entry=0x5555578e0cb0, p=p@entry=0x7ffff4efd1d0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
11 0x0000555555ca341f in do_ld8_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=19595792376, oi=oi@entry=52, ra=0, ra@entry=52, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
12 0x0000555555ca5f59 in cpu_ldq_mmu (ra=52, oi=52, addr=19595792376, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:169
13 cpu_ldq_le_mmuidx_ra (env=0x5555578e3470, addr=19595792376, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:301
14 0x0000555555b4b5fc in ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:98
15 ptw_ldq (ra=0, in=0x7ffff4efd320) at ../../target/i386/tcg/sysemu/excp_helper.c:93
16 mmu_translate (env=env@entry=0x5555578e3470, in=0x7ffff4efd3e0, out=0x7ffff4efd3b0, err=err@entry=0x7ffff4efd3c0, ra=ra@entry=0) at ../../target/i386/tcg/sysemu/excp_helper.c:174
17 0x0000555555b4c4b3 in get_physical_address (ra=0, err=0x7ffff4efd3c0, out=0x7ffff4efd3b0, mmu_idx=0, access_type=MMU_DATA_LOAD, addr=18446741874686299840, env=0x5555578e3470) at ../../target/i386/tcg/sysemu/excp_helper.c:580
18 x86_cpu_tlb_fill (cs=0x5555578e0cb0, addr=18446741874686299840, size=<optimized out>, access_type=MMU_DATA_LOAD, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:606
19 0x0000555555ca0ee9 in tlb_fill (retaddr=0, mmu_idx=0, access_type=MMU_DATA_LOAD, size=<optimized out>, addr=18446741874686299840, cpu=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1315
20 mmu_lookup1 (cpu=cpu@entry=0x5555578e0cb0, data=data@entry=0x7ffff4efd540, mmu_idx=0, access_type=access_type@entry=MMU_DATA_LOAD, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:1713
21 0x0000555555ca2c61 in mmu_lookup (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, type=type@entry=MMU_DATA_LOAD, l=l@entry=0x7ffff4efd540) at ../../accel/tcg/cputlb.c:1803
22 0x0000555555ca3165 in do_ld4_mmu (cpu=cpu@entry=0x5555578e0cb0, addr=addr@entry=18446741874686299840, oi=oi@entry=32, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2416
23 0x0000555555ca5ef9 in cpu_ldl_mmu (ra=0, oi=32, addr=18446741874686299840, env=0x5555578e3470) at ../../accel/tcg/ldst_common.c.inc:158
24 cpu_ldl_le_mmuidx_ra (env=env@entry=0x5555578e3470, addr=addr@entry=18446741874686299840, mmu_idx=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/ldst_common.c.inc:294
25 0x0000555555bb6cdd in do_interrupt64 (is_hw=1, next_eip=18446744072399775809, error_code=0, is_int=0, intno=236, env=0x5555578e3470) at ../../target/i386/tcg/seg_helper.c:889
26 do_interrupt_all (cpu=cpu@entry=0x5555578e0cb0, intno=236, is_int=is_int@entry=0, error_code=error_code@entry=0, next_eip=next_eip@entry=0, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1130
27 0x0000555555bb87da in do_interrupt_x86_hardirq (env=env@entry=0x5555578e3470, intno=<optimized out>, is_hw=is_hw@entry=1) at ../../target/i386/tcg/seg_helper.c:1162
28 0x0000555555b5039c in x86_cpu_exec_interrupt (cs=0x5555578e0cb0, interrupt_request=<optimized out>) at ../../target/i386/tcg/sysemu/seg_helper.c:197
29 0x0000555555c94480 in cpu_handle_interrupt (last_tb=<synthetic pointer>, cpu=0x5555578e0cb0) at ../../accel/tcg/cpu-exec.c:844
Peter identified this as being due to the BQL already being
held when the page table walker encounters MMIO memory and attempts
to take the lock again. There are other examples of similar paths
in TCG, so this follows the approach taken in those: simply check
whether the lock is already held and, if it is, do not take it again.
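A minimal standalone sketch of that pattern follows; it uses a plain pthread mutex and a thread-local flag in place of QEMU's real bql_lock()/bql_unlock() and BQL_LOCK_GUARD(), so the names below are illustrative rather than QEMU APIs:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread bool big_lock_held;

/* Take the lock only if this thread does not already hold it;
 * report through *release whether the caller must drop it again. */
static void big_lock_acquire(bool *release)
{
    if (!big_lock_held) {
        pthread_mutex_lock(&big_lock);
        big_lock_held = true;
        *release = true;
    } else {
        *release = false;
    }
}

static void big_lock_release(bool release)
{
    if (release) {
        big_lock_held = false;
        pthread_mutex_unlock(&big_lock);
    }
}

/* Inner path, e.g. an MMIO load triggered by a page table walk. */
static void mmio_access(void)
{
    bool release;

    big_lock_acquire(&release);
    /* ... device access that must run under the lock ... */
    big_lock_release(release);
}

int main(void)
{
    bool release;

    /* Outer path, e.g. interrupt delivery, already holds the lock. */
    big_lock_acquire(&release);
    mmio_access();              /* must not deadlock against itself */
    big_lock_release(release);

    puts("no self-deadlock");
    return 0;
}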
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Message-Id: <20240219173153.12114-4-Jonathan.Cameron@huawei.com>
[rth: Use BQL_LOCK_GUARD]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 34 ++++++++++------------------------
1 file changed, 10 insertions(+), 24 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 047cd2cc0a..6243bcb179 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2022,7 +2022,6 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
MemoryRegion *mr;
hwaddr mr_offset;
MemTxAttrs attrs;
- uint64_t ret;
tcg_debug_assert(size > 0 && size <= 8);
@@ -2030,12 +2029,9 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
- bql_lock();
- ret = int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
- type, ra, mr, mr_offset);
- bql_unlock();
-
- return ret;
+ BQL_LOCK_GUARD();
+ return int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
+ type, ra, mr, mr_offset);
}
static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -2054,13 +2050,11 @@ static Int128 do_ld16_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
- bql_lock();
+ BQL_LOCK_GUARD();
a = int_ld_mmio_beN(cpu, full, ret_be, addr, size - 8, mmu_idx,
MMU_DATA_LOAD, ra, mr, mr_offset);
b = int_ld_mmio_beN(cpu, full, ret_be, addr + size - 8, 8, mmu_idx,
MMU_DATA_LOAD, ra, mr, mr_offset + size - 8);
- bql_unlock();
-
return int128_make128(b, a);
}
@@ -2569,7 +2563,6 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
hwaddr mr_offset;
MemoryRegion *mr;
MemTxAttrs attrs;
- uint64_t ret;
tcg_debug_assert(size > 0 && size <= 8);
@@ -2577,12 +2570,9 @@ static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
- bql_lock();
- ret = int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
- ra, mr, mr_offset);
- bql_unlock();
-
- return ret;
+ BQL_LOCK_GUARD();
+ return int_st_mmio_leN(cpu, full, val_le, addr, size, mmu_idx,
+ ra, mr, mr_offset);
}
static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
@@ -2593,7 +2583,6 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
MemoryRegion *mr;
hwaddr mr_offset;
MemTxAttrs attrs;
- uint64_t ret;
tcg_debug_assert(size > 8 && size <= 16);
@@ -2601,14 +2590,11 @@ static uint64_t do_st16_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full,
section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
mr = section->mr;
- bql_lock();
+ BQL_LOCK_GUARD();
int_st_mmio_leN(cpu, full, int128_getlo(val_le), addr, 8,
mmu_idx, ra, mr, mr_offset);
- ret = int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
- size - 8, mmu_idx, ra, mr, mr_offset + 8);
- bql_unlock();
-
- return ret;
+ return int_st_mmio_leN(cpu, full, int128_gethi(val_le), addr + 8,
+ size - 8, mmu_idx, ra, mr, mr_offset + 8);
}
/*
--
2.34.1
* [PULL 04/39] accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (2 preceding siblings ...)
2024-02-22 20:42 ` [PULL 03/39] tcg: Avoid double lock if page tables happen to be in mmio memory Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 05/39] linux-user: Adjust SVr4 NULL page mapping Richard Henderson
` (35 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Use qemu_real_host_page_size instead. Except for the final mprotect
within page_protect, we already handled host < target page size.
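The replacement leans on the usual power-of-two alignment idiom: for a power-of-two page size, addr & -page_size rounds the address down to a page_size boundary, the same operation the old addr & qemu_host_page_mask expression performed with its own page size. A tiny standalone check with assumed values, not QEMU code:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uintptr_t page = 4096;                 /* any power of two */
    uintptr_t addr = 0x12345;

    uintptr_t down = addr & -page;         /* round down to page start */
    uintptr_t last = down + page - 1;      /* last byte of that page */

    assert(down == 0x12000);
    assert(last == 0x12fff);
    return 0;
}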
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-2-richard.henderson@linaro.org>
---
accel/tcg/user-exec.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 68b252cb8e..69b7429e31 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -651,16 +651,17 @@ void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
target_ulong start, last;
+ int host_page_size = qemu_real_host_page_size();
int prot;
assert_memory_lock();
- if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ if (host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
last = start + TARGET_PAGE_SIZE - 1;
} else {
- start = address & qemu_host_page_mask;
- last = start + qemu_host_page_size - 1;
+ start = address & -host_page_size;
+ last = start + host_page_size - 1;
}
p = pageflags_find(start, last);
@@ -671,7 +672,7 @@ void page_protect(tb_page_addr_t address)
if (unlikely(p->itree.last < last)) {
/* More than one protection region covers the one host page. */
- assert(TARGET_PAGE_SIZE < qemu_host_page_size);
+ assert(TARGET_PAGE_SIZE < host_page_size);
while ((p = pageflags_next(p, start, last)) != NULL) {
prot |= p->flags;
}
@@ -679,7 +680,7 @@ void page_protect(tb_page_addr_t address)
if (prot & PAGE_WRITE) {
pageflags_set_clear(start, last, 0, PAGE_WRITE);
- mprotect(g2h_untagged(start), qemu_host_page_size,
+ mprotect(g2h_untagged(start), last - start + 1,
prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
}
}
@@ -725,18 +726,19 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
+ int host_page_size = qemu_real_host_page_size();
target_ulong start, len, i;
int prot;
- if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ if (host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
len = TARGET_PAGE_SIZE;
prot = p->flags | PAGE_WRITE;
pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
} else {
- start = address & qemu_host_page_mask;
- len = qemu_host_page_size;
+ start = address & -host_page_size;
+ len = host_page_size;
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
--
2.34.1
* [PULL 05/39] linux-user: Adjust SVr4 NULL page mapping
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (3 preceding siblings ...)
2024-02-22 20:42 ` [PULL 04/39] accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 06/39] linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base Richard Henderson
` (34 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
Use TARGET_PAGE_SIZE and MAP_FIXED_NOREPLACE.
We really should be attending to this earlier, during
probe_guest_base, as well as providing better detection and
emulation of various Linux personalities.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-3-richard.henderson@linaro.org>
---
linux-user/elfload.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index b8eef893d0..e918a13748 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -3912,8 +3912,9 @@ int load_elf_binary(struct linux_binprm *bprm, struct image_info *info)
and some applications "depend" upon this behavior. Since
we do not have the power to recompile these, we emulate
the SVr4 behavior. Sigh. */
- target_mmap(0, qemu_host_page_size, PROT_READ | PROT_EXEC,
- MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ target_mmap(0, TARGET_PAGE_SIZE, PROT_READ | PROT_EXEC,
+ MAP_FIXED_NOREPLACE | MAP_PRIVATE | MAP_ANONYMOUS,
+ -1, 0);
}
#ifdef TARGET_MIPS
info->interp_fp_abi = interp_info.fp_abi;
--
2.34.1
* [PULL 06/39] linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (4 preceding siblings ...)
2024-02-22 20:42 ` [PULL 05/39] linux-user: Adjust SVr4 NULL page mapping Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 07/39] linux-user: Remove qemu_host_page_size from create_elf_tables Richard Henderson
` (33 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
The host SHMLBA is by definition a multiple of the host page size.
Thus the remaining component of qemu_host_page_size is the
target page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-4-richard.henderson@linaro.org>
---
linux-user/elfload.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index e918a13748..e84a201448 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2893,7 +2893,7 @@ static bool pgb_addr_set(PGBAddrs *ga, abi_ulong guest_loaddr,
/* Add any HI_COMMPAGE not covered by reserved_va. */
if (reserved_va < HI_COMMPAGE) {
- ga->bounds[n][0] = HI_COMMPAGE & qemu_host_page_mask;
+ ga->bounds[n][0] = HI_COMMPAGE & qemu_real_host_page_mask();
ga->bounds[n][1] = HI_COMMPAGE + TARGET_PAGE_SIZE - 1;
n++;
}
@@ -3075,7 +3075,7 @@ void probe_guest_base(const char *image_name, abi_ulong guest_loaddr,
abi_ulong guest_hiaddr)
{
/* In order to use host shmat, we must be able to honor SHMLBA. */
- uintptr_t align = MAX(SHMLBA, qemu_host_page_size);
+ uintptr_t align = MAX(SHMLBA, TARGET_PAGE_SIZE);
/* Sanity check the guest binary. */
if (reserved_va) {
--
2.34.1
* [PULL 07/39] linux-user: Remove qemu_host_page_size from create_elf_tables
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (5 preceding siblings ...)
2024-02-22 20:42 ` [PULL 06/39] linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 08/39] linux-user/hppa: Simplify init_guest_commpage Richard Henderson
` (32 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
AT_PAGESZ is supposed to advertise the guest page size.
The random adjustment made here using qemu_host_page_size
does not match anything else within linux-user.
The idea here is good, but should be done more systematically
via adjustment to TARGET_PAGE_SIZE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-5-richard.henderson@linaro.org>
---
linux-user/elfload.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index e84a201448..dfb152bfcb 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2679,13 +2679,7 @@ static abi_ulong create_elf_tables(abi_ulong p, int argc, int envc,
NEW_AUX_ENT(AT_PHDR, (abi_ulong)(info->load_addr + exec->e_phoff));
NEW_AUX_ENT(AT_PHENT, (abi_ulong)(sizeof (struct elf_phdr)));
NEW_AUX_ENT(AT_PHNUM, (abi_ulong)(exec->e_phnum));
- if ((info->alignment & ~qemu_host_page_mask) != 0) {
- /* Target doesn't support host page size alignment */
- NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(TARGET_PAGE_SIZE));
- } else {
- NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(MAX(TARGET_PAGE_SIZE,
- qemu_host_page_size)));
- }
+ NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(TARGET_PAGE_SIZE));
NEW_AUX_ENT(AT_BASE, (abi_ulong)(interp_info ? interp_info->load_addr : 0));
NEW_AUX_ENT(AT_FLAGS, (abi_ulong)0);
NEW_AUX_ENT(AT_ENTRY, info->entry);
--
2.34.1
* [PULL 08/39] linux-user/hppa: Simplify init_guest_commpage
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (6 preceding siblings ...)
2024-02-22 20:42 ` [PULL 07/39] linux-user: Remove qemu_host_page_size from create_elf_tables Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 09/39] linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage Richard Henderson
` (31 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
If reserved_va, then we have already reserved the entire
guest virtual address space; no need to remap the page.
If !reserved_va, then use MAP_FIXED_NOREPLACE.
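For reference, a minimal sketch of the MAP_FIXED_NOREPLACE pattern used below (the address and size here are made up; on kernels older than 4.17 the flag is unknown and silently degrades to a hint, which is why the result must be compared against the requested address):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_NOREPLACE
#define MAP_FIXED_NOREPLACE 0x100000
#endif

int main(void)
{
    size_t size = 4096;
    void *want = (void *)0x70000000;   /* hypothetical target address */
    void *addr = mmap(want, size, PROT_NONE,
                      MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED_NOREPLACE,
                      -1, 0);

    if (addr == MAP_FAILED) {
        /* On new kernels, overlap with an existing mapping -> EEXIST. */
        perror("mmap");
        return EXIT_FAILURE;
    }
    if (addr != want) {
        /* Old kernel ignored the flag and placed the mapping elsewhere. */
        munmap(addr, size);
        fprintf(stderr, "could not reserve %p\n", want);
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}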
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-6-richard.henderson@linaro.org>
---
linux-user/elfload.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index dfb152bfcb..1893b3c192 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1970,16 +1970,20 @@ static inline void init_thread(struct target_pt_regs *regs,
static bool init_guest_commpage(void)
{
- void *want = g2h_untagged(LO_COMMPAGE);
- void *addr = mmap(want, qemu_host_page_size, PROT_NONE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ /* If reserved_va, then we have already mapped 0 page on the host. */
+ if (!reserved_va) {
+ void *want, *addr;
- if (addr == MAP_FAILED) {
- perror("Allocating guest commpage");
- exit(EXIT_FAILURE);
- }
- if (addr != want) {
- return false;
+ want = g2h_untagged(LO_COMMPAGE);
+ addr = mmap(want, TARGET_PAGE_SIZE, PROT_NONE,
+ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED_NOREPLACE, -1, 0);
+ if (addr == MAP_FAILED) {
+ perror("Allocating guest commpage");
+ exit(EXIT_FAILURE);
+ }
+ if (addr != want) {
+ return false;
+ }
}
/*
--
2.34.1
* [PULL 09/39] linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (7 preceding siblings ...)
2024-02-22 20:42 ` [PULL 08/39] linux-user/hppa: Simplify init_guest_commpage Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 10/39] linux-user/arm: " Richard Henderson
` (30 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Use qemu_real_host_page_size.
If !reserved_va, use MAP_FIXED_NOREPLACE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-7-richard.henderson@linaro.org>
---
linux-user/elfload.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 1893b3c192..a9f1077861 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1532,10 +1532,14 @@ static bool init_guest_commpage(void)
0x3a, 0x68, 0x3b, 0x00, /* trap 0 */
};
- void *want = g2h_untagged(LO_COMMPAGE & -qemu_host_page_size);
- void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ int host_page_size = qemu_real_host_page_size();
+ void *want, *addr;
+ want = g2h_untagged(LO_COMMPAGE & -host_page_size);
+ addr = mmap(want, host_page_size, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE |
+ (reserved_va ? MAP_FIXED : MAP_FIXED_NOREPLACE),
+ -1, 0);
if (addr == MAP_FAILED) {
perror("Allocating guest commpage");
exit(EXIT_FAILURE);
@@ -1544,9 +1548,9 @@ static bool init_guest_commpage(void)
return false;
}
- memcpy(addr, kuser_page, sizeof(kuser_page));
+ memcpy(g2h_untagged(LO_COMMPAGE), kuser_page, sizeof(kuser_page));
- if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
+ if (mprotect(addr, host_page_size, PROT_READ)) {
perror("Protecting guest commpage");
exit(EXIT_FAILURE);
}
--
2.34.1
* [PULL 10/39] linux-user/arm: Remove qemu_host_page_size from init_guest_commpage
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (8 preceding siblings ...)
2024-02-22 20:42 ` [PULL 09/39] linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 11/39] linux-user: Remove qemu_host_page_{size, mask} from mmap.c Richard Henderson
` (29 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Use qemu_real_host_page_size.
If the commpage is not within reserved_va, use MAP_FIXED_NOREPLACE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-8-richard.henderson@linaro.org>
---
linux-user/elfload.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index a9f1077861..f3f1ab4f69 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -460,6 +460,7 @@ enum {
static bool init_guest_commpage(void)
{
ARMCPU *cpu = ARM_CPU(thread_cpu);
+ int host_page_size = qemu_real_host_page_size();
abi_ptr commpage;
void *want;
void *addr;
@@ -472,10 +473,12 @@ static bool init_guest_commpage(void)
return true;
}
- commpage = HI_COMMPAGE & -qemu_host_page_size;
+ commpage = HI_COMMPAGE & -host_page_size;
want = g2h_untagged(commpage);
- addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ addr = mmap(want, host_page_size, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE |
+ (commpage < reserved_va ? MAP_FIXED : MAP_FIXED_NOREPLACE),
+ -1, 0);
if (addr == MAP_FAILED) {
perror("Allocating guest commpage");
@@ -488,12 +491,12 @@ static bool init_guest_commpage(void)
/* Set kernel helper versions; rest of page is 0. */
__put_user(5, (uint32_t *)g2h_untagged(0xffff0ffcu));
- if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
+ if (mprotect(addr, host_page_size, PROT_READ)) {
perror("Protecting guest commpage");
exit(EXIT_FAILURE);
}
- page_set_flags(commpage, commpage | ~qemu_host_page_mask,
+ page_set_flags(commpage, commpage | (host_page_size - 1),
PAGE_READ | PAGE_EXEC | PAGE_VALID);
return true;
}
--
2.34.1
* [PULL 11/39] linux-user: Remove qemu_host_page_{size, mask} from mmap.c
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (9 preceding siblings ...)
2024-02-22 20:42 ` [PULL 10/39] linux-user/arm: " Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 12/39] linux-user: Remove REAL_HOST_PAGE_ALIGN " Richard Henderson
` (28 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
Use qemu_real_host_page_size instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-9-richard.henderson@linaro.org>
---
linux-user/mmap.c | 66 +++++++++++++++++++++++------------------------
1 file changed, 33 insertions(+), 33 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 96c9433e27..4d3c8717b9 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -165,6 +165,7 @@ static int target_to_host_prot(int prot)
/* NOTE: all the constants are the HOST ones, but addresses are target. */
int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong starts[3];
abi_ulong lens[3];
int prots[3];
@@ -189,13 +190,13 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
}
last = start + len - 1;
- host_start = start & qemu_host_page_mask;
+ host_start = start & -host_page_size;
host_last = HOST_PAGE_ALIGN(last) - 1;
nranges = 0;
mmap_lock();
- if (host_last - host_start < qemu_host_page_size) {
+ if (host_last - host_start < host_page_size) {
/* Single host page contains all guest pages: sum the prot. */
prot1 = target_prot;
for (abi_ulong a = host_start; a < start; a += TARGET_PAGE_SIZE) {
@@ -205,7 +206,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
prot1 |= page_get_flags(a + 1);
}
starts[nranges] = host_start;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
} else {
@@ -218,10 +219,10 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
/* If the resulting sum differs, create a new range. */
if (prot1 != target_prot) {
starts[nranges] = host_start;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
- host_start += qemu_host_page_size;
+ host_start += host_page_size;
}
}
@@ -233,9 +234,9 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
}
/* If the resulting sum differs, create a new range. */
if (prot1 != target_prot) {
- host_last -= qemu_host_page_size;
+ host_last -= host_page_size;
starts[nranges] = host_last + 1;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
}
@@ -270,6 +271,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
int prot, int flags, int fd, off_t offset)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong real_last;
void *host_start;
int prot_old, prot_new;
@@ -286,7 +288,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
return false;
}
- real_last = real_start + qemu_host_page_size - 1;
+ real_last = real_start + host_page_size - 1;
host_start = g2h_untagged(real_start);
/* Get the protection of the target pages outside the mapping. */
@@ -304,12 +306,12 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
* outside of the fragment we need to map. Allocate a new host
* page to cover, discarding whatever else may have been present.
*/
- void *p = mmap(host_start, qemu_host_page_size,
+ void *p = mmap(host_start, host_page_size,
target_to_host_prot(prot),
flags | MAP_ANONYMOUS, -1, 0);
if (p != host_start) {
if (p != MAP_FAILED) {
- munmap(p, qemu_host_page_size);
+ munmap(p, host_page_size);
errno = EEXIST;
}
return false;
@@ -324,7 +326,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
/* Adjust protection to be able to write. */
if (!(host_prot_old & PROT_WRITE)) {
host_prot_old |= PROT_WRITE;
- mprotect(host_start, qemu_host_page_size, host_prot_old);
+ mprotect(host_start, host_page_size, host_prot_old);
}
/* Read or zero the new guest pages. */
@@ -338,7 +340,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
/* Put final protection */
if (host_prot_new != host_prot_old) {
- mprotect(host_start, qemu_host_page_size, host_prot_new);
+ mprotect(host_start, host_page_size, host_prot_new);
}
return true;
}
@@ -373,17 +375,18 @@ static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size,
*/
abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
{
+ int host_page_size = qemu_real_host_page_size();
void *ptr, *prev;
abi_ulong addr;
int wrapped, repeat;
- align = MAX(align, qemu_host_page_size);
+ align = MAX(align, host_page_size);
/* If 'start' == 0, then a default start address is used. */
if (start == 0) {
start = mmap_next_start;
} else {
- start &= qemu_host_page_mask;
+ start &= -host_page_size;
}
start = ROUND_UP(start, align);
@@ -492,6 +495,7 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int flags, int fd, off_t offset)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
int page_flags;
@@ -537,8 +541,8 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
}
}
- real_start = start & qemu_host_page_mask;
- host_offset = offset & qemu_host_page_mask;
+ real_start = start & -host_page_size;
+ host_offset = offset & -host_page_size;
/*
* If the user is asking for the kernel to find a location, do that
@@ -567,8 +571,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* may need to truncate file maps at EOF and add extra anonymous pages
* up to the targets page boundary.
*/
- if ((qemu_real_host_page_size() < qemu_host_page_size) &&
- !(flags & MAP_ANONYMOUS)) {
+ if (host_page_size < TARGET_PAGE_SIZE && !(flags & MAP_ANONYMOUS)) {
struct stat sb;
if (fstat(fd, &sb) == -1) {
@@ -595,11 +598,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
host_len = HOST_PAGE_ALIGN(host_len);
host_prot = target_to_host_prot(target_prot);
- /*
- * Note: we prefer to control the mapping address. It is
- * especially important if qemu_host_page_size >
- * qemu_real_host_page_size.
- */
+ /* Note: we prefer to control the mapping address. */
p = mmap(g2h_untagged(start), host_len, host_prot,
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
if (p == MAP_FAILED) {
@@ -665,7 +664,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* aligned, so we read it
*/
if (!(flags & MAP_ANONYMOUS) &&
- (offset & ~qemu_host_page_mask) != (start & ~qemu_host_page_mask)) {
+ (offset & (host_page_size - 1)) != (start & (host_page_size - 1))) {
/*
* msync() won't work here, so we return an error if write is
* possible while it is a shared mapping
@@ -694,7 +693,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
/* handle the start of the mapping */
if (start > real_start) {
- if (real_last == real_start + qemu_host_page_size - 1) {
+ if (real_last == real_start + host_page_size - 1) {
/* one single host page */
if (!mmap_frag(real_start, start, last,
target_prot, flags, fd, offset)) {
@@ -703,21 +702,21 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
goto the_end1;
}
if (!mmap_frag(real_start, start,
- real_start + qemu_host_page_size - 1,
+ real_start + host_page_size - 1,
target_prot, flags, fd, offset)) {
goto fail;
}
- real_start += qemu_host_page_size;
+ real_start += host_page_size;
}
/* handle the end of the mapping */
if (last < real_last) {
- abi_ulong real_page = real_last - qemu_host_page_size + 1;
+ abi_ulong real_page = real_last - host_page_size + 1;
if (!mmap_frag(real_page, real_page, last,
target_prot, flags, fd,
offset + real_page - start)) {
goto fail;
}
- real_last -= qemu_host_page_size;
+ real_last -= host_page_size;
}
/* map the middle (easier) */
@@ -784,6 +783,7 @@ fail:
static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong real_start;
abi_ulong real_last;
abi_ulong real_len;
@@ -793,7 +793,7 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
int prot;
last = start + len - 1;
- real_start = start & qemu_host_page_mask;
+ real_start = start & -host_page_size;
real_last = HOST_PAGE_ALIGN(last) - 1;
/*
@@ -802,7 +802,7 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
* The single page special case is required for the last page,
* lest real_start overflow to zero.
*/
- if (real_last - real_start < qemu_host_page_size) {
+ if (real_last - real_start < host_page_size) {
prot = 0;
for (a = real_start; a < start; a += TARGET_PAGE_SIZE) {
prot |= page_get_flags(a);
@@ -818,14 +818,14 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
prot |= page_get_flags(a);
}
if (prot != 0) {
- real_start += qemu_host_page_size;
+ real_start += host_page_size;
}
for (prot = 0, a = last; a < real_last; a += TARGET_PAGE_SIZE) {
prot |= page_get_flags(a + 1);
}
if (prot != 0) {
- real_last -= qemu_host_page_size;
+ real_last -= host_page_size;
}
if (real_last < real_start) {
--
2.34.1
* [PULL 12/39] linux-user: Remove REAL_HOST_PAGE_ALIGN from mmap.c
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (10 preceding siblings ...)
2024-02-22 20:42 ` [PULL 11/39] linux-user: Remove qemu_host_page_{size, mask} from mmap.c Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 13/39] linux-user: Remove HOST_PAGE_ALIGN " Richard Henderson
` (27 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
We already have qemu_real_host_page_size() in a local variable.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-10-richard.henderson@linaro.org>
---
linux-user/mmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 4d3c8717b9..53e5486cc8 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -585,7 +585,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* the hosts real pagesize. Additional anonymous maps
* will be created beyond EOF.
*/
- len = REAL_HOST_PAGE_ALIGN(sb.st_size - offset);
+ len = ROUND_UP(sb.st_size - offset, host_page_size);
}
}
--
2.34.1
* [PULL 13/39] linux-user: Remove HOST_PAGE_ALIGN from mmap.c
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (11 preceding siblings ...)
2024-02-22 20:42 ` [PULL 12/39] linux-user: Remove REAL_HOST_PAGE_ALIGN " Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 14/39] migration: Remove qemu_host_page_size Richard Henderson
` (26 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
This removes a hidden use of qemu_host_page_size, using instead
the existing host_page_size local within each function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-11-richard.henderson@linaro.org>
---
linux-user/mmap.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 53e5486cc8..d11f758d07 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -191,7 +191,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
last = start + len - 1;
host_start = start & -host_page_size;
- host_last = HOST_PAGE_ALIGN(last) - 1;
+ host_last = ROUND_UP(last, host_page_size) - 1;
nranges = 0;
mmap_lock();
@@ -389,8 +389,7 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
start &= -host_page_size;
}
start = ROUND_UP(start, align);
-
- size = HOST_PAGE_ALIGN(size);
+ size = ROUND_UP(size, host_page_size);
if (reserved_va) {
return mmap_find_vma_reserved(start, size, align);
@@ -550,7 +549,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
*/
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
host_len = len + offset - host_offset;
- host_len = HOST_PAGE_ALIGN(host_len);
+ host_len = ROUND_UP(host_len, host_page_size);
start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
if (start == (abi_ulong)-1) {
errno = ENOMEM;
@@ -595,7 +594,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
void *p;
host_len = len + offset - host_offset;
- host_len = HOST_PAGE_ALIGN(host_len);
+ host_len = ROUND_UP(host_len, host_page_size);
host_prot = target_to_host_prot(target_prot);
/* Note: we prefer to control the mapping address. */
@@ -625,7 +624,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
goto fail;
}
last = start + len - 1;
- real_last = HOST_PAGE_ALIGN(last) - 1;
+ real_last = ROUND_UP(last, host_page_size) - 1;
/*
* Test if requested memory area fits target address space
@@ -794,7 +793,7 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
last = start + len - 1;
real_start = start & -host_page_size;
- real_last = HOST_PAGE_ALIGN(last) - 1;
+ real_last = ROUND_UP(last, host_page_size) - 1;
/*
* If guest pages remain on the first or last host pages,
--
2.34.1
* [PULL 14/39] migration: Remove qemu_host_page_size
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (12 preceding siblings ...)
2024-02-22 20:42 ` [PULL 13/39] linux-user: Remove HOST_PAGE_ALIGN " Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:42 ` [PULL 15/39] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init Richard Henderson
` (25 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
Replace with the maximum of the real host page size
and the target page size. This is an exact replacement.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-12-richard.henderson@linaro.org>
---
migration/ram.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 4649a81204..61c1488352 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2935,7 +2935,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
{
RAMState **rsp = opaque;
RAMBlock *block;
- int ret;
+ int ret, max_hg_page_size;
if (compress_threads_save_setup()) {
return -1;
@@ -2950,6 +2950,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
(*rsp)->pss[RAM_CHANNEL_PRECOPY].pss_channel = f;
+ /*
+ * ??? Mirrors the previous value of qemu_host_page_size,
+ * but is this really what was intended for the migration?
+ */
+ max_hg_page_size = MAX(qemu_real_host_page_size(), TARGET_PAGE_SIZE);
+
WITH_RCU_READ_LOCK_GUARD() {
qemu_put_be64(f, ram_bytes_total_with_ignored()
| RAM_SAVE_FLAG_MEM_SIZE);
@@ -2958,8 +2964,8 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->used_length);
- if (migrate_postcopy_ram() && block->page_size !=
- qemu_host_page_size) {
+ if (migrate_postcopy_ram() &&
+ block->page_size != max_hg_page_size) {
qemu_put_be64(f, block->page_size);
}
if (migrate_ignore_shared()) {
@@ -3792,6 +3798,7 @@ static int parse_ramblock(QEMUFile *f, RAMBlock *block, ram_addr_t length)
int ret = 0;
/* ADVISE is earlier, it shows the source has the postcopy capability on */
bool postcopy_advised = migration_incoming_postcopy_advised();
+ int max_hg_page_size;
assert(block);
@@ -3809,9 +3816,16 @@ static int parse_ramblock(QEMUFile *f, RAMBlock *block, ram_addr_t length)
return ret;
}
}
+
+ /*
+ * ??? Mirrors the previous value of qemu_host_page_size,
+ * but is this really what was intended for the migration?
+ */
+ max_hg_page_size = MAX(qemu_real_host_page_size(), TARGET_PAGE_SIZE);
+
/* For postcopy we need to check hugepage sizes match */
if (postcopy_advised && migrate_postcopy_ram() &&
- block->page_size != qemu_host_page_size) {
+ block->page_size != max_hg_page_size) {
uint64_t remote_page_size = qemu_get_be64(f);
if (remote_page_size != block->page_size) {
error_report("Mismatched RAM page size %s "
--
2.34.1
* [PULL 15/39] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (13 preceding siblings ...)
2024-02-22 20:42 ` [PULL 14/39] migration: Remove qemu_host_page_size Richard Henderson
@ 2024-02-22 20:42 ` Richard Henderson
2024-02-22 20:43 ` [PULL 16/39] softmmu/physmem: Remove qemu_host_page_size Richard Henderson
` (24 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:42 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
This removes a hidden use of qemu_host_page_size, hoisting
two uses of qemu_real_host_page_size to a local variable.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
---
hw/tpm/tpm_ppi.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
index 7f74e26ec6..f27ed6c35e 100644
--- a/hw/tpm/tpm_ppi.c
+++ b/hw/tpm/tpm_ppi.c
@@ -47,8 +47,10 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
void tpm_ppi_init(TPMPPI *tpmppi, MemoryRegion *m,
hwaddr addr, Object *obj)
{
- tpmppi->buf = qemu_memalign(qemu_real_host_page_size(),
- HOST_PAGE_ALIGN(TPM_PPI_ADDR_SIZE));
+ size_t host_page_size = qemu_real_host_page_size();
+
+ tpmppi->buf = qemu_memalign(host_page_size,
+ ROUND_UP(TPM_PPI_ADDR_SIZE, host_page_size));
memory_region_init_ram_device_ptr(&tpmppi->ram, obj, "tpm-ppi",
TPM_PPI_ADDR_SIZE, tpmppi->buf);
vmstate_register_ram(&tpmppi->ram, DEVICE(obj));
--
2.34.1
* [PULL 16/39] softmmu/physmem: Remove qemu_host_page_size
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (14 preceding siblings ...)
2024-02-22 20:42 ` [PULL 15/39] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 17/39] softmmu/physmem: Remove HOST_PAGE_ALIGN Richard Henderson
` (23 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Use qemu_real_host_page_size() instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-14-richard.henderson@linaro.org>
---
system/physmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/system/physmem.c b/system/physmem.c
index e3ebc19eef..3b08e064ff 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3511,7 +3511,7 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
* fallocate works on hugepages and shmem
* shared anonymous memory requires madvise REMOVE
*/
- need_madvise = (rb->page_size == qemu_host_page_size);
+ need_madvise = (rb->page_size == qemu_real_host_page_size());
need_fallocate = rb->fd != -1;
if (need_fallocate) {
/* For a file, this causes the area of the file to be zero'd
--
2.34.1
* [PULL 17/39] softmmu/physmem: Remove HOST_PAGE_ALIGN
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (15 preceding siblings ...)
2024-02-22 20:43 ` [PULL 16/39] softmmu/physmem: Remove qemu_host_page_size Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 18/39] linux-user: Remove qemu_host_page_size from main Richard Henderson
` (22 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Ilya Leoshkevich, Pierrick Bouvier, Helge Deller
Align allocation sizes to the maximum of host and target page sizes.
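Since both page sizes are powers of two, rounding up to the target page size and then to the real host page size (as the first two hunks below do) is equivalent to rounding up once to the larger of the two (as the last hunk does via MAX). A standalone check of that equivalence with assumed sizes, using local stand-ins for QEMU's ROUND_UP()/MAX():

#include <assert.h>
#include <stdint.h>

#define ROUND_UP(x, a)  (((x) + (a) - 1) & -(uintptr_t)(a))
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

int main(void)
{
    uintptr_t target_page = 4096;        /* assumed TARGET_PAGE_SIZE */
    uintptr_t host_page = 16384;         /* assumed real host page size */

    for (uintptr_t size = 1; size < 10 * host_page; size += 333) {
        uintptr_t twice = ROUND_UP(ROUND_UP(size, target_page), host_page);
        uintptr_t once  = ROUND_UP(size, MAX(target_page, host_page));
        assert(twice == once);
    }
    return 0;
}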
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-15-richard.henderson@linaro.org>
---
system/physmem.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/system/physmem.c b/system/physmem.c
index 3b08e064ff..3adda08ebf 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -1680,7 +1680,8 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
assert(block);
- newsize = HOST_PAGE_ALIGN(newsize);
+ newsize = TARGET_PAGE_ALIGN(newsize);
+ newsize = REAL_HOST_PAGE_ALIGN(newsize);
if (block->used_length == newsize) {
/*
@@ -1916,7 +1917,9 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
return NULL;
}
- size = HOST_PAGE_ALIGN(size);
+ size = TARGET_PAGE_ALIGN(size);
+ size = REAL_HOST_PAGE_ALIGN(size);
+
file_size = get_file_size(fd);
if (file_size > offset && file_size < (offset + size)) {
error_setg(errp, "backing store size 0x%" PRIx64
@@ -2014,13 +2017,17 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
{
RAMBlock *new_block;
Error *local_err = NULL;
+ int align;
assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE | RAM_PREALLOC |
RAM_NORESERVE)) == 0);
assert(!host ^ (ram_flags & RAM_PREALLOC));
- size = HOST_PAGE_ALIGN(size);
- max_size = HOST_PAGE_ALIGN(max_size);
+ align = qemu_real_host_page_size();
+ align = MAX(align, TARGET_PAGE_SIZE);
+ size = ROUND_UP(size, align);
+ max_size = ROUND_UP(max_size, align);
+
new_block = g_malloc0(sizeof(*new_block));
new_block->mr = mr;
new_block->resized = resized;
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 18/39] linux-user: Remove qemu_host_page_size from main
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (16 preceding siblings ...)
2024-02-22 20:43 ` [PULL 17/39] softmmu/physmem: Remove HOST_PAGE_ALIGN Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 19/39] linux-user: Split out target_mmap__locked Richard Henderson
` (21 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Use qemu_real_host_page_size() instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-16-richard.henderson@linaro.org>
---
linux-user/main.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index 74b2fbb393..e540acb84a 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -781,7 +781,7 @@ int main(int argc, char **argv, char **envp)
}
cpu_type = parse_cpu_option(cpu_model);
- /* init tcg before creating CPUs and to get qemu_host_page_size */
+ /* init tcg before creating CPUs */
{
AccelState *accel = current_accel();
AccelClass *ac = ACCEL_GET_CLASS(accel);
@@ -804,8 +804,10 @@ int main(int argc, char **argv, char **envp)
*/
max_reserved_va = MAX_RESERVED_VA(cpu);
if (reserved_va != 0) {
- if ((reserved_va + 1) % qemu_host_page_size) {
- char *s = size_to_str(qemu_host_page_size);
+ int host_page_size = qemu_real_host_page_size();
+
+ if ((reserved_va + 1) % host_page_size) {
+ char *s = size_to_str(host_page_size);
fprintf(stderr, "Reserved virtual address not aligned mod %s\n", s);
g_free(s);
exit(EXIT_FAILURE);
@@ -902,7 +904,7 @@ int main(int argc, char **argv, char **envp)
* If we're in a chroot with no /proc, fall back to 1 page.
*/
if (mmap_min_addr == 0) {
- mmap_min_addr = qemu_host_page_size;
+ mmap_min_addr = qemu_real_host_page_size();
qemu_log_mask(CPU_LOG_PAGE,
"host mmap_min_addr=0x%lx (fallback)\n",
mmap_min_addr);
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 19/39] linux-user: Split out target_mmap__locked
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (17 preceding siblings ...)
2024-02-22 20:43 ` [PULL 18/39] linux-user: Remove qemu_host_page_size from main Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 20/39] linux-user: Move some mmap checks outside the lock Richard Henderson
` (20 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel
Cc: Philippe Mathieu-Daudé, Pierrick Bouvier, Ilya Leoshkevich,
Helge Deller
All "goto fail" cases may now be transformed to "return -1".
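The shape of the split, sketched on a generic worker rather than on
target_mmap itself, with a pthread mutex standing in for mmap_lock:

    #include <errno.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Worker: every failure path is a plain return; no unlock needed. */
    static long do_op__locked(long arg)
    {
        if (arg < 0) {
            errno = EINVAL;
            return -1;
        }
        return arg * 2;
    }

    /* Wrapper: the only place the lock is taken and released. */
    static long do_op(long arg)
    {
        long ret;

        pthread_mutex_lock(&lock);
        ret = do_op__locked(arg);
        pthread_mutex_unlock(&lock);
        return ret;
    }

    int main(void)
    {
        return do_op(21) == 42 ? 0 : 1;
    }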
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-17-richard.henderson@linaro.org>
---
linux-user/mmap.c | 62 ++++++++++++++++++++++++++---------------------
1 file changed, 35 insertions(+), 27 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index d11f758d07..b4c3cc65aa 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -490,9 +490,9 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
}
-/* NOTE: all the constants are the HOST ones */
-abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
- int flags, int fd, off_t offset)
+static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
+ int target_prot, int flags,
+ int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
@@ -500,30 +500,27 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int page_flags;
off_t host_offset;
- mmap_lock();
- trace_target_mmap(start, len, target_prot, flags, fd, offset);
-
if (!len) {
errno = EINVAL;
- goto fail;
+ return -1;
}
page_flags = validate_prot_to_pageflags(target_prot);
if (!page_flags) {
errno = EINVAL;
- goto fail;
+ return -1;
}
/* Also check for overflows... */
len = TARGET_PAGE_ALIGN(len);
if (!len) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
if (offset & ~TARGET_PAGE_MASK) {
errno = EINVAL;
- goto fail;
+ return -1;
}
/*
@@ -553,7 +550,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
if (start == (abi_ulong)-1) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
}
@@ -574,7 +571,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
struct stat sb;
if (fstat(fd, &sb) == -1) {
- goto fail;
+ return -1;
}
/* Are we trying to create a map beyond EOF?. */
@@ -601,7 +598,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
p = mmap(g2h_untagged(start), host_len, host_prot,
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
if (p == MAP_FAILED) {
- goto fail;
+ return -1;
}
/* update start so that it points to the file position at 'offset' */
host_start = (uintptr_t)p;
@@ -610,7 +607,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
flags | MAP_FIXED, fd, host_offset);
if (p == MAP_FAILED) {
munmap(g2h_untagged(start), host_len);
- goto fail;
+ return -1;
}
host_start += offset - host_offset;
}
@@ -621,7 +618,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
} else {
if (start & ~TARGET_PAGE_MASK) {
errno = EINVAL;
- goto fail;
+ return -1;
}
last = start + len - 1;
real_last = ROUND_UP(last, host_page_size) - 1;
@@ -633,14 +630,14 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
*/
if (last < start || !guest_range_valid_untagged(start, len)) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
if (flags & MAP_FIXED_NOREPLACE) {
/* Validate that the chosen range is empty. */
if (!page_check_range_empty(start, last)) {
errno = EEXIST;
- goto fail;
+ return -1;
}
/*
@@ -671,17 +668,17 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
if ((flags & MAP_TYPE) == MAP_SHARED
&& (target_prot & PROT_WRITE)) {
errno = EINVAL;
- goto fail;
+ return -1;
}
retaddr = target_mmap(start, len, target_prot | PROT_WRITE,
(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))
| MAP_PRIVATE | MAP_ANONYMOUS,
-1, 0);
if (retaddr == -1) {
- goto fail;
+ return -1;
}
if (pread(fd, g2h_untagged(start), len, offset) == -1) {
- goto fail;
+ return -1;
}
if (!(target_prot & PROT_WRITE)) {
ret = target_mprotect(start, len, target_prot);
@@ -696,14 +693,14 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
/* one single host page */
if (!mmap_frag(real_start, start, last,
target_prot, flags, fd, offset)) {
- goto fail;
+ return -1;
}
goto the_end1;
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
target_prot, flags, fd, offset)) {
- goto fail;
+ return -1;
}
real_start += host_page_size;
}
@@ -713,7 +710,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
if (!mmap_frag(real_page, real_page, last,
target_prot, flags, fd,
offset + real_page - start)) {
- goto fail;
+ return -1;
}
real_last -= host_page_size;
}
@@ -739,7 +736,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
munmap(p, len1);
errno = EEXIST;
}
- goto fail;
+ return -1;
}
passthrough_start = real_start;
passthrough_last = real_last;
@@ -773,11 +770,22 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
qemu_log_unlock(f);
}
}
- mmap_unlock();
return start;
-fail:
+}
+
+/* NOTE: all the constants are the HOST ones */
+abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
+ int flags, int fd, off_t offset)
+{
+ abi_long ret;
+
+ trace_target_mmap(start, len, target_prot, flags, fd, offset);
+ mmap_lock();
+
+ ret = target_mmap__locked(start, len, target_prot, flags, fd, offset);
+
mmap_unlock();
- return -1;
+ return ret;
}
static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 20/39] linux-user: Move some mmap checks outside the lock
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (18 preceding siblings ...)
2024-02-22 20:43 ` [PULL 19/39] linux-user: Split out target_mmap__locked Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 21/39] linux-user: Fix sub-host-page mmap Richard Henderson
` (19 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Basic validation of operands does not require the lock.
Hoist these checks from target_mmap__locked back into target_mmap.
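Roughly the sort of check that can run lock-free, sketched here with a
fixed 4K example page size in place of the TARGET_PAGE_* macros:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define EXAMPLE_PAGE_SIZE  ((uint64_t)4096)
    #define EXAMPLE_PAGE_MASK  (~(EXAMPLE_PAGE_SIZE - 1))
    #define EXAMPLE_PAGE_ALIGN(x) \
        (((x) + EXAMPLE_PAGE_SIZE - 1) & EXAMPLE_PAGE_MASK)

    /* Argument validation that touches no shared state, hence no lock. */
    static bool mmap_args_ok(uint64_t *len, uint64_t offset)
    {
        if (*len == 0) {
            errno = EINVAL;
            return false;
        }
        *len = EXAMPLE_PAGE_ALIGN(*len);
        if (*len == 0) {                 /* alignment wrapped around */
            errno = ENOMEM;
            return false;
        }
        if (offset & ~EXAMPLE_PAGE_MASK) {
            errno = EINVAL;
            return false;
        }
        return true;
    }

    int main(void)
    {
        uint64_t len = 100;
        return mmap_args_ok(&len, 0) && len == EXAMPLE_PAGE_SIZE ? 0 : 1;
    }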
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-18-richard.henderson@linaro.org>
---
linux-user/mmap.c | 107 +++++++++++++++++++++++-----------------------
1 file changed, 53 insertions(+), 54 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index b4c3cc65aa..fbaea832c5 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -491,52 +491,14 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
- int target_prot, int flags,
+ int target_prot, int flags, int page_flags,
int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
- int page_flags;
off_t host_offset;
- if (!len) {
- errno = EINVAL;
- return -1;
- }
-
- page_flags = validate_prot_to_pageflags(target_prot);
- if (!page_flags) {
- errno = EINVAL;
- return -1;
- }
-
- /* Also check for overflows... */
- len = TARGET_PAGE_ALIGN(len);
- if (!len) {
- errno = ENOMEM;
- return -1;
- }
-
- if (offset & ~TARGET_PAGE_MASK) {
- errno = EINVAL;
- return -1;
- }
-
- /*
- * If we're mapping shared memory, ensure we generate code for parallel
- * execution and flush old translations. This will work up to the level
- * supported by the host -- anything that requires EXCP_ATOMIC will not
- * be atomic with respect to an external process.
- */
- if (flags & MAP_SHARED) {
- CPUState *cpu = thread_cpu;
- if (!(cpu->tcg_cflags & CF_PARALLEL)) {
- cpu->tcg_cflags |= CF_PARALLEL;
- tb_flush(cpu);
- }
- }
-
real_start = start & -host_page_size;
host_offset = offset & -host_page_size;
@@ -616,23 +578,9 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_start = start;
passthrough_last = last;
} else {
- if (start & ~TARGET_PAGE_MASK) {
- errno = EINVAL;
- return -1;
- }
last = start + len - 1;
real_last = ROUND_UP(last, host_page_size) - 1;
- /*
- * Test if requested memory area fits target address space
- * It can fail only on 64-bit host with 32-bit target.
- * On any other target/host host mmap() handles this error correctly.
- */
- if (last < start || !guest_range_valid_untagged(start, len)) {
- errno = ENOMEM;
- return -1;
- }
-
if (flags & MAP_FIXED_NOREPLACE) {
/* Validate that the chosen range is empty. */
if (!page_check_range_empty(start, last)) {
@@ -778,13 +726,64 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int flags, int fd, off_t offset)
{
abi_long ret;
+ int page_flags;
trace_target_mmap(start, len, target_prot, flags, fd, offset);
+
+ if (!len) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ page_flags = validate_prot_to_pageflags(target_prot);
+ if (!page_flags) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ /* Also check for overflows... */
+ len = TARGET_PAGE_ALIGN(len);
+ if (!len || len != (size_t)len) {
+ errno = ENOMEM;
+ return -1;
+ }
+
+ if (offset & ~TARGET_PAGE_MASK) {
+ errno = EINVAL;
+ return -1;
+ }
+ if (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE)) {
+ if (start & ~TARGET_PAGE_MASK) {
+ errno = EINVAL;
+ return -1;
+ }
+ if (!guest_range_valid_untagged(start, len)) {
+ errno = ENOMEM;
+ return -1;
+ }
+ }
+
mmap_lock();
- ret = target_mmap__locked(start, len, target_prot, flags, fd, offset);
+ ret = target_mmap__locked(start, len, target_prot, flags,
+ page_flags, fd, offset);
mmap_unlock();
+
+ /*
+ * If we're mapping shared memory, ensure we generate code for parallel
+ * execution and flush old translations. This will work up to the level
+ * supported by the host -- anything that requires EXCP_ATOMIC will not
+ * be atomic with respect to an external process.
+ */
+ if (ret != -1 && (flags & MAP_TYPE) != MAP_PRIVATE) {
+ CPUState *cpu = thread_cpu;
+ if (!(cpu->tcg_cflags & CF_PARALLEL)) {
+ cpu->tcg_cflags |= CF_PARALLEL;
+ tb_flush(cpu);
+ }
+ }
+
return ret;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 21/39] linux-user: Fix sub-host-page mmap
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (19 preceding siblings ...)
2024-02-22 20:43 ` [PULL 20/39] linux-user: Move some mmap checks outside the lock Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 22/39] linux-user: Split out mmap_end Richard Henderson
` (18 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
We cannot skip over the_end1 to the_end, because doing so fails to
record the validity of the guest pages in the interval tree.
Remove "the_end" and rename "the_end1" to "the_end".
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-19-richard.henderson@linaro.org>
---
linux-user/mmap.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index fbaea832c5..48fcdd4a32 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -643,7 +643,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
target_prot, flags, fd, offset)) {
return -1;
}
- goto the_end1;
+ goto the_end;
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
@@ -690,7 +690,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_last = real_last;
}
}
- the_end1:
+ the_end:
if (flags & MAP_ANONYMOUS) {
page_flags |= PAGE_ANON;
}
@@ -708,7 +708,6 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
}
}
shm_region_rm_complete(start, last);
- the_end:
trace_target_mmap_complete(start);
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
FILE *f = qemu_log_trylock();
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 22/39] linux-user: Split out mmap_end
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (20 preceding siblings ...)
2024-02-22 20:43 ` [PULL 21/39] linux-user: Fix sub-host-page mmap Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 23/39] linux-user: Do early mmap placement only for reserved_va Richard Henderson
` (17 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Use a subroutine instead of a goto within target_mmap__locked.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-20-richard.henderson@linaro.org>
---
linux-user/mmap.c | 71 +++++++++++++++++++++++++++--------------------
1 file changed, 41 insertions(+), 30 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 48fcdd4a32..cc983bedbd 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -490,6 +490,43 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
}
+/*
+ * Record a successful mmap within the user-exec interval tree.
+ */
+static abi_long mmap_end(abi_ulong start, abi_ulong last,
+ abi_ulong passthrough_start,
+ abi_ulong passthrough_last,
+ int flags, int page_flags)
+{
+ if (flags & MAP_ANONYMOUS) {
+ page_flags |= PAGE_ANON;
+ }
+ page_flags |= PAGE_RESET;
+ if (passthrough_start > passthrough_last) {
+ page_set_flags(start, last, page_flags);
+ } else {
+ if (start < passthrough_start) {
+ page_set_flags(start, passthrough_start - 1, page_flags);
+ }
+ page_set_flags(passthrough_start, passthrough_last,
+ page_flags | PAGE_PASSTHROUGH);
+ if (passthrough_last < last) {
+ page_set_flags(passthrough_last + 1, last, page_flags);
+ }
+ }
+ shm_region_rm_complete(start, last);
+ trace_target_mmap_complete(start);
+ if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
+ FILE *f = qemu_log_trylock();
+ if (f) {
+ fprintf(f, "page layout changed following mmap\n");
+ page_dump(f);
+ qemu_log_unlock(f);
+ }
+ }
+ return start;
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -632,7 +669,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
ret = target_mprotect(start, len, target_prot);
assert(ret == 0);
}
- goto the_end;
+ return mmap_end(start, last, -1, 0, flags, page_flags);
}
/* handle the start of the mapping */
@@ -643,7 +680,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
target_prot, flags, fd, offset)) {
return -1;
}
- goto the_end;
+ return mmap_end(start, last, -1, 0, flags, page_flags);
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
@@ -690,34 +727,8 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_last = real_last;
}
}
- the_end:
- if (flags & MAP_ANONYMOUS) {
- page_flags |= PAGE_ANON;
- }
- page_flags |= PAGE_RESET;
- if (passthrough_start > passthrough_last) {
- page_set_flags(start, last, page_flags);
- } else {
- if (start < passthrough_start) {
- page_set_flags(start, passthrough_start - 1, page_flags);
- }
- page_set_flags(passthrough_start, passthrough_last,
- page_flags | PAGE_PASSTHROUGH);
- if (passthrough_last < last) {
- page_set_flags(passthrough_last + 1, last, page_flags);
- }
- }
- shm_region_rm_complete(start, last);
- trace_target_mmap_complete(start);
- if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
- FILE *f = qemu_log_trylock();
- if (f) {
- fprintf(f, "page layout changed following mmap\n");
- page_dump(f);
- qemu_log_unlock(f);
- }
- }
- return start;
+ return mmap_end(start, last, passthrough_start, passthrough_last,
+ flags, page_flags);
}
/* NOTE: all the constants are the HOST ones */
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 23/39] linux-user: Do early mmap placement only for reserved_va
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (21 preceding siblings ...)
2024-02-22 20:43 ` [PULL 22/39] linux-user: Split out mmap_end Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 24/39] linux-user: Split out do_munmap Richard Henderson
` (16 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
For reserved_va, choose the placement of all non-fixed maps up front,
then proceed as for MAP_FIXED.
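The general pattern, with the kernel standing in for QEMU's own
mmap_find_vma allocator: pick a hole first, then place everything with
MAP_FIXED:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);

        /* Step 1: find a suitable hole. */
        void *hole = mmap(NULL, len, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (hole == MAP_FAILED) {
            return 1;
        }

        /* Step 2: from here on, behave exactly as for MAP_FIXED. */
        void *p = mmap(hole, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        printf("wanted %p, placed %p\n", hole, p);
        return p == hole ? 0 : 1;
    }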
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-21-richard.henderson@linaro.org>
---
linux-user/mmap.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index cc983bedbd..1bbfeb25b1 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -540,17 +540,19 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
host_offset = offset & -host_page_size;
/*
- * If the user is asking for the kernel to find a location, do that
- * before we truncate the length for mapping files below.
+ * For reserved_va, we are in full control of the allocation.
+ * Find a suitable hole and convert to MAP_FIXED.
*/
- if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
+ if (reserved_va && !(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
host_len = len + offset - host_offset;
- host_len = ROUND_UP(host_len, host_page_size);
- start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
+ start = mmap_find_vma(real_start, host_len,
+ MAX(host_page_size, TARGET_PAGE_SIZE));
if (start == (abi_ulong)-1) {
errno = ENOMEM;
return -1;
}
+ start += offset - host_offset;
+ flags |= MAP_FIXED;
}
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 24/39] linux-user: Split out do_munmap
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (22 preceding siblings ...)
2024-02-22 20:43 ` [PULL 23/39] linux-user: Do early mmap placement only for reserved_va Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 25/39] linux-user: Use do_munmap for target_mmap failure Richard Henderson
` (15 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 23 ++++++++++++++++-------
1 file changed, 16 insertions(+), 7 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 1bbfeb25b1..8ebcca4444 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -267,6 +267,21 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
return ret;
}
+/*
+ * Perform munmap on behalf of the target, with host parameters.
+ * If reserved_va, we must replace the memory reservation.
+ */
+static int do_munmap(void *addr, size_t len)
+{
+ if (reserved_va) {
+ void *ptr = mmap(addr, len, PROT_NONE,
+ MAP_FIXED | MAP_ANONYMOUS
+ | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
+ return ptr == addr ? 0 : -1;
+ }
+ return munmap(addr, len);
+}
+
/* map an incomplete host page */
static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
int prot, int flags, int fd, off_t offset)
@@ -854,13 +869,7 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
real_len = real_last - real_start + 1;
host_start = g2h_untagged(real_start);
- if (reserved_va) {
- void *ptr = mmap(host_start, real_len, PROT_NONE,
- MAP_FIXED | MAP_ANONYMOUS
- | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
- return ptr == host_start ? 0 : -1;
- }
- return munmap(host_start, real_len);
+ return do_munmap(host_start, real_len);
}
int target_munmap(abi_ulong start, abi_ulong len)
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 25/39] linux-user: Use do_munmap for target_mmap failure
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (23 preceding siblings ...)
2024-02-22 20:43 ` [PULL 24/39] linux-user: Split out do_munmap Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 26/39] linux-user: Split out mmap_h_eq_g Richard Henderson
` (14 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé
For the cases in which the host mmap succeeds but does
not yield the desired address, use do_munmap to restore
the reserved_va memory reservation.
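A self-contained illustration of the reserved_va model this depends
on: the guest address space is pre-reserved with a PROT_NONE mapping,
so undoing an allocation means re-instating the placeholder rather
than calling munmap and leaving a hole:

    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t resv = 16 * page;

        /* The reservation: PROT_NONE placeholder over the guest range. */
        void *base = mmap(NULL, resv, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED) {
            return 1;
        }

        /* A real allocation carves pages out of the reservation... */
        void *p = mmap(base, 4 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (p == MAP_FAILED) {
            return 1;
        }

        /* ...and undoing it restores the placeholder, so no unrelated
         * host allocation can slip into the guest's address space. */
        void *q = mmap(p, 4 * page, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE | MAP_FIXED,
                       -1, 0);
        return q == p ? 0 : 1;
    }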
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 8ebcca4444..cbcd31e941 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -326,7 +326,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
flags | MAP_ANONYMOUS, -1, 0);
if (p != host_start) {
if (p != MAP_FAILED) {
- munmap(p, host_page_size);
+ do_munmap(p, host_page_size);
errno = EEXIST;
}
return false;
@@ -622,7 +622,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
p = mmap(g2h_untagged(start), len, host_prot,
flags | MAP_FIXED, fd, host_offset);
if (p == MAP_FAILED) {
- munmap(g2h_untagged(start), host_len);
+ do_munmap(g2h_untagged(start), host_len);
return -1;
}
host_start += offset - host_offset;
@@ -735,7 +735,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
flags, fd, offset1);
if (p != want_p) {
if (p != MAP_FAILED) {
- munmap(p, len1);
+ do_munmap(p, len1);
errno = EEXIST;
}
return -1;
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 26/39] linux-user: Split out mmap_h_eq_g
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (24 preceding siblings ...)
2024-02-22 20:43 ` [PULL 25/39] linux-user: Use do_munmap for target_mmap failure Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 27/39] linux-user: Split out mmap_h_lt_g Richard Henderson
` (13 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Move the MAP_FIXED_NOREPLACE check for reserved_va earlier.
Move the computation of host_prot earlier.
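The MAP_FIXED_NOREPLACE fallback used by the new helper, as a
standalone fragment: a kernel that does not know the flag ignores it
and picks another address, which the caller converts to EEXIST:

    #include <errno.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    /* 'flags' is assumed to already carry MAP_FIXED_NOREPLACE where the
     * host headers define it; older kernels silently ignore it. */
    static void *mmap_noreplace(void *want, size_t len, int prot,
                                int flags, int fd, off_t offset)
    {
        void *p = mmap(want, len, prot, flags, fd, offset);

        if (p != MAP_FAILED && p != want) {
            /* Flag unsupported: the kernel placed us elsewhere. */
            munmap(p, len);
            errno = EEXIST;
            return MAP_FAILED;
        }
        return p;
    }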
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-22-richard.henderson@linaro.org>
---
linux-user/mmap.c | 68 ++++++++++++++++++++++++++++++++++++++---------
1 file changed, 55 insertions(+), 13 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index cbcd31e941..d3556bcc14 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -542,6 +542,33 @@ static abi_long mmap_end(abi_ulong start, abi_ulong last,
return start;
}
+/*
+ * Special case host page size == target page size,
+ * where there are no edge conditions.
+ */
+static abi_long mmap_h_eq_g(abi_ulong start, abi_ulong len,
+ int host_prot, int flags, int page_flags,
+ int fd, off_t offset)
+{
+ void *p, *want_p = g2h_untagged(start);
+ abi_ulong last;
+
+ p = mmap(want_p, len, host_prot, flags, fd, offset);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+ /* If the host kernel does not support MAP_FIXED_NOREPLACE, emulate. */
+ if ((flags & MAP_FIXED_NOREPLACE) && p != want_p) {
+ do_munmap(p, len);
+ errno = EEXIST;
+ return -1;
+ }
+
+ start = h2g(p);
+ last = start + len - 1;
+ return mmap_end(start, last, start, last, flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -550,6 +577,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
off_t host_offset;
+ int host_prot;
real_start = start & -host_page_size;
host_offset = offset & -host_page_size;
@@ -558,16 +586,33 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
* For reserved_va, we are in full control of the allocation.
* Find a suitable hole and convert to MAP_FIXED.
*/
- if (reserved_va && !(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
- host_len = len + offset - host_offset;
- start = mmap_find_vma(real_start, host_len,
- MAX(host_page_size, TARGET_PAGE_SIZE));
- if (start == (abi_ulong)-1) {
- errno = ENOMEM;
- return -1;
+ if (reserved_va) {
+ if (flags & MAP_FIXED_NOREPLACE) {
+ /* Validate that the chosen range is empty. */
+ if (!page_check_range_empty(start, start + len - 1)) {
+ errno = EEXIST;
+ return -1;
+ }
+ flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
+ } else if (!(flags & MAP_FIXED)) {
+ size_t real_len = len + offset - host_offset;
+ abi_ulong align = MAX(host_page_size, TARGET_PAGE_SIZE);
+
+ start = mmap_find_vma(real_start, real_len, align);
+ if (start == (abi_ulong)-1) {
+ errno = ENOMEM;
+ return -1;
+ }
+ start += offset - host_offset;
+ flags |= MAP_FIXED;
}
- start += offset - host_offset;
- flags |= MAP_FIXED;
+ }
+
+ host_prot = target_to_host_prot(target_prot);
+
+ if (host_page_size == TARGET_PAGE_SIZE) {
+ return mmap_h_eq_g(start, len, host_prot, flags,
+ page_flags, fd, offset);
}
/*
@@ -603,12 +648,10 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
uintptr_t host_start;
- int host_prot;
void *p;
host_len = len + offset - host_offset;
host_len = ROUND_UP(host_len, host_page_size);
- host_prot = target_to_host_prot(target_prot);
/* Note: we prefer to control the mapping address. */
p = mmap(g2h_untagged(start), host_len, host_prot,
@@ -731,8 +774,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
len1 = real_last - real_start + 1;
want_p = g2h_untagged(real_start);
- p = mmap(want_p, len1, target_to_host_prot(target_prot),
- flags, fd, offset1);
+ p = mmap(want_p, len1, host_prot, flags, fd, offset1);
if (p != want_p) {
if (p != MAP_FAILED) {
do_munmap(p, len1);
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 27/39] linux-user: Split out mmap_h_lt_g
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (25 preceding siblings ...)
2024-02-22 20:43 ` [PULL 26/39] linux-user: Split out mmap_h_eq_g Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 28/39] linux-user: Split out mmap_h_gt_g Richard Henderson
` (12 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Work much harder to get alignment and mapping beyond the end
of the file correct. Both are exercised by our test-mmap for
alpha (8k pages) on any 4k page host.
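A worked example of the end-of-file adjustment with hypothetical
sizes (100-byte file, 4K host pages, 8K guest pages), mirroring the
fileend_adj computation below:

    #include <stdio.h>

    int main(void)
    {
        long long st_size = 100;    /* file size */
        long long offset  = 0;      /* requested file offset */
        long long len     = 8192;   /* one guest page of mapping */
        long long fileend_adj = 0;

        if (offset + len > st_size) {
            /* The tail beyond EOF must come from anonymous memory;
             * on a 4K host, touching file-backed pages past the last
             * page covering the file would raise SIGBUS. */
            fileend_adj = offset + len - st_size;
        }
        printf("file-backed: %lld bytes, anonymous tail: %lld bytes\n",
               len - fileend_adj, fileend_adj);
        return 0;
    }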
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-23-richard.henderson@linaro.org>
---
linux-user/mmap.c | 184 ++++++++++++++++++++++++++++++++++++++--------
1 file changed, 153 insertions(+), 31 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index d3556bcc14..ff8f9f7ed0 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -569,6 +569,156 @@ static abi_long mmap_h_eq_g(abi_ulong start, abi_ulong len,
return mmap_end(start, last, start, last, flags, page_flags);
}
+/*
+ * Special case host page size < target page size.
+ *
+ * The two special cases are increased guest alignment, and mapping
+ * past the end of a file.
+ *
+ * When mapping files into a memory area larger than the file,
+ * accesses to pages beyond the file size will cause a SIGBUS.
+ *
+ * For example, if mmaping a file of 100 bytes on a host with 4K
+ * pages emulating a target with 8K pages, the target expects to
+ * be able to access the first 8K. But the host will trap us on
+ * any access beyond 4K.
+ *
+ * When emulating a target with a larger page-size than the host's,
+ * we may need to truncate file maps at EOF and add extra anonymous
+ * pages up to the targets page boundary.
+ *
+ * This workaround only works for files that do not change.
+ * If the file is later extended (e.g. ftruncate), the SIGBUS
+ * vanishes and the proper behaviour is that changes within the
+ * anon page should be reflected in the file.
+ *
+ * However, this case is rather common with executable images,
+ * so the workaround is important for even trivial tests, whereas
+ * the mmap of a file being extended is less common.
+ */
+static abi_long mmap_h_lt_g(abi_ulong start, abi_ulong len, int host_prot,
+ int mmap_flags, int page_flags, int fd,
+ off_t offset, int host_page_size)
+{
+ void *p, *want_p = g2h_untagged(start);
+ off_t fileend_adj = 0;
+ int flags = mmap_flags;
+ abi_ulong last, pass_last;
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ struct stat sb;
+
+ if (fstat(fd, &sb) == -1) {
+ return -1;
+ }
+ if (offset >= sb.st_size) {
+ /*
+ * The entire map is beyond the end of the file.
+ * Transform it to an anonymous mapping.
+ */
+ flags |= MAP_ANONYMOUS;
+ fd = -1;
+ offset = 0;
+ } else if (offset + len > sb.st_size) {
+ /*
+ * A portion of the map is beyond the end of the file.
+ * Truncate the file portion of the allocation.
+ */
+ fileend_adj = offset + len - sb.st_size;
+ }
+ }
+
+ if (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE)) {
+ if (fileend_adj) {
+ p = mmap(want_p, len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ } else {
+ p = mmap(want_p, len, host_prot, flags, fd, offset);
+ }
+ if (p != want_p) {
+ if (p != MAP_FAILED) {
+ /* Host does not support MAP_FIXED_NOREPLACE: emulate. */
+ do_munmap(p, len);
+ errno = EEXIST;
+ }
+ return -1;
+ }
+
+ if (fileend_adj) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED,
+ fd, offset);
+
+ if (t == MAP_FAILED) {
+ int save_errno = errno;
+
+ /*
+ * We failed a map over the top of the successful anonymous
+ * mapping above. The only failure mode is running out of VMAs,
+ * and there's nothing that we can do to detect that earlier.
+ * If we have replaced an existing mapping with MAP_FIXED,
+ * then we cannot properly recover. It's a coin toss whether
+ * it would be better to exit or continue here.
+ */
+ if (!(flags & MAP_FIXED_NOREPLACE) &&
+ !page_check_range_empty(start, start + len - 1)) {
+ qemu_log("QEMU target_mmap late failure: %s",
+ strerror(save_errno));
+ }
+
+ do_munmap(want_p, len);
+ errno = save_errno;
+ return -1;
+ }
+ }
+ } else {
+ size_t host_len, part_len;
+
+ /*
+ * Take care to align the host memory. Perform a larger anonymous
+ * allocation and extract the aligned portion. Remap the file on
+ * top of that.
+ */
+ host_len = len + TARGET_PAGE_SIZE - host_page_size;
+ p = mmap(want_p, host_len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+
+ part_len = (uintptr_t)p & (TARGET_PAGE_SIZE - 1);
+ if (part_len) {
+ part_len = TARGET_PAGE_SIZE - part_len;
+ do_munmap(p, part_len);
+ p += part_len;
+ host_len -= part_len;
+ }
+ if (len < host_len) {
+ do_munmap(p + len, host_len - len);
+ }
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ flags | MAP_FIXED, fd, offset);
+
+ if (t == MAP_FAILED) {
+ int save_errno = errno;
+ do_munmap(p, len);
+ errno = save_errno;
+ return -1;
+ }
+ }
+
+ start = h2g(p);
+ }
+
+ last = start + len - 1;
+ if (fileend_adj) {
+ pass_last = ROUND_UP(last - fileend_adj, host_page_size) - 1;
+ } else {
+ pass_last = last;
+ }
+ return mmap_end(start, last, start, pass_last, mmap_flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -613,37 +763,9 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
if (host_page_size == TARGET_PAGE_SIZE) {
return mmap_h_eq_g(start, len, host_prot, flags,
page_flags, fd, offset);
- }
-
- /*
- * When mapping files into a memory area larger than the file, accesses
- * to pages beyond the file size will cause a SIGBUS.
- *
- * For example, if mmaping a file of 100 bytes on a host with 4K pages
- * emulating a target with 8K pages, the target expects to be able to
- * access the first 8K. But the host will trap us on any access beyond
- * 4K.
- *
- * When emulating a target with a larger page-size than the hosts, we
- * may need to truncate file maps at EOF and add extra anonymous pages
- * up to the targets page boundary.
- */
- if (host_page_size < TARGET_PAGE_SIZE && !(flags & MAP_ANONYMOUS)) {
- struct stat sb;
-
- if (fstat(fd, &sb) == -1) {
- return -1;
- }
-
- /* Are we trying to create a map beyond EOF?. */
- if (offset + len > sb.st_size) {
- /*
- * If so, truncate the file map at eof aligned with
- * the hosts real pagesize. Additional anonymous maps
- * will be created beyond EOF.
- */
- len = ROUND_UP(sb.st_size - offset, host_page_size);
- }
+ } else if (host_page_size < TARGET_PAGE_SIZE) {
+ return mmap_h_lt_g(start, len, host_prot, flags,
+ page_flags, fd, offset, host_page_size);
}
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 28/39] linux-user: Split out mmap_h_gt_g
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (26 preceding siblings ...)
2024-02-22 20:43 ` [PULL 27/39] linux-user: Split out mmap_h_lt_g Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 29/39] tests/tcg: Remove run-test-mmap-* Richard Henderson
` (11 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-24-richard.henderson@linaro.org>
---
linux-user/mmap.c | 288 ++++++++++++++++++++++------------------------
1 file changed, 139 insertions(+), 149 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index ff8f9f7ed0..82f4026283 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -282,7 +282,16 @@ static int do_munmap(void *addr, size_t len)
return munmap(addr, len);
}
-/* map an incomplete host page */
+/*
+ * Map an incomplete host page.
+ *
+ * Here be dragons. This case will not work if there is an existing
+ * overlapping host page, which is file mapped, and for which the mapping
+ * is beyond the end of the file. In that case, we will see SIGBUS when
+ * trying to write a portion of this page.
+ *
+ * FIXME: Work around this with a temporary signal handler and longjmp.
+ */
static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
int prot, int flags, int fd, off_t offset)
{
@@ -719,19 +728,138 @@ static abi_long mmap_h_lt_g(abi_ulong start, abi_ulong len, int host_prot,
return mmap_end(start, last, start, pass_last, mmap_flags, page_flags);
}
+/*
+ * Special case host page size > target page size.
+ *
+ * The two special cases are address and file offsets that are valid
+ * for the guest that cannot be directly represented by the host.
+ */
+static abi_long mmap_h_gt_g(abi_ulong start, abi_ulong len,
+ int target_prot, int host_prot,
+ int flags, int page_flags, int fd,
+ off_t offset, int host_page_size)
+{
+ void *p, *want_p = g2h_untagged(start);
+ off_t host_offset = offset & -host_page_size;
+ abi_ulong last, real_start, real_last;
+ bool misaligned_offset = false;
+ size_t host_len;
+
+ if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
+ /*
+ * Adjust the offset to something representable on the host.
+ */
+ host_len = len + offset - host_offset;
+ p = mmap(want_p, host_len, host_prot, flags, fd, host_offset);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+
+ /* Update start to the file position at offset. */
+ p += offset - host_offset;
+
+ start = h2g(p);
+ last = start + len - 1;
+ return mmap_end(start, last, start, last, flags, page_flags);
+ }
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ misaligned_offset = (start ^ offset) & (host_page_size - 1);
+
+ /*
+ * The fallback for misalignment is a private mapping + read.
+ * This carries none of semantics required of MAP_SHARED.
+ */
+ if (misaligned_offset && (flags & MAP_TYPE) != MAP_PRIVATE) {
+ errno = EINVAL;
+ return -1;
+ }
+ }
+
+ last = start + len - 1;
+ real_start = start & -host_page_size;
+ real_last = ROUND_UP(last, host_page_size) - 1;
+
+ /*
+ * Handle the start and end of the mapping.
+ */
+ if (real_start < start) {
+ abi_ulong real_page_last = real_start + host_page_size - 1;
+ if (last <= real_page_last) {
+ /* Entire allocation a subset of one host page. */
+ if (!mmap_frag(real_start, start, last, target_prot,
+ flags, fd, offset)) {
+ return -1;
+ }
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+ }
+
+ if (!mmap_frag(real_start, start, real_page_last, target_prot,
+ flags, fd, offset)) {
+ return -1;
+ }
+ real_start = real_page_last + 1;
+ }
+
+ if (last < real_last) {
+ abi_ulong real_page_start = real_last - host_page_size + 1;
+ if (!mmap_frag(real_page_start, real_page_start, last,
+ target_prot, flags, fd,
+ offset + real_page_start - start)) {
+ return -1;
+ }
+ real_last = real_page_start - 1;
+ }
+
+ if (real_start > real_last) {
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+ }
+
+ /*
+ * Handle the middle of the mapping.
+ */
+
+ host_len = real_last - real_start + 1;
+ want_p += real_start - start;
+
+ if (flags & MAP_ANONYMOUS) {
+ p = mmap(want_p, host_len, host_prot, flags, -1, 0);
+ } else if (!misaligned_offset) {
+ p = mmap(want_p, host_len, host_prot, flags, fd,
+ offset + real_start - start);
+ } else {
+ p = mmap(want_p, host_len, host_prot | PROT_WRITE,
+ flags | MAP_ANONYMOUS, -1, 0);
+ }
+ if (p != want_p) {
+ if (p != MAP_FAILED) {
+ do_munmap(p, host_len);
+ errno = EEXIST;
+ }
+ return -1;
+ }
+
+ if (misaligned_offset) {
+ /* TODO: The read could be short. */
+ if (pread(fd, p, host_len, offset + real_start - start) != host_len) {
+ do_munmap(p, host_len);
+ return -1;
+ }
+ if (!(host_prot & PROT_WRITE)) {
+ mprotect(p, host_len, host_prot);
+ }
+ }
+
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
- abi_ulong ret, last, real_start, real_last, retaddr, host_len;
- abi_ulong passthrough_start = -1, passthrough_last = 0;
- off_t host_offset;
int host_prot;
- real_start = start & -host_page_size;
- host_offset = offset & -host_page_size;
-
/*
* For reserved_va, we are in full control of the allocation.
* Find a suitable hole and convert to MAP_FIXED.
@@ -745,6 +873,8 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
}
flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
} else if (!(flags & MAP_FIXED)) {
+ abi_ulong real_start = start & -host_page_size;
+ off_t host_offset = offset & -host_page_size;
size_t real_len = len + offset - host_offset;
abi_ulong align = MAX(host_page_size, TARGET_PAGE_SIZE);
@@ -766,150 +896,10 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
} else if (host_page_size < TARGET_PAGE_SIZE) {
return mmap_h_lt_g(start, len, host_prot, flags,
page_flags, fd, offset, host_page_size);
- }
-
- if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
- uintptr_t host_start;
- void *p;
-
- host_len = len + offset - host_offset;
- host_len = ROUND_UP(host_len, host_page_size);
-
- /* Note: we prefer to control the mapping address. */
- p = mmap(g2h_untagged(start), host_len, host_prot,
- flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
- if (p == MAP_FAILED) {
- return -1;
- }
- /* update start so that it points to the file position at 'offset' */
- host_start = (uintptr_t)p;
- if (!(flags & MAP_ANONYMOUS)) {
- p = mmap(g2h_untagged(start), len, host_prot,
- flags | MAP_FIXED, fd, host_offset);
- if (p == MAP_FAILED) {
- do_munmap(g2h_untagged(start), host_len);
- return -1;
- }
- host_start += offset - host_offset;
- }
- start = h2g(host_start);
- last = start + len - 1;
- passthrough_start = start;
- passthrough_last = last;
} else {
- last = start + len - 1;
- real_last = ROUND_UP(last, host_page_size) - 1;
-
- if (flags & MAP_FIXED_NOREPLACE) {
- /* Validate that the chosen range is empty. */
- if (!page_check_range_empty(start, last)) {
- errno = EEXIST;
- return -1;
- }
-
- /*
- * With reserved_va, the entire address space is mmaped in the
- * host to ensure it isn't accidentally used for something else.
- * We have just checked that the guest address is not mapped
- * within the guest, but need to replace the host reservation.
- *
- * Without reserved_va, despite the guest address check above,
- * keep MAP_FIXED_NOREPLACE so that the guest does not overwrite
- * any host address mappings.
- */
- if (reserved_va) {
- flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
- }
- }
-
- /*
- * worst case: we cannot map the file because the offset is not
- * aligned, so we read it
- */
- if (!(flags & MAP_ANONYMOUS) &&
- (offset & (host_page_size - 1)) != (start & (host_page_size - 1))) {
- /*
- * msync() won't work here, so we return an error if write is
- * possible while it is a shared mapping
- */
- if ((flags & MAP_TYPE) == MAP_SHARED
- && (target_prot & PROT_WRITE)) {
- errno = EINVAL;
- return -1;
- }
- retaddr = target_mmap(start, len, target_prot | PROT_WRITE,
- (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))
- | MAP_PRIVATE | MAP_ANONYMOUS,
- -1, 0);
- if (retaddr == -1) {
- return -1;
- }
- if (pread(fd, g2h_untagged(start), len, offset) == -1) {
- return -1;
- }
- if (!(target_prot & PROT_WRITE)) {
- ret = target_mprotect(start, len, target_prot);
- assert(ret == 0);
- }
- return mmap_end(start, last, -1, 0, flags, page_flags);
- }
-
- /* handle the start of the mapping */
- if (start > real_start) {
- if (real_last == real_start + host_page_size - 1) {
- /* one single host page */
- if (!mmap_frag(real_start, start, last,
- target_prot, flags, fd, offset)) {
- return -1;
- }
- return mmap_end(start, last, -1, 0, flags, page_flags);
- }
- if (!mmap_frag(real_start, start,
- real_start + host_page_size - 1,
- target_prot, flags, fd, offset)) {
- return -1;
- }
- real_start += host_page_size;
- }
- /* handle the end of the mapping */
- if (last < real_last) {
- abi_ulong real_page = real_last - host_page_size + 1;
- if (!mmap_frag(real_page, real_page, last,
- target_prot, flags, fd,
- offset + real_page - start)) {
- return -1;
- }
- real_last -= host_page_size;
- }
-
- /* map the middle (easier) */
- if (real_start < real_last) {
- void *p, *want_p;
- off_t offset1;
- size_t len1;
-
- if (flags & MAP_ANONYMOUS) {
- offset1 = 0;
- } else {
- offset1 = offset + real_start - start;
- }
- len1 = real_last - real_start + 1;
- want_p = g2h_untagged(real_start);
-
- p = mmap(want_p, len1, host_prot, flags, fd, offset1);
- if (p != want_p) {
- if (p != MAP_FAILED) {
- do_munmap(p, len1);
- errno = EEXIST;
- }
- return -1;
- }
- passthrough_start = real_start;
- passthrough_last = real_last;
- }
+ return mmap_h_gt_g(start, len, target_prot, host_prot, flags,
+ page_flags, fd, offset, host_page_size);
}
- return mmap_end(start, last, passthrough_start, passthrough_last,
- flags, page_flags);
}
/* NOTE: all the constants are the HOST ones */
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 29/39] tests/tcg: Remove run-test-mmap-*
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (27 preceding siblings ...)
2024-02-22 20:43 ` [PULL 28/39] linux-user: Split out mmap_h_gt_g Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 30/39] tests/tcg: Extend file in linux-madvise.c Richard Henderson
` (10 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
These tests are confused, because -p does not change
the guest page size, only the host page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-25-richard.henderson@linaro.org>
---
tests/tcg/alpha/Makefile.target | 3 ---
tests/tcg/arm/Makefile.target | 3 ---
tests/tcg/hppa/Makefile.target | 3 ---
tests/tcg/i386/Makefile.target | 3 ---
tests/tcg/m68k/Makefile.target | 3 ---
tests/tcg/multiarch/Makefile.target | 9 ---------
tests/tcg/ppc/Makefile.target | 12 ------------
tests/tcg/sh4/Makefile.target | 3 ---
tests/tcg/sparc64/Makefile.target | 6 ------
9 files changed, 45 deletions(-)
delete mode 100644 tests/tcg/ppc/Makefile.target
delete mode 100644 tests/tcg/sparc64/Makefile.target
diff --git a/tests/tcg/alpha/Makefile.target b/tests/tcg/alpha/Makefile.target
index b94500a7d9..fdd7ddf64e 100644
--- a/tests/tcg/alpha/Makefile.target
+++ b/tests/tcg/alpha/Makefile.target
@@ -13,6 +13,3 @@ test-cmov: test-cond.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $< -o $@ $(LDFLAGS)
run-test-cmov: test-cmov
-
-# On Alpha Linux only supports 8k pages
-EXTRA_RUNS+=run-test-mmap-8192
diff --git a/tests/tcg/arm/Makefile.target b/tests/tcg/arm/Makefile.target
index 3473f4619e..0a1965fce7 100644
--- a/tests/tcg/arm/Makefile.target
+++ b/tests/tcg/arm/Makefile.target
@@ -79,6 +79,3 @@ sha512-vector: sha512.c
ARM_TESTS += sha512-vector
TESTS += $(ARM_TESTS)
-
-# On ARM Linux only supports 4k pages
-EXTRA_RUNS+=run-test-mmap-4096
diff --git a/tests/tcg/hppa/Makefile.target b/tests/tcg/hppa/Makefile.target
index cdd0d572a7..ea5ae2186d 100644
--- a/tests/tcg/hppa/Makefile.target
+++ b/tests/tcg/hppa/Makefile.target
@@ -2,9 +2,6 @@
#
# HPPA specific tweaks - specifically masking out broken tests
-# On parisc Linux supports 4K/16K/64K (but currently only 4k works)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-16384 run-test-mmap-65536
-
# This triggers failures for hppa-linux about 1% of the time
# HPPA is the odd target that can't use the sigtramp page;
# it requires the full vdso with dwarf2 unwind info.
diff --git a/tests/tcg/i386/Makefile.target b/tests/tcg/i386/Makefile.target
index 9906f9e116..bbe2c44b2a 100644
--- a/tests/tcg/i386/Makefile.target
+++ b/tests/tcg/i386/Makefile.target
@@ -71,9 +71,6 @@ endif
I386_TESTS:=$(filter-out $(SKIP_I386_TESTS), $(ALL_X86_TESTS))
TESTS=$(MULTIARCH_TESTS) $(I386_TESTS)
-# On i386 and x86_64 Linux only supports 4k pages (large pages are a different hack)
-EXTRA_RUNS+=run-test-mmap-4096
-
sha512-sse: CFLAGS=-msse4.1 -O3
sha512-sse: sha512.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $< -o $@ $(LDFLAGS)
diff --git a/tests/tcg/m68k/Makefile.target b/tests/tcg/m68k/Makefile.target
index 6ff214e60a..33f7b1b127 100644
--- a/tests/tcg/m68k/Makefile.target
+++ b/tests/tcg/m68k/Makefile.target
@@ -5,6 +5,3 @@
VPATH += $(SRC_PATH)/tests/tcg/m68k
TESTS += trap denormal
-
-# On m68k Linux supports 4k and 8k pages (but 8k is currently broken)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-8192
diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
index e10951a801..f11f3b084d 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -51,18 +51,9 @@ run-plugin-vma-pthread-with-%: vma-pthread
$(call skip-test, $<, "flaky on CI?")
endif
-# We define the runner for test-mmap after the individual
-# architectures have defined their supported pages sizes. If no
-# additional page sizes are defined we only run the default test.
-
-# default case (host page size)
run-test-mmap: test-mmap
$(call run-test, test-mmap, $(QEMU) $<, $< (default))
-# additional page sizes (defined by each architecture adding to EXTRA_RUNS)
-run-test-mmap-%: test-mmap
- $(call run-test, test-mmap-$*, $(QEMU) -p $* $<, $< ($* byte pages))
-
ifneq ($(GDB),)
GDB_SCRIPT=$(SRC_PATH)/tests/guest-debug/run-test.py
diff --git a/tests/tcg/ppc/Makefile.target b/tests/tcg/ppc/Makefile.target
deleted file mode 100644
index f5e08c7376..0000000000
--- a/tests/tcg/ppc/Makefile.target
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- Mode: makefile -*-
-#
-# PPC - included from tests/tcg/Makefile
-#
-
-ifneq (,$(findstring 64,$(TARGET_NAME)))
-# On PPC64 Linux can be configured with 4k (default) or 64k pages (currently broken)
-EXTRA_RUNS+=run-test-mmap-4096 #run-test-mmap-65536
-else
-# On PPC32 Linux supports 4K/16K/64K/256K (but currently only 4k works)
-EXTRA_RUNS+=run-test-mmap-4096 #run-test-mmap-16384 run-test-mmap-65536 run-test-mmap-262144
-endif
diff --git a/tests/tcg/sh4/Makefile.target b/tests/tcg/sh4/Makefile.target
index 47c39a44b6..16eaa850a8 100644
--- a/tests/tcg/sh4/Makefile.target
+++ b/tests/tcg/sh4/Makefile.target
@@ -3,9 +3,6 @@
# SuperH specific tweaks
#
-# On sh Linux supports 4k, 8k, 16k and 64k pages (but only 4k currently works)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-8192 run-test-mmap-16384 run-test-mmap-65536
-
# This triggers failures for sh4-linux about 10% of the time.
# Random SIGSEGV at unpredictable guest address, cause unknown.
run-signals: signals
diff --git a/tests/tcg/sparc64/Makefile.target b/tests/tcg/sparc64/Makefile.target
deleted file mode 100644
index 408dace783..0000000000
--- a/tests/tcg/sparc64/Makefile.target
+++ /dev/null
@@ -1,6 +0,0 @@
-# -*- Mode: makefile -*-
-#
-# sparc specific tweaks
-
-# On Sparc64 Linux support 8k pages
-EXTRA_RUNS+=run-test-mmap-8192
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 30/39] tests/tcg: Extend file in linux-madvise.c
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (28 preceding siblings ...)
2024-02-22 20:43 ` [PULL 29/39] tests/tcg: Remove run-test-mmap-* Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 31/39] *-user: Deprecate and disable -p pagesize Richard Henderson
` (9 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
When the guest page size > host page size, this test can fail
due to the SIGBUS protection hack. Avoid this by making
sure that the file size is at least one guest page.
Visible with an alpha guest on an x86_64 host.
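The shape of the fix as a standalone program, with 8192 standing in
for a guest page size larger than the host's:

    #include <assert.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t guest_pagesize = 8192;   /* hypothetical, > host page */
        char c = 1;
        FILE *f = tmpfile();
        int fd, ret;
        ssize_t written;
        void *page;

        assert(f != NULL);
        fd = fileno(f);

        written = write(fd, &c, sizeof(c));
        assert(written == (ssize_t)sizeof(c));

        /* Grow the file to a full guest page so that a page-sized
         * mapping never touches host pages beyond EOF. */
        ret = ftruncate(fd, (off_t)guest_pagesize);
        assert(ret == 0);

        page = mmap(NULL, guest_pagesize, PROT_READ, MAP_PRIVATE, fd, 0);
        assert(page != MAP_FAILED);
        return 0;
    }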
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-26-richard.henderson@linaro.org>
---
tests/tcg/multiarch/linux/linux-madvise.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tests/tcg/multiarch/linux/linux-madvise.c b/tests/tcg/multiarch/linux/linux-madvise.c
index 29d0997e68..539fb3b772 100644
--- a/tests/tcg/multiarch/linux/linux-madvise.c
+++ b/tests/tcg/multiarch/linux/linux-madvise.c
@@ -42,6 +42,8 @@ static void test_file(void)
assert(ret == 0);
written = write(fd, &c, sizeof(c));
assert(written == sizeof(c));
+ ret = ftruncate(fd, pagesize);
+ assert(ret == 0);
page = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE, fd, 0);
assert(page != MAP_FAILED);
--
2.34.1
^ permalink raw reply related [flat|nested] 42+ messages in thread
* [PULL 31/39] *-user: Deprecate and disable -p pagesize
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (29 preceding siblings ...)
2024-02-22 20:43 ` [PULL 30/39] tests/tcg: Extend file in linux-madvise.c Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 32/39] cpu: Remove page_size_init Richard Henderson
` (8 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Warner Losh, Philippe Mathieu-Daudé, Helge Deller
This option purports to control the host page size. Judging from the
misuse in our own testsuite, it is easily confused with the guest page size.
The only thing that occurs when changing the host page size is
that stuff breaks, because one cannot actually change the host
page size. Therefore reject all but the no-op setting as part
of the deprecation process.
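To illustrate the practical effect of the new check (the values below are
assumptions, not from the patch): on a host with 4k pages, "-p 4096" remains
a silent no-op, while any other value now only produces a deprecation warning
instead of exiting, and the host page size stays unchanged:

    unsigned size, want = qemu_real_host_page_size();   /* 4096 on this host */

    /* e.g. the user passed "-p 16384" */
    if (qemu_strtoui("16384", NULL, 10, &size) || size != want) {
        warn_report("Deprecated page size option cannot "
                    "change host page size (%u)", want);  /* warn and carry on */
    }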
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-27-richard.henderson@linaro.org>
---
docs/about/deprecated.rst | 10 ++++++++++
docs/user/main.rst | 3 ---
bsd-user/main.c | 10 +++++-----
linux-user/main.c | 12 ++++++------
4 files changed, 21 insertions(+), 14 deletions(-)
diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 5a2305ccd6..3074303b9c 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -63,6 +63,16 @@ as short-form boolean values, and passed to plugins as ``arg_name=on``.
However, short-form booleans are deprecated and full explicit ``arg_name=on``
form is preferred.
+User-mode emulator command line arguments
+-----------------------------------------
+
+``-p`` (since 9.0)
+''''''''''''''''''
+
+The ``-p`` option pretends to control the host page size. However,
+it is not possible to change the host page size, and using the
+option only causes failures.
+
QEMU Machine Protocol (QMP) commands
------------------------------------
diff --git a/docs/user/main.rst b/docs/user/main.rst
index 7e7ad07409..d5fbb78d3c 100644
--- a/docs/user/main.rst
+++ b/docs/user/main.rst
@@ -87,9 +87,6 @@ Debug options:
Activate logging of the specified items (use '-d help' for a list of
log items)
-``-p pagesize``
- Act as if the host page size was 'pagesize' bytes
-
``-g port``
Wait gdb connection to port
diff --git a/bsd-user/main.c b/bsd-user/main.c
index e5efb7b845..521b58b880 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -364,11 +364,11 @@ int main(int argc, char **argv)
} else if (!strcmp(r, "L")) {
interp_prefix = argv[optind++];
} else if (!strcmp(r, "p")) {
- qemu_host_page_size = atoi(argv[optind++]);
- if (qemu_host_page_size == 0 ||
- (qemu_host_page_size & (qemu_host_page_size - 1)) != 0) {
- fprintf(stderr, "page size must be a power of two\n");
- exit(1);
+ unsigned size, want = qemu_real_host_page_size();
+
+ if (qemu_strtoui(arg, NULL, 10, &size) || size != want) {
+ warn_report("Deprecated page size option cannot "
+ "change host page size (%u)", want);
}
} else if (!strcmp(r, "g")) {
gdbstub = g_strdup(argv[optind++]);
diff --git a/linux-user/main.c b/linux-user/main.c
index e540acb84a..bad03f06d3 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -332,11 +332,11 @@ static void handle_arg_ld_prefix(const char *arg)
static void handle_arg_pagesize(const char *arg)
{
- qemu_host_page_size = atoi(arg);
- if (qemu_host_page_size == 0 ||
- (qemu_host_page_size & (qemu_host_page_size - 1)) != 0) {
- fprintf(stderr, "page size must be a power of two\n");
- exit(EXIT_FAILURE);
+ unsigned size, want = qemu_real_host_page_size();
+
+ if (qemu_strtoui(arg, NULL, 10, &size) || size != want) {
+ warn_report("Deprecated page size option cannot "
+ "change host page size (%u)", want);
}
}
@@ -496,7 +496,7 @@ static const struct qemu_argument arg_table[] = {
{"D", "QEMU_LOG_FILENAME", true, handle_arg_log_filename,
"logfile", "write logs to 'logfile' (default stderr)"},
{"p", "QEMU_PAGESIZE", true, handle_arg_pagesize,
- "pagesize", "set the host page size to 'pagesize'"},
+ "pagesize", "deprecated change to host page size"},
{"one-insn-per-tb",
"QEMU_ONE_INSN_PER_TB", false, handle_arg_one_insn_per_tb,
"", "run with one guest instruction per emulated TB"},
--
2.34.1
* [PULL 32/39] cpu: Remove page_size_init
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (30 preceding siblings ...)
2024-02-22 20:43 ` [PULL 31/39] *-user: Deprecate and disable -p pagesize Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 33/39] accel/tcg: Disconnect TargetPageDataNode from page size Richard Henderson
` (7 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel
Cc: Warner Losh, Philippe Mathieu-Daudé, Ilya Leoshkevich,
Helge Deller
Move qemu_host_page_{size,mask} and HOST_PAGE_ALIGN into bsd-user.
They should be removed from bsd-user as well, but defer that cleanup.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-28-richard.henderson@linaro.org>
---
bsd-user/qemu.h | 7 +++++++
include/exec/cpu-common.h | 7 -------
include/hw/core/cpu.h | 2 --
accel/tcg/translate-all.c | 1 -
bsd-user/main.c | 12 ++++++++++++
cpu-target.c | 13 -------------
system/vl.c | 1 -
7 files changed, 19 insertions(+), 24 deletions(-)
diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
index dc842fffa7..c05c512767 100644
--- a/bsd-user/qemu.h
+++ b/bsd-user/qemu.h
@@ -39,6 +39,13 @@ extern char **environ;
#include "qemu/clang-tsa.h"
#include "qemu-os.h"
+/*
+ * TODO: Remove these and rely only on qemu_real_host_page_size().
+ */
+extern uintptr_t qemu_host_page_size;
+extern intptr_t qemu_host_page_mask;
+#define HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_host_page_size)
+
/*
* This struct is used to hold certain information about the image. Basically,
* it replicates in user space what would be certain task_struct fields in the
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 9ead1be100..6346df17ce 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -20,13 +20,6 @@
void cpu_exec_init_all(void);
void cpu_exec_step_atomic(CPUState *cpu);
-/* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
- * when intptr_t is 32-bit and we are aligning a long long.
- */
-extern uintptr_t qemu_host_page_size;
-extern intptr_t qemu_host_page_mask;
-
-#define HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_host_page_size)
#define REAL_HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_real_host_page_size())
/* The CPU list lock nests outside page_(un)lock or mmap_(un)lock */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 4385ce54c9..5c2d55f6d2 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -1179,8 +1179,6 @@ bool target_words_bigendian(void);
const char *target_name(void);
-void page_size_init(void);
-
#ifdef NEED_CPU_H
#ifndef CONFIG_USER_ONLY
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 1c695efe02..c1f57e894a 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -256,7 +256,6 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void)
{
- page_size_init();
page_table_config_init();
}
diff --git a/bsd-user/main.c b/bsd-user/main.c
index 521b58b880..4d6ce59af4 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -49,6 +49,13 @@
#include "host-os.h"
#include "target_arch_cpu.h"
+
+/*
+ * TODO: Remove these and rely only on qemu_real_host_page_size().
+ */
+uintptr_t qemu_host_page_size;
+intptr_t qemu_host_page_mask;
+
static bool opt_one_insn_per_tb;
uintptr_t guest_base;
bool have_guest_base;
@@ -307,6 +314,9 @@ int main(int argc, char **argv)
(void) envlist_setenv(envlist, *wrk);
}
+ qemu_host_page_size = getpagesize();
+ qemu_host_page_size = MAX(qemu_host_page_size, TARGET_PAGE_SIZE);
+
cpu_model = NULL;
qemu_add_opts(&qemu_trace_opts);
@@ -403,6 +413,8 @@ int main(int argc, char **argv)
}
}
+ qemu_host_page_mask = -qemu_host_page_size;
+
/* init debug */
{
int mask = 0;
diff --git a/cpu-target.c b/cpu-target.c
index 86444cc2c6..8763da51ee 100644
--- a/cpu-target.c
+++ b/cpu-target.c
@@ -474,16 +474,3 @@ const char *target_name(void)
{
return TARGET_NAME;
}
-
-void page_size_init(void)
-{
- /* NOTE: we can always suppose that qemu_host_page_size >=
- TARGET_PAGE_SIZE */
- if (qemu_host_page_size == 0) {
- qemu_host_page_size = qemu_real_host_page_size();
- }
- if (qemu_host_page_size < TARGET_PAGE_SIZE) {
- qemu_host_page_size = TARGET_PAGE_SIZE;
- }
- qemu_host_page_mask = -(intptr_t)qemu_host_page_size;
-}
diff --git a/system/vl.c b/system/vl.c
index b8469d9965..7913cc28aa 100644
--- a/system/vl.c
+++ b/system/vl.c
@@ -2118,7 +2118,6 @@ static void qemu_create_machine(QDict *qdict)
}
cpu_exec_init_all();
- page_size_init();
if (machine_class->hw_version) {
qemu_set_hw_version(machine_class->hw_version);
--
2.34.1
* [PULL 33/39] accel/tcg: Disconnect TargetPageDataNode from page size
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (31 preceding siblings ...)
2024-02-22 20:43 ` [PULL 32/39] cpu: Remove page_size_init Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 34/39] linux-user: Allow TARGET_PAGE_BITS_VARY Richard Henderson
` (6 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Dynamically size the node for the runtime target page size.
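In other words, the per-page data moves from a fixed two-dimensional array to
a flexible array member sized at allocation time, since the per-page data size
now depends on the runtime target page size. A rough sketch of the new
addressing (names are taken from the patch; the arithmetic is the point):

    /* Allocation: node header plus TPD_PAGES runtime-sized slots. */
    TargetPageDataNode *t = g_malloc0(sizeof(TargetPageDataNode)
                                      + TPD_PAGES * TARGET_PAGE_DATA_SIZE);

    /* Data for the page at index p_ofs within the region. */
    void *data = t->data + p_ofs * TARGET_PAGE_DATA_SIZE;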
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-29-richard.henderson@linaro.org>
---
accel/tcg/user-exec.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 69b7429e31..3cac3a78c4 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -864,7 +864,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
typedef struct TargetPageDataNode {
struct rcu_head rcu;
IntervalTreeNode itree;
- char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+ char data[] __attribute__((aligned));
} TargetPageDataNode;
static IntervalTreeRoot targetdata_root;
@@ -902,7 +902,8 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n_last = MIN(last, n->last);
p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
- memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
+ memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
+ p_len * TARGET_PAGE_DATA_SIZE);
}
}
@@ -910,7 +911,7 @@ void *page_get_target_data(target_ulong address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
- target_ulong page, region;
+ target_ulong page, region, p_ofs;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -926,7 +927,8 @@ void *page_get_target_data(target_ulong address)
mmap_lock();
n = interval_tree_iter_first(&targetdata_root, page, page);
if (!n) {
- t = g_new0(TargetPageDataNode, 1);
+ t = g_malloc0(sizeof(TargetPageDataNode)
+ + TPD_PAGES * TARGET_PAGE_DATA_SIZE);
n = &t->itree;
n->start = region;
n->last = region | ~TBD_MASK;
@@ -936,7 +938,8 @@ void *page_get_target_data(target_ulong address)
}
t = container_of(n, TargetPageDataNode, itree);
- return t->data[(page - region) >> TARGET_PAGE_BITS];
+ p_ofs = (page - region) >> TARGET_PAGE_BITS;
+ return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
}
#else
void page_reset_target_data(target_ulong start, target_ulong last) { }
--
2.34.1
* [PULL 34/39] linux-user: Allow TARGET_PAGE_BITS_VARY
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (32 preceding siblings ...)
2024-02-22 20:43 ` [PULL 33/39] accel/tcg: Disconnect TargetPageDataNode from page size Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 35/39] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only Richard Henderson
` (5 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Helge Deller
If set, match the host and guest page sizes.
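A worked example of what the new code in main() does (the 16k page size below
is only an assumed host configuration):

    int host_page_size = qemu_real_host_page_size();        /* e.g. 16384 */
    set_preferred_target_page_bits(ctz32(host_page_size));  /* ctz32(16384) == 14 */
    finalize_target_page_bits();
    /*
     * With TARGET_PAGE_BITS_VARY the guest now runs with 16k pages to
     * match the host; without it, these calls do nothing and the
     * compile-time TARGET_PAGE_BITS stays in effect.
     */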
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-30-richard.henderson@linaro.org>
---
linux-user/main.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index bad03f06d3..12bb839982 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -55,6 +55,7 @@
#include "loader.h"
#include "user-mmap.h"
#include "tcg/perf.h"
+#include "exec/page-vary.h"
#ifdef CONFIG_SEMIHOSTING
#include "semihosting/semihost.h"
@@ -680,6 +681,7 @@ int main(int argc, char **argv, char **envp)
int i;
int ret;
int execfd;
+ int host_page_size;
unsigned long max_reserved_va;
bool preserve_argv0;
@@ -791,6 +793,16 @@ int main(int argc, char **argv, char **envp)
opt_one_insn_per_tb, &error_abort);
ac->init_machine(NULL);
}
+
+ /*
+ * Finalize page size before creating CPUs.
+ * This will do nothing if !TARGET_PAGE_BITS_VARY.
+ * The most efficient setting is to match the host.
+ */
+ host_page_size = qemu_real_host_page_size();
+ set_preferred_target_page_bits(ctz32(host_page_size));
+ finalize_target_page_bits();
+
cpu = cpu_create(cpu_type);
env = cpu_env(cpu);
cpu_reset(cpu);
@@ -804,8 +816,6 @@ int main(int argc, char **argv, char **envp)
*/
max_reserved_va = MAX_RESERVED_VA(cpu);
if (reserved_va != 0) {
- int host_page_size = qemu_real_host_page_size();
-
if ((reserved_va + 1) % host_page_size) {
char *s = size_to_str(host_page_size);
fprintf(stderr, "Reserved virtual address not aligned mod %s\n", s);
@@ -904,7 +914,7 @@ int main(int argc, char **argv, char **envp)
* If we're in a chroot with no /proc, fall back to 1 page.
*/
if (mmap_min_addr == 0) {
- mmap_min_addr = qemu_real_host_page_size();
+ mmap_min_addr = host_page_size;
qemu_log_mask(CPU_LOG_PAGE,
"host mmap_min_addr=0x%lx (fallback)\n",
mmap_min_addr);
--
2.34.1
* [PULL 35/39] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (33 preceding siblings ...)
2024-02-22 20:43 ` [PULL 34/39] linux-user: Allow TARGET_PAGE_BITS_VARY Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 36/39] linux-user: Bound mmap_min_addr by host page size Richard Henderson
` (4 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Helge Deller
Since aarch64 binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-31-richard.henderson@linaro.org>
---
target/arm/cpu-param.h | 6 ++++-
target/arm/cpu.c | 51 ++++++++++++++++++++++++------------------
2 files changed, 34 insertions(+), 23 deletions(-)
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index f9b462a98f..da3243ab21 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -19,9 +19,13 @@
#endif
#ifdef CONFIG_USER_ONLY
-#define TARGET_PAGE_BITS 12
# ifdef TARGET_AARCH64
# define TARGET_TAGGED_ADDRESSES
+/* Allow user-only to vary page size from 4k */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+# else
+# define TARGET_PAGE_BITS 12
# endif
#else
/*
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 5fa86bc8d5..2325d4007f 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1809,7 +1809,6 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
ARMCPU *cpu = ARM_CPU(dev);
ARMCPUClass *acc = ARM_CPU_GET_CLASS(dev);
CPUARMState *env = &cpu->env;
- int pagebits;
Error *local_err = NULL;
#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
@@ -2100,28 +2099,36 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
!cpu_isar_feature(aa32_vfp_simd, cpu) ||
!arm_feature(env, ARM_FEATURE_XSCALE));
- if (arm_feature(env, ARM_FEATURE_V7) &&
- !arm_feature(env, ARM_FEATURE_M) &&
- !arm_feature(env, ARM_FEATURE_PMSA)) {
- /* v7VMSA drops support for the old ARMv5 tiny pages, so we
- * can use 4K pages.
- */
- pagebits = 12;
- } else {
- /* For CPUs which might have tiny 1K pages, or which have an
- * MPU and might have small region sizes, stick with 1K pages.
- */
- pagebits = 10;
- }
- if (!set_preferred_target_page_bits(pagebits)) {
- /* This can only ever happen for hotplugging a CPU, or if
- * the board code incorrectly creates a CPU which it has
- * promised via minimum_page_size that it will not.
- */
- error_setg(errp, "This CPU requires a smaller page size than the "
- "system is using");
- return;
+#ifndef CONFIG_USER_ONLY
+ {
+ int pagebits;
+ if (arm_feature(env, ARM_FEATURE_V7) &&
+ !arm_feature(env, ARM_FEATURE_M) &&
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
+ /*
+ * v7VMSA drops support for the old ARMv5 tiny pages,
+ * so we can use 4K pages.
+ */
+ pagebits = 12;
+ } else {
+ /*
+ * For CPUs which might have tiny 1K pages, or which have an
+ * MPU and might have small region sizes, stick with 1K pages.
+ */
+ pagebits = 10;
+ }
+ if (!set_preferred_target_page_bits(pagebits)) {
+ /*
+ * This can only ever happen for hotplugging a CPU, or if
+ * the board code incorrectly creates a CPU which it has
+ * promised via minimum_page_size that it will not.
+ */
+ error_setg(errp, "This CPU requires a smaller page size "
+ "than the system is using");
+ return;
+ }
}
+#endif
/* This cpu-id-to-MPIDR affinity is used only for TCG; KVM will override it.
* We don't support setting cluster ID ([16..23]) (known as Aff2
--
2.34.1
* [PULL 36/39] linux-user: Bound mmap_min_addr by host page size
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (34 preceding siblings ...)
2024-02-22 20:43 ` [PULL 35/39] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 37/39] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only Richard Henderson
` (3 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Bizarrely, it is possible to set /proc/sys/vm/mmap_min_addr
to a value below the host page size. Fix that.
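A quick numeric illustration of the clamp (the 64k figure is only an example
host configuration, and mmap_min_addr is the existing global in mmap.c):

    unsigned long tmp = 4096;              /* read from /proc/sys/vm/mmap_min_addr */
    unsigned long host_page_size = 65536;  /* e.g. a 64k-page aarch64 host */

    mmap_min_addr = MAX(tmp, host_page_size);   /* 65536: never below one host page */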
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-32-richard.henderson@linaro.org>
---
linux-user/main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index 12bb839982..551acf1661 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -901,7 +901,7 @@ int main(int argc, char **argv, char **envp)
if ((fp = fopen("/proc/sys/vm/mmap_min_addr", "r")) != NULL) {
unsigned long tmp;
if (fscanf(fp, "%lu", &tmp) == 1 && tmp != 0) {
- mmap_min_addr = tmp;
+ mmap_min_addr = MAX(tmp, host_page_size);
qemu_log_mask(CPU_LOG_PAGE, "host mmap_min_addr=0x%lx\n",
mmap_min_addr);
}
--
2.34.1
* [PULL 37/39] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (35 preceding siblings ...)
2024-02-22 20:43 ` [PULL 36/39] linux-user: Bound mmap_min_addr by host page size Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 38/39] target/alpha: " Richard Henderson
` (2 subsequent siblings)
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Ilya Leoshkevich, Helge Deller
Since ppc binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-33-richard.henderson@linaro.org>
---
target/ppc/cpu-param.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/target/ppc/cpu-param.h b/target/ppc/cpu-param.h
index 0a0416e0a8..b7ad52de03 100644
--- a/target/ppc/cpu-param.h
+++ b/target/ppc/cpu-param.h
@@ -31,6 +31,13 @@
# define TARGET_PHYS_ADDR_SPACE_BITS 36
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
-#define TARGET_PAGE_BITS 12
+
+#ifdef CONFIG_USER_ONLY
+/* Allow user-only to vary page size from 4k */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+#else
+# define TARGET_PAGE_BITS 12
+#endif
#endif
--
2.34.1
* [PULL 38/39] target/alpha: Enable TARGET_PAGE_BITS_VARY for user-only
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (36 preceding siblings ...)
2024-02-22 20:43 ` [PULL 37/39] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-22 20:43 ` [PULL 39/39] linux-user: Remove pgb_dynamic alignment assertion Richard Henderson
2024-02-23 13:45 ` [PULL 00/39] tcg and linux-user patch queue Peter Maydell
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Pierrick Bouvier, Ilya Leoshkevich, Helge Deller
Since alpha binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-34-richard.henderson@linaro.org>
---
target/alpha/cpu-param.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/target/alpha/cpu-param.h b/target/alpha/cpu-param.h
index 68c46f7998..c969cb016b 100644
--- a/target/alpha/cpu-param.h
+++ b/target/alpha/cpu-param.h
@@ -9,10 +9,22 @@
#define ALPHA_CPU_PARAM_H
#define TARGET_LONG_BITS 64
-#define TARGET_PAGE_BITS 13
/* ??? EV4 has 34 phys addr bits, EV5 has 40, EV6 has 44. */
#define TARGET_PHYS_ADDR_SPACE_BITS 44
-#define TARGET_VIRT_ADDR_SPACE_BITS (30 + TARGET_PAGE_BITS)
+
+#ifdef CONFIG_USER_ONLY
+/*
+ * Allow user-only to vary page size. Real hardware allows only 8k and 64k,
+ * but since any variance means guests cannot assume a fixed value, allow
+ * a 4k minimum to match x86 host, which can minimize emulation issues.
+ */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+# define TARGET_VIRT_ADDR_SPACE_BITS 63
+#else
+# define TARGET_PAGE_BITS 13
+# define TARGET_VIRT_ADDR_SPACE_BITS (30 + TARGET_PAGE_BITS)
+#endif
#endif
--
2.34.1
* [PULL 39/39] linux-user: Remove pgb_dynamic alignment assertion
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (37 preceding siblings ...)
2024-02-22 20:43 ` [PULL 38/39] target/alpha: " Richard Henderson
@ 2024-02-22 20:43 ` Richard Henderson
2024-02-23 13:45 ` [PULL 00/39] tcg and linux-user patch queue Peter Maydell
39 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-22 20:43 UTC (permalink / raw)
To: qemu-devel; +Cc: Alexey Sheplyakov, Philippe Mathieu-Daudé
The assertion was never correct, because the alignment is a composite
of the image alignment and SHMLBA. Even if the image alignment didn't
match the image address, an assertion would not be correct -- more
appropriate would be an error message about an ill-formed image. But
the image cannot be held to SHMLBA under any circumstances.
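A hypothetical illustration of why the assertion could fire (the MAX
expression and the name image_align below stand in for however the composite
is actually formed; they are not quoted from elfload.c):

    /*
     * The placement alignment combines the image's own alignment with the
     * host SHMLBA, e.g. a 4k-aligned image on a host where SHMLBA is 16k.
     */
    uintptr_t align = MAX(image_align, SHMLBA);   /* MAX(4096, 16384) == 16384 */

    /*
     * guest_loaddr is only guaranteed to be image_align-aligned, so
     * QEMU_IS_ALIGNED(guest_loaddr, align) can legitimately be false,
     * which is what tripped the removed assert().
     */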
Fixes: ee94743034b ("linux-user: completely re-write init_guest_space")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2157
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: Alexey Sheplyakov <asheplyakov@yandex.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
linux-user/elfload.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index f3f1ab4f69..d92d66ca1e 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -3022,8 +3022,6 @@ static void pgb_dynamic(const char *image_name, uintptr_t guest_loaddr,
uintptr_t brk, ret;
PGBAddrs ga;
- assert(QEMU_IS_ALIGNED(guest_loaddr, align));
-
/* Try the identity map first. */
if (pgb_addr_set(&ga, guest_loaddr, guest_hiaddr, true)) {
brk = (uintptr_t)sbrk(0);
--
2.34.1
* Re: [PULL 00/39] tcg and linux-user patch queue
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
` (38 preceding siblings ...)
2024-02-22 20:43 ` [PULL 39/39] linux-user: Remove pgb_dynamic alignment assertion Richard Henderson
@ 2024-02-23 13:45 ` Peter Maydell
2024-02-23 22:26 ` Richard Henderson
39 siblings, 1 reply; 42+ messages in thread
From: Peter Maydell @ 2024-02-23 13:45 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Thu, 22 Feb 2024 at 20:49, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The following changes since commit 6630bc04bccadcf868165ad6bca5a964bb69b067:
>
> Merge tag 'pull-trivial-patches' of https://gitlab.com/mjt0k/qemu into staging (2024-02-22 12:42:52 +0000)
>
> are available in the Git repository at:
>
> https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20240222
>
> for you to fetch changes up to a06efc2615a1283e139e35ae8a8875925766268f:
>
> linux-user: Remove pgb_dynamic alignment assertion (2024-02-22 09:04:05 -1000)
>
> ----------------------------------------------------------------
> tcg/aarch64: Apple does not align __int128_t in even registers
> accel/tcg: Fixes for page tables in mmio memory
> linux-user: Remove qemu_host_page_{size,mask}, HOST_PAGE_ALIGN
> migration: Remove qemu_host_page_size
> hw/tpm: Remove qemu_host_page_size
> softmmu: Remove qemu_host_page_{size,mask}, HOST_PAGE_ALIGN
> linux-user: Split and reorganize target_mmap.
> *-user: Deprecate and disable -p pagesize
> linux-user: Allow TARGET_PAGE_BITS_VARY
> target/alpha: Enable TARGET_PAGE_BITS_VARY for user-only
> target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only
> target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only
> linux-user: Remove pgb_dynamic alignment assertion
>
> ----------------------------------------------------------------
bsd-user fails to compile:
https://gitlab.com/qemu-project/qemu/-/jobs/6241616724
../bsd-user/main.c:379:30: error: use of undeclared identifier 'arg';
did you mean 'argv'?
if (qemu_strtoui(arg, NULL, 10, &size) || size != want) {
^~~
thanks
-- PMM
* Re: [PULL 00/39] tcg and linux-user patch queue
2024-02-23 13:45 ` [PULL 00/39] tcg and linux-user patch queue Peter Maydell
@ 2024-02-23 22:26 ` Richard Henderson
0 siblings, 0 replies; 42+ messages in thread
From: Richard Henderson @ 2024-02-23 22:26 UTC (permalink / raw)
To: Peter Maydell, Alex Bennée; +Cc: qemu-devel
On 2/23/24 03:45, Peter Maydell wrote:
> bsd-user fails to compile:
> https://gitlab.com/qemu-project/qemu/-/jobs/6241616724
>
> ../bsd-user/main.c:379:30: error: use of undeclared identifier 'arg';
> did you mean 'argv'?
> if (qemu_strtoui(arg, NULL, 10, &size) || size != want) {
> ^~~
Grr. I think it is An Error that make vm-build-freebsd does not test this.
Alex?
r~
Thread overview: 42+ messages
2024-02-22 20:42 [PULL 00/39] tcg and linux-user patch queue Richard Henderson
2024-02-22 20:42 ` [PULL 01/39] tcg/aarch64: Apple does not align __int128_t in even registers Richard Henderson
2024-02-22 20:42 ` [PULL 02/39] accel/tcg: Set can_do_io at at start of lookup_tb_ptr helper Richard Henderson
2024-02-22 20:42 ` [PULL 03/39] tcg: Avoid double lock if page tables happen to be in mmio memory Richard Henderson
2024-02-22 20:42 ` [PULL 04/39] accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect Richard Henderson
2024-02-22 20:42 ` [PULL 05/39] linux-user: Adjust SVr4 NULL page mapping Richard Henderson
2024-02-22 20:42 ` [PULL 06/39] linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base Richard Henderson
2024-02-22 20:42 ` [PULL 07/39] linux-user: Remove qemu_host_page_size from create_elf_tables Richard Henderson
2024-02-22 20:42 ` [PULL 08/39] linux-user/hppa: Simplify init_guest_commpage Richard Henderson
2024-02-22 20:42 ` [PULL 09/39] linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage Richard Henderson
2024-02-22 20:42 ` [PULL 10/39] linux-user/arm: " Richard Henderson
2024-02-22 20:42 ` [PULL 11/39] linux-user: Remove qemu_host_page_{size, mask} from mmap.c Richard Henderson
2024-02-22 20:42 ` [PULL 12/39] linux-user: Remove REAL_HOST_PAGE_ALIGN " Richard Henderson
2024-02-22 20:42 ` [PULL 13/39] linux-user: Remove HOST_PAGE_ALIGN " Richard Henderson
2024-02-22 20:42 ` [PULL 14/39] migration: Remove qemu_host_page_size Richard Henderson
2024-02-22 20:42 ` [PULL 15/39] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init Richard Henderson
2024-02-22 20:43 ` [PULL 16/39] softmmu/physmem: Remove qemu_host_page_size Richard Henderson
2024-02-22 20:43 ` [PULL 17/39] softmmu/physmem: Remove HOST_PAGE_ALIGN Richard Henderson
2024-02-22 20:43 ` [PULL 18/39] linux-user: Remove qemu_host_page_size from main Richard Henderson
2024-02-22 20:43 ` [PULL 19/39] linux-user: Split out target_mmap__locked Richard Henderson
2024-02-22 20:43 ` [PULL 20/39] linux-user: Move some mmap checks outside the lock Richard Henderson
2024-02-22 20:43 ` [PULL 21/39] linux-user: Fix sub-host-page mmap Richard Henderson
2024-02-22 20:43 ` [PULL 22/39] linux-user: Split out mmap_end Richard Henderson
2024-02-22 20:43 ` [PULL 23/39] linux-user: Do early mmap placement only for reserved_va Richard Henderson
2024-02-22 20:43 ` [PULL 24/39] linux-user: Split out do_munmap Richard Henderson
2024-02-22 20:43 ` [PULL 25/39] linux-user: Use do_munmap for target_mmap failure Richard Henderson
2024-02-22 20:43 ` [PULL 26/39] linux-user: Split out mmap_h_eq_g Richard Henderson
2024-02-22 20:43 ` [PULL 27/39] linux-user: Split out mmap_h_lt_g Richard Henderson
2024-02-22 20:43 ` [PULL 28/39] linux-user: Split out mmap_h_gt_g Richard Henderson
2024-02-22 20:43 ` [PULL 29/39] tests/tcg: Remove run-test-mmap-* Richard Henderson
2024-02-22 20:43 ` [PULL 30/39] tests/tcg: Extend file in linux-madvise.c Richard Henderson
2024-02-22 20:43 ` [PULL 31/39] *-user: Deprecate and disable -p pagesize Richard Henderson
2024-02-22 20:43 ` [PULL 32/39] cpu: Remove page_size_init Richard Henderson
2024-02-22 20:43 ` [PULL 33/39] accel/tcg: Disconnect TargetPageDataNode from page size Richard Henderson
2024-02-22 20:43 ` [PULL 34/39] linux-user: Allow TARGET_PAGE_BITS_VARY Richard Henderson
2024-02-22 20:43 ` [PULL 35/39] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only Richard Henderson
2024-02-22 20:43 ` [PULL 36/39] linux-user: Bound mmap_min_addr by host page size Richard Henderson
2024-02-22 20:43 ` [PULL 37/39] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only Richard Henderson
2024-02-22 20:43 ` [PULL 38/39] target/alpha: " Richard Henderson
2024-02-22 20:43 ` [PULL 39/39] linux-user: Remove pgb_dynamic alignment assertion Richard Henderson
2024-02-23 13:45 ` [PULL 00/39] tcg and linux-user patch queue Peter Maydell
2024-02-23 22:26 ` Richard Henderson