* [PATCH 01/33] accel/tcg: Remove qemu_host_page_size from page_protect/page_unprotect
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:11 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size instead. Except for the final mprotect
within page_protect, we already handled the case of the host page size
being smaller than the target page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/user-exec.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index ab48cb41e4..4c1697500a 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -650,16 +650,17 @@ void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
target_ulong start, last;
+ int host_page_size = qemu_real_host_page_size();
int prot;
assert_memory_lock();
- if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ if (host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
last = start + TARGET_PAGE_SIZE - 1;
} else {
- start = address & qemu_host_page_mask;
- last = start + qemu_host_page_size - 1;
+ start = address & -host_page_size;
+ last = start + host_page_size - 1;
}
p = pageflags_find(start, last);
@@ -670,7 +671,7 @@ void page_protect(tb_page_addr_t address)
if (unlikely(p->itree.last < last)) {
/* More than one protection region covers the one host page. */
- assert(TARGET_PAGE_SIZE < qemu_host_page_size);
+ assert(TARGET_PAGE_SIZE < host_page_size);
while ((p = pageflags_next(p, start, last)) != NULL) {
prot |= p->flags;
}
@@ -678,7 +679,7 @@ void page_protect(tb_page_addr_t address)
if (prot & PAGE_WRITE) {
pageflags_set_clear(start, last, 0, PAGE_WRITE);
- mprotect(g2h_untagged(start), qemu_host_page_size,
+ mprotect(g2h_untagged(start), last - start + 1,
prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE);
}
}
@@ -724,18 +725,19 @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif
} else {
+ int host_page_size = qemu_real_host_page_size();
target_ulong start, len, i;
int prot;
- if (qemu_host_page_size <= TARGET_PAGE_SIZE) {
+ if (host_page_size <= TARGET_PAGE_SIZE) {
start = address & TARGET_PAGE_MASK;
len = TARGET_PAGE_SIZE;
prot = p->flags | PAGE_WRITE;
pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0);
current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc);
} else {
- start = address & qemu_host_page_mask;
- len = qemu_host_page_size;
+ start = address & -host_page_size;
+ len = host_page_size;
prot = 0;
for (i = 0; i < len; i += TARGET_PAGE_SIZE) {
--
2.34.1
* [PATCH 02/33] linux-user: Adjust SVr4 NULL page mapping
From: Richard Henderson @ 2023-08-18 17:11 UTC (permalink / raw)
To: qemu-devel
Use TARGET_PAGE_SIZE and MAP_FIXED_NOREPLACE.
We really should be attending to this earlier, during
probe_guest_base, and should also improve the detection and
emulation of the various Linux personalities.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index ccfbf82836..9865f5e825 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -3749,8 +3749,9 @@ int load_elf_binary(struct linux_binprm *bprm, struct image_info *info)
and some applications "depend" upon this behavior. Since
we do not have the power to recompile these, we emulate
the SVr4 behavior. Sigh. */
- target_mmap(0, qemu_host_page_size, PROT_READ | PROT_EXEC,
- MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ target_mmap(0, TARGET_PAGE_SIZE, PROT_READ | PROT_EXEC,
+ MAP_FIXED_NOREPLACE | MAP_PRIVATE | MAP_ANONYMOUS,
+ -1, 0);
}
#ifdef TARGET_MIPS
info->interp_fp_abi = interp_info.fp_abi;
--
2.34.1
* [PATCH 03/33] linux-user: Remove qemu_host_page_{size, mask} in probe_guest_base
From: Richard Henderson @ 2023-08-18 17:11 UTC (permalink / raw)
To: qemu-devel
The host SHMLBA is by definition a multiple of the host page size.
Thus the remaining component of qemu_host_page_size is the
target page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 9865f5e825..3648d7048d 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2731,7 +2731,7 @@ static bool pgb_addr_set(PGBAddrs *ga, abi_ulong guest_loaddr,
/* Add any HI_COMMPAGE not covered by reserved_va. */
if (reserved_va < HI_COMMPAGE) {
- ga->bounds[n][0] = HI_COMMPAGE & qemu_host_page_mask;
+ ga->bounds[n][0] = HI_COMMPAGE & -qemu_real_host_page_size();
ga->bounds[n][1] = HI_COMMPAGE + TARGET_PAGE_SIZE - 1;
n++;
}
@@ -2913,7 +2913,7 @@ void probe_guest_base(const char *image_name, abi_ulong guest_loaddr,
abi_ulong guest_hiaddr)
{
/* In order to use host shmat, we must be able to honor SHMLBA. */
- uintptr_t align = MAX(SHMLBA, qemu_host_page_size);
+ uintptr_t align = MAX(SHMLBA, TARGET_PAGE_SIZE);
/* Sanity check the guest binary. */
if (reserved_va) {
--
2.34.1
* [PATCH 04/33] linux-user: Remove qemu_host_page_size from create_elf_tables
From: Richard Henderson @ 2023-08-18 17:11 UTC (permalink / raw)
To: qemu-devel
AT_PAGESZ is supposed to advertise the guest page size.
The random adjustment made here using qemu_host_page_size
does not match anything else within linux-user.
The idea here is good, but should be done more systematically
via an adjustment to TARGET_PAGE_SIZE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 3648d7048d..b6af8f88aa 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2517,13 +2517,7 @@ static abi_ulong create_elf_tables(abi_ulong p, int argc, int envc,
NEW_AUX_ENT(AT_PHDR, (abi_ulong)(info->load_addr + exec->e_phoff));
NEW_AUX_ENT(AT_PHENT, (abi_ulong)(sizeof (struct elf_phdr)));
NEW_AUX_ENT(AT_PHNUM, (abi_ulong)(exec->e_phnum));
- if ((info->alignment & ~qemu_host_page_mask) != 0) {
- /* Target doesn't support host page size alignment */
- NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(TARGET_PAGE_SIZE));
- } else {
- NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(MAX(TARGET_PAGE_SIZE,
- qemu_host_page_size)));
- }
+ NEW_AUX_ENT(AT_PAGESZ, (abi_ulong)(TARGET_PAGE_SIZE));
NEW_AUX_ENT(AT_BASE, (abi_ulong)(interp_info ? interp_info->load_addr : 0));
NEW_AUX_ENT(AT_FLAGS, (abi_ulong)0);
NEW_AUX_ENT(AT_ENTRY, info->entry);
--
2.34.1
* [PATCH 05/33] linux-user/hppa: Simplify init_guest_commpage
From: Richard Henderson @ 2023-08-18 17:11 UTC (permalink / raw)
To: qemu-devel
If reserved_va, then we have already reserved the entire
guest virtual address space; there is no need to remap the page.
If !reserved_va, then use MAP_FIXED_NOREPLACE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index b6af8f88aa..1da77f4f71 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1831,16 +1831,21 @@ static inline void init_thread(struct target_pt_regs *regs,
static bool init_guest_commpage(void)
{
- void *want = g2h_untagged(LO_COMMPAGE);
- void *addr = mmap(want, qemu_host_page_size, PROT_NONE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ /* If reserved_va, then we have already mapped 0 page on the host. */
+ if (!reserved_va) {
+ int host_page_size = qemu_real_host_page_size();
+ void *want, *addr;
- if (addr == MAP_FAILED) {
- perror("Allocating guest commpage");
- exit(EXIT_FAILURE);
- }
- if (addr != want) {
- return false;
+ want = g2h_untagged(LO_COMMPAGE);
+ addr = mmap(want, host_page_size, PROT_NONE,
+ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED_NOREPLACE, -1, 0);
+ if (addr == MAP_FAILED) {
+ perror("Allocating guest commpage");
+ exit(EXIT_FAILURE);
+ }
+ if (addr != want) {
+ return false;
+ }
}
/*
--
2.34.1
* [PATCH 06/33] linux-user/nios2: Remove qemu_host_page_size from init_guest_commpage
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size.
If !reserved_va, use MAP_FIXED_NOREPLACE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 1da77f4f71..b3b9232955 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1375,10 +1375,14 @@ static bool init_guest_commpage(void)
0x3a, 0x68, 0x3b, 0x00, /* trap 0 */
};
- void *want = g2h_untagged(LO_COMMPAGE & -qemu_host_page_size);
- void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ int host_page_size = qemu_real_host_page_size();
+ void *want, *addr;
+ want = g2h_untagged(LO_COMMPAGE & -host_page_size);
+ addr = mmap(want, host_page_size, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE |
+ (reserved_va ? MAP_FIXED : MAP_FIXED_NOREPLACE),
+ -1, 0);
if (addr == MAP_FAILED) {
perror("Allocating guest commpage");
exit(EXIT_FAILURE);
@@ -1387,9 +1391,9 @@ static bool init_guest_commpage(void)
return false;
}
- memcpy(addr, kuser_page, sizeof(kuser_page));
+ memcpy(g2h_untagged(LO_COMMPAGE), kuser_page, sizeof(kuser_page));
- if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
+ if (mprotect(addr, host_page_size, PROT_READ)) {
perror("Protecting guest commpage");
exit(EXIT_FAILURE);
}
--
2.34.1
* [PATCH 07/33] linux-user/arm: Remove qemu_host_page_size from init_guest_commpage
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size.
If the commpage is not within reserved_va, use MAP_FIXED_NOREPLACE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/elfload.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index b3b9232955..7963081cd1 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -450,6 +450,7 @@ enum {
static bool init_guest_commpage(void)
{
ARMCPU *cpu = ARM_CPU(thread_cpu);
+ int host_page_size = qemu_real_host_page_size();
abi_ptr commpage;
void *want;
void *addr;
@@ -462,10 +463,12 @@ static bool init_guest_commpage(void)
return true;
}
- commpage = HI_COMMPAGE & -qemu_host_page_size;
+ commpage = HI_COMMPAGE & -host_page_size;
want = g2h_untagged(commpage);
- addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
- MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ addr = mmap(want, host_page_size, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE |
+ (commpage < reserved_va ? MAP_FIXED : MAP_FIXED_NOREPLACE),
+ -1, 0);
if (addr == MAP_FAILED) {
perror("Allocating guest commpage");
@@ -478,12 +481,12 @@ static bool init_guest_commpage(void)
/* Set kernel helper versions; rest of page is 0. */
__put_user(5, (uint32_t *)g2h_untagged(0xffff0ffcu));
- if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
+ if (mprotect(addr, host_page_size, PROT_READ)) {
perror("Protecting guest commpage");
exit(EXIT_FAILURE);
}
- page_set_flags(commpage, commpage | ~qemu_host_page_mask,
+ page_set_flags(commpage, commpage | (host_page_size - 1),
PAGE_READ | PAGE_EXEC | PAGE_VALID);
return true;
}
--
2.34.1
* [PATCH 08/33] linux-user: Remove qemu_host_page_{size, mask} from mmap.c
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 66 +++++++++++++++++++++++------------------------
1 file changed, 33 insertions(+), 33 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 9aab48d4a3..fc23192a32 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -121,6 +121,7 @@ static int target_to_host_prot(int prot)
/* NOTE: all the constants are the HOST ones, but addresses are target. */
int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong starts[3];
abi_ulong lens[3];
int prots[3];
@@ -145,13 +146,13 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
}
last = start + len - 1;
- host_start = start & qemu_host_page_mask;
+ host_start = start & -host_page_size;
host_last = HOST_PAGE_ALIGN(last) - 1;
nranges = 0;
mmap_lock();
- if (host_last - host_start < qemu_host_page_size) {
+ if (host_last - host_start < host_page_size) {
/* Single host page contains all guest pages: sum the prot. */
prot1 = target_prot;
for (abi_ulong a = host_start; a < start; a += TARGET_PAGE_SIZE) {
@@ -161,7 +162,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
prot1 |= page_get_flags(a + 1);
}
starts[nranges] = host_start;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
} else {
@@ -174,10 +175,10 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
/* If the resulting sum differs, create a new range. */
if (prot1 != target_prot) {
starts[nranges] = host_start;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
- host_start += qemu_host_page_size;
+ host_start += host_page_size;
}
}
@@ -189,9 +190,9 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
}
/* If the resulting sum differs, create a new range. */
if (prot1 != target_prot) {
- host_last -= qemu_host_page_size;
+ host_last -= host_page_size;
starts[nranges] = host_last + 1;
- lens[nranges] = qemu_host_page_size;
+ lens[nranges] = host_page_size;
prots[nranges] = prot1;
nranges++;
}
@@ -226,6 +227,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
int prot, int flags, int fd, off_t offset)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong real_last;
void *host_start;
int prot_old, prot_new;
@@ -242,7 +244,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
return false;
}
- real_last = real_start + qemu_host_page_size - 1;
+ real_last = real_start + host_page_size - 1;
host_start = g2h_untagged(real_start);
/* Get the protection of the target pages outside the mapping. */
@@ -260,12 +262,12 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
* outside of the fragment we need to map. Allocate a new host
* page to cover, discarding whatever else may have been present.
*/
- void *p = mmap(host_start, qemu_host_page_size,
+ void *p = mmap(host_start, host_page_size,
target_to_host_prot(prot),
flags | MAP_ANONYMOUS, -1, 0);
if (p != host_start) {
if (p != MAP_FAILED) {
- munmap(p, qemu_host_page_size);
+ munmap(p, host_page_size);
errno = EEXIST;
}
return false;
@@ -280,7 +282,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
/* Adjust protection to be able to write. */
if (!(host_prot_old & PROT_WRITE)) {
host_prot_old |= PROT_WRITE;
- mprotect(host_start, qemu_host_page_size, host_prot_old);
+ mprotect(host_start, host_page_size, host_prot_old);
}
/* Read or zero the new guest pages. */
@@ -294,7 +296,7 @@ static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
/* Put final protection */
if (host_prot_new != host_prot_old) {
- mprotect(host_start, qemu_host_page_size, host_prot_new);
+ mprotect(host_start, host_page_size, host_prot_new);
}
return true;
}
@@ -329,17 +331,18 @@ static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size,
*/
abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
{
+ int host_page_size = qemu_real_host_page_size();
void *ptr, *prev;
abi_ulong addr;
int wrapped, repeat;
- align = MAX(align, qemu_host_page_size);
+ align = MAX(align, host_page_size);
/* If 'start' == 0, then a default start address is used. */
if (start == 0) {
start = mmap_next_start;
} else {
- start &= qemu_host_page_mask;
+ start &= -host_page_size;
}
start = ROUND_UP(start, align);
@@ -448,6 +451,7 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int flags, int fd, off_t offset)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
int page_flags;
@@ -493,8 +497,8 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
}
}
- real_start = start & qemu_host_page_mask;
- host_offset = offset & qemu_host_page_mask;
+ real_start = start & -host_page_size;
+ host_offset = offset & -host_page_size;
/*
* If the user is asking for the kernel to find a location, do that
@@ -523,8 +527,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* may need to truncate file maps at EOF and add extra anonymous pages
* up to the targets page boundary.
*/
- if ((qemu_real_host_page_size() < qemu_host_page_size) &&
- !(flags & MAP_ANONYMOUS)) {
+ if (host_page_size < TARGET_PAGE_SIZE && !(flags & MAP_ANONYMOUS)) {
struct stat sb;
if (fstat(fd, &sb) == -1) {
@@ -551,11 +554,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
host_len = HOST_PAGE_ALIGN(host_len);
host_prot = target_to_host_prot(target_prot);
- /*
- * Note: we prefer to control the mapping address. It is
- * especially important if qemu_host_page_size >
- * qemu_real_host_page_size.
- */
+ /* Note: we prefer to control the mapping address. */
p = mmap(g2h_untagged(start), host_len, host_prot,
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
if (p == MAP_FAILED) {
@@ -621,7 +620,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* aligned, so we read it
*/
if (!(flags & MAP_ANONYMOUS) &&
- (offset & ~qemu_host_page_mask) != (start & ~qemu_host_page_mask)) {
+ (offset & (host_page_size - 1)) != (start & (host_page_size - 1))) {
/*
* msync() won't work here, so we return an error if write is
* possible while it is a shared mapping
@@ -650,7 +649,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
/* handle the start of the mapping */
if (start > real_start) {
- if (real_last == real_start + qemu_host_page_size - 1) {
+ if (real_last == real_start + host_page_size - 1) {
/* one single host page */
if (!mmap_frag(real_start, start, last,
target_prot, flags, fd, offset)) {
@@ -659,21 +658,21 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
goto the_end1;
}
if (!mmap_frag(real_start, start,
- real_start + qemu_host_page_size - 1,
+ real_start + host_page_size - 1,
target_prot, flags, fd, offset)) {
goto fail;
}
- real_start += qemu_host_page_size;
+ real_start += host_page_size;
}
/* handle the end of the mapping */
if (last < real_last) {
- abi_ulong real_page = real_last - qemu_host_page_size + 1;
+ abi_ulong real_page = real_last - host_page_size + 1;
if (!mmap_frag(real_page, real_page, last,
target_prot, flags, fd,
offset + real_page - start)) {
goto fail;
}
- real_last -= qemu_host_page_size;
+ real_last -= host_page_size;
}
/* map the middle (easier) */
@@ -739,6 +738,7 @@ fail:
static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
{
+ int host_page_size = qemu_real_host_page_size();
abi_ulong real_start;
abi_ulong real_last;
abi_ulong real_len;
@@ -748,7 +748,7 @@ static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
int prot;
last = start + len - 1;
- real_start = start & qemu_host_page_mask;
+ real_start = start & -host_page_size;
real_last = HOST_PAGE_ALIGN(last) - 1;
/*
@@ -757,7 +757,7 @@ static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
* The single page special case is required for the last page,
* lest real_start overflow to zero.
*/
- if (real_last - real_start < qemu_host_page_size) {
+ if (real_last - real_start < host_page_size) {
prot = 0;
for (a = real_start; a < start; a += TARGET_PAGE_SIZE) {
prot |= page_get_flags(a);
@@ -773,14 +773,14 @@ static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
prot |= page_get_flags(a);
}
if (prot != 0) {
- real_start += qemu_host_page_size;
+ real_start += host_page_size;
}
for (prot = 0, a = last; a < real_last; a += TARGET_PAGE_SIZE) {
prot |= page_get_flags(a + 1);
}
if (prot != 0) {
- real_last -= qemu_host_page_size;
+ real_last -= host_page_size;
}
if (real_last < real_start) {
--
2.34.1
* [PATCH 09/33] linux-user: Remove REAL_HOST_PAGE_ALIGN from mmap.c
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
We already have qemu_real_host_page_size() in a local variable.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index fc23192a32..48a6ef0af9 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -541,7 +541,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
* the hosts real pagesize. Additional anonymous maps
* will be created beyond EOF.
*/
- len = REAL_HOST_PAGE_ALIGN(sb.st_size - offset);
+ len = ROUND_UP(sb.st_size - offset, host_page_size);
}
}
--
2.34.1
* [PATCH 10/33] linux-user: Remove HOST_PAGE_ALIGN from mmap.c
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
This removes a hidden use of qemu_host_page_size, using instead
the existing host_page_size local within each function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 48a6ef0af9..35f270ec2e 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -147,7 +147,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
last = start + len - 1;
host_start = start & -host_page_size;
- host_last = HOST_PAGE_ALIGN(last) - 1;
+ host_last = ROUND_UP(last, host_page_size) - 1;
nranges = 0;
mmap_lock();
@@ -345,8 +345,7 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
start &= -host_page_size;
}
start = ROUND_UP(start, align);
-
- size = HOST_PAGE_ALIGN(size);
+ size = ROUND_UP(size, host_page_size);
if (reserved_va) {
return mmap_find_vma_reserved(start, size, align);
@@ -506,7 +505,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
*/
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
host_len = len + offset - host_offset;
- host_len = HOST_PAGE_ALIGN(host_len);
+ host_len = ROUND_UP(host_len, host_page_size);
start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
if (start == (abi_ulong)-1) {
errno = ENOMEM;
@@ -551,7 +550,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
void *p;
host_len = len + offset - host_offset;
- host_len = HOST_PAGE_ALIGN(host_len);
+ host_len = ROUND_UP(host_len, host_page_size);
host_prot = target_to_host_prot(target_prot);
/* Note: we prefer to control the mapping address. */
@@ -581,7 +580,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
goto fail;
}
last = start + len - 1;
- real_last = HOST_PAGE_ALIGN(last) - 1;
+ real_last = ROUND_UP(last, host_page_size) - 1;
/*
* Test if requested memory area fits target address space
@@ -749,7 +748,7 @@ static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
last = start + len - 1;
real_start = start & -host_page_size;
- real_last = HOST_PAGE_ALIGN(last) - 1;
+ real_last = ROUND_UP(last, host_page_size) - 1;
/*
* If guest pages remain on the first or last host pages,
--
2.34.1
* [PATCH 11/33] migration: Remove qemu_host_page_size
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Replace with the maximum of the real host page size
and the target page size. This is an exact replacement.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
migration/ram.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 9040d66e61..1cabf935f2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3033,7 +3033,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
{
RAMState **rsp = opaque;
RAMBlock *block;
- int ret;
+ int ret, max_hg_page_size;
if (compress_threads_save_setup()) {
return -1;
@@ -3048,6 +3048,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
(*rsp)->pss[RAM_CHANNEL_PRECOPY].pss_channel = f;
+ /*
+ * ??? Mirrors the previous use of qemu_host_page_size below,
+ * but is this really what was intended for the migration?
+ */
+ max_hg_page_size = MAX(qemu_real_host_page_size(), TARGET_PAGE_SIZE);
+
WITH_RCU_READ_LOCK_GUARD() {
qemu_put_be64(f, ram_bytes_total_with_ignored()
| RAM_SAVE_FLAG_MEM_SIZE);
@@ -3056,8 +3062,8 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
qemu_put_byte(f, strlen(block->idstr));
qemu_put_buffer(f, (uint8_t *)block->idstr, strlen(block->idstr));
qemu_put_be64(f, block->used_length);
- if (migrate_postcopy_ram() && block->page_size !=
- qemu_host_page_size) {
+ if (migrate_postcopy_ram() &&
+ block->page_size != max_hg_page_size) {
qemu_put_be64(f, block->page_size);
}
if (migrate_ignore_shared()) {
@@ -3881,12 +3887,20 @@ static int ram_load_precopy(QEMUFile *f)
{
MigrationIncomingState *mis = migration_incoming_get_current();
int flags = 0, ret = 0, invalid_flags = 0, len = 0, i = 0;
+ int max_hg_page_size;
+
/* ADVISE is earlier, it shows the source has the postcopy capability on */
bool postcopy_advised = migration_incoming_postcopy_advised();
if (!migrate_compress()) {
invalid_flags |= RAM_SAVE_FLAG_COMPRESS_PAGE;
}
+ /*
+ * ??? Mirrors the previous use of qemu_host_page_size below,
+ * but is this really what was intended for the migration?
+ */
+ max_hg_page_size = MAX(qemu_real_host_page_size(), TARGET_PAGE_SIZE);
+
while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
ram_addr_t addr, total_ram_bytes;
void *host = NULL, *host_bak = NULL;
@@ -3987,7 +4001,7 @@ static int ram_load_precopy(QEMUFile *f)
}
/* For postcopy we need to check hugepage sizes match */
if (postcopy_advised && migrate_postcopy_ram() &&
- block->page_size != qemu_host_page_size) {
+ block->page_size != max_hg_page_size) {
uint64_t remote_page_size = qemu_get_be64(f);
if (remote_page_size != block->page_size) {
error_report("Mismatched RAM page size %s "
--
2.34.1
* [PATCH 12/33] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (10 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 11/33] migration: Remove qemu_host_page_size Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 13/33] softmmu/physmem: Remove qemu_host_page_size Richard Henderson
` (20 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
The size of the allocation need not match the alignment.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
hw/tpm/tpm_ppi.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/hw/tpm/tpm_ppi.c b/hw/tpm/tpm_ppi.c
index 7f74e26ec6..91eeafd53a 100644
--- a/hw/tpm/tpm_ppi.c
+++ b/hw/tpm/tpm_ppi.c
@@ -47,8 +47,7 @@ void tpm_ppi_reset(TPMPPI *tpmppi)
void tpm_ppi_init(TPMPPI *tpmppi, MemoryRegion *m,
hwaddr addr, Object *obj)
{
- tpmppi->buf = qemu_memalign(qemu_real_host_page_size(),
- HOST_PAGE_ALIGN(TPM_PPI_ADDR_SIZE));
+ tpmppi->buf = qemu_memalign(qemu_real_host_page_size(), TPM_PPI_ADDR_SIZE);
memory_region_init_ram_device_ptr(&tpmppi->ram, obj, "tpm-ppi",
TPM_PPI_ADDR_SIZE, tpmppi->buf);
vmstate_register_ram(&tpmppi->ram, DEVICE(obj));
--
2.34.1
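The point of this patch, that the alignment and the size of an aligned allocation are independent parameters, can be sketched with plain POSIX primitives. The helper below is a hypothetical stand-in for `qemu_memalign()`, not QEMU code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical stand-in for qemu_memalign(): the alignment (one host
 * page) and the size (a few hundred bytes, like TPM_PPI_ADDR_SIZE) are
 * independent, so the size need not be rounded up to a whole page as
 * the removed HOST_PAGE_ALIGN() did.
 */
static void *page_aligned_alloc(size_t align, size_t size)
{
    void *p = NULL;

    /* posix_memalign requires align to be a power-of-two multiple of
       sizeof(void *); any page size satisfies that. */
    if (posix_memalign(&p, align, size) != 0) {
        return NULL;
    }
    return p;
}
```

The returned buffer is page-aligned even though only `size` bytes are usable.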
* [PATCH 13/33] softmmu/physmem: Remove qemu_host_page_size
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (11 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 12/33] hw/tpm: Remove HOST_PAGE_ALIGN from tpm_ppi_init Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 14/33] softmmu/physmem: Remove HOST_PAGE_ALIGN Richard Henderson
` (19 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size() instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
softmmu/physmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 3df73542e1..6881b2d8f8 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -3448,7 +3448,7 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
* fallocate works on hugepages and shmem
* shared anonymous memory requires madvise REMOVE
*/
- need_madvise = (rb->page_size == qemu_host_page_size);
+ need_madvise = (rb->page_size == qemu_real_host_page_size());
need_fallocate = rb->fd != -1;
if (need_fallocate) {
/* For a file, this causes the area of the file to be zero'd
--
2.34.1
* [PATCH 14/33] softmmu/physmem: Remove HOST_PAGE_ALIGN
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (12 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 13/33] softmmu/physmem: Remove qemu_host_page_size Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 15/33] linux-user: Remove qemu_host_page_size from main Richard Henderson
` (18 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Align allocation sizes to the maximum of host and target page sizes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
softmmu/physmem.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 6881b2d8f8..9eff0acb2f 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -1663,7 +1663,8 @@ int qemu_ram_resize(RAMBlock *block, ram_addr_t newsize, Error **errp)
assert(block);
- newsize = HOST_PAGE_ALIGN(newsize);
+ newsize = TARGET_PAGE_ALIGN(newsize);
+ newsize = REAL_HOST_PAGE_ALIGN(newsize);
if (block->used_length == newsize) {
/*
@@ -1898,7 +1899,9 @@ RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, MemoryRegion *mr,
return NULL;
}
- size = HOST_PAGE_ALIGN(size);
+ size = TARGET_PAGE_ALIGN(size);
+ size = REAL_HOST_PAGE_ALIGN(size);
+
file_size = get_file_size(fd);
if (file_size > offset && file_size < (offset + size)) {
error_setg(errp, "backing store size 0x%" PRIx64
@@ -1976,13 +1979,17 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
{
RAMBlock *new_block;
Error *local_err = NULL;
+ int align;
assert((ram_flags & ~(RAM_SHARED | RAM_RESIZEABLE | RAM_PREALLOC |
RAM_NORESERVE)) == 0);
assert(!host ^ (ram_flags & RAM_PREALLOC));
- size = HOST_PAGE_ALIGN(size);
- max_size = HOST_PAGE_ALIGN(max_size);
+ align = qemu_real_host_page_size();
+ align = MAX(align, TARGET_PAGE_SIZE);
+ size = ROUND_UP(size, align);
+ max_size = ROUND_UP(max_size, align);
+
new_block = g_malloc0(sizeof(*new_block));
new_block->mr = mr;
new_block->resized = resized;
--
2.34.1
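The new alignment rule in `qemu_ram_alloc_internal()` is simple arithmetic. A sketch with local stand-ins for QEMU's `ROUND_UP` and `MAX` macros (the real macros live in QEMU headers; these are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for QEMU's macros; d must be a power of two. */
#define ROUND_UP(n, d)  (((n) + (d) - 1) & -(uint64_t)(d))
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

/* Replacement for HOST_PAGE_ALIGN(): round a RAM block size up to the
   larger of the host and target page sizes. */
static uint64_t ram_size_align(uint64_t size,
                               uint64_t host_page_size,
                               uint64_t target_page_size)
{
    return ROUND_UP(size, MAX(host_page_size, target_page_size));
}
```

Rounding to the maximum of the two page sizes keeps the result aligned for both the host mmap and the guest's view of the block.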
* [PATCH 15/33] linux-user: Remove qemu_host_page_size from main
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (13 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 14/33] softmmu/physmem: Remove HOST_PAGE_ALIGN Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 16/33] linux-user: Split out target_mmap__locked Richard Henderson
` (17 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use qemu_real_host_page_size() instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/main.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index 96be354897..c1058abc3c 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -783,7 +783,7 @@ int main(int argc, char **argv, char **envp)
}
cpu_type = parse_cpu_option(cpu_model);
- /* init tcg before creating CPUs and to get qemu_host_page_size */
+ /* init tcg before creating CPUs */
{
AccelState *accel = current_accel();
AccelClass *ac = ACCEL_GET_CLASS(accel);
@@ -806,8 +806,10 @@ int main(int argc, char **argv, char **envp)
*/
max_reserved_va = MAX_RESERVED_VA(cpu);
if (reserved_va != 0) {
- if ((reserved_va + 1) % qemu_host_page_size) {
- char *s = size_to_str(qemu_host_page_size);
+ int host_page_size = qemu_real_host_page_size();
+
+ if ((reserved_va + 1) % host_page_size) {
+ char *s = size_to_str(host_page_size);
fprintf(stderr, "Reserved virtual address not aligned mod %s\n", s);
g_free(s);
exit(EXIT_FAILURE);
@@ -904,7 +906,7 @@ int main(int argc, char **argv, char **envp)
* If we're in a chroot with no /proc, fall back to 1 page.
*/
if (mmap_min_addr == 0) {
- mmap_min_addr = qemu_host_page_size;
+ mmap_min_addr = qemu_real_host_page_size();
qemu_log_mask(CPU_LOG_PAGE,
"host mmap_min_addr=0x%lx (fallback)\n",
mmap_min_addr);
--
2.34.1
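The alignment check on `reserved_va` is worth spelling out: `reserved_va` is an inclusive last address, so `reserved_va + 1` is the reservation size, and it is that size which must be a host-page multiple. A hypothetical predicate capturing the check:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* reserved_va names the last byte of the guest address space
   reservation (inclusive), so reserved_va + 1 is the reservation size,
   and that size must be a multiple of the real host page size. */
static bool reserved_va_aligned(uint64_t reserved_va,
                                uint64_t host_page_size)
{
    return (reserved_va + 1) % host_page_size == 0;
}
```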
* [PATCH 16/33] linux-user: Split out target_mmap__locked
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (14 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 15/33] linux-user: Remove qemu_host_page_size from main Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 6:43 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 17/33] linux-user: Move some mmap checks outside the lock Richard Henderson
` (16 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
All "goto fail" paths may then be transformed to "return -1".
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 62 ++++++++++++++++++++++++++---------------------
1 file changed, 35 insertions(+), 27 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 35f270ec2e..448f168df1 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -446,9 +446,9 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
}
-/* NOTE: all the constants are the HOST ones */
-abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
- int flags, int fd, off_t offset)
+static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
+ int target_prot, int flags,
+ int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
@@ -456,30 +456,27 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int page_flags;
off_t host_offset;
- mmap_lock();
- trace_target_mmap(start, len, target_prot, flags, fd, offset);
-
if (!len) {
errno = EINVAL;
- goto fail;
+ return -1;
}
page_flags = validate_prot_to_pageflags(target_prot);
if (!page_flags) {
errno = EINVAL;
- goto fail;
+ return -1;
}
/* Also check for overflows... */
len = TARGET_PAGE_ALIGN(len);
if (!len) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
if (offset & ~TARGET_PAGE_MASK) {
errno = EINVAL;
- goto fail;
+ return -1;
}
/*
@@ -509,7 +506,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
if (start == (abi_ulong)-1) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
}
@@ -530,7 +527,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
struct stat sb;
if (fstat(fd, &sb) == -1) {
- goto fail;
+ return -1;
}
/* Are we trying to create a map beyond EOF?. */
@@ -557,7 +554,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
p = mmap(g2h_untagged(start), host_len, host_prot,
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
if (p == MAP_FAILED) {
- goto fail;
+ return -1;
}
/* update start so that it points to the file position at 'offset' */
host_start = (uintptr_t)p;
@@ -566,7 +563,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
flags | MAP_FIXED, fd, host_offset);
if (p == MAP_FAILED) {
munmap(g2h_untagged(start), host_len);
- goto fail;
+ return -1;
}
host_start += offset - host_offset;
}
@@ -577,7 +574,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
} else {
if (start & ~TARGET_PAGE_MASK) {
errno = EINVAL;
- goto fail;
+ return -1;
}
last = start + len - 1;
real_last = ROUND_UP(last, host_page_size) - 1;
@@ -589,14 +586,14 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
*/
if (last < start || !guest_range_valid_untagged(start, len)) {
errno = ENOMEM;
- goto fail;
+ return -1;
}
if (flags & MAP_FIXED_NOREPLACE) {
/* Validate that the chosen range is empty. */
if (!page_check_range_empty(start, last)) {
errno = EEXIST;
- goto fail;
+ return -1;
}
/*
@@ -627,17 +624,17 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
if ((flags & MAP_TYPE) == MAP_SHARED
&& (target_prot & PROT_WRITE)) {
errno = EINVAL;
- goto fail;
+ return -1;
}
retaddr = target_mmap(start, len, target_prot | PROT_WRITE,
(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))
| MAP_PRIVATE | MAP_ANONYMOUS,
-1, 0);
if (retaddr == -1) {
- goto fail;
+ return -1;
}
if (pread(fd, g2h_untagged(start), len, offset) == -1) {
- goto fail;
+ return -1;
}
if (!(target_prot & PROT_WRITE)) {
ret = target_mprotect(start, len, target_prot);
@@ -652,14 +649,14 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
/* one single host page */
if (!mmap_frag(real_start, start, last,
target_prot, flags, fd, offset)) {
- goto fail;
+ return -1;
}
goto the_end1;
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
target_prot, flags, fd, offset)) {
- goto fail;
+ return -1;
}
real_start += host_page_size;
}
@@ -669,7 +666,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
if (!mmap_frag(real_page, real_page, last,
target_prot, flags, fd,
offset + real_page - start)) {
- goto fail;
+ return -1;
}
real_last -= host_page_size;
}
@@ -695,7 +692,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
munmap(p, len1);
errno = EEXIST;
}
- goto fail;
+ return -1;
}
passthrough_start = real_start;
passthrough_last = real_last;
@@ -728,11 +725,22 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
qemu_log_unlock(f);
}
}
- mmap_unlock();
return start;
-fail:
+}
+
+/* NOTE: all the constants are the HOST ones */
+abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
+ int flags, int fd, off_t offset)
+{
+ abi_long ret;
+
+ trace_target_mmap(start, len, target_prot, flags, fd, offset);
+ mmap_lock();
+
+ ret = target_mmap__locked(start, len, target_prot, flags, fd, offset);
+
mmap_unlock();
- return -1;
+ return ret;
}
static void mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
--
2.34.1
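The shape of this refactoring is a general pattern: a thin wrapper owns the lock around a `__locked` worker, so no early return can leak the lock and the `fail:` cleanup label disappears. A minimal sketch with pthreads and hypothetical names:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t mmap_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Worker: may return -1 directly on any error path
   (was: "goto fail" to reach the unlock). */
static long do_work_locked(long arg)
{
    if (arg < 0) {
        return -1;
    }
    return arg * 2;
}

/* Wrapper: the only place the lock is taken and released. */
static long do_work(long arg)
{
    long ret;

    pthread_mutex_lock(&mmap_mutex);
    ret = do_work_locked(arg);
    pthread_mutex_unlock(&mmap_mutex);
    return ret;
}
```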
* [PATCH 17/33] linux-user: Move some mmap checks outside the lock
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (15 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 16/33] linux-user: Split out target_mmap__locked Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 6:47 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 18/33] linux-user: Fix sub-host-page mmap Richard Henderson
` (15 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Basic validation of the operands does not require the lock.
Hoist these checks from target_mmap__locked back into target_mmap.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 107 +++++++++++++++++++++++-----------------------
1 file changed, 53 insertions(+), 54 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 448f168df1..85d16a29c1 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -447,52 +447,14 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
- int target_prot, int flags,
+ int target_prot, int flags, int page_flags,
int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
- int page_flags;
off_t host_offset;
- if (!len) {
- errno = EINVAL;
- return -1;
- }
-
- page_flags = validate_prot_to_pageflags(target_prot);
- if (!page_flags) {
- errno = EINVAL;
- return -1;
- }
-
- /* Also check for overflows... */
- len = TARGET_PAGE_ALIGN(len);
- if (!len) {
- errno = ENOMEM;
- return -1;
- }
-
- if (offset & ~TARGET_PAGE_MASK) {
- errno = EINVAL;
- return -1;
- }
-
- /*
- * If we're mapping shared memory, ensure we generate code for parallel
- * execution and flush old translations. This will work up to the level
- * supported by the host -- anything that requires EXCP_ATOMIC will not
- * be atomic with respect to an external process.
- */
- if (flags & MAP_SHARED) {
- CPUState *cpu = thread_cpu;
- if (!(cpu->tcg_cflags & CF_PARALLEL)) {
- cpu->tcg_cflags |= CF_PARALLEL;
- tb_flush(cpu);
- }
- }
-
real_start = start & -host_page_size;
host_offset = offset & -host_page_size;
@@ -572,23 +534,9 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_start = start;
passthrough_last = last;
} else {
- if (start & ~TARGET_PAGE_MASK) {
- errno = EINVAL;
- return -1;
- }
last = start + len - 1;
real_last = ROUND_UP(last, host_page_size) - 1;
- /*
- * Test if requested memory area fits target address space
- * It can fail only on 64-bit host with 32-bit target.
- * On any other target/host host mmap() handles this error correctly.
- */
- if (last < start || !guest_range_valid_untagged(start, len)) {
- errno = ENOMEM;
- return -1;
- }
-
if (flags & MAP_FIXED_NOREPLACE) {
/* Validate that the chosen range is empty. */
if (!page_check_range_empty(start, last)) {
@@ -733,13 +681,64 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
int flags, int fd, off_t offset)
{
abi_long ret;
+ int page_flags;
trace_target_mmap(start, len, target_prot, flags, fd, offset);
+
+ if (!len) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ page_flags = validate_prot_to_pageflags(target_prot);
+ if (!page_flags) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ /* Also check for overflows... */
+ len = TARGET_PAGE_ALIGN(len);
+ if (!len || len != (size_t)len) {
+ errno = ENOMEM;
+ return -1;
+ }
+
+ if (offset & ~TARGET_PAGE_MASK) {
+ errno = EINVAL;
+ return -1;
+ }
+ if (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE)) {
+ if (start & ~TARGET_PAGE_MASK) {
+ errno = EINVAL;
+ return -1;
+ }
+ if (!guest_range_valid_untagged(start, len)) {
+ errno = ENOMEM;
+ return -1;
+ }
+ }
+
mmap_lock();
- ret = target_mmap__locked(start, len, target_prot, flags, fd, offset);
+ ret = target_mmap__locked(start, len, target_prot, flags,
+ page_flags, fd, offset);
mmap_unlock();
+
+ /*
+ * If we're mapping shared memory, ensure we generate code for parallel
+ * execution and flush old translations. This will work up to the level
+ * supported by the host -- anything that requires EXCP_ATOMIC will not
+ * be atomic with respect to an external process.
+ */
+ if (ret != -1 && (flags & MAP_TYPE) != MAP_PRIVATE) {
+ CPUState *cpu = thread_cpu;
+ if (!(cpu->tcg_cflags & CF_PARALLEL)) {
+ cpu->tcg_cflags |= CF_PARALLEL;
+ tb_flush(cpu);
+ }
+ }
+
return ret;
}
--
2.34.1
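The hoisted checks are pure functions of the arguments, which is why holding mmap_lock for them was unnecessary. A condensed sketch of that validation with hypothetical constants (`GUEST_ADDR_MAX` stands in for the real `guest_range_valid_untagged()` logic):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE  0x1000
#define TARGET_PAGE_MASK  (~(uint64_t)(TARGET_PAGE_SIZE - 1))
#define GUEST_ADDR_MAX    0xffffffffu   /* assumed 32-bit guest */

static bool mmap_args_valid(uint64_t start, uint64_t len, bool fixed)
{
    if (len == 0) {
        return false;                       /* EINVAL */
    }
    /* Align the length; a huge len can wrap to zero here. */
    len = (len + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
    if (len == 0) {
        return false;                       /* overflow -> ENOMEM */
    }
    if (fixed) {
        if (start & ~TARGET_PAGE_MASK) {
            return false;                   /* unaligned -> EINVAL */
        }
        if (start + len - 1 < start || start + len - 1 > GUEST_ADDR_MAX) {
            return false;                   /* out of range -> ENOMEM */
        }
    }
    return true;
}
```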
* [PATCH 18/33] linux-user: Fix sub-host-page mmap
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (16 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 17/33] linux-user: Move some mmap checks outside the lock Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 19/33] linux-user: Split out mmap_end Richard Henderson
` (14 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
We cannot skip over the_end1 to the_end, because doing so fails to
record the validity of the guest page in the interval tree.
Remove "the_end" and rename "the_end1" to "the_end".
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 85d16a29c1..e905b1b8f2 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -599,7 +599,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
target_prot, flags, fd, offset)) {
return -1;
}
- goto the_end1;
+ goto the_end;
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
@@ -646,7 +646,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_last = real_last;
}
}
- the_end1:
+ the_end:
if (flags & MAP_ANONYMOUS) {
page_flags |= PAGE_ANON;
}
@@ -663,7 +663,6 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
page_set_flags(passthrough_last + 1, last, page_flags);
}
}
- the_end:
trace_target_mmap_complete(start);
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
FILE *f = qemu_log_trylock();
--
2.34.1
* [PATCH 19/33] linux-user: Split out mmap_end
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (17 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 18/33] linux-user: Fix sub-host-page mmap Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 6:49 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 20/33] linux-user: Do early mmap placement only for reserved_va Richard Henderson
` (13 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Use a subroutine instead of a goto within target_mmap__locked.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 69 +++++++++++++++++++++++++++--------------------
1 file changed, 40 insertions(+), 29 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index e905b1b8f2..caa76eb11a 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -446,6 +446,42 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
}
}
+/*
+ * Record a successful mmap within the user-exec interval tree.
+ */
+static abi_long mmap_end(abi_ulong start, abi_ulong last,
+ abi_ulong passthrough_start,
+ abi_ulong passthrough_last,
+ int flags, int page_flags)
+{
+ if (flags & MAP_ANONYMOUS) {
+ page_flags |= PAGE_ANON;
+ }
+ page_flags |= PAGE_RESET;
+ if (passthrough_start > passthrough_last) {
+ page_set_flags(start, last, page_flags);
+ } else {
+ if (start < passthrough_start) {
+ page_set_flags(start, passthrough_start - 1, page_flags);
+ }
+ page_set_flags(passthrough_start, passthrough_last,
+ page_flags | PAGE_PASSTHROUGH);
+ if (passthrough_last < last) {
+ page_set_flags(passthrough_last + 1, last, page_flags);
+ }
+ }
+ trace_target_mmap_complete(start);
+ if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
+ FILE *f = qemu_log_trylock();
+ if (f) {
+ fprintf(f, "page layout changed following mmap\n");
+ page_dump(f);
+ qemu_log_unlock(f);
+ }
+ }
+ return start;
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -588,7 +624,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
ret = target_mprotect(start, len, target_prot);
assert(ret == 0);
}
- goto the_end;
+ return mmap_end(start, last, -1, 0, flags, page_flags);
}
/* handle the start of the mapping */
@@ -599,7 +635,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
target_prot, flags, fd, offset)) {
return -1;
}
- goto the_end;
+ return mmap_end(start, last, -1, 0, flags, page_flags);
}
if (!mmap_frag(real_start, start,
real_start + host_page_size - 1,
@@ -646,33 +682,8 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
passthrough_last = real_last;
}
}
- the_end:
- if (flags & MAP_ANONYMOUS) {
- page_flags |= PAGE_ANON;
- }
- page_flags |= PAGE_RESET;
- if (passthrough_start > passthrough_last) {
- page_set_flags(start, last, page_flags);
- } else {
- if (start < passthrough_start) {
- page_set_flags(start, passthrough_start - 1, page_flags);
- }
- page_set_flags(passthrough_start, passthrough_last,
- page_flags | PAGE_PASSTHROUGH);
- if (passthrough_last < last) {
- page_set_flags(passthrough_last + 1, last, page_flags);
- }
- }
- trace_target_mmap_complete(start);
- if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
- FILE *f = qemu_log_trylock();
- if (f) {
- fprintf(f, "page layout changed following mmap\n");
- page_dump(f);
- qemu_log_unlock(f);
- }
- }
- return start;
+ return mmap_end(start, last, passthrough_start, passthrough_last,
+ flags, page_flags);
}
/* NOTE: all the constants are the HOST ones */
--
2.34.1
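The flag-setting logic that `mmap_end()` centralizes splits `[start, last]` into at most three runs around the passthrough subrange (plain prefix, passthrough middle, plain suffix); an empty passthrough range is encoded as `passthrough_start > passthrough_last`. A hypothetical helper counting those runs:

```c
#include <assert.h>
#include <stdint.h>

/* How many page_set_flags() calls mmap_end() makes for the given
   ranges: one per plain run plus one for the passthrough run. */
static int mmap_end_regions(uint64_t start, uint64_t last,
                            uint64_t pt_start, uint64_t pt_last)
{
    int n;

    if (pt_start > pt_last) {
        return 1;               /* no passthrough: one plain region */
    }
    n = 1;                      /* the passthrough region itself */
    if (start < pt_start) {
        n++;                    /* plain prefix */
    }
    if (pt_last < last) {
        n++;                    /* plain suffix */
    }
    return n;
}
```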
* [PATCH 20/33] linux-user: Do early mmap placement only for reserved_va
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (18 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 19/33] linux-user: Split out mmap_end Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 21/33] linux-user: Split out mmap_h_eq_g Richard Henderson
` (12 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
For reserved_va, place all non-fixed maps first, then proceed
as for MAP_FIXED.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index caa76eb11a..7d482df06d 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -495,17 +495,19 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
host_offset = offset & -host_page_size;
/*
- * If the user is asking for the kernel to find a location, do that
- * before we truncate the length for mapping files below.
+ * For reserved_va, we are in full control of the allocation.
+ * Find a suitable hole and convert to MAP_FIXED.
*/
- if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
+ if (reserved_va && !(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
host_len = len + offset - host_offset;
- host_len = ROUND_UP(host_len, host_page_size);
- start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
+ start = mmap_find_vma(real_start, host_len,
+ MAX(host_page_size, TARGET_PAGE_SIZE));
if (start == (abi_ulong)-1) {
errno = ENOMEM;
return -1;
}
+ start += offset - host_offset;
+ flags |= MAP_FIXED;
}
/*
--
2.34.1
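The `start += offset - host_offset` adjustment deserves a note: `mmap_find_vma()` returns a host-page-aligned hole, and adding back the sub-page part of the file offset keeps `start - offset` host-page aligned, which the eventual MAP_FIXED mmap of the file requires. In isolation (hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

/* Given a host-page-aligned hole, place the guest start so that the
   distance from start to the file offset is host-page aligned
   (host_page_size must be a power of two). */
static uint64_t place_start(uint64_t hole, uint64_t offset,
                            uint64_t host_page_size)
{
    uint64_t host_offset = offset & -host_page_size;

    return hole + (offset - host_offset);
}
```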
* [PATCH 21/33] linux-user: Split out mmap_h_eq_g
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (19 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 20/33] linux-user: Do early mmap placement only for reserved_va Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 22/33] linux-user: Split out mmap_h_lt_g Richard Henderson
` (11 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Move the MAP_FIXED_NOREPLACE check for reserved_va earlier.
Move the computation of host_prot earlier.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 66 +++++++++++++++++++++++++++++++++++++----------
1 file changed, 53 insertions(+), 13 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 7d482df06d..7a0c0c1f35 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -482,6 +482,31 @@ static abi_long mmap_end(abi_ulong start, abi_ulong last,
return start;
}
+/*
+ * Special case host page size == target page size,
+ * where there are no edge conditions.
+ */
+static abi_long mmap_h_eq_g(abi_ulong start, abi_ulong len,
+ int host_prot, int flags, int page_flags,
+ int fd, off_t offset)
+{
+ void *p, *want_p = g2h_untagged(start);
+ abi_ulong last;
+
+ p = mmap(want_p, len, host_prot, flags, fd, offset);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+ if ((flags & MAP_FIXED_NOREPLACE) && p != want_p) {
+ errno = EEXIST;
+ return -1;
+ }
+
+ start = h2g(p);
+ last = start + len - 1;
+ return mmap_end(start, last, start, last, flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -490,6 +515,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
abi_ulong ret, last, real_start, real_last, retaddr, host_len;
abi_ulong passthrough_start = -1, passthrough_last = 0;
off_t host_offset;
+ int host_prot;
real_start = start & -host_page_size;
host_offset = offset & -host_page_size;
@@ -498,16 +524,33 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
* For reserved_va, we are in full control of the allocation.
* Find a suitable hole and convert to MAP_FIXED.
*/
- if (reserved_va && !(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
- host_len = len + offset - host_offset;
- start = mmap_find_vma(real_start, host_len,
- MAX(host_page_size, TARGET_PAGE_SIZE));
- if (start == (abi_ulong)-1) {
- errno = ENOMEM;
- return -1;
+ if (reserved_va) {
+ if (flags & MAP_FIXED_NOREPLACE) {
+ /* Validate that the chosen range is empty. */
+ if (!page_check_range_empty(start, start + len - 1)) {
+ errno = EEXIST;
+ return -1;
+ }
+ flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
+ } else if (!(flags & MAP_FIXED)) {
+ size_t real_len = len + offset - host_offset;
+ abi_ulong align = MAX(host_page_size, TARGET_PAGE_SIZE);
+
+ start = mmap_find_vma(real_start, real_len, align);
+ if (start == (abi_ulong)-1) {
+ errno = ENOMEM;
+ return -1;
+ }
+ start += offset - host_offset;
+ flags |= MAP_FIXED;
}
- start += offset - host_offset;
- flags |= MAP_FIXED;
+ }
+
+ host_prot = target_to_host_prot(target_prot);
+
+ if (host_page_size == TARGET_PAGE_SIZE) {
+ return mmap_h_eq_g(start, len, host_prot, flags,
+ page_flags, fd, offset);
}
/*
@@ -543,12 +586,10 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
uintptr_t host_start;
- int host_prot;
void *p;
host_len = len + offset - host_offset;
host_len = ROUND_UP(host_len, host_page_size);
- host_prot = target_to_host_prot(target_prot);
/* Note: we prefer to control the mapping address. */
p = mmap(g2h_untagged(start), host_len, host_prot,
@@ -671,8 +712,7 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
len1 = real_last - real_start + 1;
want_p = g2h_untagged(real_start);
- p = mmap(want_p, len1, target_to_host_prot(target_prot),
- flags, fd, offset1);
+ p = mmap(want_p, len1, host_prot, flags, fd, offset1);
if (p != want_p) {
if (p != MAP_FAILED) {
munmap(p, len1);
--
2.34.1
* [PATCH 22/33] linux-user: Split out mmap_h_lt_g
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (20 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 21/33] linux-user: Split out mmap_h_eq_g Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 23/33] linux-user: Split out mmap_h_gt_g Richard Henderson
` (10 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Work much harder to get alignment and mapping beyond the end
of the file correct, both of which are exercised by our
test-mmap for alpha (8k pages) on any 4k-page host.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 156 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 125 insertions(+), 31 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 7a0c0c1f35..ed82b4bb75 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -507,6 +507,128 @@ static abi_long mmap_h_eq_g(abi_ulong start, abi_ulong len,
return mmap_end(start, last, start, last, flags, page_flags);
}
+/*
+ * Special case host page size < target page size.
+ *
+ * The two special cases are increased guest alignment, and mapping
+ * past the end of a file.
+ *
+ * When mapping files into a memory area larger than the file,
+ * accesses to pages beyond the file size will cause a SIGBUS.
+ *
+ * For example, if mmaping a file of 100 bytes on a host with 4K
+ * pages emulating a target with 8K pages, the target expects to
+ * be able to access the first 8K. But the host will trap us on
+ * any access beyond 4K.
+ *
+ * When emulating a target with a larger page size than the host's,
+ * we may need to truncate file maps at EOF and add extra anonymous
+ * pages up to the targets page boundary.
+ *
+ * This workaround only works for files that do not change.
+ * If the file is later extended (e.g. ftruncate), the SIGBUS
+ * vanishes and the proper behaviour is that changes within the
+ * anon page should be reflected in the file.
+ *
+ * However, this case is rather common with executable images,
+ * so the workaround is important for even trivial tests, whereas
+ * the mmap of a file being extended is less common.
+ */
+static abi_long mmap_h_lt_g(abi_ulong start, abi_ulong len, int host_prot,
+ int mmap_flags, int page_flags, int fd,
+ off_t offset, int host_page_size)
+{
+ void *p, *want_p = g2h_untagged(start);
+ off_t fileend_adj = 0;
+ int flags = mmap_flags;
+ abi_ulong last, pass_last;
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ struct stat sb;
+
+ if (fstat(fd, &sb) == -1) {
+ return -1;
+ }
+ if (offset >= sb.st_size) {
+ /*
+ * The entire map is beyond the end of the file.
+ * Transform it to an anonymous mapping.
+ */
+ flags |= MAP_ANONYMOUS;
+ fd = -1;
+ offset = 0;
+ } else if (offset + len > sb.st_size) {
+ /*
+ * A portion of the map is beyond the end of the file.
+ * Truncate the file portion of the allocation.
+ */
+ fileend_adj = offset + len - sb.st_size;
+ }
+ }
+
+ if (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE)) {
+ if (fileend_adj) {
+ p = mmap(want_p, len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ } else {
+ p = mmap(want_p, len, host_prot, flags, fd, offset);
+ }
+ if (p != want_p) {
+ if (p != MAP_FAILED) {
+ munmap(p, len);
+ errno = EEXIST;
+ }
+ return -1;
+ }
+
+ if (fileend_adj) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED,
+ fd, offset);
+ assert(t != MAP_FAILED);
+ }
+ } else {
+ size_t host_len, part_len;
+
+ /*
+ * Take care to align the host memory. Perform a larger anonymous
+ * allocation and extract the aligned portion. Remap the file on
+ * top of that.
+ */
+ host_len = len + TARGET_PAGE_SIZE - host_page_size;
+ p = mmap(want_p, host_len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+
+ part_len = (uintptr_t)p & (TARGET_PAGE_SIZE - 1);
+ if (part_len) {
+ part_len = TARGET_PAGE_SIZE - part_len;
+ munmap(p, part_len);
+ p += part_len;
+ host_len -= part_len;
+ }
+ if (len < host_len) {
+ munmap(p + len, host_len - len);
+ }
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ flags | MAP_FIXED, fd, offset);
+ assert(t != MAP_FAILED);
+ }
+
+ start = h2g(p);
+ }
+
+ last = start + len - 1;
+ if (fileend_adj) {
+ pass_last = ROUND_UP(last - fileend_adj, host_page_size) - 1;
+ } else {
+ pass_last = last;
+ }
+ return mmap_end(start, last, start, pass_last, mmap_flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -551,37 +673,9 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
if (host_page_size == TARGET_PAGE_SIZE) {
return mmap_h_eq_g(start, len, host_prot, flags,
page_flags, fd, offset);
- }
-
- /*
- * When mapping files into a memory area larger than the file, accesses
- * to pages beyond the file size will cause a SIGBUS.
- *
- * For example, if mmaping a file of 100 bytes on a host with 4K pages
- * emulating a target with 8K pages, the target expects to be able to
- * access the first 8K. But the host will trap us on any access beyond
- * 4K.
- *
- * When emulating a target with a larger page-size than the hosts, we
- * may need to truncate file maps at EOF and add extra anonymous pages
- * up to the targets page boundary.
- */
- if (host_page_size < TARGET_PAGE_SIZE && !(flags & MAP_ANONYMOUS)) {
- struct stat sb;
-
- if (fstat(fd, &sb) == -1) {
- return -1;
- }
-
- /* Are we trying to create a map beyond EOF?. */
- if (offset + len > sb.st_size) {
- /*
- * If so, truncate the file map at eof aligned with
- * the hosts real pagesize. Additional anonymous maps
- * will be created beyond EOF.
- */
- len = ROUND_UP(sb.st_size - offset, host_page_size);
- }
+ } else if (host_page_size < TARGET_PAGE_SIZE) {
+ return mmap_h_lt_g(start, len, host_prot, flags,
+ page_flags, fd, offset, host_page_size);
}
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 23/33] linux-user: Split out mmap_h_gt_g
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (21 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 22/33] linux-user: Split out mmap_h_lt_g Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 24/33] tests/tcg: Remove run-test-mmap-* Richard Henderson
` (9 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/mmap.c | 288 ++++++++++++++++++++++------------------------
1 file changed, 139 insertions(+), 149 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index ed82b4bb75..6ab2f35e6f 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -223,7 +223,16 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
return ret;
}
-/* map an incomplete host page */
+/*
+ * Map an incomplete host page.
+ *
+ * Here be dragons. This case will not work if there is an existing
+ * overlapping host page, which is file mapped, and for which the mapping
+ * is beyond the end of the file. In that case, we will see SIGBUS when
+ * trying to write a portion of this page.
+ *
+ * FIXME: Work around this with a temporary signal handler and longjmp.
+ */
static bool mmap_frag(abi_ulong real_start, abi_ulong start, abi_ulong last,
int prot, int flags, int fd, off_t offset)
{
@@ -629,19 +638,138 @@ static abi_long mmap_h_lt_g(abi_ulong start, abi_ulong len, int host_prot,
return mmap_end(start, last, start, pass_last, mmap_flags, page_flags);
}
+/*
+ * Special case host page size > target page size.
+ *
+ * The two special cases are address and file offsets that are valid
+ * for the guest that cannot be directly represented by the host.
+ */
+static abi_long mmap_h_gt_g(abi_ulong start, abi_ulong len,
+ int target_prot, int host_prot,
+ int flags, int page_flags, int fd,
+ off_t offset, int host_page_size)
+{
+ void *p, *want_p = g2h_untagged(start);
+ off_t host_offset = offset & -host_page_size;
+ abi_ulong last, real_start, real_last;
+ bool misaligned_offset = false;
+ size_t host_len;
+
+ if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
+ /*
+ * Adjust the offset to something representable on the host.
+ */
+ host_len = len + offset - host_offset;
+ p = mmap(want_p, host_len, host_prot, flags, fd, host_offset);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+
+ /* Update start to the file position at offset. */
+ p += offset - host_offset;
+
+ start = h2g(p);
+ last = start + len - 1;
+ return mmap_end(start, last, start, last, flags, page_flags);
+ }
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ misaligned_offset = (start ^ offset) & (host_page_size - 1);
+
+ /*
+ * The fallback for misalignment is a private mapping + read.
+ * This carries none of the semantics required of MAP_SHARED.
+ */
+ if (misaligned_offset && (flags & MAP_TYPE) != MAP_PRIVATE) {
+ errno = EINVAL;
+ return -1;
+ }
+ }
+
+ last = start + len - 1;
+ real_start = start & -host_page_size;
+ real_last = ROUND_UP(last, host_page_size) - 1;
+
+ /*
+ * Handle the start and end of the mapping.
+ */
+ if (real_start < start) {
+ abi_ulong real_page_last = real_start + host_page_size - 1;
+ if (last <= real_page_last) {
+ /* Entire allocation a subset of one host page. */
+ if (!mmap_frag(real_start, start, last, target_prot,
+ flags, fd, offset)) {
+ return -1;
+ }
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+ }
+
+ if (!mmap_frag(real_start, start, real_page_last, target_prot,
+ flags, fd, offset)) {
+ return -1;
+ }
+ real_start = real_page_last + 1;
+ }
+
+ if (last < real_last) {
+ abi_ulong real_page_start = real_last - host_page_size + 1;
+ if (!mmap_frag(real_page_start, real_page_start, last,
+ target_prot, flags, fd,
+ offset + real_page_start - start)) {
+ return -1;
+ }
+ real_last = real_page_start - 1;
+ }
+
+ if (real_start > real_last) {
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+ }
+
+ /*
+ * Handle the middle of the mapping.
+ */
+
+ host_len = real_last - real_start + 1;
+ want_p += real_start - start;
+
+ if (flags & MAP_ANONYMOUS) {
+ p = mmap(want_p, host_len, host_prot, flags, -1, 0);
+ } else if (!misaligned_offset) {
+ p = mmap(want_p, host_len, host_prot, flags, fd,
+ offset + real_start - start);
+ } else {
+ p = mmap(want_p, host_len, host_prot | PROT_WRITE,
+ flags | MAP_ANONYMOUS, -1, 0);
+ }
+ if (p != want_p) {
+ if (p != MAP_FAILED) {
+ munmap(p, host_len);
+ errno = EEXIST;
+ }
+ return -1;
+ }
+
+ if (misaligned_offset) {
+ /* TODO: The read could be short. */
+ if (pread(fd, p, host_len, offset + real_start - start) != host_len) {
+ munmap(p, host_len);
+ return -1;
+ }
+ if (!(host_prot & PROT_WRITE)) {
+ mprotect(p, host_len, host_prot);
+ }
+ }
+
+ return mmap_end(start, last, -1, 0, flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
{
int host_page_size = qemu_real_host_page_size();
- abi_ulong ret, last, real_start, real_last, retaddr, host_len;
- abi_ulong passthrough_start = -1, passthrough_last = 0;
- off_t host_offset;
int host_prot;
- real_start = start & -host_page_size;
- host_offset = offset & -host_page_size;
-
/*
* For reserved_va, we are in full control of the allocation.
* Find a suitable hole and convert to MAP_FIXED.
@@ -655,6 +783,8 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
}
flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
} else if (!(flags & MAP_FIXED)) {
+ abi_ulong real_start = start & -host_page_size;
+ off_t host_offset = offset & -host_page_size;
size_t real_len = len + offset - host_offset;
abi_ulong align = MAX(host_page_size, TARGET_PAGE_SIZE);
@@ -676,150 +806,10 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
} else if (host_page_size < TARGET_PAGE_SIZE) {
return mmap_h_lt_g(start, len, host_prot, flags,
page_flags, fd, offset, host_page_size);
- }
-
- if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
- uintptr_t host_start;
- void *p;
-
- host_len = len + offset - host_offset;
- host_len = ROUND_UP(host_len, host_page_size);
-
- /* Note: we prefer to control the mapping address. */
- p = mmap(g2h_untagged(start), host_len, host_prot,
- flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
- if (p == MAP_FAILED) {
- return -1;
- }
- /* update start so that it points to the file position at 'offset' */
- host_start = (uintptr_t)p;
- if (!(flags & MAP_ANONYMOUS)) {
- p = mmap(g2h_untagged(start), len, host_prot,
- flags | MAP_FIXED, fd, host_offset);
- if (p == MAP_FAILED) {
- munmap(g2h_untagged(start), host_len);
- return -1;
- }
- host_start += offset - host_offset;
- }
- start = h2g(host_start);
- last = start + len - 1;
- passthrough_start = start;
- passthrough_last = last;
} else {
- last = start + len - 1;
- real_last = ROUND_UP(last, host_page_size) - 1;
-
- if (flags & MAP_FIXED_NOREPLACE) {
- /* Validate that the chosen range is empty. */
- if (!page_check_range_empty(start, last)) {
- errno = EEXIST;
- return -1;
- }
-
- /*
- * With reserved_va, the entire address space is mmaped in the
- * host to ensure it isn't accidentally used for something else.
- * We have just checked that the guest address is not mapped
- * within the guest, but need to replace the host reservation.
- *
- * Without reserved_va, despite the guest address check above,
- * keep MAP_FIXED_NOREPLACE so that the guest does not overwrite
- * any host address mappings.
- */
- if (reserved_va) {
- flags = (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED;
- }
- }
-
- /*
- * worst case: we cannot map the file because the offset is not
- * aligned, so we read it
- */
- if (!(flags & MAP_ANONYMOUS) &&
- (offset & (host_page_size - 1)) != (start & (host_page_size - 1))) {
- /*
- * msync() won't work here, so we return an error if write is
- * possible while it is a shared mapping
- */
- if ((flags & MAP_TYPE) == MAP_SHARED
- && (target_prot & PROT_WRITE)) {
- errno = EINVAL;
- return -1;
- }
- retaddr = target_mmap(start, len, target_prot | PROT_WRITE,
- (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))
- | MAP_PRIVATE | MAP_ANONYMOUS,
- -1, 0);
- if (retaddr == -1) {
- return -1;
- }
- if (pread(fd, g2h_untagged(start), len, offset) == -1) {
- return -1;
- }
- if (!(target_prot & PROT_WRITE)) {
- ret = target_mprotect(start, len, target_prot);
- assert(ret == 0);
- }
- return mmap_end(start, last, -1, 0, flags, page_flags);
- }
-
- /* handle the start of the mapping */
- if (start > real_start) {
- if (real_last == real_start + host_page_size - 1) {
- /* one single host page */
- if (!mmap_frag(real_start, start, last,
- target_prot, flags, fd, offset)) {
- return -1;
- }
- return mmap_end(start, last, -1, 0, flags, page_flags);
- }
- if (!mmap_frag(real_start, start,
- real_start + host_page_size - 1,
- target_prot, flags, fd, offset)) {
- return -1;
- }
- real_start += host_page_size;
- }
- /* handle the end of the mapping */
- if (last < real_last) {
- abi_ulong real_page = real_last - host_page_size + 1;
- if (!mmap_frag(real_page, real_page, last,
- target_prot, flags, fd,
- offset + real_page - start)) {
- return -1;
- }
- real_last -= host_page_size;
- }
-
- /* map the middle (easier) */
- if (real_start < real_last) {
- void *p, *want_p;
- off_t offset1;
- size_t len1;
-
- if (flags & MAP_ANONYMOUS) {
- offset1 = 0;
- } else {
- offset1 = offset + real_start - start;
- }
- len1 = real_last - real_start + 1;
- want_p = g2h_untagged(real_start);
-
- p = mmap(want_p, len1, host_prot, flags, fd, offset1);
- if (p != want_p) {
- if (p != MAP_FAILED) {
- munmap(p, len1);
- errno = EEXIST;
- }
- return -1;
- }
- passthrough_start = real_start;
- passthrough_last = real_last;
- }
+ return mmap_h_gt_g(start, len, target_prot, host_prot, flags,
+ page_flags, fd, offset, host_page_size);
}
- return mmap_end(start, last, passthrough_start, passthrough_last,
- flags, page_flags);
}
/* NOTE: all the constants are the HOST ones */
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 24/33] tests/tcg: Remove run-test-mmap-*
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (22 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 23/33] linux-user: Split out mmap_h_gt_g Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 25/33] tests/tcg: Extend file in linux-madvise.c Richard Henderson
` (8 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
These tests are confused, because -p does not change
the guest page size, but the host page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tests/tcg/alpha/Makefile.target | 3 ---
tests/tcg/arm/Makefile.target | 3 ---
tests/tcg/hppa/Makefile.target | 3 ---
tests/tcg/i386/Makefile.target | 3 ---
tests/tcg/m68k/Makefile.target | 3 ---
tests/tcg/multiarch/Makefile.target | 9 ---------
tests/tcg/ppc/Makefile.target | 12 ------------
tests/tcg/sh4/Makefile.target | 3 ---
tests/tcg/sparc64/Makefile.target | 6 ------
9 files changed, 45 deletions(-)
delete mode 100644 tests/tcg/ppc/Makefile.target
delete mode 100644 tests/tcg/sparc64/Makefile.target
diff --git a/tests/tcg/alpha/Makefile.target b/tests/tcg/alpha/Makefile.target
index b94500a7d9..fdd7ddf64e 100644
--- a/tests/tcg/alpha/Makefile.target
+++ b/tests/tcg/alpha/Makefile.target
@@ -13,6 +13,3 @@ test-cmov: test-cond.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $< -o $@ $(LDFLAGS)
run-test-cmov: test-cmov
-
-# On Alpha Linux only supports 8k pages
-EXTRA_RUNS+=run-test-mmap-8192
diff --git a/tests/tcg/arm/Makefile.target b/tests/tcg/arm/Makefile.target
index 0038cef02c..4b8c9c334e 100644
--- a/tests/tcg/arm/Makefile.target
+++ b/tests/tcg/arm/Makefile.target
@@ -79,6 +79,3 @@ sha512-vector: sha512.c
ARM_TESTS += sha512-vector
TESTS += $(ARM_TESTS)
-
-# On ARM Linux only supports 4k pages
-EXTRA_RUNS+=run-test-mmap-4096
diff --git a/tests/tcg/hppa/Makefile.target b/tests/tcg/hppa/Makefile.target
index cdd0d572a7..ea5ae2186d 100644
--- a/tests/tcg/hppa/Makefile.target
+++ b/tests/tcg/hppa/Makefile.target
@@ -2,9 +2,6 @@
#
# HPPA specific tweaks - specifically masking out broken tests
-# On parisc Linux supports 4K/16K/64K (but currently only 4k works)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-16384 run-test-mmap-65536
-
# This triggers failures for hppa-linux about 1% of the time
# HPPA is the odd target that can't use the sigtramp page;
# it requires the full vdso with dwarf2 unwind info.
diff --git a/tests/tcg/i386/Makefile.target b/tests/tcg/i386/Makefile.target
index fdf757c6ce..f64d7bfbf5 100644
--- a/tests/tcg/i386/Makefile.target
+++ b/tests/tcg/i386/Makefile.target
@@ -71,9 +71,6 @@ endif
I386_TESTS:=$(filter-out $(SKIP_I386_TESTS), $(ALL_X86_TESTS))
TESTS=$(MULTIARCH_TESTS) $(I386_TESTS)
-# On i386 and x86_64 Linux only supports 4k pages (large pages are a different hack)
-EXTRA_RUNS+=run-test-mmap-4096
-
sha512-sse: CFLAGS=-msse4.1 -O3
sha512-sse: sha512.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $< -o $@ $(LDFLAGS)
diff --git a/tests/tcg/m68k/Makefile.target b/tests/tcg/m68k/Makefile.target
index 1163c7ef03..73a16aedd2 100644
--- a/tests/tcg/m68k/Makefile.target
+++ b/tests/tcg/m68k/Makefile.target
@@ -5,6 +5,3 @@
VPATH += $(SRC_PATH)/tests/tcg/m68k
TESTS += trap
-
-# On m68k Linux supports 4k and 8k pages (but 8k is currently broken)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-8192
diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
index 43bddeaf21..fa1ac190f2 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -51,18 +51,9 @@ run-plugin-vma-pthread-with-%: vma-pthread
$(call skip-test, $<, "flaky on CI?")
endif
-# We define the runner for test-mmap after the individual
-# architectures have defined their supported pages sizes. If no
-# additional page sizes are defined we only run the default test.
-
-# default case (host page size)
run-test-mmap: test-mmap
$(call run-test, test-mmap, $(QEMU) $<, $< (default))
-# additional page sizes (defined by each architecture adding to EXTRA_RUNS)
-run-test-mmap-%: test-mmap
- $(call run-test, test-mmap-$*, $(QEMU) -p $* $<, $< ($* byte pages))
-
ifneq ($(HAVE_GDB_BIN),)
ifeq ($(HOST_GDB_SUPPORTS_ARCH),y)
GDB_SCRIPT=$(SRC_PATH)/tests/guest-debug/run-test.py
diff --git a/tests/tcg/ppc/Makefile.target b/tests/tcg/ppc/Makefile.target
deleted file mode 100644
index f5e08c7376..0000000000
--- a/tests/tcg/ppc/Makefile.target
+++ /dev/null
@@ -1,12 +0,0 @@
-# -*- Mode: makefile -*-
-#
-# PPC - included from tests/tcg/Makefile
-#
-
-ifneq (,$(findstring 64,$(TARGET_NAME)))
-# On PPC64 Linux can be configured with 4k (default) or 64k pages (currently broken)
-EXTRA_RUNS+=run-test-mmap-4096 #run-test-mmap-65536
-else
-# On PPC32 Linux supports 4K/16K/64K/256K (but currently only 4k works)
-EXTRA_RUNS+=run-test-mmap-4096 #run-test-mmap-16384 run-test-mmap-65536 run-test-mmap-262144
-endif
diff --git a/tests/tcg/sh4/Makefile.target b/tests/tcg/sh4/Makefile.target
index 47c39a44b6..16eaa850a8 100644
--- a/tests/tcg/sh4/Makefile.target
+++ b/tests/tcg/sh4/Makefile.target
@@ -3,9 +3,6 @@
# SuperH specific tweaks
#
-# On sh Linux supports 4k, 8k, 16k and 64k pages (but only 4k currently works)
-EXTRA_RUNS+=run-test-mmap-4096 # run-test-mmap-8192 run-test-mmap-16384 run-test-mmap-65536
-
# This triggers failures for sh4-linux about 10% of the time.
# Random SIGSEGV at unpredictable guest address, cause unknown.
run-signals: signals
diff --git a/tests/tcg/sparc64/Makefile.target b/tests/tcg/sparc64/Makefile.target
deleted file mode 100644
index 408dace783..0000000000
--- a/tests/tcg/sparc64/Makefile.target
+++ /dev/null
@@ -1,6 +0,0 @@
-# -*- Mode: makefile -*-
-#
-# sparc specific tweaks
-
-# On Sparc64 Linux support 8k pages
-EXTRA_RUNS+=run-test-mmap-8192
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 25/33] tests/tcg: Extend file in linux-madvise.c
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (23 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 24/33] tests/tcg: Remove run-test-mmap-* Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 6:54 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 26/33] linux-user: Deprecate and disable -p pagesize Richard Henderson
` (7 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
When guest page size > host page size, this test can fail
due to the SIGBUS protection hack. Avoid this by making
sure that the file size is at least one guest page.
Visible with alpha guest on x86_64 host.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tests/tcg/multiarch/linux/linux-madvise.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tests/tcg/multiarch/linux/linux-madvise.c b/tests/tcg/multiarch/linux/linux-madvise.c
index 29d0997e68..539fb3b772 100644
--- a/tests/tcg/multiarch/linux/linux-madvise.c
+++ b/tests/tcg/multiarch/linux/linux-madvise.c
@@ -42,6 +42,8 @@ static void test_file(void)
assert(ret == 0);
written = write(fd, &c, sizeof(c));
assert(written == sizeof(c));
+ ret = ftruncate(fd, pagesize);
+ assert(ret == 0);
page = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE, fd, 0);
assert(page != MAP_FAILED);
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
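[Editor's note] The two-line test fix above extends the file to a full page before mapping it. A standalone sketch of the same pattern, assuming nothing about QEMU's test harness, shows that once the file covers the mapped page, reads anywhere within the mapping succeed (bytes past the written data read as zero rather than faulting):

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical sketch of the linux-madvise.c fix: extend the file so
   no mapped page lies wholly beyond EOF. */
static int demo(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char path[] = "/tmp/madvise_XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) return 1;
    unlink(path);

    char c = 1;
    if (write(fd, &c, sizeof(c)) != sizeof(c)) return 1;
    /* The added ftruncate(): grow the 1-byte file to a full page. */
    if (ftruncate(fd, page) != 0) return 1;

    const char *p = mmap(NULL, (size_t)page, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 1;

    /* Written byte is visible; the extended region reads as zero. */
    int ok = (p[0] == 1 && p[page - 1] == 0);
    munmap((void *)p, page);
    close(fd);
    return ok ? 0 : 1;
}
```

In the guest-page > host-page case the patch addresses, the ftruncate() matters because QEMU's SIGBUS workaround otherwise backs the region past EOF with anonymous pages.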
* Re: [PATCH 25/33] tests/tcg: Extend file in linux-madvise.c
2023-08-18 17:12 ` [PATCH 25/33] tests/tcg: Extend file in linux-madvise.c Richard Henderson
@ 2023-08-21 6:54 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 44+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-08-21 6:54 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 18/8/23 19:12, Richard Henderson wrote:
> When guest page size > host page size, this test can fail
> due to the SIGBUS protection hack. Avoid this by making
> sure that the file size is at least one guest page.
>
> Visible with alpha guest on x86_64 host.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tests/tcg/multiarch/linux/linux-madvise.c | 2 ++
> 1 file changed, 2 insertions(+)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 26/33] linux-user: Deprecate and disable -p pagesize
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (24 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 25/33] tests/tcg: Extend file in linux-madvise.c Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 6:58 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 27/33] cpu: Remove page_size_init Richard Henderson
` (6 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
This option controls the host page size. From the mis-usage in
our own testsuite, this is easily confused with guest page size.
The only thing that occurs when changing the host page size is
that stuff breaks, because one cannot actually change the host
page size. Therefore reject all but the no-op setting as part
of the deprecation process.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/main.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index c1058abc3c..3dd3310331 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -332,10 +332,11 @@ static void handle_arg_ld_prefix(const char *arg)
static void handle_arg_pagesize(const char *arg)
{
- qemu_host_page_size = atoi(arg);
- if (qemu_host_page_size == 0 ||
- (qemu_host_page_size & (qemu_host_page_size - 1)) != 0) {
- fprintf(stderr, "page size must be a power of two\n");
+ unsigned size, want = qemu_real_host_page_size();
+
+ if (qemu_strtoui(arg, NULL, 10, &size) || size != want) {
+ error_report("Deprecated page size option cannot "
+ "change host page size (%u)", want);
exit(EXIT_FAILURE);
}
}
@@ -496,7 +497,7 @@ static const struct qemu_argument arg_table[] = {
{"D", "QEMU_LOG_FILENAME", true, handle_arg_log_filename,
"logfile", "write logs to 'logfile' (default stderr)"},
{"p", "QEMU_PAGESIZE", true, handle_arg_pagesize,
- "pagesize", "set the host page size to 'pagesize'"},
+ "pagesize", "deprecated change to host page size"},
{"one-insn-per-tb",
"QEMU_ONE_INSN_PER_TB", false, handle_arg_one_insn_per_tb,
"", "run with one guest instruction per emulated TB"},
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* Re: [PATCH 26/33] linux-user: Deprecate and disable -p pagesize
2023-08-18 17:12 ` [PATCH 26/33] linux-user: Deprecate and disable -p pagesize Richard Henderson
@ 2023-08-21 6:58 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 44+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-08-21 6:58 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 18/8/23 19:12, Richard Henderson wrote:
> This option controls the host page size. From the mis-usage in
> our own testsuite, this is easily confused with guest page size.
>
> The only thing that occurs when changing the host page size is
> that stuff breaks, because one cannot actually change the host
> page size. Therefore reject all but the no-op setting as part
> of the deprecation process.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> linux-user/main.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
OK, but missing updates in docs/about/deprecated.rst
and docs/user/main.rst.
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 27/33] cpu: Remove page_size_init
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (25 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 26/33] linux-user: Deprecate and disable -p pagesize Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 7:00 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 28/33] accel/tcg: Disconnect TargetPageDataNode from page size Richard Henderson
` (5 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Move qemu_host_page_{size,mask} and HOST_PAGE_ALIGN into bsd-user.
It should be removed from bsd-user as well, but defer that cleanup.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
bsd-user/qemu.h | 7 +++++++
include/exec/cpu-common.h | 7 -------
include/hw/core/cpu.h | 2 --
accel/tcg/translate-all.c | 1 -
bsd-user/main.c | 12 ++++++++++++
cpu.c | 13 -------------
softmmu/vl.c | 1 -
7 files changed, 19 insertions(+), 24 deletions(-)
diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
index 8f2d6a3c78..9fe4e70890 100644
--- a/bsd-user/qemu.h
+++ b/bsd-user/qemu.h
@@ -38,6 +38,13 @@ extern char **environ;
#include "exec/gdbstub.h"
#include "qemu/clang-tsa.h"
+/*
+ * TODO: Remove these and rely only on qemu_real_host_page_size().
+ */
+extern uintptr_t qemu_host_page_size;
+extern intptr_t qemu_host_page_mask;
+#define HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_host_page_size)
+
/*
* This struct is used to hold certain information about the image. Basically,
* it replicates in user space what would be certain task_struct fields in the
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 87dc9a752c..1bf4616fa3 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -22,13 +22,6 @@ typedef uint64_t vaddr;
void cpu_exec_init_all(void);
void cpu_exec_step_atomic(CPUState *cpu);
-/* Using intptr_t ensures that qemu_*_page_mask is sign-extended even
- * when intptr_t is 32-bit and we are aligning a long long.
- */
-extern uintptr_t qemu_host_page_size;
-extern intptr_t qemu_host_page_mask;
-
-#define HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_host_page_size)
#define REAL_HOST_PAGE_ALIGN(addr) ROUND_UP((addr), qemu_real_host_page_size())
/* The CPU list lock nests outside page_(un)lock or mmap_(un)lock */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index fdcbe87352..66575eec73 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -1025,8 +1025,6 @@ bool target_words_bigendian(void);
const char *target_name(void);
-void page_size_init(void);
-
#ifdef NEED_CPU_H
#ifndef CONFIG_USER_ONLY
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index b2d4e22c17..d84558dd95 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -255,7 +255,6 @@ bool cpu_unwind_state_data(CPUState *cpu, uintptr_t host_pc, uint64_t *data)
void page_init(void)
{
- page_size_init();
page_table_config_init();
}
diff --git a/bsd-user/main.c b/bsd-user/main.c
index 381bb18df8..3cb2b5f43c 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -49,6 +49,13 @@
#include "host-os.h"
#include "target_arch_cpu.h"
+
+/*
+ * TODO: Remove these and rely only on qemu_real_host_page_size().
+ */
+uintptr_t qemu_host_page_size;
+intptr_t qemu_host_page_mask;
+
static bool opt_one_insn_per_tb;
uintptr_t guest_base;
bool have_guest_base;
@@ -308,6 +315,9 @@ int main(int argc, char **argv)
(void) envlist_setenv(envlist, *wrk);
}
+ qemu_host_page_size = getpagesize();
+ qemu_host_page_size = MAX(qemu_host_page_size, TARGET_PAGE_SIZE);
+
cpu_model = NULL;
qemu_add_opts(&qemu_trace_opts);
@@ -407,6 +417,8 @@ int main(int argc, char **argv)
}
}
+ qemu_host_page_mask = -qemu_host_page_size;
+
/* init debug */
{
int mask = 0;
diff --git a/cpu.c b/cpu.c
index 1c948d1161..743c889ece 100644
--- a/cpu.c
+++ b/cpu.c
@@ -431,16 +431,3 @@ const char *target_name(void)
{
return TARGET_NAME;
}
-
-void page_size_init(void)
-{
- /* NOTE: we can always suppose that qemu_host_page_size >=
- TARGET_PAGE_SIZE */
- if (qemu_host_page_size == 0) {
- qemu_host_page_size = qemu_real_host_page_size();
- }
- if (qemu_host_page_size < TARGET_PAGE_SIZE) {
- qemu_host_page_size = TARGET_PAGE_SIZE;
- }
- qemu_host_page_mask = -(intptr_t)qemu_host_page_size;
-}
diff --git a/softmmu/vl.c b/softmmu/vl.c
index b0b96f67fa..bc2aab9aaa 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -2049,7 +2049,6 @@ static void qemu_create_machine(QDict *qdict)
}
cpu_exec_init_all();
- page_size_init();
if (machine_class->hw_version) {
qemu_set_hw_version(machine_class->hw_version);
--
2.34.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* Re: [PATCH 27/33] cpu: Remove page_size_init
2023-08-18 17:12 ` [PATCH 27/33] cpu: Remove page_size_init Richard Henderson
@ 2023-08-21 7:00 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 44+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-08-21 7:00 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 18/8/23 19:12, Richard Henderson wrote:
> Move qemu_host_page_{size,mask} and HOST_PAGE_ALIGN into bsd-user.
> It should be removed from bsd-user as well, but defer that cleanup.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> bsd-user/qemu.h | 7 +++++++
> include/exec/cpu-common.h | 7 -------
> include/hw/core/cpu.h | 2 --
> accel/tcg/translate-all.c | 1 -
> bsd-user/main.c | 12 ++++++++++++
> cpu.c | 13 -------------
> softmmu/vl.c | 1 -
> 7 files changed, 19 insertions(+), 24 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 28/33] accel/tcg: Disconnect TargetPageDataNode from page size
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (26 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 27/33] cpu: Remove page_size_init Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 7:03 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 29/33] linux-user: Allow TARGET_PAGE_BITS_VARY Richard Henderson
` (4 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Dynamically size the node for the runtime target page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/user-exec.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 4c1697500a..09dc85c851 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -863,7 +863,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
typedef struct TargetPageDataNode {
struct rcu_head rcu;
IntervalTreeNode itree;
- char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+ char data[] __attribute__((aligned));
} TargetPageDataNode;
static IntervalTreeRoot targetdata_root;
@@ -901,7 +901,8 @@ void page_reset_target_data(target_ulong start, target_ulong last)
n_last = MIN(last, n->last);
p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
- memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
+ memset(t->data + p_ofs * TARGET_PAGE_DATA_SIZE, 0,
+ p_len * TARGET_PAGE_DATA_SIZE);
}
}
@@ -909,7 +910,7 @@ void *page_get_target_data(target_ulong address)
{
IntervalTreeNode *n;
TargetPageDataNode *t;
- target_ulong page, region;
+ target_ulong page, region, p_ofs;
page = address & TARGET_PAGE_MASK;
region = address & TBD_MASK;
@@ -925,7 +926,8 @@ void *page_get_target_data(target_ulong address)
mmap_lock();
n = interval_tree_iter_first(&targetdata_root, page, page);
if (!n) {
- t = g_new0(TargetPageDataNode, 1);
+ t = g_malloc0(sizeof(TargetPageDataNode)
+ + TPD_PAGES * TARGET_PAGE_DATA_SIZE);
n = &t->itree;
n->start = region;
n->last = region | ~TBD_MASK;
@@ -935,7 +937,8 @@ void *page_get_target_data(target_ulong address)
}
t = container_of(n, TargetPageDataNode, itree);
- return t->data[(page - region) >> TARGET_PAGE_BITS];
+ p_ofs = (page - region) >> TARGET_PAGE_BITS;
+ return t->data + p_ofs * TARGET_PAGE_DATA_SIZE;
}
#else
void page_reset_target_data(target_ulong start, target_ulong last) { }
--
2.34.1
* [PATCH 29/33] linux-user: Allow TARGET_PAGE_BITS_VARY
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (27 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 28/33] accel/tcg: Disconnect TargetPageDataNode from page size Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 30/33] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only Richard Henderson
` (3 subsequent siblings)
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
If TARGET_PAGE_BITS_VARY is set, match the guest page size to the host page size.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/main.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index 3dd3310331..2334d7cc67 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -55,6 +55,7 @@
#include "loader.h"
#include "user-mmap.h"
#include "accel/tcg/perf.h"
+#include "exec/page-vary.h"
#ifdef CONFIG_SEMIHOSTING
#include "semihosting/semihost.h"
@@ -683,6 +684,7 @@ int main(int argc, char **argv, char **envp)
int i;
int ret;
int execfd;
+ int host_page_size;
unsigned long max_reserved_va;
bool preserve_argv0;
@@ -794,6 +796,16 @@ int main(int argc, char **argv, char **envp)
opt_one_insn_per_tb, &error_abort);
ac->init_machine(NULL);
}
+
+ /*
+ * Finalize page size before creating CPUs.
+ * This will do nothing if !TARGET_PAGE_BITS_VARY.
+ * The most efficient setting is to match the host.
+ */
+ host_page_size = qemu_real_host_page_size();
+ set_preferred_target_page_bits(ctz32(host_page_size));
+ finalize_target_page_bits();
+
cpu = cpu_create(cpu_type);
env = cpu->env_ptr;
cpu_reset(cpu);
@@ -807,8 +819,6 @@ int main(int argc, char **argv, char **envp)
*/
max_reserved_va = MAX_RESERVED_VA(cpu);
if (reserved_va != 0) {
- int host_page_size = qemu_real_host_page_size();
-
if ((reserved_va + 1) % host_page_size) {
char *s = size_to_str(host_page_size);
fprintf(stderr, "Reserved virtual address not aligned mod %s\n", s);
@@ -907,7 +917,7 @@ int main(int argc, char **argv, char **envp)
* If we're in a chroot with no /proc, fall back to 1 page.
*/
if (mmap_min_addr == 0) {
- mmap_min_addr = qemu_real_host_page_size();
+ mmap_min_addr = host_page_size;
qemu_log_mask(CPU_LOG_PAGE,
"host mmap_min_addr=0x%lx (fallback)\n",
mmap_min_addr);
--
2.34.1
* [PATCH 30/33] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (28 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 29/33] linux-user: Allow TARGET_PAGE_BITS_VARY Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 7:05 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 31/33] linux-user: Bound mmap_min_addr by host page size Richard Henderson
` (2 subsequent siblings)
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Since aarch64 binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu-param.h | 6 ++++-
target/arm/cpu.c | 51 ++++++++++++++++++++++++------------------
2 files changed, 34 insertions(+), 23 deletions(-)
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index b3b35f7aa1..7585a810b2 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -19,9 +19,13 @@
#endif
#ifdef CONFIG_USER_ONLY
-#define TARGET_PAGE_BITS 12
# ifdef TARGET_AARCH64
# define TARGET_TAGGED_ADDRESSES
+/* Allow user-only to vary page size from 4k */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+# else
+# define TARGET_PAGE_BITS 12
# endif
#else
/*
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 93c28d50e5..cb05f7a8a8 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1586,7 +1586,6 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
ARMCPU *cpu = ARM_CPU(dev);
ARMCPUClass *acc = ARM_CPU_GET_CLASS(dev);
CPUARMState *env = &cpu->env;
- int pagebits;
Error *local_err = NULL;
bool no_aa32 = false;
@@ -1953,28 +1952,36 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
!cpu_isar_feature(aa32_vfp_simd, cpu) ||
!arm_feature(env, ARM_FEATURE_XSCALE));
- if (arm_feature(env, ARM_FEATURE_V7) &&
- !arm_feature(env, ARM_FEATURE_M) &&
- !arm_feature(env, ARM_FEATURE_PMSA)) {
- /* v7VMSA drops support for the old ARMv5 tiny pages, so we
- * can use 4K pages.
- */
- pagebits = 12;
- } else {
- /* For CPUs which might have tiny 1K pages, or which have an
- * MPU and might have small region sizes, stick with 1K pages.
- */
- pagebits = 10;
- }
- if (!set_preferred_target_page_bits(pagebits)) {
- /* This can only ever happen for hotplugging a CPU, or if
- * the board code incorrectly creates a CPU which it has
- * promised via minimum_page_size that it will not.
- */
- error_setg(errp, "This CPU requires a smaller page size than the "
- "system is using");
- return;
+#ifndef CONFIG_USER_ONLY
+ {
+ int pagebits;
+ if (arm_feature(env, ARM_FEATURE_V7) &&
+ !arm_feature(env, ARM_FEATURE_M) &&
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
+ /*
+ * v7VMSA drops support for the old ARMv5 tiny pages,
+ * so we can use 4K pages.
+ */
+ pagebits = 12;
+ } else {
+ /*
+ * For CPUs which might have tiny 1K pages, or which have an
+ * MPU and might have small region sizes, stick with 1K pages.
+ */
+ pagebits = 10;
+ }
+ if (!set_preferred_target_page_bits(pagebits)) {
+ /*
+ * This can only ever happen for hotplugging a CPU, or if
+ * the board code incorrectly creates a CPU which it has
+ * promised via minimum_page_size that it will not.
+ */
+ error_setg(errp, "This CPU requires a smaller page size "
+ "than the system is using");
+ return;
+ }
}
+#endif
/* This cpu-id-to-MPIDR affinity is used only for TCG; KVM will override it.
* We don't support setting cluster ID ([16..23]) (known as Aff2
--
2.34.1
* [PATCH 31/33] linux-user: Bound mmap_min_addr by host page size
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (29 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 30/33] target/arm: Enable TARGET_PAGE_BITS_VARY for AArch64 user-only Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-18 17:12 ` [PATCH 32/33] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only Richard Henderson
2023-08-18 17:12 ` [PATCH 33/33] target/alpha: " Richard Henderson
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Bizarrely, it is possible to set /proc/sys/vm/mmap_min_addr
to a value below the host page size. Fix that.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
linux-user/main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/linux-user/main.c b/linux-user/main.c
index 2334d7cc67..1925c275ed 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -904,7 +904,7 @@ int main(int argc, char **argv, char **envp)
if ((fp = fopen("/proc/sys/vm/mmap_min_addr", "r")) != NULL) {
unsigned long tmp;
if (fscanf(fp, "%lu", &tmp) == 1 && tmp != 0) {
- mmap_min_addr = tmp;
+ mmap_min_addr = MAX(tmp, host_page_size);
qemu_log_mask(CPU_LOG_PAGE, "host mmap_min_addr=0x%lx\n",
mmap_min_addr);
}
--
2.34.1
* [PATCH 32/33] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (30 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 31/33] linux-user: Bound mmap_min_addr by host page size Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
2023-08-21 7:06 ` Philippe Mathieu-Daudé
2023-08-18 17:12 ` [PATCH 33/33] target/alpha: " Richard Henderson
32 siblings, 1 reply; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Since ppc binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/ppc/cpu-param.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/target/ppc/cpu-param.h b/target/ppc/cpu-param.h
index 0a0416e0a8..b7ad52de03 100644
--- a/target/ppc/cpu-param.h
+++ b/target/ppc/cpu-param.h
@@ -31,6 +31,13 @@
# define TARGET_PHYS_ADDR_SPACE_BITS 36
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
-#define TARGET_PAGE_BITS 12
+
+#ifdef CONFIG_USER_ONLY
+/* Allow user-only to vary page size from 4k */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+#else
+# define TARGET_PAGE_BITS 12
+#endif
#endif
--
2.34.1
* [PATCH 33/33] target/alpha: Enable TARGET_PAGE_BITS_VARY for user-only
2023-08-18 17:11 [PATCH 00/33] linux-user: Improve host and guest page size handling Richard Henderson
` (31 preceding siblings ...)
2023-08-18 17:12 ` [PATCH 32/33] target/ppc: Enable TARGET_PAGE_BITS_VARY for user-only Richard Henderson
@ 2023-08-18 17:12 ` Richard Henderson
32 siblings, 0 replies; 44+ messages in thread
From: Richard Henderson @ 2023-08-18 17:12 UTC (permalink / raw)
To: qemu-devel
Since alpha binaries are generally built for multiple
page sizes, it is trivial to allow the page size to vary.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/alpha/cpu-param.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/target/alpha/cpu-param.h b/target/alpha/cpu-param.h
index 68c46f7998..c969cb016b 100644
--- a/target/alpha/cpu-param.h
+++ b/target/alpha/cpu-param.h
@@ -9,10 +9,22 @@
#define ALPHA_CPU_PARAM_H
#define TARGET_LONG_BITS 64
-#define TARGET_PAGE_BITS 13
/* ??? EV4 has 34 phys addr bits, EV5 has 40, EV6 has 44. */
#define TARGET_PHYS_ADDR_SPACE_BITS 44
-#define TARGET_VIRT_ADDR_SPACE_BITS (30 + TARGET_PAGE_BITS)
+
+#ifdef CONFIG_USER_ONLY
+/*
+ * Allow user-only to vary page size. Real hardware allows only 8k and 64k,
+ * but since any variance means guests cannot assume a fixed value, allow
+ * a 4k minimum to match x86 host, which can minimize emulation issues.
+ */
+# define TARGET_PAGE_BITS_VARY
+# define TARGET_PAGE_BITS_MIN 12
+# define TARGET_VIRT_ADDR_SPACE_BITS 63
+#else
+# define TARGET_PAGE_BITS 13
+# define TARGET_VIRT_ADDR_SPACE_BITS (30 + TARGET_PAGE_BITS)
+#endif
#endif
--
2.34.1