* [PULL 0/3] tcg/linux-user patch queue
From: Richard Henderson @ 2025-10-14 17:23 UTC (permalink / raw)
To: qemu-devel
The following changes since commit f3f2ad119347e8c086b72282febcaac5d731b343:
Merge tag 'pull-target-arm-20251010' of https://gitlab.com/pm215/qemu into staging (2025-10-10 08:26:09 -0700)
are available in the Git repository at:
https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20251014
for you to fetch changes up to ec03dd9723781c7e9d4b4f70c7f54d12da9459d5:
accel/tcg: Hoist first page lookup above pointer_wrap (2025-10-14 07:33:21 -0700)
----------------------------------------------------------------
linux-user: Support MADV_DONTDUMP, MADV_DODUMP
accel/tcg: Hoist first page lookup above pointer_wrap
----------------------------------------------------------------
Jon Wilson (1):
linux-user: Support MADV_DONTDUMP, MADV_DODUMP
Richard Henderson (2):
accel/tcg: Add clear_flags argument to page_set_flags
accel/tcg: Hoist first page lookup above pointer_wrap
bsd-user/bsd-mem.h | 7 +--
include/exec/page-protection.h | 21 ++++----
include/user/page-protection.h | 9 +++-
target/arm/cpu.h | 1 -
accel/tcg/cputlb.c | 23 +++++----
accel/tcg/user-exec.c | 114 +++++++++++------------------------------
bsd-user/mmap.c | 6 +--
linux-user/arm/elfload.c | 2 +-
linux-user/elfload.c | 4 +-
linux-user/hppa/elfload.c | 2 +-
linux-user/mmap.c | 38 +++++++++-----
linux-user/x86_64/elfload.c | 2 +-
12 files changed, 98 insertions(+), 131 deletions(-)
* [PULL 1/3] accel/tcg: Add clear_flags argument to page_set_flags
From: Richard Henderson @ 2025-10-14 17:23 UTC (permalink / raw)
To: qemu-devel
Expand the interface of page_set_flags to separate the
set of flags to be set and the set of flags to be cleared.
This allows us to replace PAGE_RESET with the PAGE_VALID
bit within clear_flags.
Replace PAGE_TARGET_STICKY with TARGET_PAGE_NOTSTICKY;
aarch64-linux-user is the only user.
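As a sketch of the two common call patterns after this change (simplified
from the hunks below; "prot" stands for whatever PAGE_READ/WRITE/EXEC
combination the caller computed):

    /* New mapping: install the protection bits and drop everything else.
     * Clearing PAGE_VALID marks the range as a fresh mapping, so all old
     * flags are discarded and page_reset_target_data() runs as well. */
    page_set_flags(start, last, prot | PAGE_VALID, PAGE_VALID);

    /* Protection change on an existing mapping (see target_mprotect in
     * linux-user/mmap.c): only the R/W/X bits, plus any non-sticky target
     * bits, are replaced; PAGE_ANON, PAGE_PASSTHROUGH and the other sticky
     * flags are preserved. */
    page_set_flags(start, last, prot, PAGE_RWX | TARGET_PAGE_NOTSTICKY);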
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
bsd-user/bsd-mem.h | 7 +-
include/exec/page-protection.h | 13 ++--
include/user/page-protection.h | 9 ++-
target/arm/cpu.h | 1 -
accel/tcg/user-exec.c | 114 +++++++++------------------------
bsd-user/mmap.c | 6 +-
linux-user/arm/elfload.c | 2 +-
linux-user/hppa/elfload.c | 2 +-
linux-user/mmap.c | 32 +++++----
linux-user/x86_64/elfload.c | 2 +-
10 files changed, 71 insertions(+), 117 deletions(-)
diff --git a/bsd-user/bsd-mem.h b/bsd-user/bsd-mem.h
index 1be906c591..416d0f8c23 100644
--- a/bsd-user/bsd-mem.h
+++ b/bsd-user/bsd-mem.h
@@ -390,8 +390,9 @@ static inline abi_long do_bsd_shmat(int shmid, abi_ulong shmaddr, int shmflg)
raddr = h2g(host_raddr);
page_set_flags(raddr, raddr + shm_info.shm_segsz - 1,
- PAGE_VALID | PAGE_RESET | PAGE_READ |
- (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
+ PAGE_VALID | PAGE_READ |
+ (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE),
+ PAGE_VALID);
for (int i = 0; i < N_BSD_SHM_REGIONS; i++) {
if (bsd_shm_regions[i].start == 0) {
@@ -428,7 +429,7 @@ static inline abi_long do_bsd_shmdt(abi_ulong shmaddr)
abi_ulong size = bsd_shm_regions[i].size;
bsd_shm_regions[i].start = 0;
- page_set_flags(shmaddr, shmaddr + size - 1, 0);
+ page_set_flags(shmaddr, shmaddr + size - 1, 0, PAGE_VALID);
mmap_reserve(shmaddr, size);
}
}
diff --git a/include/exec/page-protection.h b/include/exec/page-protection.h
index c43231af8b..5a18f98a3a 100644
--- a/include/exec/page-protection.h
+++ b/include/exec/page-protection.h
@@ -23,8 +23,11 @@
* Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs()
*/
#define PAGE_WRITE_INV 0x0020
-/* For use with page_set_flags: page is being replaced; target_data cleared. */
-#define PAGE_RESET 0x0040
+/*
+ * For linux-user, indicates that the page is mapped with the same semantics
+ * in both guest and host.
+ */
+#define PAGE_PASSTHROUGH 0x40
/* For linux-user, indicates that the page is MAP_ANON. */
#define PAGE_ANON 0x0080
@@ -32,10 +35,4 @@
#define PAGE_TARGET_1 0x0200
#define PAGE_TARGET_2 0x0400
-/*
- * For linux-user, indicates that the page is mapped with the same semantics
- * in both guest and host.
- */
-#define PAGE_PASSTHROUGH 0x0800
-
#endif
diff --git a/include/user/page-protection.h b/include/user/page-protection.h
index 4bde664e4a..41b23e72fc 100644
--- a/include/user/page-protection.h
+++ b/include/user/page-protection.h
@@ -23,14 +23,19 @@ int page_get_flags(vaddr address);
* page_set_flags:
* @start: first byte of range
* @last: last byte of range
- * @flags: flags to set
+ * @set_flags: flags to set
+ * @clr_flags: flags to clear
* Context: holding mmap lock
*
* Modify the flags of a page and invalidate the code if necessary.
* The flag PAGE_WRITE_ORG is positioned automatically depending
* on PAGE_WRITE. The mmap_lock should already be held.
+ *
+ * For each page, flags = (flags & ~clr_flags) | set_flags.
+ * If clr_flags includes PAGE_VALID, this indicates a new mapping
+ * and page_reset_target_data will be called as well.
*/
-void page_set_flags(vaddr start, vaddr last, int flags);
+void page_set_flags(vaddr start, vaddr last, int set_flags, int clr_flags);
void page_reset_target_data(vaddr start, vaddr last);
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1d4e13320c..bf221e6f97 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -2642,7 +2642,6 @@ extern const uint64_t pred_esz_masks[5];
*/
#define PAGE_BTI PAGE_TARGET_1
#define PAGE_MTE PAGE_TARGET_2
-#define PAGE_TARGET_STICKY PAGE_MTE
/* We associate one allocation tag per 16 bytes, the minimum. */
#define LOG2_TAG_GRANULE 4
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 916f18754f..1800dffa63 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -269,48 +269,6 @@ static void pageflags_create(vaddr start, vaddr last, int flags)
interval_tree_insert(&p->itree, &pageflags_root);
}
-/* A subroutine of page_set_flags: remove everything in [start,last]. */
-static bool pageflags_unset(vaddr start, vaddr last)
-{
- bool inval_tb = false;
-
- while (true) {
- PageFlagsNode *p = pageflags_find(start, last);
- vaddr p_last;
-
- if (!p) {
- break;
- }
-
- if (p->flags & PAGE_EXEC) {
- inval_tb = true;
- }
-
- interval_tree_remove(&p->itree, &pageflags_root);
- p_last = p->itree.last;
-
- if (p->itree.start < start) {
- /* Truncate the node from the end, or split out the middle. */
- p->itree.last = start - 1;
- interval_tree_insert(&p->itree, &pageflags_root);
- if (last < p_last) {
- pageflags_create(last + 1, p_last, p->flags);
- break;
- }
- } else if (p_last <= last) {
- /* Range completely covers node -- remove it. */
- g_free_rcu(p, rcu);
- } else {
- /* Truncate the node from the start. */
- p->itree.start = last + 1;
- interval_tree_insert(&p->itree, &pageflags_root);
- break;
- }
- }
-
- return inval_tb;
-}
-
/*
* A subroutine of page_set_flags: nothing overlaps [start,last],
* but check adjacent mappings and maybe merge into a single range.
@@ -356,15 +314,6 @@ static void pageflags_create_merge(vaddr start, vaddr last, int flags)
}
}
-/*
- * Allow the target to decide if PAGE_TARGET_[12] may be reset.
- * By default, they are not kept.
- */
-#ifndef PAGE_TARGET_STICKY
-#define PAGE_TARGET_STICKY 0
-#endif
-#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY)
-
/* A subroutine of page_set_flags: add flags to [start,last]. */
static bool pageflags_set_clear(vaddr start, vaddr last,
int set_flags, int clear_flags)
@@ -377,7 +326,7 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
restart:
p = pageflags_find(start, last);
if (!p) {
- if (set_flags) {
+ if (set_flags & PAGE_VALID) {
pageflags_create_merge(start, last, set_flags);
}
goto done;
@@ -391,11 +340,12 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
/*
* Need to flush if an overlapping executable region
- * removes exec, or adds write.
+ * removes exec, adds write, or is a new mapping.
*/
if ((p_flags & PAGE_EXEC)
&& (!(merge_flags & PAGE_EXEC)
- || (merge_flags & ~p_flags & PAGE_WRITE))) {
+ || (merge_flags & ~p_flags & PAGE_WRITE)
+ || (clear_flags & PAGE_VALID))) {
inval_tb = true;
}
@@ -404,7 +354,7 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
* attempting to merge with adjacent regions.
*/
if (start == p_start && last == p_last) {
- if (merge_flags) {
+ if (merge_flags & PAGE_VALID) {
p->flags = merge_flags;
} else {
interval_tree_remove(&p->itree, &pageflags_root);
@@ -424,12 +374,12 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
interval_tree_insert(&p->itree, &pageflags_root);
if (last < p_last) {
- if (merge_flags) {
+ if (merge_flags & PAGE_VALID) {
pageflags_create(start, last, merge_flags);
}
pageflags_create(last + 1, p_last, p_flags);
} else {
- if (merge_flags) {
+ if (merge_flags & PAGE_VALID) {
pageflags_create(start, p_last, merge_flags);
}
if (p_last < last) {
@@ -438,18 +388,18 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
}
}
} else {
- if (start < p_start && set_flags) {
+ if (start < p_start && (set_flags & PAGE_VALID)) {
pageflags_create(start, p_start - 1, set_flags);
}
if (last < p_last) {
interval_tree_remove(&p->itree, &pageflags_root);
p->itree.start = last + 1;
interval_tree_insert(&p->itree, &pageflags_root);
- if (merge_flags) {
+ if (merge_flags & PAGE_VALID) {
pageflags_create(start, last, merge_flags);
}
} else {
- if (merge_flags) {
+ if (merge_flags & PAGE_VALID) {
p->flags = merge_flags;
} else {
interval_tree_remove(&p->itree, &pageflags_root);
@@ -497,7 +447,7 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
g_free_rcu(p, rcu);
goto restart;
}
- if (set_flags) {
+ if (set_flags & PAGE_VALID) {
pageflags_create(start, last, set_flags);
}
@@ -505,42 +455,36 @@ static bool pageflags_set_clear(vaddr start, vaddr last,
return inval_tb;
}
-void page_set_flags(vaddr start, vaddr last, int flags)
+void page_set_flags(vaddr start, vaddr last, int set_flags, int clear_flags)
{
- bool reset = false;
- bool inval_tb = false;
-
- /* This function should never be called with addresses outside the
- guest address space. If this assert fires, it probably indicates
- a missing call to h2g_valid. */
+ /*
+ * This function should never be called with addresses outside the
+ * guest address space. If this assert fires, it probably indicates
+ * a missing call to h2g_valid.
+ */
assert(start <= last);
assert(last <= guest_addr_max);
- /* Only set PAGE_ANON with new mappings. */
- assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
assert_memory_lock();
start &= TARGET_PAGE_MASK;
last |= ~TARGET_PAGE_MASK;
- if (!(flags & PAGE_VALID)) {
- flags = 0;
- } else {
- reset = flags & PAGE_RESET;
- flags &= ~PAGE_RESET;
- if (flags & PAGE_WRITE) {
- flags |= PAGE_WRITE_ORG;
- }
+ if (set_flags & PAGE_WRITE) {
+ set_flags |= PAGE_WRITE_ORG;
+ }
+ if (clear_flags & PAGE_WRITE) {
+ clear_flags |= PAGE_WRITE_ORG;
}
- if (!flags || reset) {
+ if (clear_flags & PAGE_VALID) {
page_reset_target_data(start, last);
- inval_tb |= pageflags_unset(start, last);
+ clear_flags = -1;
+ } else {
+ /* Only set PAGE_ANON with new mappings. */
+ assert(!(set_flags & PAGE_ANON));
}
- if (flags) {
- inval_tb |= pageflags_set_clear(start, last, flags,
- ~(reset ? 0 : PAGE_STICKY));
- }
- if (inval_tb) {
+
+ if (pageflags_set_clear(start, last, set_flags, clear_flags)) {
tb_invalidate_phys_range(NULL, start, last);
}
}
diff --git a/bsd-user/mmap.c b/bsd-user/mmap.c
index 47e317517c..24ba1728eb 100644
--- a/bsd-user/mmap.c
+++ b/bsd-user/mmap.c
@@ -122,7 +122,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int prot)
if (ret != 0)
goto error;
}
- page_set_flags(start, start + len - 1, prot | PAGE_VALID);
+ page_set_flags(start, start + len - 1, prot, PAGE_RWX);
mmap_unlock();
return 0;
error:
@@ -652,7 +652,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
}
}
the_end1:
- page_set_flags(start, start + len - 1, prot | PAGE_VALID);
+ page_set_flags(start, start + len - 1, prot | PAGE_VALID, PAGE_VALID);
the_end:
#ifdef DEBUG_MMAP
printf("ret=0x" TARGET_ABI_FMT_lx "\n", start);
@@ -763,7 +763,7 @@ int target_munmap(abi_ulong start, abi_ulong len)
}
if (ret == 0) {
- page_set_flags(start, start + len - 1, 0);
+ page_set_flags(start, start + len - 1, 0, PAGE_VALID);
}
mmap_unlock();
return ret;
diff --git a/linux-user/arm/elfload.c b/linux-user/arm/elfload.c
index b1a4db4466..fef61022a3 100644
--- a/linux-user/arm/elfload.c
+++ b/linux-user/arm/elfload.c
@@ -243,7 +243,7 @@ bool init_guest_commpage(void)
}
page_set_flags(commpage, commpage | (host_page_size - 1),
- PAGE_READ | PAGE_EXEC | PAGE_VALID);
+ PAGE_READ | PAGE_EXEC | PAGE_VALID, PAGE_VALID);
return true;
}
diff --git a/linux-user/hppa/elfload.c b/linux-user/hppa/elfload.c
index 018034f244..4600708702 100644
--- a/linux-user/hppa/elfload.c
+++ b/linux-user/hppa/elfload.c
@@ -42,6 +42,6 @@ bool init_guest_commpage(void)
* Special case the entry points during translation (see do_page_zero).
*/
page_set_flags(LO_COMMPAGE, LO_COMMPAGE | ~TARGET_PAGE_MASK,
- PAGE_EXEC | PAGE_VALID);
+ PAGE_EXEC | PAGE_VALID, PAGE_VALID);
return true;
}
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 847092a28a..527ca5f211 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -165,6 +165,13 @@ static int target_to_host_prot(int prot)
(prot & PROT_EXEC ? PROT_READ : 0);
}
+/* Target bits to be cleared by mprotect if not present in target_prot. */
+#ifdef TARGET_AARCH64
+#define TARGET_PAGE_NOTSTICKY PAGE_BTI
+#else
+#define TARGET_PAGE_NOTSTICKY 0
+#endif
+
/* NOTE: all the constants are the HOST ones, but addresses are target. */
int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
{
@@ -262,7 +269,7 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
}
}
- page_set_flags(start, last, page_flags);
+ page_set_flags(start, last, page_flags, PAGE_RWX | TARGET_PAGE_NOTSTICKY);
ret = 0;
error:
@@ -561,17 +568,17 @@ static abi_long mmap_end(abi_ulong start, abi_ulong last,
if (flags & MAP_ANONYMOUS) {
page_flags |= PAGE_ANON;
}
- page_flags |= PAGE_RESET;
if (passthrough_start > passthrough_last) {
- page_set_flags(start, last, page_flags);
+ page_set_flags(start, last, page_flags, PAGE_VALID);
} else {
if (start < passthrough_start) {
- page_set_flags(start, passthrough_start - 1, page_flags);
+ page_set_flags(start, passthrough_start - 1,
+ page_flags, PAGE_VALID);
}
page_set_flags(passthrough_start, passthrough_last,
- page_flags | PAGE_PASSTHROUGH);
+ page_flags | PAGE_PASSTHROUGH, PAGE_VALID);
if (passthrough_last < last) {
- page_set_flags(passthrough_last + 1, last, page_flags);
+ page_set_flags(passthrough_last + 1, last, page_flags, PAGE_VALID);
}
}
shm_region_rm_complete(start, last);
@@ -1088,7 +1095,7 @@ int target_munmap(abi_ulong start, abi_ulong len)
mmap_lock();
ret = mmap_reserve_or_unmap(start, len);
if (likely(ret == 0)) {
- page_set_flags(start, start + len - 1, 0);
+ page_set_flags(start, start + len - 1, 0, PAGE_VALID);
shm_region_rm_complete(start, start + len - 1);
}
mmap_unlock();
@@ -1179,10 +1186,10 @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
} else {
new_addr = h2g(host_addr);
prot = page_get_flags(old_addr);
- page_set_flags(old_addr, old_addr + old_size - 1, 0);
+ page_set_flags(old_addr, old_addr + old_size - 1, 0, PAGE_VALID);
shm_region_rm_complete(old_addr, old_addr + old_size - 1);
page_set_flags(new_addr, new_addr + new_size - 1,
- prot | PAGE_VALID | PAGE_RESET);
+ prot | PAGE_VALID, PAGE_VALID);
shm_region_rm_complete(new_addr, new_addr + new_size - 1);
}
mmap_unlock();
@@ -1428,9 +1435,10 @@ abi_ulong target_shmat(CPUArchState *cpu_env, int shmid,
last = shmaddr + m_len - 1;
page_set_flags(shmaddr, last,
- PAGE_VALID | PAGE_RESET | PAGE_READ |
+ PAGE_VALID | PAGE_READ |
(shmflg & SHM_RDONLY ? 0 : PAGE_WRITE) |
- (shmflg & SHM_EXEC ? PAGE_EXEC : 0));
+ (shmflg & SHM_EXEC ? PAGE_EXEC : 0),
+ PAGE_VALID);
shm_region_rm_complete(shmaddr, last);
shm_region_add(shmaddr, last);
@@ -1471,7 +1479,7 @@ abi_long target_shmdt(abi_ulong shmaddr)
if (rv == 0) {
abi_ulong size = last - shmaddr + 1;
- page_set_flags(shmaddr, last, 0);
+ page_set_flags(shmaddr, last, 0, PAGE_VALID);
shm_region_rm_complete(shmaddr, last);
mmap_reserve_or_unmap(shmaddr, size);
}
diff --git a/linux-user/x86_64/elfload.c b/linux-user/x86_64/elfload.c
index 1e7000c6bc..5914f76e83 100644
--- a/linux-user/x86_64/elfload.c
+++ b/linux-user/x86_64/elfload.c
@@ -37,7 +37,7 @@ bool init_guest_commpage(void)
}
page_set_flags(TARGET_VSYSCALL_PAGE,
TARGET_VSYSCALL_PAGE | ~TARGET_PAGE_MASK,
- PAGE_EXEC | PAGE_VALID);
+ PAGE_EXEC | PAGE_VALID, PAGE_VALID);
return true;
}
--
2.43.0
* [PULL 2/3] linux-user: Support MADV_DONTDUMP, MADV_DODUMP
From: Richard Henderson @ 2025-10-14 17:23 UTC (permalink / raw)
To: qemu-devel; +Cc: Jon Wilson
From: Jon Wilson <jonwilson030981@gmail.com>
Set and clear PAGE_DONTDUMP, and honor that in vma_dump_size.
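For reference, a minimal guest-side exercise of the new behaviour (an
illustrative sketch, not part of the patch; assumes a Linux guest where
MADV_DONTDUMP is defined):

    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;
        /* Anonymous scratch buffer that should not land in the core dump. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED || madvise(buf, len, MADV_DONTDUMP) != 0) {
            return 1;
        }
        /* madvise(buf, len, MADV_DODUMP) would re-include it later. */
        abort();   /* force a core dump; buf's contents should be absent */
    }

With this patch, vma_dump_size sees PAGE_DONTDUMP on the marked range and
excludes it from the guest core file.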
Signed-off-by: Jon Wilson <jonwilson030981@gmail.com>
[rth: Use new page_set_flags semantics; also handle DODUMP]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/page-protection.h | 6 +++++-
linux-user/elfload.c | 4 ++--
linux-user/mmap.c | 6 ++++++
3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/include/exec/page-protection.h b/include/exec/page-protection.h
index 5a18f98a3a..c50ce57d15 100644
--- a/include/exec/page-protection.h
+++ b/include/exec/page-protection.h
@@ -30,7 +30,11 @@
#define PAGE_PASSTHROUGH 0x40
/* For linux-user, indicates that the page is MAP_ANON. */
#define PAGE_ANON 0x0080
-
+/*
+ * For linux-user, indicates that the page should not be
+ * included in a core dump.
+ */
+#define PAGE_DONTDUMP 0x0100
/* Target-specific bits that will be used via page_get_flags(). */
#define PAGE_TARGET_1 0x0200
#define PAGE_TARGET_2 0x0400
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 1370ec59be..0002d5be2f 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2127,8 +2127,8 @@ static void bswap_note(struct elf_note *en)
*/
static size_t vma_dump_size(vaddr start, vaddr end, int flags)
{
- /* The area must be readable. */
- if (!(flags & PAGE_READ)) {
+ /* The area must be readable and dumpable. */
+ if (!(flags & PAGE_READ) || (flags & PAGE_DONTDUMP)) {
return 0;
}
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 527ca5f211..423c77856a 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -1248,6 +1248,12 @@ abi_long target_madvise(abi_ulong start, abi_ulong len_in, int advice)
*/
mmap_lock();
switch (advice) {
+ case MADV_DONTDUMP:
+ page_set_flags(start, start + len - 1, PAGE_DONTDUMP, 0);
+ break;
+ case MADV_DODUMP:
+ page_set_flags(start, start + len - 1, 0, PAGE_DONTDUMP);
+ break;
case MADV_WIPEONFORK:
case MADV_KEEPONFORK:
ret = -EINVAL;
--
2.43.0
* [PULL 3/3] accel/tcg: Hoist first page lookup above pointer_wrap
From: Richard Henderson @ 2025-10-14 17:23 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-stable, Philippe Mathieu-Daudé
For strict alignment targets we registered cpu_pointer_wrap_notreached,
but generic code used it before recognizing the alignment exception.
Hoist the first page lookup, so that the alignment exception happens first.
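A rough guest-level reproducer (a sketch, assuming a strict-alignment guest
such as hppa or sparc64, a 4KiB guest page size, and that the compiler emits
a single word load): an access that is both misaligned and page-crossing must
raise the guest alignment fault rather than reach the not-reached
pointer_wrap hook.

    #include <stdint.h>
    #include <sys/mman.h>

    int main(void)
    {
        long page = 4096;    /* assumed guest page size */
        char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            return 1;
        }
        /* Misaligned 4-byte load straddling the page boundary; the
         * expected outcome on a strict-alignment target is SIGBUS. */
        volatile uint32_t *q = (volatile uint32_t *)(p + page - 2);
        return (int)*q;
    }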
Cc: qemu-stable@nongnu.org
Buglink: https://bugs.debian.org/1112285
Fixes: a4027ed7d4be ("target: Use cpu_pointer_wrap_notreached for strict align targets")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
accel/tcg/cputlb.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 3010dd4f5d..631f1fe135 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1742,6 +1742,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
{
bool crosspage;
+ vaddr last;
int flags;
l->memop = get_memop(oi);
@@ -1751,13 +1752,15 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
l->page[0].addr = addr;
l->page[0].size = memop_size(l->memop);
- l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK;
+ l->page[1].addr = 0;
l->page[1].size = 0;
- crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK;
+ /* Lookup and recognize exceptions from the first page. */
+ mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
+
+ last = addr + l->page[0].size - 1;
+ crosspage = (addr ^ last) & TARGET_PAGE_MASK;
if (likely(!crosspage)) {
- mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
-
flags = l->page[0].flags;
if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) {
mmu_watch_or_dirty(cpu, &l->page[0], type, ra);
@@ -1767,18 +1770,18 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
}
} else {
/* Finish compute of page crossing. */
- int size0 = l->page[1].addr - addr;
+ vaddr addr1 = last & TARGET_PAGE_MASK;
+ int size0 = addr1 - addr;
l->page[1].size = l->page[0].size - size0;
l->page[0].size = size0;
-
l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
- l->page[1].addr, addr);
+ addr1, addr);
/*
- * Lookup both pages, recognizing exceptions from either. If the
- * second lookup potentially resized, refresh first CPUTLBEntryFull.
+ * Lookup and recognize exceptions from the second page.
+ * If the lookup potentially resized the table, refresh the
+ * first CPUTLBEntryFull pointer.
*/
- mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
--
2.43.0
* Re: [PULL 1/3] accel/tcg: Add clear_flags argument to page_set_flags
From: Philippe Mathieu-Daudé @ 2025-10-14 19:37 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 14/10/25 19:23, Richard Henderson wrote:
> Expand the interface of page_set_flags to separate the
> set of flags to be set and the set of flags to be cleared.
>
> This allows us to replace PAGE_RESET with the PAGE_VALID
> bit within clear_flags.
>
> Replace PAGE_TARGET_STICKY with TARGET_PAGE_NOTSTICKY;
> aarch64-linux-user is the only user.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> bsd-user/bsd-mem.h | 7 +-
> include/exec/page-protection.h | 13 ++--
> include/user/page-protection.h | 9 ++-
> target/arm/cpu.h | 1 -
> accel/tcg/user-exec.c | 114 +++++++++------------------------
> bsd-user/mmap.c | 6 +-
> linux-user/arm/elfload.c | 2 +-
> linux-user/hppa/elfload.c | 2 +-
> linux-user/mmap.c | 32 +++++----
> linux-user/x86_64/elfload.c | 2 +-
> 10 files changed, 71 insertions(+), 117 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
https://lore.kernel.org/qemu-devel/1bf93222-c3be-4a11-9a5b-71029595d74b@linaro.org/
* Re: [PULL 0/3] tcg/linux-user patch queue
From: Richard Henderson @ 2025-10-14 19:46 UTC (permalink / raw)
To: qemu-devel
On 10/14/25 10:23, Richard Henderson wrote:
> The following changes since commit f3f2ad119347e8c086b72282febcaac5d731b343:
>
> Merge tag 'pull-target-arm-20251010' of https://gitlab.com/pm215/qemu into staging (2025-10-10 08:26:09 -0700)
>
> are available in the Git repository at:
>
> https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20251014
>
> for you to fetch changes up to ec03dd9723781c7e9d4b4f70c7f54d12da9459d5:
>
> accel/tcg: Hoist first page lookup above pointer_wrap (2025-10-14 07:33:21 -0700)
>
> ----------------------------------------------------------------
> linux-user: Support MADV_DONTDUMP, MADV_DODUMP
> accel/tcg: Hoist first page lookup above pointer_wrap
Applied, thanks. Please update https://wiki.qemu.org/ChangeLog/10.2 as appropriate.
r~