* [PATCH v2 0/5] linux-user: Rewrite target_shmat
@ 2024-02-28 20:25 Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4} Richard Henderson
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

There are multiple issues with the implementation of shmat().

(1) With reserved_va, which is the default for 32-on-64-bit, we mmap the
    entire guest address space.  Unlike mmap, shmat refuses to replace an
    existing mapping without setting SHM_REMAP.  This is the original
    subject of issue #115, though it quickly gets distracted by
    something else.  (A host-side sketch of this failure follows below.)

(2) With target page size > host page size, and a shm area
    that is not a multiple of the target page size, we leave
    an unmapped hole that the target expects to be mapped.
    This is the subject of 

	https://lore.kernel.org/qemu-devel/2no4imvz2zrar5kchz2l3oddqbgpj77jgwcuf7aritkn2ok763@i2mvpcihztho/

    wherein qemu itself expects a mapping to exist, and
    dies in open_self_maps_2.

So: reimplement the thing.
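
For reference, a minimal host-side sketch of the failure in (1), not part
of this series: plain shmat() at an address that is already mapped fails
with EINVAL, and only succeeds once SHM_REMAP is passed, which is why the
pre-mapped reserved_va space defeated the old implementation.  The 16-page
PROT_NONE reservation below merely stands in for the reserved_va mapping,
and SHMLBA is assumed to equal the host page size (true on x86 hosts).

    #define _GNU_SOURCE
    #include <assert.h>
    #include <errno.h>
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/mman.h>
    #include <sys/shm.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        int shmid = shmget(IPC_PRIVATE, page, IPC_CREAT | 0600);
        void *want, *p;

        assert(shmid != -1);

        /* Stand-in for the reserved_va pre-mapping of the guest space. */
        want = mmap(NULL, 16 * page, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(want != MAP_FAILED);

        p = shmat(shmid, want, 0);
        printf("shmat without SHM_REMAP: %p (errno=%d)\n",
               p, p == (void *)-1 ? errno : 0);      /* expect EINVAL */

        p = shmat(shmid, want, SHM_REMAP);
        printf("shmat with SHM_REMAP:    %p\n", p);  /* expect == want */

        if (p != (void *)-1) {
            shmdt(p);
        }
        munmap(want, 16 * page);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }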

Changes for v2:
  - Include Ilya's test case, which caught extra errors: Yay!
  - Include x86_64 /proc/self/maps fix, which the test triggers.
  - Dropped r-b for the shmat rewrite due to number of changes.


r~


Based-on: <20240222204323.268539-1-richard.henderson@linaro.org>
("[PULL 00/39] tcg and linux-user patch queue")
(Which is technically now out of date, waiting on the coredump
rewrite to resolve the -Wvla -Werror failures.)




Ilya Leoshkevich (1):
  tests/tcg: Check that shmat() does not break /proc/self/maps

Richard Henderson (4):
  linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2,4}
  linux-user/loongarch64: Remove TARGET_FORCE_SHMLBA
  linux-user: Add strace for shmat
  linux-user: Rewrite target_shmat

 linux-user/loongarch64/target_syscall.h      |   7 -
 linux-user/mmap.c                            | 172 +++++++++++++++----
 linux-user/strace.c                          |  23 +++
 linux-user/strace.list                       |   2 +-
 linux-user/syscall.c                         |  16 ++
 tests/tcg/multiarch/linux/linux-shmat-maps.c |  55 ++++++
 6 files changed, 231 insertions(+), 44 deletions(-)
 create mode 100644 tests/tcg/multiarch/linux/linux-shmat-maps.c

-- 
2.34.1




* [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4}
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
@ 2024-02-28 20:25 ` Richard Henderson
  2024-02-29  9:48   ` Philippe Mathieu-Daudé
  2024-02-28 20:25 ` [PATCH v2 2/5] linux-user/loongarch64: Remove TARGET_FORCE_SHMLBA Richard Henderson
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

This is the only case in which we expect to have no host memory backing
for a guest memory page, because in general linux user processes cannot
map any pages in the top half of the 64-bit address space.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2170
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/syscall.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index e384e14248..bc8c06522f 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -7994,6 +7994,10 @@ static void open_self_maps_4(const struct open_self_maps_data *d,
         path = "[heap]";
     } else if (start == info->vdso) {
         path = "[vdso]";
+#ifdef TARGET_X86_64
+    } else if (start == TARGET_VSYSCALL_PAGE) {
+        path = "[vsyscall]";
+#endif
     }
 
     /* Except null device (MAP_ANON), adjust offset for this fragment. */
@@ -8082,6 +8086,18 @@ static int open_self_maps_2(void *opaque, target_ulong guest_start,
     uintptr_t host_start = (uintptr_t)g2h_untagged(guest_start);
     uintptr_t host_last = (uintptr_t)g2h_untagged(guest_end - 1);
 
+#ifdef TARGET_X86_64
+    /*
+     * Because of the extremely high position of the page within the guest
+     * virtual address space, this is not backed by host memory at all.
+     * Therefore the loop below would fail.  This is the only instance
+     * of not having host backing memory.
+     */
+    if (guest_start == TARGET_VSYSCALL_PAGE) {
+        return open_self_maps_3(opaque, guest_start, guest_end, flags);
+    }
+#endif
+
     while (1) {
         IntervalTreeNode *n =
             interval_tree_iter_first(d->host_maps, host_start, host_start);
-- 
2.34.1




* [PATCH v2 2/5] linux-user/loongarch64: Remove TARGET_FORCE_SHMLBA
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4} Richard Henderson
@ 2024-02-28 20:25 ` Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 3/5] linux-user: Add strace for shmat Richard Henderson
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii, Song Gao

The kernel abi was changed with

    commit d23b77953f5a4fbf94c05157b186aac2a247ae32
    Author: Huacai Chen <chenhuacai@kernel.org>
    Date:   Wed Jan 17 12:43:08 2024 +0800

        LoongArch: Change SHMLBA from SZ_64K to PAGE_SIZE

during the v6.8 cycle.

Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/loongarch64/target_syscall.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/linux-user/loongarch64/target_syscall.h b/linux-user/loongarch64/target_syscall.h
index 8b5de52124..39f229bb9c 100644
--- a/linux-user/loongarch64/target_syscall.h
+++ b/linux-user/loongarch64/target_syscall.h
@@ -38,11 +38,4 @@ struct target_pt_regs {
 #define TARGET_MCL_FUTURE  2
 #define TARGET_MCL_ONFAULT 4
 
-#define TARGET_FORCE_SHMLBA
-
-static inline abi_ulong target_shmlba(CPULoongArchState *env)
-{
-    return 64 * KiB;
-}
-
 #endif
-- 
2.34.1




* [PATCH v2 3/5] linux-user: Add strace for shmat
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4} Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 2/5] linux-user/loongarch64: Remove TARGET_FORCE_SHMLBA Richard Henderson
@ 2024-02-28 20:25 ` Richard Henderson
  2024-03-01 13:34   ` Philippe Mathieu-Daudé
  2024-02-28 20:25 ` [PATCH v2 4/5] linux-user: Rewrite target_shmat Richard Henderson
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/strace.c    | 23 +++++++++++++++++++++++
 linux-user/strace.list |  2 +-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/linux-user/strace.c b/linux-user/strace.c
index cf26e55264..47d6ec3263 100644
--- a/linux-user/strace.c
+++ b/linux-user/strace.c
@@ -670,6 +670,25 @@ print_semctl(CPUArchState *cpu_env, const struct syscallname *name,
 }
 #endif
 
+static void
+print_shmat(CPUArchState *cpu_env, const struct syscallname *name,
+            abi_long arg0, abi_long arg1, abi_long arg2,
+            abi_long arg3, abi_long arg4, abi_long arg5)
+{
+    static const struct flags shmat_flags[] = {
+        FLAG_GENERIC(SHM_RND),
+        FLAG_GENERIC(SHM_REMAP),
+        FLAG_GENERIC(SHM_RDONLY),
+        FLAG_GENERIC(SHM_EXEC),
+    };
+
+    print_syscall_prologue(name);
+    print_raw_param(TARGET_ABI_FMT_ld, arg0, 0);
+    print_pointer(arg1, 0);
+    print_flags(shmat_flags, arg2, 1);
+    print_syscall_epilogue(name);
+}
+
 #ifdef TARGET_NR_ipc
 static void
 print_ipc(CPUArchState *cpu_env, const struct syscallname *name,
@@ -683,6 +702,10 @@ print_ipc(CPUArchState *cpu_env, const struct syscallname *name,
         print_ipc_cmd(arg3);
         qemu_log(",0x" TARGET_ABI_FMT_lx ")", arg4);
         break;
+    case IPCOP_shmat:
+        print_shmat(cpu_env, &(const struct syscallname){ .name = "shmat" },
+                    arg1, arg4, arg2, 0, 0, 0);
+        break;
     default:
         qemu_log(("%s("
                   TARGET_ABI_FMT_ld ","
diff --git a/linux-user/strace.list b/linux-user/strace.list
index 6655d4f26d..dfd4237d14 100644
--- a/linux-user/strace.list
+++ b/linux-user/strace.list
@@ -1398,7 +1398,7 @@
 { TARGET_NR_sgetmask, "sgetmask" , NULL, NULL, NULL },
 #endif
 #ifdef TARGET_NR_shmat
-{ TARGET_NR_shmat, "shmat" , NULL, NULL, print_syscall_ret_addr },
+{ TARGET_NR_shmat, "shmat" , NULL, print_shmat, print_syscall_ret_addr },
 #endif
 #ifdef TARGET_NR_shmctl
 { TARGET_NR_shmctl, "shmctl" , NULL, NULL, NULL },
-- 
2.34.1




* [PATCH v2 4/5] linux-user: Rewrite target_shmat
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
                   ` (2 preceding siblings ...)
  2024-02-28 20:25 ` [PATCH v2 3/5] linux-user: Add strace for shmat Richard Henderson
@ 2024-02-28 20:25 ` Richard Henderson
  2024-02-28 20:25 ` [PATCH v2 5/5] tests/tcg: Check that shmat() does not break /proc/self/maps Richard Henderson
  2024-02-29 12:27 ` [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Purdie
  5 siblings, 0 replies; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

Handle combined host and guest alignment requirements.
Handle host and guest page size differences.
Handle SHM_EXEC.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/115
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/mmap.c | 172 ++++++++++++++++++++++++++++++++++++----------
 1 file changed, 136 insertions(+), 36 deletions(-)

diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 82f4026283..4505fd7376 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -1236,69 +1236,161 @@ static inline abi_ulong target_shmlba(CPUArchState *cpu_env)
 }
 #endif
 
+#if defined(__arm__) || defined(__mips__) || defined(__sparc__)
+#define HOST_FORCE_SHMLBA 1
+#else
+#define HOST_FORCE_SHMLBA 0
+#endif
+
 abi_ulong target_shmat(CPUArchState *cpu_env, int shmid,
                        abi_ulong shmaddr, int shmflg)
 {
     CPUState *cpu = env_cpu(cpu_env);
-    abi_ulong raddr;
     struct shmid_ds shm_info;
     int ret;
-    abi_ulong shmlba;
+    int h_pagesize;
+    int t_shmlba, h_shmlba, m_shmlba;
+    size_t t_len, h_len, m_len;
 
     /* shmat pointers are always untagged */
 
-    /* find out the length of the shared memory segment */
+    /*
+     * Because we can't use host shmat() unless the address is sufficiently
+     * aligned for the host, we'll need to check both.
+     * TODO: Could be fixed with softmmu.
+     */
+    t_shmlba = target_shmlba(cpu_env);
+    h_pagesize = qemu_real_host_page_size();
+    h_shmlba = (HOST_FORCE_SHMLBA ? SHMLBA : h_pagesize);
+    m_shmlba = MAX(t_shmlba, h_shmlba);
+
+    if (shmaddr) {
+        if (shmaddr & (m_shmlba - 1)) {
+            if (shmflg & SHM_RND) {
+                /*
+                 * The guest is allowing the kernel to round the address.
+                 * Assume that the guest is ok with us rounding to the
+                 * host required alignment too.  Anyway if we don't, we'll
+                 * get an error from the kernel.
+                 */
+                shmaddr &= ~(m_shmlba - 1);
+                if (shmaddr == 0 && (shmflg & SHM_REMAP)) {
+                    return -TARGET_EINVAL;
+                }
+            } else {
+                int require = TARGET_PAGE_SIZE;
+#ifdef TARGET_FORCE_SHMLBA
+                require = t_shmlba;
+#endif
+                /*
+                 * Include host required alignment, as otherwise we cannot
+                 * use host shmat at all.
+                 */
+                require = MAX(require, h_shmlba);
+                if (shmaddr & (require - 1)) {
+                    return -TARGET_EINVAL;
+                }
+            }
+        }
+    } else {
+        if (shmflg & SHM_REMAP) {
+            return -TARGET_EINVAL;
+        }
+    }
+    /* All rounding now manually concluded. */
+    shmflg &= ~SHM_RND;
+
+    /* Find out the length of the shared memory segment. */
     ret = get_errno(shmctl(shmid, IPC_STAT, &shm_info));
     if (is_error(ret)) {
         /* can't get length, bail out */
         return ret;
     }
+    t_len = TARGET_PAGE_ALIGN(shm_info.shm_segsz);
+    h_len = ROUND_UP(shm_info.shm_segsz, h_pagesize);
+    m_len = MAX(t_len, h_len);
 
-    shmlba = target_shmlba(cpu_env);
-
-    if (shmaddr & (shmlba - 1)) {
-        if (shmflg & SHM_RND) {
-            shmaddr &= ~(shmlba - 1);
-        } else {
-            return -TARGET_EINVAL;
-        }
-    }
-    if (!guest_range_valid_untagged(shmaddr, shm_info.shm_segsz)) {
+    if (!guest_range_valid_untagged(shmaddr, m_len)) {
         return -TARGET_EINVAL;
     }
 
     WITH_MMAP_LOCK_GUARD() {
-        void *host_raddr;
+        bool mapped = false;
+        void *want, *test;
         abi_ulong last;
 
-        if (shmaddr) {
-            host_raddr = shmat(shmid, (void *)g2h_untagged(shmaddr), shmflg);
-        } else {
-            abi_ulong mmap_start;
-
-            /* In order to use the host shmat, we need to honor host SHMLBA.  */
-            mmap_start = mmap_find_vma(0, shm_info.shm_segsz,
-                                       MAX(SHMLBA, shmlba));
-
-            if (mmap_start == -1) {
+        if (!shmaddr) {
+            shmaddr = mmap_find_vma(0, m_len, m_shmlba);
+            if (shmaddr == -1) {
                 return -TARGET_ENOMEM;
             }
-            host_raddr = shmat(shmid, g2h_untagged(mmap_start),
-                               shmflg | SHM_REMAP);
+            mapped = !reserved_va;
+        } else if (shmflg & SHM_REMAP) {
+            /*
+             * If host page size > target page size, the host shmat may map
+             * more memory than the guest expects.  Reject a mapping that
+             * would replace memory in the unexpected gap.
+             * TODO: Could be fixed with softmmu.
+             */
+            if (t_len < h_len &&
+                !page_check_range_empty(shmaddr + t_len,
+                                        shmaddr + h_len - 1)) {
+                return -TARGET_EINVAL;
+            }
+        } else {
+            if (!page_check_range_empty(shmaddr, shmaddr + m_len - 1)) {
+                return -TARGET_EINVAL;
+            }
         }
 
-        if (host_raddr == (void *)-1) {
-            return get_errno(-1);
-        }
-        raddr = h2g(host_raddr);
-        last = raddr + shm_info.shm_segsz - 1;
+        /* All placement is now complete. */
+        want = (void *)g2h_untagged(shmaddr);
 
-        page_set_flags(raddr, last,
+        /*
+         * Map anonymous pages across the entire range, then remap with
+         * the shared memory.  This is required for a number of corner
+         * cases for which host and guest page sizes differ.
+         */
+        if (h_len != t_len) {
+            int mmap_p = PROT_READ | (shmflg & SHM_RDONLY ? 0 : PROT_WRITE);
+            int mmap_f = MAP_PRIVATE | MAP_ANONYMOUS
+                       | (reserved_va || (shmflg & SHM_REMAP)
+                          ? MAP_FIXED : MAP_FIXED_NOREPLACE);
+
+            test = mmap(want, m_len, mmap_p, mmap_f, -1, 0);
+            if (unlikely(test != want)) {
+                /* shmat returns EINVAL not EEXIST like mmap. */
+                ret = (test == MAP_FAILED && errno != EEXIST
+                       ? get_errno(-1) : -TARGET_EINVAL);
+                if (mapped) {
+                    do_munmap(want, m_len);
+                }
+                return ret;
+            }
+            mapped = true;
+        }
+
+        if (reserved_va || mapped) {
+            shmflg |= SHM_REMAP;
+        }
+        test = shmat(shmid, want, shmflg);
+        if (test == MAP_FAILED) {
+            ret = get_errno(-1);
+            if (mapped) {
+                do_munmap(want, m_len);
+            }
+            return ret;
+        }
+        assert(test == want);
+
+        last = shmaddr + m_len - 1;
+        page_set_flags(shmaddr, last,
                        PAGE_VALID | PAGE_RESET | PAGE_READ |
-                       (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
+                       (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE) |
+                       (shmflg & SHM_EXEC ? PAGE_EXEC : 0));
 
-        shm_region_rm_complete(raddr, last);
-        shm_region_add(raddr, last);
+        shm_region_rm_complete(shmaddr, last);
+        shm_region_add(shmaddr, last);
     }
 
     /*
@@ -1312,7 +1404,15 @@ abi_ulong target_shmat(CPUArchState *cpu_env, int shmid,
         tb_flush(cpu);
     }
 
-    return raddr;
+    if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
+        FILE *f = qemu_log_trylock();
+        if (f) {
+            fprintf(f, "page layout changed following shmat\n");
+            page_dump(f);
+            qemu_log_unlock(f);
+        }
+    }
+    return shmaddr;
 }
 
 abi_long target_shmdt(abi_ulong shmaddr)
-- 
2.34.1




* [PATCH v2 5/5] tests/tcg: Check that shmat() does not break /proc/self/maps
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
                   ` (3 preceding siblings ...)
  2024-02-28 20:25 ` [PATCH v2 4/5] linux-user: Rewrite target_shmat Richard Henderson
@ 2024-02-28 20:25 ` Richard Henderson
  2024-02-29 12:27 ` [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Purdie
  5 siblings, 0 replies; 9+ messages in thread
From: Richard Henderson @ 2024-02-28 20:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

From: Ilya Leoshkevich <iii@linux.ibm.com>

Add a regression test for a recently fixed issue, where shmat()
desynced the guest and the host view of the address space and caused
open("/proc/self/maps") to SEGV.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <jwyuvao4apydvykmsnvacwshdgy3ixv7qvkh4dbxm3jkwgnttw@k4wpaayou7oq>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tests/tcg/multiarch/linux/linux-shmat-maps.c | 55 ++++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 tests/tcg/multiarch/linux/linux-shmat-maps.c

diff --git a/tests/tcg/multiarch/linux/linux-shmat-maps.c b/tests/tcg/multiarch/linux/linux-shmat-maps.c
new file mode 100644
index 0000000000..0ccf7a973a
--- /dev/null
+++ b/tests/tcg/multiarch/linux/linux-shmat-maps.c
@@ -0,0 +1,55 @@
+/*
+ * Test that shmat() does not break /proc/self/maps.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include <assert.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+#include <unistd.h>
+
+int main(void)
+{
+    char buf[128];
+    int err, fd;
+    int shmid;
+    ssize_t n;
+    void *p;
+
+    shmid = shmget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
+    assert(shmid != -1);
+
+    /*
+     * The original bug required a non-NULL address, which skipped the
+     * mmap_find_vma step, which could result in a host mapping smaller
+     * than the target mapping.  Choose an address at random.
+     */
+    p = shmat(shmid, (void *)0x800000, SHM_RND);
+    if (p == (void *)-1) {
+        /*
+         * Because we are now running the testcase for all guests for which
+         * we have a cross-compiler, the above random address might conflict
+         * with the guest executable in some way.  Rather than stopping,
+         * continue with a system supplied address, which should never fail.
+         */
+        p = shmat(shmid, NULL, 0);
+        assert(p != (void *)-1);
+    }
+
+    fd = open("/proc/self/maps", O_RDONLY);
+    assert(fd != -1);
+    do {
+        n = read(fd, buf, sizeof(buf));
+        assert(n >= 0);
+    } while (n != 0);
+    close(fd);
+
+    err = shmdt(p);
+    assert(err == 0);
+    err = shmctl(shmid, IPC_RMID, NULL);
+    assert(err == 0);
+
+    return EXIT_SUCCESS;
+}
-- 
2.34.1




* Re: [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4}
  2024-02-28 20:25 ` [PATCH v2 1/5] linux-user/x86_64: Handle the vsyscall page in open_self_maps_{2, 4} Richard Henderson
@ 2024-02-29  9:48   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 9+ messages in thread
From: Philippe Mathieu-Daudé @ 2024-02-29  9:48 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

On 28/2/24 21:25, Richard Henderson wrote:
> This is the only case in which we expect to have no host memory backing
> for a guest memory page, because in general linux user processes cannot
> map any pages in the top half of the 64-bit address space.
> 
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2170
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   linux-user/syscall.c | 16 ++++++++++++++++
>   1 file changed, 16 insertions(+)
> 
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index e384e14248..bc8c06522f 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -7994,6 +7994,10 @@ static void open_self_maps_4(const struct open_self_maps_data *d,
>           path = "[heap]";
>       } else if (start == info->vdso) {
>           path = "[vdso]";
> +#ifdef TARGET_X86_64

Alternatively "#ifdef TARGET_VSYSCALL_PAGE" like in
i386_tr_translate_insn()?

> +    } else if (start == TARGET_VSYSCALL_PAGE) {
> +        path = "[vsyscall]";
> +#endif
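
For clarity, the alternative guard being suggested would look like this
(a sketch, assuming TARGET_VSYSCALL_PAGE is defined only for 64-bit x86
targets, as its existing use in i386_tr_translate_insn implies):

    #ifdef TARGET_VSYSCALL_PAGE
        } else if (start == TARGET_VSYSCALL_PAGE) {
            path = "[vsyscall]";
    #endif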

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH v2 0/5] linux-user: Rewrite target_shmat
  2024-02-28 20:25 [PATCH v2 0/5] linux-user: Rewrite target_shmat Richard Henderson
                   ` (4 preceding siblings ...)
  2024-02-28 20:25 ` [PATCH v2 5/5] tests/tcg: Check that shmat() does not break /proc/self/maps Richard Henderson
@ 2024-02-29 12:27 ` Richard Purdie
  5 siblings, 0 replies; 9+ messages in thread
From: Richard Purdie @ 2024-02-29 12:27 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: laurent, mjt, iii, Pavel Zhukov

On Wed, 2024-02-28 at 10:25 -1000, Richard Henderson wrote:
> There are multiple issues with the implementation of shmat().
> 
> (1) With reserved_va, which is the default for 32-on-64-bit, we mmap the
>     entire guest address space.  Unlike mmap, shmat refuses to replace an
>     existing mapping without setting SHM_REMAP.  This is the original
>     subject of issue #115, though it quickly gets distracted by
>     something else.
> 
> (2) With target page size > host page size, and a shm area
>     that is not a multiple of the target page size, we leave
>     an unmapped hole that the target expects to be mapped.
>     This is the subject of
> 
> 	https://lore.kernel.org/qemu-devel/2no4imvz2zrar5kchz2l3oddqbgpj77jgwcuf7aritkn2ok763@i2mvpcihztho/
> 
>     wherein qemu itself expects a mapping to exist, and
>     dies in open_self_maps_2.
> 
> So: reimplement the thing.
> 
> Changes for v2:
>   - Include Ilya's test case, which caught extra errors: Yay!
>   - Include x86_64 /proc/self/maps fix, which the test triggers.
>   - Dropped r-b for the shmat rewrite due to number of changes.

I tested these against our problem with webkitgtk and am happy to
report it does solve the segfault we were seeing, thanks!

Cheers,

Richard



* Re: [PATCH v2 3/5] linux-user: Add strace for shmat
  2024-02-28 20:25 ` [PATCH v2 3/5] linux-user: Add strace for shmat Richard Henderson
@ 2024-03-01 13:34   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 9+ messages in thread
From: Philippe Mathieu-Daudé @ 2024-03-01 13:34 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: laurent, richard.purdie, mjt, iii

On 28/2/24 21:25, Richard Henderson wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   linux-user/strace.c    | 23 +++++++++++++++++++++++
>   linux-user/strace.list |  2 +-
>   2 files changed, 24 insertions(+), 1 deletion(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>


