* [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD
2023-07-22 11:35 [PATCH for-8.1 v2 0/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
@ 2023-07-22 11:35 ` Richard Henderson
2023-07-23 14:18 ` Peter Maydell
2023-07-22 11:35 ` [PATCH v2 2/3] accel/tcg: Fix sense of read-only probes in ldst_atomicity Richard Henderson
2023-07-22 11:35 ` [PATCH v2 3/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
2 siblings, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2023-07-22 11:35 UTC (permalink / raw)
To: qemu-devel; +Cc: peter.maydell
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/exec-all.h | 10 ++++++++++
bsd-user/mmap.c | 1 +
linux-user/mmap.c | 1 +
3 files changed, 12 insertions(+)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 5fa0687cd2..d02517e95f 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -629,6 +629,15 @@ void TSA_NO_TSA mmap_lock(void);
void TSA_NO_TSA mmap_unlock(void);
bool have_mmap_lock(void);
+static inline void mmap_unlock_guard(void *unused)
+{
+ mmap_unlock();
+}
+
+#define WITH_MMAP_LOCK_GUARD() \
+ for (int _mmap_lock_iter __attribute__((cleanup(mmap_unlock_guard))) \
+ = (mmap_lock(), 0); _mmap_lock_iter == 0; _mmap_lock_iter = 1)
+
/**
* adjust_signal_pc:
* @pc: raw pc from the host signal ucontext_t.
@@ -683,6 +692,7 @@ G_NORETURN void cpu_loop_exit_sigbus(CPUState *cpu, target_ulong addr,
#else
static inline void mmap_lock(void) {}
static inline void mmap_unlock(void) {}
+#define WITH_MMAP_LOCK_GUARD()
void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length);
void tlb_set_dirty(CPUState *cpu, vaddr addr);
diff --git a/bsd-user/mmap.c b/bsd-user/mmap.c
index aca8764356..74ed00b9fe 100644
--- a/bsd-user/mmap.c
+++ b/bsd-user/mmap.c
@@ -32,6 +32,7 @@ void mmap_lock(void)
void mmap_unlock(void)
{
+ assert(mmap_lock_count > 0);
if (--mmap_lock_count == 0) {
pthread_mutex_unlock(&mmap_mutex);
}
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 44b53bd446..a5dfb56545 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -36,6 +36,7 @@ void mmap_lock(void)
void mmap_unlock(void)
{
+ assert(mmap_lock_count > 0);
if (--mmap_lock_count == 0) {
pthread_mutex_unlock(&mmap_mutex);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD
2023-07-22 11:35 ` [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD Richard Henderson
@ 2023-07-23 14:18 ` Peter Maydell
2023-07-23 15:01 ` Richard Henderson
0 siblings, 1 reply; 8+ messages in thread
From: Peter Maydell @ 2023-07-23 14:18 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Sat, 22 Jul 2023 at 12:35, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> include/exec/exec-all.h | 10 ++++++++++
> bsd-user/mmap.c | 1 +
> linux-user/mmap.c | 1 +
> 3 files changed, 12 insertions(+)
>
> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
> index 5fa0687cd2..d02517e95f 100644
> --- a/include/exec/exec-all.h
> +++ b/include/exec/exec-all.h
> @@ -629,6 +629,15 @@ void TSA_NO_TSA mmap_lock(void);
> void TSA_NO_TSA mmap_unlock(void);
> bool have_mmap_lock(void);
>
> +static inline void mmap_unlock_guard(void *unused)
> +{
> + mmap_unlock();
> +}
> +
> +#define WITH_MMAP_LOCK_GUARD() \
> + for (int _mmap_lock_iter __attribute__((cleanup(mmap_unlock_guard))) \
> + = (mmap_lock(), 0); _mmap_lock_iter == 0; _mmap_lock_iter = 1)
All our other WITH_FOO macros seem to use g_autoptr rather than
a raw attribute((cleanup)); is it worth being consistent?
(This one also doesn't allow nested uses, I think.)
Either way
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
since it would be nice to fix this for the next rc.
thanks
-- PMM
* Re: [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD
2023-07-23 14:18 ` Peter Maydell
@ 2023-07-23 15:01 ` Richard Henderson
0 siblings, 0 replies; 8+ messages in thread
From: Richard Henderson @ 2023-07-23 15:01 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel
On 7/23/23 15:18, Peter Maydell wrote:
> On Sat, 22 Jul 2023 at 12:35, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> include/exec/exec-all.h | 10 ++++++++++
>> bsd-user/mmap.c | 1 +
>> linux-user/mmap.c | 1 +
>> 3 files changed, 12 insertions(+)
>>
>> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
>> index 5fa0687cd2..d02517e95f 100644
>> --- a/include/exec/exec-all.h
>> +++ b/include/exec/exec-all.h
>> @@ -629,6 +629,15 @@ void TSA_NO_TSA mmap_lock(void);
>> void TSA_NO_TSA mmap_unlock(void);
>> bool have_mmap_lock(void);
>>
>> +static inline void mmap_unlock_guard(void *unused)
>> +{
>> + mmap_unlock();
>> +}
>> +
>> +#define WITH_MMAP_LOCK_GUARD() \
>> + for (int _mmap_lock_iter __attribute__((cleanup(mmap_unlock_guard))) \
>> + = (mmap_lock(), 0); _mmap_lock_iter == 0; _mmap_lock_iter = 1)
>
> All our other WITH_FOO macros seem to use g_autoptr rather than
> a raw attribute((cleanup)); is it worth being consistent?
I didn't think it worthwhile, no, since that requires even more boilerplate.
> (This one also doesn't allow nested uses, I think.)
It does, since each nested use declares its own variable, and the inner one shadows the outer within its scope.
r~
* [PATCH v2 2/3] accel/tcg: Fix sense of read-only probes in ldst_atomicity
2023-07-22 11:35 [PATCH for-8.1 v2 0/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
2023-07-22 11:35 ` [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD Richard Henderson
@ 2023-07-22 11:35 ` Richard Henderson
2023-07-23 14:20 ` Peter Maydell
2023-07-22 11:35 ` [PATCH v2 3/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
2 siblings, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2023-07-22 11:35 UTC (permalink / raw)
To: qemu-devel; +Cc: peter.maydell
In the initial commit, cdfac37be0d, the sense of the test is incorrect,
as the -1/0 return was confusing. In bef6f008b981, we mechanically
invert all callers while changing to false/true return, preserving the
incorrectness of the test.
Now that the return sense is sane, it's easy to see that if !write,
then the page is not modifiable (i.e. most likely read-only, with
PROT_NONE handled via SIGSEGV).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/ldst_atomicity.c.inc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc
index 4de0a80492..de70531a7a 100644
--- a/accel/tcg/ldst_atomicity.c.inc
+++ b/accel/tcg/ldst_atomicity.c.inc
@@ -159,7 +159,7 @@ static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
- if (page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
+ if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
@@ -194,7 +194,7 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
- if (page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
+ if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
#endif
--
2.34.1
* Re: [PATCH v2 2/3] accel/tcg: Fix sense of read-only probes in ldst_atomicity
2023-07-22 11:35 ` [PATCH v2 2/3] accel/tcg: Fix sense of read-only probes in ldst_atomicity Richard Henderson
@ 2023-07-23 14:20 ` Peter Maydell
0 siblings, 0 replies; 8+ messages in thread
From: Peter Maydell @ 2023-07-23 14:20 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Sat, 22 Jul 2023 at 12:35, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> In the initial commit, cdfac37be0d, the sense of the test is incorrect,
> as the -1/0 return was confusing. In bef6f008b981, we mechanically
> invert all callers while changing to false/true return, preserving the
> incorrectness of the test.
>
> Now that the return sense is sane, it's easy to see that if !write,
> then the page is not modifiable (i.e. most likely read-only, with
> PROT_NONE handled via SIGSEGV).
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM
* [PATCH v2 3/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit
2023-07-22 11:35 [PATCH for-8.1 v2 0/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
2023-07-22 11:35 ` [PATCH v2 1/3] include/exec: Add WITH_MMAP_LOCK_GUARD Richard Henderson
2023-07-22 11:35 ` [PATCH v2 2/3] accel/tcg: Fix sense of read-only probes in ldst_atomicity Richard Henderson
@ 2023-07-22 11:35 ` Richard Henderson
2023-07-23 14:22 ` Peter Maydell
2 siblings, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2023-07-22 11:35 UTC (permalink / raw)
To: qemu-devel; +Cc: peter.maydell
For user-only, the probe for page writability may race with another
thread's mprotect. Take the mmap_lock around the operation. This
is still faster than the start/end_exclusive fallback.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/ldst_atomicity.c.inc | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc
index de70531a7a..e5c590a499 100644
--- a/accel/tcg/ldst_atomicity.c.inc
+++ b/accel/tcg/ldst_atomicity.c.inc
@@ -159,9 +159,11 @@ static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
- if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
- uint64_t *p = __builtin_assume_aligned(pv, 8);
- return *p;
+ WITH_MMAP_LOCK_GUARD() {
+ if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
+ uint64_t *p = __builtin_assume_aligned(pv, 8);
+ return *p;
+ }
}
#endif
@@ -186,25 +188,27 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
return atomic16_read_ro(p);
}
-#ifdef CONFIG_USER_ONLY
/*
* We can only use cmpxchg to emulate a load if the page is writable.
* If the page is not writable, then assume the value is immutable
* and requires no locking. This ignores the case of MAP_SHARED with
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
+ *
+ * In system mode all guest pages are writable. For user mode,
+ * we must take mmap_lock so that the query remains valid until
+ * the write is complete -- tests/tcg/multiarch/munmap-pthread.c
+ * is an example that can race.
*/
- if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
- return *p;
- }
+ WITH_MMAP_LOCK_GUARD() {
+#ifdef CONFIG_USER_ONLY
+ if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
+ return *p;
+ }
#endif
-
- /*
- * In system mode all guest pages are writable, and for user-only
- * we have just checked writability. Try cmpxchg.
- */
- if (HAVE_ATOMIC128_RW) {
- return atomic16_read_rw(p);
+ if (HAVE_ATOMIC128_RW) {
+ return atomic16_read_rw(p);
+ }
}
/* Ultimate fallback: re-execute in serial context. */
--
2.34.1
* Re: [PATCH v2 3/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit
2023-07-22 11:35 ` [PATCH v2 3/3] accel/tcg: Take mmap_lock in load_atomic*_or_exit Richard Henderson
@ 2023-07-23 14:22 ` Peter Maydell
0 siblings, 0 replies; 8+ messages in thread
From: Peter Maydell @ 2023-07-23 14:22 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Sat, 22 Jul 2023 at 12:35, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> For user-only, the probe for page writability may race with another
> thread's mprotect. Take the mmap_lock around the operation. This
> is still faster than the start/end_exclusive fallback.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/ldst_atomicity.c.inc | 32 ++++++++++++++++++--------------
> 1 file changed, 18 insertions(+), 14 deletions(-)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM