* [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
@ 2025-05-24 14:40 Richard Henderson
2025-05-26 18:13 ` Philippe Mathieu-Daudé
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Richard Henderson @ 2025-05-24 14:40 UTC (permalink / raw)
To: qemu-devel; +Cc: Jonathan Cameron
When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
we failed to update atomic_mmu_lookup to properly reconstruct flags.
Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5f6d7c601c..86d0deb08c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1871,8 +1871,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
goto stop_the_world;
}
- /* Collect tlb flags for read. */
+ /* Finish collecting tlb flags for both read and write. */
+ full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
tlb_addr |= tlbe->addr_read;
+ tlb_addr &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
+ tlb_addr |= full->slow_flags[MMU_DATA_STORE];
+ tlb_addr |= full->slow_flags[MMU_DATA_LOAD];
/* Notice an IO access or a needs-MMU-lookup access */
if (unlikely(tlb_addr & (TLB_MMIO | TLB_DISCARD_WRITE))) {
@@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
}
hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
- full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
notdirty_write(cpu, addr, size, full, retaddr);
}
- if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
+ if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
int wp_flags = 0;
if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
@@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
wp_flags |= BP_MEM_READ;
}
- if (wp_flags) {
- cpu_check_watchpoint(cpu, addr, size,
- full->attrs, wp_flags, retaddr);
- }
+ cpu_check_watchpoint(cpu, addr, size,
+ full->attrs, wp_flags, retaddr);
}
return hostaddr;
--
2.43.0
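To make the reconstruction easier to follow outside cputlb.c, here is a
minimal standalone C sketch of the pattern the patch applies on the atomic
(read-modify-write) path. The flag values, the MMU_ACCESS_COUNT name and the
main() harness are illustrative stand-ins for this demo only, not QEMU's real
definitions; only the combining pattern mirrors the patch above.

    /* Standalone illustration; flag values are stand-ins, not QEMU's. */
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_NOTDIRTY    (1u << 0)  /* "fast" flag, lives in the comparator   */
    #define TLB_FORCE_SLOW  (1u << 1)  /* fast marker: "also consult slow_flags" */
    #define TLB_WATCHPOINT  (1u << 2)  /* slow flag                              */
    #define TLB_MMIO        (1u << 3)  /* slow flag                              */
    #define TLB_FLAGS_MASK  0xfu       /* all flag bits                          */

    enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_ACCESS_COUNT };

    /*
     * An atomic access is both a read and a write, so combine the fast flags
     * from both comparators, drop the TLB_FORCE_SLOW marker, and fold in the
     * slow flags for both directions.
     */
    static uint32_t reconstruct_flags(uint32_t addr_read, uint32_t addr_write,
                                      const uint32_t slow[MMU_ACCESS_COUNT])
    {
        uint32_t flags = addr_write | addr_read;

        flags &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
        flags |= slow[MMU_DATA_STORE];
        flags |= slow[MMU_DATA_LOAD];
        return flags;
    }

    int main(void)
    {
        /* A watchpoint recorded only in the slow flags for loads. */
        uint32_t slow[MMU_ACCESS_COUNT] = { [MMU_DATA_LOAD] = TLB_WATCHPOINT };
        uint32_t flags = reconstruct_flags(TLB_FORCE_SLOW, TLB_NOTDIRTY, slow);

        /* Without folding the slow flags back in, this watchpoint (or a
           TLB_MMIO / TLB_DISCARD_WRITE page) would have been missed. */
        printf("flags = 0x%x, watchpoint %s\n",
               (unsigned)flags, (flags & TLB_WATCHPOINT) ? "seen" : "missed");
        return 0;
    }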
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-24 14:40 [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
@ 2025-05-26 18:13 ` Philippe Mathieu-Daudé
2025-05-27 11:10 ` Jonathan Cameron via
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-05-26 18:13 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: Jonathan Cameron, Pierrick Bouvier
On 24/5/25 16:40, Richard Henderson wrote:
> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>
> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
Cc'ing Pierrick
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 5f6d7c601c..86d0deb08c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1871,8 +1871,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> goto stop_the_world;
> }
>
> - /* Collect tlb flags for read. */
> + /* Finish collecting tlb flags for both read and write. */
> + full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
> tlb_addr |= tlbe->addr_read;
> + tlb_addr &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
> + tlb_addr |= full->slow_flags[MMU_DATA_STORE];
> + tlb_addr |= full->slow_flags[MMU_DATA_LOAD];
>
> /* Notice an IO access or a needs-MMU-lookup access */
> if (unlikely(tlb_addr & (TLB_MMIO | TLB_DISCARD_WRITE))) {
> @@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> }
>
> hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
> - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>
> if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
> notdirty_write(cpu, addr, size, full, retaddr);
> }
>
> - if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
> + if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
> int wp_flags = 0;
>
> if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
> @@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
> wp_flags |= BP_MEM_READ;
> }
> - if (wp_flags) {
> - cpu_check_watchpoint(cpu, addr, size,
> - full->attrs, wp_flags, retaddr);
> - }
> + cpu_check_watchpoint(cpu, addr, size,
> + full->attrs, wp_flags, retaddr);
> }
>
> return hostaddr;
Patch LGTM, but this is outside my comfort zone, so better to wait for
a second review ;)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-24 14:40 [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
2025-05-26 18:13 ` Philippe Mathieu-Daudé
@ 2025-05-27 11:10 ` Jonathan Cameron via
2025-05-27 20:45 ` Pierrick Bouvier
2025-05-27 20:45 ` Pierrick Bouvier
3 siblings, 0 replies; 7+ messages in thread
From: Jonathan Cameron via @ 2025-05-27 11:10 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Sat, 24 May 2025 15:40:31 +0100
Richard Henderson <richard.henderson@linaro.org> wrote:
> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>
> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
I've run basic tests (the ones that were tripping over this 100% of the time)
and all looks good. Thanks! I'll run some more comprehensive testing this
afternoon, but so far it's looking good.
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Way outside my comfort zone, so it's not appropriate for me to say more
than that I tested it!
> ---
> accel/tcg/cputlb.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 5f6d7c601c..86d0deb08c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1871,8 +1871,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> goto stop_the_world;
> }
>
> - /* Collect tlb flags for read. */
> + /* Finish collecting tlb flags for both read and write. */
> + full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
> tlb_addr |= tlbe->addr_read;
> + tlb_addr &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
> + tlb_addr |= full->slow_flags[MMU_DATA_STORE];
> + tlb_addr |= full->slow_flags[MMU_DATA_LOAD];
>
> /* Notice an IO access or a needs-MMU-lookup access */
> if (unlikely(tlb_addr & (TLB_MMIO | TLB_DISCARD_WRITE))) {
> @@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> }
>
> hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
> - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>
> if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
> notdirty_write(cpu, addr, size, full, retaddr);
> }
>
> - if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
> + if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
> int wp_flags = 0;
>
> if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
> @@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
> wp_flags |= BP_MEM_READ;
> }
> - if (wp_flags) {
> - cpu_check_watchpoint(cpu, addr, size,
> - full->attrs, wp_flags, retaddr);
> - }
> + cpu_check_watchpoint(cpu, addr, size,
> + full->attrs, wp_flags, retaddr);
> }
>
> return hostaddr;
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-24 14:40 [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
2025-05-26 18:13 ` Philippe Mathieu-Daudé
2025-05-27 11:10 ` Jonathan Cameron via
@ 2025-05-27 20:45 ` Pierrick Bouvier
2025-05-28 6:42 ` Richard Henderson
2025-05-27 20:45 ` Pierrick Bouvier
3 siblings, 1 reply; 7+ messages in thread
From: Pierrick Bouvier @ 2025-05-27 20:45 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: Jonathan Cameron
On 5/24/25 7:40 AM, Richard Henderson wrote:
> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>
> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 5f6d7c601c..86d0deb08c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
[...]
> @@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> }
>
> hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
> - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>
> if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
> notdirty_write(cpu, addr, size, full, retaddr);
> }
>
> - if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
> + if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
> int wp_flags = 0;
>
> if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
> @@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
> wp_flags |= BP_MEM_READ;
> }
> - if (wp_flags) {
> - cpu_check_watchpoint(cpu, addr, size,
> - full->attrs, wp_flags, retaddr);
> - }
> + cpu_check_watchpoint(cpu, addr, size,
> + full->attrs, wp_flags, retaddr);
> }
>
> return hostaddr;
The watchpoint part is an additional cleanup (BP_MEM_READ or
BP_MEM_WRITE implies that TLB_WATCHPOINT is set). No problem including it,
though it might just be confusing for the reviewer.
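Concretely, with the flags reconstructed as in the patch, the implication can
be sketched like this (the flag and BP_* values are again stand-ins for
illustration, not QEMU's real definitions):

    #include <assert.h>

    #define TLB_WATCHPOINT  (1u << 2)  /* stand-in value */
    #define BP_MEM_READ     0x1        /* stand-in value */
    #define BP_MEM_WRITE    0x2        /* stand-in value */

    enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_ACCESS_COUNT };

    /*
     * TLB_WATCHPOINT can only reach the reconstructed flags via one of the
     * slow_flags entries, so once the guard on TLB_WATCHPOINT is taken,
     * wp_flags is guaranteed nonzero and the old inner "if (wp_flags)" test
     * could never be false.
     */
    void watchpoint_guard(unsigned flags, const unsigned slow[MMU_ACCESS_COUNT])
    {
        if (flags & TLB_WATCHPOINT) {
            int wp_flags = 0;

            if (slow[MMU_DATA_STORE] & TLB_WATCHPOINT) {
                wp_flags |= BP_MEM_WRITE;
            }
            if (slow[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
                wp_flags |= BP_MEM_READ;
            }
            assert(wp_flags != 0); /* holds when flags were built as in the patch */
            /* ... cpu_check_watchpoint() would run unconditionally here ... */
        }
    }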
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-24 14:40 [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
` (2 preceding siblings ...)
2025-05-27 20:45 ` Pierrick Bouvier
@ 2025-05-27 20:45 ` Pierrick Bouvier
3 siblings, 0 replies; 7+ messages in thread
From: Pierrick Bouvier @ 2025-05-27 20:45 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: Jonathan Cameron
On 5/24/25 7:40 AM, Richard Henderson wrote:
> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>
> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 5f6d7c601c..86d0deb08c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1871,8 +1871,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> goto stop_the_world;
> }
>
> - /* Collect tlb flags for read. */
> + /* Finish collecting tlb flags for both read and write. */
> + full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
> tlb_addr |= tlbe->addr_read;
> + tlb_addr &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
> + tlb_addr |= full->slow_flags[MMU_DATA_STORE];
> + tlb_addr |= full->slow_flags[MMU_DATA_LOAD];
>
[...]
Looks good to me.
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-27 20:45 ` Pierrick Bouvier
@ 2025-05-28 6:42 ` Richard Henderson
2025-05-28 14:37 ` Pierrick Bouvier
0 siblings, 1 reply; 7+ messages in thread
From: Richard Henderson @ 2025-05-28 6:42 UTC (permalink / raw)
To: Pierrick Bouvier, qemu-devel; +Cc: Jonathan Cameron
On 5/27/25 21:45, Pierrick Bouvier wrote:
> On 5/24/25 7:40 AM, Richard Henderson wrote:
>> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
>> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>>
>> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
>> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> accel/tcg/cputlb.c | 15 ++++++++-------
>> 1 file changed, 8 insertions(+), 7 deletions(-)
>>
>> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
>> index 5f6d7c601c..86d0deb08c 100644
>> --- a/accel/tcg/cputlb.c
>> +++ b/accel/tcg/cputlb.c
>
> [...]
>
>> @@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>> }
>> hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
>> - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>> if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
>> notdirty_write(cpu, addr, size, full, retaddr);
>> }
>> - if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
>> + if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
>> int wp_flags = 0;
>> if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
>> @@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>> if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
>> wp_flags |= BP_MEM_READ;
>> }
>> - if (wp_flags) {
>> - cpu_check_watchpoint(cpu, addr, size,
>> - full->attrs, wp_flags, retaddr);
>> - }
>> + cpu_check_watchpoint(cpu, addr, size,
>> + full->attrs, wp_flags, retaddr);
>> }
>> return hostaddr;
>
> The watchpoint part is an additional cleanup (BP_MEM_READ or BP_MEM_WRITE implies
> that TLB_WATCHPOINT is set). No problem including it, though it might just be
> confusing for the reviewer.
The watchpoint cleanup is required, since I remove TLB_FORCE_SLOW from the flags. I
suppose *that* isn't strictly necessary, but it's what we do elsewhere while combining
"fast" and slow_flags.
r~
* Re: [PATCH] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
2025-05-28 6:42 ` Richard Henderson
@ 2025-05-28 14:37 ` Pierrick Bouvier
0 siblings, 0 replies; 7+ messages in thread
From: Pierrick Bouvier @ 2025-05-28 14:37 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: Jonathan Cameron
On 5/27/25 11:42 PM, Richard Henderson wrote:
> On 5/27/25 21:45, Pierrick Bouvier wrote:
>> On 5/24/25 7:40 AM, Richard Henderson wrote:
>>> When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
>>> we failed to update atomic_mmu_lookup to properly reconstruct flags.
>>>
>>> Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
>>> Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>> ---
>>> accel/tcg/cputlb.c | 15 ++++++++-------
>>> 1 file changed, 8 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
>>> index 5f6d7c601c..86d0deb08c 100644
>>> --- a/accel/tcg/cputlb.c
>>> +++ b/accel/tcg/cputlb.c
>>
>> [...]
>>
>>> @@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>>> }
>>> hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
>>> - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
>>> if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
>>> notdirty_write(cpu, addr, size, full, retaddr);
>>> }
>>> - if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
>>> + if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
>>> int wp_flags = 0;
>>> if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
>>> @@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>>> if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
>>> wp_flags |= BP_MEM_READ;
>>> }
>>> - if (wp_flags) {
>>> - cpu_check_watchpoint(cpu, addr, size,
>>> - full->attrs, wp_flags, retaddr);
>>> - }
>>> + cpu_check_watchpoint(cpu, addr, size,
>>> + full->attrs, wp_flags, retaddr);
>>> }
>>> return hostaddr;
>>
>> The watchpoint part is an additional cleanup (BP_MEM_READ or BP_MEM_WRITE implies
>> that TLB_WATCHPOINT is set). No problem including it, though it might just be
>> confusing for the reviewer.
>
> The watchpoint cleanup is required, since I remove TLB_FORCE_SLOW from the flags. I
> suppose *that* isn't strictly necessary, but it's what we do elsewhere while combining
> "fast" and slow_flags.
>
Yes! I was just referring to the last part, where you remove the
if (wp_flags) check; I kept the whole diff chunk for convenience.
>
> r~